With this formidable challenge in mind, many web developers and designers turn to automated website checking tools. These automated checkers test and validate a given web page against specific benchmarks and report the results in a human-readable format. Automated website checkers are particularly useful when a website is too large or complex to be fully vetted by hand.
What Does an Automated Website Checker Do?
Given a URL or code as input, a website checker will return a list of criteria that have been passed or failed. These criteria will vary depending on what the website checker is looking for: malformed HTML code, broken links, accessibility issues, security issues, cross-browser and cross-device compatibility issues, and more.
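To make the pass/fail idea concrete, here is a minimal sketch of one such criterion: checking HTML for mismatched or unclosed tags. The class and function names are illustrative, not drawn from any real checker, and a production tool would apply many more criteria than this single one.

```python
from html.parser import HTMLParser

class TagBalanceChecker(HTMLParser):
    """Flags mismatched or unclosed tags in an HTML fragment."""
    VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}  # self-closing by spec

    def __init__(self):
        super().__init__()
        self.stack = []   # currently open tags
        self.errors = []  # human-readable problem descriptions

    def handle_starttag(self, tag, attrs):
        if tag not in self.VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected closing tag </{tag}>")

def check_page(html: str) -> dict:
    """Return a simple pass/fail report for the well-formed-HTML criterion."""
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.close()
    for tag in checker.stack:  # anything still open was never closed
        checker.errors.append(f"unclosed tag <{tag}>")
    return {"well_formed_html": not checker.errors, "errors": checker.errors}

good = check_page("<div><p>Hello</p></div>")
bad = check_page("<div><p>Hello<span></p></div>")
```

Here `good["well_formed_html"]` is true, while the second fragment fails because the `<span>` is never closed before `</p>` appears.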
Automated website checkers are similar to — but distinct from — other automated testing tools. The difference between them is akin to the difference between proofreading and editing a manuscript. Proofreaders search only for errors in spelling, grammar, and punctuation, while editors make changes to the manuscript's overall content, tone, and structure. Similarly, automated website checkers only examine a website's code for surface-level errors and issues, while automated testing tools can analyze the website's actual behavior, such as its performance under heavy load.
When Should an Automated Website Checker Be Used?
Automated website checkers can be immensely valuable tools. In a matter of seconds, they can find problems that would otherwise go unnoticed by human developers. Running these tools at regular intervals to search for code rot, such as malformed HTML and broken links, is an excellent way to practice web development "hygiene."
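A recurring broken-link check might look like the following sketch. To keep it self-contained it avoids live network requests and instead validates internal links against a known set of site paths (which, in practice, might come from your sitemap); the function names and the sitemap-as-a-set approach are assumptions for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urldefrag

class LinkCollector(HTMLParser):
    """Gathers every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def find_broken_internal_links(html: str, known_pages: set) -> list:
    """Return site-relative hrefs that point at no known page."""
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for href in collector.links:
        path, _fragment = urldefrag(href)  # ignore #anchors when matching pages
        # Only site-relative links can be checked offline; external URLs
        # would require an actual HTTP request.
        if path.startswith("/") and path not in known_pages:
            broken.append(href)
    return broken

pages = {"/", "/about", "/contact"}
html = '<a href="/about">About</a> <a href="/pricing">Pricing</a>'
broken = find_broken_internal_links(html, pages)
```

Run from a scheduled job (for example, a nightly CI task), a check like this catches link rot as soon as a page is renamed or removed.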
On the other hand, excessive reliance on automated checking tools — to the exclusion of manual testing — can be dangerous, lulling users into a false sense of security. When using an automated checking tool, it can be tempting to tweak the website until all of the tool's criteria are satisfied and then consider the job done. However, this ignores the fact that some of those criteria may be relatively trivial, and that the tool is necessarily limited in scope: it cannot find every potential problem with a website.
What's more, no automated tool is able to say definitively whether a website is "accessible" or "compliant" with a given set of accessibility standards. For example, the WCAG 2.0 standards require a website's content to be presented in a "meaningful sequence" that remains understandable even when its presentation changes — for instance, when a screen reader reads the page aloud. Judging whether a sequence is "meaningful," however, requires subjective evaluation that must be provided by humans, since automated checking tools aren't yet sufficiently advanced. In order to render such a verdict about accessibility, automated checking must be combined with manual website testing.