What’s the Difference Between Manual and Automated Accessibility Testing?

Nov 23, 2021

More organizations are recognizing the importance of digital accessibility, and a variety of new resources are available to analyze conformance with the Web Content Accessibility Guidelines (WCAG), the consensus standards for accommodating people with disabilities. 

Accessibility testing resources fall into two categories: 

  • Automated accessibility tests use software to analyze websites for common barriers that might affect people with disabilities.
  • Manual testing services employ a human tester, who accesses the website and creates a report based on their experience. These reports typically include information about WCAG conformance issues and recommended remediation tactics.

Automated tools are especially popular since they're widely available, inexpensive, and fast: with a quick scan, web developers can analyze their websites using free, open-source software. Unfortunately, that convenience can lead to inaccurate assumptions about a site's overall accessibility.
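
To make this concrete, here's a minimal sketch of what such a scan might look like, assuming Node.js with the open-source axe-core engine driven through Puppeteer. The URL is a placeholder, and this is just one of many possible tool choices, not a prescribed setup:

```typescript
// A minimal sketch of an automated accessibility scan using the
// open-source axe-core engine driven by Puppeteer. The URL below is a
// placeholder; axe-core is one of several engines that work this way.
import puppeteer from 'puppeteer';
import { AxePuppeteer } from '@axe-core/puppeteer';

async function scan(url: string): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Run axe-core's rule set against the rendered page.
  const results = await new AxePuppeteer(page).analyze();

  // Each violation names the rule, its severity, and a description.
  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact}): ${violation.description}`);
  }

  await browser.close();
}

scan('https://example.com').catch(console.error);
```

A scan like this runs in seconds, which is exactly why it's tempting to treat its output as the whole story.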

In this article, we’ll discuss the limitations of manual and automated accessibility tests — and we’ll explain how a hybrid testing methodology can provide a comprehensive analysis of your website’s WCAG conformance.

Automated accessibility tests are easy to use, but they’re limited in scope

Many WCAG success criteria are somewhat subjective. For instance, WCAG Success Criterion 1.1.1, “Non-text Content,” requires websites to provide a text alternative for all non-text content. The text alternative should give users the same information they’d receive by perceiving the content visually, and only a human can judge whether it actually does.
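
To see why this is subjective, compare two images that would both pass a purely mechanical check. The markup and the naive checker below are hypothetical, for illustration only:

```typescript
// Both images below satisfy a mechanical rule ("does an alt attribute
// exist?"), but only the second alt text conveys the image's actual
// information. Markup and checker are hypothetical.
const snippets = [
  '<img src="q3-sales.png" alt="chart">',                      // passes, but useless
  '<img src="q3-sales.png" alt="Q3 sales rose 12% over Q2">',  // passes and descriptive
];

// A naive automated rule: flag only images with no alt attribute at all.
const hasAltAttribute = (html: string): boolean => /\balt\s*=\s*"[^"]*"/.test(html);

for (const snippet of snippets) {
  console.log(hasAltAttribute(snippet)); // true for both -- the tool can't tell them apart
}
```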

Automated tools aren’t effective at making subjective decisions, nor do they experience a website in the same way as a human user. Some common issues that automated accessibility tests might miss: 

  • An automated test can identify missing image alternative text (often called alt text, supplied through the alt attribute). However, artificial intelligence can’t determine whether that text is accurate or descriptive.
  • Likewise, automated tools can’t identify inaccurate form labels, which can confuse people who use assistive technologies like screen readers (see the sketch after this list).
  • Automated tools can’t accurately determine whether your content is clear, concise, and understandable for a typical reader.
  • Automated tools may also produce false positives: they may flag an element as an accessibility barrier even though it doesn’t affect the on-page experience for real users.
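
To make the form-label point concrete, here's a hedged sketch: a mechanical check can confirm that a label is programmatically associated with an input, but not that the label says the right thing. The markup is hypothetical:

```typescript
// A label can be programmatically associated and still be wrong. The
// mechanical check below confirms the for/id pairing exists; only a human
// notices that the label text misdescribes the field. Hypothetical markup.
const form = `
  <label for="email">Phone number</label>
  <input id="email" type="email" name="email">
`;

// Naive rule: every input's id must be referenced by some label's for attribute.
const inputIds = [...form.matchAll(/<input[^>]*\bid="([^"]+)"/g)].map(m => m[1]);
const labelFors = new Set([...form.matchAll(/<label[^>]*\bfor="([^"]+)"/g)].map(m => m[1]));

const allLabeled = inputIds.every(id => labelFors.has(id));
console.log(allLabeled); // true -- yet a screen reader announces "Phone number" for an email field
```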

Of course, the opposite is also true: Automated accessibility testing can find some issues that human testers might miss. For example, an automated tool can instantly find most color contrast issues, which can be beneficial for designers.
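
Color contrast is a genuinely mechanical check. WCAG 2.x defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors, and Level AA requires at least 4.5:1 for normal-size text. Here's a minimal sketch of that calculation (the hex colors are arbitrary examples):

```typescript
// WCAG 2.x contrast ratio: (L1 + 0.05) / (L2 + 0.05), where L1/L2 are the
// relative luminances of the lighter and darker color. Level AA requires
// at least 4.5:1 for normal-size text.
function relativeLuminance(hex: string): number {
  // Parse "#rrggbb" into three 0..1 channels, then linearize per WCAG.
  const channels = [0, 2, 4].map(i => parseInt(hex.slice(i + 1, i + 3), 16) / 255);
  const [r, g, b] = channels.map(c =>
    c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)
  );
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

console.log(contrastRatio('#777777', '#ffffff').toFixed(2)); // ~4.48 -- just below the 4.5:1 AA threshold
```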

Ultimately, automated testing is useful, but limited. In one experiment, the United Kingdom’s Government Digital Service intentionally created a webpage with 142 accessibility barriers, then analyzed the page with 13 automated accessibility tools. The best-performing tool identified only 40% of the barriers; the worst found just 13%.

Read: Is Automated Testing Enough for Accessibility Compliance?

Manual accessibility testers can provide guidance for remediation

There’s another great reason to use human testing when evaluating accessibility: Automated tools can identify some barriers, but they can’t explain how those barriers affect real users' experiences. 

Qualified human testers have hands-on experience with assistive technologies, and they can provide actionable feedback that helps developers and designers create better content in the long term. An experienced tester can tell you exactly why a certain remediation tactic will work. That can be invaluable information: you’ll spend less time fixing problems if you’re able to build with accessibility best practices in mind.

Read: Why Is A Web Accessibility Audit Important?

Accessibility audits should use a combination of manual and automated methods

At the Bureau of Internet Accessibility, we use a four-point hybrid testing methodology. First, our a11y® analysis platform checks the site against hundreds of checkpoints based on WCAG Level A/AA success criteria, logging issues and remediation recommendations.

Next, a manual tester with a vision-related disability accesses the site using assistive technology. Our testers have certifications in JAWS and NVDA, two of the most popular screen readers, and each expert has five or more years of experience in accessibility testing.

Third, a subject matter expert (SME) performs a second round of human testing, reviewing and validating each WCAG finding. Finally, a senior developer creates a comprehensive report. Our senior developers have extensive experience in accessible web and mobile app design.

Related: A Look at Our Four-Point Hybrid Testing

We believe that by combining artificial intelligence with expert human oversight, we’re able to provide our clients with excellent guidance for meeting their accessibility goals. If you’re pursuing an accessibility initiative, regular testing is essential for monitoring your progress — and ensuring the best possible return on your investment. 
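
If you want automated checks to run on every code change, one common pattern (a hedged sketch, assuming a Jest test suite with the open-source jest-axe package) looks like this:

```typescript
// A hedged sketch of accessibility regression testing: jest-axe runs the
// axe-core engine inside a Jest test, failing the build whenever a
// detectable barrier appears. The markup under test is hypothetical.
import { axe, toHaveNoViolations } from 'jest-axe';

expect.extend(toHaveNoViolations);

test('signup form has no detectable accessibility violations', async () => {
  const html = `
    <form>
      <label for="email">Email address</label>
      <input id="email" type="email" name="email">
      <button type="submit">Sign up</button>
    </form>
  `;
  expect(await axe(html)).toHaveNoViolations();
});
```

Keep in mind that a green build here only means no automated rule failed; the subjective criteria discussed above still require a human pass.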

Use our free Website Accessibility Checker to scan your site for ADA and WCAG compliance.


