
The True Costs of False Positives in Web Accessibility Testing

Nov 9, 2022

When you test your content for accessibility, you need to have confidence in the results. Unfortunately, automated testing — while extremely useful — isn’t perfect, and automated tools often miss accessibility issues that affect real-life users with disabilities.

For example, if your website has an image of an apple, and you write alternative text that describes it as “an orange,” automated tools won’t identify the issue, since most tools simply check to see whether images are missing alternative text. Artificial intelligence (AI) can’t tell the difference between an apple and an orange (at least, not yet — AI is becoming much more powerful, but that’s a subject for another article). 
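This gap can be sketched with a toy checker. Below is a minimal sketch (the rule and class names are hypothetical, not the logic of any specific tool), assuming a rule that only tests whether an `alt` attribute is present on each image:

```python
from html.parser import HTMLParser

class AltPresenceChecker(HTMLParser):
    """Naive rule: flag <img> tags that lack an alt attribute entirely."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.get_starttag_text())

def check(html: str) -> list:
    checker = AltPresenceChecker()
    checker.feed(html)
    return checker.violations

# An apple mislabeled as "an orange" passes: the alt attribute is present,
# so this false negative goes undetected.
print(check('<img src="apple.jpg" alt="an orange">'))  # []

# Only a truly missing alt attribute is flagged.
print(check('<img src="apple.jpg">'))  # ['<img src="apple.jpg">']
```

The rule can confirm that alternative text exists, but not that it accurately describes the image; that judgment still requires a human reviewer.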

When an accessibility tool misses an error, that’s called a false negative. An accessibility issue exists, but the tool falsely declares that it doesn’t exist. 

But there’s another potential issue to keep in mind: false positives. These occur when an accessibility test identifies a barrier that doesn’t really exist.  

If you’re building a web accessibility initiative, it’s important to remember that false positives can create as many issues as false negatives. To ensure the highest level of web accessibility, you’ll need to be aware of how false positives work. Here’s an overview.

Why Automated Accessibility Tests Report False Positives

All automated accessibility tools work by looking for patterns in your website’s code and markup. Most tools (including the Bureau of Internet Accessibility’s free website analysis) compare that code and markup to the Web Content Accessibility Guidelines (WCAG) Level AA success criteria, which are widely considered the international standard for digital accessibility.

Some WCAG success criteria have simple pass-or-fail rulesets. For example, WCAG requires alternative text for non-text content. If your images don’t contain alt text, that’s an obvious “fail.”

But other issues require a deeper understanding of context. AI tools have a limited ability to understand context, which can result in false positives. For example: 

  • Some images (such as logos and dividers) are purely decorative. Per WCAG Success Criterion (SC) 1.1.1, “Non-text Content,” purely decorative content shouldn’t include alt text, but should be implemented in a way that assistive technologies (AT) can ignore.
  • Many automated accessibility tests flag content that contains the WAI-ARIA aria-hidden attribute. However, there are legitimate reasons to use aria-hidden to hide content from AT.
  • Automated tools scan content for violations of WCAG 2.1 SC 1.4.3, “Contrast (Minimum),” which prohibits low-contrast text. Once again, purely decorative text and logotypes are excepted from WCAG’s color contrast requirements, but AI can’t determine whether text qualifies for these exceptions.
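The contrast rule itself is mechanical, which is why tools can compute it reliably; what they can’t compute is whether the text qualifies for an exception. As a sketch, here is the WCAG 2.1 contrast-ratio calculation (relative luminance and the 4.5:1 threshold come from the spec; the function names are our own):

```python
def relative_luminance(rgb):
    """WCAG 2.1 relative luminance for an sRGB color, channels 0-255."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of the lighter luminance to the darker, from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background yields the maximum ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0

# Light gray on white fails the 4.5:1 minimum that SC 1.4.3 sets for
# normal text. A tool can flag it, but it can't tell whether the gray
# text is purely decorative and therefore exempt.
print(contrast_ratio((170, 170, 170), (255, 255, 255)) >= 4.5)  # False
```

The arithmetic is unambiguous; the exception is not. That is exactly where a flagged result can turn out to be a false positive.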

This isn’t a comprehensive list. Many WCAG criteria have exceptions for certain types of content, and other criteria have subjective requirements. That’s by design, since the purpose of accessibility is to accommodate real users, not to meet arbitrary requirements.

Related: How Do Automated Website Checkers Work?

How do false positives impact accessibility remediation?

Even if you’re running a Fortune 500 company, you have limited resources for accessibility remediation. You don’t want your developers spending hours fixing problems that aren’t there — and in some cases, “fixing” a false positive could make your website less accessible.

While some developers with a working knowledge of WCAG can spot false positives early in the remediation process, others don’t have that experience. Without it, your team might end up rebuilding parts of your product for no reason.

False positives can also throw off benchmarking, preventing you from tracking the progress of your accessibility initiative. Tracking your progress helps you keep your team invested in the hard work of digital accessibility, and when you start with bad data, you can’t demonstrate the value of the work. 

Related: Web Accessibility Remediation: A Quick Guide for Getting Started

Don’t rely on automated accessibility testing alone

The solution: Don’t rely on automated testing alone. We recommend a hybrid testing methodology, which combines AI testing with manual review from experts (including people with disabilities, who are often better equipped to identify high-impact accessibility barriers).

A hybrid accessibility audit gives you accurate benchmarking data and reduces the time spent on remediation. It’s the best way to reach your goals and demonstrate compliance with the Americans with Disabilities Act (ADA) and other non-discrimination laws.

In the accessibility space, we often advise clients to think of automated tools as spell checkers. When you’re writing something, you don’t assume that your spell checker is perfect: you use its feedback as part of the editing process.

That’s also true for automated accessibility tests. They’re helpful for finding problems, but you still need to review each issue manually to determine whether the problem actually exists — and to determine the best method of remediation.

For guidance with your accessibility initiative, send us a message to connect with a subject matter expert.

Use our free Website Accessibility Checker to scan your site for ADA and WCAG compliance.
