As a web developer, you want your work to be accessible to people with vision disabilities. You follow the Web Content Accessibility Guidelines (WCAG), set clear goals, and test your content with automated tools — but to be thorough, shouldn’t you also test your content with a screen reader?
Not necessarily. Web developers should certainly try screen readers to gain perspective on how assistive technology functions, and we’ve written articles about using products like NVDA (NonVisual Desktop Access) and JAWS (Jobs Access With Speech) for this purpose.
However, testing your content with a screen reader isn’t easy — or especially effective, even for basic reviews of your code or markup. Here’s why.
People with vision disabilities experience web pages differently than sighted users
Screen readers aren’t web browsers. They’re dedicated software, and they work with browsers and other applications to present content.
People who use screen readers use them regularly, and they’re proficient with the controls. They can skip around webpages to find hyperlinks or read subheadings, or switch between different modes (which may change the hotkeys that operate the software) to use content in a different way.
If you don’t have years of experience with screen reading software, you don’t have the same proficiency. Put simply, your experience won’t match the experiences of your users, and your testing will be prone to false positives (reporting accessibility barriers that don’t actually exist) and false negatives (missing real barriers while testing your content).
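To illustrate the kind of structure proficient users rely on, here’s a hypothetical page outline (the headings and labels are invented for this example). Experienced screen reader users jump directly between landmarks like `<nav>` and `<main>`, or pull up a list of headings, rather than reading the page top to bottom:

```html
<!-- Hypothetical page skeleton. Screen reader users can navigate by
     landmark regions (header, nav, main, footer) and by heading level,
     skipping everything in between. -->
<header>
  <h1>Acme Widgets</h1>
  <nav aria-label="Main">…</nav>
</header>
<main>
  <h2>Featured products</h2>
  …
  <h2>Customer reviews</h2>
  …
</main>
<footer>…</footer>
```

A first-time tester who reads the page linearly won’t experience it the way a proficient user navigating by headings and landmarks does.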
Different combinations of browsers and screen readers can yield different results
Let’s say you spend weeks learning how to use a certain screen reader. If you develop proficiency, can you perform screen reader testing?
Not really — because you have proficiency with a single piece of software, and you’re probably using that software with a single web browser. That’s especially important when you’re testing WAI-ARIA (Web Accessibility Initiative - Accessible Rich Internet Applications) markup.
Here’s a simplified overview of the problem: Web browsers have accessibility APIs (Application Programming Interfaces). These APIs aren’t perfectly consistent — Apple Safari, for instance, uses a different API implementation than Mozilla Firefox or Google Chrome. This affects how browsers support ARIA, and more specifically, how the browser’s keyboard support functions with certain ARIA usage.
In other words, if you access your website using NVDA and Firefox, you might get different results than if you’d performed the same test using Safari. For some types of complex or dynamic content, the differences can be significant.
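As a concrete (and hypothetical) example of markup where results can diverge, consider a custom checkbox built from a `<div>` instead of a native `<input type="checkbox">`. How — and whether — its state changes are announced depends on how each browser maps this markup to its accessibility API, and on which screen reader is listening:

```html
<!-- Hypothetical custom checkbox. The element names, ids, and labels
     here are illustrative, not taken from any real site. A native
     <input type="checkbox"> would behave far more consistently. -->
<div role="checkbox" tabindex="0" aria-checked="false"
     aria-labelledby="subscribe-label">
</div>
<span id="subscribe-label">Subscribe to updates</span>
```

One browser/screen reader pairing might announce this as “Subscribe to updates, checkbox, not checked,” while another might mishandle the label association or fail to announce state changes — which is exactly why a single pairing isn’t a reliable test.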
Developers may assume that “one test is enough”
You’re probably testing your content for a specific reason. Maybe you’ve been tasked with improving your website’s ARIA implementation (and if that’s the case, we suggest reminding your employer that accessibility isn’t a one-person job). Maybe you’re adding a new feature, and you want to make sure that it operates predictably.
Unfortunately, a single accessibility test won’t solve your problem. As we’ve discussed, your results will be limited — and even if you’re testing your content with the best intentions, the “test” may negatively impact your organization’s approach to digital accessibility.
After all, if you tell your employer that you’ve tested a website with a screen reader and that it performed predictably, they may assume that the hard work is done.
Of course, that’s not the case. Digital accessibility needs to be a consistent priority, and it’s not optional: Laws like the Americans with Disabilities Act (ADA) and the Accessibility for Ontarians with Disabilities Act (AODA) require accessible web content. No individual test can guarantee digital compliance, so you shouldn’t rely on a single experience with a screen reader when evaluating your site.
Remember, web accessibility isn’t just about screen readers
Finally, screen reader testing — while important — doesn’t provide much insight into the experiences of users who don’t have vision disabilities. The goal of digital accessibility is to accommodate as many people as possible, which includes people with mobility-related disabilities, neurocognitive differences, and other conditions.
At the Bureau of Internet Accessibility, our four-point hybrid testing methodology includes tests performed by experienced screen reader users who have vision disabilities. However, we also perform other manual and automated tests that recognize the full scope of disabilities, and all results are analyzed by subject matter experts (SMEs) and senior developers.