Artificial intelligence (A.I.) content has quickly become mainstream thanks to tools like ChatGPT, Jasper, and Google Bard. These resources can create human-like content by using deep learning, a type of machine learning that focuses on “teaching” A.I. models with real-world examples from human writers.
Of course, A.I.-generated content is controversial. Currently, the tools are prone to hallucination — confidently presenting inaccurate information as fact — and prominent linguist Noam Chomsky has criticized ChatGPT as high-tech “plagiarism.”
Regardless of the criticisms, it’s clear that A.I. content is here to stay. If you’re creating content with non-human writers, you’ll need to ask an important question: Is the content accessible for all users, including those with disabilities that affect their hearing, vision, or cognition?
The quick answer: If you review your content carefully, A.I.-generated content is not necessarily bad for accessibility. Here’s what you need to know.
A.I. content can be accessible, but human oversight is important
Deep-learning models can generate text with near-perfect grammar, but currently, tools like ChatGPT err on the side of caution. They rarely offer interesting insights, and they tend to follow strict rules — which can lead to boring, hard-to-read content.
Giant walls of complicated text aren’t ideal for internet audiences, and they may be especially frustrating for people with attention disorders, people who use screen readers (software that outputs text as audio), and other users with disabilities.
The solution is to use A.I. chatbots as part of a process, not as the entire process. If you generate a few paragraphs of content, double-check the output for accuracy, then add your own insights, you’re simply using a content tool — you’re not abusing the power of A.I. (or providing a bad experience for users).
Related: 3 Ways That Artificial Intelligence Can Improve Web Accessibility
Check A.I.-created content for common accessibility issues
As is the case with all new technologies, it’s important to think about your entire audience when using A.I. to create content. That means paying attention to the small details:
- Does the A.I.-generated text include definitions for unusual terms, abbreviations, and acronyms?
- Is the content organized in a logical, predictable way?
- If the content contains images, do the images have appropriate alternative text (also called alt text or image alt tags)?
- Have you checked each factual claim for accuracy?
- If the content includes hyperlinks, do the hyperlinks point to a credible source?
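To make the checklist concrete, here's a brief, hypothetical HTML fragment showing what a few of those items look like in practice (the image file name, alt text, and article content are invented for illustration; only the W3C link is real):

```html
<article>
  <h2>Getting started with WCAG</h2>

  <!-- Define an acronym on first use -->
  <p>The <abbr title="Web Content Accessibility Guidelines">WCAG</abbr>
     are the international standards for web accessibility.</p>

  <!-- Give every informative image descriptive alternative text -->
  <img src="conformance-levels.png"
       alt="Diagram of the three WCAG conformance levels: A, AA, and AAA">

  <!-- Make link text identify a credible source -->
  <p>Read the full guidelines in the
     <a href="https://www.w3.org/TR/WCAG21/">W3C's WCAG 2.1 recommendation</a>.</p>
</article>
```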
It’s also important to break up long content by adding subheadings, bulleted lists, and multimedia. That benefits people with dyslexia, autism, and other cognitive differences, and it improves the reading experience for every user. Given that the average internet user reads only about 20% of the content on a web page, you’ve got a limited opportunity to capture their attention.
For more guidance, read: 4 Quick Ways to Create Clearer Content and Improve Accessibility
Don't trust A.I. tools to fix accessibility issues that require human judgment
Artificial intelligence can be profoundly beneficial for accessibility when used correctly. Tools designed for accessibility can find (and in some cases, instantly fix) barriers that affect your users, greatly improving your content.
However, no tool can exercise human judgment — at least, not yet. That’s an especially important consideration when your chatbot writes code or markup for your website.
If you’re using ChatGPT or a similar tool to create web content, you shouldn’t trust the tool to handle your HTML or WAI-ARIA (Web Accessibility Initiative - Accessible Rich Internet Applications) markup.
HTML and WAI-ARIA define the semantics of your website, which enables assistive technologies to present the content in a predictable way. In our tests with ChatGPT, the tool made several errors when writing HTML and ARIA:
- ARIA markup defined element states inaccurately. For example, the tool created markup using the aria-expanded attribute, but it also added aria-hidden="true", then inaccurately claimed that aria-hidden was set to false. Learn how WAI-ARIA “hidden” affects accessibility.
- When writing HTML, the tool occasionally presented subheading tags out of their sequential order. For example, an <h4> tag appeared directly below an <h2> tag. Learn why subheading order is important for accessibility.
- We asked ChatGPT to write alternative text for an image of an apple. The text read: “A red apple with a stem and a leaf on top, against a white background.” That’s decent alternative text — but the model added details (such as the white background, stem, and leaf) that we didn’t provide.
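For reference, here's a hedged sketch of how the first two errors above might be corrected, assuming a simple button-plus-panel disclosure pattern (the element ids and text are invented for illustration):

```html
<!-- 1. The state attributes must agree: when the panel is open, the trigger
     reports aria-expanded="true", and the panel is not hidden. -->
<button aria-expanded="true" aria-controls="details-panel">Show details</button>
<!-- When the panel is visible, omit aria-hidden entirely rather than
     contradicting aria-expanded with aria-hidden="true". -->
<div id="details-panel">Panel content</div>

<!-- 2. Heading levels should not skip: an <h3>, not an <h4>,
     follows an <h2>. -->
<h2>Section title</h2>
<h3>Subsection title</h3>
```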
These are not serious errors. If you have a working knowledge of accessibility and HTML, you can easily resolve the problems — but if you’re relying on chatbots to automatically handle all of your accessibility improvements, you’re taking a significant risk. Chatbots aren’t designed for that purpose, and they’re not perfect.
To that end, you should make sure that you have an appropriate testing strategy in place. Use a combination of automated and manual tests to analyze your conformance with the Web Content Accessibility Guidelines (WCAG), the international standards for accessibility.
If you’re building a web accessibility strategy, the Bureau of Internet Accessibility can help. We combine powerful A.I. with guidance from human experts to provide a sustainable path for digital compliance. Learn more by sending us a message or get started with a free, automated WCAG Level A/AA website analysis.