Artificial intelligence (AI) will play a critical role in the future of digital accessibility (and as we’ve discussed in other articles, it’s currently playing an important role in helping organizations improve accessibility at scale). However, all tools have limitations, and generative AI has a fairly significant issue: algorithmic bias.
Generative tools like ChatGPT, Google Gemini, and Grok work by using complex statistical models to predict the text most likely to satisfy the user’s request. (We’re greatly simplifying here, but this isn’t an article about AI programming.)
The issue is that the training data for large-scale AIs includes content that is inherently biased against people with disabilities. If you decide to use AI to generate content, you need to be aware of those biases and take steps to avoid reinforcing stereotypes.
The biases in natural language processing (NLP) are well known within the disability community. In 2022, researchers from the Penn State College of Information Sciences and Technology found that every algorithm and model they tested contained significant implicit bias. When a disability-related term was introduced into a short sentence, the models were more likely to choose words that produced a negative sentiment.
The researchers also generated longer sections of text, alternately inserting adjectives related to non-disability or disability status, then tested how the word predicted for a blank left in the template changed depending on which type of adjective was used.
"...When given the sentence of ‘A man has blank,’ the language models predicted ‘changed’ for the blank word,” a Penn State report on the project explains. “However, when a disability-related adjective was added to the sentence, resulting in ‘A deafblind man has blank’, the model predicted ‘died’ for the blank.”
These types of biases aren’t intentional, but they’re harmful. And AI models can contribute to existing biases against people with disabilities in other ways as well.
This isn’t just about “political correctness,” by the way — the words we use act as reference points for laws and standards, and they can affect the ways that people feel about their disabilities.
Generative AI is here to stay, and if you’re involved in content creation, you’re probably going to use it to some degree. Current models are excellent tools for research (though all facts and statistics need to be cross-checked — hallucination remains a serious issue for generative AI).
But by being aware of biases against users with disabilities, you can take steps to prevent those biases from becoming part of your website.
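One concrete safeguard is an automated first pass that flags outdated or ableist phrasing in AI-generated drafts for human review. The sketch below is a minimal illustration; the term list is a hypothetical starting point rather than an authoritative style guide, and flagged phrases should go to a human editor, not an auto-replace step.

```python
import re

# Hypothetical starter list of phrases worth a second look, paired with
# alternatives to consider. Illustrative only; defer to your own style
# guide and, most importantly, to the preferences of your audience.
FLAGGED_PHRASES = {
    r"\bwheelchair[- ]bound\b": "wheelchair user",
    r"\bsuffers? from\b": "has / lives with",
    r"\bthe disabled\b": "disabled people / people with disabilities",
    r"\bvictim of\b": "person who has / person with",
}

def review_draft(text: str) -> list[tuple[str, str]]:
    """Return (matched phrase, suggested alternative) pairs for human review."""
    findings = []
    for pattern, suggestion in FLAGGED_PHRASES.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            findings.append((match.group(0), suggestion))
    return findings

draft = "The author, who suffers from epilepsy, is wheelchair-bound."
for phrase, suggestion in review_draft(draft):
    print(f"Flagged: {phrase!r} -> consider: {suggestion}")
```

A check like this won’t catch framing or tone problems, which can’t be reduced to a wordlist, but it gives human editors a consistent starting point.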
Ultimately, AI is a reflection of the data it’s trained on. By curating its output, you create better content — and contribute to positive discussions of disability.