Digital Accessibility Blog

Ethics of AI in Accessibility: Avoiding Bias when Using Generative AI Text

Written by Marissa | Jun 19, 2025

Artificial intelligence (AI) will play a critical role in the future of digital accessibility (and as we’ve discussed in other articles, it’s currently playing an important role in helping organizations improve accessibility at scale). However, all tools have limitations, and generative AI has a fairly significant issue: algorithmic bias. 

Generative tools like ChatGPT, Google Gemini, and Grok work by predicting the text that is statistically most likely to satisfy the user’s prompt, based on patterns in their training data (we’re greatly simplifying here, but this isn’t an article about AI programming).
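
For the curious, here’s a rough sketch of what “most likely text” means in practice. It assumes the Hugging Face transformers library and the small public GPT-2 checkpoint, and it simply prints the words the model considers most probable after a prompt; commercial chatbots are far larger and more complex, but the core idea is the same.

```python
# Illustrative sketch only: peek at a small language model's "most likely next word."
# Assumes the Hugging Face transformers library and the public GPT-2 checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "A person in a wheelchair is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token
probs = torch.softmax(logits, dim=-1)        # turn scores into probabilities

# Print the five continuations the model rates as most likely.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```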

The issue is that the training data for large-scale AIs includes content that is inherently biased against people with disabilities. If you decide to use AI to generate content, you need to be aware of those biases and take steps to avoid reinforcing stereotypes. 

How Generative AI Reinforces Biases Against People with Disabilities

The biases in natural language processing (NLP) are well known within the disability community. In 2022, researchers from the Penn State College of Information Sciences and Technology found that all algorithms and models they tested contained significant implicit bias. When a disability-related term was introduced into a short sentence, the models were more likely to choose words that carried a negative sentiment.

The researchers also generated longer sections of text, alternately inserting adjectives related to non-disability or disability status, then tested how the word predicted for a blank in the template changed depending on which type of adjective was used.

"...When given the sentence of ‘A man has blank,’ the language models predicted ‘changed’ for the blank word,” a Penn State report on the project explains. “However, when a disability-related adjective was added to the sentence, resulting in ‘A deafblind man has blank’, the model predicted ‘died’ for the blank.”
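
If you’d like to see how this kind of fill-in-the-blank probe works, here’s a rough sketch using the Hugging Face transformers library and a publicly available BERT checkpoint. The Penn State team used its own models and templates, so treat this as an illustration of the technique, not a reproduction of the study.

```python
# Illustrative fill-in-the-blank probe, assuming the Hugging Face transformers
# library and the public bert-base-uncased checkpoint (not the study's models).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "A man has [MASK].",
    "A deafblind man has [MASK].",
]

for sentence in templates:
    predictions = fill(sentence, top_k=3)      # three most likely fills per blank
    words = [p["token_str"] for p in predictions]
    print(f"{sentence} -> {words}")
```

Comparing the predictions across the two templates makes differences easy to spot by hand; scoring the completed sentences with a sentiment model is one way to quantify the gap.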

These types of biases aren’t intentional, but they’re harmful. AI models can also contribute to existing biases against those with disabilities in other ways:

  • Underrepresentation: If a user asks an AI to generate an image or story about a "group of friends" or "a busy office," the output will likely feature only non-disabled individuals unless disability is specified in the prompt. Disability representation is important, and using inclusive imagery — including stock images — can make a difference.
  • Using Outdated Language: AI may default to outdated and offensive terminology (e.g., “confined to a wheelchair”) or use person-first language ("person with autism") when identity-first language ("autistic person") is preferred by some disability communities.
  • Representing Disability as a Tragedy: Content might focus excessively on a person "overcoming" their disability, framing them as an object of pity or a superhuman source of inspiration for non-disabled people. 
  • Erasure Through Censorship: Major models may be configured to avoid subjects that might make users feel uncomfortable. Some degree of censorship is essential for language models, but if disabilities are identified as “negative,” models may simply strip out all references to folks with disabilities.

This isn’t just about “political correctness,” by the way — the words we use act as reference points for laws and standards, and they can affect the ways that people feel about their disabilities.

Using Generative AI Responsibly: Don't Blindly Accept the Output

Generative AI is here to stay, and if you’re involved in content creation, you’re probably going to use it to some degree. Current models are excellent tools for research (though all facts and statistics need to be cross-checked — hallucination remains a serious issue for generative AI). 

But by being aware of biases against users with disabilities, you can take steps to prevent those biases from becoming a part of your website: 

  • Treat all AI-generated content as a first draft. Review the text for common stereotypes and double-check all discussions of disability against a style guide (we recommend the National Center on Disability and Journalism’s Style Guide). 
  • Include specific instructions in your prompts to represent people with disabilities respectfully and accurately (see the example after this list). Review our article on creating UX personas for people with disabilities; those personas can also be helpful when directing AI.
  • Actively look for opportunities to include representation of disability in neutral, everyday contexts. One excellent resource is Disabled and Here, a disability-led stock image and interview series.
  • Use the feedback features within AI tools to report biased or stereotypical outputs when you encounter them.
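
If your team generates copy through an API, here’s a hedged sketch of how those instructions can ride along with every request. It assumes the openai Python SDK; the model name and the exact wording are placeholders to adapt to your own style guide.

```python
# Hedged sketch: attach disability-aware style instructions to every request.
# Assumes the openai Python SDK; model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

STYLE_INSTRUCTIONS = (
    "Represent people with disabilities respectfully and accurately. "
    "Avoid outdated terms such as 'confined to a wheelchair'. "
    "Do not frame disability as a tragedy or as 'inspiration'. "
    "Include disabled people in neutral, everyday contexts where relevant."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: use whichever model your team has access to
    messages=[
        {"role": "system", "content": STYLE_INSTRUCTIONS},
        {"role": "user", "content": "Write a short blog intro about hybrid work."},
    ],
)
print(response.choices[0].message.content)
```

The same instructions work just as well pasted into a chat interface or saved as a custom instruction, so non-developers on your team can use them too.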

Ultimately, AI is a reflection of the data it’s trained on. By curating its output, you create better content — and contribute to positive discussions of disability.