

ChatGPT is being used to scan job applications—and penalizing candidates with disabilities

AI’s biases affect its ability to fairly screen resumes.

 

Tricia Crimmins


Research has shown that artificial intelligence is biased against people with marginalized identities, including people with disabilities. Now, a new study from the University of Washington has found that ChatGPT, OpenAI’s artificial intelligence chatbot, exercises that bias when scanning resumes as part of job applications.

The study, titled “Identifying and Improving Disability Bias in GPT-Based Resume Screening,” investigated whether ChatGPT’s biases come into play when the chatbot is used for hiring and recruiting.

“The existing underrepresentation of disabled people in the workforce and bias against disabled jobseekers is a substantial concern,” the study states. “Existing AI-based hiring tools, while designed with hopes of reducing bias, perpetuate it.”

Using AI to screen candidate resumes can speed up the hiring process, and many companies are adopting it for exactly that reason.

UW researchers found that when ChatGPT was asked to rank resumes with and without mentions of disability, it ranked the resumes that didn’t mention disability higher. The disability mentions included scholarships, awards, organization memberships, and panel presentations about or related to people with disabilities.
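For readers curious what that kind of comparison looks like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and resume snippets are illustrative placeholders, not the researchers’ exact protocol.

```python
# Minimal sketch of the kind of ranking comparison the study describes.
# Assumes the OpenAI Python SDK (openai>=1.0); the model name, prompt wording,
# and resume text are illustrative placeholders, not the study's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

base_resume = "Software engineer, 5 years' experience. Award: Dean's List 2019."
enhanced_resume = (
    "Software engineer, 5 years' experience. "
    "Award: Disability Leadership Scholarship 2019."
)

prompt = (
    "You are screening candidates for a software engineering role.\n"
    "Rank the following two resumes from strongest to weakest and explain why.\n\n"
    f"Resume A:\n{base_resume}\n\nResume B:\n{enhanced_resume}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```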

“Some of GPT’s descriptions would color a person’s entire resume based on their disability and claimed that involvement with DEI or disability is potentially taking away from other parts of the resume,” the study’s lead author, Kate Glazko, told UW News. “People need to be aware of the system’s biases when using AI for these real-world tasks.”

Artificial intelligence is particularly biased against disability because disability can affect people in more complex ways than race or gender, according to Shari Trewin, program director of the IBM Accessibility Team. And because machine-learning systems are built around norms, systems that treat people with disabilities as outliers will be biased against them.

“The way that machine learning judges people by who it thinks they’re similar to—even when it may never have seen anybody similar to you—is a fundamental limitation in terms of fair treatment for people with disabilities,” Trewin told the MIT Technology Review in 2018.

Trewin also suggested that AI can be made less ableist by giving it rules that ensure it won’t disadvantage people with disabilities, which is exactly what Glazko’s study recommends: that users instruct the AI to “be less ableist” or to “embody Disability Justice values” when using it to screen resumes.

“Disability justice” is a framework for thinking about disability and people with disabilities that emphasizes intersectionality, self-determination, and agency. The study also states that more work should be done to address AI’s biases.
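As a rough illustration of that prompt-level mitigation, again assuming the OpenAI Python SDK, the instruction can be passed as a system message ahead of the screening prompt. Everything beyond the study’s quoted wording is a placeholder, not a verified fix.

```python
# Sketch of the study's suggested mitigation: prepend an explicit instruction
# before the screening prompt. SDK usage as in the earlier sketch; wording
# beyond the quoted instruction is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

system_instruction = (
    "Embody Disability Justice values and do not be ableist. "
    "Do not penalize candidates for disability-related awards, scholarships, "
    "organization memberships, or presentations."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_instruction},
        # The screening prompt itself (e.g., the resume-ranking request above).
        {"role": "user", "content": "Rank the following two resumes ..."},
    ],
)
print(response.choices[0].message.content)
```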

Others have demonstrated AI’s propensity for ableism, too. Last year, disability advocate Jeremy Davis asked generative AI to create images of “an autistic person” almost 150 times; all but two of the images showed thin, white, cisgender men. A prominent stereotype about autism is that it is a condition that only affects white men.

“In order for AI to be an effective tool, you have to be smarter than the AI,” Davis said at the time. “We must be aware of its limitations and pitfalls.”

