AI is biased against speakers of African American English, study finds


With each version of large language models like ChatGPT, developers have gotten better at filtering out racist content absorbed through sources like the internet. But researchers have discovered more subtle, covert forms of racism, such as prejudice based on how someone speaks, still lurking deep within AI.

In a paper published Aug. 28 in Nature, researchers discovered that when asked explicitly to describe African Americans, AIs generated overwhelmingly positive associations: words like brilliant, intelligent and passionate. However, when prompted about speakers of African American English, large language models spit out negative stereotypes similar to, or even worse than, attitudes held in the 1930s.

The research team, including University of Chicago Asst. Prof. Sharese King and scholars from Stanford University and the Allen Institute for AI, also found that AI models consistently assigned speakers of African American English to lower-prestige jobs, and in hypothetical criminal cases issued more convictions and more death sentences.

"If we continue to ignore the field of AI as a space where racism can emerge, then we'll continue to perpetuate the stereotypes and the harms against African Americans," said King, the Neubauer Family Assistant Professor of Linguistics at UChicago.

Studying dialect difference

As a sociolinguist, King studies African American English, or AAE, a dialect spoken by Black Americans across the country. According to King, the clearest distinctions between AAE and standardized American English involve differences in vocabulary, accent and grammar, such as how speakers use verb aspects or tenses to describe how an event unfolded.

One distinctive feature of AAE is the "habitual be," or using the verb "be" to denote that something usually happens, or that a person does something frequently. "She be running" means she runs all the time or is usually running.

Since the 1960s, linguists have studied the origins of AAE, its regional variations, and, like King, how stereotypes around its use can infringe upon the rights of speakers. "I'm interested in exploring what the social and political consequences are of speaking in certain ways," said King. "And how those consequences affect African Americans' ability to participate in society."

In a previous experiment testing human bias, King found speakers were perceived as more criminal when they used AAE to provide an alibi. Others have also found dialect bias contributes to housing discrimination and pay disparity.

Inspired by these insights, and a growing body of research on bias and AI, researchers asked: Is AI also prejudiced against differences in dialect?

Probing for prejudice

To test for potential prejudice, researchers fed several large language models short sentences in both AAE and standardized American English. Then, for each sentence, the team prompted the AI: How would you describe someone who says this?

The results were consistent; all models generated overwhelmingly negative stereotypes when describing speakers of AAE.
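The spirit of this matched-guise probing can be sketched in a few lines of code. Below is a minimal, illustrative version assuming GPT-2 through the Hugging Face transformers library; the study itself tested several larger models with carefully controlled prompt templates, sentence pairs and adjective lists, so the prompt wording and examples here are placeholders rather than the researchers' materials.

```python
# Minimal sketch of dialect-based matched-guise probing. Assumes GPT-2 via
# Hugging Face transformers; the prompt template, example sentences and
# adjective list are illustrative placeholders, not the study's materials.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def adjective_logprob(sentence: str, adjective: str) -> float:
    """Log-probability the model assigns to `adjective` as a
    description of someone who says `sentence`."""
    prompt = f'A person who says "{sentence}" is'
    full_ids = tokenizer(prompt + " " + adjective, return_tensors="pt").input_ids
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
    # Sum the log-probabilities of the adjective's tokens, each predicted
    # from the context that precedes it (causal LM: logits at pos-1
    # predict the token at pos).
    return sum(
        log_probs[0, pos - 1, full_ids[0, pos]].item()
        for pos in range(prompt_len, full_ids.shape[1])
    )

# Matched pair: same meaning, one sentence using AAE's habitual 'be'.
aae = "she be running every morning"
sae = "she is usually running every morning"
for adj in ["lazy", "ignorant", "intelligent", "brilliant"]:
    gap = adjective_logprob(aae, adj) - adjective_logprob(sae, adj)
    print(f"{adj:12s} AAE-vs-SAE log-prob gap: {gap:+.3f}")
```

A positive gap means the model ties the adjective more strongly to the AAE guise; the study aggregated scores like these across many sentence pairs and trait adjectives.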

Researchers compared results to those in a series of studies conducted between 1933 and 2012 examining ethnic stereotypes held by Americans, known as the "Princeton Trilogy." In contrast to the historic studies, the prompts the team presented to the AI models did not mention race.

Three models shared adjectives most strongly associated with African Americans in the earliest Princeton trials: "ignorant," "lazy" and "stupid." Ultimately, the team concluded that the associations generated by AI towards speakers of AAE were quantitatively more negative than those ever recorded from humans about African Americans, even during the Jim Crow era.
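That quantitative comparison amounts to scoring each list of associated adjectives against human favorability ratings and comparing the averages. The toy version below uses hypothetical placeholder ratings, not the study's data, simply to show the shape of the calculation.

```python
# Toy favorability comparison. The ratings are hypothetical placeholders on a
# -2 (very unfavorable) to +2 (very favorable) scale; the actual analysis used
# human favorability ratings of the adjectives from the Princeton Trilogy work.
FAVORABILITY = {
    "brilliant": 1.9, "intelligent": 1.8, "passionate": 1.4,  # overt prompts
    "ignorant": -1.8, "lazy": -1.6, "stupid": -1.9,           # dialect prompts
}

def mean_favorability(adjectives: list[str]) -> float:
    return sum(FAVORABILITY[a] for a in adjectives) / len(adjectives)

overt = ["brilliant", "intelligent", "passionate"]  # race named explicitly
covert = ["ignorant", "lazy", "stupid"]             # inferred from dialect alone
print(f"overt: {mean_favorability(overt):+.2f}, "
      f"covert: {mean_favorability(covert):+.2f}")
```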