AI chatbots show bias based on people’s names, researchers find

INDIANAPOLIS (WISH) — The response people get from an artificial intelligence chatbot could change based on how “Black” a person’s name sounds, according to researchers at Stanford Law School.

The researchers say the chatbots' responses differed systematically based on names associated with race and gender.

The chatbots tested included OpenAI's GPT-4, the model behind ChatGPT, and Google's PaLM 2, USA TODAY reported.

In the study, researchers repeatedly asked the chatbots the same questions, changing only the name. The authors fed each chatbot a name and asked for advice across a range of scenarios.
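The study's exact prompts and test harness are not reproduced here, but the basic audit pattern is straightforward to sketch. The snippet below is a minimal illustration, assuming the OpenAI Python client; the prompt template, the names, and the model choice are assumptions for illustration, not the researchers' actual materials.

```python
# Minimal sketch of a name-substitution audit: ask the same question,
# varying only the name, and compare the answers. Requires the `openai`
# package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative template, not the study's actual prompt.
TEMPLATE = ("I am making a job offer to {name} for an associate attorney "
            "position. What starting salary should I offer? "
            "Reply with a single number.")

NAMES = ["Tamika", "Todd"]  # names associated with different groups

for name in NAMES:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; the study compared several models
        messages=[{"role": "user", "content": TEMPLATE.format(name=name)}],
        temperature=0,  # keep sampling stable so differences trace to the name
    )
    print(f"{name}: {response.choices[0].message.content}")
```

In practice, an audit like this would run each name through many prompt templates and repeated trials before comparing the averaged responses.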

“We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women,” the authors wrote. “Names associated with Black women receive the least advantageous outcomes.”

The authors, Amit Haim, Alejandro Salinas, and Julian Nyarko, say the biases were consistent across 42 different prompt templates and several models, which they say points to a systemic issue.

USA TODAY used an example from the study to illustrate how such bias could play out. A chatbot could suggest that a job candidate named Tamika be offered a $79,375 salary as a lawyer, the paper reported, while changing the name to Todd raised the suggested offer to $82,485.

The authors note that the biases highlight real risks, particularly as businesses incorporate artificial intelligence into their daily operations.

AI models are trained on data from many sources, and the researchers say the biases suggest the models encoded stereotypes present in that training data.

In a statement reported by CNN, OpenAI said bias is an industry-wide problem that it is working to combat.

Google did not respond to a CNN request for comment.