
Presumed Cultural Identity: How Names Shape LLM Responses

Siddhesh Pawar, Arnav Arora, Lucie-Aimée Kaffee, Isabelle Augenstein

2025-02-20


Summary

This paper examines how AI language models make assumptions about people's cultural identities based on their names. It's like studying how a computer might guess someone's background just from hearing their name, and how those guesses can slide into stereotyping.

What's the problem?

When people interact with AI chatbots, they often share their names, and the AI uses them to personalize its responses. However, this can lead the AI to make broad assumptions about a person's culture or background based solely on their name, assumptions that may be inaccurate or unfair. It's similar to how someone might incorrectly guess a person's nationality or culture just from hearing their name.

What's the solution?

The researchers studied how different AI models respond to common suggestion-seeking questions when the same query is paired with different names. They found that the models often made strong assumptions about cultural identity based on the name alone. This helped them understand how these biases surface in AI systems and point toward personalization that is more nuanced and less stereotypical.
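The basic probing idea can be sketched in a few lines of code. This is an illustrative reconstruction, not the paper's actual setup: the query template, the name list, and the keyword-counting proxy are all assumptions made for the example, and the LLM call is replaced by a mock response.

```python
# Sketch of a name-conditioned bias probe: the same suggestion-seeking
# query is paired with different user names, and responses are scored
# for culture-specific content. All names and templates are hypothetical.

QUERY = "My name is {name}. Can you suggest some dishes for my birthday dinner?"
NAMES = ["Raj", "Hiroshi", "Amara", "Luca"]  # hypothetical name set

def build_prompts(template: str, names: list[str]) -> dict[str, str]:
    """Create one prompt per name from a shared query template."""
    return {name: template.format(name=name) for name in names}

def count_culture_markers(response: str, markers: list[str]) -> int:
    """Crude proxy for cultural presumption: count culture-linked
    keywords appearing in a model response."""
    text = response.lower()
    return sum(text.count(marker) for marker in markers)

prompts = build_prompts(QUERY, NAMES)

# In a real experiment each prompt would be sent to an LLM and the
# responses compared across names; here we score a mock response.
mock_response = "You might enjoy paneer tikka, biryani, and gulab jamun."
score = count_culture_markers(mock_response, ["biryani", "paneer", "sushi"])
```

If the scores differ systematically across names for an otherwise identical query, the model is conditioning its suggestions on the presumed cultural identity behind the name, which is the effect the paper measures.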

Why does it matter?

This matters because as AI becomes more common in our daily lives, we need to make sure it's not reinforcing harmful stereotypes or making unfair assumptions about people. By understanding how AI systems interpret names, we can work on creating better, fairer AI that respects the complexity of human identity. This could lead to more inclusive and respectful AI assistants, chatbots, and other technologies that interact with people from diverse backgrounds.

Abstract

Names are deeply tied to human identity. They can serve as markers of individuality, cultural heritage, and personal history. However, using names as a core indicator of identity can lead to over-simplification of complex identities. When interacting with LLMs, user names are an important point of information for personalisation. Names can enter chatbot conversations through direct user input (requested by chatbots), as part of task contexts such as CV reviews, or as built-in memory features that store user information for personalisation. We study biases associated with names by measuring cultural presumptions in the responses generated by LLMs when presented with common suggestion-seeking queries, which might involve making assumptions about the user. Our analyses demonstrate strong assumptions about cultural identity associated with names present in LLM generations across multiple cultures. Our work has implications for designing more nuanced personalisation systems that avoid reinforcing stereotypes while maintaining meaningful customisation.