How Well Do LLMs Represent Values Across Cultures? Empirical Analysis of LLM Responses Based on Hofstede Cultural Dimensions
Julia Kharchenko, Tanya Roosta, Aman Chadha, Chirag Shah
2024-06-24

Summary
This paper examines how well Large Language Models (LLMs) understand and represent the values of different cultures. It uses a framework called Hofstede's Cultural Dimensions to analyze whether these models provide culturally appropriate responses based on users' backgrounds.
What's the problem?
People from different cultures have varying values and beliefs, and it's important for AI models to recognize and respect these differences. However, many LLMs may not accurately reflect these cultural values in their responses, which can lead to misunderstandings or inappropriate advice.
What's the solution?
The researchers tested several LLMs by asking them advice questions tied to the five Hofstede Cultural Dimensions: power distance, individualism versus collectivism, masculinity versus femininity, uncertainty avoidance, and long-term versus short-term orientation. They created personas from 36 different countries and, separately, issued prompts in the languages predominantly spoken in those countries to see how consistently the LLMs understood and responded to cultural values. The study found that while LLMs can recognize some cultural differences, they often fail to give advice that actually aligns with those values.
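To make this setup concrete, here is a minimal sketch of persona-conditioned prompting across the five dimensions. The advice questions, the persona template, and the query_llm stub are hypothetical stand-ins for illustration, not the authors' actual prompts or evaluation code.

```python
# Hypothetical sketch of persona-conditioned advice prompting across the
# five Hofstede dimensions; questions and template are illustrative only,
# not the paper's actual prompts.

HOFSTEDE_DIMENSIONS = {
    "power_distance": "Should I openly question my manager's decision in a meeting?",
    "individualism": "Should I prioritize my own career goals over my family's wishes?",
    "masculinity": "Should I compete for a promotion or focus on team harmony?",
    "uncertainty_avoidance": "Should I keep a stable job or join a risky startup?",
    "long_term_orientation": "Should I save for the distant future or enjoy my earnings now?",
}

# The study covered 36 countries; three are listed here for brevity.
COUNTRIES = ["United States", "Japan", "Sweden"]

def build_prompt(country: str, question: str) -> str:
    """Embed a country persona into an advice request."""
    return f"I am a person from {country}. {question} Please give me direct advice."

def query_llm(prompt: str) -> str:
    """Placeholder for a call to the LLM under test (e.g., a chat API)."""
    raise NotImplementedError("wire this to your model endpoint")

if __name__ == "__main__":
    for country in COUNTRIES:
        for dimension, question in HOFSTEDE_DIMENSIONS.items():
            prompt = build_prompt(country, question)
            print(f"[{country} | {dimension}] {prompt}")
            # response = query_llm(prompt)
            # One would then compare each response against the country's
            # known Hofstede scores to check cultural alignment.
```

The same loop could be repeated with the question translated into each country's predominant language, which is how the paper separately tests persona-based and language-based cultural cues.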
Why it matters?
This research is significant because it highlights the need for AI models to be culturally sensitive and aware of the diverse values of users. By improving how LLMs are trained to understand cultural differences, we can create AI systems that give more relevant, culturally appropriate advice, fostering clearer communication and understanding across cultures.
Abstract
Large Language Models (LLMs) attempt to imitate human behavior by responding to humans in a way that pleases them, including adhering to their values. However, humans come from diverse cultures with different values. It is critical to understand whether LLMs showcase different values to the user based on the stereotypical values of a user's known country. We prompt different LLMs with a series of advice requests based on the 5 Hofstede Cultural Dimensions, a quantifiable way of representing the values of a country. In each prompt, we incorporate personas representing 36 different countries and, separately, languages predominantly tied to each country, to analyze the consistency of the LLMs' cultural understanding. From our analysis of the responses, we find that LLMs can differentiate between one side of a value and the other, and understand that countries have differing values, but they do not always uphold those values when giving advice, and they fail to recognize the need to answer differently based on differing cultural values. Rooted in these findings, we present recommendations for training value-aligned and culturally sensitive LLMs. More importantly, the methodology and framework developed here can help further understand and mitigate culture and language alignment issues with LLMs.