Survey of Cultural Awareness in Language Models: Text and Beyond

Siddhesh Pawar, Junyeong Park, Jiho Jin, Arnav Arora, Junho Myung, Srishti Yadav, Faiz Ghifari Haznitrama, Inhwa Song, Alice Oh, Isabelle Augenstein

2024-11-05

Summary

This paper surveys research on cultural awareness in large language models (LLMs) and explores how to make these models more inclusive of and sensitive to different cultures when they interact with users.

What's the problem?

As LLMs are used more widely in applications like chatbots and virtual assistants, it's crucial that they understand and respect cultural differences. Many existing models focus only on language diversity and often overlook the deeper aspects of culture, which can lead to misunderstandings or biases in their responses.

What's the solution?

The authors define what cultural awareness means for LLMs, drawing from studies in psychology and anthropology. They review methods for creating datasets that reflect different cultures, strategies for including cultural context in tasks, and ways to evaluate how well LLMs understand cultural nuances. They also discuss the ethical implications of ensuring cultural alignment and how human-computer interaction can promote this inclusivity.

Why it matters?

This research is important because it highlights the need for AI systems to be culturally aware, which can improve user experiences and ensure that technology serves everyone fairly. By addressing cultural biases and promoting inclusivity, developers can create LLMs that better understand and respond to the diverse backgrounds of their users, ultimately leading to more effective communication and interaction.

Abstract

Large-scale deployment of large language models (LLMs) in various applications, such as chatbots and virtual assistants, requires LLMs to be culturally sensitive to the user to ensure inclusivity. Culture has been widely studied in psychology and anthropology, and there has been a recent surge of research on making LLMs more culturally inclusive that goes beyond multilinguality and builds on findings from psychology and anthropology. In this paper, we survey efforts towards incorporating cultural awareness into text-based and multimodal LLMs. We start by defining cultural awareness in LLMs, taking the definitions of culture from anthropology and psychology as a point of departure. We then examine methodologies adopted for creating cross-cultural datasets, strategies for cultural inclusion in downstream tasks, and methodologies that have been used for benchmarking cultural awareness in LLMs. Further, we discuss the ethical implications of cultural alignment, the role of Human-Computer Interaction in driving cultural inclusion in LLMs, and the role of cultural alignment in driving social science research. We finally provide pointers to future research based on our findings about gaps in the literature.