Benchmarking LLMs for Political Science: A United Nations Perspective
Yueqing Liang, Liangwei Yang, Chen Wang, Congying Xia, Rui Meng, Xiongxiao Xu, Haoran Wang, Ali Payani, Kai Shu
2025-02-24
Summary
This paper introduces a new way to test how well AI language models (LLMs) can understand and work with high-stakes political decisions, specifically in the context of the United Nations.
What's the problem?
While AI language models have gotten really good at understanding and writing text, we don't know much about how well they can handle the complex world of high-stakes political decisions, like those made at the United Nations. These decisions can affect millions of people, so it's crucial to understand if AI can be helpful in this area.
What's the solution?
The researchers created a special test called UNBench using real information from UN Security Council meetings from 1994 to 2024. This test checks how well AI can do four important political tasks: figuring out who wrote a document, predicting how countries will vote, guessing if a proposal will pass, and writing speeches that sound like they're from different countries. These tasks cover the main parts of how the UN makes decisions: writing proposals, voting, and discussing issues.
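To make the task setup concrete, here is a minimal sketch of how one of the four tasks, the representative voting simulation, might be framed as a constrained classification problem for an LLM. The function names, prompt wording, and record fields below are illustrative assumptions, not the actual UNBench code or schema (which is available in the paper's repository).

```python
def build_vote_prompt(country: str, draft_excerpt: str) -> str:
    """Frame the voting-simulation task as a constrained prompt for an LLM.

    The prompt asks the model to role-play a country's UNSC representative
    and answer with one of three fixed vote labels.
    """
    return (
        f"You are the UN Security Council representative of {country}.\n"
        f"Draft resolution excerpt:\n{draft_excerpt}\n"
        "How would your country vote? Answer with exactly one of: "
        "'in favour', 'against', 'abstain'."
    )


def accuracy(predictions: list[str], labels: list[str]) -> float:
    """Exact-match accuracy of simulated votes against recorded votes."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)


# Example: comparing mock model outputs against (mock) recorded votes.
preds = ["in favour", "abstain", "in favour"]
gold = ["in favour", "against", "in favour"]
print(accuracy(preds, gold))
```

The other tasks could be framed similarly: co-penholder judgment and draft adoption prediction as classification over drafts, and statement generation as open-ended text generation scored against real diplomatic speeches.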
Why it matters?
This matters because it helps us understand if AI could be useful in global politics. If AI can do well on these tests, it might be able to help diplomats and world leaders make better decisions or understand complex political situations more quickly. However, it also shows us where AI might struggle with political tasks, which is important to know before we start using it for real-world decisions that affect entire countries.
Abstract
Large Language Models (LLMs) have achieved significant advances in natural language processing, yet their potential for high-stakes political decision-making remains largely unexplored. This paper addresses the gap by focusing on the application of LLMs to the United Nations (UN) decision-making process, where the stakes are particularly high and political decisions can have far-reaching consequences. We introduce a novel dataset comprising publicly available UN Security Council (UNSC) records from 1994 to 2024, including draft resolutions, voting records, and diplomatic speeches. Using this dataset, we propose the United Nations Benchmark (UNBench), the first comprehensive benchmark designed to evaluate LLMs across four interconnected political science tasks: co-penholder judgment, representative voting simulation, draft adoption prediction, and representative statement generation. These tasks span the three stages of the UN decision-making process--drafting, voting, and discussing--and aim to assess LLMs' ability to understand and simulate political dynamics. Our experimental analysis demonstrates the potential and challenges of applying LLMs in this domain, providing insights into their strengths and limitations in political science. This work contributes to the growing intersection of AI and political science, opening new avenues for research and practical applications in global governance. The UNBench Repository can be accessed at: https://github.com/yueqingliang1/UNBench.