
GLEE: A Unified Framework and Benchmark for Language-based Economic Environments

Eilam Shapira, Omer Madmon, Itamar Reinman, Samuel Joseph Amouyal, Roi Reichart, Moshe Tennenholtz

2024-10-10


Summary

This paper presents GLEE, a new framework and benchmark designed to evaluate how well large language models (LLMs) perform in economic environments where interaction takes place through natural language.

What's the problem?

As LLMs are increasingly used in areas like online shopping and recommendation systems, it's important to understand how they behave in economic interactions. However, there hasn't been a standardized way to measure their performance in these settings. Different studies rely on different assumptions, design choices, and evaluation criteria, making it hard to compare results and understand how well these models really work.

What's the solution?

To tackle this issue, the authors developed GLEE, which provides a consistent way to evaluate LLMs in two-player, sequential, language-based games that mimic real economic scenarios. They define three base families of games, covering bargaining, negotiation, and persuasion, with consistent parameterization and economic measures for both individual performance (self-gain) and the game outcome (efficiency and fairness). They also built an open-source framework for simulating and analyzing these interactions, which they used to collect a dataset of LLM vs. LLM games across numerous configurations as well as a dataset of human vs. LLM games.
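To make the setup concrete, below is a minimal sketch of what a two-player, sequential, language-based interaction loop could look like. The `Agent`, `GameState`, `judge`, and `play_game` names are illustrative assumptions for exposition only, not the actual GLEE API.

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    """Running record of one two-player, sequential language-based game (illustrative)."""
    history: list = field(default_factory=list)  # alternating natural-language messages/offers
    done: bool = False
    payoffs: tuple = (0.0, 0.0)                  # (player 1 gain, player 2 gain) at termination

class Agent:
    """Hypothetical agent wrapper; in practice this would prompt an LLM with the game context."""
    def __init__(self, name, llm_fn):
        self.name = name
        self.llm_fn = llm_fn  # callable: prompt string -> response string

    def act(self, state: GameState) -> str:
        prompt = "\n".join(state.history) or "You open the bargaining. Make an offer."
        return self.llm_fn(prompt)

def play_game(agent_a: Agent, agent_b: Agent, judge, max_turns: int = 10) -> GameState:
    """Alternate turns between two agents until the judge declares the game over.

    `judge` is an assumed rule component that parses the dialogue (e.g., detects an
    accepted offer) and assigns payoffs; GLEE's actual game mechanics may differ.
    """
    state = GameState()
    players = [agent_a, agent_b]
    for turn in range(max_turns):
        mover = players[turn % 2]
        message = mover.act(state)
        state.history.append(f"{mover.name}: {message}")
        state.done, state.payoffs = judge(state.history)
        if state.done:
            break
    return state

# Toy run with stub components (no real LLM calls):
echo = lambda prompt: "I propose a 60/40 split."
def judge(history):
    return (len(history) >= 2, (60.0, 40.0))  # end after one exchange, fixed payoffs
result = play_game(Agent("Alice", echo), Agent("Bob", echo), judge)
print(result.payoffs)  # (60.0, 40.0)
```

In such a setup, swapping in different LLM backends or a human interface for `llm_fn` is what allows LLM vs. LLM and human vs. LLM interactions to be collected under the same game rules.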

Why it matters?

This research is significant because it helps standardize how we assess the capabilities of LLMs in economic contexts. By providing a clear benchmark and framework for comparison, GLEE can facilitate better understanding and development of AI systems that bargain, negotiate, and persuade in interactions with people, ultimately improving applications in areas like e-commerce and automated customer service.

Abstract

Large Language Models (LLMs) show significant potential in economic and strategic interactions, where communication via natural language is often prevalent. This raises key questions: Do LLMs behave rationally? Can they mimic human behavior? Do they tend to reach an efficient and fair outcome? What is the role of natural language in the strategic interaction? How do characteristics of the economic environment influence these dynamics? These questions become crucial concerning the economic and societal implications of integrating LLM-based agents into real-world data-driven systems, such as online retail platforms and recommender systems. While the ML community has been exploring the potential of LLMs in such multi-agent setups, varying assumptions, design choices and evaluation criteria across studies make it difficult to draw robust and meaningful conclusions. To address this, we introduce a benchmark for standardizing research on two-player, sequential, language-based games. Inspired by the economic literature, we define three base families of games with consistent parameterization, degrees of freedom and economic measures to evaluate agents' performance (self-gain), as well as the game outcome (efficiency and fairness). We develop an open-source framework for interaction simulation and analysis, and utilize it to collect a dataset of LLM vs. LLM interactions across numerous game configurations and an additional dataset of human vs. LLM interactions. Through extensive experimentation, we demonstrate how our framework and dataset can be used to: (i) compare the behavior of LLM-based agents to human players in various economic contexts; (ii) evaluate agents in both individual and collective performance measures; and (iii) quantify the effect of the economic characteristics of the environments on the behavior of agents.
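The abstract distinguishes individual performance (self-gain) from collective measures of the game outcome (efficiency and fairness). As a rough illustration, here is one common way such measures can be computed from the two players' final payoffs; these are assumed, textbook-style definitions, not necessarily the exact formulas used in the paper.

```python
def self_gain(payoffs, player):
    """Individual performance: the payoff the given player secured (illustrative definition)."""
    return payoffs[player]

def efficiency(payoffs, max_total):
    """Collective performance: total realized payoff relative to the best achievable total."""
    return sum(payoffs) / max_total if max_total else 0.0

def fairness(payoffs):
    """Equality of the split: 1.0 for an even split, falling to 0.0 for a one-sided outcome."""
    total = sum(payoffs)
    return 1.0 - abs(payoffs[0] - payoffs[1]) / total if total else 0.0

# Example: in a bargaining game over a pie of 100, player 1 ends with 60 and player 2 with 40.
payoffs = (60.0, 40.0)
print(self_gain(payoffs, 0))       # 60.0
print(efficiency(payoffs, 100.0))  # 1.0  (no value left on the table)
print(fairness(payoffs))           # 0.8  (moderately uneven split)
```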