How Many Parameters Does it Take to Change a Light Bulb? Evaluating Performance in Self-Play of Conversational Games as a Function of Model Characteristics
Nidhir Bhavsar, Jonathan Jordan, Sherzod Hakimov, David Schlangen
2024-06-25

Summary
This paper explores what makes a large language model (LLM) perform well in conversational games, focusing on how characteristics such as the number of parameters and the type of training affect its abilities.
What's the problem?
Determining what factors contribute to the success of LLMs is challenging. While benchmarks exist to measure performance, it's unclear how specific model features influence their effectiveness in real-world applications, especially in goal-oriented tasks like conversational games.
What's the solution?
The authors take a recently introduced type of benchmark that tests LLMs through self-play of conversational games and analyze how performance varies with model characteristics, such as the number of parameters (roughly, the model's capacity) and the type of training. Their findings show that while more parameters generally lead to better performance, there is still a wide spread of results among models of similar size, and much of that spread is explained by how the models were trained, in particular the quality of the fine-tuning data and the fine-tuning method.
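To make this kind of analysis concrete, here is a minimal Python sketch (with invented model names and scores, not the paper's data) that regresses a benchmark score on the logarithm of parameter count; the residuals of such a fit correspond to the within-size-bracket spread that the authors attribute to training factors.

```python
import numpy as np

# Hypothetical (model_name, parameters_in_billions, benchmark_score) tuples --
# illustrative values only, not results from the paper.
models = [
    ("model-a-7b",  7,  18.0),
    ("model-b-13b", 13, 25.0),
    ("model-c-34b", 34, 31.0),
    ("model-d-70b", 70, 42.0),
]

sizes  = np.array([m[1] for m in models], dtype=float)
scores = np.array([m[2] for m in models], dtype=float)

# Fit score ~ a * log10(parameters) + b to see how much of the variance
# size alone explains; the residuals capture the within-bracket spread
# attributed to training choices (fine-tuning data quality and method).
a, b = np.polyfit(np.log10(sizes), scores, deg=1)
residuals = scores - (a * np.log10(sizes) + b)

for (name, _, _), r in zip(models, residuals):
    print(f"{name:12s} residual vs. size-only fit: {r:+.1f}")
```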
Why it matters?
This research is important because it helps clarify what factors contribute to the effectiveness of LLMs in practical applications. Understanding these relationships can guide future improvements in model design and training, ultimately leading to more capable and reliable AI systems for tasks like conversation and interaction.
Abstract
What makes a good Large Language Model (LLM)? That it performs well on the relevant benchmarks -- which hopefully measure, with some validity, the presence of capabilities that are also challenged in real application. But what makes the model perform well? What gives a model its abilities? We take a recently introduced type of benchmark that is meant to challenge capabilities in a goal-directed, agentive context through self-play of conversational games, and analyse how performance develops as a function of model characteristics like number of parameters, or type of training. We find that while there is a clear relationship between number of parameters and performance, there is still a wide spread of performance points within a given size bracket, which is to be accounted for by training parameters such as fine-tuning data quality and method. From a more practical angle, we also find a certain degree of unpredictability about performance across access methods, possibly due to unexposed sampling parameters, and a very welcome performance stability against at least moderate weight quantisation during inference.
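As an illustration of the last point, a basic check for quantisation stability amounts to running the same model once at full (or bf16) precision and once with 8-bit weights, and comparing greedy outputs on the same game-style prompt. The sketch below uses the Hugging Face transformers and bitsandbytes libraries; the model id and prompt are placeholders, not the paper's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder model id -- any open-weights causal LM on the Hub works the same way.
model_id = "meta-llama/Llama-2-7b-chat-hf"

tok = AutoTokenizer.from_pretrained(model_id)

# Baseline: bf16 weights.
model_bf16 = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Same weights, quantised to 8 bit at load time.
model_int8 = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# An invented game-style prompt; greedy decoding so that any difference in
# output comes from the weights, not from sampling noise.
prompt = "You are playing a word-guessing game. Give a one-sentence clue for the word 'lamp'."

for name, model in [("bf16", model_bf16), ("int8", model_int8)]:
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(f"[{name}] {tok.decode(out[0], skip_special_tokens=True)}")
```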