Large Language Models Think Too Fast To Explore Effectively

Lan Pan, Hanbo Xie, Robert C. Wilson

2025-01-31

Summary

This paper examines how well large language models (LLMs), which are advanced AI systems, can explore and discover new information in open-ended tasks. The researchers used the game Little Alchemy 2 to test this ability and compare it to human performance.

What's the problem?

While LLMs are known to be capable in many ways, little is known about their ability to explore and adapt to new situations. This matters because exploration is how agents find new information and solve problems creatively. The researchers wanted to see whether LLMs could explore as well as or better than humans, especially in tasks with no clear end goal.

What's the solution?

The researchers used Little Alchemy 2, a game where you combine elements to create new ones, as a way to test exploration skills. They compared how LLMs and humans played the game. They found that most LLMs didn't do as well as humans, except for one called o1. They also looked at how the LLMs were making decisions and found that they often relied on uncertainty (trying things they're not sure about) rather than balancing that with empowerment (trying things that might lead to more options later). By analyzing the inner workings of the LLMs, they discovered that the AI tends to make decisions too quickly, without considering all the possibilities.
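The two exploration signals described above can be made concrete with a small sketch. This is not the paper's code; all names, numbers, and the scoring formula are hypothetical illustrations of how an agent might trade off uncertainty (preferring rarely tried combinations) against empowerment (preferring combinations whose results unlock more future options).

```python
# Illustrative sketch (not the paper's implementation): scoring candidate
# element combinations in a Little Alchemy 2-style game by mixing two
# exploration signals. All state and values below are hypothetical.

def uncertainty(trial_counts, combo):
    # Fewer past attempts -> higher uncertainty about the outcome.
    return 1.0 / (1.0 + trial_counts.get(combo, 0))

def empowerment(unlock_estimates, combo):
    # Estimated number of new recipes the resulting element would enable.
    return unlock_estimates.get(combo, 0.0)

def exploration_score(combo, trial_counts, unlock_estimates, alpha=0.5):
    # alpha balances the two signals. The paper's finding can be read as:
    # most LLMs behave as if alpha is near 1 (uncertainty-only), while
    # humans weigh both terms.
    return (alpha * uncertainty(trial_counts, combo)
            + (1 - alpha) * empowerment(unlock_estimates, combo))

# Hypothetical game state: one well-explored combo, one untried combo.
trial_counts = {("water", "fire"): 3, ("earth", "air"): 0}
unlock_estimates = {("water", "fire"): 0.2, ("earth", "air"): 2.5}

candidates = [("water", "fire"), ("earth", "air")]
best = max(candidates,
           key=lambda c: exploration_score(c, trial_counts, unlock_estimates))
# The untried, high-empowerment combo wins under a balanced alpha.
```

An uncertainty-only agent (alpha = 1) would also pick the untried combination here, but it would keep ignoring empowerment even when a well-understood combination opens many more doors, which is the imbalance the paper attributes to most LLMs.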

Why it matters?

This matters because it shows a limitation in how current AI systems explore and learn in open-ended situations. Understanding this can help us make better AI that can adapt and discover new things more like humans do. This could be important for creating AI that can help solve complex, real-world problems where the solution isn't obvious. It also gives us insights into how AI 'thinks' differently from humans, which could lead to improvements in AI design and help us use AI more effectively in various fields.

Abstract

Large Language Models have demonstrated many intellectual capacities. While numerous benchmarks assess their intelligence, limited attention has been given to their ability to explore, an essential capacity for discovering new information and adapting to novel environments in both natural and artificial systems. The extent to which LLMs can effectively explore, particularly in open-ended tasks, remains unclear. This study investigates whether LLMs can surpass humans in exploration during an open-ended task, using Little Alchemy 2 as a paradigm, where agents combine elements to discover new ones. Results show most LLMs underperform compared to humans, except for the o1 model, with traditional LLMs relying primarily on uncertainty-driven strategies, unlike humans who balance uncertainty and empowerment. Representational analysis of the models with Sparse Autoencoders revealed that uncertainty and choices are represented at earlier transformer blocks, while empowerment values are processed later, causing LLMs to think too fast and make premature decisions, hindering effective exploration. These findings shed light on the limitations of LLM exploration and suggest directions for improving their adaptability.