
Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation

Shiven Sinha, Shashwat Goel, Ponnurangam Kumaraguru, Jonas Geiping, Matthias Bethge, Ameya Prabhu

2025-02-27


Summary

This paper examines whether AI language models can find mistakes in proposed solutions to problems, especially in computer programming. The researchers created a new benchmark called REFUTE to measure how well AI can spot errors and construct examples that prove solutions wrong.

What's the problem?

While AI language models are getting better at solving problems, they're not very good at finding mistakes in solutions. This skill is important in science and programming because it helps improve ideas and solutions over time. Current tests for AI mostly focus on whether they can solve problems, not whether they can find flaws in solutions.

What's the solution?

The researchers built a new benchmark called REFUTE, which draws on real programming problems and incorrect submissions from coding competitions where human experts had already found counterexamples. They then tested how well the best AI models could spot the flaws in these solutions and construct inputs that demonstrate the failure. Even the strongest model tested, OpenAI o3-mini, produced valid counterexamples for less than 9% of the incorrect solutions, despite ratings suggesting it could solve up to 48% of these problems from scratch.
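The key property that makes this testable at scale is that a counterexample can be verified automatically: run the incorrect solution on a candidate input and compare its output against a correct reference. Here is a minimal sketch of that idea in Python; the function names and the buggy/reference pair are illustrative examples, not the benchmark's actual code or API.

```python
# Illustrative sketch of automatic counterexample checking, in the spirit
# of REFUTE. All names here are hypothetical, not the benchmark's API.

def buggy_max_subarray(nums):
    # Subtly incorrect solution: assumes the best subarray sum is never
    # negative, so it fails when every element is negative.
    best, current = 0, 0
    for x in nums:
        current = max(0, current + x)
        best = max(best, current)
    return best

def reference_max_subarray(nums):
    # Correct Kadane's algorithm: the answer may be negative.
    best = current = nums[0]
    for x in nums[1:]:
        current = max(x, current + x)
        best = max(best, current)
    return best

def is_counterexample(candidate_input, buggy, reference):
    # A candidate input falsifies the buggy solution exactly when the
    # two programs disagree on it.
    return buggy(candidate_input) != reference(candidate_input)

# An all-negative input exposes the bug: buggy returns 0, reference -1.
print(is_counterexample([-3, -1, -2],
                        buggy_max_subarray,
                        reference_max_subarray))  # True
```

Finding such an input is the hard part being benchmarked; checking it, as above, is mechanical, which is what lets REFUTE update dynamically with fresh competition problems.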

Why it matters?

This research matters because it shows we need to make AI better at finding mistakes, not just solving problems. Being able to spot errors is crucial for scientific progress and for making AI systems that can improve themselves. By creating tests like REFUTE, researchers can work on making AI that's better at critical thinking and checking its own work, which could lead to faster scientific discoveries and more reliable AI systems in the future.

Abstract

There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery. Falsifying hypotheses is key to scientific progress, as it allows claims to be iteratively refined over time. This process requires significant researcher effort, reasoning, and ingenuity. Yet current benchmarks for LMs predominantly assess their ability to generate solutions rather than challenge them. We advocate for developing benchmarks that evaluate this inverse capability - creating counterexamples for subtly incorrect solutions. To demonstrate this approach, we start with the domain of algorithmic problem solving, where counterexamples can be evaluated automatically using code execution. Specifically, we introduce REFUTE, a dynamically updating benchmark that includes recent problems and incorrect submissions from programming competitions, where human experts successfully identified counterexamples. Our analysis finds that the best reasoning agents, even OpenAI o3-mini (high) with code execution feedback, can create counterexamples for only <9% of incorrect solutions in REFUTE, even though ratings indicate its ability to solve up to 48% of these problems from scratch. We hope our work spurs progress in evaluating and enhancing LMs' ability to falsify incorrect solutions - a capability that is crucial for both accelerating research and making models self-improve through reliable reflective reasoning.