AInstein: Assessing the Feasibility of AI-Generated Approaches to Research Problems
Shambhavi Mishra, Gaurav Sahu, Marco Pedersoli, Laurent Charlin, Jose Dolz, Christopher Pal
2025-10-08
Summary
This paper investigates whether large language models (LLMs) are actually *thinking* when they solve problems, or if they're just really good at remembering and repeating information they've already seen.
What's the problem?
It's currently unclear if LLMs demonstrate true reasoning skills or simply excel at recalling patterns from their training data. Researchers wanted to find out if LLMs could independently come up with solutions to complex AI problems, like those presented in cutting-edge research, without being given specific help or examples.
What's the solution?
The researchers created a system called AInstein. They took problem descriptions from AI research papers submitted to ICLR 2025, and then had an LLM act as a 'scientist' trying to solve those problems. This 'scientist' LLM would propose solutions, and then another LLM would critique those solutions, leading to revisions, mimicking how real scientific research works through cycles of proposal, review, and revision. They tested this on 1,214 papers and used another LLM, guided by a structured rubric, to judge the quality of the solutions, with manual checks by humans to confirm the results.
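The propose-critique cycle described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: in AInstein the solver and critic are LLM agents, whereas here they are stand-in stubs, and names like `propose`, `critique`, and `MAX_ROUNDS` are hypothetical.

```python
MAX_ROUNDS = 3  # hypothetical cap on critique-revision cycles

def propose(problem, feedback=None):
    # Stub for the solver agent: drafts a solution, or revises it
    # in light of critic feedback. A real system would call an LLM here.
    base = f"proposed solution to: {problem}"
    return base if feedback is None else f"{base} (revised per: {feedback})"

def critique(solution):
    # Stub for the critic agent: returns None when satisfied,
    # otherwise a feedback string that prompts another revision.
    if "(revised" in solution:
        return None
    return "clarify assumptions and compare against baselines"

def solve(problem):
    # Iterative refinement loop: propose, critique, revise until
    # the critic is satisfied or the round budget runs out.
    solution = propose(problem)
    for _ in range(MAX_ROUNDS):
        feedback = critique(solution)
        if feedback is None:
            break
        solution = propose(problem, feedback)
    return solution
```

The loop terminates either when the critic raises no further objections or when the revision budget is exhausted, which is the basic shape of any proposal-review-revision pipeline.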
Why does it matter?
This work is important because it gives us a better understanding of what LLMs are actually capable of. It shows they can sometimes rediscover existing solutions and even come up with new ideas, but their ability to do so is still limited and easily affected by how the problem is presented. It helps us see both the potential and the current weaknesses of using LLMs for genuine scientific discovery.
Abstract
Large language models (LLMs) demonstrate impressive capabilities across a wide range of tasks, yet it remains unclear whether such success reflects genuine reasoning or sophisticated recall. We introduce AInstein, a framework for testing whether LLMs can generate valid solutions to AI research problems using only their pretrained parametric knowledge -- without domain-specific fine-tuning, retrieval augmentation, or other external aids. Our approach extracts distilled problem statements from high-quality ICLR 2025 submissions, then tasks specialized solver agents with proposing and refining technical solutions through iterative critique loops, mimicking the cycles of proposal, review, and revision central to scientific inquiry. We evaluate AInstein on 1,214 ICLR papers stratified by acceptance tier (Oral, Spotlight, Poster), using an LLM-as-a-judge paradigm guided by a structured rubric, complemented by targeted manual checks. Performance is assessed with three metrics: Success Rate (does the solution address the problem?), Rediscovery (does it align with human-proposed methods?), and Novelty (does it yield valid, original approaches?). Our results reveal that while LLMs can rediscover feasible solutions and occasionally propose creative alternatives, their problem-solving ability remains fragile and highly sensitive to framing. These findings provide the first large-scale evidence on the extent to which LLMs can act as autonomous scientific problem-solvers, highlighting both their latent potential and their current limitations.
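The abstract's evaluation reduces each paper to three judge verdicts (Success Rate, Rediscovery, Novelty), aggregated across acceptance tiers. A hedged sketch of that aggregation, assuming per-paper boolean verdicts; the field names and sample data are illustrative, not the authors' actual schema or results:

```python
from collections import defaultdict

def aggregate(verdicts):
    """Compute per-tier rates for the three metrics.

    verdicts: list of dicts with keys "tier" (e.g. "Oral", "Spotlight",
    "Poster") and boolean "success", "rediscovery", "novelty" fields,
    as an LLM-as-a-judge might emit for each paper.
    """
    by_tier = defaultdict(list)
    for v in verdicts:
        by_tier[v["tier"]].append(v)
    return {
        tier: {
            metric: sum(v[metric] for v in vs) / len(vs)
            for metric in ("success", "rediscovery", "novelty")
        }
        for tier, vs in by_tier.items()
    }
```

Stratifying by tier lets one ask whether solutions to problems from higher-tier papers are harder for the model to rediscover, which is the kind of comparison the paper's setup supports.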