Demystifying Scientific Problem-Solving in LLMs by Probing Knowledge and Reasoning
Alan Li, Yixin Liu, Arpan Sarkar, Doug Downey, Arman Cohan
2025-08-27
Summary
This paper investigates how well large language models (LLMs) can solve scientific problems, a task that requires both deep scientific knowledge and the ability to reason through problems step by step. It identifies weaknesses in current methods for testing these skills and proposes new benchmarks and probing tools to better understand how LLMs approach science.
What's the problem?
Currently, there isn't a good, comprehensive way to test whether an LLM can *actually* do science. Existing benchmarks each cover only a narrow slice of the process, and it's hard to tell whether a model succeeds because it already knew the answer or because it reasoned its way there. The paper argues that we need to separately evaluate both the knowledge an LLM holds and its ability to use that knowledge to solve problems.
What's the solution?
The researchers created SciReas, a suite of existing science benchmarks, and SciReas-Pro, a harder subset that demands more complex reasoning. They also developed KRUX, a probing method that tests whether a model can retrieve relevant scientific knowledge and then use it to reason. Combining these tools, they found that LLMs struggle most with *retrieving* the right knowledge stored in their own parameters, and that supplying that knowledge in-context helps them reason better. They also found that strengthening a model's verbalized, step-by-step reasoning helps it surface the knowledge it needs.
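The core idea behind this kind of probe can be sketched in a few lines: run the same question through the model twice, once closed-book and once with the relevant knowledge items injected in-context, and compare outcomes. This is only a minimal illustration of the concept, not the paper's actual KRUX implementation; the prompt wording, the `build_prompt`/`krux_probe` helpers, and the toy model below are all hypothetical.

```python
# Hypothetical sketch of a knowledge-vs-reasoning probe in the spirit of
# KRUX (not the paper's code): compare the same question with and without
# task-relevant knowledge injected in-context. A gap between the two
# conditions suggests the bottleneck is retrieving knowledge from the
# model's parameters rather than reasoning over it.

def build_prompt(question, knowledge_items=None):
    """Assemble a closed-book or knowledge-augmented prompt."""
    parts = []
    if knowledge_items:  # knowledge-augmented condition
        facts = "\n".join(f"- {k}" for k in knowledge_items)
        parts.append(f"Relevant facts:\n{facts}")
    parts.append(f"Question: {question}\nThink step by step, then answer.")
    return "\n\n".join(parts)

def krux_probe(model, question, knowledge_items, answer):
    """Run both conditions and report whether each got the right answer."""
    closed_book = model(build_prompt(question)) == answer
    augmented = model(build_prompt(question, knowledge_items)) == answer
    return {"closed_book": closed_book, "knowledge_augmented": augmented}

# Toy stand-in for an LLM: it only answers correctly when the key fact
# is present in-context, mimicking a model whose parametric retrieval fails.
def toy_model(prompt):
    return "diamagnetic" if "paired electrons" in prompt else "paramagnetic"

result = krux_probe(
    toy_model,
    question="Is N2 paramagnetic or diamagnetic?",
    knowledge_items=["N2 has all paired electrons in its MO diagram."],
    answer="diamagnetic",
)
print(result)  # {'closed_book': False, 'knowledge_augmented': True}
```

In this toy run the model fails closed-book but succeeds once the fact is supplied, which is the signature pattern the paper reports for real LLMs.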
Why it matters?
This work is important because it provides a more accurate way to measure and improve the scientific reasoning abilities of LLMs. By understanding where these models fall short – specifically in accessing and applying their own knowledge – researchers can develop better techniques to build AI assistants that can truly help scientists with their work. The released SciLit01 model also provides a strong starting point for future research in this area.
Abstract
Scientific problem solving poses unique challenges for LLMs, requiring both deep domain knowledge and the ability to apply such knowledge through complex reasoning. While automated scientific reasoners hold great promise for assisting human scientists, there is currently no widely adopted holistic benchmark for evaluating scientific reasoning, and few approaches systematically disentangle the distinct roles of knowledge and reasoning in these tasks. To address these gaps, we introduce SciReas, a diverse suite of existing benchmarks for scientific reasoning tasks, and SciReas-Pro, a selective subset that requires more complex reasoning. Our holistic evaluation surfaces insights about scientific reasoning performance that remain hidden when relying on individual benchmarks alone. We then propose KRUX, a probing framework for studying the distinct roles of reasoning and knowledge in scientific tasks. Combining the two, we conduct an in-depth analysis that yields several key findings: (1) Retrieving task-relevant knowledge from model parameters is a critical bottleneck for LLMs in scientific reasoning; (2) Reasoning models consistently benefit from external knowledge added in-context on top of the reasoning enhancement; (3) Enhancing verbalized reasoning improves LLMs' ability to surface task-relevant knowledge. Finally, we conduct a lightweight analysis, comparing our science-focused data composition with concurrent efforts on long CoT SFT, and release SciLit01, a strong 8B baseline for scientific reasoning.