Are Large Reasoning Models Interruptible?

Tsung-Han Wu, Mihran Miroyan, David M. Chan, Trevor Darrell, Narges Norouzi, Joseph E. Gonzalez

2025-10-14

Summary

This paper investigates how well large reasoning models, the kind used for complex multi-step problem-solving, hold up when they are interrupted or when the problem changes while they are still thinking.

What's the problem?

Typically, we test these models by letting them work on a problem once, uninterrupted, with the problem itself held fixed. This is a 'frozen world' scenario. In real-world situations, however, such as helping someone write code, a model might take a long time to think, and the code could be updated *while* the model is working. The paper shows that this 'frozen world' assumption doesn't hold up: models that look very capable in standard tests can struggle significantly when faced with interruptions or changing information.

What's the solution?

The researchers tested leading reasoning models in two ways that mimic real-world challenges. First, they interrupted the models mid-process to see how good their partial work was. Second, they changed the context of the problem while the model was still thinking. They used math and programming problems that require a lot of step-by-step reasoning to see how the models handled these dynamic situations.
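To make the first scenario concrete, here is a minimal sketch (not the authors' harness) of an interruption-style evaluation: the same problem is run under shrinking token budgets, and the model is forced to emit a final answer from whatever partial reasoning it has produced. `fake_model` is a stand-in for a real LRM API and its behavior is an assumption for illustration only.

```python
# Illustrative sketch of budgeted-interruption evaluation.
# `fake_model` is a hypothetical stand-in for a real reasoning model:
# it emits one reasoning step per budget unit, then a forced answer.

def fake_model(prompt: str, max_new_tokens: int) -> str:
    """Stub LRM: produces up to `max_new_tokens` reasoning steps,
    then a final answer (correct only if reasoning finished)."""
    steps = [f"step {i}: refine estimate" for i in range(1, 6)]
    budgeted = steps[:max_new_tokens]
    done = len(budgeted) == len(steps)
    answer = "42" if done else "unsure"
    return "\n".join(budgeted) + f"\nFINAL: {answer}"

def interrupt_eval(prompt: str, budgets: list[int]) -> dict[int, str]:
    """Run the same problem under several budgets and record the
    forced final answer extracted from each partial trace."""
    results = {}
    for b in budgets:
        output = fake_model(prompt, max_new_tokens=b)
        final = output.rsplit("FINAL:", 1)[-1].strip()
        results[b] = final
    return results

print(interrupt_eval("What is 6 * 7?", budgets=[5, 3, 1]))
# With this stub, only the full budget (5) yields the correct answer.
```

In the real setup one would measure accuracy of these forced answers against the ground truth at each budget; the paper's finding is that partial-budget answers degrade far more than static benchmarks suggest.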

Why it matters?

The findings are important because they reveal that current evaluations of these models are overly optimistic: we assume they handle complex tasks better than they actually do. The research also identifies specific ways these models fail: sometimes they rush and give wrong answers ('panic'), sometimes they mix their reasoning into the final answer when interrupted ('reasoning leakage'), and sometimes their performance gets worse when incorporating new information mid-reasoning ('self-doubt'). Understanding these weaknesses is crucial for building more reliable and helpful AI assistants.

Abstract

Large Reasoning Models (LRMs) excel at complex reasoning but are traditionally evaluated in static, "frozen world" settings: model responses are assumed to be instantaneous, and the context of a request is presumed to be immutable over the duration of the response. While generally true for short-term tasks, the "frozen world" assumption breaks down in modern reasoning tasks such as assistive programming, where models may take hours to think through problems and code may change dramatically from the time the model starts thinking to the model's final output. In this work, we challenge the frozen world assumption and evaluate LRM robustness under two realistic dynamic scenarios: interruptions, which test the quality of the model's partial outputs on a limited budget, and dynamic context, which tests model adaptation to in-flight changes. Across mathematics and programming benchmarks that require long-form reasoning, static evaluations consistently overestimate robustness: even state-of-the-art LRMs, which achieve high accuracy in static settings, can fail unpredictably when interrupted or exposed to changing context, with performance dropping by up to 60% when updates are introduced late in the reasoning process. Our analysis further reveals several novel failure modes, including reasoning leakage, where models fold the reasoning into their final answer when interrupted; panic, where under time pressure models abandon reasoning entirely and return incorrect answers; and self-doubt, where performance degrades while incorporating updated information.