Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information

Lukas Struppek, Dominik Hintersdorf, Hannah Struppek, Daniel Neider, Kristian Kersting

2025-12-01

Summary

This paper explores how to make large language models, which are good at complex reasoning, more efficient without sacrificing their problem-solving abilities.

What's the problem?

Large language models often explain their reasoning step by step, which helps with hard problems and makes answers easier to follow, but generating all of those extra words costs substantial compute and slows down responses. Most existing fixes involve retraining or fine-tuning the model itself, which is complex and expensive.

What's the solution?

The researchers came up with a new technique called Focused Chain-of-Thought (F-CoT). Instead of changing the model, they focus on preparing the input. F-CoT first identifies and organizes only the *important* information from a question into a clear, concise summary. Then, the model is asked to reason using *only* this summarized information, ignoring irrelevant details. This keeps the model's explanations shorter and more focused.
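The two-stage idea described above can be sketched as a pair of prompts: one that asks the model to extract only the essential facts, and one that asks it to reason over those facts alone. This is a minimal illustration of the pattern, not the authors' actual implementation; the prompt wording and the `ask_llm` stub (standing in for any chat-completion call) are assumptions.

```python
# Hypothetical sketch of the F-CoT two-stage prompting pattern.
# Stage 1: extract only the essential facts from the question.
# Stage 2: reason exclusively over that structured context.

EXTRACT_TEMPLATE = (
    "Extract only the facts needed to solve the problem below "
    "as a short bulleted list. Ignore irrelevant details.\n\n"
    "Problem: {question}"
)

REASON_TEMPLATE = (
    "Using ONLY the facts listed below, solve the problem step by step "
    "and give the final answer.\n\n"
    "Facts:\n{facts}\n\nQuestion: {question}"
)


def build_extraction_prompt(question: str) -> str:
    """Stage 1 prompt: distill the query into a concise, structured context."""
    return EXTRACT_TEMPLATE.format(question=question)


def build_reasoning_prompt(question: str, facts: str) -> str:
    """Stage 2 prompt: restrict reasoning to the extracted facts."""
    return REASON_TEMPLATE.format(facts=facts, question=question)


def focused_cot(question: str, ask_llm) -> str:
    """Run both stages; `ask_llm` is any callable that maps prompt -> reply."""
    facts = ask_llm(build_extraction_prompt(question))
    return ask_llm(build_reasoning_prompt(question, facts))
```

In practice `ask_llm` would wrap an API or local model call; because the reasoning prompt never shows the original verbose question context, the model's chain of thought stays short and focused.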

Why does it matter?

This research shows that simply structuring the input to a large language model can significantly improve its efficiency, cutting the generated reasoning text by 2-3x on arithmetic word problems while maintaining accuracy. Because it requires no retraining, it is a simpler and potentially more practical approach than modifying the model itself, making these powerful AI systems cheaper and faster to use.

Abstract

Recent large language models achieve strong reasoning performance by generating detailed chain-of-thought traces, but this often leads to excessive token use and high inference latency. Existing efficiency approaches typically focus on model-centric interventions, such as reinforcement learning or supervised fine-tuning, to reduce verbosity. In contrast, we propose a training-free, input-centric approach. Inspired by cognitive psychology, we introduce Focused Chain-of-Thought (F-CoT), which separates information extraction from the reasoning process. F-CoT first organizes the essential information from a query into a concise, structured context and then guides the model to reason exclusively over this context. By preventing attention to irrelevant details, F-CoT naturally produces shorter reasoning paths. On arithmetic word problems, F-CoT reduces generated tokens by 2-3x while maintaining accuracy comparable to standard zero-shot CoT. These results highlight structured input as a simple yet effective lever for more efficient LLM reasoning.