ORION: Teaching Language Models to Reason Efficiently in the Language of Thought
Kumar Tanmay, Kriti Aggarwal, Paul Pu Liang, Subhabrata Mukherjee
2025-12-02
Summary
This paper introduces a new way for large language models to 'think' about problems, making them faster and more efficient without losing accuracy. It builds on the idea that humans don't reason internally in full sentences, and trains AI to do the same.
What's the problem?
Current large reasoning models, while good at things like math and coding, are slow and wordy. They generate long explanations, which takes time and computing power. These explanations aren't always clear or helpful, and often repeat themselves. Essentially, they 'think' too much like they're writing an essay instead of doing quick calculations.
What's the solution?
The researchers developed a system called ORION that reasons in a compressed 'reasoning language' inspired by how humans might think: a more symbolic, concise internal representation the paper calls Mentalese, after the Language of Thought Hypothesis. They then used a technique called SHORTER LENGTH PREFERENCE OPTIMIZATION (SLPO) to train the model to prefer shorter solutions that are still correct. SLPO is a reinforcement learning method that rewards the model for being brief while getting the right answer, yet still allows longer reasoning when a problem demands it.
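To make the idea concrete, here is a minimal sketch of what a length-aware reward in the spirit of SLPO could look like. The function name, the token budget, and the linear brevity bonus are all illustrative assumptions, not the paper's actual reward formulation; the sketch only captures the stated principle that correct-and-short is rewarded most, correct-but-long is still rewarded, and incorrect answers earn nothing.

```python
def slpo_reward(is_correct: bool, num_tokens: int, budget: int = 512,
                length_weight: float = 0.5) -> float:
    """Hypothetical SLPO-style reward: correctness first, brevity second.

    Incorrect answers get zero reward, so the model is never pushed to be
    brief at the expense of correctness. Reasoning longer than the budget
    is permitted but earns no brevity bonus.
    """
    if not is_correct:
        return 0.0
    # Brevity bonus shrinks linearly as the trace approaches the budget,
    # and bottoms out at zero rather than going negative.
    brevity = max(0.0, 1.0 - num_tokens / budget)
    return 1.0 + length_weight * brevity
```

Under this sketch, a correct 128-token trace scores higher than a correct 500-token one, while a wrong answer scores zero regardless of length, which mirrors the paper's description of rewarding concise solutions that stay correct.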
Why does it matter?
This work is important because it makes powerful AI reasoning much more practical. By significantly reducing the length of the reasoning process, it lowers costs, speeds up responses, and makes these models more accessible for real-time applications. It also gets us closer to understanding how humans think and building AI that reasons more like we do.
Abstract
Large Reasoning Models (LRMs) achieve strong performance in mathematics, code generation, and task planning, but their reliance on long chains of verbose "thinking" tokens leads to high latency, redundancy, and incoherent reasoning paths. Inspired by the Language of Thought Hypothesis, which posits that human reasoning operates over a symbolic, compositional mental language called Mentalese, we introduce a framework that trains models to reason in a similarly compact style. Mentalese encodes abstract reasoning as ultra-compressed, structured tokens, enabling models to solve complex problems with far fewer steps. To improve both efficiency and accuracy, we propose SHORTER LENGTH PREFERENCE OPTIMIZATION (SLPO), a reinforcement learning method that rewards concise solutions that stay correct, while still allowing longer reasoning when needed. Applied to Mentalese-aligned models, SLPO yields significantly higher compression rates by enabling concise reasoning that preserves the benefits of detailed thinking without the computational overhead. Across benchmarks including AIME 2024 and 2025, MinervaMath, OlympiadBench, Math500, and AMC, our ORION models produce reasoning traces with 4-16x fewer tokens, achieve up to 5x lower inference latency, and reduce training costs by 7-9x relative to the DeepSeek R1 Distilled model, while maintaining 90-98% of its accuracy. ORION also surpasses Claude and ChatGPT-4o by up to 5% in accuracy while maintaining 2x compression. These results show that Mentalese-style compressed reasoning offers a step toward human-like cognitive efficiency, enabling real-time, cost-effective reasoning without sacrificing accuracy.