CRANE: Reasoning with constrained LLM generation

Debangshu Banerjee, Tarun Suresh, Shubham Ugare, Sasa Misailovic, Gagandeep Singh

2025-02-18

Summary

This paper introduces CRANE, a new method for making large language models (LLMs) generate text that follows specific rules while still being able to think and reason effectively. It's like teaching a smart computer to write in a specific format without losing its ability to solve complex problems.

What's the problem?

When we try to make LLMs follow strict rules for writing (like in coding or math), they often lose their ability to think creatively and solve problems. It's like forcing a smart student to write essays using only certain words - they might follow the rules, but their essays might not be as good or clever.

What's the solution?

The researchers created CRANE, which augments the output grammar with extra rules that let the model write out its intermediate reasoning steps, so it can think through the problem while its final answer still follows the required format. They also designed a decoding algorithm that switches the model between unconstrained "thinking" and constrained, rule-following generation as needed.
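The switching idea can be illustrated with a toy sketch. This is not the paper's implementation: the delimiters (`<<`, `>>`), the integer-only answer grammar, and the token stream are all made-up stand-ins. A real decoder would mask logits and resample invalid tokens rather than simply dropping them.

```python
import re

# Illustrative delimiters (hypothetical, not from the paper): the model
# generates freely until it emits DELIM_START, then every token must keep
# the answer a valid prefix of the toy grammar until DELIM_END closes it.
DELIM_START = "<<"
DELIM_END = ">>"

# Toy "grammar": the final answer must be an integer literal.
ANSWER_PATTERN = re.compile(r"-?\d+")

def is_valid_answer_prefix(text: str) -> bool:
    """Check whether `text` can still grow into a valid integer answer."""
    return re.fullmatch(r"-?\d*", text) is not None

def crane_style_decode(token_stream):
    """Consume proposed tokens: unconstrained outside <<...>>, constrained inside.

    `token_stream` stands in for the LLM's proposed tokens. Tokens that would
    break the answer grammar are rejected; a real constrained decoder would
    instead mask the vocabulary and let the model pick a valid token.
    """
    output, answer, in_answer = [], "", False
    for tok in token_stream:
        if not in_answer:
            output.append(tok)            # free-form reasoning: anything goes
            if tok == DELIM_START:
                in_answer = True          # switch to constrained mode
        elif tok == DELIM_END:
            if ANSWER_PATTERN.fullmatch(answer):
                output.append(tok)        # answer complete and valid
                in_answer = False
            # else: closing delimiter rejected, answer still incomplete
        else:
            if is_valid_answer_prefix(answer + tok):
                answer += tok
                output.append(tok)        # token kept: still a valid prefix
            # else: token rejected; real decoder would resample
    return "".join(output), answer
```

For example, feeding it `["reason ", "<<", "4", "x", "2", ">>"]` keeps the free-form "reason " text, drops the invalid "x" inside the answer region, and produces the output `"reason <<42>>"` with extracted answer `"42"`.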

Why it matters?

This matters because it could make AI much better at tasks that require both creative thinking and precise answers, like writing computer code or solving math problems. By improving how well AI can follow rules without losing its problem-solving skills, CRANE could lead to more reliable and useful AI tools for things like programming, scientific research, and complex data analysis.

Abstract

Code generation, symbolic math reasoning, and other tasks require LLMs to produce outputs that are both syntactically and semantically correct. Constrained LLM generation is a promising direction to enforce adherence to a formal grammar, but prior works have empirically observed that strict enforcement of formal constraints often diminishes the reasoning capabilities of LLMs. In this work, we first provide a theoretical explanation for why constraining LLM outputs to very restrictive grammars that only allow syntactically valid final answers reduces the reasoning capabilities of the model. Second, we demonstrate that by augmenting the output grammar with carefully designed additional rules, it is always possible to preserve the reasoning capabilities of the LLM while ensuring syntactic and semantic correctness in its outputs. Building on these theoretical insights, we propose a reasoning-augmented constrained decoding algorithm, CRANE, which effectively balances the correctness of constrained generation with the flexibility of unconstrained generation. Experiments on multiple open-source LLMs and benchmarks show that CRANE significantly outperforms both state-of-the-art constrained decoding strategies and standard unconstrained decoding, with up to a 10 percentage point accuracy improvement over baselines on the challenging symbolic reasoning benchmarks GSM-Symbolic and FOLIO.
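The abstract's grammar-augmentation claim can be made concrete with a toy sketch. The grammars below are illustrative assumptions, not the paper's: a restrictive "grammar" that accepts only a bare integer answer, versus an augmented one that adds a rule permitting free-form reasoning text before a delimited integer answer.

```python
import re

# Restrictive toy grammar: the entire output must be an integer literal,
# leaving no room for the model to write out intermediate reasoning.
RESTRICTIVE = re.compile(r"-?\d+")

# Augmented toy grammar (hypothetical delimiters << >>): arbitrary reasoning
# text is allowed, as long as a well-formed integer answer closes the output.
AUGMENTED = re.compile(r".*<<\s*(-?\d+)\s*>>\s*", re.DOTALL)

def accepts_restrictive(s: str) -> bool:
    """True iff `s` is valid under the answer-only grammar."""
    return RESTRICTIVE.fullmatch(s) is not None

def extract_answer(s: str):
    """Validate `s` under the augmented grammar and pull out the answer."""
    m = AUGMENTED.fullmatch(s)
    return m.group(1) if m else None
```

Under the restrictive grammar, a chain-of-thought output like `"6*7 is 42"` is simply invalid; the augmented grammar accepts `"x = 6*7, so <<42>>"` and still recovers the machine-checkable answer `"42"`, which mirrors the abstract's point that added rules can keep reasoning available without sacrificing syntactic correctness.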