Self-Steering Language Models

Gabriel Grand, Joshua B. Tenenbaum, Vikash K. Mansinghka, Alexander K. Lew, Jacob Andreas

2025-04-10

Summary

This paper introduces DisCIPL, a method that helps AI language models solve tough problems by having one model plan the steps while smaller models carry those steps out, like a team leader guiding workers.

What's the problem?

When AI models try to reason through complex tasks step by step in plain language, they often get stuck, take too long, or make mistakes, because searching for the right sequence of steps is slow and error-prone.

What's the solution?

DisCIPL uses a ‘Planner’ model to write a step-by-step program for solving a problem; smaller ‘Follower’ models then execute that program, breaking the big task into smaller parts and checking candidate solutions along the way.
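The Planner/Follower split can be pictured with a toy sketch. Everything here is a hypothetical stand-in, not the paper's implementation: in DisCIPL the Planner is an LM that writes a real inference program and the Followers are small LMs, whereas below `follower_propose` just draws from a fixed word pool and `planner_program` hard-codes one search-and-verify strategy.

```python
import random

def follower_propose(constraint, rng):
    """Stand-in Follower: proposes one candidate (a word from a fixed pool).
    In DisCIPL this would be a small LM generating text."""
    pool = ["alpha", "apple", "banana", "avocado", "cherry"]
    return rng.choice(pool)

def planner_program(constraint, n_followers=8, seed=0):
    """Stand-in for a Planner-written program: fan the task out to many
    Followers in parallel, then keep only candidates that verify."""
    rng = random.Random(seed)
    verify = lambda w: w.startswith(constraint)  # task-specific check
    candidates = [follower_propose(constraint, rng) for _ in range(n_followers)]
    return [w for w in candidates if verify(w)]

# Every surviving candidate is guaranteed to satisfy the constraint.
print(planner_program("a"))
```

The key idea this illustrates is the division of labor: the Planner only has to describe the task's abstract structure (how to propose and how to verify), while the cheap Followers do the actual generation.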

Why it matters?

This lets smaller AI models match the performance of much larger ones, making advanced problem-solving faster, cheaper, and more reliable for tasks like answering tricky questions or generating error-free code.

Abstract

While test-time reasoning enables language models to tackle complex tasks, searching or planning in natural language can be slow, costly, and error-prone. But even when LMs struggle to emulate the precise reasoning steps needed to solve a problem, they often excel at describing its abstract structure--both how to verify solutions and how to search for them. This paper introduces DisCIPL, a method for "self-steering" LMs where a Planner model generates a task-specific inference program that is executed by a population of Follower models. Our approach equips LMs with the ability to write recursive search procedures that guide LM inference, enabling new forms of verifiable and efficient reasoning. When instantiated with a small Follower (e.g., Llama-3.2-1B), DisCIPL matches (and sometimes outperforms) much larger models, including GPT-4o and o1, on challenging constrained generation tasks. In decoupling planning from execution, our work opens up a design space of highly-parallelized Monte Carlo inference strategies that outperform standard best-of-N sampling, require no finetuning, and can be implemented automatically by existing LMs.
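The abstract contrasts standard best-of-N sampling with parallelized Monte Carlo strategies. The sketch below illustrates that contrast on a toy constrained-generation task (build a 3-letter all-vowel string). It is a simplified illustration, not the paper's algorithm: `propose_char` is a hypothetical stand-in for one LM token step, and `particle_search` is a bare-bones particle-filter-style loop that prunes and resamples after every step instead of verifying only at the end.

```python
import random

VOWELS = set("aeiou")

def propose_char(rng):
    """Stand-in for one LM sampling step: draw a character at random."""
    return rng.choice("abcdefghij")

def best_of_n(n, rng):
    """Best-of-N: sample n complete strings, verify only at the end."""
    samples = ["".join(propose_char(rng) for _ in range(3)) for _ in range(n)]
    return [s for s in samples if all(c in VOWELS for c in s)]

def particle_search(n, rng):
    """Particle-style search: check the constraint after every step,
    prune failures early, and resample survivors to keep n particles."""
    particles = [""] * n
    for _ in range(3):
        extended = [p + propose_char(rng) for p in particles]
        alive = [p for p in extended if all(c in VOWELS for c in p)]
        if not alive:
            return []
        particles = [rng.choice(alive) for _ in range(n)]
    return particles

rng = random.Random(1)
print(len(best_of_n(64, rng)), len(particle_search(64, rng)))
```

Because the particle strategy discards doomed partial strings after each step rather than wasting a full generation on them, it concentrates its budget on prefixes that can still verify, which is the kind of efficiency gain over best-of-N the abstract describes.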