CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance
Yongchao Chen, Yilun Hao, Yueying Liu, Yang Zhang, Chuchu Fan
2025-02-10
Summary
This paper introduces CodeSteer, a method that guides large language models (LLMs) in switching between textual reasoning and code generation, improving their ability to solve complex symbolic tasks.
What's the problem?
Current AI models struggle to smoothly transition between text-based reasoning and code generation, which limits their ability to use symbolic computing (a way of solving problems using precise rules and logic) to its full potential.
What's the solution?
The researchers created CodeSteer, which steers an LLM's choice between generating code and generating text. They built a benchmark called SymBench, with 37 tasks of adjustable complexity, to measure performance, and trained a smaller model called CodeSteerLLM to guide larger ones. Augmenting GPT-4o with CodeSteer dramatically improved its performance, even beating other top AI models on these tasks.
Why does it matter?
This matters because it makes AI models much better at handling complex tasks that require both understanding text and writing code. It could lead to more powerful and flexible AI systems that can solve a wider range of problems in fields like science, engineering, and data analysis. The improvement works across different AI models, showing it's a general technique that could be widely useful in advancing AI capabilities.
Abstract
Existing methods fail to effectively steer Large Language Models (LLMs) between textual reasoning and code generation, leaving symbolic computing capabilities underutilized. We introduce CodeSteer, an effective method for guiding LLM code/text generation. We construct a comprehensive benchmark SymBench comprising 37 symbolic tasks with adjustable complexity and also synthesize datasets of 12k multi-round guidance/generation trajectories and 5.5k guidance comparison pairs. We fine-tune the Llama-3-8B model with a newly designed multi-round supervised fine-tuning (SFT) and direct preference optimization (DPO). The resulting model, CodeSteerLLM, augmented with the proposed symbolic and self-answer checkers, effectively guides the code/text generation of larger models. Augmenting GPT-4o with CodeSteer raises its average performance score from 53.3 to 86.4, even outperforming the existing best LLM OpenAI o1 (82.7), o1-preview (74.8), and DeepSeek R1 (76.8) across all 37 tasks (28 seen, 9 unseen). Trained for GPT-4o, CodeSteer demonstrates superior generalizability, providing an average 41.8 performance boost on Claude, Mistral, and GPT-3.5. CodeSteer-guided LLMs fully harness symbolic computing to maintain strong performance on highly complex tasks. Models, Datasets, and Codes are available at https://github.com/yongchao98/CodeSteer-v1.0.
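The abstract describes a multi-round loop: a small steering model picks a generation mode (code or text) each round, the large model answers in that mode, and symbolic and self-answer checkers decide whether to accept the answer or iterate. The following is a minimal, hypothetical sketch of that control flow; every function name and heuristic here is an illustrative stand-in, not the paper's actual API or models.

```python
# Hypothetical sketch of a CodeSteer-style multi-round guidance loop.
# Stand-in heuristics replace the real CodeSteerLLM and GPT-4o calls.

def steer_mode(question, history):
    """Stand-in for CodeSteerLLM: pick this round's generation mode."""
    # Toy heuristic: prefer code when the question mentions numbers.
    return "code" if any(ch.isdigit() for ch in question) else "text"

def generate(question, mode):
    """Stand-in for the large model answering in the chosen mode."""
    if mode == "code":
        # Pretend the model wrote and executed a snippet for "sum 1..10".
        return sum(range(1, 11))
    return "a free-text answer"

def symbolic_check(answer):
    """Stand-in symbolic checker: validate the answer with precise rules."""
    return isinstance(answer, int)

def self_answer_check(answer, question):
    """Stand-in self-answer checker: model re-examines its own output."""
    return answer is not None

def codesteer_loop(question, max_rounds=3):
    """Iterate mode selection and generation until both checkers pass."""
    history, answer = [], None
    for _ in range(max_rounds):
        mode = steer_mode(question, history)
        answer = generate(question, mode)
        if symbolic_check(answer) and self_answer_check(answer, question):
            return answer  # accepted by both checkers
        history.append((mode, answer))  # feed back into the next round
    return answer  # best effort after max_rounds

print(codesteer_loop("sum the integers 1 through 10"))  # → 55
```

The key design idea the paper attributes to CodeSteer is that the steering decision and the checking are separated from the main model, so the same guidance loop can wrap different backbone LLMs.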