
On Code-Induced Reasoning in LLMs

Abdul Waheed, Zhen Wu, Carolyn Rosé, Daphne Ippolito

2025-10-08


Summary

This research explores how helpful code data is for teaching large language models (LLMs) to reason better, and specifically tries to pin down *which* properties of code matter most for improving their reasoning skills.

What's the problem?

LLMs get better at reasoning when they're trained with code, but it wasn't clear if it was the code's meaning, its structure (like how it's organized), or just the way it looks that was making the difference. Researchers needed a way to test these different aspects of code systematically.

What's the solution?

The researchers created a bunch of instruction datasets in ten different programming languages. Then, they intentionally messed with the code in different ways – sometimes changing the structure without changing the meaning, and sometimes changing the meaning while keeping the basic structure. They then trained several LLMs of different sizes on these modified datasets and tested how well they performed on tasks involving natural language, math, and actual code. They ran over three thousand different training experiments to compare the results.

Why it matters?

This work shows that LLMs are more sensitive to changes in *how* code is written (its structure) than *what* the code actually does (its meaning), especially when solving math problems or writing code themselves. It also suggests that things like pseudocode or flowcharts can be just as helpful as real code, and that simplifying code's appearance can sometimes even improve performance. Understanding this helps us design better training data to make LLMs even smarter.

Abstract

Code data has been shown to enhance the reasoning capabilities of large language models (LLMs), but it remains unclear which aspects of code are most responsible. We investigate this question with a systematic, data-centric framework. We construct parallel instruction datasets in ten programming languages and apply controlled perturbations that selectively disrupt structural or semantic properties of code. We then finetune LLMs from five model families and eight scales on each variant and evaluate their performance on natural language, math, and code tasks. Across 3,331 experiments, our results show that LLMs are more vulnerable to structural perturbations than semantic ones, particularly on math and code tasks. Appropriate abstractions like pseudocode and flowcharts can be as effective as code, while encoding the same information with fewer tokens without adhering to original syntax can often retain or even improve performance. Remarkably, even corrupted code with misleading signals remains competitive when surface-level regularities persist. Finally, syntactic styles also shape task-specific gains with Python favoring natural language reasoning and lower-level languages such as Java and Rust favoring math. Through our systematic framework, we aim to provide insight into how different properties of code influence reasoning and inform the design of training data for enhancing LLM reasoning capabilities.