TokDrift: When LLM Speaks in Subwords but Code Speaks in Grammar

Yinxi Li, Yuntian Deng, Pengyu Nie

2025-10-17

Summary

This paper investigates how the way code is broken down into smaller pieces, called tokens, affects how well large language models (LLMs) understand and work with code.

What's the problem?

LLMs for code split source code into subword tokens based on how often character sequences appear together in the training data, not on the grammar of the programming language. As a result, code that *means* exactly the same thing but is written slightly differently – with extra whitespace, say, or different variable names – can be broken into completely different token sequences. This inconsistency can confuse the LLM and lead to unpredictable results.
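To make the mismatch concrete, here is a minimal sketch of statistics-driven subword segmentation: a greedy longest-match segmenter over a small, entirely hypothetical vocabulary (real BPE vocabularies are learned from data and far larger). Because the vocabulary merges characters without regard to grammar, adding one extra space before `return` shifts the token boundaries of otherwise identical code.

```python
# Hypothetical subword vocabulary -- a toy stand-in for a learned BPE vocab.
VOCAB = {"def", " f", "(x", "):", " return", " x", "+", "1", " "}

def segment(text, vocab):
    """Greedy longest-match segmentation, mimicking how a subword
    tokenizer carves text without knowing language grammar."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest substring first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to itself
            i += 1
    return tokens

a = segment("def f(x): return x+1", VOCAB)
b = segment("def f(x):  return x+1", VOCAB)  # one extra space, same meaning
# a → ['def', ' f', '(x', '):', ' return', ' x', '+', '1']
# b → ['def', ' f', '(x', '):', ' ', ' return', ' x', '+', '1']
```

The two snippets are semantically identical Python, yet the model would see two different token sequences – the kind of "drift" the paper measures.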

What's the solution?

The researchers built a tool called TokDrift that automatically rewrites code in ways that preserve its meaning but change its tokenization – for example, by adjusting whitespace or renaming identifiers. They then tested nine code LLMs, including models with over 30 billion parameters, on these variants. Even minor formatting changes could substantially change how the models behaved, and a layer-wise analysis traced the problem to the very start of processing: the early embedding layers, where subword boundaries fail to line up with the grammar's token boundaries.
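The core requirement of such a rewrite is easy to check mechanically. The sketch below (the specific rewrite rule is an illustrative assumption, not TokDrift's actual rule set) uses Python's standard `ast` module to confirm that a whitespace-only variant parses to the same abstract syntax tree even though its raw character stream – and therefore any subword tokenization of it – differs:

```python
import ast

# Two versions of the same function; the variant only drops spaces around '+'.
original = "def add(x, y):\n    return x + y\n"
variant  = "def add(x, y):\n    return x+y\n"

# ast.dump ignores whitespace and positions by default, so equal dumps
# mean the rewrite preserved the program's meaning.
same_semantics = ast.dump(ast.parse(original)) == ast.dump(ast.parse(variant))

# The surface text differs, so a subword tokenizer may segment them differently.
different_surface = original != variant
```

Comparing `ast.dump` outputs is a convenient way to verify that a rewrite is semantic-preserving at the syntax level before handing both variants to a model.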

Why it matters?

This research shows that the way code is tokenized is a hidden problem that impacts the reliability of code LLMs. It suggests that future LLMs designed for code should use methods that are aware of programming language grammar, rather than relying solely on statistical patterns, to ensure they consistently understand and generate code correctly.

Abstract

Large language models (LLMs) for code rely on subword tokenizers, such as byte-pair encoding (BPE), learned from mixed natural language text and programming language code but driven by statistics rather than grammar. As a result, semantically identical code snippets can be tokenized differently depending on superficial factors such as whitespace or identifier naming. To measure the impact of this misalignment, we introduce TokDrift, a framework that applies semantic-preserving rewrite rules to create code variants differing only in tokenization. Across nine code LLMs, including large ones with over 30B parameters, even minor formatting changes can cause substantial shifts in model behavior. Layer-wise analysis shows that the issue originates in early embeddings, where subword segmentation fails to capture grammar token boundaries. Our findings identify misaligned tokenization as a hidden obstacle to reliable code understanding and generation, highlighting the need for grammar-aware tokenization for future code LLMs.