Training Language Models on Synthetic Edit Sequences Improves Code Synthesis

Ulyana Piterbarg, Lerrel Pinto, Rob Fergus

2024-10-04

Summary

This paper presents a method for improving how language models generate code: instead of training them only on complete example programs, it trains them on synthetic sequences of code edits.

What's the problem?

Software engineers usually write code by editing existing programs, but large language models (LLMs) typically generate an entire program in a single pass rather than through incremental edits. One reason is that high-quality data showing how code is edited step by step is scarce, so LLMs are rarely trained on this style of development. As a result, they can struggle with code that is best built up through a series of changes and refinements.

What's the solution?

To fill this gap, the authors developed a synthetic data generation algorithm called LintSeq. It takes an existing program and, guided by a linter, decomposes it into a sequence of smaller, error-free edits expressed as consecutive diffs, which can then be used as training data. They fine-tuned smaller LLMs on both the original instruction + program pairs and the refactored edit-sequence version of the same dataset. Models trained on the synthetic edit sequences produced more diverse and higher-quality code than those trained on the original data; a rough sketch of the idea is given below.
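The paper's actual implementation isn't reproduced here; the snippet below is only a minimal sketch of the underlying idea, assuming a line-level decomposition: lines are repeatedly deleted from a program as long as the remainder still passes a lint check, and the reversed sequence of intermediate states is emitted as consecutive diffs. Python's ast module stands in for a real linter, difflib produces the diffs, and names such as lintseq_sketch and passes_lint are illustrative rather than the authors' API.

```python
import ast
import difflib
import random

def passes_lint(lines: list[str]) -> bool:
    """Stand-in 'linter': accept the snippet if it parses as Python."""
    try:
        ast.parse("\n".join(lines))
        return True
    except SyntaxError:  # includes IndentationError
        return False

def lintseq_sketch(program: str, rng: random.Random) -> list[str]:
    """Decompose a program into a sequence of unified diffs, each inserting
    lines onto a lint-clean partial program (illustration only)."""
    lines = program.splitlines()
    states = [list(lines)]
    current = list(lines)
    # Sample backwards: repeatedly delete a line while the rest still lints.
    while current:
        candidates = list(range(len(current)))
        rng.shuffle(candidates)
        for i in candidates:
            trial = current[:i] + current[i + 1:]
            if passes_lint(trial):
                current = trial
                break
        else:  # no single line can be removed without breaking the lint check
            break
        states.append(list(current))
    states.reverse()  # now runs from smallest partial program to full program
    return [
        "\n".join(difflib.unified_diff(before, after, lineterm=""))
        for before, after in zip(states, states[1:])
    ]

if __name__ == "__main__":
    demo = "def add(a, b):\n    result = a + b\n    return result\n"
    for d in lintseq_sketch(demo, random.Random(0)):
        print(d, end="\n\n")
```

Each emitted diff adds a small, syntactically valid chunk onto the previous partial program, which is roughly the shape of the edit-sequence training examples the paper describes.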

Why it matters?

This research matters because it improves language models' ability to generate complex code. By training on synthetic edit sequences, the models more closely mimic how real software engineers work, which improves their performance on coding tasks. That, in turn, can benefit many software development applications by making it easier to create and refine programs.

Abstract

Software engineers mainly write code by editing existing programs. In contrast, large language models (LLMs) autoregressively synthesize programs in a single pass. One explanation for this is the scarcity of open-sourced edit data. While high-quality instruction data for code synthesis is already scarce, high-quality edit data is even scarcer. To fill this gap, we develop a synthetic data generation algorithm called LintSeq. This algorithm refactors existing code into a sequence of code edits by using a linter to procedurally sample across the error-free insertions that can be used to sequentially write programs. It outputs edit sequences as text strings consisting of consecutive program diffs. To test LintSeq, we use it to refactor a dataset of instruction + program pairs into instruction + program-diff-sequence tuples. Then, we instruction finetune a series of smaller LLMs ranging from 2.6B to 14B parameters on both the refactored and original versions of this dataset, comparing zero-shot performance on code synthesis benchmarks. We show that during repeated sampling, edit sequence finetuned models produce more diverse programs than baselines. This results in better inference-time scaling for benchmark coverage as a function of samples, i.e. the fraction of problems "pass@k" solved by any attempt given "k" tries. For example, on HumanEval pass@50, small LLMs finetuned on synthetic edit sequences are competitive with GPT-4 and outperform models finetuned on the baseline dataset by +20% (+/-3%) in absolute score. Finally, we also pretrain our own tiny LMs for code understanding. We show that finetuning tiny models on synthetic code edits results in state-of-the-art code synthesis for the on-device model class. Our 150M parameter edit sequence LM matches or outperforms code models with twice as many parameters, both with and without repeated sampling, including Codex and AlphaCode.
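For reference, the "pass@k" coverage metric used in these results is the standard one from the HumanEval/Codex line of work (Chen et al., 2021), not something introduced by this paper. A minimal sketch of the usual unbiased estimator, assuming n generations per problem of which c pass the unit tests, looks like this:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k samples,
    drawn without replacement from n generations of which c are correct,
    passes the unit tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    # 1 - C(n - c, k) / C(n, k), computed as a running product
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# Example: 200 samples for one problem, 12 of them correct, evaluated at k = 50.
print(round(pass_at_k(200, 12, 50), 3))
```

Benchmark coverage at a given k is then this quantity averaged over all problems in the benchmark.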