GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models
Nizar Islah, Justine Gehring, Diganta Misra, Eilif Muller, Irina Rish, Terry Yue Zhuo, Massimo Caccia
2024-11-12

Summary
This paper introduces GitChameleon, a new dataset and benchmark designed to test how well code generation models can adapt to different versions of software libraries when writing Python code.
What's the problem?
As software libraries frequently update, code generation models must keep up with these changes to produce accurate and functional code. However, existing benchmarks largely ignore this aspect, or rely on static code prediction tasks that never check whether the generated code actually executes correctly. As a result, models that look strong on such benchmarks may still fail when faced with version-specific coding challenges.
What's the solution?
GitChameleon addresses this problem with a manually curated dataset of 116 Python code completion tasks, each tied to a specific library version. Every task comes with executable unit tests, so generated code is verified not just for looking correct but for running correctly. This enables a thorough, execution-based evaluation of how well modern language models generate version-specific code. The authors found that even state-of-the-art models struggle: GPT-4o solves only 39.9% of the tasks (pass@10) without assistance, rising to 43.7% when given error feedback.
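To make the setup concrete, here is a minimal, hypothetical sketch of what a version-conditioned task and its execution-based check might look like. The field names (problem, library, version, starter_code, test) and the example itself are illustrative assumptions, not entries from the actual GitChameleon dataset.

```python
# Hypothetical illustration of a version-conditioned completion task plus an
# execution-based check; not taken from the GitChameleon data.

task = {
    "library": "numpy",
    "version": "1.25",  # in the real benchmark, tests run against a pinned library version
    "problem": "Return the product of all elements of a 1-D array.",
    "starter_code": "def array_product(x):\n",
    "test": (
        "import numpy as np\n"
        "assert array_product(np.array([1, 2, 3, 4])) == 24\n"
    ),
}

def passes(candidate_completion: str, task: dict) -> bool:
    """Execute the completed function together with its unit test."""
    program = task["starter_code"] + candidate_completion + "\n" + task["test"]
    namespace: dict = {}
    try:
        exec(program, namespace)  # run the candidate code and its assertions
        return True
    except Exception:
        return False

# A model-generated completion counts as correct only if the test passes.
print(passes("    import numpy as np\n    return np.prod(x)", task))  # True
```

The key design choice this illustrates is that correctness is decided by execution, not by string similarity to a reference solution.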
Why it matters?
This research is important because it highlights the challenges that current code generation models face in adapting to changing software environments. By creating a structured way to assess these models, GitChameleon can help developers improve their coding tools, ensuring that they produce reliable and version-specific code. This advancement is crucial for maintaining software quality as libraries evolve.
Abstract
The rapid evolution of software libraries presents a significant challenge for code generation models, which must adapt to frequent version updates while maintaining compatibility with previous versions. Existing code completion benchmarks often overlook this dynamic aspect, and the one that does consider it relies on static code prediction tasks without execution-based evaluation, offering a limited perspective on a model's practical usability. To address this gap, we introduce GitChameleon, a novel, manually curated dataset comprising 116 Python code completion problems, each conditioned on specific library versions and accompanied by executable unit tests. GitChameleon is designed to rigorously assess the ability of modern large language models (LLMs) to generate version-specific code that is not only syntactically correct but also functionally accurate upon execution. Our comprehensive evaluations reveal that state-of-the-art LLMs struggle with this task; for instance, GPT-4o achieves a pass@10 of only 39.9% (43.7% when provided with error feedback), highlighting the complexity of the problem and the limitations of current models. By providing an execution-based benchmark that emphasizes the dynamic nature of code libraries, GitChameleon serves as a critical tool to advance the development of more adaptable and reliable code generation models. To facilitate further exploration of version-conditioned code generation, we make our code repository publicly accessible at https://github.com/NizarIslah/GitChameleon.
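The abstract reports results as pass@10, the probability that at least one of 10 sampled completions passes the unit tests. Assuming the standard unbiased estimator of Chen et al. (2021) is used (the paper's exact evaluation script may differ), a minimal sketch looks like this:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k, given n sampled completions per problem,
    of which c pass all unit tests (Chen et al., 2021)."""
    if n - c < k:
        return 1.0  # every size-k sample is guaranteed to contain a passing completion
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples drawn for a problem, 5 of them pass the tests.
print(round(pass_at_k(n=20, c=5, k=10), 3))  # ~0.984
```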