
Evaluating and Aligning CodeLLMs on Human Preference

Jian Yang, Jiaxi Yang, Ke Jin, Yibo Miao, Lei Zhang, Liqun Yang, Zeyu Cui, Yichang Zhang, Binyuan Hui, Junyang Lin

2024-12-11


Summary

This paper looks at how to improve code generation models by aligning them with human preferences, and introduces a new evaluation benchmark called CodeArena.

What's the problem?

Current code generation models can create correct code snippets, but they often don't consider what humans actually prefer in terms of code quality and style. This lack of alignment means that even if the code works, it might not be written in a way that is clean, efficient, or easy to understand.

What's the solution?

The authors introduce CodeArena, a human-curated benchmark that evaluates how well code generation models align with human preferences. It contains 397 high-quality coding tasks spanning 40 categories and 44 programming languages, drawn from real user queries. They also build SynCode-Instruct, a synthetic instruction dataset of nearly 20 billion tokens, and show that Qwen2.5-SynCoder, a model trained entirely on this synthetic data, reaches top-tier performance among open-source code LLMs. Evaluating more than 40 models on CodeArena, they find that rankings based on human preference can differ noticeably from rankings based on execution-only benchmarks.
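To make the evaluation idea concrete, below is a minimal sketch of how a CodeArena-style preference evaluation could be run by comparing a model against a baseline with an LLM judge. The task schema, function names, and judging logic are illustrative assumptions, not the benchmark's actual format or code.

```python
# Sketch of a preference-based evaluation over CodeArena-style tasks.
# All schemas and stubs below are hypothetical; the real benchmark may differ.

SAMPLE_TASKS = [
    {"id": 1, "category": "algorithm", "language": "python",
     "query": "Write a function that merges two sorted lists."},
    {"id": 2, "category": "web", "language": "javascript",
     "query": "Debounce a search-box input handler."},
]

def generate_response(model_name, query):
    """Stub: call your code LLM's inference API here."""
    return f"[{model_name}] proposed solution for: {query}"

def judge_preference(query, response_a, response_b):
    """Stub: ask a strong judge LLM which response better satisfies the user.

    Returns 'A', 'B', or 'tie'.
    """
    return "tie"

def win_rate(model_name, baseline_name, tasks):
    """Compute the model's preference win rate against a baseline model."""
    score = 0.0
    for task in tasks:
        a = generate_response(model_name, task["query"])
        b = generate_response(baseline_name, task["query"])
        verdict = judge_preference(task["query"], a, b)
        score += 1.0 if verdict == "A" else 0.5 if verdict == "tie" else 0.0
    return score / len(tasks)

if __name__ == "__main__":
    rate = win_rate("my-code-llm", "baseline-llm", SAMPLE_TASKS)
    print(f"Preference win rate vs baseline: {rate:.2%}")
```

Unlike a unit-test benchmark, the score here reflects which response a judge prefers, which is the kind of signal execution-based suites do not capture.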

Why it matters?

This research is important because it helps make AI-generated code more useful and user-friendly. By focusing on human preferences, developers can create better tools that not only produce functional code but also prioritize readability and maintainability. This can enhance the overall experience for programmers and improve collaboration between humans and AI in coding tasks.

Abstract

Code large language models (code LLMs) have made significant strides in code generation. Most previous code-related benchmarks, consisting of programming exercises with corresponding test cases, are used as a common measure of the performance and capabilities of code LLMs. However, current code LLMs focus on synthesizing correct code snippets while ignoring alignment with human preferences, where queries should be sampled from practical application scenarios and model-generated responses should satisfy human preference. To bridge the gap between model-generated responses and human preference, we present CodeArena, a rigorous human-curated benchmark that emulates the complexity and diversity of real-world coding tasks, with 397 high-quality samples spanning 40 categories and 44 programming languages, carefully curated from user queries. Further, we propose SynCode-Instruct, a diverse synthetic instruction corpus of nearly 20B tokens built by scaling instructions from the web, to verify the effectiveness of large-scale synthetic instruction fine-tuning; Qwen2.5-SynCoder, trained entirely on synthetic instruction data, achieves top-tier performance among open-source code LLMs. The results reveal performance differences between execution-based benchmarks and CodeArena. Our systematic experiments of CodeArena on 40+ LLMs reveal a notable performance gap between open SOTA code LLMs (e.g., Qwen2.5-Coder) and proprietary LLMs (e.g., OpenAI o1), underscoring the importance of human preference alignment. https://codearenaeval.github.io/
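To illustrate the large-scale synthetic instruction fine-tuning mentioned in the abstract, here is a minimal sketch of turning instruction-response pairs into chat-formatted training text. The record schema and chat template are illustrative assumptions, not the paper's actual SynCode-Instruct or Qwen2.5-SynCoder pipeline.

```python
# Sketch of preparing synthetic instruction data for supervised fine-tuning.
# The schema and template are hypothetical; the real corpus may use a
# different format, tokenizer, and loss-masking scheme.

SYNTHETIC_RECORDS = [
    {"instruction": "Implement binary search over a sorted array.",
     "response": "def binary_search(a, x): ..."},
    {"instruction": "Explain and fix the off-by-one error in this loop.",
     "response": "The loop should iterate while i < len(items) ..."},
]

def to_chat_example(record):
    """Render one instruction-response pair with a simple chat template."""
    return (
        "<|user|>\n" + record["instruction"].strip() + "\n"
        "<|assistant|>\n" + record["response"].strip() + "\n"
    )

def build_training_corpus(records):
    """Render all examples; a real pipeline would also tokenize, pack
    sequences to a fixed length, and mask the loss on user turns."""
    return [to_chat_example(r) for r in records]

if __name__ == "__main__":
    for example in build_training_corpus(SYNTHETIC_RECORDS):
        print(example)
```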