BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions
Terry Yue Zhuo, Minh Chien Vu, Jenny Chim, Han Hu, Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani Yusuf, Haolan Zhan, Junda He, Indraneil Paul, Simon Brunner, Chen Gong, Thong Hoang, Armel Randy Zebaze, Xiaoheng Hong, Wen-Ding Li, Jean Kaddour, Ming Xu, Zhihan Zhang, Prateek Yadav, Naman Jain, Alex Gu
2024-06-25

Summary
This paper presents BigCodeBench, a new benchmark designed to evaluate how well Large Language Models (LLMs) handle complex programming tasks. It focuses on tasks that require diverse function calls and the ability to follow complex instructions, both of which are essential for real-world software development.
What's the problem?
While LLMs have made significant progress in automating software engineering tasks, most existing benchmarks evaluate them only on short, self-contained algorithmic problems. As a result, these evaluations reveal little about how well models handle more realistic programming problems that require composing multiple function calls and following detailed instructions.
What's the solution?
To address this issue, the authors created BigCodeBench, which contains 1,140 fine-grained programming tasks that require LLMs to compose function calls from 139 libraries across seven domains. Each task is rigorously checked against multiple test cases to ensure comprehensive evaluation. They also introduce a variant called BigCodeBench-Instruct, which automatically condenses the original docstrings into short natural-language instructions that keep only the essential information, testing whether models can still solve the tasks from less structured prompts. An illustrative task in this style is sketched below.
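To make the task format concrete, here is a small, hypothetical example written in the spirit of a BigCodeBench task (it is not drawn from the benchmark itself): the docstring serves as the detailed instruction, the solution composes calls from two libraries (pandas and matplotlib), and a unittest-style test verifies the behavior.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the test can run without a display
import matplotlib.pyplot as plt
import pandas as pd


def task_func(data: dict) -> plt.Axes:
    """
    Convert `data` into a pandas DataFrame, compute the column-wise means,
    and plot them as a bar chart.

    Parameters:
    - data (dict): A mapping of column names to lists of numeric values.

    Returns:
    - matplotlib.axes.Axes: The Axes object containing the bar chart.
    """
    df = pd.DataFrame(data)       # tabulate the raw values
    means = df.mean()             # column-wise means
    ax = means.plot(kind="bar")   # one bar per column
    ax.set_ylabel("Mean value")
    return ax


# A unittest-style check in the spirit of the benchmark's test cases.
import unittest

class TestTaskFunc(unittest.TestCase):
    def test_bar_heights(self):
        ax = task_func({"a": [1, 2, 3], "b": [4, 5, 6]})
        self.assertEqual(len(ax.patches), 2)                      # one bar per column
        self.assertAlmostEqual(ax.patches[0].get_height(), 2.0)   # mean of column "a"
        self.assertAlmostEqual(ax.patches[1].get_height(), 5.0)   # mean of column "b"
```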
Why it matters?
This research is important because it exposes the current limitations of LLMs on complex, practical programming tasks and underscores the need for further improvement. By providing a more challenging benchmark, BigCodeBench gives researchers a demanding target for developing models that program closer to the level of human developers, ultimately advancing automated software engineering.
Abstract
Automated software engineering has been greatly empowered by the recent advances in Large Language Models (LLMs) for programming. While current benchmarks have shown that LLMs can perform various software engineering tasks like human developers, the majority of their evaluations are limited to short and self-contained algorithmic tasks. Solving challenging and practical programming tasks requires the capability of utilizing diverse function calls as tools to efficiently implement functionalities like data analysis and web development. In addition, using multiple tools to solve a task needs compositional reasoning by accurately understanding complex instructions. Fulfilling both of these characteristics can pose a great challenge for LLMs. To assess how well LLMs can solve challenging and practical programming tasks, we introduce BigCodeBench, a benchmark that challenges LLMs to invoke multiple function calls as tools from 139 libraries and 7 domains for 1,140 fine-grained programming tasks. To evaluate LLMs rigorously, each programming task encompasses an average of 5.6 test cases with an average branch coverage of 99%. In addition, we propose a natural-language-oriented variant of BigCodeBench, BigCodeBench-Instruct, that automatically transforms the original docstrings into short instructions containing only the essential information. Our extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores up to 60%, significantly lower than the human performance of 97%. The results underscore the need for further advancements in this area.
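As a rough illustration of how an evaluation of this kind can be scored, the sketch below runs a model's candidate solution against a task's unittest suite and reports the fraction of tasks solved under greedy decoding. This is a minimal, assumed harness for exposition, not BigCodeBench's actual evaluation code; the function names (`passes_tests`, `pass_rate`) are hypothetical, and in practice untrusted model code should be executed in a sandbox.

```python
import unittest


def passes_tests(candidate_code: str, test_code: str) -> bool:
    """Run a candidate solution against a task's unittest suite (sandbox this in practice)."""
    namespace = {}
    try:
        exec(candidate_code, namespace)  # defines the candidate's task_func
        exec(test_code, namespace)       # defines the task's TestCase classes
    except Exception:
        return False                     # candidate fails to even load
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for obj in list(namespace.values()):
        if isinstance(obj, type) and issubclass(obj, unittest.TestCase):
            suite.addTests(loader.loadTestsFromTestCase(obj))
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()


def pass_rate(outcomes: list[bool]) -> float:
    """Fraction of tasks whose single greedy sample passes all of its test cases."""
    return sum(outcomes) / len(outcomes)
```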