B4: Towards Optimal Assessment of Plausible Code Solutions with Plausible Tests
Mouxiang Chen, Zhongxin Liu, He Tao, Yusu Hong, David Lo, Xin Xia, Jianling Sun
2024-09-20

Summary
This paper presents B4, a new method for selecting the best code solution from multiple generated candidates by using automatically generated, potentially unreliable test cases to judge which solution is most likely correct.
What's the problem?
When generating code solutions, it's important to choose the best one, but reliable test cases for validating them are often unavailable and expensive to build. As a result, researchers generate test cases automatically, yet existing selection methods rely on heuristics (rules of thumb) that offer no guarantee of picking the optimal solution, especially when both the candidate code and the tests are plausible but not necessarily correct.
What's the solution?
The authors work within a Bayesian framework to define an optimal selection strategy based on the posterior probability of the observed passing states between solutions and tests, and they frame the search for the best solution as an integer programming problem. Because this optimal strategy is uncomputable in practice, they also develop an efficient approximation whose error is bounded by the correctness of the prior knowledge, and they incorporate priors tailored to code generation tasks. The resulting method, B4, significantly outperforms existing heuristics, improving selection performance by up to 50% over the strongest heuristic (and 246% over random selection) in the most challenging scenarios.
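To make the core idea concrete, below is a minimal sketch of how a posterior over observed passing states can rank candidate solutions. This is not the paper's B4 algorithm: the prior values, the p_pass likelihood table, and the function names are illustrative assumptions, and the sketch brute-forces the posterior by enumerating every latent correctness assignment instead of using B4's integer-programming formulation and efficient approximation.

import itertools
from typing import List

# Illustrative priors (assumed values, not the priors used by B4).
P_SOLUTION_CORRECT = 0.3   # prior that a generated solution is correct
P_TEST_CORRECT = 0.5       # prior that a generated test is correct

def p_pass(sol_correct: bool, test_correct: bool) -> float:
    """Assumed probability that a solution passes a test, given latent correctness."""
    if sol_correct and test_correct:
        return 0.99   # a correct solution should pass a correct test
    if sol_correct and not test_correct:
        return 0.20   # a correct solution usually fails a buggy test
    return 0.10       # an incorrect solution rarely passes any test

def posterior_best_solution(passes: List[List[bool]]) -> int:
    """Pick the solution with the highest posterior probability of being correct,
    given the observed pass matrix (passes[i][j]: solution i passes test j).
    Brute-force enumeration, exponential in (#solutions + #tests)."""
    n_sol, n_test = len(passes), len(passes[0])
    marginals = [0.0] * n_sol
    total = 0.0
    for sol_bits in itertools.product([False, True], repeat=n_sol):
        for test_bits in itertools.product([False, True], repeat=n_test):
            # Prior weight of this joint assignment of correctness labels.
            weight = 1.0
            for s in sol_bits:
                weight *= P_SOLUTION_CORRECT if s else 1 - P_SOLUTION_CORRECT
            for t in test_bits:
                weight *= P_TEST_CORRECT if t else 1 - P_TEST_CORRECT
            # Likelihood of the observed passing states under this assignment.
            for i in range(n_sol):
                for j in range(n_test):
                    p = p_pass(sol_bits[i], test_bits[j])
                    weight *= p if passes[i][j] else 1 - p
            total += weight
            for i in range(n_sol):
                if sol_bits[i]:
                    marginals[i] += weight
    # Normalize and return the index with the highest posterior correctness.
    marginals = [m / total for m in marginals]
    return max(range(n_sol), key=lambda i: marginals[i])

if __name__ == "__main__":
    # Three candidate solutions, four plausible tests: solutions 0 and 1 agree
    # and pass most tests, solution 2 disagrees.
    pass_matrix = [
        [True, True, True, False],
        [True, True, True, False],
        [False, False, True, True],
    ]
    print("best solution index:", posterior_best_solution(pass_matrix))

Even for this toy example the enumeration grows exponentially with the number of solutions and tests, which is exactly why the paper's efficient approximation, with an error bound tied to the quality of the prior knowledge, matters in practice.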
Why it matters?
This research is important because it enhances the process of evaluating and selecting code solutions in automated programming tasks. By improving how we assess code quality with plausible tests, B4 can lead to better software development practices and more reliable AI systems that generate code.
Abstract
Selecting the best code solution from multiple generated ones is an essential task in code generation, which can be achieved by using some reliable validators (e.g., developer-written test cases) for assistance. Since reliable test cases are not always available and can be expensive to build in practice, researchers propose to automatically generate test cases to assess code solutions. However, when both code solutions and test cases are plausible and not reliable, selecting the best solution becomes challenging. Although some heuristic strategies have been proposed to tackle this problem, they lack a strong theoretical guarantee and it is still an open question whether an optimal selection strategy exists. Our work contributes in two ways. First, we show that within a Bayesian framework, the optimal selection strategy can be defined based on the posterior probability of the observed passing states between solutions and tests. The problem of identifying the best solution is then framed as an integer programming problem. Second, we propose an efficient approach for approximating this optimal (yet uncomputable) strategy, where the approximation error is bounded by the correctness of prior knowledge. We then incorporate effective prior knowledge to tailor code generation tasks. Both theoretical and empirical studies confirm that existing heuristics are limited in selecting the best solutions with plausible test cases. Our proposed approximated optimal strategy B4 significantly surpasses existing heuristics in selecting code solutions generated by large language models (LLMs) with LLM-generated tests, achieving a relative performance improvement by up to 50% over the strongest heuristic and 246% over the random selection in the most challenging scenarios. Our code is publicly available at https://github.com/ZJU-CTAG/B4.