FeatureBench: Benchmarking Agentic Coding for Complex Feature Development

Qixing Zhou, Jiacheng Zhang, Haiyang Wang, Rui Hao, Jiahe Wang, Minghao Han, Yuxue Yang, Shuzhe Wu, Feiyang Pan, Lue Fan, Dandan Tu, Zhaoxiang Zhang

2026-02-12

Summary

This paper introduces a new way to test how well AI coding agents powered by large language models can implement complete new features in real software projects.

What's the problem?

Currently, the benchmarks used to evaluate these AI coding agents are limited. They often focus on small bug fixes contained within a single pull request, and they don't always check whether the code actually *works* after the AI makes its changes. In addition, building new benchmark tasks to keep up with evolving software is a slow, manual process, and existing benchmarks can become outdated, giving the AI an unfair advantage by letting it 'memorize' solutions it may have seen during training.

What's the solution?

The researchers created a benchmark called FeatureBench. It automatically generates coding tasks from the unit tests already written for open-source projects: by tracing how different parts of the code depend on each other, it identifies larger, more realistic tasks, such as adding a whole new feature whose code may be spread across multiple files, commits, and pull requests over time. It then builds executable environments where the AI can work and automatically checks whether the AI's changes pass all the tests, ensuring the code actually functions correctly. The first version covers 200 tasks and 3,825 executable environments from 24 open-source projects.
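The paper's exact tracing toolkit isn't reproduced here, but the core idea can be sketched roughly: start from a unit test, walk the dependency graph of the code it exercises, and treat every symbol reachable from that test as the scope of the feature the agent must implement. The graph, module names, and the function `trace_feature_scope` below are illustrative assumptions, not the authors' actual implementation.

```python
from collections import deque

# Toy dependency graph: each node (a test or a source symbol) maps to the
# symbols it directly calls or imports. Purely illustrative data.
dependency_graph = {
    "tests/test_export.py::test_export_csv": ["pkg.export.to_csv"],
    "pkg.export.to_csv": ["pkg.export._serialize_rows", "pkg.io.write_file"],
    "pkg.export._serialize_rows": ["pkg.models.Row"],
    "pkg.io.write_file": [],
    "pkg.models.Row": [],
}

def trace_feature_scope(test_node, graph):
    """Breadth-first walk from a unit test to every symbol it transitively
    depends on; the reachable set approximates the feature's code scope."""
    scope, queue = set(), deque([test_node])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in scope:
                scope.add(dep)
                queue.append(dep)
    return scope

if __name__ == "__main__":
    scope = trace_feature_scope("tests/test_export.py::test_export_csv",
                                dependency_graph)
    print(sorted(scope))
    # ['pkg.export._serialize_rows', 'pkg.export.to_csv',
    #  'pkg.io.write_file', 'pkg.models.Row']
```

In the real benchmark this kind of scope would then be carved out of the repository to form the task, while the remaining tests guard against breaking other features.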

Why it matters?

The results show that even the best AI coding agents struggle with these more complex, real-world tasks: the strongest model tested, Claude 4.5 Opus, succeeds only about 11% of the time, even though it resolves 74.4% of tasks on the simpler SWE-bench. This highlights that there is still a lot of room for improvement in AI coding abilities. And because FeatureBench generates and updates its tasks automatically, it offers a reliable, scalable way to keep pushing the boundaries of what these agents can do, and its verifiable environments can even be used to help *train* them.

Abstract

Agents powered by large language models (LLMs) are increasingly adopted in the software industry, contributing code as collaborators or even autonomous developers. As their presence grows, it becomes important to assess the current boundaries of their coding abilities. Existing agentic coding benchmarks, however, cover a limited task scope, e.g., bug fixing within a single pull request (PR), and often rely on non-executable evaluations or lack an automated approach for continually updating the evaluation coverage. To address such issues, we propose FeatureBench, a benchmark designed to evaluate agentic coding performance in end-to-end, feature-oriented software development. FeatureBench incorporates an execution-based evaluation protocol and a scalable test-driven method that automatically derives tasks from code repositories with minimal human effort. By tracing from unit tests along a dependency graph, our approach can identify feature-level coding tasks spanning multiple commits and PRs scattered across the development timeline, while ensuring the proper functioning of other features after the separation. Using this framework, we curated 200 challenging evaluation tasks and 3825 executable environments from 24 open-source repositories in the first version of our benchmark. Empirical evaluation reveals that even a state-of-the-art agentic model such as Claude 4.5 Opus, which achieves a 74.4% resolved rate on SWE-bench, succeeds on only 11.0% of tasks, opening new opportunities for advancing agentic coding. Moreover, benefiting from our automated task collection toolkit, FeatureBench can be easily scaled and updated over time to mitigate data leakage. The inherent verifiability of constructed environments also makes our method potentially valuable for agent training.
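For readers unfamiliar with execution-based evaluation, the sketch below shows the general shape of such a check: apply the agent's patch inside an isolated copy of the repository and count the task as resolved only if the full test suite passes. It is a minimal illustration under assumed paths and commands (`git apply`, `pytest`), not FeatureBench's actual harness.

```python
import subprocess

def run(cmd, cwd):
    """Run a command inside the repository copy and return its exit code."""
    return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True).returncode

def evaluate_patch(repo_dir, patch_file):
    """Execution-based check: apply the agent's patch, then run the test
    suite; the task counts as resolved only if every test passes."""
    if run(["git", "apply", patch_file], cwd=repo_dir) != 0:
        return False  # the patch does not even apply cleanly
    return run(["python", "-m", "pytest", "-q"], cwd=repo_dir) == 0

# Hypothetical usage: repo_dir would be a pre-built, isolated environment.
# resolved = evaluate_patch("/tmp/featurebench_task_042", "agent.patch")
```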