BigO(Bench) -- Can LLMs Generate Code with Controlled Time and Space Complexity?

Pierre Chambon, Baptiste Roziere, Benoit Sagot, Gabriel Synnaeve

2025-03-21

Summary

This paper introduces a new test, called BigO(Bench), to see if AI language models can write code that follows specific rules about how fast it runs and how much memory it uses.

What's the problem?

Current benchmarks for AI coding skills don't check whether a model understands, or can write, code that meets specific time- and memory-efficiency requirements.

What's the solution?

BigO(Bench) includes 3,105 coding problems, over a million solutions, and a framework that automatically infers how efficient a piece of code is by profiling its runtime and memory use across input sizes. This lets researchers test how well AI models can generate code with controlled time and space complexity.
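The core idea of inferring complexity from profiling can be illustrated with a toy sketch: time a function on growing inputs, then see which standard complexity curve fits the measurements best. This is a hypothetical simplification, not the paper's actual framework; the names `infer_time_complexity` and `make_input` are made up for illustration.

```python
import timeit
import math

def infer_time_complexity(func, input_sizes, make_input):
    """Toy complexity inference: fit measured runtimes against candidate
    complexity classes and return the best-matching label.

    This is an illustrative sketch, not the BigO(Bench) framework itself.
    """
    times = []
    for n in input_sizes:
        data = make_input(n)
        # Take the minimum of several repeats to reduce timing noise.
        t = min(timeit.repeat(lambda: func(data), number=3, repeat=3))
        times.append(t)

    candidates = {
        "O(1)": lambda n: 1.0,
        "O(log n)": lambda n: math.log2(n),
        "O(n)": lambda n: float(n),
        "O(n log n)": lambda n: n * math.log2(n),
        "O(n^2)": lambda n: float(n) ** 2,
    }

    best_label, best_err = None, float("inf")
    for label, growth in candidates.items():
        preds = [growth(n) for n in input_sizes]
        # Least-squares scale factor aligning the candidate curve
        # to the measured runtimes.
        scale = sum(p * t for p, t in zip(preds, times)) / sum(p * p for p in preds)
        err = sum((t - scale * p) ** 2 for t, p in zip(times, preds))
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

For example, calling this on a function with nested loops over its input should return `"O(n^2)"`. The real framework is more robust: it profiles both runtime and memory footprint over a large set of input sizes.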

Why does it matter?

This work matters because it helps us understand if AI models can truly understand and generate efficient code, which is important for real-world programming tasks.

Abstract

We introduce BigO(Bench), a novel coding benchmark designed to evaluate the capabilities of generative language models in understanding and generating code with specified time and space complexities. This benchmark addresses the gap in current evaluations that often overlook the ability of models to comprehend and produce code constrained by computational complexity. BigO(Bench) includes tooling to infer the algorithmic complexity of any Python function from profiling measurements, including human- or LLM-generated solutions. BigO(Bench) also includes a set of 3,105 coding problems and 1,190,250 solutions from Code Contests annotated with inferred (synthetic) time and space complexity labels from the complexity framework, as well as corresponding runtime and memory footprint values for a large set of input sizes. We present results from evaluating multiple state-of-the-art language models on this benchmark, highlighting their strengths and weaknesses in handling complexity requirements. In particular, token-space reasoning models are unrivaled in code generation but not in complexity understanding, hinting that they may not generalize well to tasks for which no reward was given at training time.
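The abstract also mentions memory footprint values measured over a range of input sizes. In Python, the space side of such profiling can be sketched with the standard-library `tracemalloc` module; this is a minimal illustration of the idea, with `peak_memory` and `memory_curve` as hypothetical helper names, not the benchmark's own tooling.

```python
import tracemalloc

def peak_memory(func, arg):
    """Return the peak traced memory (in bytes) allocated while
    running func(arg), using the standard tracemalloc module."""
    tracemalloc.start()
    func(arg)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

def memory_curve(func, make_input, sizes):
    """Collect (input size, peak bytes) pairs, analogous to the
    memory-footprint curves the benchmark records per solution."""
    return [(n, peak_memory(func, make_input(n))) for n in sizes]
```

A function whose memory use grows with its input (say, one that builds a list of n elements) yields a rising curve, which could then be fit against candidate space-complexity classes in the same way as runtime measurements.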