DeepCode: Open Agentic Coding
Zongwei Li, Zhonghang Li, Zirui Guo, Xubin Ren, Chao Huang
2025-12-10
Summary
This paper introduces DeepCode, a new system designed to automatically turn scientific papers into working code, essentially acting as an automated code engineer.
What's the problem?
Current AI coding assistants struggle to reliably translate complex documents, such as scientific papers, into functional codebases. Papers carry far more information than a model's context window can hold at once, so there is a constant trade-off between giving the model enough information to understand the task and overwhelming its limited working memory.
What's the solution?
DeepCode tackles this problem by carefully managing the flow of information, treating repository synthesis like optimizing a communication channel: maximize the useful signal that reaches the model while staying within its context limits. It does this in four main ways: first, it distills the paper into a compact 'blueprint'; second, it maintains a structured 'memory' of the code as it is built; third, it retrieves relevant external knowledge only when needed; and finally, it verifies its output and corrects errors in a continuous loop. This lets the system focus on the most important details without getting bogged down.
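The four operations above can be sketched as a toy pipeline. Everything here (the function names, the fake blueprint, the "TODO" check used as a stand-in for verification) is illustrative scaffolding, not DeepCode's actual implementation:

```python
# Hypothetical sketch of the four information operations; all names and
# data are invented for illustration, not taken from the DeepCode system.

def distill_blueprint(paper_text: str) -> dict:
    """Source compression: reduce the paper to a compact implementation plan."""
    # A real system would use an LLM; here we return a fixed plan.
    return {"modules": ["model.py", "train.py"]}

def update_memory(memory: dict, filename: str, summary: str) -> None:
    """Structured indexing: keep a stateful summary of code written so far."""
    memory[filename] = summary

def retrieve_knowledge(query: str, kb: dict) -> str:
    """Conditional injection: pull in external knowledge only when needed."""
    return kb.get(query, "")

def verify(codebase: dict) -> list:
    """Closed-loop correction: return files that fail a check (empty = done)."""
    return [f for f, src in codebase.items() if "TODO" in src]

def synthesize(paper_text: str, kb: dict) -> dict:
    blueprint = distill_blueprint(paper_text)
    memory, codebase = {}, {}
    for module in blueprint["modules"]:
        hint = retrieve_knowledge(module, kb)
        codebase[module] = f"# {module}: {hint or 'implemented from blueprint'}"
        update_memory(memory, module, f"summary of {module}")
    # Repair loop: rework any file that fails verification.
    while (errors := verify(codebase)):
        for f in errors:
            codebase[f] = codebase[f].replace("TODO", "fixed")
    return codebase

repo = synthesize("paper text here", {"model.py": "use a small MLP"})
print(sorted(repo))
```

The key design point the sketch tries to capture is that only compressed or retrieved material, never the whole paper, flows into each step.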
Why it matters?
This research matters because it significantly improves the ability of AI to automatically reproduce scientific results from published papers. DeepCode outperforms existing AI tools and even surpasses PhD-level human experts on key reproduction metrics, which could speed up the process of verifying research and making new discoveries. It lays the groundwork for a future where AI can autonomously reproduce scientific findings, accelerating the pace of research.
Abstract
Recent advances in large language models (LLMs) have given rise to powerful coding agents, making it possible for code assistants to evolve into code engineers. However, existing methods still face significant challenges in achieving high-fidelity document-to-codebase synthesis (such as scientific papers to code), primarily due to a fundamental conflict between information overload and the context bottlenecks of LLMs. In this work, we introduce DeepCode, a fully autonomous framework that fundamentally addresses this challenge through principled information-flow management. By treating repository synthesis as a channel optimization problem, DeepCode seamlessly orchestrates four information operations to maximize task-relevant signals under finite context budgets: source compression via blueprint distillation, structured indexing using stateful code memory, conditional knowledge injection via retrieval-augmented generation, and closed-loop error correction. Extensive evaluations on the PaperBench benchmark demonstrate that DeepCode achieves state-of-the-art performance, decisively outperforming leading commercial agents such as Cursor and Claude Code, and crucially, surpassing PhD-level human experts from top institutes on key reproduction metrics. By systematically transforming paper specifications into production-grade implementations comparable to human expert quality, this work establishes new foundations for autonomous scientific reproduction that can accelerate research evaluation and discovery.
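The abstract's framing of "maximizing task-relevant signals under finite context budgets" can be illustrated with a toy selection heuristic. The greedy relevance-per-token rule and all the numbers below are assumptions chosen for the example, not the paper's algorithm:

```python
# Toy context assembly under a token budget (illustrative only): pick the
# candidate snippets with the highest relevance per token until full.

def pack_context(items, budget_tokens):
    """items: list of (relevance, token_count, text) tuples."""
    chosen, used = [], 0
    # Sort by relevance density so cheap, highly relevant items come first.
    for rel, tokens, text in sorted(items, key=lambda x: x[0] / x[1], reverse=True):
        if used + tokens <= budget_tokens:
            chosen.append(text)
            used += tokens
    return chosen

items = [
    (0.9, 400, "blueprint summary"),
    (0.6, 900, "full related-work section"),
    (0.8, 200, "current file skeleton"),
    (0.3, 800, "appendix proofs"),
]
print(pack_context(items, budget_tokens=700))
# → ['current file skeleton', 'blueprint summary']
```

Under this heuristic the compressed blueprint and the code-memory summary crowd out bulky, low-value text, which is the intuition behind the channel-optimization framing.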