Training Language Model Agents to Find Vulnerabilities with CTF-Dojo

Terry Yue Zhuo, Dingmin Wang, Hantian Ding, Varun Kumar, Zijian Wang

2025-08-27

Summary

This paper introduces CTF-Dojo, a new system for training large language models (LLMs) by letting them actually *do* things and learn from the results, specifically by solving cybersecurity challenges like those found in 'Capture the Flag' competitions.

What's the problem?

Currently, it's hard to train LLMs to be really good at tasks that require interacting with a real computer system, like writing and debugging code or solving security problems. Existing systems that allow this kind of 'executable' training are limited in scale and aren't easily adapted to new challenges, so building a large, diverse training environment requires a lot of manual work from experts.

What's the solution?

The researchers created CTF-Dojo, a collection of 658 cybersecurity challenges packaged as Docker containers so that LLMs can interact with them safely and reproducibly. They also built CTF-Forge, a tool that automatically turns publicly available cybersecurity resources into these runnable challenges in minutes, drastically reducing the time and effort needed to expand the training environment. They then trained LLM agents on 486 execution-verified solution trajectories collected from these challenges and showed gains of up to 11.6% over strong baselines on standard cybersecurity benchmarks.
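The key mechanism here is execution-grounded verification: a candidate solution counts as correct only if actually running it produces the challenge's secret flag. As a minimal illustrative sketch (not the paper's actual CTF-Forge pipeline; the flag value and function names are hypothetical), a verifier might run the agent's command in a sandbox and check the output:

```python
import subprocess
import sys

# Hypothetical flag, for illustration only; real CTF challenges
# hide the flag inside the containerized environment.
FLAG = "flag{example}"

def verify_solution(cmd: list[str], expected_flag: str, timeout: float = 5.0) -> bool:
    """Run a candidate solution command and check its output for the flag.

    This is the core of execution-grounded feedback: the training signal
    is granted only when the executed command actually prints the flag,
    so it cannot be gamed by plausible-looking but wrong answers.
    """
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    return expected_flag in result.stdout

# A trivially correct "solution" that prints the flag:
print(verify_solution([sys.executable, "-c", f"print('{FLAG}')"], FLAG))  # True
```

In the paper's setup this check runs inside a Docker container per challenge, which is what makes the feedback both safe to execute and reproducible across training runs.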

Why it matters?

This work is important because it shows that letting LLMs learn by actually *doing* things – getting feedback from executing code or solving problems – is a very effective way to improve their abilities. It also demonstrates that you don't necessarily need expensive, proprietary systems to achieve this; CTF-Dojo is open-source and performs as well as or better than some of the most advanced, closed-source models.

Abstract

Large language models (LLMs) have demonstrated exceptional capabilities when trained within executable runtime environments, notably excelling at software engineering tasks through verified feedback loops. Yet, scalable and generalizable execution-grounded environments remain scarce, limiting progress in training more capable ML agents. We introduce CTF-Dojo, the first large-scale executable runtime tailored for training LLMs with verifiable feedback, featuring 658 fully functional Capture-The-Flag (CTF)-style challenges containerized in Docker with guaranteed reproducibility. To enable rapid scaling without manual intervention, we develop CTF-Forge, an automated pipeline that transforms publicly available artifacts into ready-to-use execution environments in minutes, eliminating weeks of expert configuration traditionally required. We trained LLM-based agents on just 486 high-quality, execution-verified trajectories from CTF-Dojo, achieving up to 11.6% absolute gains over strong baselines across three competitive benchmarks: InterCode-CTF, NYU CTF Bench, and Cybench. Our best-performing 32B model reaches 31.9% Pass@1, establishing a new open-weight state-of-the-art that rivals frontier models like DeepSeek-V3-0324 and Gemini-2.5-Flash. By framing CTF-style tasks as a benchmark for executable-agent learning, CTF-Dojo demonstrates that execution-grounded training signals are not only effective but pivotal in advancing high-performance ML agents without dependence on costly proprietary systems.
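The Pass@1 figure in the abstract is the standard code-generation metric: the probability that a single sampled attempt solves a task. When multiple samples per task are available, it is usually computed with the unbiased pass@k estimator from Chen et al. (2021); a minimal sketch (the example numbers are illustrative, not from the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples drawn per task
    c: number of those samples that solved the task
    k: budget of attempts being scored

    Returns the probability that at least one of k randomly chosen
    samples (out of the n drawn) is correct.
    """
    if n - c < k:
        return 1.0  # not enough failures to fill k picks: success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 attempts per challenge, 3 succeed -> pass@1 = 0.3
print(pass_at_k(10, 3, 1))  # 0.3
```

For k = 1 this reduces to the simple success fraction c/n, averaged over all benchmark tasks.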