Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risk of Language Models
Andy K. Zhang, Neil Perry, Riya Dulepet, Eliot Jones, Justin W. Lin, Joey Ji, Celeste Menders, Gashon Hussein, Samantha Liu, Donovan Jasper, Pura Peetathawatchai, Ari Glenn, Vikram Sivashankar, Daniel Zamoshchin, Leo Glikbarg, Derek Askaryar, Mike Yang, Teddy Zhang, Rishi Alluri, Nathan Tran, Rinnara Sangpisit, Polycarpos Yiorkadjis
2024-08-20

Summary
This paper presents Cybench, a framework designed to evaluate the cybersecurity capabilities and risks of language models (LMs) in identifying vulnerabilities and executing exploits.
What's the problem?
As language models become more capable, they can potentially be used to identify security weaknesses in computer systems and to exploit them. This makes it important to measure how well these models perform cybersecurity tasks, both to gauge their usefulness and to understand the risks they pose. Existing evaluations are limited in scope and do not cover the wide range of tasks needed for a thorough assessment.
What's the solution?
Cybench addresses this issue by providing a structured way to specify cybersecurity tasks and to evaluate LM agents on them. It includes 40 professional-level Capture the Flag (CTF) challenges drawn from four distinct competitions, chosen to test a wide range of cybersecurity skills and difficulty levels. Because many tasks are too hard for current agents, the framework also breaks 17 of the tasks into smaller subtasks, giving a more gradated measure of performance. The authors built a cybersecurity agent on top of the framework and evaluated seven models, including GPT-4o and Claude 3.5 Sonnet, to see how well they could solve these tasks.
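To make the task-plus-subtasks idea concrete, here is a minimal sketch of how such a specification could be represented. The class names, fields, and example values are hypothetical illustrations, not the actual Cybench schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subtask:
    """One intermediary step toward the full solve, with its own checkable answer."""
    question: str
    answer: str                     # e.g. an intermediate value the agent must recover
    hints: List[str] = field(default_factory=list)

@dataclass
class Task:
    """A single CTF task: prose description, starter files, and the secret flag."""
    name: str
    description: str
    starter_files: List[str]        # files placed in the agent's working directory
    flag: str                       # the string the agent must ultimately submit
    subtasks: List[Subtask] = field(default_factory=list)

# Hypothetical task in the spirit of a Cybench-style crypto challenge
example = Task(
    name="example-crypto",
    description="Recover the flag from the provided ciphertext and encryption script.",
    starter_files=["ciphertext.txt", "encrypt.py"],
    flag="flag{example}",
    subtasks=[
        Subtask(question="Which cipher does encrypt.py implement?", answer="AES-CTR"),
        Subtask(question="What value is reused across encryptions?", answer="nonce"),
    ],
)
```

In a guided run, the agent would be walked through the subtasks in order before attempting the full flag; in an unguided run, only the top-level description and starter files would be provided.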
Why it matters?
This research is important because it helps improve the understanding of how language models can be used in cybersecurity. By providing a way to evaluate their capabilities, Cybench can help researchers and developers create safer AI systems and better prepare for potential cyber threats.
Abstract
Language Model (LM) agents for cybersecurity that are capable of autonomously identifying vulnerabilities and executing exploits have the potential to cause real-world impact. Policymakers, model providers, and other researchers in the AI and cybersecurity communities are interested in quantifying the capabilities of such agents to help mitigate cyberrisk and investigate opportunities for penetration testing. Toward that end, we introduce Cybench, a framework for specifying cybersecurity tasks and evaluating agents on those tasks. We include 40 professional-level Capture the Flag (CTF) tasks from 4 distinct CTF competitions, chosen to be recent, meaningful, and spanning a wide range of difficulties. Each task includes its own description, starter files, and is initialized in an environment where an agent can execute bash commands and observe outputs. Since many tasks are beyond the capabilities of existing LM agents, we introduce subtasks, which break down a task into intermediary steps for more gradated evaluation; we add subtasks for 17 of the 40 tasks. To evaluate agent capabilities, we construct a cybersecurity agent and evaluate 7 models: GPT-4o, Claude 3 Opus, Claude 3.5 Sonnet, Mixtral 8x22b Instruct, Gemini 1.5 Pro, Llama 3 70B Chat, and Llama 3.1 405B Instruct. Without guidance, we find that agents are able to solve only the easiest complete tasks that took human teams up to 11 minutes to solve, with Claude 3.5 Sonnet and GPT-4o having the highest success rates. Finally, subtasks provide more signal for measuring performance compared to unguided runs, with models achieving a 3.2% higher success rate on complete tasks with subtask-guidance than without subtask-guidance. All code and data are publicly available at https://cybench.github.io
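The abstract describes an environment in which the agent issues bash commands and observes their output. For readers unfamiliar with this setup, the sketch below shows one way such a command-execution loop could look. It is not the Cybench agent: the `run_agent` function, its parameters, and the prompt wording are assumptions for illustration, and the real framework adds structured responses, memory management, and subtask answer checking.

```python
import subprocess

def run_agent(model, task_description: str, flag: str, max_turns: int = 15) -> bool:
    """Minimal agent loop: the model proposes one bash command per turn, the command
    is executed in the task environment, and the observed output is fed back.
    `model` is any callable mapping the running transcript to the next command."""
    transcript = (
        f"Task: {task_description}\n"
        "Respond with a single bash command per turn."
    )
    for _ in range(max_turns):
        command = model(transcript)
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        observation = (result.stdout + result.stderr)[:4000]  # truncate long outputs
        transcript += f"\n$ {command}\n{observation}"
        if flag in observation:      # the agent surfaced the flag
            return True
    return False
```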