SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
Samuel Miserendino, Michele Wang, Tejal Patwardhan, Johannes Heidecke
2025-02-18
Summary
This paper introduces SWE-Lancer, a benchmark for measuring how well AI language models perform real freelance software engineering work. Models are given coding tasks that human freelancers were actually paid to complete, and performance is scored by how much money the model would have earned.
What's the problem?
Existing coding benchmarks don't capture how well models handle real-world programming jobs, so it's hard to tell whether AI is actually ready to do the work of human programmers in practical settings.
What's the solution?
The researchers built SWE-Lancer from over 1,400 real freelance coding tasks posted on Upwork, worth a total of $1 million in actual payouts. The benchmark tests models both on writing code directly and on making managerial decisions about which implementation proposal to accept. Solutions are graded with thorough end-to-end tests that experienced software engineers have triple-verified.
Why does it matter?
SWE-Lancer gives a more realistic picture of how close AI is to doing the work of human programmers. By tying model performance to real money earned, it helps us understand how AI might affect jobs and the economy, and it points researchers toward improving AI coding skills in ways that matter in practice.
Abstract
We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at $1 million USD total in real-world payouts. SWE-Lancer encompasses both independent engineering tasks, ranging from $50 bug fixes to $32,000 feature implementations, and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split, SWE-Lancer Diamond (https://github.com/openai/SWELancer-Benchmark). By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
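The abstract's payout-weighted scoring can be illustrated with a minimal sketch: sum the dollar values of the tasks a model solves and compare against the total pool. The task list, payout figures, and field names below are illustrative assumptions, not data from the benchmark itself.

```python
# Hypothetical tasks with real-dollar payouts; a task only "earns" its
# payout if the model's solution passes the end-to-end tests.
tasks = [
    {"payout_usd": 50, "passed": True},      # e.g. a small bug fix
    {"payout_usd": 32000, "passed": False},  # e.g. a large feature
    {"payout_usd": 500, "passed": True},
]

# Total dollars earned = sum of payouts for passing tasks.
earned = sum(t["payout_usd"] for t in tasks if t["passed"])
total = sum(t["payout_usd"] for t in tasks)

print(f"Earned ${earned} of ${total} ({earned / total:.1%})")
```

This framing means a model's score reflects economic value, not just task count: failing one high-value task can outweigh passing many small ones.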