SWE-chat: Coding Agent Interactions From Real Users in the Wild
Joachim Baumann, Vishakh Padmakumar, Xiang Li, John Yang, Diyi Yang, Sanmi Koyejo
2026-04-23
Summary
This research paper investigates how developers actually use AI coding agents, such as GitHub Copilot, in their everyday work, and how helpful those tools truly are.
What's the problem?
While AI coding tools are becoming popular, there has been little real-world data on *how* people use them. Most evaluations take place in controlled settings that don't reflect the messy reality of software development. We didn't know whether developers rely heavily on these tools, whether the generated code is actually good, or how often developers have to fix or reject the AI's suggestions.
What's the solution?
The researchers created a dataset called SWE-chat by collecting data from 6,000 real coding sessions of open-source developers working with AI agents. The dataset includes all the prompts the developers gave the AI, the AI's responses, and, importantly, which code the developers ultimately kept or changed. They then analyzed this data to understand patterns in how developers interact with these tools, how much committed code comes directly from the AI, and how often the AI makes mistakes or creates security issues.
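The paper's exact attribution method isn't described here, but the kind of analysis involved can be illustrated with a minimal sketch. The `Session` record, its field names, and the 95% "vibe coding" cutoff below are all hypothetical assumptions for illustration, not the paper's actual pipeline: each session tracks line counts by author in the final commit, plus how many agent-produced lines survived into it.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Hypothetical per-session line counts (illustrative only)."""
    agent_lines: int      # agent-authored lines in the final commit
    human_lines: int      # human-authored lines in the final commit
    agent_proposed: int   # total lines the agent produced during the session
    agent_survived: int   # agent-produced lines that made it into the commit

def classify(session: Session) -> str:
    """Bucket a session by who wrote the committed code.

    The 95% threshold for "vibe coding" is an assumed cutoff, not
    one taken from the paper.
    """
    total = session.agent_lines + session.human_lines
    if total == 0:
        return "empty"
    frac = session.agent_lines / total
    if frac >= 0.95:
        return "vibe coding"   # agent authored virtually all committed code
    if frac == 0.0:
        return "human only"    # human wrote everything themselves
    return "mixed"

def survival_rate(sessions: list[Session]) -> float:
    """Fraction of all agent-produced lines that survive into commits."""
    proposed = sum(s.agent_proposed for s in sessions)
    survived = sum(s.agent_survived for s in sessions)
    return survived / proposed if proposed else 0.0

sessions = [
    Session(agent_lines=120, human_lines=0,  agent_proposed=300, agent_survived=120),
    Session(agent_lines=0,   human_lines=80, agent_proposed=50,  agent_survived=0),
    Session(agent_lines=40,  human_lines=60, agent_proposed=90,  agent_survived=40),
]
print([classify(s) for s in sessions])   # ['vibe coding', 'human only', 'mixed']
print(round(survival_rate(sessions), 2)) # 0.36
```

Aggregating `classify` over all sessions would yield the kind of bimodal usage distribution the paper reports, and `survival_rate` corresponds to its finding that only 44% of agent-produced code survives into user commits.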
Why does it matter?
This work is important because it moves beyond testing AI coding tools in the lab and provides a realistic picture of their performance in the wild. The findings show that AI coding agents are far from perfect: only 44% of agent-written code survives into developers' commits, and agent-written code introduces more security vulnerabilities than code written by humans. This research provides a foundation for improving these tools and for understanding how best to integrate them into the software development process, ultimately helping developers write better code more efficiently.
Abstract
AI coding agents are being adopted at scale, yet we lack empirical evidence on how people actually use them and how much of their output is useful in practice. We present SWE-chat, the first large-scale dataset of real coding agent sessions collected from open-source developers in the wild. The dataset currently contains 6,000 sessions, comprising more than 63,000 user prompts and 355,000 agent tool calls. SWE-chat is a living dataset; our collection pipeline automatically and continually discovers and processes sessions from public repositories. Leveraging SWE-chat, we provide an initial empirical characterization of real-world coding agent usage and failure modes. We find that coding patterns are bimodal: in 41% of sessions, agents author virtually all committed code ("vibe coding"), while in 23%, humans write all code themselves. Despite rapidly improving capabilities, coding agents remain inefficient in natural settings. Just 44% of all agent-produced code survives into user commits, and agent-written code introduces more security vulnerabilities than code authored by humans. Furthermore, users push back against agent outputs -- through corrections, failure reports, and interruptions -- in 44% of all turns. By capturing complete interaction traces with human vs. agent code authorship attribution, SWE-chat provides an empirical foundation for moving beyond curated benchmarks towards an evidence-based understanding of how AI agents perform in real developer workflows.