
A Survey of Vibe Coding with Large Language Models

Yuyao Ge, Lingrui Mei, Zenghao Duan, Tianhao Li, Yujia Zheng, Yiwei Wang, Lexin Wang, Jiayu Yao, Tianyu Liu, Yujun Cai, Baolong Bi, Fangda Guo, Jiafeng Guo, Shenghua Liu, Xueqi Cheng

2025-10-15


Summary

This paper explores a new way of coding, nicknamed 'Vibe Coding', built around powerful AI tools called large language models. Instead of carefully reading every line of code the AI writes, developers check whether the AI's code *works* as expected, focusing on the outcome rather than the details.

What's the problem?

While 'Vibe Coding' sounds promising, it doesn't always deliver. Surprisingly, empirical studies have found that some developers actually become *less* productive when using this method, and it's still unclear how humans and AI can best work together in this new style of development. Until now, there hasn't been an organized, systematic look at how 'Vibe Coding' actually functions and what makes it succeed or fail.

What's the solution?

The researchers did a huge review, analyzing over 1000 research papers, to understand 'Vibe Coding' better. They created a theoretical model to explain how developers, projects, and AI agents interact. Then, they identified five distinct ways developers currently practice 'Vibe Coding': letting the AI run freely (Unconstrained Automation), working with it step-by-step (Iterative Conversational Collaboration), and the Planning-Driven, Test-Driven, and Context-Enhanced models. They found that success isn't just about how good the AI is, but also about giving it the right information, having a good coding environment, and establishing clear ways for humans and AI to collaborate.
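The step-by-step style mentioned above can be sketched as a simple loop: the agent proposes code, the developer validates it by *running* it against expected outcomes (never by reading it line by line), and any mismatch is fed back as the next prompt. This is a minimal illustration, not the paper's actual system; `toy_agent` is a hypothetical stub standing in for an LLM call.

```python
from typing import Optional

def toy_agent(task: str, feedback: Optional[str]) -> str:
    """Stub agent: emits source code for the task, revising on feedback.
    A real system would call an LLM here."""
    if feedback is None:
        # First draft contains a deliberate bug to exercise the loop.
        return "def add(a, b):\n    return a - b\n"
    return "def add(a, b):\n    return a + b\n"

def outcome_check(source: str) -> Optional[str]:
    """Outcome-based validation: execute the code and test its behavior."""
    namespace: dict = {}
    exec(source, namespace)
    if namespace["add"](2, 3) != 5:
        return "add(2, 3) should return 5"
    return None  # observed outcome matches expectations

def vibe_loop(task: str, max_rounds: int = 5) -> str:
    """Iterate agent proposal -> outcome check -> feedback until accepted."""
    feedback = None
    for _ in range(max_rounds):
        source = toy_agent(task, feedback)
        feedback = outcome_check(source)
        if feedback is None:
            return source  # accepted on outcome, never inspected manually
    raise RuntimeError("agent failed to converge within the round budget")

accepted = vibe_loop("implement add(a, b)")
```

The design choice worth noting is that `outcome_check` is the *only* gate: the developer's effort goes into specifying observable expectations, which is exactly the trade the survey examines.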

Why it matters?

This research is important because 'Vibe Coding' has the potential to change how software is built. By understanding what works and what doesn't, and by providing a framework for how to approach it, this paper helps developers and researchers make the most of these new AI tools and build better software more efficiently.

Abstract

The advancement of large language models (LLMs) has catalyzed a paradigm shift from code generation assistance to autonomous coding agents, enabling a novel development methodology termed "Vibe Coding" where developers validate AI-generated implementations through outcome observation rather than line-by-line code comprehension. Despite its transformative potential, the effectiveness of this emergent paradigm remains under-explored, with empirical evidence revealing unexpected productivity losses and fundamental challenges in human-AI collaboration. To address this gap, this survey provides the first comprehensive and systematic review of Vibe Coding with large language models, establishing both theoretical foundations and practical frameworks for this transformative development approach. Drawing from systematic analysis of over 1000 research papers, we survey the entire Vibe Coding ecosystem, examining critical infrastructure components including LLMs for coding, LLM-based coding agents, development environments for coding agents, and feedback mechanisms. We first establish Vibe Coding as a formal discipline by formalizing it through a Constrained Markov Decision Process that captures the dynamic triadic relationship among human developers, software projects, and coding agents. Building upon this theoretical foundation, we then synthesize existing practices into five distinct development models: Unconstrained Automation, Iterative Conversational Collaboration, Planning-Driven, Test-Driven, and Context-Enhanced Models, thus providing the first comprehensive taxonomy in this domain. Critically, our analysis reveals that successful Vibe Coding depends not merely on agent capabilities but on systematic context engineering, well-established development environments, and human-agent collaborative development models.
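The paper's exact Constrained Markov Decision Process is not reproduced here, but the standard CMDP tuple it presumably instantiates has the form (the mapping to Vibe Coding below is an interpretive sketch, not the authors' definition):

```latex
\mathcal{M} = \bigl(\mathcal{S}, \mathcal{A}, P, R, \{C_i\}_{i=1}^{k}, \{d_i\}_{i=1}^{k}, \gamma\bigr)
```

Here $\mathcal{S}$ could encode the joint state of the software project and conversation, $\mathcal{A}$ the coding agent's actions (edits, tool calls, queries to the developer), $P(s' \mid s, a)$ the transition dynamics, $R$ an outcome-based reward (e.g., tests passing), and each constraint cost $C_i$ with budget $d_i$ could capture human-imposed limits such as safety, style, or review requirements, with the objective of maximizing expected discounted reward subject to $\mathbb{E}\left[\sum_t \gamma^t C_i(s_t, a_t)\right] \le d_i$ for all $i$.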