DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
DeepSeek-AI, Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y. Wu, Yukun Li, Huazuo Gao, Shirong Ma, Wangding Zeng, Xiao Bi, Zihui Gu, Hanwei Xu, Damai Dai, Kai Dong, Liyue Zhang, Yishi Piao, Zhibin Gou, Zhenda Xie, Zhewen Hao
2024-06-19

Summary
This paper introduces DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) language model designed for coding tasks. It achieves performance comparable to advanced closed-source models such as GPT-4 Turbo, and it has been extended to support a much wider range of programming languages and longer, more complex coding tasks.
What's the problem?
Many of the strongest language models for coding are closed-source, meaning their inner workings and training data aren't available to the public. This limits accessibility and innovation in the field of code intelligence. Additionally, these models often support only a limited number of programming languages and have limited context windows (the amount of text they can process at once), which hinders their effectiveness in real-world coding scenarios.
What's the solution?
To address these issues, the authors developed DeepSeek-Coder-V2, a Mixture-of-Experts (MoE) model that is further pre-trained from an intermediate DeepSeek-V2 checkpoint on an additional 6 trillion tokens, which strengthens its ability to understand and generate code as well as its mathematical reasoning. The model now supports 338 programming languages (up from 86) and extends the context length from 16K to 128K tokens, allowing it to handle longer and more complex coding tasks. In standard benchmark evaluations, DeepSeek-Coder-V2 outperforms popular closed-source models such as GPT-4 Turbo, Claude 3 Opus, and Gemini 1.5 Pro on coding and math benchmarks.
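The paper's summary does not spell out the MoE mechanics, so as a rough illustration only: an MoE layer replaces a single feed-forward block with several "expert" networks plus a router that sends each token to a small subset of them, so only a fraction of the parameters is active per token. The toy PyTorch sketch below shows that idea; all names, sizes, the top-k routing, and the dense per-expert loop are illustrative assumptions, not the actual DeepSeek-V2 architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy Mixture-of-Experts feed-forward layer: a router scores experts per
    token, the top-k experts are selected, and their outputs are mixed by the
    normalized router weights. Sizes and routing are illustrative only."""

    def __init__(self, d_model=512, d_hidden=1024, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # token -> expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, x):                        # x: (batch, seq, d_model)
        scores = self.router(x)                  # (batch, seq, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # normalize over chosen experts
        out = torch.zeros_like(x)
        # Dense loop for clarity; real MoE implementations dispatch tokens to
        # experts and combine results with specialized kernels.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)      # tokens routed to expert e
                out = out + mask * weights[..., k:k + 1] * expert(x)
        return out

# Example: route a small batch of token embeddings through the toy layer.
layer = ToyMoELayer()
tokens = torch.randn(2, 16, 512)
print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```

The practical payoff of this design is that total parameter count (capacity) can grow with the number of experts while per-token compute stays roughly constant, since only the top-k experts run for each token.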
Why it matters?
This research is significant because it provides an open-source alternative to powerful coding models like GPT-4 Turbo, making advanced coding tools more accessible to developers and researchers. By expanding language support and improving performance, DeepSeek-Coder-V2 can help programmers write code more efficiently and tackle more challenging problems. This advancement could lead to better software development practices and innovations in AI-assisted programming.
Abstract
We present DeepSeek-Coder-V2, an open-source Mixture-of-Experts (MoE) code language model that achieves performance comparable to GPT4-Turbo in code-specific tasks. Specifically, DeepSeek-Coder-V2 is further pre-trained from an intermediate checkpoint of DeepSeek-V2 with an additional 6 trillion tokens. Through this continued pre-training, DeepSeek-Coder-V2 substantially enhances the coding and mathematical reasoning capabilities of DeepSeek-V2, while maintaining comparable performance in general language tasks. Compared to DeepSeek-Coder-33B, DeepSeek-Coder-V2 demonstrates significant advancements in various aspects of code-related tasks, as well as reasoning and general capabilities. Additionally, DeepSeek-Coder-V2 expands its support for programming languages from 86 to 338, while extending the context length from 16K to 128K. In standard benchmark evaluations, DeepSeek-Coder-V2 achieves superior performance compared to closed-source models such as GPT4-Turbo, Claude 3 Opus, and Gemini 1.5 Pro in coding and math benchmarks.