DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models

DeepSeek-AI, Aixin Liu, Aoxue Mei, Bangcai Lin, Bing Xue, Bingxuan Wang, Bingzheng Xu, Bochao Wu, Bowei Zhang, Chaofan Lin, Chen Dong, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenhao Xu, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Erhang Li, Fangqi Zhou

2025-12-03

Summary

This paper introduces DeepSeek-V3.2, a new AI model designed to be both computationally efficient and highly capable at complex reasoning and problem-solving, especially when acting as an 'agent' that can use tools.

What's the problem?

Existing large language models face a trade-off: strong reasoning typically demands a lot of computing power, which makes the models slow and expensive to run. On top of that, getting these models to reliably use tools to solve problems, like a virtual assistant would, is difficult because it requires specific training data that is hard to create at scale.

What's the solution?

The researchers tackled these problems in three main ways. First, they created a new 'attention' mechanism called DeepSeek Sparse Attention (DSA), which lets the model process long pieces of text with far less computation. Second, they invested a large amount of extra computing power *after* the initial training, fine-tuning the model with reinforcement learning until it performed as well as, or in its high-compute variant even better than, models like GPT-5. Finally, they built a pipeline that automatically generates large numbers of training examples teaching the model to use tools effectively, improving its instruction-following and its handling of complex, interactive tasks.
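The core idea behind sparse attention can be illustrated with a minimal top-k sketch. Note that this is a simplified illustration of the general technique, not DeepSeek's actual DSA implementation; the function name and all details here are assumptions for demonstration only.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=32):
    """Single-query sparse attention (illustrative sketch, not DSA):
    attend only to the k most relevant keys instead of all of them."""
    scores = K @ q / np.sqrt(q.shape[0])    # relevance score for each key
    idx = np.argpartition(scores, -k)[-k:]  # indices of the top-k keys
    sel = scores[idx]
    w = np.exp(sel - sel.max())
    w /= w.sum()                            # softmax only over selected keys
    return w @ V[idx]                       # weighted sum of k selected values

# The softmax and value-mixing cost now scales with k, not with the
# sequence length n. (Scoring here is still O(n); the paper's DSA uses
# a lightweight indexer to make the selection step itself cheap.)
rng = np.random.default_rng(0)
n, d = 1024, 64
out = topk_sparse_attention(rng.standard_normal(d),
                            rng.standard_normal((n, d)),
                            rng.standard_normal((n, d)))
print(out.shape)  # (64,)
```

The payoff is that for long contexts, each query mixes information from only a small selected subset of positions, which is what keeps the computation manageable as the text grows.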

Why it matters?

DeepSeek-V3.2 is important because it shows that a powerful AI can be built without requiring massive computing resources to run. It also demonstrates strong problem-solving ability, achieving gold-medal-level scores in challenging competitions such as the International Mathematical Olympiad and the International Olympiad in Informatics. Finally, it marks a step forward in creating AI agents that can reliably interact with tools to help people.

Abstract

We introduce DeepSeek-V3.2, a model that harmonizes high computational efficiency with superior reasoning and agent performance. The key technical breakthroughs of DeepSeek-V3.2 are as follows: (1) DeepSeek Sparse Attention (DSA): We introduce DSA, an efficient attention mechanism that substantially reduces computational complexity while preserving model performance in long-context scenarios. (2) Scalable Reinforcement Learning Framework: By implementing a robust reinforcement learning protocol and scaling post-training compute, DeepSeek-V3.2 performs comparably to GPT-5. Notably, our high-compute variant, DeepSeek-V3.2-Speciale, surpasses GPT-5 and exhibits reasoning proficiency on par with Gemini-3.0-Pro, achieving gold-medal performance in both the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI). (3) Large-Scale Agentic Task Synthesis Pipeline: To integrate reasoning into tool-use scenarios, we developed a novel synthesis pipeline that systematically generates training data at scale. This methodology facilitates scalable agentic post-training, yielding substantial improvements in generalization and instruction-following robustness within complex, interactive environments.