DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang
2025-01-23

Summary
This paper introduces DeepSeek-R1-Zero and DeepSeek-R1, two large language models trained to reason through hard problems such as math and coding. The key idea is to use large-scale reinforcement learning (RL), in which the model is rewarded for producing correct, well-formatted answers, to incentivize reasoning with little or no supervised fine-tuning up front. The authors also distill the resulting reasoning ability into six smaller, open-source dense models.
What's the problem?
Getting large language models to reason step by step usually relies on supervised fine-tuning with large collections of curated reasoning examples, which are expensive to build and limit how far the models can improve. The open question this paper tackles is whether strong reasoning can emerge from reinforcement learning alone, with the model rewarded simply for reaching verifiably correct answers, and how to make the resulting model readable and practical to use.
What's the solution?
The researchers train two models. DeepSeek-R1-Zero applies large-scale reinforcement learning directly to a base model, with no supervised fine-tuning beforehand; through RL alone it develops powerful reasoning behaviors such as self-verification, reflection, and long chains of thought, but its outputs can be hard to read and sometimes mix languages. DeepSeek-R1 addresses this with a multi-stage pipeline: a small amount of "cold-start" data is used to fine-tune the model before RL, and further training stages refine both reasoning quality and readability. Finally, the reasoning ability of DeepSeek-R1 is distilled into six smaller dense models (1.5B to 70B parameters) built on Qwen and Llama. A sketch of the kind of rule-based reward that drives the RL stage appears below.
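The reward signal in the RL stage is described by the authors as rule-based: one component checks whether the final answer is verifiably correct, and another checks that the response follows the expected think-then-answer template. The Python sketch below illustrates that idea; the function names, the exact template regex, and the equal weighting of the two components are illustrative assumptions rather than the paper's actual implementation.

```python
import re

# Responses are expected to wrap reasoning in <think> tags and the final
# answer in <answer> tags; this regex is an assumed stand-in for the template.
THINK_ANSWER_PATTERN = re.compile(
    r"<think>.*?</think>\s*<answer>(.*?)</answer>", re.DOTALL
)

def format_reward(completion: str) -> float:
    """1.0 if the completion follows the <think>...</think><answer>...</answer> template."""
    return 1.0 if THINK_ANSWER_PATTERN.search(completion) else 0.0

def accuracy_reward(completion: str, reference_answer: str) -> float:
    """1.0 if the extracted final answer matches a known-correct answer (e.g. for math)."""
    match = THINK_ANSWER_PATTERN.search(completion)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == reference_answer.strip() else 0.0

def total_reward(completion: str, reference_answer: str) -> float:
    # Equal weighting is an assumption for illustration; the paper does not
    # publish exact weights in the abstract.
    return accuracy_reward(completion, reference_answer) + format_reward(completion)

# Example: a well-formed, correct completion earns the maximum reward of 2.0.
sample = "<think>2 + 2 equals 4.</think> <answer>4</answer>"
print(total_reward(sample, "4"))
```

In practice a reward like this is cheap to compute and hard to game, which is what makes RL without a learned reward model feasible at scale.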
Why it matters?
This matters because it shows that strong reasoning ability can be incentivized with reinforcement learning rather than by relying on large supervised datasets of reasoning examples, which are costly to create. DeepSeek-R1 reaches performance comparable to OpenAI-o1-1217 on reasoning tasks, and because the authors open-source the full models along with six distilled dense models (1.5B to 70B parameters), researchers and developers can study, reproduce, and build on these reasoning capabilities without training such models from scratch.
Abstract
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
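The distillation mentioned in the abstract is described as straightforward supervised fine-tuning: reasoning traces are sampled from DeepSeek-R1 and a smaller dense Qwen or Llama checkpoint is trained on them with the usual next-token prediction objective. The sketch below illustrates that recipe under stated assumptions; the student checkpoint name, the toy training example, and the learning rate are placeholders, not the paper's actual configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder student checkpoint; any small dense Qwen or Llama model would play this role.
student_name = "Qwen/Qwen2.5-1.5B"
tokenizer = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForCausalLM.from_pretrained(student_name)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

# Stand-in for reasoning traces sampled from the teacher (DeepSeek-R1);
# the real recipe uses a large curated set of such traces.
teacher_traces = [
    "Question: What is 2 + 2?\n<think>Adding 2 and 2 gives 4.</think>\n<answer>4</answer>",
]

student.train()
for trace in teacher_traces:
    batch = tokenizer(trace, return_tensors="pt")
    # Ordinary supervised next-token prediction on the teacher-generated trace.
    outputs = student(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because the student only imitates the teacher's traces with standard SFT, this step needs no reward model or RL infrastructure, which is part of why the distilled 1.5B-70B models are practical to release and reproduce.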