
Apriel-1.5-15b-Thinker

Shruthan Radhakrishna, Aman Tiwari, Aanjaneya Shukla, Masoud Hashemi, Rishabh Maheshwary, Shiva Krishna Reddy Malay, Jash Mehta, Pulkit Pattnaik, Saloni Mittal, Khalil Slimi, Kelechi Ogueji, Akintunde Oladipo, Soham Parikh, Oluwanifemi Bamgbose, Toby Liang, Ahmed Masry, Khyati Mahajan, Sai Rajeswar Mudumba, Vikas Yadav, Sathwik Tejaswi Madhusudhan, Torsten Scholak, Sagar Davasam

2025-10-06


Summary

This paper introduces Apriel-1.5-15B-Thinker, an open-weights artificial intelligence model that understands both images and text and can reason about them. It has 15 billion parameters, but its strong performance comes from a carefully designed training process rather than from sheer size.

What's the problem?

Building AI models that can truly *understand* images and text, and then use that understanding to solve problems, is incredibly difficult and usually requires massive amounts of computing power and data. Existing high-performing models are often very large and expensive to run, making them inaccessible to many researchers and organizations. The challenge is to create a powerful multimodal model without needing enormous resources.

What's the solution?

The researchers started from an existing model, Pixtral-12B, and improved it in three stages. First, they applied "depth upscaling," adding layers to the network to expand its reasoning capacity without pretraining from scratch (a rough sketch of the idea is shown below). Second, they ran staged continual pre-training: first building foundational text and image understanding, then targeting visual reasoning with specially created synthetic images that test spatial relationships, how scenes are composed, and fine-grained details. Finally, they fine-tuned the model on high-quality, text-only question-and-answer examples with explicit step-by-step reasoning in areas such as math, coding, science, and tool use. Notably, they did not use reinforcement learning or preference optimization, relying instead on the quality of the training data.
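The first step, depth upscaling, is the most architectural of the three. As a rough illustration only, the sketch below shows one common way to "deepen" a pretrained transformer: duplicate a contiguous block of its layers so the enlarged model starts from trained weights everywhere rather than random initialization. The helper name `depth_upscale` and the layer indices are hypothetical and are not taken from the paper, which may select and initialize layers differently.

```python
import copy

import torch.nn as nn


def depth_upscale(layers: nn.ModuleList, start: int, end: int) -> nn.ModuleList:
    """Hypothetical sketch of depth upscaling: copy the transformer blocks in
    [start, end) and splice the copies in right after the originals, so every
    block in the deeper stack begins from pretrained weights."""
    blocks = list(layers)
    duplicated = [copy.deepcopy(block) for block in blocks[start:end]]
    return nn.ModuleList(blocks[:end] + duplicated + blocks[end:])


# Illustrative usage (indices invented for the example):
# grow a 40-layer decoder to 48 layers by duplicating layers 16-24.
# model.layers = depth_upscale(model.layers, start=16, end=24)
```

The appeal of this kind of recipe is that the new layers already compute something sensible, so continual pre-training can focus on refining the deeper model instead of learning from scratch.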

Why it matters?

This work is important because it shows that you don't *always* need a gigantic AI model to get excellent results. By carefully designing the training process and using targeted data, the researchers created a model that performs competitively with much larger models while still fitting on a single GPU. This makes advanced AI capabilities accessible to a wider range of people and organizations, and encourages further open-source research in the field.

Abstract

We present Apriel-1.5-15B-Thinker, a 15-billion parameter open-weights multimodal reasoning model that achieves frontier-level performance through training design rather than sheer scale. Starting from Pixtral-12B, we apply a progressive three-stage methodology: (1) depth upscaling to expand reasoning capacity without pretraining from scratch, (2) staged continual pre-training that first develops foundational text and vision understanding, then enhances visual reasoning through targeted synthetic data generation addressing spatial structure, compositional understanding, and fine-grained perception, and (3) high-quality text-only supervised fine-tuning on curated instruction-response pairs with explicit reasoning traces spanning mathematics, coding, science, and tool use. Notably, our model achieves competitive results without reinforcement learning or preference optimization, isolating the contribution of our data-centric continual pre-training approach. On the Artificial Analysis Intelligence Index, Apriel-1.5-15B-Thinker attains a score of 52, matching DeepSeek-R1-0528 despite requiring significantly fewer computational resources. Across ten image benchmarks, its performance is on average within five points of Gemini-2.5-Flash and Claude Sonnet-3.7, a key achievement for a model operating within single-GPU deployment constraints. Our results demonstrate that thoughtful mid-training design can close substantial capability gaps without massive scale, making frontier-level multimodal reasoning accessible to organizations with limited infrastructure. We release the model checkpoint, all training recipes, and evaluation protocols under the MIT license to advance open-source research.