Instella: Fully Open Language Models with Stellar Performance

Jiang Liu, Jialian Wu, Xiaodong Yu, Yusheng Su, Prakamya Mishra, Gowtham Ramesh, Sudhanshu Ranjan, Chaitanya Manem, Ximeng Sun, Ze Wang, Pratik Prabhanjan Brahma, Zicheng Liu, Emad Barsoum

2025-11-18

Summary

This paper introduces Instella, a family of large language models that is completely open to the public, meaning anyone can inspect how the models were built and use them for their own projects.

What's the problem?

Most of the highest-performing large language models are kept secret by the companies that create them, or are only partially shared. This makes it hard for researchers to understand how they work, verify their results, or build upon them. The lack of transparency and reproducibility slows progress across the field.

What's the solution?

The researchers created Instella, a family of language models with three billion parameters, and made everything freely available: the models themselves, the training data, and the code. They trained Instella on AMD Instinct MI300X GPUs in three stages: large-scale pre-training, instruction tuning (teaching the model to follow directions), and alignment with human preferences. They also released two specialized versions: Instella-Long, which handles very long inputs of up to 128K tokens, and Instella-Math, which is tuned for mathematical reasoning.

Why it matters?

Instella provides a performant, transparent, and accessible alternative to closed-source models. This lets more researchers and developers study, improve, and build upon the technology, speeding up progress in artificial intelligence and making the field more open and collaborative.

Abstract

Large language models (LLMs) have demonstrated remarkable performance across a wide range of tasks, yet the majority of high-performing models remain closed-source or partially open, limiting transparency and reproducibility. In this work, we introduce Instella, a family of fully open three billion parameter language models trained entirely on openly available data and codebase. Powered by AMD Instinct MI300X GPUs, Instella is developed through large-scale pre-training, general-purpose instruction tuning, and alignment with human preferences. Despite using substantially fewer pre-training tokens than many contemporaries, Instella achieves state-of-the-art results among fully open models and is competitive with leading open-weight models of comparable size. We further release two specialized variants: Instella-Long, capable of handling context lengths up to 128K tokens, and Instella-Math, a reasoning-focused model enhanced through supervised fine-tuning and reinforcement learning on mathematical tasks. Together, these contributions establish Instella as a transparent, performant, and versatile alternative for the community, advancing the goal of open and reproducible language modeling research.