NeoBERT: A Next-Generation BERT
Lola Le Breton, Quentin Fournier, Mariam El Mezouar, Sarath Chandar
2025-02-28
Summary
This paper introduces NeoBERT, a new and improved version of BERT, a type of AI model used for understanding language. NeoBERT is designed to be more powerful and efficient than older models like BERT and RoBERTa, while still being easy to drop into existing systems.
What's the problem?
While generative AI language models (the kind behind chatbots, like LLaMA) have made big improvements recently, the kind of models used for understanding text (called encoders) haven't kept up. This means that many important language tasks aren't benefiting from the latest advancements in AI technology.
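To give a feel for what an encoder like BERT or NeoBERT is trained to do, here is a toy sketch of the masked-token objective: some words in a sentence are hidden, and the model must use the surrounding context (on both sides) to guess them back. This is an illustrative example only, not the paper's code, and the token list and masking rate are made up for the demo.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Hide roughly `mask_prob` of the tokens behind [MASK].

    During pre-training, a BERT-style encoder sees the masked sentence
    and is trained to predict the original hidden tokens, which forces
    it to build an understanding of the full context.
    """
    rng = random.Random(seed)  # fixed seed so the demo is repeatable
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(MASK)
            targets[i] = tok  # the label the model must recover
        else:
            masked.append(tok)
    return masked, targets

sentence = "encoders read the whole sentence in both directions".split()
masked, targets = mask_tokens(sentence)
print(masked)   # the corrupted input the encoder sees
print(targets)  # the hidden tokens it must predict
```

The real pre-training pipeline works on subword tokens and much longer sequences, but the prediction task is the same in spirit.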
What's the solution?
The researchers created NeoBERT, which combines the latest improvements in AI architecture, uses modern training data, and applies better training methods. NeoBERT can handle longer pieces of text (up to 4,096 tokens, roughly word pieces) and outperforms larger models on the MTEB benchmark, even though it is smaller and more efficient. They also carefully tested the effect of each individual improvement on the GLUE benchmark and created a standard way to fine-tune and evaluate these types of models.
Why it matters?
This matters because NeoBERT could make many language-related AI tasks work better, like understanding text, answering questions, or analyzing sentiment. It's designed to be easily used in place of older models, which means researchers and companies can quickly benefit from these improvements. By making all their work openly available, the researchers are helping to speed up progress in AI language understanding and its real-world applications.
Abstract
Recent innovations in architecture, pre-training, and fine-tuning have led to the remarkable in-context learning and reasoning abilities of large auto-regressive language models such as LLaMA and DeepSeek. In contrast, encoders like BERT and RoBERTa have not seen the same level of progress despite being foundational for many downstream NLP applications. To bridge this gap, we introduce NeoBERT, a next-generation encoder that redefines the capabilities of bidirectional models by integrating state-of-the-art advancements in architecture, modern data, and optimized pre-training methodologies. NeoBERT is designed for seamless adoption: it serves as a plug-and-play replacement for existing base models, relies on an optimal depth-to-width ratio, and leverages an extended context length of 4,096 tokens. Despite its compact 250M parameter footprint, it achieves state-of-the-art results on the massive MTEB benchmark, outperforming BERT large, RoBERTa large, NomicBERT, and ModernBERT under identical fine-tuning conditions. In addition, we rigorously evaluate the impact of each modification on GLUE and design a uniform fine-tuning and evaluation framework for MTEB. We release all code, data, checkpoints, and training scripts to accelerate research and real-world adoption.