Gemma 2: Improving Open Language Models at a Practical Size

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk

2024-08-02

Summary

This paper introduces Gemma 2, a family of lightweight open language models designed to deliver strong performance at a practical size. It incorporates several technical improvements that enhance its capabilities compared to previous models in the Gemma family.

What's the problem?

Many existing language models are very large and require substantial computational resources, making them expensive and difficult to use for everyday applications. As a result, there is a need for smaller models that can still deliver high performance without requiring extensive hardware.

What's the solution?

Gemma 2 addresses this issue with a Transformer architecture that incorporates techniques such as interleaved local-global (sliding window) attention, grouped-query attention, and knowledge distillation. Distillation lets the smaller models learn from the full output distribution of a larger teacher model rather than from next-token prediction alone, so they extract more signal from each training token while maintaining or even improving performance. Gemma 2 comes in different sizes (from 2 billion to 27 billion parameters) and has been trained on a large amount of data, making it competitive with models two to three times larger. The authors also released the models to the public, allowing others to use and build upon their work.
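
To make the distillation idea concrete, here is a minimal sketch of token-level knowledge distillation in PyTorch: a student model minimizes the KL divergence between its predictions and a teacher's predicted distribution over the vocabulary. The tensor shapes, temperature, and toy random logits are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of token-level knowledge distillation (assumed setup, not the
# paper's implementation): the student matches the teacher's next-token
# distribution instead of a one-hot next-token target.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL(teacher || student) over the vocabulary, averaged over tokens.

    student_logits, teacher_logits: [batch, seq_len, vocab_size]
    """
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # Per-token KL divergence, summed over the vocabulary, then averaged.
    kl = (teacher_probs * (teacher_probs.clamp_min(1e-9).log() - student_log_probs)).sum(-1)
    return (t ** 2) * kl.mean()

# Toy usage with random logits standing in for real model outputs.
vocab, batch, seq = 32, 2, 8
student_logits = torch.randn(batch, seq, vocab, requires_grad=True)
teacher_logits = torch.randn(batch, seq, vocab)  # frozen teacher, no gradient
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```

In practice the distillation term can be combined with, or replace, the standard cross-entropy loss on the observed next token; the sketch above shows only the teacher-matching term.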

Why it matters?

This research is significant because it provides a more accessible option for developers and researchers who want to use advanced language models without the high costs associated with larger models. By improving the efficiency of these models, Gemma 2 can help democratize access to powerful AI tools, enabling more people to create innovative applications in areas like natural language processing, chatbots, and content generation.

Abstract

In this work, we introduce Gemma 2, a new addition to the Gemma family of lightweight, state-of-the-art open models, ranging in scale from 2 billion to 27 billion parameters. In this new version, we apply several known technical modifications to the Transformer architecture, such as interleaving local-global attentions (Beltagy et al., 2020a) and group-query attention (Ainslie et al., 2023). We also train the 2B and 9B models with knowledge distillation (Hinton et al., 2015) instead of next token prediction. The resulting models deliver the best performance for their size, and even offer competitive alternatives to models that are 2-3 times bigger. We release all our models to the community.
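
As a rough illustration of the interleaved local-global attention mentioned in the abstract, the sketch below alternates attention masks across layers: sliding-window (local) causal masks on some layers and full causal (global) masks on others. The sequence length, window size, and layer count are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of interleaving local (sliding-window) and global attention
# masks across layers. All sizes below are assumptions for illustration only.
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Causal mask where each token also sees at most the previous `window` tokens."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (j <= i) & (j > i - window)

def global_causal_mask(seq_len: int) -> torch.Tensor:
    """Standard causal mask: each token attends to every earlier token."""
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

seq_len, window, num_layers = 16, 4, 6
for layer in range(num_layers):
    # Alternate: even layers use local sliding-window attention, odd layers use global attention.
    local = layer % 2 == 0
    mask = sliding_window_mask(seq_len, window) if local else global_causal_mask(seq_len)
    print(f"layer {layer}: {'local' if local else 'global'}, "
          f"tokens visible to last position = {int(mask[-1].sum())}")
```

The local layers keep attention cost and memory bounded by the window size, while the interleaved global layers preserve long-range information flow across the full context.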