
Puzzle: Distillation-Based NAS for Inference-Optimized LLMs

Akhiad Bercovich, Tomer Ronen, Talor Abramovich, Nir Ailon, Nave Assaf, Mohammad Dabbah, Ido Galil, Amnon Geifman, Yonatan Geifman, Izhak Golan, Netanel Haber, Ehud Karpas, Itay Levy, Shahar Mor, Zach Moshe, Najeeb Nabwani, Omri Puny, Ran Rubin, Itamar Schen, Ido Shahaf, Oren Tropp, Omer Ullman Argov

2024-12-02

Summary

This paper presents Puzzle, a framework that makes large language models (LLMs) faster and cheaper to run on specific hardware while preserving nearly all of their capabilities.

What's the problem?

Large language models are very powerful, but they are slow and expensive to run, especially when they have tens of billions of parameters. That makes them hard to use in practical applications where speed and cost matter. Increasing the parameter count can improve accuracy, but it also widens the gap between what the best models can do and what can realistically be deployed.

What's the solution?

Puzzle addresses this problem with neural architecture search (NAS), searching for the model configuration that runs best on specific target hardware. It trains many alternative versions of each transformer block in parallel using blockwise local knowledge distillation, then uses mixed-integer programming to select the combination of blocks that satisfies the hardware constraints. Applied to Llama-3.1-70B-Instruct, this produced Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B), a model with 2.17 times higher inference throughput that still retains 98.4% of the original model's capabilities and fits on a single NVIDIA H100 GPU, making it far more accessible for users.
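To give a feel for the blockwise local knowledge distillation step, here is a minimal sketch in PyTorch. The block definition, dimensions, and training loop are invented for illustration; the actual framework distills alternative replacements for the Llama-3.1-70B transformer blocks, with each candidate trained independently to mimic its parent block's outputs.

```python
# Toy sketch of blockwise local knowledge distillation (BLD).
# The "parent" block stands in for a frozen teacher block; the "child" is a
# cheaper candidate variant. All sizes here are hypothetical.
import torch
import torch.nn as nn

HIDDEN, FFN_PARENT, FFN_CHILD = 64, 256, 128  # toy dimensions

class Block(nn.Module):
    """A simplified transformer-style block: attention + MLP with residuals."""
    def __init__(self, hidden, ffn):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden, ffn), nn.GELU(), nn.Linear(ffn, hidden))
        self.norm1, self.norm2 = nn.LayerNorm(hidden), nn.LayerNorm(hidden)

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

parent = Block(HIDDEN, FFN_PARENT).eval()   # frozen teacher block
child = Block(HIDDEN, FFN_CHILD)            # cheaper candidate block
opt = torch.optim.AdamW(child.parameters(), lr=1e-3)

# Local distillation: the child only needs the parent block's inputs and
# outputs, so every candidate block can be trained independently and in parallel.
for step in range(200):
    x = torch.randn(8, 16, HIDDEN)                   # stand-in for cached block inputs
    with torch.no_grad():
        target = parent(x)                           # parent block's output on the same input
    loss = nn.functional.mse_loss(child(x), target)  # match the parent's input-output mapping
    opt.zero_grad(); loss.backward(); opt.step()
```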

Why it matters?

This research is important because it shows a way to improve the efficiency of powerful AI models, making them easier and cheaper to use in real-world applications. By optimizing how these models work without sacrificing their abilities, Puzzle can help bring advanced language processing tools to more people and industries, enhancing various fields like education, customer service, and content creation.

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities, but their adoption is limited by high computational costs during inference. While increasing parameter counts enhances accuracy, it also widens the gap between state-of-the-art capabilities and practical deployability. We present Puzzle, a framework to accelerate LLM inference on specific hardware while preserving their capabilities. Through an innovative application of neural architecture search (NAS) at an unprecedented scale, Puzzle systematically optimizes models with tens of billions of parameters under hardware constraints. Our approach utilizes blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization. We demonstrate the real-world impact of our framework through Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B), a publicly available model derived from Llama-3.1-70B-Instruct. Nemotron-51B achieves a 2.17x inference throughput speedup, fitting on a single NVIDIA H100 GPU while preserving 98.4% of the original model's capabilities. Nemotron-51B currently stands as the most accurate language model capable of inference on a single GPU with large batch sizes. Remarkably, this transformation required just 45B training tokens, compared to over 15T tokens used for the 70B model it was derived from. This establishes a new paradigm where powerful models can be optimized for efficient deployment with only negligible compromise of their capabilities, demonstrating that inference performance, not parameter count alone, should guide model selection. With the release of Nemotron-51B and the presentation of the Puzzle framework, we provide practitioners immediate access to state-of-the-art language modeling capabilities at significantly reduced computational costs.
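As a rough illustration of the mixed-integer programming step mentioned in the abstract, the sketch below picks one block variant per layer so as to maximize an estimated quality score under a hardware budget. The per-variant scores, costs, the budget, and the use of PuLP with the CBC solver are all assumptions made for the example, not details from the paper.

```python
# Toy mixed-integer program for block selection: choose exactly one candidate
# block per layer, maximizing summed quality subject to a total cost budget.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, PULP_CBC_CMD

# Per layer: (estimated quality, estimated inference cost) for each candidate block.
# These numbers are made up for illustration.
variants = {
    0: [(1.00, 10.0), (0.97, 6.0), (0.90, 3.0)],
    1: [(1.00, 10.0), (0.96, 5.5), (0.88, 2.5)],
    2: [(1.00, 10.0), (0.99, 7.0), (0.92, 4.0)],
}
budget = 20.0  # hypothetical total cost allowed on the target hardware

prob = LpProblem("block_selection", LpMaximize)
x = {(l, v): LpVariable(f"x_{l}_{v}", cat=LpBinary)
     for l, opts in variants.items() for v in range(len(opts))}

# Objective: maximize the sum of per-block quality estimates.
prob += lpSum(variants[l][v][0] * x[l, v] for (l, v) in x)
# Exactly one variant must be chosen for every layer.
for l, opts in variants.items():
    prob += lpSum(x[l, v] for v in range(len(opts))) == 1
# The chosen architecture must fit within the hardware budget.
prob += lpSum(variants[l][v][1] * x[l, v] for (l, v) in x) <= budget

prob.solve(PULP_CBC_CMD(msg=0))
chosen = {l: next(v for v in range(len(variants[l])) if x[l, v].value() == 1)
          for l in variants}
print(chosen)  # chosen variant index per layer, e.g. {0: 1, 1: 1, 2: 1}
```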