NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks

Yongchang Hao, Yanshuai Cao, Lili Mou

2024-11-01

Summary

This paper introduces NeuZip, a new method for compressing neural networks to make them more memory-efficient during training and inference without losing performance.

What's the problem?

Neural networks perform better when they have more parameters, but this increases their size, making them difficult to run on devices with limited memory. Traditional methods for reducing the size of these models, like quantization, often lead to a decrease in performance, which is a significant drawback when trying to use these models in real-world applications.

What's the solution?

NeuZip addresses this issue with a new weight compression scheme that exploits the low entropy of the floating-point numbers in neural networks, reducing memory usage without sacrificing the model's effectiveness. For example, it reduces the memory needed to train a Llama-3 8B model from 31GB to under 16GB while leaving the training dynamics fully unchanged. During inference, it can cut memory usage by more than half while performing nearly as well as before.
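To build intuition for why this works, here is a small illustrative sketch (not NeuZip's actual implementation) that estimates the Shannon entropy of the exponent bits of float16 weights. The key observation behind entropy-based compression is that trained weights cluster near zero, so their exponent bits carry far fewer than the maximum possible bits of information and compress well losslessly. The weight distribution and the float16 format below are assumptions chosen for illustration.

```python
import numpy as np

def exponent_entropy(weights: np.ndarray) -> float:
    """Entropy (in bits) of the 5-bit exponent field of float16 values."""
    # Reinterpret each float16 as its raw 16-bit pattern.
    bits = weights.astype(np.float16).view(np.uint16).astype(np.int64)
    # The exponent occupies bits 10-14 in IEEE 754 half precision.
    exponents = (bits >> 10) & 0x1F
    counts = np.bincount(exponents, minlength=32)
    probs = counts[counts > 0] / counts.sum()
    return float(-(probs * np.log2(probs)).sum())

# Weights drawn from a typical near-zero initialization distribution
# (a hypothetical stand-in for real trained weights).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=100_000).astype(np.float16)

h = exponent_entropy(w)
print(f"exponent entropy: {h:.2f} bits (max possible: 5.00)")
```

Because the exponent entropy is well below the 5 bits physically stored, an entropy coder can store the same exponents in fewer bits with no loss of information, which is the intuition behind compressing weights losslessly during training.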

Why it matters?

This research is important because it allows larger and more complex neural networks to be used on devices with limited memory, making advanced AI technology more accessible. By improving how we manage memory in neural networks, NeuZip can enhance various applications, from mobile apps to cloud computing services, ultimately leading to better performance and user experiences.

Abstract

The performance of neural networks improves when more parameters are used. However, the model sizes are constrained by the available on-device memory during training and inference. Although applying techniques like quantization can alleviate the constraint, they suffer from performance degradation. In this work, we introduce NeuZip, a new weight compression scheme based on the entropy of floating-point numbers in neural networks. With NeuZip, we are able to achieve memory-efficient training and inference without sacrificing performance. Notably, we significantly reduce the memory footprint of training a Llama-3 8B model from 31GB to less than 16GB, while keeping the training dynamics fully unchanged. In inference, our method can reduce memory usage by more than half while maintaining near-lossless performance. Our code is publicly available.