Quantizing Large Language Models for Code Generation: A Differentiated Replication

Alessandro Giagnorio, Antonio Mastropaolo, Saima Afrin, Massimiliano Di Penta, Gabriele Bavota

2025-03-13

Summary

This paper is about shrinking big code-writing AI models so they need far less computer memory, making them lighter and cheaper to run without making them worse at coding.

What's the problem?

Huge AI models that write code work well but need tons of memory and energy, making them expensive and hard to use on regular computers or phones.

What's the solution?

Researchers squeezed these models by lowering the precision of their numbers (like turning detailed decimals into simpler whole numbers) and found they still work well at 4-bit precision, with code-focused calibration data helping to limit the damage at ultra-low settings of 3 and 2 bits.
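To make the "lower the precision of their numbers" idea concrete, here is a minimal NumPy sketch of the basic trick behind quantization: rounding float weights onto a small signed-integer grid and keeping a scale factor to map them back. This is only an illustration of the general concept, not the methods evaluated in the paper, which are considerably more sophisticated and can use calibration data.

```python
# Toy illustration (not the paper's method): symmetric round-to-nearest
# quantization of a weight matrix to a small signed-integer grid, plus the
# scale factor needed to map the integers back to approximate floats.
import numpy as np

def quantize_symmetric(weights, n_bits=4):
    """Round float weights to signed integers representable with n_bits."""
    q_max = 2 ** (n_bits - 1) - 1                  # e.g., 7 for 4-bit signed ints
    scale = np.abs(weights).max() / q_max          # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -q_max - 1, q_max).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map quantized integers back to approximate float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(512, 512)).astype(np.float32)  # fake layer weights
q, scale = quantize_symmetric(w, n_bits=4)
w_hat = dequantize(q, scale)
print("mean absolute rounding error:", np.abs(w - w_hat).mean())
```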

Why it matters?

This lets people run powerful coding assistants on cheaper devices, reducing costs and energy use while keeping AI tools effective for programming help.
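As a rough, back-of-envelope illustration of why this matters for deployment (the numbers below are illustrative, not results from the paper): storing a 34B-parameter model in 16-bit floats versus 4-bit integers.

```python
# Back-of-envelope estimate (illustrative only; ignores scales, zero-points and
# other overheads, which is why reported reductions are somewhat smaller).
params = 34e9                       # a 34B-parameter code model
fp16_gb = params * 2 / 1e9          # 2 bytes per parameter   -> ~68 GB
int4_gb = params * 0.5 / 1e9        # 0.5 bytes per parameter -> ~17 GB
saving = 100 * (1 - int4_gb / fp16_gb)
print(f"fp16: {fp16_gb:.0f} GB, 4-bit: {int4_gb:.0f} GB ({saving:.0f}% smaller)")
```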

Abstract

Large Language Models (LLMs) have shown an impressive capability in code generation and, specifically, in automatically implementing requirements described in natural language. LLM effectiveness generally increases with size: the higher the number of trainable parameters, the better the model's ability to implement code. However, when it comes to deploying LLM-based code generators, larger LLMs pose significant challenges related to their memory (and, consequently, carbon) footprint. A previous work by Wei et al. proposed to leverage quantization techniques to reduce the memory footprint of LLM-based code generators without substantially degrading their effectiveness. In short, they studied LLMs featuring up to 16B parameters, quantizing their precision from 32-bit floating point down to 8-bit integers and showing the limited impact of quantization on code generation performance. Given the fast pace at which LLM capabilities and quantization techniques are evolving, in this work we present a differentiated replication of the work by Wei et al. in which we consider (i) more recent and larger code-related LLMs, of up to 34B parameters; (ii) the latest advancements in model quantization techniques, which allow pushing the compression to the extreme quantization level of 2 bits per model parameter; and (iii) different types of calibration datasets to guide the quantization process, including code-specific ones. Our empirical evaluation reveals that the new frontier for LLM quantization is 4-bit precision, resulting in an average memory footprint reduction of 70% compared to the original model without observing any significant decrease in performance. Additionally, when the quantization becomes even more extreme (3 and 2 bits), a code-specific calibration dataset helps to limit the loss of performance.
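For readers who want to try 4-bit inference themselves, the sketch below uses Hugging Face transformers with bitsandbytes, a common off-the-shelf route. This is not the quantization pipeline evaluated in the paper (which studies dedicated techniques and calibration datasets, including code-specific ones), and the model name is only an example of a large code LLM.

```python
# Minimal sketch: load a code LLM with 4-bit weights via transformers + bitsandbytes.
# Requires the transformers, accelerate, and bitsandbytes packages and a GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit values
    bnb_4bit_quant_type="nf4",              # normal-float 4-bit data type
    bnb_4bit_compute_dtype=torch.float16,   # de-quantize to fp16 for matmuls
)

model_name = "codellama/CodeLlama-34b-hf"   # example of a large code model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "# Write a Python function that checks whether a string is a palindrome\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```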