As large language models (LLMs) have grown larger, with ever more parameters, new techniques to reduce their memory usage have been proposed.
One of the most effective ways to reduce a model's size in memory is quantization. You can think of quantization as a compression technique for LLMs. In practice, its main goal is to lower the precision of the LLM's weights, typically from 16-bit to 8-bit, 4-bit, or even 3-bit, with minimal performance degradation.
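To make this concrete, here is a minimal sketch of what "lowering the precision" means, using simple symmetric absmax quantization of 16/32-bit floating-point weights to 8-bit integers. This is a toy illustration, not the implementation used by GPTQ or bitsandbytes:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric absmax quantization: map weights to int8 with one scale."""
    scale = np.max(np.abs(w)) / 127.0          # largest weight maps to 127
    q = np.round(w / scale).astype(np.int8)    # 8-bit integer representation
    return q, scale

def dequantize(q, scale):
    """Recover approximate floating-point weights from the int8 values."""
    return q.astype(np.float32) * scale

# Toy weight vector standing in for one row of a weight matrix
w = np.array([0.12, -0.54, 0.33, 1.02, -0.91], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))            # rounding error <= scale / 2
```

Each weight now takes 1 byte instead of 2 (fp16) or 4 (fp32), at the cost of a small rounding error bounded by half a quantization step. Real methods refine this idea, for example by using one scale per block of weights rather than per tensor.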
There are two popular quantization methods for LLMs: GPTQ and bitsandbytes.
In this article, I discuss the main differences between these two approaches. Both have advantages and disadvantages that make them suitable for different use cases. I compare their memory usage and inference speed using Llama 2, and I also discuss their performance based on experiments from previous work.
Note: If you want to know more about quantization, I recommend reading this excellent introduction by Maxime Labonne:
GPTQ (Frantar et al., 2023) was designed to be applied to models that are ready to deploy. In other words, once a model is fully fine-tuned, GPTQ is applied to reduce its size.
GPTQ can lower the weight precision to 4-bit or 3-bit. In practice, GPTQ is mainly used for 4-bit quantization, as 3-bit quantization has been shown to be very unstable (Dettmers and Zettlemoyer, 2023).
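One way to see why 3-bit is so much harder than 4-bit is to look at how fast the rounding error grows as the bit width shrinks. The sketch below uses naive round-to-nearest quantization; GPTQ itself is more sophisticated (it uses approximate second-order information to compensate for the error), but the underlying precision/error trade-off is the same:

```python
import numpy as np

def rtn_error(w, bits):
    """Mean squared error of symmetric round-to-nearest quantization."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for 4-bit, 3 for 3-bit
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return np.mean((w - q * scale) ** 2)

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)   # stand-in for one weight matrix
err4 = rtn_error(w, bits=4)
err3 = rtn_error(w, bits=3)
# err3 is several times larger than err4: halving the number of
# quantization levels roughly quadruples the mean squared error.
```

This gap in reconstruction error is one reason 4-bit has become the practical sweet spot, while 3-bit tends to degrade model quality sharply.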
GPTQ quantizes the model without loading it entirely into memory; instead, it loads and quantizes the LLM module by module. Quantization also requires a small sample of data for calibration, which can take more than…