`llmcompressor` is an easy-to-use library for optimizing models for deployment with `vllm`, including:
- Comprehensive set of quantization algorithms for weight-only and activation quantization
- Seamless integration with Hugging Face models and repositories
- `safetensors`-based file format compatible with `vllm`
- Large model support via `accelerate`
✨ Read the announcement blog here! ✨
Supported formats:
- Activation Quantization: W8A8 (int8 and fp8)
- Mixed Precision: W4A16, W8A16
- 2:4 Semi-structured and Unstructured Sparsity
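Each format corresponds to a scheme string on the quantization modifiers. As a minimal sketch of the `W4A16` mixed-precision format listed above (the argument values are illustrative; see the weight-only `int4` example linked below for a maintained script):

```python
from llmcompressor.modifiers.quantization import GPTQModifier

# Sketch: W4A16 = 4-bit weights, 16-bit activations.
# Scheme strings such as "W4A16", "W8A16", and "W8A8" select the format;
# the lm_head is typically left unquantized.
recipe = GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])
```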
Supported algorithms:
- Simple PTQ
- GPTQ
- SmoothQuant
- SparseGPT
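For instance, the 2:4 sparsity format pairs with the SparseGPT algorithm. A minimal sketch, assuming the `SparseGPTModifier` arguments used in the repository's sparsity examples:

```python
from llmcompressor.modifiers.obcq import SparseGPTModifier

# Sketch: prune to 50% sparsity with a 2:4 mask (two of every four
# consecutive weights zeroed), a pattern supported hardware can accelerate.
recipe = SparseGPTModifier(sparsity=0.5, mask_structure="2:4")
```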
Install:

```bash
pip install llmcompressor
```
Applying quantization with `llmcompressor`:
- Activation quantization to `int8`
- Activation quantization to `fp8`
- Weight only quantization to `int4`
- Quantizing MoE LLMs
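As a taste of the `fp8` path, here is a minimal sketch assuming the `QuantizationModifier` with the `FP8_DYNAMIC` scheme (activation scales are computed dynamically at runtime, so no calibration dataset is needed); the examples above are the maintained versions:

```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot

# Sketch: FP8 weights with dynamic per-token FP8 activations.
# FP8_DYNAMIC needs no calibration data, so no dataset is passed.
recipe = QuantizationModifier(scheme="FP8_DYNAMIC", targets="Linear", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-FP8-DYNAMIC",
)
```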
Deep dives into advanced usage of `llmcompressor`:
Let's quantize `TinyLlama` with 8 bit weights and activations using the `GPTQ` and `SmoothQuant` algorithms.
Note that the model can be swapped for a local or remote HF-compatible checkpoint, and the `recipe` may be changed to target different quantization algorithms or formats.
Quantization is applied by selecting an algorithm and calling the `oneshot` API.
```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```
The checkpoints created by `llmcompressor` can be loaded and run in `vllm`:
Install:
```bash
pip install vllm
```
Run:
```python
from vllm import LLM

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
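For more control over decoding, a short follow-up sketch using vLLM's `SamplingParams` (the parameter values are illustrative):

```python
from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
# Illustrative settings; tune temperature/max_tokens for your use case.
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = model.generate("My name is", params)
print(outputs[0].outputs[0].text)
```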
- If you have any questions or requests, open an issue and we will add an example or documentation.
- We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.