From the Hugging Face blog post "Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA": <a href="https://huggingface.co/blog/4bit-transformers-bitsandbytes" rel="nofollow noreferrer">https://huggingface.co/blog/4bit-transformers-bitsandbytes</a>