This repo proposes LLaMA-Adapter, a lightweight and simple adapter for fine-tuning LLaMA into an instruction-following model.

By inserting adapters into LLaMA's transformer, our method introduces only 1~8M learnable parameters and turns LLaMA into an instruction-following model within 25~50 minutes. Thanks to the proposed zero-initialized attention mechanism, LLaMA-Adapter is plug-and-play and can be easily extended to multi-modal instruction input. After fine-tuning, LLaMA-Adapter generates high-quality instruction-following responses, comparable to those of fully fine-tuned models.
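To illustrate why zero-initialized attention makes the adapter plug-and-play, here is a minimal PyTorch sketch of the idea: learnable prompt tokens are attended to alongside the input, but their contribution is scaled by a gate initialized to zero, so the frozen pretrained behaviour is exactly preserved at the start of training. The class name, `adapter_len`, and the per-head gate below are illustrative assumptions, not the repository's actual implementation (causal masking and rotary embeddings are omitted for brevity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroInitAdapterAttention(nn.Module):
    """Simplified sketch (hypothetical names): self-attention plus
    learnable adapter prompts, gated by a zero-initialized factor."""

    def __init__(self, dim: int, n_heads: int, adapter_len: int = 10):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.wo = nn.Linear(dim, dim, bias=False)
        # learnable adapter prompt tokens: the small set of extra parameters
        self.adapter = nn.Parameter(torch.randn(adapter_len, dim))
        # gating factor initialized to zero -> adapter is a no-op at init,
        # which is what makes the module plug-and-play
        self.gate = nn.Parameter(torch.zeros(n_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seqlen, dim = x.shape
        q = self.wq(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)
        v = self.wv(x).view(bsz, seqlen, self.n_heads, self.head_dim).transpose(1, 2)

        # ordinary self-attention over the input tokens (frozen pathway)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        out = F.softmax(scores, dim=-1) @ v

        # attention over the adapter prompts, scaled by the zero-init gate
        ak = self.wk(self.adapter).view(1, -1, self.n_heads, self.head_dim).transpose(1, 2)
        av = self.wv(self.adapter).view(1, -1, self.n_heads, self.head_dim).transpose(1, 2)
        a_scores = q @ ak.transpose(-2, -1) / self.head_dim ** 0.5
        a_out = F.softmax(a_scores, dim=-1) @ av
        out = out + torch.tanh(self.gate).view(1, -1, 1, 1) * a_out

        return self.wo(out.transpose(1, 2).reshape(bsz, seqlen, dim))
```

Because the gate starts at zero, only the adapter prompts and gates need gradients; the pretrained transformer weights stay frozen, which is how the parameter count stays in the 1~8M range.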