LLaMA-Adapter – An instruction-fine-tuned model in under 1 hour

5 points by guywithabowtie about 2 years ago

1 comment

guywithabowtie about 2 years ago
This repo proposes LLaMA-Adapter, a lightweight and simple adapter for fine-tuning LLaMA into an instruction-following model.

By inserting adapters into LLaMA's transformer, our method introduces only 1~8M learnable parameters and turns LLaMA into an instruction-following model within 25~50 minutes. LLaMA-Adapter is plug-and-play thanks to its zero-init attention mechanism, and can easily be extended to multi-modal instruction inputs. After fine-tuning, LLaMA-Adapter generates high-quality instruction-following responses, comparable to fully fine-tuned models.
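
For readers curious about the mechanics, here is a minimal, hypothetical PyTorch sketch of the zero-init gating idea. The class name PromptedAttention, the single-head attention, the omitted causal mask, and the separate softmax over the prompt tokens are all simplifications for illustration, not the paper's exact formulation:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PromptedAttention(nn.Module):
        # A frozen attention layer augmented with learnable adaptation
        # prompts. Only `prompt` and `gate` are trainable; the q/k/v
        # projections stand in for the frozen base model's weights.
        def __init__(self, dim: int, n_prompt: int = 10):
            super().__init__()
            self.q = nn.Linear(dim, dim, bias=False)
            self.k = nn.Linear(dim, dim, bias=False)
            self.v = nn.Linear(dim, dim, bias=False)
            for proj in (self.q, self.k, self.v):
                proj.weight.requires_grad = False  # frozen base weights
            self.prompt = nn.Parameter(torch.randn(1, n_prompt, dim) * 0.02)
            self.gate = nn.Parameter(torch.zeros(1))  # zero-init gate
            self.scale = dim ** -0.5

        def forward(self, x):
            # x: (batch, seq_len, dim) hidden states; causal masking is
            # omitted here for brevity.
            q, k_x, v_x = self.q(x), self.k(x), self.v(x)
            attn = F.softmax(q @ k_x.transpose(1, 2) * self.scale, dim=-1)
            out = attn @ v_x  # the frozen model's original attention path
            # Attend to the learnable prompts; tanh(gate) is 0 at init,
            # so training starts from the pretrained model's behavior and
            # ramps up the adapter's influence gradually.
            p = self.prompt.expand(x.size(0), -1, -1)
            k_p, v_p = self.k(p), self.v(p)
            attn_p = F.softmax(q @ k_p.transpose(1, 2) * self.scale, dim=-1)
            return out + torch.tanh(self.gate) * (attn_p @ v_p)

With illustrative LLaMA-7B-like settings (dim = 4096, n_prompt = 10, adapters in 32 layers), the trainable parameters come to roughly 10 × 4096 × 32 ≈ 1.3M, consistent with the 1~8M figure above.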