Stating independence as the advantage of Llama 3.1 is a bit funny. Without the huge amount of computational resources from Meta, Llama 3.1 wouldn't be possible. We still depend on certain big companies' "goodwill" to be able to enjoy the benefits of open source.
I just got Llama 3.1 GGUFs working on my Mac laptop with a new plugin for my LLM CLI tool: https://llm.datasette.io/

Here's information on the new plugin: https://simonwillison.net/2024/Jul/23/llm-gguf/

Once you've installed LLM ("brew install llm" or "pipx install llm" or "pip install llm") you can try the new plugin like this:

    llm install llm-gguf
    llm gguf download-model \
      https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/resolve/main/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf \
      --alias llama-3.1-8b-instruct --alias l31i
    llm -m l31i "five great names for a pet lemur"

This is using the GGUF version of Llama 3.1 8B Instruct from here: https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/tree/main
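For anyone who'd rather drive this from Python, here's a minimal sketch using LLM's Python API (documented at https://llm.datasette.io/en/stable/python-api.html), assuming the llm-gguf plugin is installed and the "l31i" alias from the CLI steps above is registered:

    # Minimal sketch: call the downloaded model via LLM's Python API.
    # Assumes the "l31i" alias registered by the CLI commands above.
    import llm

    model = llm.get_model("l31i")
    response = model.prompt("five great names for a pet lemur")
    print(response.text())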
Does the community license let companies fine-tune it or retrain it for their use cases?

There are significant restrictions on it, so it's not fully open source, but maybe that's only a real problem for Google, OpenAI, and Microsoft.

Open source has turned into a game of: how much commercial value can I retain while still calling it "open source" and benefiting from the trust and marketing value of the open-source branding?
The last section is the most important. There's a massive difference between what you can do with the text output of an LLM and being able to inspect and play with the individual weights, depending on your use case.
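To make that concrete, here's a hedged sketch (assuming the transformers library and approved access to the gated meta-llama repo on Hugging Face) of the kind of weight-level poking that's impossible with an API-only model:

    import torch
    from transformers import AutoModelForCausalLM

    # Load the open checkpoint. Assumes you've been granted access to
    # the gated meta-llama repo and have it cached locally.
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Meta-Llama-3.1-8B-Instruct",
        torch_dtype=torch.bfloat16,
    )

    # Inspect an individual tensor: one attention projection in layer 0.
    w = model.model.layers[0].self_attn.q_proj.weight
    print(w.shape, w.dtype)

    # Edit it in place, e.g. to ablate the projection and study its role.
    with torch.no_grad():
        w.zero_()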
Excited about this, though probably more the 70B than the 405B, because it's also really good and will be accessible cheaply and in bulk.

BTW, pretty sure nobody is creating adapters for a 405B with a laptop and a weekend ;)
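For the smaller models it's very doable, though. A minimal adapter sketch using Hugging Face's peft library (assuming transformers and peft are installed and the repo is accessible; the hyperparameters are illustrative, not recommendations):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Attach a small LoRA adapter to the 8B model. Only the adapter
    # weights train, a tiny fraction of the full parameter count.
    base = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Meta-Llama-3.1-8B-Instruct"
    )
    config = LoraConfig(
        r=16,                                 # adapter rank
        lora_alpha=32,                        # scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()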
I used to think it was cheaper. But according to https://llama.meta.com/, GPT-4o Mini is actually cheaper most of the time.