We believe that AI should be fully open source and part of the collective knowledge.<p>The original LLaMA code is GPL licensed which means any project using it must also be released under GPL.<p>This "taints" any other code and prevents meaningful academic and commercial use.<p>Lit-LLaMA solves that for good.
I think implying that the GPL is not "fully open source" is a hot take. It's specifically designed to ensure that you, and anyone you distribute your code to, get the same freedoms. Maybe you don't agree that it's a good license, but that is its intention. The GPL-vs-BSD-style-licenses argument is decades old by now.<p>Maybe I'm a naive idealist, but IMO the GPL family of licenses is underrated. You can use them to make sure you don't work for free for someone who won't share their improvements.<p>I liked the choice of AGPL for the AUTOMATIC1111 Stable Diffusion web UI. (<a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">https://github.com/AUTOMATIC1111/stable-diffusion-webui</a>)<p>Commercial interests are very allergic to the AGPL, which ensures the project stays community-run and that new features and fixes will prioritize the ordinary user doing things for fun.
FYI, there's something fishy going on in this thread. Multiple people from the Lightning AI team, theaniketmaurya (developer advocate for Lightning AI) and rasbt (developer at Lightning AI), are shilling for this post without disclosing their affiliations. The account that submitted this (osurits) also has only two comments, with the same behavior.<p>Having interacted with the Lightning AI team in the past, I find this unsurprising.
IANAL, but this seems very fishy to me:
1) I don't understand how this isn't a derivative work of the original code, as I highly doubt this is a clean-room implementation. I doubt it would hold up in court.<p>2) Doesn't the original FB license also apply to the weights? Re-implementing the code would not change the license on the weights. So while THE CODE may now be re-licensed, the weights would still fall under the original license.<p>I'd love it if someone with more legal understanding could shed some light on this.
BS.<p>"Prevents meaningful academic..."<p>How the hell does the AGPL prevent academic use? Commercial use, sure, because the AGPL enforces the four freedoms, and commercial users often want to take someone else's work and slap their own brand on it without acknowledging the original. On top of that, the downstream is often closed source for "business reasons," which means their users don't get to enjoy the fruits of the first party's licensing.<p>Where does academia come into it? Are researchers now keeping everything under wraps for "shareholder interests"?<p>Isn't academia supposed to be an open culture from the start, without any restrictions? So what am I missing, or are they mixing two unrelated things?<p>Also, I might be wrong, but isn't this merely converting llama into their version? Uh...
llama.cpp is also MIT<p><a href="https://github.com/ggerganov/llama.cpp">https://github.com/ggerganov/llama.cpp</a><p>previously discussed here: <a href="https://news.ycombinator.com/item?id=35100086" rel="nofollow">https://news.ycombinator.com/item?id=35100086</a><p>and one of the Rust wrappers: <a href="https://news.ycombinator.com/item?id=35171527" rel="nofollow">https://news.ycombinator.com/item?id=35171527</a> (also MIT)
If you hate the GPL so much, then I assume you don't run any GPL-licensed code on your machines. I admire your resolve, because I would think that is pretty hard!
No, the GPL doesn't prevent meaningful academic or commercial use; rather, it seeks to prevent individuals from taking advantage of free software to limit the freedom of other users. It is important to note that if you live in a free country, there are laws that protect the liberties of all citizens and prevent actions that could restrict those freedoms.
> We believe that AI should be fully open source and part of the collective knowledge.<p>As do I.<p>> The original LLaMA code is GPL licensed which means any project using it must also be released under GPL.<p>Yep. This ensures that AI is "fully open source and part of the collective knowledge."<p>> This "taints" any other code and prevents meaningful academic and commercial use.<p>Taints? As in "makes fully open source"? Isn't that the goal?<p>> Lit-LLaMA solves that for good.<p>Lit-LLaMA helps people create proprietary closed-source AI instead of the fully open source AI required by LLaMA's license. Okay.
Just noting that HuggingFace has a Llama code implementation[1]. It's also under an Apache 2 license.<p>While this seems to be nice code, I don't see any particular reason to use it over HuggingFace transformers, where you can easily swap out alternative implementations.<p>Also, pointing to legal restrictions on the Facebook Llama code when there are much stronger restrictions on the use of the model seems an odd thing to do. It's true that in some (not all) jurisdictions the model might not be copyrightable, but you'd need a bold legal department to rely on those arguments. It's also moderately likely that an instruction-tuned Llama (like Alpaca) <i>would</i> be copyrightable even in those jurisdictions.<p>TL;DR: Use the HuggingFace transformers library. You can experiment with Llama and switch very easily to truly free models like GPT-J or anything new that arrives.<p>[1] <a href="https://huggingface.co/docs/transformers/main/model_doc/llama" rel="nofollow">https://huggingface.co/docs/transformers/main/model_doc/llam...</a>
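To illustrate the "swap implementations" point above, here is a minimal sketch using the transformers Auto classes. The checkpoint names in the comments are illustrative assumptions; substitute whatever you actually have access to.<p>

```python
# Sketch: loading any causal LM through one API with HuggingFace
# transformers, so switching from Llama to a freely licensed model
# is a one-string change. Checkpoint names below are assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM


def load_causal_lm(model_id: str):
    """Load a causal LM from the Hub; the Auto* classes dispatch to
    the right architecture (LlamaForCausalLM, GPTJForCausalLM, ...)
    based on the checkpoint's config."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model


if __name__ == "__main__":
    # e.g. a Llama checkpoint (restrictive license on the weights):
    #   tok, model = load_causal_lm("huggyllama/llama-7b")
    # vs. an Apache-2.0 model, same code path:
    tok, model = load_causal_lm("EleutherAI/gpt-j-6b")
    inputs = tok("Open source licensing is", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))
```

The point is that nothing in your application code depends on which model family you picked, which makes it cheap to abandon encumbered weights later.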
I see this as a win for the AI community. The key for LLMs is to enable people to train collaboratively and innovate more quickly in this space. Are there any examples or demos available that showcase the capabilities of "lit-llama"?