So, rckrd wrote a (rather short) article about the license of Llama 2 but got it all wrong: even though the press calls it open source, it's not.<p>Open Source has a very clear definition, and Llama 2 fails it in multiple regards.<p>First, the license itself is not an open-source license. It carries significant restrictions that make it non-open-source.<p>Second, the distribution. You have to apply for the download via a web form, and you are not allowed to redistribute the model.<p>Third, the source code, i.e. the data used to train the model. Meta is not telling us anything about it. One of the core features of open source is that you can recreate the binary (in this case, the model weights) yourself. You can't do that here.
Llama had already become a de facto standard for LLMs, between all the fine-tunes and llama.cpp. Giving it a wider license really cements it as a standard while making everybody use it on Meta's terms. It's an "open source" strategy where all the benefits accrue back to Meta instead of the community, but one that's permissive enough to placate most people.<p>Personally I feel like it's a dangerous precedent, because it shifts open source from a community concept to something a company lets you have with a bunch of conditions attached.
All the responses I'm reading so far are rather shallow and fail to consider the overall landscape, how it will evolve, and who the big losers will be. The way I see it, the current players include Google (which has a lot to lose), OpenAI (unclear business model), and upcoming startups (which can disrupt Google/OpenAI). Meta releasing these models will impact Google and OpenAI the most by helping upcoming startups inflict fatal blows, or slowly chip away at their business models by means of a race to the bottom. The main issue preventing Google and OpenAI from succeeding is that the regulatory landscape poses a huge risk, and Meta knows that. Startups are not hampered by this: they are small fry, and before anyone notices, they can and will land a blow on Google/OpenAI.
To all those people complaining about this not being open source - Zuck is playing chess while you play a much simpler game. Advancing SOTA and a bit of open source is a side benefit.
To increase adoption. They are also working with Qualcomm [1] to bring it on-device. Not sure if they're licensing it, but when you tweak hardware for something specific, you kinda want people to use it.<p>[1] <a href="https://www.qualcomm.com/news/releases/2023/07/qualcomm-works-with-meta-to-enable-on-device-ai-applications-usi" rel="nofollow noreferrer">https://www.qualcomm.com/news/releases/2023/07/qualcomm-work...</a>
Meta made Llama 2 source-and-weights available because they agreed with the observations in the leaked Google memo[1]. Meta got a huge amount of infra/research/experimentation work done on top of LLaMA. Pre-training wasn't cheap, but they got datapoints no one else in big tech had, and that is very valuable, especially when building a bridge into a new frontier like LLM-driven products.