> OpenAI CEO Sam Altman with the express objective of creating open-source AI models<p>I'm not aware of OpenAI at any point saying they would open-source their models.<p>Only that researchers would be encouraged to share their work with the world:<p><a href="https://web.archive.org/web/20190224031626/https://blog.openai.com/introducing-openai/" rel="nofollow">https://web.archive.org/web/20190224031626/https://blog.open...</a>
See the submission that doesn't have a paywall: <a href="https://news.ycombinator.com/item?id=39737313">https://news.ycombinator.com/item?id=39737313</a> or the discussion on the code itself: <a href="https://news.ycombinator.com/item?id=39737281">https://news.ycombinator.com/item?id=39737281</a><p>TFA: <a href="https://x.ai/blog/grok-os" rel="nofollow">https://x.ai/blog/grok-os</a><p>TFC: <a href="https://github.com/xai-org/grok-1#readme">https://github.com/xai-org/grok-1#readme</a>
From the official release page [1]:<p>> We are releasing the base model weights and network architecture of Grok-1, our large language model. Grok-1 is a 314 billion parameter Mixture-of-Experts model trained from scratch by xAI.<p>> This is the raw base model checkpoint from the Grok-1 pre-training phase, which concluded in October 2023. This means that the model is not fine-tuned for any specific application, such as dialogue.<p>> We are releasing the weights and the architecture under the Apache 2.0 license.<p>> To get started with using the model, follow the instructions at github.com/xai-org/grok.<p>It's a little disappointing that they are not releasing the weights for the fine-tuned Grok-1 model.<p>[1] <a href="https://x.ai/blog/grok-os" rel="nofollow">https://x.ai/blog/grok-os</a>
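For anyone wondering what "getting started" actually involves, here is a minimal sketch of fetching and running the checkpoint, based on my reading of the repo README. The xai-org/grok-1 Hugging Face repo name, the ckpt-0 checkpoint directory, and the run.py entry point are assumptions taken from that README, so check the repo for the current instructions:<p><pre><code># Rough sketch, not the official instructions: download the Grok-1 base
# checkpoint from Hugging Face, then sample with the repo's example code.
# Assumes github.com/xai-org/grok-1 is cloned and its requirements are
# installed (pip install -r requirements.txt).
from huggingface_hub import snapshot_download

# Pull only the checkpoint shards (several hundred GB for the
# 314B-parameter model) into ./checkpoints/ckpt-0, matching the layout
# the README describes.
snapshot_download(
    repo_id="xai-org/grok-1",        # repo name as listed in the README
    allow_patterns=["ckpt-0/*"],     # skip everything except the weights
    local_dir="checkpoints",
)

# From inside the cloned repo, the README's entry point is then:
#   python run.py
# which loads the checkpoint with the bundled JAX model code and prints
# a sample completion from the raw base model (no chat fine-tuning).
</code></pre>Note this is the raw pre-training checkpoint, so expect plain text continuation rather than assistant-style answers.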