Alex Heath from The Verge alleged that Grok is just a fine-tuned LLaMA [0]. I wonder what will be revealed!<p>[0]: <a href="https://www.threads.net/@alexheath/post/C0pEidVp-1U" rel="nofollow">https://www.threads.net/@alexheath/post/C0pEidVp-1U</a>
"Open source" or "open weight"? Because there is a distinction. Many have previously provided open weights (or what they call "open model" now): Mistral, LLaMA, Falcon, etc. There are not many open "source" LLMs out there that bring true value to business and academia.
Has anyone benchmarked Grok against other models? The LMSYS benchmarks, which I trust most, don't include it. And their own reported results are good but nothing amazing, since it doesn't seem to surpass GPT-4 or Claude 3.
tidbit: Oracle has an unrelated OpenGrok project under active development:<p><a href="https://en.wikipedia.org/wiki/OpenGrok" rel="nofollow">https://en.wikipedia.org/wiki/OpenGrok</a>