To help those who got a bit confused (like me): this is Groq, the company making accelerators designed specifically for LLMs, which they call LPUs (Language Processing Units) [0]. They want to sell you their custom machines that, while expensive, will be much more efficient at running LLMs for you. There is also Grok [1], which is xAI's series of LLMs and competes with ChatGPT and other models like Claude and DeepSeek.<p>EDIT - It seems that Groq has stopped selling their chips and now only partners to fund large build-outs of their cloud [2].<p>0 - <a href="https://groq.com/the-groq-lpu-explained/" rel="nofollow">https://groq.com/the-groq-lpu-explained/</a><p>1 - <a href="https://grok.com/" rel="nofollow">https://grok.com/</a><p>2 - <a href="https://www.eetimes.com/groq-ceo-we-no-longer-sell-hardware" rel="nofollow">https://www.eetimes.com/groq-ceo-we-no-longer-sell-hardware</a>
It's live on Groq, Together and Fireworks now.<p>All three of those can also be accessed via OpenRouter - with both a chat interface and an API (minimal sketch at the end of this comment):<p>- Scout: <a href="https://openrouter.ai/meta-llama/llama-4-scout" rel="nofollow">https://openrouter.ai/meta-llama/llama-4-scout</a><p>- Maverick: <a href="https://openrouter.ai/meta-llama/llama-4-maverick" rel="nofollow">https://openrouter.ai/meta-llama/llama-4-maverick</a><p>Scout claims a 10 million input token length, but the available providers currently seem to limit it to 128,000 (Groq and Fireworks) or 328,000 (Together) - I wonder who will win the race to get that full-sized 10 million token window running?<p>Maverick claims 1 million; Fireworks offers 1.05M while Together offers 524,000. Groq isn't offering Maverick yet.
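For reference, here's a minimal sketch of hitting one of those models through OpenRouter's OpenAI-compatible API. It assumes the openai Python package and an OPENROUTER_API_KEY environment variable; the model id is the one from the Scout page linked above.<p><pre><code>import os
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible endpoint; point the client at it.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="meta-llama/llama-4-scout",  # id from the OpenRouter page above
    messages=[{"role": "user", "content": "Summarize Llama 4 Scout in one sentence."}],
)
print(resp.choices[0].message.content)
</code></pre>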
I might be biased by the products I'm building, but it feels to me that function-calling support is table stakes now. Are open-source models just missing the dataset to fine-tune one?<p>Very few of the models supported on Groq/Together/Fireworks support function calling - and rarely the interesting ones (DeepSeek V3, large Llamas, etc.).
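For anyone unfamiliar with what "function calling" concretely looks like on these providers, here's a sketch of the OpenAI-style tools request they expose. The get_weather tool is hypothetical, and whether a given model (including Llama 4 Scout) actually honours it is exactly the open question above; Groq's OpenAI-compatible base URL and a GROQ_API_KEY environment variable are assumed.<p><pre><code>import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)

# Hypothetical tool definition in the OpenAI-compatible "tools" shape.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="meta-llama/llama-4-scout-17b-16e-instruct",  # id as exposed on Groq
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
# If the model chose to call the tool, the structured call lands here:
print(resp.choices[0].message.tool_calls)
</code></pre>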
Although Llama 4 is too big for mere mortals to run without many caveats, the economics of calling a dedicated-hosted Llama 4 are more interesting than expected.<p>$0.11 per 1M tokens, a 10 million token context window (not yet implemented on Groq), and faster inference due to fewer activated parameters allow for some specific applications that were not cost-feasible with GPT-4o/Claude 3.7 Sonnet. That all depends on whether the quality of Llama 4 is as advertised, of course, particularly around that 10M context window.
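To make the economics concrete, here's a back-of-envelope sketch that treats the $0.11 / 1M token figure above as the input price; the output price and the workload numbers are made-up placeholders, so swap in whatever your provider actually charges.<p><pre><code># Rough cost estimate for a bulk-processing job.
INPUT_PRICE_PER_M = 0.11    # $ per 1M input tokens (figure quoted above)
OUTPUT_PRICE_PER_M = 0.34   # $ per 1M output tokens -- assumed placeholder

docs = 10_000               # hypothetical batch of documents
in_tokens_per_doc = 8_000
out_tokens_per_doc = 500

cost = (docs * in_tokens_per_doc / 1e6) * INPUT_PRICE_PER_M \
     + (docs * out_tokens_per_doc / 1e6) * OUTPUT_PRICE_PER_M
print(f"~${cost:.2f} for the whole batch")  # ~$10.50 with these numbers
</code></pre>At that kind of price, classifying or summarizing tens of thousands of documents starts to look plausible in a way it doesn't at GPT-4o/Claude pricing.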
FYI, the last sentence, "Start building today on GroqCloud – sign up for free access here…" links to <a href="https://conosle.groq.com/" rel="nofollow">https://conosle.groq.com/</a> (instead of "console")
Just tried this, thank you. A couple of questions: it looks like just Scout access for now - do you have plans for larger model access? Also, context length seems to be fairly short with you guys - is that an architectural decision or a cost-based one?
I got an error when passing a prompt with about 20k tokens to the Llama 4 Scout model on Groq (despite Llama 4 supporting up to a 10M token context). Groq responds to the POST <a href="https://api.groq.com/openai/v1/chat/completions" rel="nofollow">https://api.groq.com/openai/v1/chat/completions</a> with a 413 (Payload Too Large) error.<p>Is there some technical limitation on the context window size with LPUs, or is this a temporary stop-gap measure to avoid overloading Groq's resources? Or something else?
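In case it helps anyone reproduce it, here's roughly the shape of the call: a minimal requests sketch against the same endpoint, with a crude truncate-and-retry fallback on 413. The model id is the Scout id Groq lists, GROq_API_KEY is assumed to be set, and the truncation length is arbitrary - it just illustrates one possible workaround, not a recommendation.<p><pre><code>import os
import requests

URL = "https://api.groq.com/openai/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"}

def ask(prompt):
    return requests.post(URL, headers=HEADERS, json={
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",
        "messages": [{"role": "user", "content": prompt}],
    })

long_prompt = "word " * 20_000          # stand-in for the real ~20k-token input
resp = ask(long_prompt)
if resp.status_code == 413:             # Payload Too Large from the provider
    resp = ask(long_prompt[:40_000])    # crude fallback: truncate and retry
if resp.ok:
    print(resp.json()["choices"][0]["message"]["content"][:200])
else:
    print(resp.status_code, resp.text)
</code></pre>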
Seems to be about 500 tok/s. That's actually significantly less than I expected/hoped for, but fantastic compared to nearly anything else. (specdec when?)<p>Out of curiosity: the console lets me set max output tokens to 131k, but it errors above 8192. What's the max intended to be? (8192 max output tokens would be rough after getting spoiled by the 128K output of Claude 3.7 Sonnet and the 64K of the Gemini models.)
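For anyone who wants to sanity-check that tok/s number themselves, here's a rough sketch: time a single non-streaming call and divide the reported completion tokens by wall-clock time. It includes time-to-first-token, so it understates throughput a bit; the openai package, GROQ_API_KEY, and the Scout model id Groq lists are assumed.<p><pre><code>import os
import time
from openai import OpenAI

client = OpenAI(base_url="https://api.groq.com/openai/v1",
                api_key=os.environ["GROQ_API_KEY"])

t0 = time.perf_counter()
resp = client.chat.completions.create(
    model="meta-llama/llama-4-scout-17b-16e-instruct",
    messages=[{"role": "user", "content": "Write roughly 500 words about LPUs."}],
    max_tokens=2048,   # stays well under the 8192 ceiling mentioned above
)
elapsed = time.perf_counter() - t0
print(f"{resp.usage.completion_tokens / elapsed:.0f} tok/s (rough, includes TTFT)")
</code></pre>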
I'm glad I saw this, because llama-3.3-70b-versatile just stopped working in my app. I switched it to meta-llama/llama-4-scout-17b-16e-instruct and it started working again. Maybe Groq stopped supporting the old one?
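In case anyone else gets bitten by this, here's a small sketch of the kind of fallback that would have kept my app up: try model ids in order and use the first one the API accepts. The ids are the two from this thread, GROQ_API_KEY is assumed, and the exception handling is deliberately broad since I don't know exactly which error a decommissioned model returns.<p><pre><code>import os
from openai import OpenAI

client = OpenAI(base_url="https://api.groq.com/openai/v1",
                api_key=os.environ["GROQ_API_KEY"])

CANDIDATES = [
    "llama-3.3-70b-versatile",                  # may be decommissioned
    "meta-llama/llama-4-scout-17b-16e-instruct",
]

def chat(messages):
    last_err = None
    for model in CANDIDATES:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except Exception as err:   # e.g. "model not found" for a retired model
            last_err = err
    raise last_err

print(chat([{"role": "user", "content": "ping"}]).choices[0].message.content)
</code></pre>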