Groq with Llama 3 70B is so fast and good enough for what we do (source code stuff) that it's really quite painful to work with most others now. We replaced most of our internal integrations with it and everything is great so far. I guess they will be bought soon?
Couple of things:

1. Filtering by model should be enabled by default. Mixtral-8x7B-Instruct on Perplexity is almost as fast as the 7B Llama 2 on Fireworks, but the two models are quite different in size.

2. Pricing is a very important factor that is not included.

3. Overall service reliability should also be an important signal.
I don't understand why we would need to have the same expectations of systems that we have of humans, and build a whole theory on it. I can adjust my behaviour around systems; I am not restricted to operating within default values. E.g. whenever a price is listed as $99, I automatically know it is $100. Marketing gimmicks don't work once you know about them, or in other words, expectations can be set in a new environment.
I'd be interested to hear how Llama 8B with long chain-of-thought prompts compares to GPT-4 one-shot prompts for real-world tasks.

In classification, for example, you could ask Llama 8B to reason through each possibility, rank them, rate them, make counterarguments, etc. - all in the same time that GPT-4 would take to output one classification without reasoning. Which does better?
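Not an experiment I've run, but a minimal sketch in Python of what the two prompting styles could look like. The labels, prompt wording, and call_model stub are all placeholders, not a real API:

```python
# Hypothetical sketch: a one-shot classification prompt (GPT-4 style) versus a
# long chain-of-thought prompt that spends the latency budget on reasoning
# (Llama 8B style). call_model() is a stand-in for whatever client you use.

LABELS = ["billing", "bug report", "feature request", "other"]

def one_shot_prompt(text: str) -> str:
    # Ask for the label directly, no reasoning.
    return (
        f"Classify the following support ticket as one of {LABELS}.\n"
        f"Ticket: {text}\n"
        "Answer with only the label."
    )

def chain_of_thought_prompt(text: str) -> str:
    # Ask the small model to argue each option, counterargue, rate, then decide.
    return (
        f"Classify the following support ticket as one of {LABELS}.\n"
        f"Ticket: {text}\n"
        "For each possible label, briefly argue why it might apply, "
        "then give a counterargument, then rate it 1-10. "
        "Finally, output the single best label on its own line."
    )

def call_model(model: str, prompt: str) -> str:
    # Placeholder: plug in your actual inference client here.
    raise NotImplementedError

if __name__ == "__main__":
    ticket = "I was charged twice this month and the app crashes on login."
    print(one_shot_prompt(ticket))
    print(chain_of_thought_prompt(ticket))
```

You'd then compare the parsed final label from each against a ground-truth set to see which approach actually wins.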
There are dozens of AI chip startups out there with wild claims about speed. Groq seems like the first to actually prove it by launching a product. I hope they spur a speed war with other chipmakers to make the fastest inference engine.
I love this. Latency is the worst part about AI. I use the lowest-latency models that give adequate answers. I do wish this site gave an average and standard deviation. For example, Groq fluctuates wildly depending on the time of day. They're ranked pretty poorly at "610ms" here, and I definitely encounter far worse from them sometimes, but it's wicked fast at other times.
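For what it's worth, a tiny sketch of the kind of summary that would help, assuming you've already collected per-request latency samples (the numbers here are made up):

```python
# Sketch: report mean and standard deviation of latency samples instead of a
# single point estimate. Sample values are invented for illustration.
from statistics import mean, stdev

latencies_ms = [280, 310, 610, 1450, 295, 870, 305, 990]  # hypothetical samples

avg = mean(latencies_ms)
sd = stdev(latencies_ms)
print(f"latency: {avg:.0f}ms ± {sd:.0f}ms over {len(latencies_ms)} requests")
```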