
Benchmarks and comparison of LLM AI models and API hosting providers

152 points by Gcam over 1 year ago
Hi HN, ArtificialAnalysis.ai provides objective benchmarks and analysis of LLM AI models and API hosting providers so you can compare which to use in your next (or current) project.

The site consolidates different quality benchmarks, pricing information, and our own technical benchmarking data. Technical benchmarking (throughput, latency) is conducted by sending API requests every 3 hours.

Check out the site at https://artificialanalysis.ai, and our Twitter at https://twitter.com/ArtificialAnlys

Twitter thread with initial insights: https://twitter.com/ArtificialAnlys/status/1747264832439734353

All feedback is welcome, and I'm happy to discuss methodology, etc.
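As a rough illustration of the cadence described above, a minimal probe loop might look like the sketch below. The endpoint URL, model name, and prompt are placeholder assumptions, not the site's actual harness, and throughput here is approximated as completion tokens divided by total request time.

```python
# Hypothetical probe loop, NOT the ArtificialAnalysis harness.
# API_URL and the payload shape assume an OpenAI-compatible endpoint.
import time
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder
HEADERS = {"Authorization": "Bearer YOUR_KEY"}            # placeholder

def probe(model: str, prompt: str) -> dict:
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    start = time.perf_counter()
    resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=120)
    elapsed = time.perf_counter() - start
    tokens = resp.json()["usage"]["completion_tokens"]
    return {
        "model": model,
        "latency_s": elapsed,                # total request latency
        "throughput_tps": tokens / elapsed,  # approx. tokens per second
    }

while True:
    print(probe("some-model", "Write a haiku about benchmarks."))
    time.sleep(3 * 60 * 60)  # the 3-hour cadence described above
```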

23 comments

chadash over 1 year ago
I love it. One minor change I'd make is changing the pricing chart to put lowest on the left. On the other highlights, left to right goes from best to worst, but this one is the opposite.

I'm excited to see where things land. What I find interesting is that pricing is either wildly expensive or wildly cheap, depending on your use case. For example, if you want to run GPT-4 to glean insights on every webpage your users visit, a freemium business model is likely completely unviable. On the other hand, if I'm using an LLM to spot issues in a legal contract, I'd happily pay 10x what GPT-4 currently charges for something marginally better (it doesn't make much difference if this task costs $4 vs $0.40). I think that the ultimate "winners" in this space will have a range of models at various price points and let you seamlessly shift between them depending on the task (e.g., in a single workflow, I might have some sub-tasks that need a cheap model and some that require an expensive one).
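The per-task routing idea in this comment amounts to a thin dispatch layer. Here is a hypothetical sketch; the model names, prices, and the task heuristic are all invented for illustration.

```python
# Hypothetical per-task model router; names, prices, and the
# heuristic are illustrative, not real quotes.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    usd_per_1m_tokens: float

CHEAP = Model("small-model", 0.30)       # placeholder pricing
PREMIUM = Model("frontier-model", 30.0)  # placeholder pricing

HIGH_STAKES_TASKS = {"contract_review"}  # tasks worth paying 10x for

def route(task: str) -> Model:
    return PREMIUM if task in HIGH_STAKES_TASKS else CHEAP

for task in ("summarize_webpage", "contract_review"):
    model = route(task)
    print(f"{task} -> {model.name} (${model.usd_per_1m_tokens}/1M tokens)")
```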
badFEengineer over 1 year ago
Nice, I've been looking for something like this! A few notes / wishlist items:

* Looks like for GPT-4 Turbo (https://artificialanalysis.ai/models/gpt-4-turbo-1106-preview), there was a huge latency spike on December 28, which is causing the average latency to be very high. Perhaps dropping the top and bottom 10% of requests would help with the average (or switch over to the median and include variance).

* Adding latency variance would be truly awesome. I've run into issues with some LLM API providers that have had incredibly high variance, but I haven't seen concrete data across providers.
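The outlier handling suggested in the first bullet is a trimmed mean. A minimal sketch using only the standard library (the latency samples are invented):

```python
# Trimmed mean vs. median on latency samples with one outlier spike.
import statistics

def trimmed_mean(samples: list[float], trim: float = 0.10) -> float:
    """Average after dropping the top and bottom `trim` fraction."""
    s = sorted(samples)
    k = int(len(s) * trim)
    return statistics.fmean(s[k:len(s) - k])

latencies = [1.2, 1.1, 1.3, 1.2, 1.4, 1.1, 1.3, 1.2, 1.1, 9.8]  # invented
print(f"mean:         {statistics.fmean(latencies):.2f}s")  # skewed by spike
print(f"trimmed mean: {trimmed_mean(latencies):.2f}s")       # spike dropped
print(f"median:       {statistics.median(latencies):.2f}s")
```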
Gcam over 1 year ago
Hi HN, thanks for checking this out! The goal with this project is to provide objective benchmarks and analysis of LLM AI models and API hosting providers to compare which to use in your next (or current) project. Benchmark comparisons include quality, price, and technical performance (e.g. throughput, latency).

Twitter thread with initial insights: https://twitter.com/ArtificialAnlys/status/1747264832439734353

All feedback is welcome.
bearjaws over 1 year ago
I've been using Mixtral and Bard since the end of the year. I am pleased with their performance overall for a mixture of content generation and coding.

It seems to me GPT-4 has become short in its outputs; you have to do a lot more chain-of-thought-style prompting to get it to actually output a good result, which is excruciating given how slow it is to produce content.

Mixtral on Together AI is crazy to see at ~70-100 tokens/s, and the quality works for my use case as well.
m3kw9 over 1 year ago
I feel sorry for all the other models when GPT-4.5 comes out. If you are not at GPT-4 level, it's pretty useless other than for having some fun.
djsh over 1 year ago
Since we are talking about throughput of API hosting providers, I wanted to add the work we have done at Groq. I understand that the team is getting in touch with the ArtificialAnalysis folks to get benchmarked.

Mixtral running at >500 tokens/s on Groq: https://www.youtube.com/watch?v=5fJyOVtOk4Y

Experience the speed for yourself with Llama 2 70B at https://chat.groq.com/
zurfer over 1 year ago
This is great, thank you! I would be especially interested in more details around speed. The average is a good starting point, but I would love to also see the standard deviation or the 90th and 99th percentiles.

In my experience speed varies a lot, and it makes a big difference whether a request takes 10 seconds or 50 seconds.
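A percentile report along the lines this comment asks for is a few lines with the standard library (the latency samples below are invented):

```python
# Mean plus tail percentiles from raw latency samples.
import statistics

latencies = [10, 11, 12, 11, 13, 12, 50, 11, 12, 48]  # seconds, invented

# quantiles(n=100) returns 99 cut points; index 89 ~ p90, index 98 ~ p99.
q = statistics.quantiles(latencies, n=100)
print(f"mean={statistics.fmean(latencies):.1f}s "
      f"p90={q[89]:.1f}s p99={q[98]:.1f}s "
      f"stdev={statistics.stdev(latencies):.1f}s")
```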
causal over 1 year ago
Thanks for putting this together! Amazon is far and away the priciest option here, but I wonder if a big part of that is the convenience tax for the Bedrock service. Would be interesting to compare that to the price of just renting AWS GPUs on EC2.
binsquare over 1 year ago
I'm surprised to see Perplexity's 70B online model score so low on model quality, and somehow far worse than Mixtral and GPT-3.5 (they use a fine-tuned GPT-3.5 as the foundational model, AFAIK).

I run https://www.labophase.com and my data suggests that it's one of the top 3 models in terms of users liking to interact with it. May I ask how model quality is benchmarked, to understand this discrepancy?
idiliv over 1 year ago
I'm curious how they evaluated model quality. The only information I could find is "Quality: Index based on several quality benchmarks".
vunderba over 1 year ago
It's probably beyond the scope of this project, but it would be great to see comparisons across different quantization levels (e.g. 4-bit, etc.), since quantization can sometimes cause an extreme drop-off in quality and is an important factor to consider when hosting your own LLM.
MacsHeadroom over 1 year ago
Perhaps price should be tokens per dollar, to keep the charts all "higher is better."
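The conversion itself is trivial; for instance, applying it to a $0.27 per 1M tokens price like the one quoted later in the thread:

```python
# Convert $/1M tokens into tokens per dollar, so higher is better.
def tokens_per_dollar(usd_per_1m_tokens: float) -> float:
    return 1_000_000 / usd_per_1m_tokens

print(f"{tokens_per_dollar(0.27):,.0f} tokens/$")  # ~3,703,704 tokens/$
```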
luke-stanley over 1 year ago
This is awesome. I was looking at benchmarking speed and quality myself but didn't go this far! I wonder about Claude Instant and Phi-2? Modal.com for inference felt crazy fast, but I didn't note the metrics. Good ones to add? Replicate.com too, maybe?
com2kid over 1 year ago
I wish more places showed time to first token (TTFT). For scenarios with real-time human interaction, what matters is how long it takes until the first token is returned, and whether tokens are generated faster than people consume them.

Sadly, very few benchmarks bother to track this.
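Measuring TTFT yourself is straightforward with a streaming client. A sketch assuming the OpenAI Python SDK's streaming interface; the model name and prompt are placeholders:

```python
# Time-to-first-token via streaming; assumes the OpenAI Python SDK (v1+)
# and an OPENAI_API_KEY in the environment.
import time
from openai import OpenAI

client = OpenAI()

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(f"time to first token: {time.perf_counter() - start:.2f}s")
        break
```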
sabareesh over 1 year ago
I want to see benchmarks for RAG. Most of the models are not very good with RAG.
wonderfuly over 1 year ago
If you want to compare LLMs on daily usage, check out https://chathub.gg
throwawaymaths over 1 year ago
Latency (TTFT) would be a nice metric.
elicksaur over 1 year ago
> Application error: a client-side exception has occurred (see the browser console for more information).

iOS Safari
scribu over 1 year ago
I’m not sure about the Speed chart. I would expect gpt-4-turbo to be faster than plain gpt-4.
jdthedisciple over 1 year ago
Really neat!

And I did not realize how much Gemini Pro lags behind GPT-4 in terms of quality, wow!
avereveard over 1 year ago
I wish Claude Instant were in there; it's a damn fine model that's often overlooked.
rubymamis over 1 year ago
I wish there were more details about how you measure "quality".
jafitc over 1 year ago
Deepinfra Mixtral is $0.27 / M tokens, as per their website.