Disclaimer: I'm the author of Nawak, so I'm pretty excited about this.<p>The Nimrod programming language is finally featured for the db tests (at least on i7. Not sure what happened with ec2 and peak, as neither jester nor nawak seem to appear in the results).<p>It fares pretty well when the database is involved.<p>Look for the nawak micro-framework, in the top 10 both for fortunes:<p><a href="http://www.techempower.com/benchmarks/#section=data-r9&hw=i7&test=fortune" rel="nofollow">http://www.techempower.com/benchmarks/#section=data-r9&hw=i7...</a><p>and updates:<p><a href="http://www.techempower.com/benchmarks/#section=data-r9&hw=i7&test=update" rel="nofollow">http://www.techempower.com/benchmarks/#section=data-r9&hw=i7...</a><p>And there is room to grow. That micro-framework will not be the best for the json or plaintext tests, but once the database needs to be involved, it is trivial to add more concurrency: firing up more workers (1-3 MB of RAM each) acts as an effective database connection pool (1 database connection per worker).<p>edit: Why should you care? Nawak (<a href="https://github.com/idlewan/nawak" rel="nofollow">https://github.com/idlewan/nawak</a>) is a clean micro-framework:<p><pre><code> import nawak_mongrel, strutils
 get "/":
     return response("Hello World!")

 get "/user/@username/?":
     return response("Hello $1!" % url_params.username)

 run()
</code></pre>
Benchmark implementation in 100 lines here: <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/nawak/nawak_app.nim" rel="nofollow">https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...</a>
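To clarify the worker-as-connection-pool point: this is not Nawak's actual code, just a rough Python sketch of the pattern, where each worker process opens exactly one database connection at startup, so scaling the number of workers scales the connection pool for free (here an in-memory SQLite connection stands in for a real database):

```python
# Sketch of "one DB connection per worker process" (illustrative, not Nawak's code).
import multiprocessing
import sqlite3

_conn = None  # per-process connection, created once per worker

def init_worker():
    global _conn
    # Each worker process opens exactly one connection at startup.
    _conn = sqlite3.connect(":memory:")
    _conn.execute("CREATE TABLE fortune (id INTEGER, message TEXT)")
    _conn.execute("INSERT INTO fortune VALUES (1, 'Hello World!')")

def handle_request(fortune_id):
    # The worker reuses its single connection for every request it serves.
    row = _conn.execute(
        "SELECT message FROM fortune WHERE id = ?", (fortune_id,)
    ).fetchone()
    return row[0]

if __name__ == "__main__":
    # Three workers -> three database connections; no separate pool needed.
    with multiprocessing.Pool(3, initializer=init_worker) as pool:
        print(pool.map(handle_request, [1, 1, 1]))
```

The point is that the process model gives you connection pooling as a side effect: there is never contention for a connection inside a worker, and the pool size is just the worker count.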
We're very happy to have Round 9 completed. The 10-gigabit 40-HT core hardware has been quite amusing to work with. In most respects, these servers utterly humble our in-house workstations that we've been using for previous rounds.<p>If anyone has any questions about this round or thoughts for future rounds, please let me know!
These web framework tests have been really interesting to look at, and each time I've been saddened to see that Rails/Ruby, the framework/language I program with most days, is consistently near the bottom. With Adequate Record now merged into master, I'm hoping we start climbing the speed rankings.<p>But a question keeps coming up in my mind: there are metrics that would be much harder to compare, but that might be more useful in my book.<p>For example, I'd love to see a "framework olympics" where different developers build an agreed-upon application/website on an agreed-upon server using their favorite framework. The application would have to be of some decent complexity, built using tools that an average developer using the framework might use.<p>In the end, you could compare the complexity of the code, the average page response time, maintainability/flexibility, and the time it took to actually develop the app, and the results could let developers know what they sacrifice or gain by using one framework over another. I know a lot of these metrics could reflect the developers themselves vs the actual framework, but it might also be a tool to let you know what an average developer, given a weekend, might be able to produce. It would also help me to see an application written a ton of different ways -- so I can make good decisions about what framework to choose based on my needs.<p>In the end, speed only tells us so much -- and speed is not the only metric we consider when we write applications -- otherwise it looks like most developers would be coding their web apps in Gemini.
Interesting to see Go benchmarks fall right out of the top 10 on the 10GbE machines while Java/JVM and even JavaScript do amazingly well. My guess is that at 10GbE you are now testing how much time the framework spends on the CPU.
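A rough back-of-envelope (my illustrative numbers, not from the benchmark) supports that guess: with small plaintext responses, a 1 GbE link caps throughput at a level fast frameworks can actually hit, while 10 GbE pushes the wire ceiling so high that per-request CPU time becomes the limit first.

```python
# Back-of-envelope: requests/sec ceiling imposed by the NIC alone,
# assuming a hypothetical ~200-byte response including headers.
def wire_limit_rps(link_bits_per_sec, response_bytes):
    """Upper bound on responses/sec if the network is the only bottleneck."""
    return link_bits_per_sec / (response_bytes * 8)

RESPONSE_BYTES = 200  # assumed, not measured

for label, bps in [("1 GbE", 1e9), ("10 GbE", 10e9)]:
    print(f"{label}: ~{wire_limit_rps(bps, RESPONSE_BYTES):,.0f} req/s ceiling")
# 1 GbE  -> ~625,000 req/s ceiling
# 10 GbE -> ~6,250,000 req/s ceiling
```

Since few frameworks generate anywhere near 6M small responses per second, at 10 GbE the ranking shifts toward whoever burns the fewest CPU cycles per request.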
Couple notes:<p>They have a blog post about the results here: <a href="http://www.techempower.com/blog/2014/05/01/framework-benchmarks-round-9/" rel="nofollow">http://www.techempower.com/blog/2014/05/01/framework-benchma...</a><p>If you're running on EC2 and not dedicated hardware (probably most people reading this), be sure to toggle to the EC2 results at the top right of the benchmark.
It is great to see my beloved Ninja framework (fullstack, Java) in standalone mode be one of the best performers in the multiple-queries benchmark (better than 81% of 93 known frameworks) and in data updates (better than 77% of 93 known frameworks).<p>These are the most realistic scenarios for a web app in my opinion.
And Ruby on Rails is... at the bottom of the chart again.<p>It is quite disheartening to see it being 20-50x slower than the best, or 2-5x slower than other similar frameworks.
How is this data useful to someone building a web application? I have used several of these frameworks, at least the JVM-based ones, and I can tell you that it is like comparing apples to oranges. Case in point: Wicket, which I have been using for several years, is a component-oriented framework with a rich set of ready-to-use pre-built components. If on the other hand you are using Netty, you are left to reinvent pretty much everything. Depending on your configuration, it may be that Wicket is returning the response from cache. Compojure and Wicket serve different business use cases.
I was hoping that Snap & Yesod would be run on GHC 7.8. It'll be nice to see what sort of improvements MIO will make, especially on the 40 core machine.
I am surprised that NodeJS on MySQL is much faster than on MongoDB. Is this expected?<p><a href="http://www.techempower.com/benchmarks/#section=data-r9&hw=peak&test=query" rel="nofollow">http://www.techempower.com/benchmarks/#section=data-r9&hw=pe...</a>
There are many frameworks showing as "did not complete". I was interested to see the results for Spray since it did really well in previous rounds, but there are no results for Spray in the latest round.
Could you make the error logs accessible? Some of the frameworks appear to just bleed errors left and right. It would be interesting to see whether they are real errors or just misconfiguration.
I would love to see Varnish in here for some of the tests.<p>For a typical webpage with multiple queries there appears to be around a 5-10x performance gap between slow and fast languages. For things like serving a plaintext or JSON response, where the slow languages are much, much slower, Varnish is a good match.
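For illustration, a minimal Varnish VCL sketch of that idea (hypothetical backend address and TTL, not part of the benchmark) that would cache the identical-for-everyone plaintext/JSON responses in front of a slow framework:

```vcl
vcl 4.0;

# Hypothetical app server address.
backend app {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Briefly cache the cheap, identical-for-everyone endpoints, so the
    # slow framework serves each of them roughly once per second.
    if (bereq.url == "/plaintext" || bereq.url == "/json") {
        set beresp.ttl = 1s;
        unset beresp.http.set-cookie;
    }
}
```

Of course this only helps the tests whose responses are cacheable; the per-user database queries would still hit the framework on every request.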
Nice to see how well PHP is doing. It is still my favorite language.<p>One thing is strange: the HHVM result on the "plaintext" test. How can HHVM only do 938 requests/sec there if it can do 70,471 in the much more complicated "single query" test?
Still scratching my head at the C#/HttpListener results. Ostensibly they should be pretty close to what a native C++ implementation would look like performance-wise, as a good chunk of the work on the raw text results is done by http.sys.