Lisp (Common Lisp) beats all the other dynamic languages by a considerable margin. This is why I am developing Clasp - a Common Lisp implementation based on LLVM that interoperates with C++/C (<a href="https://github.com/clasp-developers/clasp.git" rel="nofollow">https://github.com/clasp-developers/clasp.git</a>) for scientific programming.<p>With Clasp, we get the best of multiple worlds. We get a dynamic language (Common Lisp) with automatic memory management and enormous expressive power that can directly use powerful C and C++ libraries. All three of these languages are "long-lived" languages in that code written 10 and 20 years ago still works.<p>Performance is really important to me, and I have written a lot of code over the past four decades. I won't develop meaningful code in any language that falls below Racket in table 4, because those language implementations are too inefficient. I furthermore want to keep using my code over the years and decades, so I won't develop meaningful code in any language where someone else can break my code by changing the standard. My program "leap" was written 27 years ago in C and is still used daily by thousands of computational chemists. But it's really hard to improve leap because the code is brittle, largely due to malloc/free-style memory management (brrr). For a compiled, high-performance, standardized language with proven staying power - Common Lisp is the best choice.
Something I've been thinking a lot about lately is environmental friendliness in software, given that data centers contribute 3% of global greenhouse emissions (the same amount as the entire airline industry).<p>I'm thinking along the lines of using interpreted languages less on the server side for efficiency, but also relying on JS less on the client side and using WASM where it makes sense.<p>This has stemmed from me learning Go last year and being struck by how much faster it is than Node for my use cases (API development and data processing).<p>Where I am curious to see the total impact is how we can take advantage of economies of scale to save money and increase efficiency. I'm thinking along the lines of scale-to-zero, event-driven architectures.<p>Google Cloud, for example, claims they operate their data centers with industry-leading efficiency while also being powered by renewable energy. At scale, do services like Cloud Run or Cloud Functions actually make a difference?
I did some research on this topic in university, and our consistent result for CPU-based programs was: if it finishes faster, it uses less energy, and vice versa.<p>So it's no surprise to see that VM-based programs use more energy; they're slower.
So, the Steve Jobs rumor about only allowing compiled programs on the original iPhone was right? There is about a 4x energy increase going to a VM'ed language and about a 19x increase going to a fully interpreted one, versus a natively compiled language.<p>So the energy-efficiency penalty is actually worse than the performance loss in general.
I was really interested in how they measured power, because there is a ton of nuance there.<p>They used the metric reported by a tool that limits average power via a programmable power limiter in hardware, which is an interesting way to do it. Totally valid, but I really wish they provided more detail here. For example, did all workloads run at the limit all the time? Presumably they did. Limit-based throttling is a form of hysteretic control, so the penalty part will be critical. How often and when the limit is hit will be critical too.
> In order to properly compare the languages, we needed to collect the energy consumed by a single execution of a specific solution.<p>With this, Java ranking in the top 5 is quite impressive, considering that JIT optimisations wouldn't really have kicked in. My hypothesis is that if the Java program were allowed to run a few more times before being measured, it would rank higher.<p>And, along the same lines, couldn't the other compiled and VM-based languages (Common Lisp, Racket) be JIT-optimised?
Funny (old) energy efficiency story that used to be published online, but I can't find it.<p>It's about the first handheld scanner for a large shipping company. The hardware was engineered and nailed down, and a team was contracted to write the software. They got about halfway through and said the box didn't have enough ROM to handle all the features in the spec. The company contracted Forth, Inc. to try to salvage the project, and that was possible because they used their proprietary Forth VM and factored the heck out of the code so they could reuse as much as possible, and got it all to fit. (A common Forth trick.)<p>Ten years later, a new device was commissioned and management was smarter! They made sure there was a lot more memory on board, and a new contracted team finished the job. In the field, however, the batteries would not last an entire shift...<p>Forth, Inc. was called again. They made use of their cooperative tasking system to put the machine to sleep at every reasonable opportunity and wake it on user input.<p>Maybe it ain't the language that matters as much as the skill and imagination of the designers and coders. Just sayin'
Marginal differences should be ignored in this kind of benchmark.<p>It is usually well accepted that faster execution leads to lower power usage, as long as the CPU is operating in a reasonable thermal envelope.<p>Nothing new here, except that we can have a better grasp of the different orders of magnitude.
I wonder what carbon impact the use of those inefficient dynamic languages has had, in both desktop and backend environments?<p>I imagine it's substantial and worth considering.
These are computationally heavy workloads. Do most HN programmers really work in those domains? Is most computational work done today even in those domains (possibly, given the amount of streamed video, but most of us are not writing video streamers either)? Maybe a more interesting workload to test would be parsing medium-large random JSONs issued by concurrently connecting entities. It would also be worth comparing the same setup under a low-workload scenario versus a high-workload one, possibly also comparing orchestration engines (e.g. Kubernetes autoscaling).<p>I'd also be curious to probe "worst case" scenarios. Can you make Kubernetes thrash really badly spinning up and killing containers, and how much of an effect does that have on energy consumption?
Huh - I somehow submitted the same message twice. Hacker News doesn't let me delete it. So I'll edit it down - see the version above about Common Lisp and our implementation of it called Clasp.
It's worth noting that since this was published several years ago, and Rust has come a long way since then, it might very well top most of these benchmarks nowadays.
Anything that seems to demonstrate C++ as slower than C is implicitly busted. You could compile the C code with the C++ compiler and get the same speed.<p>I'm looking at their table 4, with C:1.0, C++:1.56.<p>This throws the whole paper into doubt. Comparing crappy code in one language with good code in another reveals little of substance.
It would be interesting to see similar research done for distributed systems as well. There, one would have to choose both a programming language and a library or framework for distributed-systems development, if the language or its runtime environment does not offer such support out of the box.
This is cool.<p>Now, can we get a comparison of these results vs. LOC?<p>I feel like almost any assessment of programming languages should have a table weighting the results based on how many lines of code it took to get that result.
That Erlang is relatively power inefficient doesn't surprise me. I wonder how much of that is due to the "busy wait" it uses to reduce the latency in message processing.
I'm surprised that Rust is ahead of C++ in their ranking, and by quite a bit. I tend to use C++ as "C with STL and smart pointers", basically (à la Google); I don't see why it'd be any slower or less energy efficient than C.