
Using Rust in non-Rust servers to improve performance

406 points by amatheus, 7 months ago

19 comments

jchw, 7 months ago
Haha, I was flabbergasted to see the results of the subprocess approach, incredible. I'm guessing the memory usage being lower for that approach (versus later ones) is because a lot of the heavy lifting is being done in the subprocess, which then gets entirely freed once the request is over. Neat.

I have a couple of things I'm wondering about though:

- Node.js is pretty good at IO-bound workloads, but I wonder if this holds up as well when comparing against e.g. Go or PHP. I have run into embarrassing situations where my RiiR adventure ended with less performance than even PHP, which makes some sense: PHP has tons of relatively fast C modules for doing some heavy lifting, like image processing, so it's not quite so clear-cut.

- The "caveman" approach is a nice one just to show off that it still works, but it obviously has a lot of overhead because of all the forking and whatnot. You can do a lot better by not spawning a new process each time. Even a rudimentary approach like having requests and responses stream synchronously and spawning N workers would probably work pretty well. For computationally expensive stuff, this might be a worthwhile approach because it is relatively simple compared to approaches that reach for native code bindings.
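A minimal sketch of that worker-pool idea, assuming a hypothetical long-lived Rust binary (./qr-worker) that reads one JSON request per line on stdin and writes one JSON response per line on stdout:

```ts
import { spawn, ChildProcessWithoutNullStreams } from "node:child_process";
import readline from "node:readline";

const POOL_SIZE = 4; // number of long-lived worker processes

interface Worker {
  proc: ChildProcessWithoutNullStreams;
  pending: Array<(response: unknown) => void>;
}

function startWorker(): Worker {
  // stdin/stdout/stderr default to pipes, so the process stays attached.
  const proc = spawn("./qr-worker");
  const pending: Array<(response: unknown) => void> = [];
  // Each worker answers requests strictly in order, so the head of the
  // pending queue always corresponds to the next response line.
  readline
    .createInterface({ input: proc.stdout })
    .on("line", (line) => pending.shift()?.(JSON.parse(line)));
  return { proc, pending };
}

const pool: Worker[] = Array.from({ length: POOL_SIZE }, startWorker);
let next = 0;

export function submit(payload: unknown): Promise<unknown> {
  const worker = pool[next];
  next = (next + 1) % POOL_SIZE; // round-robin dispatch across workers
  return new Promise((resolve) => {
    worker.pending.push(resolve);
    worker.proc.stdin.write(JSON.stringify(payload) + "\n");
  });
}
```

A production version would also want per-worker backpressure, timeouts, and restart-on-exit handling, but even this avoids paying process startup cost on every request.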
eandre, 7 months ago
Encore.ts is doing something similar for TypeScript backend frameworks, by moving most of the request/response lifecycle into Async Rust: https://encore.dev/blog/event-loops

Disclaimer: I'm one of the maintainers.
isodev, 7 months ago
This is a really cool comparison, thank you for sharing!

Beyond performance, Rust also brings a high level of portability, and these examples show just how versatile a piece of code can be. Even beyond the server, running this on iOS or Android is also straightforward.

Rust is definitely a happy path.
xyst, 7 months ago
In my opinion, the significant drop in memory footprint is truly underrated (13 MB vs 1300 MB). If everybody cared about optimizing for efficiency and performance, the cost of computing wouldn’t be so burdensome.

Even self-hosting on an rpi becomes viable.
rwaksmunski, 7 months ago
Pretty sure Tier 4 should be faster than that. I wonder if the CPU was fully utilized in this benchmark. I did some performance work with Axum a while back and was bitten by Nagle's algorithm. Setting TCP_NODELAY pushed the benchmark from 90,000 req/s to 700,000 req/s in a VM on my laptop.
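That comment is about Axum, but the same Nagle's-algorithm pitfall can also skew the Node side of a comparison; a minimal sketch of disabling it on a plain Node HTTP server (not the article's actual setup):

```ts
import http from "node:http";

const server = http.createServer((_req, res) => {
  res.end("ok");
});

// Disable Nagle's algorithm on each accepted socket so small responses are
// flushed immediately instead of waiting to coalesce with later writes.
server.on("connection", (socket) => socket.setNoDelay(true));

server.listen(3000);
```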
pjmlp, 7 months ago
And so what we were doing with Apache, mod_<pick your lang> and C back in 2000 is new again.

At least with Rust it is safer.
ports543u, 7 months ago
While I agree the enhancement is significant, the title of this post makes it seem more like an advertisement for Rust than an optimization article. If you rewrite JS code in a native language, be it Rust or C, of course it's gonna be faster and use less resources.
echelon, 7 months ago
Rust is simply amazing to do web backend development in. It's the biggest secret in the world right now. It's why people are writing so many different web frameworks and utilities - it's popular, practical, and growing fast.

Writing Rust for the web (Actix, Axum) is no different from writing Go, Jetty, Flask, etc. in terms of developer productivity. It's super easy to write server code in Rust.

Compared with Python HTTP backends, the Rust code is so much more defect-free.

I've absorbed 10,000+ qps on a couple of cheap tiny VPS instances. My server bill is practically non-existent and I'm serving up crazy volumes without effort.
Dowwie, 7 months ago
Beware the risks of using NIFs with Elixir. They run in the same memory space as the BEAM and can crash not just the process but the entire BEAM. Granted, well-written, safe Rust could lower the chances of this happening, but you need to consider the risk.
voiper1, 7 months ago
Wow, that's an incredible writeup.

Super surprised that shelling out was nearly as good as any other method.

Why is the average byte count smaller? Shouldn't it be the same size file? And if not, it's a different algorithm, so not necessarily better?
djoldman, 7 months ago
Not trying to be snarky, but for this example, if we can compile to wasm, why not have the client compute this locally?

This would entail zero network hops, probably 100,000+ QRs per second.

If it is 100,000+ QRs per second, isn't most of what we're measuring here dominated by network calls?
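As a rough illustration of that client-side route, here is what calling a wasm-bindgen build of a QR generator from the browser could look like; the ./pkg/qr_wasm.js module and its generate_qr export are hypothetical names, not something from the article:

```ts
// Browser-side sketch: generate the QR code locally, with no server round trip.
import init, { generate_qr } from "./pkg/qr_wasm.js"; // hypothetical wasm-bindgen output

const ready = init(); // fetch + instantiate the .wasm module once, at load time

export async function renderQr(text: string, img: HTMLImageElement): Promise<void> {
  await ready;
  const png: Uint8Array = generate_qr(text); // hypothetical export returning PNG bytes
  img.src = URL.createObjectURL(new Blob([png], { type: "image/png" }));
}
```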
bdahz, 7 months ago
I'm curious what would happen if we replaced Rust with C/C++ in those tiers. Would the results be even better or worse than with Rust?
jinnko, 7 months ago
I'm curious how many cores the test server had, and what the performance would be when handling the requests in native Node with worker threads [1]? I suspect there's an aspect of being tied to a single main thread that explains the difference, at least between tier 0 and tier 1.

1: https://nodejs.org/api/worker_threads.html
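A single-file sketch of that worker_threads approach (assuming a CommonJS build, so __filename points at this compiled file):

```ts
import { Worker, isMainThread, parentPort } from "node:worker_threads";

if (isMainThread) {
  // Main thread: push the CPU-bound work to a worker so the event loop stays free.
  const worker = new Worker(__filename);
  worker.once("message", (result) => {
    console.log("worker finished:", result);
    void worker.terminate();
  });
  worker.postMessage({ text: "hello" });
} else {
  // Worker thread: runs on its own thread with its own event loop.
  parentPort!.on("message", ({ text }: { text: string }) => {
    // Stand-in for the expensive part (e.g. generating a QR code).
    parentPort!.postMessage(text.repeat(100_000).length);
  });
}
```

In practice you would keep a pool of these workers alive rather than spawning one per task, much like the subprocess pool sketched earlier.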
bhelx, 7 months ago
If you have a Java library, take a look at Chicory: https://github.com/dylibso/chicory

It runs on any JVM and has a couple flavors of "ahead-of-time" bytecode compilation.
Already__Taken, 7 months ago
Shelling out to a CLI is quite an interesting path, because that functionality can often be handed out as a separate utility to power users or used for non-automation tasks. Rust makes cross-platform distribution easy.
dyzdyz010, 7 months ago
Make Rustler great again!
demarq, 7 months ago
I didn’t realize calling out to the CLI is that fast.
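For comparison with the pooled version sketched earlier, the per-request call is little more than a promisified execFile; the qr-cli binary name here is a stand-in, not the article's actual tool:

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Run the (hypothetical) Rust CLI once per request and capture its stdout.
export async function qrFromCli(text: string): Promise<Buffer> {
  const { stdout } = await execFileAsync("qr-cli", [text], {
    encoding: "buffer",          // keep the PNG bytes intact
    maxBuffer: 10 * 1024 * 1024, // allow outputs larger than the 1 MB default
  });
  return stdout;
}
```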
lsofzz, 7 months ago
<3
bebna, 7 months ago
For me a "non-Rust server" would be something like a PHP webhoster. If I can run my own Node instance, I can possibly run everything I want.