
When Node.js is the wrong tool for the job

85 points by vmware505 over 8 years ago

18 comments

klodolph over 8 years ago
It seems like a lot of JavaScript developers are repeating things like "more people know JavaScript so you don't have to learn a new language, which saves you time". I don't get it. In my experience, if you find a good developer, they can pick up C#, Swift, or Go pretty quickly, and if you can't find a good developer, the fact that they already know JavaScript is not much of an advantage. Even if the developer you hire already knows your language, they're going to be spending time learning your code base and how your organization works (shared repo? PRs? Feature branches? Code review? Coding standards?)

That, and node.js developers seem to repeat the claim that node.js makes delivery faster… but is it really any faster than ASP.NET, Rails, Django, or Go stdlib? Those frameworks are so fast for prototyping and delivering bread-and-butter apps as it is (and some of them let you do multithreading to boot).

I'm also really not interested in how things work for "typical CRUD apps" because those are so trivial to write in any decent environment.

I'm worried that node.js articles are the same kind of echo chamber that Rails articles were 10 years ago.
smokeyj over 8 years ago
> As node.js is not multi-threaded, we spin up 4 instances of node.js per server, 1 instance per CPU core. Thus, we cache in-memory 4 times per server.

And why not use a shared memory server?

> Operations started adding rules with 100,000s of domains, which caused a single set of rules to be about 10mb large ... If we weren't using node.js, we could cut this bandwidth by 4 as there would only be one connection to the Redis cluster retrieving rule sets.

*Maybe a 10mb JSON string isn't the best design decision...*

Or you know, you could have one node process connect to the Redis server, and have the local processes read from a shared memory server. Or you could not store your rules as a 10mb friggin JSON string.

> When rule sets were 10mb JSON strings, each node.js process would need to JSON.parse() the string every 30 seconds. We found that this actually blocked the event loop quite drastically

Well then do it in another thread and save it to shared memory. Maybe, just maybe, JSON strings aren't the tool for the job here.
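The event-loop stall quoted above is easy to demonstrate: JSON.parse() is synchronous, so a multi-megabyte string stops everything else for its duration. A minimal Node sketch, with a rule-set shape invented purely for illustration:

```javascript
// Build a rule set roughly the size described in the article (~several MB
// serialized). The key/value shape here is hypothetical.
const rules = {};
for (let i = 0; i < 100000; i++) {
  rules[`domain${i}.example.com`] = { action: 'block', weight: i % 10 };
}
const blob = JSON.stringify(rules); // stands in for the string fetched from Redis

// Time the parse. Nothing else on the event loop can run while it executes.
const start = process.hrtime.bigint();
const parsed = JSON.parse(blob);
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`parsed ${Object.keys(parsed).length} rules in ${elapsedMs.toFixed(1)} ms`);
```

A blob this size can take tens of milliseconds to parse; repeated every 30 seconds in each worker, that is a recurring stall on a latency-sensitive proxy.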
rodp over 8 years ago
While I agree Node.js isn't the right tool for every job -- just like anything else, really -- after reading his description of the problem, I can't shake off this feeling that the main issues he has with performance in this case have very little to do with Node itself. Parsing a huge JSON string in any language would block the CPU for a while. This JSON then becomes a huge hash table in memory, so no wonder each process uses up a lot of RAM. I don't know how these rules are then used, but it seems to me he might be better off trying to rethink how to do shared memory in this case before he simply blames Node for blocking the CPU and wasting memory.

That said, I can imagine other languages (like Java or Go) could still end up being more efficient than Node.
tyingq over 8 years ago
*"Operations started adding rules with 100,000s of domains, which caused a single set of rules to be about 10mb large"*

There's not enough detail to be sure, but this sounds more like *"when a relational database would be a better idea than redis."*

Edit: That is, pushing the evaluation of the rules down... rather than pulling a KV and walking 10MB (of JSON?) to get to the small number of rules that apply for the transaction.
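The point about pushing evaluation down can be restated in miniature without a database: the difference between scanning the whole pulled rule set per transaction versus indexing it so each lookup touches only the rules that apply. A hypothetical sketch (names and rule shapes invented for illustration):

```javascript
// 100,000 rules, as in the article's scenario.
const rules = [];
for (let i = 0; i < 100000; i++) {
  rules.push({ domain: `d${i}.example.com`, action: i % 2 ? 'allow' : 'block' });
}

// "Walk the 10MB blob" style: a linear scan on every request.
function lookupScan(domain) {
  return rules.find((r) => r.domain === domain);
}

// "Push the evaluation down" style: build one index once, then O(1) per request.
// A relational database's index does the same job, server-side.
const byDomain = new Map(rules.map((r) => [r.domain, r]));
function lookupIndexed(domain) {
  return byDomain.get(domain);
}

console.log(lookupScan('d99999.example.com').action,
            lookupIndexed('d99999.example.com').action); // allow allow
```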
binocarlos over 8 years ago
This is an excellent article which really highlights the underlying trade-offs when you choose node for your service (i/o-bound work vs CPU).

Unless you know for sure what limits you will hit, it makes sense to iterate quickly and find out. Then, if the service is actually hitting limits (and probably not the ones you thought), re-write it in a multi-threaded concurrent language like Go, Elixir, etc. -- or a language designed to solve the actual problems the service is hitting (which might be disk i/o or other infrastructure-level things, not language choice).
dlojudice over 8 years ago
They could have fixed part of the architecture by having a "cache service" process (4 CPUs: 3 for proxies, 1 for the cache service). With that they'd have a single point consuming their limited resources (memory, CPU, and sockets for Redis connections), using IPC to communicate between processes.
neebz over 8 years ago
JSON.parse() is one issue we faced regularly. Fetching any large amount of data could block the event loop and slow the whole server down. It's very unforgiving.

We go to great lengths to figure out which attributes to fetch and add limits to all our SQL queries. These are best practices anywhere, but with node they are a must.
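That practice can be enforced mechanically: refuse to build a query without an explicit limit. A toy sketch, using plain string assembly purely for illustration (not a real query-builder API, and without the escaping a production builder would need):

```javascript
// Build a SELECT that always names its columns and always caps its rows,
// so no single response can balloon into a multi-megabyte parse.
function buildQuery(table, attributes, limit) {
  if (!Number.isInteger(limit) || limit <= 0) {
    throw new Error('a positive LIMIT is mandatory, not optional');
  }
  const cols = attributes.length ? attributes.join(', ') : '*';
  return `SELECT ${cols} FROM ${table} LIMIT ${limit}`;
}

console.log(buildQuery('events', ['id', 'domain'], 500));
// SELECT id, domain FROM events LIMIT 500
```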
wehadfun over 8 years ago
> JavaScript is my first non-mathematical programming language and I haven't found the need to expand my programming skills to more

Having a hard time taking anything this guy says seriously.
yahyaheee over 8 years ago
I debated between learning Node and Go for my latest project. I took a couple of days doing beginner tutorials on each, and Go was actually a lot easier for me to learn. Could just be my background, but I know a couple of other people who picked it up in about a week too; it's surprisingly simple.
suzzer99 over 8 years ago
> On each server, rules are retrieved from Redis and cached in-memory using an LRU-cache. As node.js is not multi-threaded, we spin up 4 instances of node.js per server, 1 instance per CPU core. Thus, we cache in-memory 4 times per server. This is a waste of memory!

This is completely standard and the only way to do node in-memory caching. Think of each worker as a completely independent node process, which is only bound to the cluster by a master process which has the ability to spawn and kill child cluster processes.
stevebmark over 8 years ago
re: multiple processes duplicating memory, would a single memcached instance or similar solve this problem? I don't have any perspective on how that would perform at scale vs individual processes reading from application state. Although thinking about it, each process would probably have to store all that data in app memory anyway...
cbem over 8 years ago
It was a very unfortunate decision for Node devs to deep-six multithreaded web workers. A pull request implementing it was ready to go, with an optional flag to enable it, but they did not want to support it. So node will forevermore be compute-bound to a single thread, blocking all I/O.
tannhaeuser over 8 years ago
I'd also add that node.js might not be the right choice for complex backend business logic with lots of service calls, because Node.js' always-async execution model tends to make things more complicated than they need to be.
donatj over 8 years ago
"Usually" /snark
hitgeek over 8 years ago
Good, detailed write-up.

There were probably opportunities for the author to architect the system in ways that were better suited to node (given that was the chosen platform), but the architecture choices were not unreasonable by design. These are some good things to consider when architecting a system, and when considering node as the platform.

I'm not sure I agree that node is "perfect for simple CRUD apps", though.
leshow over 8 years ago
I think it&#x27;s a lot closer to 10x as fast for Rust and 6-7x for Go.
BuuQu9hu over 8 years ago
When WebAssembly comes, what will that mean for the node.js ecosystem?
hmans over 8 years ago
Always.