
An Introduction to Node's New Streams

76 points by calvinfo · almost 12 years ago

3 comments

rgarcia · almost 12 years ago
The new node streams API is nice, but for processing a lot of data like this we've found them unsuitable for a few reasons:

1) objectMode is considered an abomination by most of node-core [1], so if you're putting anything other than a string or a buffer through a stream, you're "doing it wrong" (I disagree, but I also am not a node maintainer).

2) If you want to process data in parallel you're out of luck, since Writable handles writes one at a time. We've created some workarounds but I'm not super happy with them [2].

3) Once you jerry-rig Writable/Transform to run in parallel, you're stuck with one core. We've also created a workaround for this [3].

We've started moving towards go's channels because of all these issues.

[1] https://github.com/joyent/node/pull/4835

[2] https://github.com/Clever/writable-stream-parallel, https://github.com/clever/understream

[3] https://github.com/Clever/async-forkqueue
Comment #5950706 not loaded
Comment #5949813 not loaded
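To make point 2 above concrete: the stream machinery hands a Writable one chunk at a time and won't deliver the next until the _write callback fires. The workarounds rgarcia links get around this by acknowledging writes early and tracking in-flight jobs themselves. The sketch below illustrates that idea against the current streams API (class syntax and _final need Node 8+); ParallelWriter and saveToDb are invented names, not code from the linked libraries.

```js
// A sketch only -- not the code from writable-stream-parallel or understream.
const { Writable } = require('stream');

class ParallelWriter extends Writable {
  constructor(worker, concurrency = 4) {
    super({ objectMode: true });
    this.worker = worker;        // async (item) => Promise, supplied by the caller
    this.concurrency = concurrency;
    this.inFlight = 0;
    this.pendingAck = null;      // a _write callback waiting for a free slot
    this.finalCb = null;         // the _final callback, once end() has been called
  }

  _write(item, _enc, callback) {
    this.inFlight++;
    this.worker(item).then(() => this._jobDone(), err => this.destroy(err));
    if (this.inFlight < this.concurrency) {
      callback();                 // ack immediately so more chunks keep coming
    } else {
      this.pendingAck = callback; // at the limit: hold the ack until a job ends
    }
  }

  _jobDone() {
    this.inFlight--;
    if (this.pendingAck) {
      const ack = this.pendingAck;
      this.pendingAck = null;
      ack();
    }
    if (this.inFlight === 0 && this.finalCb) this.finalCb();
  }

  _final(callback) {
    // Delay the 'finish' event until every outstanding job has settled.
    if (this.inFlight === 0) callback();
    else this.finalCb = callback;
  }
}

// Usage: up to 8 records are persisted concurrently instead of one at a time.
// someObjectStream.pipe(new ParallelWriter(record => saveToDb(record), 8));
```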
bcoates · almost 12 years ago
I love the new streams API. A Node 0.8-based project of mine needed a messy, callback-filled ad-hoc flow-control system, because advisory I/O pauses weren't enough to give bounded memory usage (in practice, not just in theory), and the general lameness of the old streams meant I minimized their use.

Rewriting the I/O pipeline into a handful of Streams2 transforms let me ditch a few hundred lines of ugly code, and what's left is vastly less complicated.

I agree with the author that it's kind of lame that objectMode streams appear to be second-class. I guess the Node people think everything should be a raw data pipeline annotated with events for a data channel, like 'npm tar' does? I can't quite wrap my brain around the object-hate.
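For readers who haven't used Streams2: the kind of rewrite described above usually ends up as a few small Transform stages wired together, with the stream machinery doing the pause/resume bookkeeping. Here is a rough sketch of that shape, not bcoates's actual pipeline; stream.pipeline needs Node 10+, and the file names and filter rule are invented.

```js
const { Transform, pipeline } = require('stream');
const fs = require('fs');

// Split incoming text into newline-delimited JSON records (objects out).
const parseLines = new Transform({
  readableObjectMode: true,
  decodeStrings: false,          // keep the utf8 strings the read stream emits
  transform(chunk, _enc, callback) {
    try {
      this.tail = (this.tail || '') + chunk;
      const lines = this.tail.split('\n');
      this.tail = lines.pop();   // keep the trailing partial line for next time
      for (const line of lines) if (line) this.push(JSON.parse(line));
      callback();
    } catch (err) {
      callback(err);
    }
  },
  flush(callback) {
    try {
      if (this.tail) this.push(JSON.parse(this.tail));
      callback();
    } catch (err) {
      callback(err);
    }
  },
});

// Keep only the records we care about and serialize them back to text.
const keepLarge = new Transform({
  writableObjectMode: true,
  transform(record, _enc, callback) {
    if (record.bytes > 1024) this.push(JSON.stringify(record) + '\n');
    callback();
  },
});

// Backpressure between every stage is handled by the stream machinery,
// which is what replaces the hand-rolled pause/resume flow control.
pipeline(
  fs.createReadStream('requests.ndjson', 'utf8'),
  parseLines,
  keepLarge,
  fs.createWriteStream('large-requests.ndjson'),
  err => { if (err) console.error('pipeline failed:', err); }
);
```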
mtdewcmu · almost 12 years ago
I came upon node around two years ago, wrote a little project to play with it, and encountered its streams. I wasn't prepared for how low-level their behavior was. It was almost like being inside the kernel, except with a veneer of Javascript. If you had a fairly large amount of data to write out, and you wrote it too rapidly with the expectation that the stream would buffer for you, it was easy to trigger pathological behavior. It appeared that each write() from the application triggered an immediate system call to write that data to the kernel. The call was nonblocking, so once the kernel buffer filled up, the kernel would start rejecting any more data. node would then apparently set a timer and keep retrying the syscall until all the data was gone. If you kept sending data too fast, huge numbers of pending I/Os would rapidly build up, and the system would eventually be getting hammered by syscalls as fast as node could send them, effectively crashing the app.

From a technical design standpoint, this behavior seemed to me like something you'd always have to be aware of, but, in practice, it didn't seem to be an issue with network I/O. The reason, I'm guessing, had to do with the nature of network I/O and the kind of traffic servers, especially node servers, face. Interactive low-latency applications would never send enough data downstream to trigger it. In other network applications, the random ebb and flow of network traffic would tend to mitigate it.

I haven't really kept up with node, so I don't know how the status might have changed in the past two years. It looks like with this new API they're trying to make streams easier overall, and that's good.
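The pathological pattern described here is what the write()/'drain' idiom is meant to prevent: write() returns false once the stream's internal buffer passes its high-water mark, and the caller is expected to stop until 'drain' fires instead of queueing unbounded data. A minimal sketch of that idiom follows; the file name and line count are made up.

```js
const fs = require('fs');

// Write a large number of lines while respecting backpressure from the stream.
function writeLotsOfLines(path, totalLines, done) {
  const out = fs.createWriteStream(path);
  let i = 0;

  function writeSome() {
    let ok = true;
    while (i < totalLines && ok) {
      // write() returns false once the internal buffer is past highWaterMark.
      ok = out.write(`line ${i}\n`);
      i++;
    }
    if (i < totalLines) {
      out.once('drain', writeSome);   // pause until the buffer has flushed
    } else {
      out.end(done);                  // finish; done runs once everything is out
    }
  }

  writeSome();
}

writeLotsOfLines('out.txt', 1e6, () => console.log('all written'));
```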