I came across Node around two years ago, wrote a little project to play with it, and encountered its streams. I wasn't prepared for how low-level their behavior was. It was almost like being inside the kernel, except with a veneer of JavaScript. If you had a fairly large amount of data to write out, and you wrote it too rapidly on the assumption that the stream would buffer for you, it was easy to trigger pathological behavior. Each write() from the application appeared to trigger an immediate system call to push that data to the kernel. The call was nonblocking, so once the kernel buffer filled up, the kernel would start rejecting further data. Node would then apparently set a timer and keep retrying the syscall until all the data was flushed. If you kept sending data too fast, huge numbers of pending I/Os would pile up, and the system would eventually be hammered by syscalls as fast as Node could issue them, effectively crashing the app.

From a technical design standpoint, this seemed like behavior you'd always have to be aware of, but in practice it didn't seem to be an issue with network I/O. The reason, I'm guessing, has to do with the nature of network I/O and the kind of traffic servers, especially Node servers, face. Interactive low-latency applications never send enough data downstream to trigger it, and in other network applications the random ebb and flow of traffic tends to mitigate it.

I haven't really kept up with Node, so I don't know how things may have changed in the past two years. It looks like this new API is an attempt to make streams easier overall, and that's good.