Unrelated, but just have a look at the comments on that post. It's hilarious because almost every comment says the site has become slow. Engineers measure load time their own way; users experience load time their own way.
Interesting article, but faster is only useful when everything works.

I'd much rather see a real concentrated effort on getting rid of the constant errors that pop up on Facebook: "Oops! Something went wrong", "chat not available at this time", etc.
It's good to make things faster, but their fix of "writing a library" instead of rewriting the same functions over and over again is really, really basic. It really took them that long to do that?

And once again, Facebook creates a completely custom solution for no real reason. They don't see any advantage in basing this on jQuery or similar if they're rewriting the JavaScript anyway?

Are we supposed to be impressed here?
The big_pipe technique used on the homepage is quite cool. The initial request just returns <script> blocks as each partial is rendered. I wonder how they handle it on the server side; they use PHP, so they're not using any kind of threading.

Perhaps the initial PHP request passes off most of the heavy lifting to their backend services over asynchronous Thrift calls. If that's the case, their PHP layer wouldn't be doing much work, which doesn't really square with them releasing HPHP.
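To make the flushing pattern concrete, here's a minimal sketch of the idea in Node/TypeScript (not Facebook's actual implementation, and the pagelet names and fetchPagelet helper are made up): the server flushes an HTML skeleton immediately, then writes a <script> block for each "pagelet" as soon as its simulated backend call finishes, so whichever section is ready first appears first.

```typescript
// Minimal sketch of BigPipe-style chunked flushing (illustrative only).
import * as http from "http";

// Hypothetical pagelet fetcher standing in for an async backend (e.g. Thrift) call.
function fetchPagelet(id: string, delayMs: number): Promise<string> {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`<div>content for ${id}</div>`), delayMs)
  );
}

http
  .createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/html" });
    // 1. Flush the page skeleton with empty placeholders right away.
    res.write(`<html><body>
      <div id="newsfeed">loading...</div>
      <div id="chat">loading...</div>`);

    const pagelets = [
      { id: "newsfeed", promise: fetchPagelet("newsfeed", 300) },
      { id: "chat", promise: fetchPagelet("chat", 100) },
    ];

    // 2. As each pagelet resolves, flush a <script> block that fills in its
    //    placeholder. They arrive in whatever order the backend finishes.
    let remaining = pagelets.length;
    for (const { id, promise } of pagelets) {
      promise.then((html) => {
        res.write(
          `<script>document.getElementById(${JSON.stringify(id)}).innerHTML = ${JSON.stringify(html)};</script>`
        );
        if (--remaining === 0) {
          res.end("</body></html>");
        }
      });
    }
  })
  .listen(8080);
```

Because the response is streamed chunk by chunk, the browser can render each section as its script block arrives instead of waiting for the slowest backend call.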
"We noticed that a relatively small set of functionality could be used to build a large portion of our features yet we were implementing them in similar-but-different ways."<p>Wait, what? They didn't do OOP in JS?