I've never been attracted to SPDY, as I had been doing HTTP 1.1 pipelining for years before SPDY appeared. Pipelining is great for getting lots of data from the same source, though I don't know how much it helps with the way typical webpages are built these days, with so many connections pulling in offsite resources, many of which offer the user zero value (they benefit the site owner only). That's a problem with how people are designing websites, not something that's solved with a transmission protocol.

SPDY did offer a couple of new things over 1.1.

SPDY wants to compress headers, but I asked "Why?"

What exactly are they planning to put in the headers to make them so big they need compressing? Normal headers are not large. And headers can actually be quite useful after pipelining a large amount of data that you want to carve into pieces later; they act like record separators.

Another addition was de-serialization. But I see nothing wrong with receiving the data in the order I requested it. If the transmission is interrupted, at least I can restart where I left off.

I'm just not convinced that compressing headers or de-serializing adds enough of a speed boost to justify a new protocol.

And now, with this exploit, I don't need to think about SPDY anymore, except as a potential security hole.

Sorry, SPDY fans, but this is just my opinion as a dumb end user. Pipelining worked just fine before SPDY. Alas, no one used it. Why? Maybe because it didn't have a catchy acronym and a corporate brand behind it. I honestly don't know, because it's efficient and makes perfect sense. I used it. And I still do.

I will never use SPDY, especially not now. (It's on by default in Firefox, but you can disable it.)
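
For anyone who hasn't seen pipelining up close, here is roughly what it looks like from the client side: a minimal sketch over a raw socket in Python. The host and paths are placeholders, and the server has to support keep-alive and actual pipelining for all three responses to come back on the one connection.

    # Minimal sketch of HTTP 1.1 pipelining over a single TCP connection.
    # HOST and PATHS are placeholders; the server must honor keep-alive
    # and pipelining for this to return all three responses.
    import socket

    HOST = "example.com"
    PATHS = ["/a.html", "/b.html", "/c.html"]

    with socket.create_connection((HOST, 80)) as sock:
        # Send every request back to back, without waiting for responses.
        for path in PATHS:
            request = (
                f"GET {path} HTTP/1.1\r\n"
                f"Host: {HOST}\r\n"
                "Connection: keep-alive\r\n"
                "\r\n"
            )
            sock.sendall(request.encode("ascii"))

        # Responses arrive serialized, in the same order as the requests;
        # each response's headers act as the record separator between them.
        sock.settimeout(5)
        chunks = []
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass

    print(b"".join(chunks).decode("latin-1", errors="replace")[:2000])

The point of the sketch is the ordering: because the responses come back in request order, splitting them apart again is just a matter of scanning for the next header block, which is exactly the "record separator" behavior described above.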