As someone who was fairly intimately involved in the entire evolution of the MySpace stack, I'm dumbfounded at the number of inaccuracies in this article (actually, it's hard to call them inaccuracies so much as an exercise in "I'm going to write an article based on some stuff I heard from disgruntled people."). I developed in non-Microsoft technologies before and after MySpace, and I can tell you that, like all technologies, the Microsoft web stack has strengths and weaknesses. Performance was a strength, code verbosity was a weakness. Modularity was a strength. Etc. Have any of you encountered a technology where, as much as you like it, you can't rattle off a bunch of problems and things that could be done better?<p>The web tier has very little to do with scalability (don't get me wrong, it has a lot to do with cost, just not scalability, except in subtler ways like database connection pooling)--it's all about the data. When MySpace hit its exponential growth curve, there were few solutions, OSS or not, for scaling a Web 2.0-style company (heavy reads, heavy writes, a hot data set exceeding the memory of commodity caching hardware, which was 32-bit at the time, with extraordinarily expensive memory). No Hadoop, no Redis; memcached was just getting released and had extant issues. It's funny because today people ask me, "Why didn't you use Technology X?" and I answer, "Well, it hadn't been conceived of then :)".<p>At the time, the only places that had grown to that scale were places like Yahoo, Google, eBay, Amazon, etc., and because they were on proprietary stacks, we read as many white papers as we could and went to as many get-togethers as we could to glean information. In the end, we wrote a distributed data tier, messaging system, etc. that handled a huge amount of load across multiple data centers. We partitioned the databases and wrote an ETL tier to ship data from point A to point B and target the indices to the required workload.
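To make the partitioning idea concrete: a minimal sketch of key-based database sharding, the general technique described above. Everything here is hypothetical (the shard names, the hash choice, the function names); it is not MySpace's actual implementation, just an illustration of routing a user's data to one partition.

```python
# Illustrative sketch only: shard names and hashing scheme are hypothetical,
# not MySpace's actual design.
import hashlib

SHARDS = ["db-a", "db-b", "db-c", "db-d"]  # hypothetical partition names

def shard_for(user_id: int) -> str:
    """Map a user ID to a database partition.

    A stable hash (rather than Python's process-randomized hash())
    keeps the mapping consistent across processes and restarts.
    """
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All of a user's hot data lives on one shard, so rendering a profile
# touches a single database instead of fanning out to every partition.
```

The catch, which the ETL tier above addresses, is that many-to-many data (friend lists, comments) spans shards, so cross-partition views have to be shipped and pre-indexed rather than joined at read time.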
All of this was done under a massive load of hundreds of thousands of hits per second, most of which required access to many-to-many data structures. Many startups we worked with, Silicon Valley or not, could not imagine scaling their stuff to that load--many vendors of data systems required many patches to their stuff before we could use it (if at all).<p>Times have changed--imagining scaling to MySpace's initial load is much easier now (almost pat). Key-partitioned database tier, distributed asynchronous queues, big 64-bit servers for chat sessions, etc. But then you factor in that the system never goes offline--you need constant 24-hour access. When the whole system goes down, you lose a huge amount of money, as your database cache is gone, your middle-tier cache is gone, etc. That's where the operations story comes in, wherein I could devote another bunch of paragraphs to the systems for monitoring, debugging, and imaging servers.<p>Of course there's the data story and the web code story. MySpace was an extraordinarily difficult platform to evolve on the web side. Part of that was a fragmentation of the user experience across the site, and a huge part of that was user-provided HTML. It was very difficult to do things without breaking people's experiences in subtle or not-so-subtle ways. A lot of profile themes had images laid on top of images, with CSS that read, "table table table table...". Try changing the experience when you have to deal with millions of HTML variations. In that respect, we dug our own grave when it came to flexibility :).<p>Don't get me wrong, there were more flaws to the system than I can count. There was always something to do. But as someone who enjoys spending time on both the Microsoft and OSS stacks, I can tell you it wasn't MS tech that was the problem, nor was it a lack of engineering talent. I am amazed and humbled at the quality of the people I worked beside to build out those systems.