This is an interesting question if you're just curious, but I wouldn't base any decisions on it. It's a good example of asking the wrong question: different products require different approaches to QA and deployment.

For example, our software is sold to enterprise customers. These customers use it to run real-time reverse auction procurements where the total value of the purchase can be anywhere from $250,000 to $20,000,000.

The results of the reverse auction events are awarded in the form of a contract between buyer and seller. If a software bug causes an incorrect calculation, it's a big problem (a 0.1% error on a $10M purchase is a $10,000 error). And by big problem, I mean that weeks' (maybe months') worth of effort from our team, the customer's team, and several vendors goes down the drain.

Put simply, a bug in our bid core could cost us $40,000-$50,000, assuming the lost-business assessment is limited to the single failed procurement event. Looking at the total value of a lost customer, you're easily talking $250k.

Because of this, we have a very long QA cycle. Beyond automated tests, our software is touched by humans (a lot) before it goes to production, and production releases are infrequent (once every couple of months).

The frequency with which you deploy, and the amount of effort you put into QA, is determined by the cost of failure. We're in the rather unenviable position of being a small fish in the enterprise market, so our cost of failure is huge (we have a small number of very large clients). Thus we invest significant effort in QA and slow-roll releases to production. Your situation may be entirely different.
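To make the arithmetic concrete, here's a minimal sketch of the error-cost calculation above. The function name and the breakdown are illustrative, not the author's actual model; only the $10M / 0.1% figures come from the comment.

```python
# Illustrative sketch only: bug_error_cost is a hypothetical helper,
# not part of any real bidding system.

def bug_error_cost(purchase_value: float, error_rate: float) -> float:
    """Dollar impact of a calculation error of `error_rate`
    (as a fraction, e.g. 0.001 for 0.1%) on a single purchase."""
    return purchase_value * error_rate

# A 0.1% calculation error on a $10M procurement is a $10,000 error.
print(round(bug_error_cost(10_000_000, 0.001), 2))
```

For real currency calculations you'd want exact decimal arithmetic (e.g. Python's `decimal` module) rather than binary floats, which is part of why a bid engine like this deserves careful QA in the first place.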