A coworker and I were discussing this yesterday. Our Rails processes take about 50MB of memory each and can each handle only one request at a time. Since our site hits a backend DB that is sometimes slow to return results, a request can last minutes. We could build an out-of-band process to query the backend service and return results asynchronously, but that seems like overkill. Still, it's easy for a handful of users making long-running requests to eat up all the connections quickly.

It seems silly to have an entire Rails process tied up, blocked on results from this backend service, and it seems like a rather severe limitation of Rails' architecture to require 50MB of memory per connection-handling process. How do large sites scale Rails?
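The pattern we'd build doesn't have to be heavyweight, though. Roughly something like this minimal sketch in plain Ruby, where fetch_from_backend and the other names are hypothetical stand-ins for the real client: worker threads do the slow waiting, the web tier enqueues a job and returns a token immediately, and the browser polls for the result.

```ruby
require 'securerandom'

QUEUE   = Queue.new   # pending jobs: [token, query] pairs
RESULTS = {}          # token => result; a shared store would be needed across processes
MUTEX   = Mutex.new

# Stand-in for the slow backend call; replace with the real client.
def fetch_from_backend(query)
  sleep 2                         # simulate a backend that takes a while
  "result for #{query.inspect}"
end

# A small pool of worker threads does the slow waiting, so the
# request-handling process is never blocked on the backend.
4.times do
  Thread.new do
    loop do
      token, query = QUEUE.pop    # blocks until a job arrives
      result = fetch_from_backend(query)
      MUTEX.synchronize { RESULTS[token] = result }
    end
  end
end

# Called from a controller action: enqueue the query and return a token.
def start_query(query)
  token = SecureRandom.hex(8)
  QUEUE << [token, query]
  token
end

# A second controller action calls this; clients poll until it returns non-nil.
def poll(token)
  MUTEX.synchronize { RESULTS.delete(token) }
end
```

In Rails terms, one action would call start_query and render the token, and the browser would hit a polling action backed by poll via AJAX until the result shows up. The expensive per-connection processes then only ever do quick work, while the cheap worker threads absorb the minutes-long waits. Within a single process this works as-is; across a pool of processes you'd back RESULTS with something shared like memcached or the DB.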
I signed up for Scout after reading this. It made me realize that this is a plausible soft failure for my setup, and that if it happened I would be totally in the dark until customers started complaining.

Granted, I should be so lucky as to have load problems, but if I had nice pretty graphs I could plan future VPS upgrades so that it never becomes a problem for my customers.