The server hosting one of our live apps is under high load and, several times a day, gets into a state where it stops responding to HTTP requests. These outages typically last anywhere from a few minutes up to half an hour or so.<p>The major culprit here is MySQL load, and we've been working on optimizing this (those interested can refer to my previous thread: https://news.ycombinator.com/item?id=6348903).<p>For now we've found that restarting the httpd and mysqld services brings things back to normal almost immediately.<p>While we continue to work on a more elegant solution, we're thinking of writing a bash/shell script that runs on an hourly cron, checks the load average (via uptime or /proc/loadavg), and restarts the services if it's above a threshold.<p>Can anyone think of any downside to this (used at least as a temporary measure)?
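A minimal sketch of that watchdog script, assuming a load threshold of 8 and SysV-style service commands (both are placeholders; tune the threshold to your core count):

    #!/bin/bash
    # Restart services if the 1-minute load average exceeds a threshold.
    THRESHOLD=8

    # The first field of /proc/loadavg is the 1-minute load average.
    load=$(cut -d ' ' -f 1 /proc/loadavg)

    # Compare as floats via bc; bash arithmetic is integer-only.
    if [ "$(echo "$load > $THRESHOLD" | bc)" -eq 1 ]; then
        logger "load $load above $THRESHOLD, restarting httpd and mysqld"
        service mysqld restart
        service httpd restart
    fi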
I've had load issues like this in Postgres, not MySQL, and they were due to autovacuum running on tables that were being updated/inserted frequently. I'm not sure what the MySQL equivalent is, but if you have a ton of insert/update queries, consider archiving rows after a certain period of time so that your main table doesn't have a ton of rows. You can consider sharding across servers, of course, but also consider splitting the actual tables within the same database. An insert into a table with 1,000 rows will take much, much less time than an insert into a table with 100 million rows, all else being equal.<p>Also, consider creating a buffer in the application layer that batches inserts/updates and executes them as a single transaction, if they don't need to be executed immediately. That puts less stress on the database. Of course, this would require a lot of rewriting in your app, so I'm not sure you want to go down this route.<p>Indices are another area. I'm sure plenty of people have told you to optimize your indices, but also consider REMOVING unnecessary ones. Do you have an index on a text column, or on multiple varchar columns? Those can be killers after a while because inserts slow down. Consider replacing indices on varchar columns with an index on an int column that stores a hash of the string (see the sketch below).<p>A quick suggestion: install NewRelic (it's free for a certain period) and check out the database transactions that are taking up the most CPU load. Sometimes there's that one query you overlooked that is table scanning and could be the main culprit.<p>Also, are you using Rails by any chance? If so, there are other areas I can suggest.<p>And please post your server specs. Maybe your VPS just sucks (no offense), and the easiest route is simply to upgrade your server.
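A sketch of the hashed-index idea mentioned above, run through the mysql client (the database, table, and column names are made up for illustration; run schema changes like this during a quiet window on a big table):

    mysql mydb <<'SQL'
    -- Replace a wide varchar index with an index on a 32-bit hash.
    ALTER TABLE users ADD COLUMN email_hash INT UNSIGNED;
    UPDATE users SET email_hash = CRC32(email);
    CREATE INDEX idx_email_hash ON users (email_hash);
    -- Lookups filter on the hash first, then verify the string,
    -- since CRC32 can collide:
    --   SELECT * FROM users
    --   WHERE email_hash = CRC32('a@b.com') AND email = 'a@b.com';
    SQL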
Restarting the processes isn't really solving the problem. What is the actual bottleneck? It should be possible to figure this out.<p>I think your first problem is that you are using a VPS. You should never use a VPS in a high-load situation like this - buy a dedicated server! They only cost about $70/month, which you should be able to afford if you have a successful site. Ideally you should get as much RAM as you can afford and/or an SSD.<p>I know you said you didn't want to throw hardware at the problem, but there are limits - you can't run a massive database on crappy hardware and expect it to work smoothly.
> For now we've found that restarting the httpd and mysqld services brings things back to normal almost immediately.<p>You need to examine the restart process and analyze why it resolves the issue. If the reason is that it clears out orphaned processes and leaked memory, you need to find out why those accumulate and correct it (see the sketch below for one way to gather data). If the reason is that the restart unceremoniously drops all the current transactions, you need to increase capacity.<p>> Can anyone think of any downside to this (used at least as a temporary measure)?<p>I certainly can -- a bunch of really irritated visitors whose transactions are abandoned. But that's only true if that is actually what's going on. Make sure you don't have software issues that are preventing efficient operation. If that's not the issue, you need to grow with your customer base -- increase server capacity.
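One rough way to gather that data before reaching for the restart: log the top memory consumers every few minutes and see whether httpd/mysqld RSS grows steadily between restarts (the log path here is a placeholder):

    # Snapshot the ten biggest processes by resident memory.
    echo "--- $(date)" >> /var/log/memwatch.log
    ps aux --sort=-rss | head -n 10 >> /var/log/memwatch.log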
It's a crude check, but since it's temporary, fine - an hour seems far too long an interval, though. You check at 10:00 GMT; by 10:05 GMT your server is in trouble and has 55 minutes to crap out before the next check. I would check at least every 5 minutes (*/5).
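In crontab terms, that change is just (the script path is a placeholder):

    # was: run once an hour
    # 0 * * * * /usr/local/bin/load_check.sh

    # check every 5 minutes instead
    */5 * * * * /usr/local/bin/load_check.sh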
I would be really worried about losing requests in-flight or that take a long time to run.<p>Is it prohibitively expensive/time consuming to get (or borrow) a bigger machine (on EC2, or in your colo, or what have you) to run MySQL on until you've figured out how to shard / scale out your application?
If you haven't already done this, when the MySQL server is having trouble, connect through the terminal and try:<p>> SHOW PROCESSLIST<p>This will show all active queries and how long each has been executing. The fact that the server grinds to a halt and then works its way through the problem suggests the issues are related to specific queries you can catch this way. Then use the EXPLAIN command on the slow queries to figure out why they are hanging your server, and add indexes or tweak that part of your code (avoid joins on large tables, etc.) as necessary.
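A sketch of that workflow from the shell (the query under EXPLAIN is a made-up placeholder):

    # Show every running query and how long it has been executing;
    # FULL avoids truncating long query text.
    mysql -e 'SHOW FULL PROCESSLIST;'

    # Take a long-running query from the Info column and ask MySQL how
    # it plans to execute it; "type: ALL" indicates a full table scan.
    mysql mydb -e "EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G"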
It seems pretty clear that you don't have enough experience on your own to resolve this properly, so call in some help. It may be possible to make an architectural change that significantly reduces the resources you need, or perhaps you'll find that you unavoidably need more resources to do what you want. Someone who knows how to diagnose and analyze this properly can tell you that.
While the root cause should be fixed (sounds like you're working on it), consider using monit for the restarts instead of cron:<p><a href="http://mmonit.com/monit/" rel="nofollow">http://mmonit.com/monit/</a>
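A minimal monit sketch of that approach (the pidfile paths, load threshold, and restart script are all assumptions to adapt; monit also restarts a checked process automatically if it dies):

    check process mysqld with pidfile /var/run/mysqld/mysqld.pid
      start program = "/sbin/service mysqld start"
      stop program  = "/sbin/service mysqld stop"

    check process httpd with pidfile /var/run/httpd/httpd.pid
      start program = "/sbin/service httpd start"
      stop program  = "/sbin/service httpd stop"

    # React to sustained high load instead of waiting for an hourly cron.
    check system localhost
      if loadavg (5min) > 8 then exec "/usr/local/bin/restart_services.sh"

Unlike a cron job, monit polls continuously (every 2 minutes by default) and only acts when its condition holds, so you get faster reaction with fewer gratuitous restarts.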