I would not migrate to Postgres without knowing what the issue is first. Here are some suggestions:

- Enable master/slave replication (one master, one or more read slaves), assuming you have a read-heavy architecture.

- Enable the "Slow Query Log" [1], which will give you something like /var/log/mysql-slowquery.log; look in there to find the queries that take a long time.

- Find out what the bottleneck is. I'm willing to bet you have limited I/O and that it is the source of your issues. Can you use memcached [2] to limit the database hit rate? You need to remove or limit direct database access wherever clients repeatedly fetch the same data (a read-through sketch follows the arithmetic below).

Before you start to optimize, I would profile things to get a baseline: how many selects/inserts you are doing, how long they take, system load, and so on. Based on your 100M statement, you are doing roughly 1157.4 inserts/s on average. Maybe that's 250 inserts/s during a slow period and 3,000 inserts/s at peak, but it would be nice to know.

  100,000,000 (inserts) / 86,400 (seconds/day)
  = 1157.4 (inserts/s)
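On the memcached point, here is a minimal read-through cache sketch in Python, assuming a local memcached on the default port and pymemcache as the client; load_user_from_db is a hypothetical stand-in for whatever SELECT you run today.

  # Read-through cache: serve repeated reads from RAM, hit MySQL only on a miss.
  from pymemcache.client.base import Client

  cache = Client(("localhost", 11211))
  TTL = 300  # seconds of staleness you can tolerate

  def load_user_from_db(user_id):
      # placeholder for the real "SELECT ... WHERE id = %s" round trip
      return "user-%d-row" % user_id

  def get_user(user_id):
      key = "user:%d" % user_id
      cached = cache.get(key)
      if cached is not None:
          return cached.decode("utf-8")    # cache hit: no database access
      row = load_user_from_db(user_id)     # cache miss: one trip to MySQL
      cache.set(key, row.encode("utf-8"), expire=TTL)
      return row

Every hit served from memcached is a query MySQL never sees, which directly lowers the database hit rate.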
I guess it all depends on the insert size and indexes, but you should be able to scale this. If you cannot, then you need to partition/shard your data (a minimal routing sketch follows after the links). After reading all this, you are probably thinking, "I should just throw more hardware at the problem" ;)

ps. Have you stopped to ask if you really need all this data?

[1] http://dev.mysql.com/doc/refman/5.6/en/slow-query-log.html

[2] http://memcached.org/
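If you do end up sharding, the core of it is just a stable mapping from key to database. A sketch, with made-up DSNs and crc32 as a cheap stable hash:

  # Hash-based shard routing: the same key always lands on the same database.
  import zlib

  SHARDS = [
      "mysql://db0.internal/app",
      "mysql://db1.internal/app",
      "mysql://db2.internal/app",
      "mysql://db3.internal/app",
  ]

  def shard_for(key):
      return SHARDS[zlib.crc32(key.encode("utf-8")) % len(SHARDS)]

  # shard_for("user:42") always returns the same one of the four DSNs,
  # so reads and writes for a given user go to the same machine.

Inserts then spread across four machines instead of one; the tradeoff is that cross-shard queries become application-level work.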