Another solution is wal-e [1], which handles continuous archiving. It was built by the Heroku guys and as such is battle-tested.<p>I use wal-e myself and it's indispensable and easy to use.<p>[1] <a href="https://github.com/wal-e/wal-e" rel="nofollow">https://github.com/wal-e/wal-e</a>
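For anyone curious what the setup looks like: wal-e's README has you point Postgres's archive_command at wal-e via envdir. A minimal sketch (the envdir path and timeout are assumptions, adjust to your install):

```ini
# postgresql.conf — continuous archiving with wal-e (paths are illustrative)
wal_level = archive
archive_mode = on
# envdir loads the S3 credentials wal-e expects; %p is the WAL segment path
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'
# force a segment switch at least once a minute so archives stay fresh
archive_timeout = 60
```

You'd pair that with a periodic `wal-e backup-push` of the data directory for base backups.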
We run a similar setup at Kloudless [1]. We use PgBouncer [2] for connection pooling, which connects to pgpool2 to load balance between our Postgres servers. We've noticed PgBouncer handles thousands of simultaneous connections more efficiently than pgpool2 does on its own.<p>[1] <a href="https://kloudless.com" rel="nofollow">https://kloudless.com</a>
[2] <a href="http://wiki.postgresql.org/wiki/PgBouncer" rel="nofollow">http://wiki.postgresql.org/wiki/PgBouncer</a>
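In case it helps anyone replicate this: the layering is just PgBouncer listening for clients and forwarding to pgpool2 as if it were the database. A minimal pgbouncer.ini sketch (hosts, ports, and pool sizes are assumptions, not our production values):

```ini
; pgbouncer.ini — clients connect here; "host" points at pgpool2,
; which then load-balances across the actual Postgres servers
[databases]
mydb = host=127.0.0.1 port=5433 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling is what lets a small server-side pool absorb
; thousands of mostly-idle client connections
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 20
```

Note that transaction pooling breaks session-level features (prepared statements, advisory locks held across transactions), so check your app first.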
Seems to be down. Google cache link:<p><a href="http://webcache.googleusercontent.com/search?q=cache:MPIiThxiSD8J:michael.stapelberg.de/Artikel/replicated_postgresql_with_pgpool+&cd=1&hl=en&ct=clnk&gl=us" rel="nofollow">http://webcache.googleusercontent.com/search?q=cache:MPIiThx...</a>
I've done a lot of work with pgpool over the past year; be aware there are lots of situations where it won't work for you. For example, if your devs don't write their own SQL and instead use a framework with limited control over the generated queries, you're going to have a bad time.
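To make that concrete: pgpool-II lets you pin an individual query to the primary by prefixing it with a special comment, which is easy when you hand-write SQL and awkward-to-impossible when an ORM generates it for you. A sketch (the table and query are made up):

```sql
-- pgpool-II skips load balancing for queries that start with this comment,
-- sending them to the primary so you read your own just-committed write.
-- With framework-generated SQL there's often no clean way to inject it.
/*NO LOAD BALANCE*/ SELECT id, balance FROM accounts WHERE id = 42;
```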
The problem with these setups is that 'There Are Many Ways To Do It'(tm) and 'You Really Need To Test For Your Use-Case'(tm). I need to read and understand everything to decide what's best in my case. And then you need to write a lot of scripts, do a <i>lot</i> of time-consuming testing, and document everything.<p>Scalable, reliable PostgreSQL is not really there yet.<p>There are many ways to do it, yes, but most people just want one thing: the db failing over when it goes down.