The question I have is about schema updates. The biggest pain I have had with things like Mongo is dealing with old data records.<p>Use case example for Uber:<p>1. In 2011, a driver joined. They made a bunch of trips.<p>2. In 2012, Uber added more detail about each trip: information not collected for the 2011 trips.<p>3. And so on; each year there are 'just a few changes'.<p>Given the above:<p>In 2016, Uber wants to run a query to reward all drivers based on some piece of information that was only present from 2014 on.<p>At this point the historical trip information from 2011 is in a significantly different format than in 2016.<p>In an RDBMS, at least the old columns are there; or, if the db was migrated to a new schema (a pain), the issue of the missing fields was addressed.<p>But dealing with data in old formats was an Uber pain. And the lack of visibility into <i>just</i> knowing which schema was used to generate a given JSON object is a PITA.<p>God forbid you have <i>new</i> code that never even knew about the old 2011 format.<p>Lastly, what happens if a bug slips through and some JSON field is missing, has odd spelling (wrong capitalization), etc.?<p>I would love to hear about how old data is handled in Schemaless.<p>My experience with MongoDB was less than pleasant.
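To make the problem concrete, here is a minimal sketch (not Uber's actual approach; all field names and dates are invented) of the kind of normalize-on-read shim you end up writing to cope with records written under several historical schema versions:

```python
from typing import Any

# Fields added to the schema over time, with defaults for older records.
# Field names and dates are made up purely for illustration.
FIELD_DEFAULTS: dict[str, Any] = {
    "surge_multiplier": 1.0,   # say, added in 2012
    "driver_rating": None,     # say, added in 2014
}

def normalize_trip(record: dict[str, Any]) -> dict[str, Any]:
    """Upgrade a trip record of any vintage to the current shape."""
    upgraded = dict(record)
    for field, default in FIELD_DEFAULTS.items():
        upgraded.setdefault(field, default)
    return upgraded

# A 2011-era record missing the later fields still reads cleanly:
old_trip = {"trip_id": "t-123", "fare": 14.50}
print(normalize_trip(old_trip)["surge_multiplier"])  # -> 1.0
```

The catch, of course, is that this only works for fields where a default is meaningful; a 2014-only field with no sensible default still can't power a reward query over 2011 trips.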
Is it just me, or could they have done this far more easily by building some indexing and triggering functionality on top of Cassandra? Even two years ago when they started. Instead they built sharding, indexing, triggering, and a Cassandra-like data model on top of MySQL.
Is it just me or is the reasoning behind the switch from Postgres to MySQL very vague? They describe a sharded MySQL database... Sharding Postgres isn't necessarily any more difficult; Instagram apparently uses it in a sharded manner with <i>many</i> shards. You'd think storing JSON in the pretty sweet jsonb column type in Postgres would be a nice bonus for querying and indexing.<p>I guess someone at Uber must really like MySQL, as good a reason as any, I suppose. I'd love to hear what the other reasons were for choosing MySQL here, as I've usually gone the other way (MySQL to pgsql) for the many great features and performance advantages pgsql has.
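For reference, a minimal sketch of the jsonb querying/indexing mentioned above, assuming a hypothetical `trips` table with a jsonb `data` column (Postgres 9.4+ with psycopg2; all table, index, and database names are invented):

```python
import psycopg2

conn = psycopg2.connect("dbname=trips_demo")  # hypothetical database
with conn, conn.cursor() as cur:
    # A GIN index on the jsonb column makes containment (@>) queries fast.
    cur.execute("CREATE INDEX trips_data_gin ON trips USING GIN (data)")
    # Find trips whose JSON document contains the given key/value pair;
    # with the index in place this avoids scanning every row.
    cur.execute(
        "SELECT id FROM trips WHERE data @> %s::jsonb",
        ('{"city": "san_francisco"}',),
    )
    print(cur.fetchall())
```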
An interesting system with very similar semantics that Google built on top of Bigtable: <a href="http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36726.pdf" rel="nofollow">http://static.googleusercontent.com/media/research.google.co...</a>
Since that's built on top of Bigtable, you could in theory extend Schemaless to do 2PC for the cases that need it.<p>The implementation (using MySQL) seems very close to Vitess (<a href="http://vitess.io/overview/" rel="nofollow">http://vitess.io/overview/</a>), which manages MySQL as a series of "tablets" but exposes most MySQL features directly in the query language.
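For concreteness, a toy sketch of the two-phase commit mentioned above (all names and data structures are invented; a real implementation would also need durable logs and failure recovery):

```python
class InMemoryShard:
    """A toy participant: stages writes, applying them only on commit."""

    def __init__(self) -> None:
        self.data: dict = {}
        self.staged: dict = {}

    def prepare(self, txn_id: str, writes: dict) -> bool:
        # Phase 1: persist the tentative writes, then vote yes.
        self.staged[txn_id] = writes
        return True

    def commit(self, txn_id: str) -> None:
        self.data.update(self.staged.pop(txn_id))

    def abort(self, txn_id: str) -> None:
        self.staged.pop(txn_id, None)


def two_phase_commit(txn_id: str,
                     work: list[tuple[InMemoryShard, dict]]) -> bool:
    # Phase 1: every shard must vote yes before anything becomes visible.
    if all(shard.prepare(txn_id, writes) for shard, writes in work):
        for shard, _ in work:   # Phase 2: commit everywhere.
            shard.commit(txn_id)
        return True
    for shard, _ in work:       # Any "no" vote aborts everywhere.
        shard.abort(txn_id)
    return False


a, b = InMemoryShard(), InMemoryShard()
ok = two_phase_commit("txn-1", [(a, {"k1": 1}), (b, {"k2": 2})])
print(ok, a.data, b.data)  # True {'k1': 1} {'k2': 2}
```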
Odd that they chose MySQL when they were previously using Postgres. In particular, Postgres' JSON support is so extensive (including indexing, which is now even more extensive [1]), and offers performance benefits over MySQL.<p>The advantage of MySQL in this situation is probably the support for multi-master replication.<p>[1] <a href="http://pgxn.org/dist/jsquery/" rel="nofollow">http://pgxn.org/dist/jsquery/</a>
Main lesson: for a new generation of businesses that at first glance look like OLTP, the classic OLTP pieces like transactional triggers and transactional indexes aren't a requirement anymore. I.e., those requirements seem to be going the same way (south) that the transactional consistency of search indexes went several years ago.