Any experience with using Aurora in place of DynamoDB?<p>A couple years ago there was an interesting tidbit at re:Invent about customers moving from DynamoDB to Aurora to save significant costs.[1] The Aurora team made the point that DynamoDB suffers from hotspots despite your best efforts to evenly distribute keys, so you end up overprovisioning. Whereas with Aurora you just pay for I/O. And the scalability is great. Plus you get other nice stuff with Aurora like, you know, traditional SQL multi-operation transactions.<p>It was kind of buried in a presentation from the Aurora team, and the high-level messaging from Amazon was still that NoSQL is the most scalable thing. Aurora was and is still seemingly positioned against other solutions within the SQL realm. I sort of get it in theory: NoSQL is still <i>theoretically</i> infinitely scalable, whereas Aurora is bounded by 15 read replicas and one write master. But in practice these days those limits are huge. I think one write master can handle something like 100K transactions a second.<p>So, I'm really curious where this has gone in the past couple years, if anywhere. Is NoSQL still the best approach?<p>[1] <a href="https://youtu.be/60QumD2QsF0?t=1021" rel="nofollow">https://youtu.be/60QumD2QsF0?t=1021</a>
My wishlist for DynamoDB is now down to:<p>* Fast one-time data import without permanently creating a lot of shards (important if you are restoring from a backup)<p>* Better visibility into what causes throttling (e.g. was it a hot shard? Was it a brief but large burst of traffic?)<p>* Lower p99.9 latency. It occasionally has huge latency spikes.<p>* Indexes on more than 2 attributes<p>* A solution for streaming out updates that is better than DynamoDB Streams
Congrats to the DynamoDB team for going beyond the traditional limits of NoSQL.<p>There is a new breed of databases that use consensus algorithms to enable global multi-region consistency. Google Spanner and FaunaDB (where I work) are part of this group. I didn't catch anything about the implementation details of DynamoDB transactions in the article. If they are using a consensus approach, expect them to add multi-region consistency soon. If they are using a traditional active/active replication approach, they'll be limited to regional replication.
This is cool: it lifts the burden of having to bake "atomicity" into your app if you're using a key/value store like DynamoDB. I can see a nice balance of combining this with some built-in error checking in the app itself.
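To make the point concrete, here's a minimal sketch of what moving atomicity out of the app looks like: building a `TransactWriteItems` payload where the database itself enforces a condition across two writes. The table name, key schema, and attribute names are all made up for illustration; with boto3 you'd send this via `client.transact_write_items(TransactItems=...)`.

```python
def build_transfer(from_id: str, to_id: str, amount: int) -> list:
    """Build a hypothetical TransactWriteItems payload that debits one
    account and credits another. If the ConditionExpression on the first
    update fails (insufficient funds), neither write is applied."""
    return [
        {
            "Update": {
                "TableName": "accounts",  # hypothetical table
                "Key": {"pk": {"S": from_id}},
                "UpdateExpression": "SET balance = balance - :amt",
                # The database, not the app, enforces this invariant:
                "ConditionExpression": "balance >= :amt",
                "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
            }
        },
        {
            "Update": {
                "TableName": "accounts",
                "Key": {"pk": {"S": to_id}},
                "UpdateExpression": "SET balance = balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
            }
        },
    ]

# Sent (not shown here) as:
#   boto3.client("dynamodb").transact_write_items(
#       TransactItems=build_transfer("acct#1", "acct#2", 25))
```

Before transactions, you'd have to emulate this client-side with conditional writes and compensation logic; now the check-and-apply is a single all-or-nothing request.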
I'd be interested to see comparisons/benchmarks against FoundationDB. DynamoDB transactions make dynamo a serious alternative to FDB now. I can see the two main advantages for FDB being: 1) you can deploy it on premise (which is potentially important for some B2B companies), 2) it shuffles data around so that hot-spotting of a cluster is eliminated (which dynamo appears to still suffer from).
PostgreSQL got native JSON support (since 9.2, with the binary jsonb type in 9.4) to store schemaless documents. DynamoDB gets transaction guarantees.<p>There is globalization and intermingling happening in technology too.<p>On a similar thought, a few years back, statically typed languages like C# gained dynamic typing (the `dynamic` keyword), while Python/JS got static types (via Python 3 type hints, TypeScript)
>If an item is modified outside of a transaction while the transaction is in progress, the transaction is canceled and an exception is thrown<p>You are still responsible for implementing a queue or a lock on the items you want to mutate, or at least for retrying when the transaction is canceled.<p>That said, this is a huge milestone for DynamoDB: we can now safely mutate multiple items while remaining ACID.
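Since the cancellation is an optimistic-concurrency conflict, the simplest handling (short of a queue or lock) is retry with backoff. A pure-Python sketch of that pattern; `TransactionCanceled` here is a stand-in for the exception boto3 would surface (a `ClientError` with code `TransactionCanceledException`), and the attempt/delay numbers are arbitrary:

```python
import random
import time


class TransactionCanceled(Exception):
    """Stand-in for the service's TransactionCanceledException."""


def retry_transaction(fn, attempts=5, base_delay=0.01):
    """Call `fn` (which submits the transaction), retrying with jittered
    exponential backoff each time it is canceled due to a conflict."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransactionCanceled:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            # Jitter spreads out competing writers so they don't
            # collide again on the same item in lockstep.
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

Whether retrying is enough depends on contention: for occasional conflicts it's fine, but if many writers hammer the same items, you're back to serializing access yourself, which is the commenter's point.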
Max 10 items per transaction, that's quite a restriction! I guess you have to plan all the transactions you'll perform and make sure they stay within that bound.
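One practical consequence: you can't just split an oversized transaction into batches without losing atomicity, so the sane thing is to fail fast in your own code before calling the API. A trivial guard, assuming the 10-item cap described in the comment:

```python
MAX_TRANSACT_ITEMS = 10  # per-transaction item cap mentioned above


def validate_transaction(items: list) -> list:
    """Reject an oversized TransactWriteItems request up front.

    Splitting the items across multiple transactions would NOT be
    equivalent, since each transaction commits independently, so the
    only safe options are redesigning the write or failing loudly.
    """
    if len(items) > MAX_TRANSACT_ITEMS:
        raise ValueError(
            f"transaction has {len(items)} items; "
            f"max is {MAX_TRANSACT_ITEMS}"
        )
    return items
```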