I've seen two large DynamoDB deployments at two different companies, and for what it's worth, in both cases we ran into trouble similar to the author's. In one case we ended up ripping it out and moving to a sharded Postgres scheme; in the other we've left it in place for now because a migration would be such a monumental effort, but it's pretty much universally maligned.

Fundamentally, the problem seems to be that choosing a partitioning key that's appropriate for DynamoDB's operational properties is ... unlikely. Their own docs on choosing a partition key [1] use "user ID" as an example of one with good uniformity, but in reality, if you choose something like that, you're probably in for a world of pain: in many systems big users generate 7+ orders of magnitude more traffic than small ones, so what initially looked like a respectable partitioning key turns out to be very lopsided.

As mentioned in the article, you can then try to increase throughput, but you don't get enough control over where the newly provisioned capacity lands to really address the problem: provisioned throughput is split evenly across partitions, so a hot partition only sees a fraction of any increase. You can massively overprovision, but then you're paying for a lot of capacity that sits idle, and even then it's sometimes not enough.

Your best bet is probably to choose a partition key that's perfectly uniformly distributed (like a random ID, or a salted "sharded" key; sketch below), but at that point you're designing your product around DynamoDB rather than the other way around, and you should probably wonder why you're not looking at alternatives.

---

[1] http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.UniformWorkload
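To make that last trade-off concrete, here's a minimal sketch of the usual "write sharding" workaround (the table name, key names, and shard count are all hypothetical): you salt the hot partition key with a random suffix so writes spread evenly across partitions, at the cost of fanning every read out across all the shards.

    import random

    import boto3
    from boto3.dynamodb.conditions import Key

    NUM_SHARDS = 10  # tune to the worst-case skew of the hot key
    # Assumes a hypothetical "events" table with a composite key:
    # pk (partition key), sk (sort key).
    table = boto3.resource("dynamodb").Table("events")

    def put_event(user_id: str, timestamp: str, payload: dict) -> None:
        # Spread one logical user across NUM_SHARDS physical partition keys.
        shard = random.randrange(NUM_SHARDS)
        table.put_item(Item={"pk": f"{user_id}#{shard}", "sk": timestamp, **payload})

    def get_events(user_id: str) -> list:
        # The price: every per-user read fans out to all shards and merges.
        items = []
        for shard in range(NUM_SHARDS):
            resp = table.query(
                KeyConditionExpression=Key("pk").eq(f"{user_id}#{shard}")
            )
            items.extend(resp["Items"])
        return items

That read fan-out is exactly the kind of designing-the-product-around-the-database contortion I mean: your access patterns end up with DynamoDB's partition behavior baked into them.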