See also - SQL Server Hyperscale. We've been using this for about 18 months now and it has saved us a lot of hassle.<p><a href="https://learn.microsoft.com/en-us/azure/azure-sql/database/hyperscale-architecture" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/azure-sql/database/h...</a><p>The <i>only</i> downside we've spotted so far is a 100 MB/s throttle on transaction log writes, imposed to satisfy replication requirements. Beyond that, it is indistinguishable from an Express instance on a local dev machine. You lose some of the other on-prem features when you go managed, but most new applications don't need that stuff anymore. The message broker pieces are the only ones I miss, but there are plenty of managed options for that, and you can still DIY with a simple Messages table and 3-4 stored procedures.<p>On the read & reporting side, I see no downsides. You mostly get OLAP+OLTP in the same abstraction without much headache. If someone really wanted to go absolutely bananas with reporting queries, data mining, AI crap, whatever, you could give them their own geo replica in a completely different region of the planet. Just make sure they aren't doing any non-queries and everything should be fine.<p>For large binary data, we rely on external blob storage and store only the URLs. The transaction log write limit shouldn't feel like much of a restriction if you are using the right tools for each part of the job. Think about how many blob URLs you could fit within 100 megabytes of log per second. If you make assumptions about URL structure and store only the blob key, you can increase this by a factor of 2-3.
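<p>For the DIY broker, here's a minimal sketch of the Messages-table pattern. It uses in-memory SQLite so it runs anywhere; on SQL Server you'd wrap the same statements in stored procedures and add READPAST/UPDLOCK hints so concurrent consumers skip each other's claimed rows. All table and function names are illustrative, not what we actually ship:

```python
import sqlite3

# Illustrative sketch: a queue backed by a plain Messages table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Messages (
        Id         INTEGER PRIMARY KEY AUTOINCREMENT,
        Topic      TEXT NOT NULL,
        Body       TEXT NOT NULL,
        DequeuedAt TEXT NULL   -- NULL means the message is still pending
    )
""")

def enqueue(topic: str, body: str) -> None:
    # Equivalent of an "EnqueueMessage" stored procedure.
    conn.execute("INSERT INTO Messages (Topic, Body) VALUES (?, ?)", (topic, body))
    conn.commit()

def dequeue(topic: str):
    # Equivalent of a "DequeueMessage" stored procedure: claim the oldest
    # pending row for the topic and mark it handled, in one transaction.
    cur = conn.execute(
        "SELECT Id, Body FROM Messages "
        "WHERE Topic = ? AND DequeuedAt IS NULL ORDER BY Id LIMIT 1",
        (topic,),
    )
    row = cur.fetchone()
    if row is None:
        return None
    conn.execute(
        "UPDATE Messages SET DequeuedAt = datetime('now') WHERE Id = ?",
        (row[0],),
    )
    conn.commit()
    return row[1]

enqueue("invoices", "invoice 1001 created")
enqueue("invoices", "invoice 1002 created")
print(dequeue("invoices"))  # invoice 1001 created
print(dequeue("invoices"))  # invoice 1002 created
print(dequeue("invoices"))  # None - queue drained
```

The UPDATE-after-SELECT works here because the demo is single-connection; with multiple consumers you'd make the claim atomic (one UPDATE with a subquery, or SQL Server's locking hints).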
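<p>Back-of-the-envelope math on that last point (the URL length, row overhead, and key size below are assumptions for illustration, not measurements):

```python
# How many URL-sized rows fit under a 100 MB/s transaction log throttle?
LOG_THROTTLE_BYTES_PER_SEC = 100 * 1024 * 1024  # the Hyperscale log write limit

AVG_URL_BYTES = 200       # assumed length of a full blob URL
ROW_OVERHEAD_BYTES = 100  # assumed per-row log overhead (keys, timestamps, etc.)

urls_per_second = LOG_THROTTLE_BYTES_PER_SEC // (AVG_URL_BYTES + ROW_OVERHEAD_BYTES)
print(f"~{urls_per_second:,} full-URL rows per second")   # ~349,525

# If you assume the URL structure (scheme, account, container) and store
# only the blob key, the logged payload shrinks:
BLOB_KEY_BYTES = 40  # assumed key/GUID length
keys_per_second = LOG_THROTTLE_BYTES_PER_SEC // (BLOB_KEY_BYTES + ROW_OVERHEAD_BYTES)
print(f"~{keys_per_second:,} key-only rows per second")   # ~748,982
```

Under these assumed sizes that's roughly a 2x gain from storing keys instead of full URLs, in line with the 2-3x figure above.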