I work at a large FinTech in a team responsible for a federated GraphQL service and, recently, related datasets in Redshift. We've come to see our datasets as another kind of API representing the same underlying data as our GraphQL API.<p>Our service is event sourced and follows CQRS, so our Redshift dataset is just another projection. This got me thinking: can other services simply use this projection directly instead of a GraphQL or REST API?<p>In general, services could provide an API for mutations that updates a projection, which other services can query directly using SQL. Although services are accessing a database directly, it doesn't break encapsulation, because the projection is data intended for consumption by other services, just like an API.<p>There are a few benefits I see here. Firstly, if all services were modelled like this, querying multiple "services" could be handled with simple joins. Latency is reduced as data doesn't pass through an intermediate service. Scaling reads is straightforward. The same "API" works for both transactional and analytical use-cases. And finally, low-write services could be scaled down, and no-write services could just be data pipelines.<p>So HN, what do you think about a database as a service API?
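A minimal sketch of the idea, using sqlite3 in place of a real event store and Redshift; all table and function names here are illustrative, not from the original post. The service exposes a mutation function and maintains a projection table, which a consuming service then queries directly with SQL:

```python
import sqlite3

# Hypothetical sketch: the owning service appends to an event log and keeps
# a "projection" table up to date. The projection is the public read surface
# that other services query with plain SQL instead of calling an HTTP API.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (
        seq INTEGER PRIMARY KEY AUTOINCREMENT,
        account_id TEXT, kind TEXT, amount INTEGER
    );
    -- The projection: intended for direct consumption by other services.
    CREATE TABLE account_balances (
        account_id TEXT PRIMARY KEY, balance INTEGER
    );
""")

def deposit(account_id, amount):
    """Mutation API: append an event, then update the projection."""
    conn.execute(
        "INSERT INTO events (account_id, kind, amount) VALUES (?, 'deposit', ?)",
        (account_id, amount))
    conn.execute(
        """INSERT INTO account_balances VALUES (?, ?)
           ON CONFLICT(account_id) DO UPDATE
           SET balance = balance + excluded.balance""",
        (account_id, amount))
    conn.commit()

deposit("acct-1", 100)
deposit("acct-1", 50)

# A consuming service reads the projection directly -- no intermediate
# service hop, and it can join this table against its own data.
balance = conn.execute(
    "SELECT balance FROM account_balances WHERE account_id = ?",
    ("acct-1",)).fetchone()[0]
print(balance)  # 150
```

In a real deployment the event log and the projection would likely live in different systems (e.g. a stream plus Redshift), with the projection rebuilt asynchronously rather than in the same transaction.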
I've experimented with this as an alternative API layer. When using role-based RLS in postgresql and being strict about what is exposed to application users it can be quite nice, but it also brings in a lot of additional considerations. Things like rate-limiting, caching, timeouts, access logging, etc. are all very much standard practice on an HTTP or RPC layer, but not so much for a SQL interface. When using postgrest (a REST layer on top of postgresql) I've considered just exposing the same API over SQL since I already have the authZ fully implemented in the DB, but I still don't know if I want to implement everything else that goes along with an API in the DB.
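One way to get those missing HTTP-layer concerns is a thin gateway in front of the database rather than inside it. A hedged sketch, with illustrative names and sqlite3 standing in for postgresql, showing just the rate-limiting piece as a token bucket around query execution:

```python
import sqlite3
import time

class SqlGateway:
    """Illustrative gateway wrapping a DB connection with a token-bucket
    rate limit -- the kind of policy an HTTP layer gives you for free."""

    def __init__(self, conn, max_queries_per_sec=5):
        self.conn = conn
        self.capacity = float(max_queries_per_sec)
        self.tokens = float(max_queries_per_sec)
        self.rate = max_queries_per_sec
        self.last = time.monotonic()

    def query(self, sql, params=()):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            raise RuntimeError("rate limit exceeded")  # a 429 in HTTP land
        self.tokens -= 1
        return self.conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
gw = SqlGateway(conn, max_queries_per_sec=2)
print(gw.query("SELECT 1 + 1"))  # [(2,)]
```

Caching, timeouts, and access logging could be layered into the same wrapper, though at that point you've rebuilt a fair chunk of what an API server already provides.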
This works fine as long as the data flow is one-directional, e.g. legacy backend -> db -> read-only frontend.<p>However, once the frontend and backend both begin to mutate data, you're going to have problems doing that consistently; it's an anti-pattern I recommend against. Better to have an API handle everything.
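A hedged sketch of that consistency hazard, with sqlite3 and made-up names: two writers each do a read-modify-write against the same row without going through one owning API, and the interleaving silently loses an update ("last write wins"):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('hits', 0)")

def read_value():
    return conn.execute(
        "SELECT value FROM counters WHERE name='hits'").fetchone()[0]

# Both the "backend" and the "frontend" read the same starting value...
backend_view = read_value()   # 0
frontend_view = read_value()  # 0

# ...then each writes back its own increment, unaware of the other.
conn.execute("UPDATE counters SET value=? WHERE name='hits'", (backend_view + 1,))
conn.execute("UPDATE counters SET value=? WHERE name='hits'", (frontend_view + 1,))

print(read_value())  # 1, not 2 -- one increment was silently lost
```

A single owning API (or, within one database, atomic updates like `UPDATE ... SET value = value + 1` under proper isolation) avoids this class of bug; with multiple independent writers it has to be re-solved at every call site.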
If you can guarantee that you get the same result for the same projection (a query and/or REST call could also be characterized as a projection), then I don't see an issue.<p>LOTS of REST services are just URL params -> SQL -> result set -> JSON.<p><a href="https://postgrest.org/en/stable/" rel="nofollow">https://postgrest.org/en/stable/</a><p>Isn't this what GraphQL was supposed to offer?
You may be interested in an open source project I am developing, see:
<a href="https://webapibridge.org" rel="nofollow">https://webapibridge.org</a><p>Web API calls are converted into MySQL Stored Procedure calls and the result set is returned. I am interested in extending this to other databases.