No matter how you integrate different applications, be it via APIs, messaging, or a database, it's vital to separate your application's internal data model from the models it exposes. If you don't, you're in for a never-ending story of upstream services unknowingly breaking downstream services, or of upstream services being unable to evolve in any meaningful way.

So if they mean directly exposing a service's data model from the database to other services, I'm very skeptical. If they mean providing that access through some abstraction, e.g. database views, it can be an option in some cases.

You'll still lose a lot of the flexibility you'd gain by putting some middleware in between: the ability to scale out compute independently from storage, to merge data from multiple sources into one API response, to implement arbitrarily complex business logic e.g. in Java, and so on.
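To illustrate what I mean by the view option, here is a minimal sketch using SQLite from Python; the table, view, and column names are all made up. The owning service can reshape the internal table later as long as it keeps the view's contract intact:

    import sqlite3

    conn = sqlite3.connect("shared.db")

    # Internal schema, owned by the producing service; free to change.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            id INTEGER PRIMARY KEY,
            customer_id INTEGER NOT NULL,
            total_cents INTEGER NOT NULL,
            created_at TEXT NOT NULL
        )
    """)

    # Published contract: other services read this view, not the table.
    conn.execute("""
        CREATE VIEW IF NOT EXISTS order_summary AS
        SELECT id AS order_id,
               customer_id,
               total_cents / 100.0 AS total,
               created_at
        FROM orders
    """)

    # A consumer only ever sees the view's columns.
    rows = conn.execute("SELECT order_id, total FROM order_summary").fetchall()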
While they quickly acknowledge that integrating by means of a database is widely regarded as an anti-pattern, the rest of the article doesn't really address why this wouldn't be an anti-pattern, other than to pretend that the major reasons for avoiding this pattern are deployment complexity and furnishing multi-region services.

> A customer pattern we see solves this problem, and it's the integration database pattern. This pattern went out of style in the 90s along with Oracle servers in the closet and dedicated DBAs, but with technological advances like scalable multi-region transactions and serverless computing resources, its utility has returned.

Considering they are flogging a product, this feels especially dishonest to me.
Having worked on old systems that used this approach, I recall that it did have a lot of pluses. Now, for the minuses:

1) Changing the language you use in a server (say, from PHP to Python) is big, but changing the language you use in your database (from SQL to anything else) is even more intimidating. If you are integrating through the database, this limitation matters more.

2) You need DBAs who are not only highly competent but also have good people skills, since they will often be saying "no", or at least "not that way", and if they don't know how to do that in a constructive manner it becomes a net productivity drain. Fortunately, at the place I worked that used this pattern, the DBAs had exceptional people skills as well as technical skills. This is, I am led to believe, not always the case.
My first two programming jobs out of college back in the early 2010s took the approach in this article, albeit with older technology.

This is bringing back old (bad) memories of debugging stored procedures that called triggers that called the same stored procedures.

A lot of that was due to poor design and bad choices. Some of it was due to developers trying to fit processes and patterns into a language, i.e. SQL, that lacked the expressiveness for them.

I'm not a fan of this approach, to say the least.

I'd be interested in hearing others' experiences with this, though.

And I'm going to check out Fauna, since it looks like a cool database, and to see if anything has changed with this approach since I encountered it almost a decade ago.
Use the right tool for the job. What is the most deployed database? SQLite3. In what pattern is it most often deployed? One language from one environment accessing one database with full read/write. Low call volume, high data complexity, and embedded (tightly application-coupled) use. This is observably *the* normal use case for a database and the simplest mode of implementation. Problems occur when software people start solving problems that don't exist: typically performance, future scalability, dubious theories of security, and future language/database migrations. KISS.
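For reference, the embedded case really is about as simple as data access gets. A minimal sketch with Python's built-in sqlite3 module (file name and schema invented):

    import sqlite3

    # One application, one process, one file on disk: full read/write,
    # no server, no network hop, no separate integration layer.
    con = sqlite3.connect("app.db")
    con.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    con.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
    con.commit()
    print(con.execute("SELECT id, body FROM notes").fetchall())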
If I'm reading this correctly, the crux of the argument appears to be: if your serverless database engine provides cross-region data replication and consistency *and* provides a GraphQL interface, then applications can use GraphQL to go directly to the database, thus solving a lot of the problems we endured in the past when using SQL to go to the database. It's an interesting idea that at first glance appears to be worth looking into further, though I still have an uneasy feeling, because I remember all the pain in the past!
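To make that concrete, here is roughly what "the application speaks GraphQL straight to the database" could look like from Python; the endpoint URL, schema, and auth header below are hypothetical, not any particular vendor's actual API:

    import json
    import urllib.request

    # Hypothetical GraphQL endpoint exposed by the database itself;
    # URL, query fields, and credentials are made up for illustration.
    ENDPOINT = "https://graphql.example-db.com/graphql"
    QUERY = """
    query {
      orderById(id: "123") {
        id
        status
        total
      }
    }
    """

    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps({"query": QUERY}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <database-access-key>",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))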
At Firebase it was sometimes called the client-database-server architecture. The pattern was documented in 2013 [1].

If you use Firebase as an ephemeral message bus, it's a great pattern. It has problems if you use it like a traditional database, because migrations are very tricky. DBs that support views (or GraphQL) can make migrations much easier.

[1] https://firebase.googleblog.com/2013/03/where-does-firebase-fit-in-your-app.html
Stored procedures and the integration database have come back for our users in a big way. It would be great to hear examples of how others are applying this pattern with other databases and APIs.
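For what it's worth, one common shape of this pattern (sketched here with Postgres and psycopg2 rather than Fauna; all table, function, and connection names are invented) is to publish stored functions as the integration contract, so every integrating service calls the same routine instead of touching the tables directly:

    import psycopg2

    conn = psycopg2.connect("dbname=shared user=app")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            id serial PRIMARY KEY,
            customer_id int NOT NULL,
            total_cents int NOT NULL,
            created_at timestamptz NOT NULL
        )
    """)

    # The database owner publishes a function as the integration contract;
    # callers never write to the underlying table themselves.
    cur.execute("""
        CREATE OR REPLACE FUNCTION place_order(p_customer_id int, p_total_cents int)
        RETURNS int
        LANGUAGE sql
        AS $$
            INSERT INTO orders (customer_id, total_cents, created_at)
            VALUES (p_customer_id, p_total_cents, now())
            RETURNING id;
        $$;
    """)
    conn.commit()

    # Any service, in any language, integrates by calling the same routine.
    cur.callproc("place_order", (42, 1999))
    order_id = cur.fetchone()[0]
    conn.commit()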
I wouldn't integrate through a database, but I wouldn't refuse to integrate through a distributed cache.

Anyway, using REST for inter-service communication decreases performance and increases latency, and 99% of projects still do it.
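For example, a rough sketch of cache-based integration, assuming Redis and the redis-py client (key names, payload, and TTL are invented): one service publishes a denormalized snapshot, another reads it, and neither touches the other's database or HTTP API.

    import json
    import redis  # third-party client: pip install redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Producing service writes a denormalized snapshot with a TTL,
    # instead of exposing its tables or a REST endpoint.
    r.set("user:42:profile", json.dumps({"id": 42, "name": "Ada"}), ex=300)

    # Consuming service reads the snapshot; a miss means "ask the owner"
    # (or tolerate staleness), not "query their database".
    raw = r.get("user:42:profile")
    profile = json.loads(raw) if raw else None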
The contempt towards the “old ways” in this piece is really irksome, especially since those “old ways” are perfectly acceptable for 95% of applications (you only need this hyper scale stuff if you’re running a MAANG level app). The rest of us still doing things the “old way” are perfectly happy to have simple systems that run reliably, as we sit and watch the complete train wreck of complexity everyone else is building using these “new ways” that are supposedly better.