I think people in this industry make using complicated, powerful paradigms part of their identity. They don't feel important unless they're reaching for N-tier architecture or exotic databases or lambdas or whatever else it is.<p>Most apps I've worked on could have been a monolith on Postgres, but they never ever are once I'm not the sole engineer.
It took me a bit to realize the author is selling me something. I guess good job there, sir.<p>I've built a bunch of distributed architectures. In every case, we would have been better served by a monolith and a single relational DB like Postgres. In fact, I've only worked on one system whose scale would have justified the additional complexity of a distributed architecture. Ironically, that system was a monolith with Postgres.
Orchestration tier. Oy.<p>So something goes wrong, and you need to back out an update to one of your microservices. But that backout attempt goes wrong. Or happens after real-world actions have already been taken based on the update you need to back out. Or the problem that caused the backout was transient, everything turns out to be fine, but now your backout is making its way across the microservices. Back out the backout? What if <i>that</i> goes wrong? The "or"s never end.<p>Just use a centralized relational database, use transactions, and be done with it. People not understanding what can go wrong, and how RDB transactions can handle a vast subset of those problems -- that's the 21st-century version of not knowing how to safely use memory in C.<p>Yes, of course, centralized RDBs with transactions are sometimes the wrong answer, due to scale, or genuinely non-atomic update requirements, or transactions spanning multiple existing systems. But I have the sense that they are often rejected for nonsensical reasons, or not considered at all.
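To make the contrast concrete, a minimal sketch of the "just use transactions" alternative (assuming psycopg and illustrative table names, not anyone's actual schema):<p><pre><code>import psycopg

# both updates commit together or neither does -- there is no
# partially-applied state to back out of
with psycopg.connect("dbname=app") as conn:
    with conn.transaction():  # BEGIN; rolls back on any exception
        conn.execute(
            "UPDATE accounts SET balance = balance - %s WHERE id = %s",
            (100, 1),
        )
        conn.execute(
            "UPDATE accounts SET balance = balance + %s WHERE id = %s",
            (100, 2),
        )
# on an exception Postgres discards everything atomically: no
# compensating workflow, no backout of the backout</code></pre>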
> <i>In the beginning (that is, the 90’s), developers created the three-tier application. [...] Of course, application architecture has evolved greatly since the 90's. [...] This complexity has created a new problem for application developers: how to coordinate operations in a distributed backend? For example: How to atomically perform a set of operations in multiple services, so that all happen or none do?</i><p>This doesn't seem like a correct description of events. Distributed systems existed in the 90s and there was e.g. Microsoft Transaction Server [0] which was intended to do exactly this. It's not a new problem.<p>And the article concludes:<p>> <i>This manages the complexity of a distributed world, bringing the complexity of a microservice RPC call or third-party API call closer to that of a regular function call.</i><p>Ah, just like DCOM [1] then, just like in the 90s.<p>[0] <a href="https://en.wikipedia.org/wiki/Microsoft_Transaction_Server" rel="nofollow">https://en.wikipedia.org/wiki/Microsoft_Transaction_Server</a><p>[1] <a href="https://en.wikipedia.org/wiki/Distributed_Component_Object_Model" rel="nofollow">https://en.wikipedia.org/wiki/Distributed_Component_Object_M...</a>
Company hawking an orchestrating backend server says you should use an orchestrating backend server?<p>You still have four layers, it's just that one is hidden with annotations.
<i>"In the beginning (that is, the 90’s), developers created the three-tier application. Per Martin Fowler, these tiers were the data source tier, managing persistent data, the domain tier, implementing the application’s primary business logic, and the presentation tier, handling the interaction between the user and the software. The motivation for this separation is as relevant today as it was then: to improve modularity and allow different components of the system to be developed relatively independently."</i><p>Immediately, I see problems. Martin Fowler's "Patterns of Enterprise Application Architecture" was first published in 2002, a year that I think most people will agree was not in "the 90's." Also, <i>was</i> that the motivation? Are we sure? Who had that motivation? Were there any other motivations at play?
Workflows/orchestration/reconciliation loops are basically table stakes for any service solving significant problems for customers. You might think you don't need them, but once you start running async jobs in response to customer requests, you will eventually implement one of these solutions.<p>IMO the next big improvement in this space is the authoring experience. When it comes to workflows, we are basically still writing assembly code.<p>Writing workflows today means either a totally separate language (Step Functions), function-level annotations (Temporal, DBOS, etc.), or event/reconciliation loops that read state from the DB/queue. In all cases, devs must manually determine when state should be written back to the persistence layer, as in the sketch below. This adds a level of complexity most devs aren't used to and shouldn't have to reason about.<p>Personally, I think the ideal is writing code in whatever structure the language supports, with the language runtime automatically persisting program state at appropriate times. The runtime should understand when persistence is needed (i.e., which API calls are idempotent, and for how long) and commit intermediate state accordingly.
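To illustrate the manual persistence decision: a minimal sketch of the annotation style (modeled loosely on Temporal/DBOS; the decorator and its in-memory "store" are illustrative, not any real library's API).<p><pre><code>import functools

_completed = {}  # stand-in for a durable store (a DB table in practice)

def step(fn):
    # illustrative decorator: checkpoint each step's result so a
    # crashed workflow can resume without re-running completed work
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        key = fn.__name__  # real systems key on workflow id + step id
        if key in _completed:      # recovery path: skip the work
            return _completed[key]
        result = fn(*args, **kwargs)
        _completed[key] = result   # the manual "persist here" decision
        return result
    return wrapper

@step
def charge_card(order_id):
    return "charge-" + order_id

@step
def send_receipt(order_id):
    print("receipt sent for", order_id)

def fulfill_order(order_id):
    # the workflow: on re-execution after a crash, completed steps
    # return their recorded results instead of running again
    charge_card(order_id)
    send_receipt(order_id)</code></pre>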
bought a ten-year-old company, a division of a public company, for some millions of dollars.<p>got an overly complex architecture: over 30 microservices, over USD 20k in monthly cloud fees.<p>rewrote the thing as a monolith in 6 months. cut the development team in half, server costs by 80-90%, and latency by over 60%.<p>newer is not better. each microservice must be born from a real necessity, backed by usage stats, server stats, and cost analysis. not by default, following tutorials.
I haven't noticed the same trend or evolution of application tiers; perhaps we live in different echo chambers. Teams using microservices need to evaluate whether they're still a good fit, considering the inherent overhead they bring. Applying a bandaid solution on top, if they aren't a good fit, only makes the problem worse.
Following the <i>Getting Started</i>[0] section it seems like <i>DBOS</i> requires the configuration of a Postgres-compatible database[1] (NOTE: <i>DBOS currently only supports Postgres-compatible databases.</i>). Then, after decorating your application functions as workflow steps[2], you'll basically run those workflows by spawning a bunch of worker threads[3] next to your application process.<p>Isn't that a bit... unoptimized? The orchestrator domain doesn't seem to be demanding on compute, so why aren't they making proper use of <i>asyncio</i> here in the first place? And why aren't they outsourcing their runtime to an independent process?<p>EDIT:<p>So "To manage this complexity, we believe that any good solution to the orchestration problem should combine the orchestration and application tiers." (from the article) means that your application runtime will also become the orchestrator for its own workflow steps. Is that a good solution?<p>EDIT2:<p>Are they effectively just shifting any uptime responsibility (delivery guarantees included) to the application process?<p>[0]: <a href="https://github.com/dbos-inc/dbos-transact-py/tree/a3bb7cb6dd53ec58ef4d96a4c9314a16391a0aa5#getting-started" rel="nofollow">https://github.com/dbos-inc/dbos-transact-py/tree/a3bb7cb6dd...</a><p>[1]: <a href="https://docs.dbos.dev/python/reference/configuration#database" rel="nofollow">https://docs.dbos.dev/python/reference/configuration#databas...</a><p>[2]: <a href="https://github.com/dbos-inc/dbos-transact-py/blob/a3bb7cb6dd53ec58ef4d96a4c9314a16391a0aa5/dbos/_core.py#L846" rel="nofollow">https://github.com/dbos-inc/dbos-transact-py/blob/a3bb7cb6dd...</a><p>[3]: <a href="https://github.com/dbos-inc/dbos-transact-py/blob/a3bb7cb6dd53ec58ef4d96a4c9314a16391a0aa5/dbos/_dbos.py#L784" rel="nofollow">https://github.com/dbos-inc/dbos-transact-py/blob/a3bb7cb6dd...</a>
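For reference, the Getting Started amounts to roughly this (paraphrasing the linked repo; decorator names are as documented there, and the Postgres connection settings in dbos-config.yaml are omitted):<p><pre><code>from dbos import DBOS

DBOS()  # reads dbos-config.yaml, which must point at Postgres

@DBOS.step()
def step_one():
    print("step one")

@DBOS.step()
def step_two():
    print("step two")

@DBOS.workflow()
def example_workflow():
    # each completed step is checkpointed to Postgres; after a crash
    # the workflow resumes from the last completed step
    step_one()
    step_two()

if __name__ == "__main__":
    DBOS.launch()  # starts the in-process workers mentioned above
    example_workflow()</code></pre>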
> <i>By persisting execution state to a database, a lightweight library can fulfill the primary goal of an orchestration system: guaranteeing code executes correctly despite failures. If a program fails, the library can look up its state in Postgres to figure out what step to take next, retrying transient issues and recovering interrupted executions from their last completed step.</i><p><pre><code>program =
  email all customers

failure =
  throttled by mailchimp</code></pre>
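Spelling that out: if "email all customers" is one step, recovery from the last completed step restarts the whole send, and everyone already emailed gets emailed again. A minimal sketch of the failure mode and one fix (names illustrative, the "sent log" standing in for a durable table):<p><pre><code>def send_email(customer_id):
    # placeholder for the provider call; raises when throttled
    print("emailing", customer_id)

def email_all_unsafe(customers):
    # one coarse step: if mailchimp throttles us at customer 5,000,
    # retrying this step re-sends to the first 4,999
    for c in customers:
        send_email(c)

def email_all_recoverable(customers, sent_log):
    # finer granularity: record each send durably (e.g. one Postgres
    # row per customer) so a retry skips completed work
    for c in customers:
        if c in sent_log:
            continue
        send_email(c)
        sent_log.add(c)  # must be persisted before the next send</code></pre>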
This is pretty weak; it makes very bold statements ("this is the way it has to be now") with no evidence.<p>Reads like the setup for a sales pitch, which indeed came at the end.
> At the technical and organizational scale of modern enterprises, the complexity of orchestrating distributed systems is unavoidable.<p>*citation needed<p>We continue to make things much more complex than they need to be. Even better when NON "enterprise" applications also buy into the insane complexity because they feel like they have to (but they have nowhere near the resources to manage that complexity).