This is a great example of how, as technology changes, it changes use cases, which can prompt a revisiting of what was once considered a good idea. You'll often see the pendulum of consensus swing in one direction, then swing back to the exact opposite direction less than a decade later.<p>The 2010s saw REST-conforming APIs with JSON in the body, largely as an (appropriate) reaction to what came before, and also in accordance with changes in what browsers were able to do, and thus how much of web apps moved from the backend to the front.<p>But then, that brought even more momentum, and web apps started doing /even more/. There was a time when downloading a few megabytes per page, generating an SVG chart, drawing an image, or responding to live user interaction was all unthinkable. But interactive charting is now the de facto standard. So now we need ways to access ranges and pieces of bulk data. And that looks a lot more like block storage access than REST.<p>---<p>These are core database ideas: you maintain a fast, easy-to-access local cache of key bits of data (called a bufferpool, stored in memory, in e.g. MySQL). In this local cache you keep information on how to reach the remaining bulk of the data (called an index). You minimize dipping into "remote" storage that takes 10-100x longer to access.<p>Database people refer to the "memory wall": a big gap in the cache hierarchy (CPU registers, L1-L3, main memory, disk / network) where, the second you dip beyond it, latency tanks (cue the "latency numbers every programmer should know" chart). So you have to treat it specially and build your query plan to work around it. As storage techniques changed (e.g.
SSDs, then NVMe, then 3D XPoint, etc.), database research shifted to adapt its techniques to leverage the new tools.<p>In this new case, the "wall" sits just before the WAN internet, instead of just before the disk subsystem.<p>---<p>This new environment might call for a new database (and application) architectural style, where executing large and complex code quickly on the client side is no problem at all in an era of 8-core CPUs, emscripten, and JavaScript JITs. So the query engine can move to the client, the main indexes can be loaded and cached within the app, and the job of the backend is suddenly reduced to simply storing and serving blocks of data, something "static" file hosts can do no problem.<p>The fundamental questions are: where do I keep my data, where do I keep my business logic, and where do I handle presentation. What varies is the answer. Variations on this thought:<p>We've already had products that completely separate the "query engine" from the "storage" and provide it as a standalone service, e.g. Presto / Athena, where you can point it at anything from flat files to RDBMSs as "data stores", across which it can do fairly complicated query plans, joins, predicate pushdown, etc. Slightly differently, Snowflake is an example of a database architected around storing the main data in large, cheap cloud storage like S3: no need to copy entire files onto the EC2 node, only the block ranges you know you need. Yet another example of leveraging the boundary between execution and data.<p>People have already questioned the wisdom of having a mostly dumb CRUD backend layer, with minimal business logic, sitting between the web client and the database. The answer is that databases just suck at catering to this niche; it's nothing vastly more complicated than that. They certainly could do granular auth, serde, validation, vastly better performance isolation, HTTP instead of a special protocol, a JavaScript client, etc. etc.
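<p>To make the "fetch only the block ranges you need" idea concrete, here's a minimal sketch of a client-side page cache, i.e. the database bufferpool idea moved into the app. All the names (PageCache, readRemote) are made up for illustration, and the "remote file" is simulated in memory; a real client would replace readRemote() with an HTTP Range request against a static file host, something like fetch(url, { headers: { Range: `bytes=${lo}-${hi}` } }).<p>

```javascript
const PAGE_SIZE = 4096;

// Stand-in for remote block storage: a big array playing the role of
// a data file on a static host. In a real client this would live
// across the WAN "wall".
const remoteFile = new Uint8Array(PAGE_SIZE * 16).map((_, i) => i % 251);
let remoteReads = 0; // count how often we cross the "wall"

function readRemote(offset, length) {
  remoteReads++; // each call here is an expensive round trip
  return remoteFile.slice(offset, offset + length);
}

// Tiny LRU page cache: the client-side analogue of a bufferpool.
class PageCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.pages = new Map(); // Map iterates in insertion order → cheap LRU
  }
  getPage(pageNo) {
    if (this.pages.has(pageNo)) {
      const page = this.pages.get(pageNo); // hit: refresh recency
      this.pages.delete(pageNo);
      this.pages.set(pageNo, page);
      return page;
    }
    const page = readRemote(pageNo * PAGE_SIZE, PAGE_SIZE); // miss: go remote
    if (this.pages.size >= this.capacity) {
      // evict the least-recently-used page (first key in insertion order)
      this.pages.delete(this.pages.keys().next().value);
    }
    this.pages.set(pageNo, page);
    return page;
  }
}

const cache = new PageCache(4);
cache.getPage(0); cache.getPage(1); cache.getPage(0); cache.getPage(0);
console.log(remoteReads); // 2 — four accesses, only two remote reads
```

<p>The point is the counter: once the hot pages (and above all the indexes) are cached in the app, most accesses never cross the "wall" at all, and the backend really is reduced to serving byte ranges.<p>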
Some products have tried.<p>Stored procedures are also generally considered bad (bad tooling, bad performance characteristics and isolation, a large element of surprise), but they needn't be. They're vastly better in some products that are generally inaccessible to, or unpopular with, large chunks of the public. At bottom, they're a half-baked attempt to keep business logic and data close together. And some companies decided, at a certain time, that their faults did not outweigh their benefits, and not too long ago had large portions of critical applications written this way.<p>---<p>Part of advancing as an engineer is learning to weigh when it's appropriate to free yourself from the yoke of "best practices" and "how it's done". You might recognize that something about what you're trying to do is different, or that times and conditions have changed since a thing was decided.<p>And also to know when it's not appropriate: existing, excellent tooling probably works fine for many use cases, and the cost of invention is unnecessary.<p>We see this often when companies and products that are pushing boundaries, or are up against certain limitations, do something that seems silly or goes against the grain of what's obviously good. That's okay: they're not you, and you're not them, and we all have our own reasons. That's the point.