
Everything wrong with databases and why their complexity is now unnecessary

363 points by adamfeldman over 1 year ago

61 comments

davedx over 1 year ago

> The better approach, as we’ll get to later in this post, is event sourcing plus materialized views.

Right, so the solution is more complexity? Of course it is. Sigh.
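For context, the pattern the quoted sentence refers to can be sketched in a few lines. This is a toy illustration, not Rama's actual API; all names are invented. The append-only event log is the source of truth, and the "materialized view" is just derived state rebuilt by folding over the log.

```python
# Minimal event-sourcing sketch: the append-only log is the source of
# truth; the "materialized view" is state derived by replaying it.
events = []  # append-only event log

def record(event):
    events.append(event)

def materialize_follower_counts(log):
    # Rebuild the view from scratch by replaying every event.
    counts = {}
    for e in log:
        if e["type"] == "follow":
            counts[e["followee"]] = counts.get(e["followee"], 0) + 1
        elif e["type"] == "unfollow":
            counts[e["followee"]] = counts.get(e["followee"], 0) - 1
    return counts

record({"type": "follow", "follower": "alice", "followee": "bob"})
record({"type": "follow", "follower": "carol", "followee": "bob"})
record({"type": "unfollow", "follower": "alice", "followee": "bob"})

view = materialize_follower_counts(events)
```

The disagreement in this thread is largely about whether maintaining such derived views incrementally, at scale, is simpler or more complex than a mutable database.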
pgaddict over 1 year ago

Did I miss something, or does that post completely omit concepts like concurrency, isolation, constraints, and such? And are they really suggesting that "query topologies" (which seem very non-declarative, essentially making query planning/optimization the responsibility of the person writing them) are a superior developer environment?
bob1029 over 1 year ago

> No single data model can support all use cases.

In theory, there is no domain (or finite set of domains) that cannot be accurately modeled using tuples of things and their relations.

Practically speaking, the scope of a given database/schema is generally restricted to one business or problem area, but even this doesn't matter as long as the types aren't aliasing inappropriately. You could put a web retailer and an insurance company in the same schema and it would totally work if you are careful with naming things.

Putting everything into exactly one database is a superpower. The #1 reason I push for this is to avoid the need to conduct distributed transactions across multiple datastores. If all business happens in one transactional system, your semantics are dramatically simplified.
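The simplified semantics bob1029 describes can be shown with a toy sketch (SQLite purely for illustration; the table names are invented): two unrelated domains in one database share one ACID transaction, so a failure rolls back both writes with no distributed-transaction protocol.

```python
import sqlite3

# Two unrelated "domains" living in one schema; a single transaction
# can touch both atomically.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE retail_orders (id INTEGER PRIMARY KEY, sku TEXT)")
con.execute("CREATE TABLE insurance_claims (id INTEGER PRIMARY KEY, policy TEXT)")

try:
    with con:  # one ACID transaction spanning both domains
        con.execute("INSERT INTO retail_orders (sku) VALUES ('widget-1')")
        con.execute("INSERT INTO insurance_claims (policy) VALUES ('P-100')")
        raise RuntimeError("simulated failure")  # forces a rollback
except RuntimeError:
    pass

# Neither domain saw a partial write: both inserts rolled back together.
orders = con.execute("SELECT COUNT(*) FROM retail_orders").fetchone()[0]
claims = con.execute("SELECT COUNT(*) FROM insurance_claims").fetchone()[0]
```

Split these tables across two datastores and the same guarantee requires two-phase commit or saga-style compensation logic.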
russdpale over 1 year ago

Seems like a bunch of buzzwords and such. I've been working with databases for years for one of the largest companies in the world, and no one has ever said "topology" before.

Any time I would save with this is wasted on learning Java and this framework.

There isn't anything wrong with databases.
shay_ker over 1 year ago

What's an ELI5 of Rama? I found the docs confusing as well: https://redplanetlabs.com/docs/~/index.html

Please, no buzzwords like "paradigm shift" or "platform". If diagrams are necessary, I'd love to read a post that explains it more clearly.
danscan over 1 year ago

I did a year-long project to build a flexible engine for materialized views onto 1-10 TB live event datasets, and our architecture was roughly converging toward this idea of "ship the code to where the indexes are" before we moved on to a different project.

I'm *very* compelled by Rama, but unfortunately won't adopt it due to the JVM, for totally irrational reasons (I just don't like Java/the JVM). Would love to see this architecture ported!
kgeist over 1 year ago

> The solution is to treat these two concepts separately. One subsystem should be used for representing the source of truth, and another should be used for materializing any number of indexed stores off of that source of truth. Once again, this is event sourcing plus materialized views.

At work we decouple the read model from the write model: the write model ("source of truth") consists of traditional relational domain models with invariants/constraints and all (which, I think, is not difficult to reason about for most devs who are already used to ORMs), and almost every command also produces an event which is published to the shared domain event queue(s). The read model(s) are constructed by workers consuming events and building views however they see fit (and they can be rebuilt, too). For example, we have a service which manages users (a "source of truth" service), and another service is just a view service (to show a complex UI) which builds its own read model/index based on the events of the user service (and other services). Without it, we'd have tons of joins or slow cross-service API calls.

Technically we can replay events (in fact, we once did it accidentally due to a bug in our platform code, when we started replaying ALL events for the last 3 years), but I don't think we ever really needed it. Sometimes we need to rebuild views due to bugs, but we usually do it programmatically in an ad hoc manner (special scripts, or a SQL migration). I don't know what our architecture is properly called (I've never heard anyone call it "event sourcing").

It's just good old MySQL + RabbitMQ and a bit of glue on top (although not super trivial to do properly, I admit: things like transactional outboxes, the at-least-once delivery guarantee, eventual consistency, maintaining correct event processing order, event data batching, DB management, what to do if an event handler crashes, etc.).

So I wonder what we're missing with this setup without Rama: what problems does it solve, and how (from the list above), given that we already have a battle-tested setup and it's language-agnostic (we have producers/consumers in both PHP and Go), while Rama seems to be more geared towards Java?
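One piece of the "not super trivial" glue kgeist lists, making at-least-once delivery idempotent, can be hinted at with a toy sketch (invented event shapes, no real queue; in production the dedupe set would be persisted transactionally alongside the view):

```python
# Sketch of a worker building a denormalized read model from domain
# events, tracking processed event IDs so redelivery is harmless.
read_model = {}        # user_id -> denormalized view row
processed_ids = set()  # dedupe store; lives in the view DB in production

def handle(event):
    if event["id"] in processed_ids:  # duplicate delivery: skip
        return
    if event["type"] == "user_renamed":
        row = read_model.setdefault(event["user_id"], {})
        row["name"] = event["name"]
    processed_ids.add(event["id"])

evt = {"id": 1, "type": "user_renamed", "user_id": 7, "name": "Ada"}
handle(evt)
handle(evt)  # redelivered by the broker: no effect on the view
```

Rama's pitch, as far as the docs describe it, is that this bookkeeping is built into the platform rather than hand-rolled per service.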
avereveard over 1 year ago

Eh, materializing data upon mutation can bring you some gains if your product does like one thing and needs to do it very fast. But as soon as you get complex transactions with things that need to be updated in an atomic write, or you want to add a new feature that needs data organized in a different way, then you're in trouble.

Also, I'm deeply unsatisfied with the "just slap an index on it" that was lightly thrown around in the part about building an application. The index is global state; it was just moved one step further down the layer.
ram_rar over 1 year ago

Even after reading this doc [1], I am not clear on who the target audience is and what you are trying to solve. It would be helpful to take a real-world example and show how easy/efficient it would be to do this via Rama.

[1] https://redplanetlabs.com/docs/~/why-use-rama.html#gsc.tab=0
ecshafer over 1 year ago

I don't see how you can claim this is proved by a "Twitter-scale Mastodon client" unless you are actually running a website with 40M daily users. Simulating a real environment, with the accompanying code and infra changes, real users, network usage, etc., is impossible.
brianmcc over 1 year ago

We do go in circles/cycles quite a lot as an industry. I wonder if the trend right now is back towards SQL; too many teams have been burned by Event Sourcing when they just needed a decent SQL DB? Just idle conjecture...
kopos over 1 year ago

The comments here are needlessly pessimistic and dismissive of a new data-flow paradigm. In fact, this looks like the best NoSQL experience there is. SQL, while it is a standard now, had to prove itself many times over, and it was also the result of a massive push by a few big tech backers.

Rama still looks like it needs some starter examples; that is all.

From what I could gather reading the documentation over a few weeks: Rama is an engine supporting stored procedures over NoSQL systems. That point alone is worth a million bucks. I hope it lives up to the promise.

Now back to my coding :D
bccdee over 1 year ago

Reminds me a lot of "Turning the Database Inside-Out" [1], but I think Red Planet Labs is overstating their point a little. TtDIO is a lot more careful with its argument, and it doesn't claim to have some sort of silver bullet to sell me.

[1]: https://www.confluent.io/blog/turning-the-database-inside-out-with-apache-samza/
chrisjc over 1 year ago

I haven't read through all of the documentation, and while I actually love Java, I'm surprised that there isn't some kind of declarative language (DDL, but more than just the "data" in Data Description Language), even if that means relying on non-standard SQL objects/conventions.

    CREATE OR REPLACE MODULE MY_MOD ...
    CREATE OR REPLACE PSTATE MY_MOD.LOCATION_UPDATE (USER_ID NUMBER, LOC...
    CREATE PACKAGE MY_PACKAGE USING MY_MOD
    DEPLOY OR REDEPLOY MY_PACKAGE TASKS = 64 THREADS = 16 ...

Perhaps the same could be said for DML (Data Manipulation Language). I can imagine most DML operations (insert/update/delete/merge) could be used, while event sourcing occurs behind the scenes with the caller being none the wiser. Might there be an expressive way to define the serialization of parts of the DML (columns) down to the underlying PState? After all, if the materialized version of the PStates is based on expressions over the underlying data, then surely the reverse expression would be enough to understand how to mutate said underlying data. Or at least a way for Rama to derive the respective event-sourcing processes and handle them behind the scenes? Serialization/deserialization could also be defined in SQL-like expressions as part of the schema/module.

I say all of this while being acutely aware that there are undoubtedly as many people out there who dislike SQL as there are who dislike Java, or maybe more.

I really like this:

> Every backend that’s ever been built has been an instance of this model, though not formulated explicitly like this. Usually different tools are used for the different components of this model: data, function(data), indexes, and function(indexes).
the_duke over 1 year ago

Every time I've tried to use event sourcing I have regretted it, outside of some narrow and focused use cases.

In theory ES is brilliant and offers a lot of great functionality: replaying history to find bugs, going back to any arbitrary point in history, being able to restore just from the event log, diverse and use-case-tailored projections, scalability, ...

In practice it increases the complexity to the point where it's a pointless chore.

Problems:

* The need for events, aggregates, and projections increases the boilerplate tremendously. You end up with lots of types and related code representing the same thing. Adding a single field can lead to a 200+ LOC diff.

* A simple thing like having a unique index becomes a complex architectural decision and problem... do you have an in-memory aggregate? That doesn't scale. Do you use a projection with an external database? Well, how do you keep that change ACID? Etc.

* You need to keep support for old event versions forever, and you either need code to cast older event versions into newer ones, or an event-log rewrite flow that removes old events before you can remove them from code.

* If you have bugs, you can end up needing fixup events / event types that only exist to clean up, and, as above, you have to keep those around for a long time.

* Similarly, bugs in projection code can mess up the target databases and require cumbersome cleanup / rebuilding the whole projection.

* Regulation like GDPR requires deleting user data, but often you can't / don't want to just delete everything, so you need an anonymizing rewrite flow. It can also become quite hard to figure out where the data actually is.

* The majority of use cases will make little to no use of the actual benefits.

A lot of the above could be fixed with proper tooling: a powerful ES database that handles event schemas, schema migrations, projections, indexes, etc., maybe with a declarative system that also allows providing custom code where necessary.

I'll take a look at Rama, I guess.
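The event-versioning problem in the third bullet is commonly handled with "upcasters": old event versions are migrated to the current shape at read time, so projection code only ever sees the latest version. A toy sketch, with invented event shapes not tied to any framework:

```python
# Upcasting sketch: each stored event carries a version; reads chain
# version-to-version migration functions until reaching the current one.
def v1_to_v2(e):
    # v2 split "name" into first/last
    first, _, last = e["name"].partition(" ")
    return {"version": 2, "first": first, "last": last}

def v2_to_v3(e):
    # v3 added an optional email, defaulted for old events
    return {"version": 3, "first": e["first"], "last": e["last"], "email": None}

UPCASTERS = {1: v1_to_v2, 2: v2_to_v3}
CURRENT_VERSION = 3

def upcast(event):
    while event["version"] < CURRENT_VERSION:
        event = UPCASTERS[event["version"]](event)
    return event

old = {"version": 1, "name": "Ada Lovelace"}
current = upcast(old)
```

This is exactly the "keep support for old event versions forever" cost: every historical version needs a migration function kept in the codebase until the log is rewritten.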
cmrdporcupine over 1 year ago

*Data models are restrictive*

That's kind of the point. Model your data. Think about it. Don't (mis)treat your database as a "persistence layer"; it's not. It's a knowledge base. The "restriction" in the relational model is making you think about knowledge, facts, data, and then structure them in a way that is *then* more universal and less restrictive for the future.

Relations are very expressive and, done right, far more flexible than the others named there. That was Codd's entire point:

https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf

*"Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation) ..."*, and he then goes on to explain how the predicate-logic-based relational data model is a more universal and flexible model that protects users/developers from the *static* impositions of tree-structured/network-structured models.

All the other stuff in this article is getting stuck in the technical minutiae of how SQL RDBMSs are implemented (the author seems obsessed with indexes). But that's somewhat beside the point. A purely relational database that jettisons SQL doesn't have to have the limitations the author is poking at.

It's so frustrating that we're still going over this stuff decades later. This was a painful read. People developing databases should already be schooled in this stuff.
specialist over 1 year ago

https://blog.redplanetlabs.com/2024/01/09/everything-wrong-with-databases-and-why-their-complexity-is-now-unnecessary/#Restrictive_schemas

> *It’s common to instead use adapter libraries that map a domain representation to a database representation, such as ORMs. However, such an abstraction frequently leaks and causes issues. ...*

FWIW, I'm creating a tool (strategy) that is neither an ORM nor an abstraction layer (e.g. jOOQ) nor template-based (e.g. MyBatis). Just type-safe adapters for normal SQL statements.

Will be announcing an alpha release "Any Week Now".

If anyone has an idea for how to monetize yet another database client library, I'm all ears. I just need to eat, pay rent, and buy dog kibble.
manicennui over 1 year ago

A lot of these same problems were solved in a similar way with Datomic and XTDB.

https://www.datomic.com/
https://xtdb.com/
kaba0 over 1 year ago

> However, storing data normalized can increase the work to perform queries by requiring more joins. Oftentimes, that extra work is so much you’re forced to denormalize the database to improve performance.

Databases have materialized views, though; that solves this problem.
w10-1 over 1 year ago

I was in favor of doubling the complexity by prefixing the RDB with event logs, for retrospective QA/analysis and prospective client segregation.

Databases now are a snapshot of the data modeling and usage at a particular point in the application lifecycle. We manage to migrate data as it evolves, but you can't go back in time.

Why go back? In our case, our interpretation of events (as we stuffed data into the DB) was hiding the data we actually needed to discover problems with our (bioinformatics and factory) workflow: the difference between expected and actual output that results from, e.g., bad batches of reagent or a broken picker tip. We only stored, e.g., the expected blend of reagents, because that's all we needed for planning. That meant we had no way to recover the actions leading to that blend for purposes of retrospective quality analysis.

So my proposal was to log all actions, derive models (of plate state) as usual for the purposes of present applications, but still be able to run data analysis on the log to do QA when results were problematic.

"Ha ha!" they said. But still :)

Event prefixing might also help in the now/later design trade-off. Typically we design around requirements now, and make some accommodation for later if it's not too costly. Using an event log up front might work for future-proofing. It also permits "incompatible" schemas to coexist for different clients, as legacy applications read the legacy downstream DB while new ones read the upcoming DB.

For a bio service provider, old clients validate a given version, and they don't want the new model or software, while new clients want the new stuff you're building for them. You end up maintaining different DB models and infrastructure. Yuck!

But with event sourcing, you can at least isolate the deltas, so, e.g., HIPAA controls and auditing live in the event layer and don't apply to the in-memory bits.

TBH, a pitch like Rama's would play better in concert with existing DBs, as a way to incrementally migrate the workflows that would benefit from it. Managers are often happy to let IT entrepreneurs experiment if it keeps them happy and away from messing with business-critical functions.

YMMV...
MagicMoonlight over 1 year ago

If you gut all the features like persistence and rolling back errors, then you can definitely make things less complex.

But then someone wants to access their email, and it turns out the server restarted, so it's gone.
jakozaur over 1 year ago

My biggest problem with databases is that they are very hard to evolve. They accumulate a history of decisions and end up in a suboptimal state. Legacy is widespread in enterprises. Oracle is still milking $50B+ annually, and databases are the primary driver of why you need Oracle and why they can upsell you other products after a compliance audit.

Schema changes are hard (e.g. try to normalize/denormalize data), production is the only environment where things go wrong, in-place changes with untested revert options are the default, etc.
dcow over 1 year ago

I get weird looks when I tell people we ran for 3.5 years on an S3 API in front of bucket storage. It scaled to meet our needs and was especially appropriate for our app's storage profile. And now that the startup doesn't exist, I'm glad that I never wasted time messing with "real" DBs. There's definitely an industry bias toward using DBs.
csours over 1 year ago

How about this: an ACID RDBMS is, in many cases, sugar. That is, it provides very NICE features, but those features can be implemented in other ways. In the cloud world, the sugar may not be worth the costs.

I think the weak case is much stronger than the strong case; that is, you can refactor to remove RDBMS dependencies, but that moves the complexity elsewhere.
nojvek over 1 year ago

Read the tutorial: https://redplanetlabs.com/docs/~/tutorial1.html#gsc.tab=0

This is quite complex compared to setting up Postgres or MySQL and sending some SQL over a port.

I'm not sure I get what they are selling.
jrockway over 1 year ago

A few years ago I tried writing an application (something like Status Hero for internal use) with a non-traditional database. I used Badger, which is just a transactional k/v store, and stored each "row" as a protobuf value with an ID-number key. (message { int id = 1 }, query by type + ID, store anything with interface { GetId() int }.)

I had additional messages for indexes and per-message-type IDs. (I like auto-incrementing IDs, sue me.) A typical transaction would read indexes, retrieve rows, manipulate them, save the rows, save the indexes, and commit.

The purity in my mind before I wrote the application was impressive; this is all a relational database is doing under the hood (it has some bytes and some schema to tell it what the bytes mean, just like protos). But it was actually a ton of work that distracted me from writing the app. The code to handle all the machinery wasn't particularly large or anything, but the app also wasn't particularly large.

I would basically say it wasn't worth it. I should have just used Postgres. The one ray of sunshine was how easy it is to ship a copy of the database to S3; the app just backed itself up every hour, which is a better experience than I've had with Postgres (where the cloud provider deletes your backups when you delete the instance... so you have to do your own crazy thing instead).

The article is on point about managing the lifecycle of data. Database migrations are a stressful part of every deployment. The feature I want is to store a schema ID number in every row and teach the database how to run a v1 query against a v2 piece of data. Then you can migrate the data while the v1 app is running, then update the app to make v2 queries, then delete the v1 compatibility shim. If you store blobs in a k/v store, you can do this yourself. If you use a relational model, it's harder: you basically take down the app that knows v1 of your schema, upgrade all the rows to v2, and deploy the app that understands v2. The "upgrade all the rows to v2" step results in your app being unavailable. (The compromise I've seen, and used, which is horrible, is "just let the app fail certain requests while the database is being migrated, and then have a giant mess to clean up when the migration fails". Tests lower the risk of a giant mess, and selective queries result in fewer requests that can't be handled by the being-migrated database, so in general people don't realize what a giant risk they're taking. But it can all go very wrong, and you should be horrified when you do this.)
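The "schema ID in every row" idea described above can be sketched as a chain of row-upgrade functions applied at read time, so old rows are migrated lazily while the app keeps serving. The field names here are invented for illustration:

```python
# Lazy per-row schema migration: each stored blob carries its schema
# version, and reads chain upgrade functions to the app's version.
def upgrade_1_to_2(row):
    # v2 renamed "contact" to "email"
    row = dict(row)
    row["email"] = row.pop("contact", None)
    row["schema"] = 2
    return row

UPGRADES = {1: upgrade_1_to_2}

def read_row(raw, target_version=2):
    row = dict(raw)
    while row["schema"] < target_version:
        row = UPGRADES[row["schema"]](row)
    return row

v1_row = {"schema": 1, "id": 42, "contact": "a@example.com"}
row = read_row(v1_row)
```

With this in place, a background job can rewrite rows to v2 at leisure; nothing has to stop the world, which is the availability win the comment is after.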
kevsim over 1 year ago

Is this Rama solution similar to the kind of thing you can get with Kafka with KTables?

If so, I'd be curious how they've solved making it in any way operationally less complex to manage than a database. It's been a few years since I've run Kafka, but it used to kind of be a beast.
cryptonector over 1 year ago

Event sourcing (+ materialized views and indices) != abandon your RDBMS. You can have both. Though you might find that traditional RDBMSes don't optimize well enough in the event sourcing (+ materialized views and indices) model.
lambda_garden over 1 year ago

    indexes = function(data)
    query = function(indexes)

How does this model a classic banking app where you need to guarantee that transfers between accounts are atomic?
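One common answer (not from the article, and hedged accordingly): in an event-sourced design, atomicity of the transfer itself can come from recording it as a single log entry, so the two balance changes are always derived together. A toy sketch with invented names:

```python
# A transfer is ONE event in the log, so a view derived from the log
# can never observe the debit without the matching credit.
log = []

def transfer(src, dst, amount):
    log.append({"type": "transfer", "src": src, "dst": dst, "amount": amount})

def balances(log, opening):
    # Derive all balances by folding over the log ("indexes = function(data)").
    bal = dict(opening)
    for e in log:
        if e["type"] == "transfer":
            bal[e["src"]] -= e["amount"]
            bal[e["dst"]] += e["amount"]
    return bal

transfer("alice", "bob", 25)
view = balances(log, {"alice": 100, "bob": 0})
```

The harder part, which this question really points at, is validating the transfer (e.g. sufficient funds) before appending it; that still requires serializing writes against current state, which is exactly what RDBMS transactions give you out of the box.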
estebarb over 1 year ago

It reminds me of CouchDB + incremental map-reduce, except that in CouchDB you can mutate state. Idk, but doesn't keeping all the history take a lot of space?
marcosdumay over 1 year ago

Well, I guess it's official. The belief in software architecture dogma is so strong that we can consider it a church.

I imagine that will bring great tax benefits to programming schools.
skywhopper over 1 year ago
This is marketing spiel masquerading as a bad take. Rama may or may not be cool tech, but the idea that they are anywhere close to being able to get rid of structured database systems for complex systems is absolutely laughable to the point that it makes me uninterested in learning more about the tech. Please tone down the hyperbole if you want serious attention.
strangattractor over 1 year ago

Seems similar to Datomic.

https://www.datomic.com/benefits.html
pulse7 over 1 year ago

Could be generalized to "Everything wrong with <placeholder> and why their complexity is now unnecessary"...
sigmonsays over 1 year ago

This seems like a classic bait-and-switch post selling a product called Rama.

The approach here seems drastically more complicated; for simple apps, you go for a well-known master->slave setup. For complicated apps, you scale (shard, cluster, etc.).

Pick your database appropriately.
LispSporks22 over 1 year ago

What's micro-batch streaming?
HackerThemAll over 1 year ago
A lame marketing mumbling to persuade people to buy a specific product.
tpl over 1 year ago
What sort of cost increase can I expect using something like this?
es7 over 1 year ago

Did anyone else think this was satire for the first few minutes of reading it?

Calling databases global state and arguing why they shouldn’t be used was ridiculous enough that I wanted to call Poe’s Law here.

But it does look like the author was sincere. Event Sourcing is one of those cool things that seem great in theory, but in my experience I’ve never seen it actually help teams produce good software quickly or reliably.
qaq over 1 year ago

Most SQL RDBMSes are a materialised view over a transaction log.
continuational over 1 year ago

Does it have ACID transactions?

Do the indexes have read-after-write guarantees?
0xbadcafebee over 1 year ago

Strapping in for the clickbait blog post...

*"Global mutable state is harmful"* - well... yes, that's totally correct. *"The better approach [..] is event sourcing plus materialized views."* .....errr... that's *one* approach. We probably shouldn't hitch all our ponies to one post.

*"Data models are restrictive"* - well, yes, but that's not necessarily a bad thing, it's just "a thing". *"If you can specify your indexes in terms of the simpler primitive of data structures, then your datastore can express any data model. Additionally, it can express infinite more by composing data structures in different ways"* - perhaps the reader can see where this is a bad idea? By allowing infinite data structures, we now have infinite complexity. Great. So rather than 4 restrictive data models, we'll have 10,000.

*"There’s a fundamental tension between being a source of truth versus being an indexed store that answers queries quickly. The traditional RDBMS architecture conflates these two concepts into the same datastore."* - well, the problem with looking at it this way is, there is no truth. If you give any system enough time to operate, grow, and change, eventually the information that was "the truth" receives information back from something that was "indexing" the truth. "Truth" is relative. *"The solution is to treat these two concepts separately. One subsystem should be used for representing the source of truth, and another should be used for materializing any number of indexed stores off of that source of truth."* This will fail eventually, when your source of truth isn't as truthy as you'd like it to be.

*"The restrictiveness of database schemas forces you to twist your application to fit the database in undesirable ways."* - it's a tool. It's not going to do everything you want, exactly as you want. The tradeoff is that it does one thing really specifically and well.

*"The a la carte model exists because the software industry has operated without a cohesive model for constructing end-to-end application backends."* - but right there you're conceding that there has to be a "backend" and "frontend" to software design. Your models are restrictive because your paradigms are. *"When you use tooling that is built under a truly cohesive model, the complexities of the a la carte model melt away, the opportunity for abstraction, automation, and reuse skyrockets, and the cost of software development drastically decreases."* - but actually it's the opposite: a "cohesive model" just means "really opinionated". A la carte is actually a significant improvement over cohesion *when it is simple and loosely coupled*. There will always be necessary complexity, but it can be managed more easily when individual components maintain their own cohesion and, outside of those components, maintain an extremely simple, easy interface. *That* is what makes for more composable systems that are easier to think about, not cohesion between all of the components!

*"A cohesive model for building application backends"* - some really good thoughts in the article, but ultimately "cohesion" between system components is not going to win out over individual components that maintain their own cohesion and join via loosely coupled interfaces. If you don't believe me, look at the whole Internet.
phartenfeller over 1 year ago

I have been working as a database consultant for a few years. I am, of course, in my bubble, but there are a few things I really don't enjoy reading.

> No single data model can support all use cases. This is a major reason why so many different databases exist with differing data models. So it’s common for companies to use multiple databases in order to handle their varying use cases.

I hate that this is a common way of communicating this nowadays. Relational has been the mother of all data models for decades. In my opinion, you need a good reason to use something different. And this is also not an XOR: in the relational world, you can do K/V tables, store and query documents, and use graph functions in some DBs. And relational has so many safety tools to enforce data quality (e.g. referential integrity, constraints, transactions, and unique keys). Data quality is always important in the long run.

> Every programmer using relational databases eventually runs into the normalization versus denormalization problem. [...] Oftentimes, that extra work is so much you’re forced to denormalize the database to improve performance.

I was never forced to denormalize anything. Almost always, poor SQL queries are the problem. I guess this can be true for web hyperscalers, but those are exceptions.
morsecodist over 1 year ago

This is more an advertisement for a type of database than a statement that databases are unnecessary.

From what I can tell from the article, their differentiator is event sourcing plus arbitrarily complex index builders on top of the events. It seems similar to EventStoreDB [1].

I have always been interested in the concept of an event-sourcing database with projections, and I want to build one eventually, so it is interesting to see how they have approached the problem.

Also, they mention on their site:

> Rama is programmed entirely with a Java API – no custom languages or DSLs.

It makes sense why they have gone this route if they want a "Turing-complete dataflow API", but this can be a major barrier to adoption. This is a big challenge with implementing these databases, in my opinion, because you want to allow any logic to build out your indexes/projections/views, but then you are stuck choosing between a new, complicated DSL or a particular language.

[1]: https://developers.eventstore.com/server/v23.10/#getting-started
register over 1 year ago
Pure B...t. The title is deceptive and should instead be something along the lines of: "How to architect an application at Mastodon scale without relying on databases." I would also be very interested in seeing the actual technology rather than reading sensational claims about the unparalleled level of scalability it supports. What does it provide to recover from failures and exceptions, and to guarantee consistency of state?

Relational databases are and always will be necessary, as they provide a convenient model for querying, aggregating, joining, and reporting on data.

Much of the value in a database lies in how it supports extracting value from business information, not in what extreme scalability features it offers.

Try to create a decent business report from raw events and then we can talk again.
benlivengood over 1 year ago
I didn't notice a mention of transactions in the article, nor of constraints. It's all fine to claim that you can compose arbitrary event source domains together and query them, but IMHO the biggest power of an RDBMS is transactions and constraints for data integrity. Maybe Rama comes with amazing composability features that ensure cross-domain constraints, but I would be really surprised if it can maintain globally consistent real-time transactions.

I've worked on huge ETL pipelines with materialized views (Photon, Ubiq, Mesa), and the business logic in Ubiq to materialize the view updates for Mesa was immense. None of it was transactional; everything was for aggregate statistics, so it worked well. Ads-DB and Payments used Spanner for very good reasons.
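(To ground what "transactions and constraints" buys you, a minimal sketch with SQLite -- any RDBMS, and none of this is Rama: an overdraft trips a CHECK constraint and the half-applied transfer is rolled back atomically.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id TEXT PRIMARY KEY,"
    " balance INTEGER CHECK (balance >= 0))"  # invariant lives in the engine
)
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("a", 100), ("b", 0)])

def transfer(src, dst, amount):
    try:
        with conn:  # one transaction: both updates commit, or neither does
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE id = ?",
                (amount, src))
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?",
                (amount, dst))
    except sqlite3.IntegrityError:
        pass  # CHECK fired: overdraft rejected, nothing partially applied

transfer("a", "b", 60)  # fine: a=40, b=60
transfer("a", "b", 60)  # would overdraw a: rejected and rolled back
print(dict(conn.execute("SELECT id, balance FROM accounts")))
```

Replicating that guarantee across composed event-sourced domains is the hard part the article doesn't address.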
jcrawfordor over 1 year ago
I feel like I went into this from a position of genuine interest; I'm always on the lookout for significant developments in backend architecture.

But when I hit the sentence "This can completely correct any sort of human error," I actually laughed out loud. Either the author is overconfident or they have had surprisingly little exposure to humans. More concretely, it seems to completely disregard the possibility of invalid/improper/inconsistent events being introduced by the writer... which is exactly how things go wrong. And I don't see any justification for disregarding this possibility; it's just sort of waved away. That waves away most of the actual complexity I see in this design: having to construct your PState data models from your actual, problematic event history. Anyone who has worked with ETLs over a large volume of data will have spent many hours on this fight.

I think the concept is interesting, but the revolutionary zeal of this introduction seems unjustified. It's so confident in the superiority of Rama that I have a hard time believing any of the claims. I would like to see a much more balanced compare/contrast of Rama against a more conventional approach, and particularly for a much more complex application than a Twitter clone, which is probably just about the best possible case for demonstrating this architecture.
fifticon over 1 year ago
I work in a shop with about 6 years of event-sourcing experience (as in, our production has run on event sourcing since 2017). My view is that humans are not mature enough for event sourcing. For event sourcing to work sanely, it must be used responsibly. The reality is that people make mistakes, and event sourcing HURTS whenever your developers don't act maturely on the common history you have built. For us, it has meant a bungee jump of "move ALL the things to event sourcing", followed by a long, slow, painful "move everything that doesn't NEED event sourcing back out into a relational database, and keep only the relevant parts in the actual event-source DB".

The main consequence for us has been consuming a huge/expensive amount of resources to do what we already did earlier with vastly fewer resources, with the benefit of some things becoming easier and the cost of a lot of other things suddenly becoming complex. In particular, it was not a costless abstraction; it forced us to always consider the consequences for our event sourcing.
winrid over 1 year ago
I get what they're trying to do, but I'm not sure this [0] syntax is the answer.

[0] https://github.com/redplanetlabs/twitter-scale-mastodon/blob/master/backend/src/main/java/com/rpl/mastodon/modules/Notifications.java#L70
gwbas1c over 1 year ago
I spent a lot of time reading this yesterday, and started looking at Rama's docs.

I think a database that encapsulates denormalization, so that derived views (caches, aggregations) are automatic, is a killer feature. But far too often awesome products and ideas fail for trivial reasons.

In this case, I just can't understand how Rama fits into an application. For example:

Every example is Java. Is Rama only for Java applications? Or is there a way to expose my database as a REST API (that doesn't require me to jump through a million hoops and become an expert in the Java ecosystem)?

Can I run Rama in Azure / AWS / Google Cloud / Oracle Cloud? Are there pre-built Docker images I can use? Or is this a library that I have to pull into a Java application and run on some kind of existing runtime? (The docs mention ZooKeeper, but I have very little experience with it.)

IE: It's not clear where the boundary between my application (Java or not) and Rama is. Are the examples analogous to sprocs (run in the DB) or business logic (run in the application)?

The documentation is also very hard. The author appears to have every concept in their head, because they know Rama inside and out, yet can't empathize with the reader and provide simpler bits of information that convey useful concepts. There's both "too much" (mixing the explanation of PStates and the depot) and "too little" (where do I host it? what is the boundary between Rama and my application?).

Another thing I didn't see mentioned is tooling: every SQL database has at least one general SQL client (MSSQL Studio, Azure Data Studio) that allows interacting with the database (viewing the schema, ad-hoc queries, etc.). Does Rama have this, or is every query a custom application?

Anyway, it seems like a cool idea, but it probably needs some well-chosen customers who ask tough questions so the docs can mature.
keeganpoppen over 1 year ago
this comment section has gotta be in the absolute upper echelons of non-RTFA i have seen on HN in a long time. even for HN. i acknowledge my own bias, though: i've been an admirer of nathan marz's work from afar for years now, and basically trust him implicitly. but... wow. what fraction of the comments even engage with the substance of the article in any way? it's not like they didn't put their money where their mouth(s) is/are: they feel strongly enough about the problem that they built an entire goddamn "don't call it a database" (and business) around it.

i've always been pretty sympathetic to code-/application-driven indexing, storage, etc. -- it just seems intuitively more correct to me, if done appropriately. the biggest "feature" of databases, afaict, is that most people don't trust themselves to *do* this appropriately xD. and they distrust themselves in this regard so thoroughly that they deny the mere *possibility* of it being useful. some weird form of learned helplessness. you can keep cramming all of your variously-shaped blocks into tuple-shaped holes if you want, but it seems awfully closed-minded to deny the possibility of a better model *on principle*. what principle? the lindy effect?
big_whack over 1 year ago
A lot of the commenters seem like database fans instinctively jumping to defend databases. The post is talking about contexts where you are dealing with petabytes of data. Building processing systems for petabytes has a separate set of problems from what most people have experienced. Having a single Postgres for your startup is probably fine; that's not the point here.

There is no option to just "put it all in a database". You need to compose a number of different systems. You use your individual databases as indexes, not as primary storage, and the primary storage is probably S3. The post is interesting and the author has been working on this stuff for a while. He wrote Apache Storm and used to promote some of these concepts as the "Lambda architecture", though I haven't seen that term in a while.
saberience over 1 year ago
Such a poorly written article doesn't encourage me to use a brand new and untrusted database; if you can't write a clear article, why would I trust your database code?

This is a thinly veiled ad for Rama, but the explanation for why it's so much "better" isn't clear and doesn't make much sense. I strongly urge the author to work with someone who is a clear and concise technical writer to help with articles such as these.
fipar over 1 year ago
I can't wrap my head around the way this solves the global mutable state problem.

First, here's what I do understand about databases and global state: compared to programming variables, I don't think databases are shared, mutable global state. Instead, I see them as private variables that can be changed through set/get methods (e.g., with SQL statements on such a DB).

So I agree shared, global state is dangerous (I'm not sure I'd call it harmful), and the reason I like databases is that I assume a DB, being specialized at managing data, will do a better job at protecting the integrity of that global state than I'd do myself from my program.

With luck, there may even be a Jepsen test of the DB I'm using that lets me know how good the DB is at doing this job.

In this post there's an example of a question we'd ask Rama: "What is Alice's current location?"

How's that answered without global state?

Because of the mention of event sourcing, I'd guess there's some component that knows where Alice was when the system was started, and keeps a record of events every time she changes her place. If Alice were the LOGO turtle, this component would keep a log with entries such as "Left 90 degrees" or "Forward 10 steps".

If I want to know where Alice is now, I just need to access this log and replay everything, and that'd be my answer.

Now, I'm certain my understanding here must be wrong, at least on the implementation side, because this wouldn't scale to the Mastodon demo mentioned in the post, which makes me very curious: how does Rama let me know where Alice is without giving me access to her state?
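(My understanding of the two options, as a toy sketch -- this is my guess at the idea, not how Rama actually implements it. The point of a materialized view is that you fold events into an index as they arrive, so reads never replay the log:)

```python
events = [
    {"user": "alice", "moved_to": "Berlin"},
    {"user": "bob",   "moved_to": "Lima"},
    {"user": "alice", "moved_to": "Tokyo"},
]

# Naive read: replay the whole log on every query -- O(history) per read,
# which is what wouldn't scale to the Mastodon demo.
def location_by_replay(user):
    loc = None
    for e in events:
        if e["user"] == user:
            loc = e["moved_to"]
    return loc

# Materialized-view read: fold each event into an index as it arrives,
# so "Where is Alice?" becomes an O(1) lookup against precomputed state.
current_location = {}
for e in events:
    current_location[e["user"]] = e["moved_to"]

assert location_by_replay("alice") == current_location["alice"] == "Tokyo"
```

Both answer the question from the same log; the difference is only when the folding work happens.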
jmull over 1 year ago
Pretty interesting once you read past the marketing push.

I mostly like the approach, but there are a lot of questions/issues that spring to mind (not that some of them don't already have answers, but I didn't read everything). I'll list some of them:

* I'm pretty sure restrictive schemas are a feature, not a bug, but I suppose you can add your own in your ETL "microbatch streaming" implementation (if I'm reading this right, this is where you transform the events/data that have been recorded into the indexed form your app wants to query). So you could, e.g., filter out any data with an invalid schema, record an error about the invalid data, and so on. It's a pain, though, for it to be a separate thing to implement.

* I'm not that excited to have my data source and objects/entities be Java.

* The Rama business model and sustainability story seem like big question marks that would need strong, long-lasting answers/guarantees before anyone should invest too much in this. This is pretty different and sits at a fundamental level of abstraction. If you built on this for years (or decades) and then something happened, you could be in serious trouble.

* Hosting/deployment/resources-needed is unclear (to me, anyway).

* A quibble on "Data models are restrictive": common databases are pretty flexible these days, supporting different models well.

* I'm thinking a lot of apps won't get much value from keeping their events around forever, so that becomes a kind of anchor around the neck, a cost that apps using Rama have to pay whether they really want it or not. I have questions about how that scales over time. E.g., say my depot has 20B events and I want to add an index to a PState or a new value to an enum... do I need to ETL 20 billion events to make routine changes/additions? And obviously schema changes get a lot more complicated than that. I get that you could have granular PStates, but then I start worrying about the distributed nature of this. I guess you would generally do migrations by creating new PStates with the new structure, take as much time as you need to populate them, cut over as gradually as you need, and then retire the old PStates on whatever timeline you want... But that's a lot of work you'd want to avoid doing routinely, I'd think.

I'm starting to think of more things, but I'd better stop (my build finished long ago!)
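(On the first point -- a hand-rolled schema gate in the transform step might look like the sketch below. Purely illustrative: `REQUIRED` and `microbatch` are made-up names, not anything from Rama's API.)

```python
# Expected shape of an event: field name -> required Python type.
REQUIRED = {"type": str, "user_id": int}

def valid(event):
    """True if every required field is present with the right type."""
    return all(isinstance(event.get(k), t) for k, t in REQUIRED.items())

def microbatch(events):
    """Partition a batch into events to index and events to dead-letter."""
    good, bad = [], []
    for e in events:
        (good if valid(e) else bad).append(e)
    return good, bad

good, bad = microbatch([
    {"type": "follow", "user_id": 7},
    {"type": "follow", "user_id": "oops"},  # wrong type: caught here
    {"user_id": 9},                         # missing field: caught here
])
assert len(good) == 1 and len(bad) == 2
```

Which works, but it's validation you write and maintain yourself, where a restrictive schema would have rejected the bad events at write time.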
coldtea over 1 year ago
"We announced Rama on August 15th with the tagline 'the 100x development platform'."

An alternative-to-databases company peddling their wares with a misinformed rant.
twotwotwo over 1 year ago
A very simple thing about this (and many systems!) is that if your whole thing is "log writes and do the real work later", you lose read-your-writes, and with it the idea that your app has a big persistent memory to play in.

This doesn't only matter if you're doing balance transfers or such; "user does a thing and sees the effects in a response" is a common wish. (Of course, if you're saving data for analytics or such and really don't care, that's fine too.)

When people use eventually-consistent systems in domains where they have to layer on hacks to hide some of the inconsistency, it's often because that was the best path they had out of a scaling pickle, not because it's the easiest way to build an app more generally.

I guess the other big thing is, if you're going to add asynchrony, it's not obvious this is where you want to add it. If you think of ETLs, event buses, and queues as tools, there are a lot more ways to deploy them: different units of work than just rows, different backends, different amounts of asynchrony for different things (including none), etc. Why lock yourself down when you might be able to assemble something better knowing the specifics of your situation?

This company's thing is riding the attention they get by making goofy claims, so I'm a bit sorry to add to it. I do wonder what happens once they're talking to actual or potential customers, where you can't bluff indefinitely.
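(The read-your-writes loss in miniature -- a toy sketch, not any real system's API: the write is acknowledged into a log, but a read issued before the asynchronous materializer drains the log still sees the old value.)

```python
from collections import deque

log = deque()          # writes land here first and are acked immediately...
view = {"bio": "old"}  # ...but reads go against this view, updated later

def write(key, value):
    log.append((key, value))  # acknowledged; the view is untouched

def drain():
    """The async materializer: folds pending log entries into the view."""
    while log:
        k, v = log.popleft()
        view[k] = v

write("bio", "new")
stale = view["bio"]   # user saves, reloads the page, still sees "old"
drain()               # the materializer catches up sometime later...
fresh = view["bio"]   # ...and only now does the write become visible
assert (stale, fresh) == ("old", "new")
```

Hiding that window from the user is exactly the layer of hacks mentioned above.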
igammarays over 1 year ago
As a simple solopreneur full-stack dev who's never worked on an application serving more than a few thousand users, I can understand and relate to all of the problems written about here (some very compelling arguments), and I found myself nodding most of the way through, but I simply don't understand the proposed solution. Even the Hello World example from the docs flew over my head. And I've been programming apps in production for 15 years, and I like Java.

This needs a simple pluggable adaptor for some popular frameworks (Django, Laravel, or Ruby on Rails), and then I could begin to have an idea how this would actually be used in my project.
thaanpaa over 1 year ago
Disk drives are also large global mutable state. So is RAM at the operating-system level.

The article conflates the concept of data storage with best programming practices. Sure, you should not mutate global state throughout your app, because it becomes impossible to manage. The database is actually the answer to how to do it transactionally and centrally without messing up your data.
keeganpoppen over 1 year ago
this post is actually extremely based
intrasight over 1 year ago
"Everything wrong with blogs that do not support reader view"

Why does anyone create a blog like that?