> Thin Events add coupling<p>That’s not my experience. In fact I’d say fat events add coupling because they create an invisible (from the emitter) dependency on the event body, which becomes ossified.<p>So I’d say the opposite: thin events <i>reduce</i> coupling. Sure, the receiver might call an API and that creates coupling with the API. But receivers are also free to call or not call any other API they want. What if they don’t care about the body of the object?<p>So I’m on team thin. Every time I’ve been tempted by the other team, I’ve regretted it. It’s also in my experience a lot more difficult to version events than it is to version APIs, so reducing their surface area also solves other problems.
I'll bite. Neither. Both. Depending on the system.<p>When the "state" is large, or changes often, obviously you can't send the full state every time - that would be too much for end-nodes to process on every event, in both CPU (deserialization) and bandwidth. Deltas are the answer.<p>Deltas, though, are hard, since there is always an inherent race between getting the first full snapshot and subscribing to updates. So for small, rarely-updated things, fat events carrying all state might be okay.<p>There is a linear tradeoff on the "data delivery" component:<p>- worse latency saves CPU and bandwidth (think: batching updates)<p>- better latency burns more CPU and bandwidth<p>Finally, the receiving system always requires some domain-specific API. In some cases passing a delta to the application is fine; in other cases passing the full object is better. For example, sometimes you can save a redraw by just updating one value, while in other cases the receiver will need to redraw everything anyway, so handing over the full object is totally fine.<p>I would like to see a pub/sub messaging system that solves these issues: one where you can "publish" an object, select a latency goal, and "subscribe" to it on the receiver, letting the system choose the correct delivery method. For example, the system might choose pull vs push, or an appropriate delta algorithm. As a programmer, I really just want access to the "synchronized" object on multiple systems.
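The snapshot-vs-subscription race mentioned above has a standard resolution: subscribe first, buffer incoming deltas, then fetch the snapshot and replay only the deltas newer than it. A minimal sketch, assuming every delta and snapshot carries a monotonically increasing version (all names here are invented for illustration):

```python
# Sketch: resolving the race between an initial snapshot and a delta stream.
# Deltas that arrived while the snapshot was in flight are buffered, then
# replayed in version order, skipping any already folded into the snapshot.

def sync_object(snapshot, buffered_deltas):
    """snapshot: (version, state_dict); buffered_deltas: [(version, changes)]"""
    version, state = snapshot
    for delta_version, changes in sorted(buffered_deltas):
        if delta_version <= version:
            continue  # this delta is already included in the snapshot
        state.update(changes)
        version = delta_version
    return version, state

# Usage: the snapshot was taken at v5, while deltas v4..v7 were buffered.
snapshot = (5, {"price": 100, "qty": 3})
deltas = [(4, {"price": 99}), (6, {"qty": 4}), (7, {"price": 101})]
print(sync_object(snapshot, deltas))  # (7, {'price': 101, 'qty': 4})
```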
I'd pick neither and just let the system in possession of the data send with the event only the part of the data it owns (i.e. something in between fat and thin). That saves the API call back, the body doesn't have to be fully deserialized (so no format coupling), and the rest can be fetched from other services on demand. Coherent state isn't guaranteed, but that's usually not critical with well-designed bounded contexts.
I'm a fan of fat events, and let the receiver decide if they want to trust the event or not, or go ahead and make a call to the service to get the data.<p>for example:<p>if one receiver wants to know if you have read a book, then there is no reason to make a call to the service.<p>but if a service wants to know the last book you read, and doesn't trust the events to be in order, then it would make sense to just call the service.
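The two receiver styles described above can be sketched side by side. This is a hedged illustration with invented names: one receiver needs only "has this book ever been read?", a fact that out-of-order delivery can't invalidate, so it trusts the fat event; the other needs the order-sensitive "last book read", so it calls back to the service:

```python
# Receiver A: "has the user read this book?" - once true, always true,
# regardless of event ordering, so the fat event body is trusted as-is.
def handle_has_read(event, read_books):
    read_books.add(event["book_id"])
    return read_books

# Receiver B: "what was the LAST book read?" - order matters, so instead of
# trusting the event body, ask the owning service for the current truth.
def handle_last_read(event, fetch_user_state):
    state = fetch_user_state(event["user_id"])
    return state["last_book_read"]

# Usage with a stubbed service call:
books = handle_has_read({"book_id": "dune"}, set())
last = handle_last_read({"user_id": 7},
                        lambda uid: {"last_book_read": "hyperion"})
print(books, last)  # {'dune'} hyperion
```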
As always, it depends. Yay engineering and trade-offs.<p>Hey, just remember: both is always an option if your consumers disagree. A thin stream for the consumers who don't trust the fat data, a fat stream for the event log and other consumers that prefer it.
Fat events once caused our message broker to OOM under high load, and the broker's default behavior was to block all publishers until the queue was emptied (to release memory) - downtime as a result. Another issue was that under high load, if the event queue grew too large, handlers would end up processing very stale data, resulting in all kinds of broken behavior (from the user's point of view).<p>Thin events resulted in DDoS of our own service a few times, because handlers would call our APIs too frequently to retrieve object state (partially mitigated by having separate machines serve incoming traffic and process events).<p>(A trick we used which worked for both fat and thin events was to add versioning to objects to avoid unnecessary processing.)<p>We also used delta events, but they had the same issues as thin events, because handlers usually have to retrieve the full object state anyway to do meaningful processing (not always - it depends on the business logic and the architecture).<p>There are so many ways to shoot yourself in the foot with all three approaches, and I still hesitate a lot when choosing what kind of events to use for the next project.
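The versioning trick mentioned above can be sketched in a few lines, assuming each event carries the object's version number (the field names are illustrative): a handler skips work whenever it has already seen an equal or newer version of the object, which drops both duplicates and stale events.

```python
# Sketch of object versioning: skip processing when the event's version is
# not newer than the highest version already handled for that object.

last_seen = {}  # object_id -> highest version processed so far

def should_process(event):
    obj_id, version = event["object_id"], event["version"]
    if version <= last_seen.get(obj_id, -1):
        return False  # stale or duplicate event: skip expensive processing
    last_seen[obj_id] = version
    return True

# Usage: a stale v1 event arriving after v2 is ignored.
print(should_process({"object_id": "acct-1", "version": 2}))  # True
print(should_process({"object_id": "acct-1", "version": 1}))  # False
```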
For me, this depends on the semantics of the system. Is the sender commanding the receiver to carry out the rest of the process, or is the sender broadcasting information to a dynamic set of interested parties? In other words, are you building a pipeline or a pub-sub?<p>If the former, there is inherently tight coupling between sender and receiver, and the sender should send all necessary context to simplify the system design.<p>If the latter, then we're talking about a decoupled system, where the sender cannot make assumptions about what info the receiver does or doesn't need to take further action. A thin event is called for to keep the contract simple.<p>One of my frustrations with the event-driven trend is that people don't always seem to think through what they're designing. It's easy to end up with a much more complex system than a transactional architecture.<p>Generally, I favor modeling as much of my system as possible as pipelines, and use pub-subs sparingly, as places where you fan out to parallel pipelines.<p>Raw events are like GOTOs. They are extremely powerful, but also very difficult to reason about.
Thin events have the benefit of easy retry/resend logic. Depending on your message queue solution you might need to sometimes resend events.
If the event is 'user account changed', receiving it a few times too many causes only performance issues, but not correctness problems.
Sometimes this is the better tradeoff.<p>It is easier to send an event 'user account changed' than to analyze in detail what exactly changed, which also allows you to decouple the event logic from everything else.<p>Of course not every system benefits from such solutions, but sometimes simplicity wins.
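The reason a thin 'user account changed' event tolerates redelivery is that the handler re-fetches current state, so processing it N times converges to the same result - it's idempotent. A minimal sketch, where `fetch_account` stands in for an assumed service call:

```python
# Sketch: an idempotent handler for a thin "user account changed" event.
# The event says only WHO changed, not WHAT; the handler pulls the
# authoritative state, so duplicates cost performance but not correctness.

def on_account_changed(event, fetch_account, cache):
    account = fetch_account(event["user_id"])
    cache[event["user_id"]] = account  # overwriting is safe on redelivery
    return cache

# Usage: the same event delivered three times leaves the same cache state.
cache = {}
fake_service = lambda uid: {"id": uid, "email": "a@example.com"}
for _ in range(3):
    on_account_changed({"user_id": 42}, fake_service, cache)
print(cache)  # {42: {'id': 42, 'email': 'a@example.com'}}
```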
I have a different take: why not just send RPC calls with all of their parameters?<p>Then the payload is guaranteed to be small but still able to express complex operations.
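One way to picture this idea: the payload names an operation plus its parameters, and the receiver dispatches it against a whitelist of handlers. A hedged sketch with invented names (a real system would also need to version the operation schema, and this is closer to a command than an event, as the next comment points out):

```python
# Sketch: an "RPC as payload" message names an operation and its arguments;
# the receiver looks it up in an explicit whitelist and never evaluates
# arbitrary code from the wire.
import json

HANDLERS = {
    "credit_account": lambda user_id, amount: f"credited {user_id} by {amount}",
}

def dispatch(payload):
    msg = json.loads(payload)
    handler = HANDLERS[msg["op"]]  # KeyError on unknown ops, by design
    return handler(**msg["args"])

# Usage:
event = json.dumps({"op": "credit_account",
                    "args": {"user_id": 7, "amount": 50}})
print(dispatch(event))  # credited 7 by 50
```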
I'd settle for developers just making sure events were events that have happened and not action commands they want to happen.<p>Nothing is worse than an event driven system polluted with action commands.