Terrible analogy, especially when there exists a better one within the food service industry. How about rewriting that article from the perspective of a coffee shop?<p>We've all been to the neighborhood coffee place where the girl will take your order, turn around and make your entire drink, hand it to you and ask for payment. We've all stood in that line.<p>We've also all been to Starbucks, where the girl takes your order, writes it on a cup, takes your money, then moves on to the next customer. And by the time you walk to the other end of the counter the guy in front of you already has coffee in his hand.<p>It still doesn't fit web servers exactly, but at least it fits the real world.
"Event-driven" architecture doesn't imply a fundamentally different way of handling requests than thread-based. The only difference is that in an event-driven architecture, the scheduling is handled in the userspace code. In a thread-based architecture, it's done in the kernel. The advantage of doing it in the userspace code is that it can be done in a simpler, more specialized way.<p>The hardware is doing fundamentally the same thing either way, but in the threaded model, it's also doing a lot of other stuff that you probably don't care about.<p>So, to explain it to my grandma: the pizzas get made either way; the difference is just who keeps track of the orders.
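The "scheduling in userspace" point can be sketched in a few lines. This is a minimal, illustrative sketch using Python's `selectors` module; the socketpair stands in for a client connection and is not a real server:

```python
import selectors
import socket

# One thread, one selector: callbacks are dispatched by our own loop
# instead of by the kernel's thread scheduler.
sel = selectors.DefaultSelector()
client, conn = socket.socketpair()
conn.setblocking(False)

def echo(sock):
    data = sock.recv(1024)   # the selector said it's ready, so no block
    if data:
        sock.sendall(data.upper())

sel.register(conn, selectors.EVENT_READ, echo)

client.sendall(b"one pizza, please")
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)    # the userspace "scheduler" picks the callback

reply = client.recv(1024)
print(reply)                 # → b'ONE PIZZA, PLEASE'
```

The same socket handling a threaded server would do per-thread happens here in one loop; the only thing that moved is who decides what runs next.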
It's just a simpler way to think about it. There isn't really a big difference.
While a great attempt at an analogy, I don't think it really helps. I've never misunderstood THAT part of event-based asynchronicity (is that even a word?). The part that confuses me is HOW it works, and ultimately WHY and/or HOW it is supposedly better than traditional threading (other than cleaner-looking code). I've never seen a good explanation in non-OS-programmer terms.<p>To me, it seems that no matter how you receive the "messages" to do work, that work still has to be done. It surely doesn't magically use fewer resources because you told the OS it could just call you back when it's done, as opposed to you having to hang around? Something has to be hanging around on one side or the other, and the "call back" takes resources as well, surely? It seems that you are just trading tit for tat. Maybe the point is to avoid tying up some specific resource in the meantime?
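The "specific resource" being avoided is mostly per-thread overhead: a thread blocked in a read keeps an entire stack alive (commonly ~8 MB of address space by default on Linux), while a "call me back when ready" registration is one entry in a table the kernel checks. A rough illustration, assuming Python's `selectors` module and socketpairs standing in for idle connections:

```python
import selectors
import socket

# 100 idle connections cost one thread and one selector in total,
# instead of 100 blocked threads each holding a stack.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(100)]
for _, srv in pairs:
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ)

# Nothing has arrived yet, so nothing is "hanging around" blocked:
ready = sel.select(timeout=0)
print(len(sel.get_map()), len(ready))   # → 100 0
```

The work still has to be done when data arrives; what you save is the cost of parking a thread per connection while nothing is happening.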
What a terrible analogy. I mean, there is a pizza-shop or other food service analogy in there, I've made it plenty of times. The problem is that when you make the phone call = request in the analogy, you make the phone connection analogous to the socket. At least have the operator put the caller on hold!<p>Of course then Grandma, not being an idiot, will say "why not just have the driver bring the pizza and not have the phone all tied up to begin with?" and she is <i>absolutely correct</i>. It is better to just set up a scenario where you have waiters, and customers show up at the shop, and in the blocking model your waiter doubles as the cook, so you need one waiter per meal... and so on. This analogy passes a slightly closer examination.
Hmm, perhaps a closer analogy:<p>Traditional Web Server:<p>The pizza shop receives a call for the initial order and starts the pie. Then the customer calls back periodically to check if the pie is done because the pizza shop cannot call back or deliver.
Oh hey, I came up with the exact same analogy about 6 months ago in explaining Tornado and epoll on quora with a bit more technical detail, just replace pizza with pies:<p><a href="http://www.quora.com/Can-someone-explain-poll-epoll-in-Laymans-terms-How-is-Tornado-taking-advantage-of-this-technology/answer/Ben-Newhouse" rel="nofollow">http://www.quora.com/Can-someone-explain-poll-epoll-in-Layma...</a>
What is being described here is blocking and non-blocking IO. The analogy is pretty good.<p>However, the pizza company can probably still only cook 256 pizzas at the same time (due to running out of pan-handles).
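The "pan-handle" limit is real: each open connection costs a file descriptor, and 256 per process was a common historical default. A sketch of inspecting and raising that limit, assuming the Unix-only `resource` module (the 4096 target is arbitrary, for illustration):

```python
import resource

# RLIMIT_NOFILE caps how many file descriptors (pan-handles) this
# process may hold open at once; each connection consumes one.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)

# A busy server typically raises the soft limit toward the hard cap
# before accepting traffic. Never lower an already-higher soft limit.
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, target), hard))
```

Past the soft limit, `accept()` starts failing regardless of whether the server is threaded or event-driven; the event-driven design just makes it feasible to actually reach thousands of concurrent connections.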
Event-driven development should be avoided whenever possible; coroutines exist for a reason.<p>Writing event-driven applications is very prone to errors and invalid (impossible) states.
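The coroutine point is that you keep the event loop underneath but write the logic top-to-bottom, so the state machine stays implicit instead of being scattered across callbacks. A toy sketch with Python's `asyncio` (the `bake` coroutine is made up for illustration):

```python
import asyncio

# Each `await` is a point where the loop may run something else, but
# the code still reads linearly, with no hand-wired callback chain.
async def bake(order: str) -> str:
    await asyncio.sleep(0)          # yield to the event loop (the "oven")
    return f"{order}: done"

async def shop():
    # Two orders in flight concurrently, no explicit callbacks.
    return await asyncio.gather(bake("margherita"), bake("pepperoni"))

results = asyncio.run(shop())
print(results)                      # → ['margherita: done', 'pepperoni: done']
```

Error handling also improves: an exception inside `bake` propagates out of `await` like a normal exception, rather than vanishing into a forgotten error callback.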