The morning paper had a nice set of blog posts about this:<p>Why they're equivalent (duals):
<a href="https://blog.acolyer.org/2014/12/08/on-the-duality-of-operating-system-structures/" rel="nofollow">https://blog.acolyer.org/2014/12/08/on-the-duality-of-operat...</a><p>Why threads are a bad idea:
<a href="https://blog.acolyer.org/2014/12/09/why-threads-are-a-bad-idea/" rel="nofollow">https://blog.acolyer.org/2014/12/09/why-threads-are-a-bad-id...</a><p>Why events are a bad idea:
<a href="https://blog.acolyer.org/2014/12/10/why-events-are-a-bad-idea/" rel="nofollow">https://blog.acolyer.org/2014/12/10/why-events-are-a-bad-ide...</a><p>Unifying events and threads (in Haskell):
<a href="https://blog.acolyer.org/2014/12/11/a-language-based-approach-to-unifying-events-and-threads/" rel="nofollow">https://blog.acolyer.org/2014/12/11/a-language-based-approac...</a><p>Unifying events and treads (in Scala):
<a href="https://blog.acolyer.org/2014/12/12/scala-actors-unifying-thread-based-and-event-based-programming/" rel="nofollow">https://blog.acolyer.org/2014/12/12/scala-actors-unifying-th...</a>
Yes, this might have worked in 1995. Now, however, even your lowly phone has 4 (or more) processor cores and a full-fledged GPU...<p>Learn concurrent programming techniques, or perish. Threads and sync primitives are low-level but important, and you have to understand them to figure out what compromises and biases were built into the higher-level models.<p>And, frankly, it isn't that bad (debugging existing code is bad, but playing with monitors and semaphores and critical sections is easy, as long as the code is small and isolated).
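For illustration, here is a minimal Python sketch of that "small and isolated" case: one lock guarding one critical section. The counter and worker names are made up for the example.

    import threading

    counter = 0
    counter_lock = threading.Lock()          # guards the one critical section

    def worker():
        global counter
        for _ in range(100_000):
            with counter_lock:               # small, isolated critical section
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                           # 400000, deterministically

Remove the lock and CPython can lose updates, so the count comes up short. That is exactly the kind of bug that is easy to reason about at this size and miserable to hunt down in a large codebase.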
Note that it's from 1995. Back then, many people still thought that machines with multiple processors were exotic things of little interest even in the enterprise. That list most notably included one Linus Torvalds. It also included the OP's author, John Ousterhout, who should also have known better - even more so, since Stanford was one of the places where such things were not so exotic. He even says, right up front, that threads still have their uses when you need true CPU concurrency. Now that's a common case. Generalizing from this presentation to the current day is probably a worse idea than threads ever were.
John Ousterhout is the creator of Tcl, which embraces an event-driven programming model. I think that gives some perspective on his opinion.<p>That said, Tcl was one of the first scripting languages to get a very nice thread model - see AOLserver [1].<p>[1] <a href="https://en.wikipedia.org/wiki/AOLserver" rel="nofollow">https://en.wikipedia.org/wiki/AOLserver</a><p>We used Tcl threads in one of our programs to control various pieces of hardware while the main UI responded to events sent from the threads. Everything worked very well, especially for a program written in a scripting language like Tcl.
Twenty years later this seems to still be good advice. Martin Thompson talks about this in his Mechanical Sympathy talk.<p>He says the first thing he does, as a performance consultant, is turn off threading. Claims that's often all he needs to achieve the desired improvements...<p>It's a good talk, I highly recommend it.<p><a href="https://www.infoq.com/presentations/mechanical-sympathy" rel="nofollow">https://www.infoq.com/presentations/mechanical-sympathy</a>
The bad idea is taking a threaded language and retrofitting events, which Python is doing. This results in an even worse mess. Python now has two kinds of blocking, one for threads and one for events. If an async event blocks on a thread lock, the program stalls.<p>Or taking an event-driven language and retrofitting concurrency, which JavaScript is doing. That results in something that looks like intercommunicating processes with message passing. That's fine, but it has scaling problems, because there's so much per-process state.<p>Languages which started with support for both threads and events, such as Go, do this better. If a goroutine, which is mostly an event construct, blocks on a mutex, the underlying thread gets redispatched. There's only one kind of blocking.
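A minimal sketch of the Python failure mode described above (the names and timings are mine, not from the comment): a coroutine grabs a threading.Lock directly, and the whole event loop stops ticking while it waits.

    import asyncio
    import threading
    import time

    lock = threading.Lock()

    def background_worker():
        # A plain thread holds the lock for two seconds.
        with lock:
            time.sleep(2)

    async def heartbeat():
        for _ in range(6):
            print("tick", time.strftime("%X"))
            await asyncio.sleep(0.5)

    async def misbehaving_task():
        await asyncio.sleep(0.1)   # let the worker grab the lock first
        lock.acquire()             # blocking call inside a coroutine:
        lock.release()             # the entire event loop stalls here

    async def main():
        threading.Thread(target=background_worker).start()
        await asyncio.gather(heartbeat(), misbehaving_task())

    asyncio.run(main())

The heartbeat goes quiet for roughly two seconds: there is no goroutine-style redispatch, so one blocking acquire freezes every other task on the loop. (asyncio.to_thread or run_in_executor is the usual workaround.)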
From a Linux perspective, threads and processes are essentially the same construct. The major difference is the set of flags that are passed when the process is created. Oversimplifying, if shared memory is requested, then it's a thread. Otherwise, it's a process. Meaning forking servers are, in effect, multi-threaded.<p>On other operating systems, specifically those starting with the letter W, there's a major distinction. There are other constructs as well, such as "fibers".<p>Now, today's world is different from what it was in 1995. We used to have a single core, so threads and multiple processes were only a logical construct. Now we have multiple cores, so we should, at a bare minimum, spawn multiple processes/threads. What's running inside them can then be debated as if it were 1995.
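The clone() flags themselves aren't visible from a scripting language, but the observable difference is easy to show. A small, Linux-only Python sketch (variable names are mine): a thread's write to a global is seen afterwards, a forked child's write is not.

    import os
    import threading

    counter = 0

    def bump():
        global counter
        counter += 1

    # A thread shares the address space, so its write is visible here.
    t = threading.Thread(target=bump)
    t.start()
    t.join()
    print("after thread:", counter)    # 1

    # A forked child gets its own (copy-on-write) address space.
    pid = os.fork()
    if pid == 0:        # child
        bump()
        os._exit(0)
    os.waitpid(pid, 0)
    print("after fork:", counter)      # still 1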
Obligatory CSP[0] reference.<p>There are only a handful of cases that I can think of where threading (multiprocessing, concurrency, and other names for it) is useful.<p>[0] <a href="http://www.usingcsp.com/" rel="nofollow">http://www.usingcsp.com/</a>
If I had a dollar for every time I heard "thread programming is hard."<p>I've programmed using threads for 23 years. I've never had a non-trivial debug issue caused by trouble using semaphores, mutexes, and shared data. It's no harder than writing a hash table or balancing a tree.
I'd rather have real threads available in my language, and use shared state sparingly. Your N single-threaded processes have to talk to each other anyway, or at least to a master, and might even share memory through memory-mapping. Threads are just a tool, and they give you options. You can use them in a share-nothing way if you want.<p>As someone who grew up on the Java VM and started my start-up/web career on it, I've always felt like Java programmers have a different relationship with threads than C programmers. Java gives you cross-platform, usable, debuggable native threads; it basically makes them free if you want them. In C/C++, on the other hand, threads are a library, and using them is a grungy affair. If you grew up on Rails, meanwhile, threads don't exist ("when you say worker thread, do you mean worker process? I'm confused").<p>Node.js was created by C programmers and launched with a lot of anti-thread propaganda, much like the link. They equated threads with shared state, and also said threads were too memory inefficient to be practical for a high-performance server (they meant that holding a native thread for every open connection would require too much stack space, which is true, but that's not what they said).
> Where threads needed, isolate usage in threaded application kernel: keep most of code single-threaded.<p>This is where performance peaks: each CPU is kept busy with operations, and with operations that don't have to wait on results from other threads.
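One way to read that advice in Python terms (a sketch with made-up names, not anything from the slides): confine the pool to one spot and keep all the orchestration single-threaded.

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def crunch(n: int) -> int:
        # The only code that ever runs off the main thread lives here.
        return sum(i * i for i in range(n))

    def main() -> None:
        jobs = [100_000, 200_000, 300_000]
        # The "threaded kernel": a small pool, touched in exactly one place.
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = {pool.submit(crunch, n): n for n in jobs}
            # Everything below runs single-threaded, on the main thread.
            for fut in as_completed(futures):
                print(futures[fut], "->", fut.result())

    if __name__ == "__main__":
        main()

For CPU-bound pure-Python work the GIL means you'd swap in ProcessPoolExecutor to actually fill every core, but the shape stays the same: a small concurrent kernel, single-threaded everything else.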
I'm sure many people don't think this applies to them because they don't use threads. However, in the modern day, replace "thread" with "process" and "memory" with "database", and many web applications have very similar problems. They just never actually manifest because of the small number of requests per second.
I agree. The actor model where actors can be scheduled on any thread is the best (Erlang, Goroutines). Second best is the node.js model of single threaded evented programming.
IMO, the main problem with using threads is that they are such an 'all or nothing' approach to sharing data.<p>If you want to make use of multiprocessing, the traditional choice is either to use two separate processes (sharing nothing) or to use threads (and share everything). But for most tasks, these opposite ends of the spectrum are not what you need. There's plenty of data and state in most programs that doesn't need to be shared, and a huge source of threading bugs is mistakenly altering data that another thread was using.<p>The problem is that sharing partial state between processes is painful, and many languages and OSs make it difficult to do. You have to play around with mmap() or other shared-memory tools, and then pay great attention to not mixing pointers or other incompatible data between the processes.
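Some of that pain has since been packaged up. For example, Python's multiprocessing.shared_memory wraps the mmap dance so two processes share one explicit buffer and nothing else. A minimal sketch (the names are mine):

    from multiprocessing import Process, shared_memory

    def worker(name: str) -> None:
        # Attach to the existing segment by name and mutate one byte.
        shm = shared_memory.SharedMemory(name=name)
        shm.buf[0] = 42            # only this buffer is shared, nothing else
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=16)
        p = Process(target=worker, args=(shm.name,))
        p.start()
        p.join()
        print(shm.buf[0])          # 42: the child's write is visible here
        shm.close()
        shm.unlink()

The caveat about pointers still applies, though: only the raw bytes are shared, so anything with references in it has to be serialized into the buffer.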
There's a link to a related discussion on HN entitled "Why Events Are a Bad Idea (for high-concurrency servers) (2003)"<p><a href="https://news.ycombinator.com/item?id=14548487" rel="nofollow">https://news.ycombinator.com/item?id=14548487</a>
Threads have been useful for me in my latest experiment. It's a C++ application that runs copies of the mame_libretro DLL, each in its own thread, while a BASIC interpreter runs in another thread and the main game engine (Irrlicht) runs in the main thread. Irrlicht isn't multithreaded, so I just put commands from the BASIC thread into an outgoing deque which I lock and unlock as I access it. Then there are mutexes for the video data.<p>I think that threads are definitely a bit tricky, though, since it's easy to mess up locking/unlocking, or to not lock things that need it, and if you do then you have debugging headaches. So when they're not needed, I think they should be avoided.
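The same pattern, sketched in Python rather than C++ (the names only mirror the description above; this is not the actual code): a script thread appends commands under a lock, and the single-threaded engine loop drains them each frame.

    import threading
    import time
    from collections import deque

    commands = deque()
    commands_lock = threading.Lock()

    def script_thread():
        # Stands in for the BASIC interpreter: it only enqueues commands.
        for i in range(5):
            with commands_lock:
                commands.append(f"draw sprite {i}")
            time.sleep(0.1)

    threading.Thread(target=script_thread, daemon=True).start()

    # Stands in for the engine's main loop: drain the queue once per "frame".
    for _ in range(10):
        with commands_lock:
            pending = list(commands)
            commands.clear()
        for cmd in pending:
            print("engine executes:", cmd)
        time.sleep(0.1)

queue.Queue would do the locking for you, but the explicit lock mirrors the deque-plus-mutex setup described above.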
Whatever... But multithreaded systems are fun to design, develop and, most importantly, troubleshoot... the harder the 'real time'-ness, the more the fun :)
Threads are a bad idea for the same reason manual memory management is a bad idea. Languages should provide primitives that only allow safe construction of expressions that are run concurrently by the runtime.
It's the same with every tool: The more powerful it is, the worse the consequences are when it's abused. That applies to programming in particular.