I personally feel that using light-weight processes for intra-application multitasking is far superior to having concurrently running threads that share the same block of memory.<p>Light-weight processes are far more secure than threads in the sense that they don't share memory, and thus avoid a whole host of problems associated with it.<p>IMO, they are also easier to work with (while programming); I find the message-passing IPC model simpler and more manageable.<p>Additionally, when it comes to parallel computing, light-weight processes are a clear win there too. There's no need for complex algorithms that manage shared memory between CPUs when each CPU can be assigned one or more L.W.-processes that all interact by message passing.<p>I think on a well designed OS, L.W.-procs should be as efficient as threads.<p>Some applications like Google Chrome already use L.W.-procs (for each tab the user opens, a separate process is launched). It surprises me that more people don't use this model already, given its many advantages.<p>Which model of multitasking do you think is better? (especially in terms of programmer efficiency)
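<p>To make the message-passing model concrete, here is a minimal sketch using Python's multiprocessing module (the worker name and the squaring task are made up purely for illustration): each worker is its own process, nothing is shared, and all interaction happens through queues.

    # Minimal message-passing sketch: the worker is a separate process,
    # no shared memory; all interaction goes through queues.
    from multiprocessing import Process, Queue

    def square_worker(inbox, outbox):
        # receive work as messages, reply with messages
        for n in iter(inbox.get, None):      # None is the shutdown signal
            outbox.put(n * n)

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        worker = Process(target=square_worker, args=(inbox, outbox))
        worker.start()
        for n in range(5):
            inbox.put(n)
        print([outbox.get() for _ in range(5)])  # [0, 1, 4, 9, 16]
        inbox.put(None)                          # ask the worker to exit
        worker.join()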
I'm a fan of lightweight processes. I really recommend reading the following:<p><a href="http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf" rel="nofollow">http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1....</a><p>Threads are a seemingly straightforward adaptation of the dominant sequential model of computation to concurrent systems. Languages require little or no syntactic changes to support threads, and operating systems and architectures have evolved to efficiently support them. Many technologists are pushing for increased use of multithreading in software in order to take advantage of the predicted increases in parallelism in computer architectures. In this paper, I argue that this is not a good idea. Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly nondeterministic, and the job of the programmer becomes one of pruning that nondeterminism. Although many research techniques improve the model by offering more effective pruning, I argue that this is approaching the problem backwards. Rather than pruning nondeterminism, we should build from essentially deterministic, composable components. Nondeterminism should be explicitly and judiciously introduced where needed, rather than removed where not needed. The consequences of this principle are profound. I argue for the development of concurrent coordination languages based on sound, composable formalisms. I believe that such languages will yield much more reliable, and more concurrent programs.
The key features are <i>isolation</i> and <i>fault management</i>. Isolation means that each process is separated from the others and can only communicate via a safe means. What happens when a process dies? With isolated processes you can answer that question cleanly, which is why such systems have far more fault-tolerance.<p>An additional benefit is <i>security</i>. Your model can rely on some processes having the capability to access confidential data. These act as proxies for accessing that data and protect it. The OpenBSD operating system has used this "privilege separation" trick for years.<p>Everything with shared memory will die in the long run. The hardware can't keep on fooling us with a big memory space shared among all processes anyway.
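<p>To illustrate the fault-management point, a small sketch in Python's multiprocessing (the worker name and the simulated crash are invented for the example): the crash only kills the worker's own process, and the parent can notice the exit code and restart it, supervisor-style.

    # Fault isolation sketch: a crash kills only the worker process;
    # the parent sees the exit code and can restart or degrade gracefully.
    import os
    from multiprocessing import Process

    def flaky_worker():
        os._exit(1)   # simulate a hard crash inside the worker

    if __name__ == "__main__":
        worker = Process(target=flaky_worker)
        worker.start()
        worker.join()
        if worker.exitcode != 0:
            print("worker died with code", worker.exitcode, "- restarting")
            Process(target=flaky_worker).start()   # supervisor-style restart
        print("the parent is still alive")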
If you need to process a bitmap in parallel, it's a pain to use processes, since there's a significant amount of memory that has to be passed around. And it's inefficient, as you would always have to copy memory to the other process.<p>That's what threads are for - parallel paths that need to share memory. Processes are for parallel paths that rarely need to access the same data or to synchronize with each other.<p>Don't forget also that when your CPU shares time, it divides the time up equally among processes. So if your application creates 20 processes, it takes a disproportionately large share of CPU time (I believe, someone correct me if I'm wrong).
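<p>A rough sketch of the thread version (plain Python threads over a bytearray; note that CPython's GIL means pure-Python threads won't actually speed up CPU-bound pixel math, so this only illustrates the memory model, not performance): every thread works on its slice of the same buffer in place, and nothing gets copied or passed around.

    # Threads share the process's memory, so each worker inverts
    # its slice of the "bitmap" in place - no copying between workers.
    from threading import Thread

    bitmap = bytearray(range(256)) * 4   # stand-in for pixel data

    def invert_slice(buf, start, stop):
        for i in range(start, stop):
            buf[i] = 255 - buf[i]

    chunk = len(bitmap) // 4
    threads = [Thread(target=invert_slice, args=(bitmap, i * chunk, (i + 1) * chunk))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(bitmap[:4])   # bytearray(b'\xff\xfe\xfd\xfc')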
I don't think this needs to be an "either/or" type of scenario. It depends on the specific requirements.<p>Sometimes you need a multi-threaded model...I think a GUI library is a good example of that. Other times, a multi-process model would be easier to set up...a "just run in the background" sort of thing.<p>I also don't quite understand why everybody is suddenly afraid of writing multi-threaded applications. Just make sure you understand how the threading model works in your specific technology and what abstractions are provided. If your application is complex enough to require a "complex algorithm to manage shared memory", then I think you need to really take your time to understand what you're trying to do.<p>Having said that, I acknowledge that chasing down bugs caused by threads is not fun at all. Especially race conditions, since they are seemingly unpredictable. But I think it is getting easier with newer platforms and tools.
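<p>A tiny example of the kind of race I mean (Python; whether it actually loses updates on a given run depends on the interpreter and the scheduler, which is exactly what makes these bugs feel so unpredictable), with the usual lock-based fix shown alongside:

    # Classic lost update: "counter += 1" is a read-modify-write,
    # so two threads can interleave and overwrite each other's increment.
    from threading import Thread, Lock

    counter = 0
    lock = Lock()

    def unsafe(n):
        global counter
        for _ in range(n):
            counter += 1          # not atomic: increments can be lost

    def safe(n):
        global counter
        for _ in range(n):
            with lock:            # serialize the read-modify-write
                counter += 1

    threads = [Thread(target=unsafe, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)   # may print less than 400000 with the unsafe version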
The decision depends on the interactions between them.<p>Threads make intercommunication cheap, but at the price of exposing you to problems like deadlocks and the overhead of synchronizing access to shared data.<p>My rule of thumb is to use processes and move to threads only when there is significant added value.
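<p>Deadlock, for example, needs nothing more exotic than two locks taken in opposite order; a toy sketch in Python (the lock names and sleeps are only there to make the bad interleaving likely):

    # Toy deadlock: each thread holds one lock and waits for the other.
    import time
    from threading import Thread, Lock

    a, b = Lock(), Lock()

    def first():
        with a:
            time.sleep(0.1)   # give the other thread time to grab b
            with b:
                pass

    def second():
        with b:
            time.sleep(0.1)
            with a:
                pass

    t1 = Thread(target=first, daemon=True)
    t2 = Thread(target=second, daemon=True)
    t1.start(); t2.start()
    t1.join(timeout=1); t2.join(timeout=1)
    print("deadlocked:", t1.is_alive() and t2.is_alive())   # True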
It's a false dichotomy, I think - they're not the only models around. Cf. Erlang's process model - no shared memory, and far fewer resources per process than either OS processes or threads.
I don't think lightweight processes are a good answer for server-side programming. For desktop programming, a multi-process design can keep one failure from taking the whole program down and can provide superior performance. But on the server side, considering memory and compute consumption, threads and lock-free data structures are the better choice. That said, when you need a robust server-side program, you often end up building something like a lightweight-process/thread hybrid.
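<p>One way to picture that hybrid, as a rough sketch with Python's concurrent.futures (the split between pools and the dummy workload are only illustrative): cheap threads handle the many concurrent requests, and a small pool of worker processes does the heavy, isolated CPU work.

    # Hybrid sketch: threads handle many cheap concurrent requests,
    # a small process pool does the heavy, isolated CPU work.
    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def heavy_cpu_work(n):            # runs in a separate process
        return sum(i * i for i in range(n))

    def handle_request(n, procs):     # runs on a cheap thread per request
        return procs.submit(heavy_cpu_work, n).result()

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=2) as procs, \
             ThreadPoolExecutor(max_workers=8) as threads:
            results = list(threads.map(lambda n: handle_request(n, procs),
                                       range(8)))
            print(results)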
There is no good answer to this that encompasses all scenarios. Sometimes it makes more sense to use threads, sometimes LW procs.<p>Things like Chrome make an interesting case for using LWP in an environment that usually uses threads, but that still doesn't necessarily mean it is the better solution.