You know, I'm not entirely sure how I feel about this. On the one hand: yeah, I get that having truly multithreaded stuff is pretty handy, especially for certain compute-bound tasks.<p>On the other hand, I quite like the single-threadedness of JavaScript. Promise-based systems (or async/await) give us basically cooperative multitasking anyway to break up long-running (unresponsive) work without worrying about mutexes and semaphores. I understand exactly when and where my JavaScript code will be interrupted, and I don't need to wrap blocks in extraneous atomic-operation markers.<p>I've written plenty of multithreaded code, starting with old pthreads stuff and eventually moving on to Java (though my own experience with threaded code is limited mainly to C and Java), and it can be a <i>real pain</i>. I guess limiting shared memory to explicitly named blocks means you don't have as much to worry about vis-à-vis non-reentrant code messing up your memory space.<p>That said, it is a pretty useful construct, and I see where this can benefit browser-based game dev in particular (graphics can be sped up a lot with multicore rendering, I bet).
I'm excited about the `SharedArrayBuffer` addition, but quite meh on `Atomics.wait()` and `Atomics.wake()`.<p>I think CSP's channel-based message control is a far better fit here, especially since CSP can quite naturally be modeled inside generators and thus block only locally.<p>That means the silliness of "the main thread of a web page is not allowed to call Atomics.wait" becomes moot, because the main thread can do `yield CSP.take(..)` and not block the main UI thread, but still simply locally wait for an atomic operation to hand it data at completion.<p>I already have a project that implements a bridge for CSP semantics from the main UI thread to other threads, including adapters for web workers, remote web socket servers, node processes, etc: <a href="https://github.com/getify/remote-csp-channel" rel="nofollow">https://github.com/getify/remote-csp-channel</a><p>What's exciting, for the web workers part in particular, is the ability to wire in SharedArrayBuffer so the data interchange across those boundaries is extremely cheap, while still maintaining the CSP take/put semantics for atomic-operation control.
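A stripped-down sketch of what I mean by local-only blocking (a toy channel, not remote-csp-channel's actual API), using async/await in place of a generator runner:

```javascript
// Toy CSP-style channel (hypothetical API, for illustration only):
// take() parks only the awaiting coroutine; the thread keeps running.
function channel() {
  const takers = [];  // consumers waiting for a value
  const values = [];  // values waiting for a consumer
  return {
    put(v) {
      if (takers.length > 0) takers.shift()(v);
      else values.push(v);
    },
    take() {
      if (values.length > 0) return Promise.resolve(values.shift());
      return new Promise(resolve => takers.push(resolve));
    }
  };
}

// "Main thread" side: awaiting take() yields to the event loop instead
// of blocking it, so UI work could continue while we wait on a worker.
async function consume(ch) {
  const result = await ch.take();
  console.log('received:', result);
  return result;
}

const ch = channel();
consume(ch);
setTimeout(() => ch.put('frame data'), 10); // the "worker" side hands over data
```

The point is that the consumer is "blocked" in the CSP sense without the event loop ever stalling, which is why a main-thread restriction like the Atomics.wait one wouldn't apply.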
> if we want JS applications on the web to continue to be viable alternatives to native applications on each platform<p>This is where I disagree with the direction Mozilla has been going for years. I don't want the web to be a desktop app replacement with HTTP as the delivery mechanism. I'm fine with rich single-page web apps, but I don't see why web apps need complete feature parity with desktop apps.<p>Why not let the web be good at some things and native apps be good at others?
This is the last piece needed to allow multi-threaded code with shared state in emscripten-compiled programs [0]. A very good thing indeed.<p>[0] <a href="http://kripken.github.io/emscripten-site/docs/porting/guidelines/portability_guidelines.html" rel="nofollow">http://kripken.github.io/emscripten-site/docs/porting/guidel...</a>
The saving grace of JavaScript's everything-is-async, single-threaded model was that it was just slightly less difficult to reason about than multiple threads and shared state. (Though I'd say that's debatable...)<p>My guess is that, despite the sugar coating JavaScript's async internals have received of late, writing stable multi-threaded code in JavaScript is going to be hard.<p>JavaScript now has the safety of multi-threaded code with the ease of asynchronicity!
On my grossly overpowered workstation, I can only crank the number of workers in the Mandelbrot demo to 20 [1]. Attempting to go beyond 20, the console reports:<p><pre><code> RangeError: out-of-range index for atomic access
</code></pre>
That said, 20 workers is about 11x faster than the single-threaded version.<p>[1] <a href="https://axis-of-eval.org/blog/mandel3.html?numWorkers=20" rel="nofollow">https://axis-of-eval.org/blog/mandel3.html?numWorkers=20</a>
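My guess (not the demo's actual code) is that the page allocates a fixed number of coordination slots in its SharedArrayBuffer, and Atomics range-checks every access against the typed array's length, so worker 21 indexes past the end. A minimal reproduction of that error:

```javascript
// Hypothetical reconstruction of the failure mode: a SharedArrayBuffer
// sized for exactly 20 slots, with Atomics bounds-checking each access.
const slots = 20;
const sab = new SharedArrayBuffer(slots * Int32Array.BYTES_PER_ELEMENT);
const coord = new Int32Array(sab);

Atomics.store(coord, slots - 1, 1); // index 19: fine

try {
  Atomics.store(coord, slots, 1);   // index 20: past the end
} catch (e) {
  console.log(e.name);              // "RangeError"
}
```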
I keep hoping that JS would evolve to support the actor model, à la Erlang/Elixir, with their process-based persistence, concurrency via message passing, etc. It just seems so much simpler and more tractable than this proposal.
> This leads to the following situation where the main program and the worker both reference the same memory, which doesn’t belong to either of them:<p>If only Mozilla had some technology that could deal with ownership of memory...<p>Seriously, if Rust doesn't have an asm.js-optimized target yet, it really should.
><i>Consider synchronization: The new Atomics object has two methods, wait and wake, which can be used to send a signal from one worker to another: one worker waits for a signal by calling Atomics.wait, and the other worker sends that signal using Atomics.wake.</i><p>Having not yet played with this myself: is anyone familiar with what kind of latency overhead is involved with signaling in the Atomics API? I'm not very familiar with the API yet, so I've no idea how signaling is implemented under the hood.<p>The MessageChannel API by contrast (i.e. <i>postMessage</i>) can be quite slow, depending on the payload. While you can use it within a render loop, it usually pays to be very sparing with it. Typical latency for a virtually empty <i>postMessage</i> call on an already-established channel is usually 0.05ms to 0.1ms. Most serialization operations will usually balloon that to well over 1ms (hence the need for shared memory). Plus transferables suck.<p>><i>Finally, there is clutter that stems from shared memory being a flat array of integer values; more complicated data structures in shared memory must be managed manually.</i><p>This is probably the biggest drawback to the API, at least for plain JavaScript. It really favors asm.js or WebAssembly compile targets for seamless operation, whereas plain JavaScript can't even share native types without serialization/deserialization operations to and from byte arrays.
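I can't speak to the wake-up latency, but the blocking semantics themselves are easy to poke at. A sketch of Atomics.wait's return values (run it in a worker or in Node; browsers forbid wait on the main thread, and no second thread is needed to see the basics):

```javascript
// Atomics.wait(typedArray, index, expectedValue[, timeoutMs]) returns
// one of three strings: "ok", "not-equal", or "timed-out".
const sab = new SharedArrayBuffer(4);
const ia = new Int32Array(sab);

// The value at index 0 is 0, not 1, so wait returns immediately:
console.log(Atomics.wait(ia, 0, 1));     // "not-equal"

// The value matches, so the caller sleeps until woken or the timeout fires:
console.log(Atomics.wait(ia, 0, 0, 50)); // "timed-out" (after ~50ms)
```

The "ok" case needs a second thread calling Atomics.wake (later renamed Atomics.notify) on the same index, which is where the actual signaling latency question comes in.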
I'm excited to see progress in the area of JS concurrency, but I'm not sure how useful this is going to be. It lets me share ArrayBuffers between workers, but all of my data is in the form of Objects, not primitive arrays.<p>One place where I would like to use this is for collision detection, like in this example: <a href="http://codepen.io/kgr/pen/GoeeQw" rel="nofollow">http://codepen.io/kgr/pen/GoeeQw</a><p>But I'm relying on objects with polymorphic intersects() methods to determine if they intersect with each other, and once I encode everything as arrays, I lose the convenience and power of objects.
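For the record, here's roughly what the flattening I'm dreading looks like: a hypothetical sketch packing circles into fixed-width Float64Array slots, with intersects() demoted from a polymorphic method to a free function over indices.

```javascript
// Hypothetical layout: each circle occupies 3 consecutive slots (x, y, r).
// With mixed shape types you'd also need a type tag per slot and a
// manual dispatch table -- exactly the object conveniences lost here.
const FIELDS = 3;
const buf = new SharedArrayBuffer(2 * FIELDS * Float64Array.BYTES_PER_ELEMENT);
const circles = new Float64Array(buf);

function setCircle(i, x, y, r) {
  circles.set([x, y, r], i * FIELDS);
}

function circlesIntersect(i, j) {
  const [x1, y1, r1] = circles.subarray(i * FIELDS, (i + 1) * FIELDS);
  const [x2, y2, r2] = circles.subarray(j * FIELDS, (j + 1) * FIELDS);
  const dx = x1 - x2, dy = y1 - y2;
  // Compare squared distance against squared radius sum (avoids sqrt):
  return dx * dx + dy * dy <= (r1 + r2) * (r1 + r2);
}

setCircle(0, 0, 0, 5);
setCircle(1, 8, 0, 4);
console.log(circlesIntersect(0, 1)); // true (distance 8 <= 5 + 4)
```

The upside is that this Float64Array view could be handed to a worker with zero copying; the downside is everything I said above.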
If only we did not have mutable data structures, there would be few or no problems to find in this.<p>Concurrency isn't hard: try Clojure's core.async and you will find out. Shared mutable state is mind-bogglingly hard.
If the problem that this is trying to solve is that `postMessage` is slow and you can't transfer slices of arrays, then perhaps they should solve it by speeding up `postMessage` and making array slicing cheap? Forcing a shared-memory concurrency model into JavaScript seems like a bit of an overreaction.