I don't get it. How is this different from starting new threads?<p>In the article's example, it doesn't look like anything is <i>returned</i> from each parallel function call. The main loop just invokes the func for each i, and they print when done. No shared memory, no scheduling or ordering. What's the advantage here?<p>In the code examples, shared memory and scheduling don't seem to be a thing either. It's more like functional or chain programming: a function calls the next func and passes its output along. Each loop iteration runs independently, asynchronously from the others.
Reminds me of the ECS model in gamedev.<p>That's great and all, but it doesn't solve or simplify the intricacies of parallel programming so much as it circumvents them, right?<p>Is the advantage that it's low-level and small?<p>I think the same "concept" can be done in Bash:
```for i in $(seq 1 100); do fizzbuzz "$i" & done```
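(To make that one-liner actually runnable, here's a fuller sketch with a hypothetical `fizzbuzz` function filled in and a `wait` so the script doesn't exit before the background jobs finish — just an illustration, not a claim about what the article does:)

```shell
#!/usr/bin/env bash

# Hypothetical fizzbuzz helper; the article presumably has its own.
fizzbuzz() {
  local n=$1
  if   (( n % 15 == 0 )); then echo "FizzBuzz"
  elif (( n % 3  == 0 )); then echo "Fizz"
  elif (( n % 5  == 0 )); then echo "Buzz"
  else echo "$n"
  fi
}

# Launch each call as an independent background job...
for i in $(seq 1 100); do
  fizzbuzz "$i" &
done

# ...then block until every job has finished.
wait
```

Note the output order is nondeterministic, which is kind of my point: no scheduling, no ordering, no shared memory.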