This is getting complicated: OS process management, threads and lightweight processes, green threads, virtual machines, containers, sandboxing in n browsers with m different technologies, now this WASM stuff... and orchestrating it all across the cloud and the global internet, ending in homes and corporate machine rooms.

Everywhere you have to think: who can load/run a module/process and from where, how to authenticate and authorize, which API to expose to it, etc.

A historical note:

Bell Labs' Plan 9 had a universal OS-level solution, which Linux has partially adopted but could not make general enough, partly because the higher-level ecosystem was stuck in its old ways:

- per-process namespaces with mountable/inheritable/stackable union directories and optionally shareable memory (the Linux lightweight process, LWP, comes close; it too was historically copied from Plan 9)

- almost all APIs (even "system calls") exposed as synthetic file systems (where do you think /proc came from?)

- which you could mount and access (efficiently) locally or over a secure, unified network protocol (9P)

On Plan 9 you could just run the different parts of a browser (the JavaScript engine, WASM, or anything else) in a tailored, limited LWP with limited mounts as synthetic file system APIs...

Note that Docker kind of retrofits Plan 9's ideas into the Linux kernel, embracing and extending the original Plan 9 design...
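The "APIs as synthetic file systems" point is easy to see on a stock Linux box, since /proc is exactly such a Plan 9 descendant: process state is read with ordinary file operations, no special syscall or library binding needed. A minimal sketch (the `unshare` line is illustrative only, as it needs privileges, and the /tmp/private path is a hypothetical example):

```shell
#!/bin/sh
# Read this shell's own process state through the synthetic /proc filesystem.
pid=$$
name=$(cat /proc/$pid/comm)      # the command name, served as a plain file
echo "process $pid is running: $name"

# Plan 9's per-process namespaces (rfork) roughly map to unshare/clone
# with CLONE_NEWNS on Linux. Shown but not run here, since it needs root:
#   unshare --mount sh -c 'mount --bind /tmp/private /mnt; ls /mnt'
# Inside that shell, the bind mount is visible only to that process tree,
# which is the same per-process-namespace idea, just far less pervasive.
```

The difference the comment is pointing at: on Plan 9 this was the *only* API model, so any service (graphics, networking, auth) could be mounted, filtered, or re-exported per process, whereas on Linux it applies to /proc, /sys, and little else.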