It seems they aimed to answer the question: What are the consequences of limiting the WA spec to only try to sandbox the WA binary from outside memory, and not try to prevent it from exploiting itself? The answer they got was that yes, a WA binary may be capable of exploiting itself. I think this is an interesting and valuable result, but I don't think the result is surprising, or that it invalidates the design of WA. The outputs from WA are not guaranteed to be what you expect. Or put another way: C is still C. I wonder if such a vulnerability could be found in a program written in safe Rust.<p>Notably, they don't appear to even try to break the WA-host memory barrier, which I actually find to be a validation of the core design goal of WebAssembly: isolate the damage a vulnerable program can inflict to the memory space (and thus also output) of that program. Protect the host from the program, but not the program from itself. Also, maybe don't dump WA output you can't validate directly into DOM.
The whole point of a sandbox is that it's only got toys in it so you don't care when the cat shits in it. But, of course, if the sandbox only contains toys, it's not very <i>useful</i>. You can't sit in it and order a pizza if all you've got is a toy phone.<p>So you leave your wallet, phone, and other critical devices in there. But now it has ceased to be a worry-free safe play space. It's no longer acceptable for the cat to shit in it.<p>This is a fundamental tension and we'll probably keep rediscovering this security problem any time a new container model is developed.
Hi everyone, Daniel here (one of the authors, I am the PhD student in the video). Great to see the paper submitted and discussed :-)<p>Sorry for being a bit late to the discussion, I will try to answer in detail the questions that were asked below.<p>I want to clarify one misunderstanding that seems to come up several times, namely the distinction between "host security" and "security of the WebAssembly program itself". WebAssembly <i>does</i> have measures and a good design for host security. E.g., in the browser, WebAssembly programs are run in a sandbox (just as JavaScript is), and writes inside WebAssembly's linear memory should never affect values outside of linear memory (e.g., VMs insert bounds checks when reading/writing from/to WebAssembly pointers). Those techniques protect against <i>malicious</i> WebAssembly binaries, which is of course important on the Web.<p>Here, we look at a different side of WebAssembly's security story: What if the WebAssembly binary is <i>vulnerable</i> and gets fed malicious input? In this attacker model, we can at most do what the host environment allows us to do. But especially for large WebAssembly programs with lots of imports, or WebAssembly binaries for standalone VMs (outside of the browser, without a tried-and-tested sandbox), this can still be a lot of attacker capability! And when we look into the protections <i>inside</i> WebAssembly's linear memory (not between linear memory and host memory), we find that there are very few. All linear memory is always writable, no stack canaries, no guard pages, no ASLR, no safe unlinking in smaller allocators, etc. This is worrying as more code gets linked together into a single WebAssembly binary.<p>If you have any further questions, I am more than happy to answer, here and also via email. Thanks again for the interest!
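A minimal sketch (mine, not from the paper) of what "no protections inside linear memory" means, using the JS WebAssembly API in Node. Two logical allocations sit side by side in linear memory, and an unchecked write through one silently clobbers the other, while a truly out-of-bounds write past the memory boundary is still rejected. The offsets are invented for illustration; a real binary's layout comes from its compiler and allocator:

```javascript
// Inside wasm linear memory there are no canaries or guard pages
// between allocations, so an overflow in one "object" silently
// corrupts its neighbor.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
const bytes = new Uint8Array(memory.buffer);

const BUF_OFFSET = 0;     // a 16-byte "stack buffer"
const SECRET_OFFSET = 16; // an adjacent "sensitive" byte

bytes[SECRET_OFFSET] = 42; // pretend this is a function-table index

// A memcpy-style write with an attacker-controlled length of 17:
const input = new Uint8Array(17).fill(0xff);
bytes.set(input, BUF_OFFSET); // overflows into SECRET_OFFSET, no trap

console.log(bytes[SECRET_OFFSET]); // 255 — silently corrupted

// By contrast, the host boundary *is* enforced: writing past the
// end of linear memory is rejected instead of touching host memory.
try {
  bytes.set([1], 65536); // one byte past the single page
} catch (e) {
  console.log('out-of-bounds write rejected:', e.constructor.name);
}
```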
Like they say, at worst WASM can make a mess of its own data.<p>The selling point of WASM outside the browser, to me, is native modules for other languages where you can whitelist the exposed APIs (and provide sandboxed versions) while having cross-platform binaries.<p>So for example, in the future node could ship with WASM module support: JS loads a C WASM module binary which I can deploy on Windows/Linux/Mac, I can review what that module has access to via some module manifest, and node's WASM support exposes a wrapped POSIX API based on the manifest configuration.
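The manifest idea above can be sketched with today's JS WebAssembly API (the manifest format and host function names here are hypothetical, not a real node feature): the host builds the import object from an allowlist, so the module can only ever reach what the manifest grants.

```javascript
// Hypothetical manifest: only 'env.log' is granted to the module.
const manifest = { allow: ['env.log'] };

const hostFns = {
  'env.log': (x) => console.log('module says:', x),
  'env.open_file': () => { throw new Error('never exposed'); },
};

// Build the import object strictly from the allowlist, so anything
// not in the manifest is simply absent from the module's world.
function importsFromManifest(manifest) {
  const imports = {};
  for (const name of manifest.allow) {
    const [mod, fn] = name.split('.');
    (imports[mod] ||= {})[fn] = hostFns[name];
  }
  return imports;
}

// Smallest valid wasm module: just the magic number and version.
// A real module would declare imports, matched against the allowlist.
const trivialModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

const ok = WebAssembly.validate(trivialModule);
const instance = new WebAssembly.Instance(
  new WebAssembly.Module(trivialModule),
  importsFromManifest(manifest)
);
console.log(ok, Object.keys(importsFromManifest(manifest).env)); // true [ 'log' ]
```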
This doesn't really seem to offer anything useful, or even contain new information. eg:<p>* no corruption of, nor access to host memory.<p>* no corruption of, nor access to memory of other processes.<p>It looks like what they did is create a wasm file that generates a JS "Alert !!!" string, then blindly runs that string in the browser without any kind of validation?<p>It's hard to tell from looking at their code though:<p><a href="https://github.com/sola-st/wasm-binary-security/blob/master/end-to-end-exploits/browser-libpng-xss/02-compile-pnm2png-wasm/out/main.html" rel="nofollow">https://github.com/sola-st/wasm-binary-security/blob/master/...</a><p>The html page there seems to be missing the "main.js" to see what's happening. :(<p>---<p>Hmmm, it might be this main.js, which looks (without checking) like it came from Emscripten:<p><a href="https://github.com/sola-st/wasm-binary-security/blob/master/attack-primitives/stack-buffer-overflow/out/main.js" rel="nofollow">https://github.com/sola-st/wasm-binary-security/blob/master/...</a>
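If the demo page does indeed run wasm output unsanitized, the fix is ordinary XSS hygiene: treat strings coming out of a wasm instance as untrusted data, not markup. A minimal escaping sketch (my helper, not part of any wasm or Emscripten API), runnable without a browser:

```javascript
// Escape the five HTML-significant characters before any
// innerHTML-style sink; better still, use element.textContent.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
  }[c]));
}

// A hypothetical attacker-controlled string emitted by a wasm module:
const wasmOutput = '<img src=x onerror=alert(1)>';
console.log(escapeHtml(wasmOutput));
// &lt;img src=x onerror=alert(1)&gt;
```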
If you want good security, you can't download new untested code. Sandboxing is important, but the most important part is to not allow arbitrary code to run.<p>WASM seems like a very good solution if you want to load random code and still retain as much safety and flexibility as possible. The burden is now moved to the code that interacts with the potentially misbehaving code: check and verify all outputs, limit resource usage. Using WASM is not so different from calling a remote server in the cloud. Anything can happen during execution, but only the interaction with the host can cause harm outside the system.
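One concrete "limit resource usage" knob the embedder controls is a hard cap on the module's linear memory, set at creation time. A small sketch with the JS WebAssembly API:

```javascript
// Cap linear memory at 2 pages (64 KiB each); growth beyond the
// maximum is rejected by the runtime, not negotiated with the module.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 2 });

memory.grow(1);     // 1 -> 2 pages: allowed
try {
  memory.grow(1);   // 2 -> 3 pages: exceeds the maximum
} catch (e) {
  console.log('growth capped:', e.constructor.name); // RangeError
}
console.log(memory.buffer.byteLength); // 131072 (2 pages)
```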
I’ve only watched the video, but to me this as much highlights how WebAssembly was designed to run in restricted browser sandboxes with CSP and other mitigations readily available, and that the use of WebAssembly outside of such relatively hardened environments is, as shown, riskier. Beyond the surprise that a WebAssembly binary might be tricked into misbehaving, though, the rest feels like standard security measures are enough. I’m not yet convinced we need more mitigations at the WebAssembly layer vs better security linting to try and catch the vulnerabilities, and perhaps not trusting any WebAssembly outputs without validating them first? I might be missing something here. I’ll probably need to read the paper next :)
Yes, and people are trying to run this outside the browser too (like a JVM alternative). Not sure what the craze is about wasm outside browsers, considering all the risks.