Hmm, where to begin? This is an old idea. It has all been tried before in the JVM world, and yet support for it is now being <i>removed</i>, which is in my view a pity given that Now Is The Time. But the problems encountered trying to make it work well were real and would need to be understood by anyone trying the same in the JS world.<p>Understand that Java had it relatively easy. Java was designed with a sandbox from day one, the venerable SecurityManager. The language has carefully controlled dynamism and is relatively easy to analyze statically and dynamically, at least compared to JavaScript. The libraries were designed more or less with this in mind, and so on.<p>So what went wrong?<p>Firstly, the model whereby you start with a powerful "root" capability and then shave bits off doesn't have particularly good developer usability. It requires you to manually thread these little capabilities through the call stack and heap, which is a nightmare refactoring job even in a language like Java, let alone something with sketchy refactoring tooling like JavaScript. <i>Lots</i> of APIs become awkward or impossible; something as basic as:<p><pre><code> var lines = readFile("library-data.txt");
</code></pre>
is now impossible because there's no capability there, yet developers do expect to be able to write such code. Instead it would have to look like this:<p><pre><code> function readFile(appDataPath) {
   var file = appDataPath.resolve("library-data.txt");
   return file.readLines();
 }
readFile(rootFileSystem.resolve("/app/data"));
</code></pre>
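To make the threading concrete, here is a minimal runnable Java sketch of the same pattern. The `Dir` capability type, its methods, and the in-memory backing store are all hypothetical, standing in for whatever capability API a real system would provide:

```java
import java.util.List;
import java.util.Map;

// Hypothetical capability type: holding a Dir is the *only* way to touch
// the files under it. There is no ambient openFile(path) anywhere.
interface Dir {
    Dir resolve(String child);              // attenuate: hand out a narrower capability
    List<String> readLines(String name);    // use the capability
}

// Toy in-memory implementation so the sketch is runnable.
final class MemDir implements Dir {
    private final Map<String, List<String>> files;
    private final String prefix;

    MemDir(Map<String, List<String>> files, String prefix) {
        this.files = files;
        this.prefix = prefix;
    }

    public Dir resolve(String child) {
        // The child capability can only see paths under prefix/child.
        return new MemDir(files, prefix + "/" + child);
    }

    public List<String> readLines(String name) {
        List<String> lines = files.get(prefix + "/" + name);
        if (lines == null) throw new SecurityException("no capability for " + name);
        return lines;
    }
}

public class CapsDemo {
    // The library cannot name "/app/data" itself; the capability is threaded in.
    static List<String> readLibraryData(Dir appData) {
        return appData.readLines("library-data.txt");
    }

    public static void main(String[] args) {
        Map<String, List<String>> fs =
                Map.of("/app/data/library-data.txt", List.of("a", "b"));
        Dir root = new MemDir(fs, "");
        Dir appData = root.resolve("app").resolve("data");
        System.out.println(readLibraryData(appData)); // [a, b]
    }
}
```

Note how every caller, all the way up, has to hold and pass the right `Dir`; that threading is exactly the refactoring burden described above.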
Can you do it? Yes. Does it make code that was once concise and obvious verbose and non-obvious? Also yes.<p>Consider also the pain that occurs when you need a module that has higher privileges than the code calling it (e.g. a graphics library that needs to load native code, but you don't want to let the sandboxed code do that). In the pure caps model you end up needing a master process that "tunnels" powerful caps through to the lower layers of the system, breaking abstractions all over the place.<p>Secondly, this model means you can never add new permissions, change the permission model or try different approaches, because refining permissions == refactoring all your code, globally, which isn't feasible.<p>Thirdly, this model imposes cap-management costs on <i>everyone</i>, even people who don't care about security because they know the code is trustworthy, e.g. because their colleagues wrote it, it came from a trustworthy vendor, or it'll run in a process sandbox. Even if you know the code is good it doesn't matter: you still have to supply it with lots of capabilities, implement callbacks to hand it the capabilities it needs on demand, and so on.<p>These problems caused Java to adopt a mixed capability/ambient permissions model. In the SecurityManager approach you assigned permissions based on <i>where</i> code came from, and stack walks were used to intersect the permissions of all the code sources on the stack. Java also allowed libraries to bundle data files within them, and granted libraries read access to their own resources by default. That solved the above problems but introduced new ones: it hurt performance due to the stack walking, and library developers now had to document what permissions they needed and actually test their code in a sandboxed context. They never did this.
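The stack-walk intersection can be sketched as a toy. To be clear, this is not the real SecurityManager/AccessController API (which was far more involved, and keyed permissions to code sources rather than class names); it just illustrates the core idea that effective permissions are the intersection of what every piece of code on the call stack is granted:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class StackWalkDemo {
    // Toy "policy": permissions granted per class. Real Java granted them
    // per code source (jar/URL), but the shape of the check is the same.
    static final Map<String, Set<String>> POLICY = Map.of(
            "TrustedApp", Set.of("readFile", "openSocket"),
            "SketchyLib", Set.of("readFile"));

    // Effective permissions = intersection over every policy-known class on
    // the current call stack, roughly what the SecurityManager's stack walk
    // computed across protection domains.
    static Set<String> effectivePermissions() {
        StackWalker walker =
                StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE);
        return walker.walk(frames -> {
            Set<String> perms = null;
            for (StackWalker.StackFrame f : frames.toList()) {
                Set<String> granted = POLICY.get(f.getDeclaringClass().getSimpleName());
                if (granted == null) continue; // frames outside the policy ignored in this toy
                if (perms == null) perms = new HashSet<>(granted);
                else perms.retainAll(granted);
            }
            // No sandboxed code on the stack: treat as fully trusted (toy marker).
            return perms == null ? Set.of("*") : perms;
        });
    }

    public static void main(String[] args) {
        // TrustedApp on its own could open sockets, but once SketchyLib is
        // on the stack the intersection drops that permission.
        System.out.println(TrustedApp.run()); // [readFile]
    }
}

class SketchyLib {
    static Set<String> doWork() { return StackWalkDemo.effectivePermissions(); }
}

class TrustedApp {
    static Set<String> run() { return SketchyLib.doWork(); }
}
```

Walking every frame on every sensitive operation is also why the real thing cost performance, as noted above.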
Also the approach was beaten from time to time by people finding clever ways to construct pseudo-interpreters out of highly dynamic code, such that malicious code could get run without the bad guy being on the stack at all.<p>Fourthly, it's dependent on everyone playing defense all the time. If your object might get passed into malicious code, then it has to be designed with that in mind. A classic mistake:<p><pre><code> class Foo {
   private final List<String> commands = new ArrayList<>();
   void addCommand(String command) { commands.add(command); }
   List<String> getCommands() { return commands; }
 }
</code></pre>
The author's intent was to make an object in which you can read the list of commands but not write them. But they're returning the collection directly instead of using an immutable wrapper. Fine in normal code, but oops, in sandboxed code now you have a CVE. Bugs like this are non-obvious and the tooling needed to find them isn't straightforward. These bugs are a drain on development.<p>Fifthly, Spectre attacks mean that a library that can get data to an attacker via any route can exfiltrate data from anywhere in the process. You may not care about this, and for many libraries there may be no plausible way they can exfiltrate data. But it's another sharp edge.<p>Finally, it all depends on the ecosystem having minimal native code dependencies. The moment you have native code in the mix, you can't do this kind of sandboxing at all.<p>Now. All these are <i>challenges</i> but they don't mean it's impossible. Sandboxing of libraries is clearly and obviously where we have to go as an industry. The Java approach didn't fail only due to the fundamental difficulties outlined above: the SecurityManager was poorly documented and not well tuned for the malicious-libraries use case, because it was really meant for applets. After the industry gave the Java team so much shit over that, they just sort of gave up on the whole technology rather than continuing to iterate on it. It may be that a team with fresh eyes and fresh enthusiasm can figure out solutions for the above issues and make in-process sandboxing really happen. I wish them the best, but anyone who wants to work on that should start by spending time understanding the SecurityManager architecture and how it ended up the way it did.<p><a href="https://dl.acm.org/doi/pdf/10.1145/2030256.2034639" rel="nofollow">https://dl.acm.org/doi/pdf/10.1145/2030256.2034639</a>
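For the record, the defensive version of the Foo example above is a one-line change using the standard `Collections.unmodifiableList` wrapper (the class name here is mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SafeFoo {
    private final List<String> commands = new ArrayList<>();

    void addCommand(String command) { commands.add(command); }

    // Callers get a read-only view; mutation attempts throw
    // UnsupportedOperationException instead of silently changing our state.
    List<String> getCommands() { return Collections.unmodifiableList(commands); }
}
```

The fix is trivial once you see it; the tax, as argued above, is having to remember to do this on every accessor of every object that might cross a trust boundary.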