I don't think this meets the definition of "safe" in "safe" Rust: "safe" doesn't just mean "won't crash due to spatial memory errors," it means that the code is <i>in fact</i> spatially and temporally memory safe.<p>In other words: this won't detect memory unsafety that doesn't result in an abnormal exit or other detectable fault. If I'm writing an exploit, my entire goal is to perform memory corruption <i>without</i> causing a fault; that's why Rust's safety property is much stronger than crash-freeness.
There <i>is</i> a way to sandbox native code without forking a new process, and it looks like this:<p><a href="https://hacks.mozilla.org/2020/02/securing-firefox-with-webassembly/" rel="nofollow">https://hacks.mozilla.org/2020/02/securing-firefox-with-weba...</a><p>Firefox uses processes for sandboxing, but for small components they aren't worth the overhead. For those it uses this curious idea: first compile the potentially unsafe code to wasm (any other VM would work), then compile the wasm back to C (using the wasm2c tool), then build that generated C source into your program as usual.<p>All UB in the original code becomes logic bugs in the wasm: it can produce incorrect values, but it can't corrupt memory or do the other things UB can do. Firefox does this to encapsulate C code, but the same works for Rust.
This isn't mentioned anywhere on the page, but fork is generally not a great API for these kinds of things. In a multi-threaded application, any code between the fork and exec syscalls must be async-signal-safe. Since memory is replicated in full at the time of the call, the current state of every mutex is replicated too; if some other thread was holding one at that moment, there is a risk of deadlock. A simple print!, or anything that allocates memory, can lead to a freeze. There's also the issue of user-space buffers: printing, for example, may write to a buffer that, if not flushed, is lost after the callback completes.
Better to design the application from the start around multiple processes, OS IPC, and actual OS sandboxing APIs.<p>Pseudo-sandboxing on the fly is an old idea with its own issues, as the classical UNIX approach to launching daemons proved.
If you can afford to sacrifice that much performance just to run some potentially unsafe code, then you can probably afford to not be writing Rust in the first place and instead use a garbage-collected language.
This is cool from a theoretical perspective, but `fork()` can be prohibitively expensive, at least on the hot path. It's a neat tool that should be used with care.
This is likely to violate async-signal-safety [1] in any non-trivial program, unless used with extreme care. Running code in between a fork() and an exec() is fraught with peril; it's not hard to end up in a situation where you deadlock because you forked a multi-threaded process where one of the existing threads held a lock at the time of forking, among other hazards.<p>[1] <a href="https://man7.org/linux/man-pages/man7/signal-safety.7.html" rel="nofollow">https://man7.org/linux/man-pages/man7/signal-safety.7.html</a>
Forking, and this package, can be useful if you know the unsafe code is genuinely unsound and you have no hope of making it better.<p>But I wouldn't use this often. I'd be willing to bet you'd lose all the performance benefits of using Rust over something like Python or Ruby, which use forking extensively for parallelism.
this seems like a good place to ask, I don’t write very much unsafe Rust code… but when I do, it’s because I’m calling the Win32 API.<p>Tools like valgrind do not work on Windows, and I am nowhere near smart enough to know the entire layout of memory that should exist.<p>When using Windows and calling system functions, there’s a lot of casting involved; to convert wide characters and DWORDs to Rust primitives, for example. And given that I don’t have a good debugging situation, I’m terrified that I’m corrupting or leaking memory.<p>does anyone know any good tools that work on Windows to help me out here?
This also means the function might not do what you want, i.e. if it takes a `&mut T` argument, that argument can't actually be mutated, and anything that relies on interior mutability, even if it's not a mut argument, also won't work.<p>Rust allows memory-impure things, like interior mutability of arguments, so you can get different (i.e. incorrect) results when using this to run otherwise fine rust code.<p>For example:<p><pre><code> fn some_fn(x: &mut i32) {
    *x = 2;
}

fn main() {
    let mut x = 1;
    mem_isolate::execute_in_isolated_process(|| {
        some_fn(&mut x);
    }).unwrap();
    println!("{x}"); // prints '1' even though without 'mem_isolate' this would be 2
}</code></pre>
> It forces functions to be memory pure (pure with respect to memory), even if they aren't.<p>What if the unsafe code is not supposed to be pure but mutates some memory? For example, does this allow implementing a doubly-linked list?
Please please please add a big huge warning to your crate that it should never be used in multi-threaded programs. fork() is not safe when there is more than one thread present, as the child process can easily deadlock (or worse) if the fork() happens at just the wrong time with respect to what other threads are doing.
This is super interesting! I would be very curious to see how we could get even more safety when running WebAssembly in Wasmer with this crate (similar to V8 isolates).<p>Awesome work!
>>We call this trick the "fork and free" pattern. It's pretty nifty.<p>It should be called "fork and see" pattern instead :D
As a joke, it's funny. Obviously you would not want to actually deploy this. I feel like most comments are too quick to criticize using this in prod (don't!) and missing the point.