To be absolutely clear, the behavior exhibited in the OP is indeed considered a bug by the Rust developers: see https://github.com/rust-lang/rust/issues/16012#issuecomment-165589146 for the latest discussion. TL;DR: this currently isn't exploitable on Windows, and patches to LLVM adding support for stack probes will ideally allow this to be easily solved for non-Windows platforms as well.
From the article:

> The Rust program got a segmentation fault because it attempted to write to inaccessible memory, but only through the stack pointer. None of the undefined behaviors disallow this, which I think is why it's ok for this Rust program to segfault.

What I got out of that is that Rust *does not work as advertised* if there are still situations where a program could segfault. The entire premise of Rust, as I understood it at least, is that it does things in a safe manner and the programmer does not have to worry about it. Now I learn that there are undefined behaviors. In my view, for a language that bills itself as safe, there should not exist such things as undefined behaviors. As far as I am concerned, then, based on the advertising of Rust, this is *false advertising*.
For those wondering about segfaults specifically in Rust (I know it's not the point of the blog post but it might be interesting to others), this thread talks about why they occur and whether they'll ever be eliminated entirely:

https://users.rust-lang.org/t/rust-guarantees-no-segfaults-with-only-safe-code-but-it-segfaults-stack-overflow/4305
The first sample code: "This program segfaults because the entire stack is set to 0 at program start."

I'd be surprised; as a strong general rule, the stack does not get zeroed [Edit: see end of thread! It's the OS zeroing everything - learn something every day]. I'd expect it to segfault because the pointer value is whatever leftover non-zero value happens to be in that piece of memory, so it points into random memory the user program shouldn't be messing with (sticking in a printf to output the value of the pointer confirms this on at least one system). Wouldn't be surprised if some implementations took security really seriously and zeroed everything, or if a debug build was zero-happy, but under normal circumstances, the stack doesn't get zeroed.
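Here's the kind of quick experiment I mean (a minimal C sketch; note that reading an uninitialized variable is itself undefined behavior, so this is strictly a diagnostic poke, and an optimizing build may print anything):

```c
#include <stdio.h>

int main(void) {
    int *p;  /* deliberately uninitialized */
    /* Undefined behavior: we read whatever garbage occupies this stack
     * slot. On a fresh, OS-zeroed stack this may well print (nil)/0x0;
     * on a dirtied stack it prints whatever value was left behind. */
    printf("leftover pointer value: %p\n", (void *)p);
    return 0;
}
```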
> Curiously, I found that if I had a buffer size of even 1 byte over (8 MB - 8 KB), I still got the segfault. I’m not yet sure what’s going on there!

This is because of gcc padding: programs have to allocate whole pages from the OS, so even if you want just one int, you still get a whole page for it (compilers can optimize this away in some conditions).
This is a result of the MMU, which works on whole memory blocks (pages) rather than single bytes (a performance trade-off, I think).
But as far as I know, the default page size is 4 KB.

Another reason may be that the compiler tries to allocate 2^n bytes for performance, and 8 KB is close enough, I think.
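For reference, the page size the kernel actually uses can be queried at runtime rather than assumed (a minimal sketch using the standard POSIX call):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* _SC_PAGESIZE reports the MMU page size the kernel hands out;
     * 4096 bytes is the common default on x86 Linux, but it varies. */
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);
    return 0;
}
```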
> Segfaults are our friends and teachers

Too bad memory is not better segmented, then. For instance, when linking against a library, that library's memory ends up in the same "segment" as the program itself. Therefore, right now, you can totally screw up a library's internal data structures without even causing a segfault directly.
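A hedged illustration of that point (the variable names are hypothetical stand-ins; adjacency is not guaranteed by the standard, and an optimizer may reorder, pad, or elide the write, but on common layouts it behaves as described):

```c
#include <stdio.h>

/* Stand-ins: `mine` is the program's buffer, `theirs` plays the role of
 * a library's internal state living in the same writable segment. */
static int mine[4];
static int theirs = 42;

int main(void) {
    /* Out-of-bounds write: undefined behavior. On many layouts it lands
     * silently on a neighboring object instead of an unmapped page, so
     * no segfault ever fires. */
    mine[4] = 7;
    printf("theirs = %d\n", theirs);  /* may print 7 rather than 42 */
    return 0;
}
```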
> These are called guard pages. Attempts to write there would result in a segmentation fault.

...which are caught by the OS and used either to truly kill the process when the stack overflows, or to dynamically allocate more memory as the stack grows downwards. That's how it works on Windows, at least; I'm not as clear about Linux.
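On Linux you can observe the guard-page fault yourself by catching SIGSEGV on an alternate stack (a minimal sketch; the 4096-byte frame and 64 KB alternate stack are arbitrary choices of mine, not anything the OS requires):

```c
#define _XOPEN_SOURCE 700
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* The handler must run on an alternate stack, because the normal stack
 * is exhausted at the moment the guard page is hit. */
static void on_segv(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)ctx;
    /* fprintf is not async-signal-safe; tolerable here only because we
     * exit immediately afterwards. */
    fprintf(stderr, "fault at %p -- likely the stack guard page\n", info->si_addr);
    _exit(1);
}

static void recurse(int depth) {
    volatile char pad[4096];    /* consume roughly one page per frame */
    pad[0] = (char)depth;       /* touch the page so it is really used */
    recurse(depth + 1);
    pad[1] = 0;                 /* prevents tail-call optimization */
}

int main(void) {
    /* Give the handler its own stack; without SA_ONSTACK it would fault
     * again while trying to push its own frame. */
    static char alt_stack[64 * 1024];
    stack_t ss = { .ss_sp = alt_stack, .ss_size = sizeof alt_stack, .ss_flags = 0 };
    sigaltstack(&ss, NULL);

    struct sigaction sa = {0};
    sigemptyset(&sa.sa_mask);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO | SA_ONSTACK;
    sigaction(SIGSEGV, &sa, NULL);

    recurse(0);                 /* overflow the stack into the guard page */
    return 0;
}
```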
The robust solution to this problem is not hardcoding the pipe buffer size and changing the size of pipe buffers within your program to match your hardcoded value, but rather calling fpathconf to query the pipe buffer size for the pipe FD you are working with.
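A minimal sketch of that query (assuming fd 0 is the pipe you're reading from; note that _PC_PIPE_BUF reports the atomic-write limit PIPE_BUF, which may differ from the kernel's total pipe capacity):

```c
#include <stdio.h>
#include <unistd.h>
#include <limits.h>

int main(void) {
    /* Ask the OS for the pipe's atomic-write limit instead of
     * hardcoding it. A return of -1 can mean "indeterminate" or an
     * error, so we fall back to the POSIX-guaranteed minimum. */
    long buf = fpathconf(STDIN_FILENO, _PC_PIPE_BUF);
    if (buf == -1)
        buf = PIPE_BUF;
    fprintf(stderr, "pipe atomic-write size: %ld bytes\n", buf);
    return 0;
}
```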