That we seriously discuss using 24 out of 64 pointer bits to prevent one of the many problems with buffer overflow, but we cannot seriously discuss making buffer overflows impossible, is very depressing.

How about we use 24 bits of data pointers to keep the array size, or 1 bit to indicate "this is a pointer with a size" and 23 bits for the size, and then have our load/store-with-index instructions, as well as freshly added pointer arithmetic instructions, trap when the index exceeds the size? Instead of using bits in instruction pointers so that one of many kinds of buffer overflow cannot create valid instruction pointers? No good?
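A minimal C sketch of that idea, purely illustrative: it assumes a 48-bit user address space, so to stay runnable it packs 1 flag bit plus 15 size bits into the top of the pointer rather than the 24 bits proposed above, and an ordinary function calling abort() stands in for the proposed trapping load instruction. All names here are made up.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical layout: bit 63 = "sized pointer" flag, bits 62..48 = element
     * count (15 bits), bits 47..0 = the actual address.  Assumes user-space
     * addresses fit in the low 48 bits; an illustration, not any real ABI. */
    #define SIZED_FLAG  (1ULL << 63)
    #define SIZE_SHIFT  48
    #define SIZE_MASK   ((1ULL << 15) - 1)
    #define ADDR_MASK   ((1ULL << 48) - 1)

    static uint64_t make_sized_ptr(void *p, uint64_t count) {
        return SIZED_FLAG | ((count & SIZE_MASK) << SIZE_SHIFT)
                          | ((uint64_t)(uintptr_t)p & ADDR_MASK);
    }

    /* Stand-in for a "load with index" instruction that traps on overflow. */
    static uint32_t load_u32_indexed(uint64_t sized_ptr, uint64_t index) {
        if (sized_ptr & SIZED_FLAG) {
            uint64_t count = (sized_ptr >> SIZE_SHIFT) & SIZE_MASK;
            if (index >= count) {
                fprintf(stderr, "bounds trap: index %llu >= size %llu\n",
                        (unsigned long long)index, (unsigned long long)count);
                abort();                      /* the hardware "trap" */
            }
        }
        uint32_t *base = (uint32_t *)(uintptr_t)(sized_ptr & ADDR_MASK);
        return base[index];
    }

    int main(void) {
        uint32_t buf[8] = {0};
        uint64_t p = make_sized_ptr(buf, 8);
        printf("%u\n", load_u32_indexed(p, 3));   /* in bounds, fine */
        load_u32_indexed(p, 8);                   /* out of bounds, traps */
        return 0;
    }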
With address space randomization, if you have a valid pointer to memory A, you can compute a valid pointer to memory B if they are in the same section. You can't do that with this, because the address is part of the signature.
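A minimal sketch of that property: under ASLR the base of a loaded segment moves between runs, but the distance between two objects in the same segment does not, so leaking one address is as good as leaking the other. The variable names are illustrative.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Two objects that end up in the same loaded segment. */
    static long secret_a = 1;
    static long secret_b = 2;

    int main(void) {
        uintptr_t a = (uintptr_t)&secret_a;
        uintptr_t b = (uintptr_t)&secret_b;

        /* Changes every run (the randomized base)... */
        printf("&secret_a = %p\n", (void *)a);

        /* ...but this offset is identical on every run, so an attacker who
         * learns one address can compute the other. */
        printf("b - a     = %td\n", (ptrdiff_t)(b - a));
        return 0;
    }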
How does using the "unused" bits of a 64-bit pointer differ, functionally, from address space randomization with 64 bits? The search space is the same. Misses are still trivially detectable.

By my reading, this allows not a whitelist of pages, but a whitelist of arbitrary addresses. Different granularities entirely. Can anyone else shed some light on this?
Intuitively, I would have preferred they use a bigger pointer type (96 or 128 bits) instead of the currently unused part of the pointer, which will shrink when we need a bigger address space.
"attaches a cryptographic signature to pointer values"<p>I guess everyone who thought that "signed integers" are cryptographically signed weren't THAT wrong after all :D
I think the deal is that you can't create a valid signed pointer by reusing the upper bits of a good one ...

It's not the misses you worry about, it's the hits.
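A toy model of that point, with a made-up 16-bit tag standing in for the real PAC (this is not ARM's QARMA, just a keyed mix showing the structure): because the signature is computed over the address, splicing the valid upper bits from pointer A onto address B fails authentication.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy "PAC": a 16-bit tag derived from the address, a context value, and
     * a secret key, stored in the pointer's top 16 bits. */
    static uint64_t toy_mac(uint64_t addr, uint64_t context, uint64_t key) {
        uint64_t x = addr ^ context ^ key;
        x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;   /* cheap mixing, not real crypto */
        x ^= x >> 29;
        return x & 0xffff;                          /* 16-bit tag */
    }

    static uint64_t sign_ptr(uint64_t addr, uint64_t ctx, uint64_t key) {
        return (toy_mac(addr, ctx, key) << 48) | (addr & 0xffffffffffffULL);
    }

    static int auth_ptr(uint64_t signed_ptr, uint64_t ctx, uint64_t key) {
        uint64_t addr = signed_ptr & 0xffffffffffffULL;
        return (signed_ptr >> 48) == toy_mac(addr, ctx, key);
    }

    int main(void) {
        uint64_t key = 0x1234abcd5678ef90ULL, ctx = 42;
        uint64_t good_a = sign_ptr(0x7f0000001000ULL, ctx, key);
        uint64_t good_b = sign_ptr(0x7f0000002000ULL, ctx, key);

        /* Splice the valid signature bits from pointer A onto address B:
         * the check fails because the signature covered A's address. */
        uint64_t forged = (good_a & ~0xffffffffffffULL)
                        | (good_b &  0xffffffffffffULL);

        printf("A authenticates:       %d\n", auth_ptr(good_a, ctx, key));  /* 1 */
        printf("forgery authenticates: %d\n", auth_ptr(forged, ctx, key));  /* almost surely 0 */
        return 0;
    }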