I find it telling that the recommendations for mitigating these sorts of vulnerabilities include adding some slop to the end of allocations (which I believe will only make off-by-one errors "acceptable" when they are clearly still incorrect; perhaps we will see off-by-two errors now…) and introducing more randomization into allocations.

This second point I find particularly insidious, because the very essence of this paper is that other attempts at randomization (e.g. ASLR) have been found wanting: as long as overflow is feasible, there will be creative ways to take control of the instruction pointer.

I wonder how much performance would degrade if a language with a runtime similar to C's but with bounds checking (e.g. Ada or Rust) were used to write this sort of software. It seems like the only reasonable way to completely prevent these exploits, and I suspect profiling would reveal there are still hot spots where selective optimization could bring performance back on par (if it is not already).
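
To make that last point concrete, here is a minimal Rust sketch of what I have in mind (my own illustration, not anything from the paper): indexing is bounds-checked by default, so an off-by-one write panics instead of silently spilling into whatever slack follows the allocation, and a single profiled hot spot can opt out with an explicitly audited unsafe block.

    fn checksum_safe(data: &[u8]) -> u32 {
        // Every index is bounds-checked; an off-by-one bug such as
        // data[data.len()] panics instead of corrupting adjacent memory.
        let mut sum = 0u32;
        for i in 0..data.len() {
            sum = sum.wrapping_add(data[i] as u32);
        }
        sum
    }

    fn checksum_hot(data: &[u8]) -> u32 {
        // Hypothetical "selective optimization": if profiling showed the
        // checks themselves to be a bottleneck (the compiler often elides
        // them anyway), they could be removed in one audited spot.
        let mut sum = 0u32;
        for i in 0..data.len() {
            // SAFETY: i < data.len() by the loop bound.
            sum = sum.wrapping_add(unsafe { *data.get_unchecked(i) } as u32);
        }
        sum
    }

    fn main() {
        let buf = vec![1u8, 2, 3, 4];
        assert_eq!(checksum_safe(&buf), checksum_hot(&buf));
    }

The function names and the checksum are made up; the point is only that the unchecked path stays confined to one place you can audit, rather than being the default everywhere.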