> iOS is execute-only; Android tried a few years ago (abandoned)<p>Wonder if the author is aware of the reasons why this was disabled (it's functionally gone on both platforms). On iOS, newer processors have PAC, which provides much stronger guarantees against ROP, and Linux disabled it because execute-only mappings bypass PAN: <a href="https://blog.siguza.net/PAN/" rel="nofollow">https://blog.siguza.net/PAN/</a>.<p>> Dumb applications that invent their own ABI (very few)<p>I mean, I know this is meant to be bait, but I'll take it: applications that use their own internal ABI are valid programs.<p>> On every kernel entry, if the RPKU register has been changed kill the process<p>> When a process does a system call, the SP register MUST point to stack memory!<p>Has <a href="https://xerub.github.io/ios/kpp/2017/04/13/tick-tock.html" rel="nofollow">https://xerub.github.io/ios/kpp/2017/04/13/tick-tock.html</a> vibes<p>> Stack and Syscall Protection detect a variety of easier exploit patterns, pushing the ROP programmer to explore more challenging schemes, which may not be viable<p>> Increasing exploitation difficulty is a valid strategy<p>Ok, so this is the actual interesting part of the paper: it seems like they are trying to shore up their syscall-origin protections, which are not very strong in the presence of ROP, on hardware that doesn't really have CFI protections.<p>As far as I can tell, this Xonly protection only attempts to disrupt blind ROP ("you can't read the code anymore") rather than the construction of a full ROP chain. There are some attempts to validate things on entry to the kernel (pc, sp), but those registers are under the control of userspace, so what will probably happen is that they get switched back to sane values prior to kernel entry and then adjusted to attacker-controlled values again.
I expect this to require some cleverness on the attacker's side, but this is typically how such checks are bypassed, assuming there is not some other overlooked way around them.<p>This brings us to OpenBSD's strategy for exploit mitigation, which in my eyes has far too much tunnel vision: it tries to match on individual exploit strategies rather than protecting against the more general underlying problems. The policy of "let's make exploitation harder" is actually very close to something I'm working on right now, and it has a number of important caveats that I don't see addressed here.<p>These things are true:<p>* Reducing the reliability of an exploit makes it far less attractive.<p>* Adding non-perfect mitigations against common exploitation strategies means people can't just throw a proof-of-concept written for another platform at your system.<p>However, these are also true:<p>* Attackers are very, very good at turning "we made this 99% secure!" into "this mitigation basically never works".<p>* Given adequate time, attackers will construct new strategies you didn't think of to attack the same underlying problem if you don't fix it.<p>I am not an exploit author, so take this with a grain of salt, but I would guess that an experienced team could come up with a way to do either of the above in maybe a year. And once it's broken, the cost on the OpenBSD side to improve the protection is high, because the bypass will break the entire design, requiring a human to revisit it and come up with a new clever design to keep attackers at bay. At that point, evading the protection becomes just a routine step in an exploit, as opposed to, say, NX, which completely killed the ability to execute shellcode from the stack and forced the development of ROP over multiple years.
Good mitigations are highly asymmetric: the effort required to design them is small compared to the effort an attacker needs to fully bypass them. Usually this means that if you're spending significant time designing something, it should probably be sound, rather than merely shrinking the window of opportunity for an exploit.