Manual parsing of complex binary formats (in this case a netfilter control protocol) in hand-rolled C code in ring 0. An approach from the innocent 1990s.<p>Quipping aside, an interesting thing is that this kind of thing used to be callable by root only, and even in the early days of putting things behind more fine-grained capabilities, CAP_NET_ADMIN probably wasn't taken very seriously as representing untrusted users. Which raises the question of whether it would be more secure to keep these things root-only and make people do the hard and dangerous part of untrusted input handling in userspace, where it's also easier (and not forbidden by kernel coding style rules...) to use safer PLT techniques to do it.
I wonder how these vulnerabilities would be treated if the researchers didn't bother to find an exploit for them?<p>The description of the exploit is so detailed that it gives the impression that discovering the original vulnerability was fairly quick and simple, and the real work is in building a functioning exploit to wriggle through the discovered kernel flaw.<p>If the researchers were to just stop once they discover the vulnerability, and immediately report it to the kernel security team, would it be treated with the same level of seriousness? Could a CVE with the same severity be issued, or would there be arguments over whether or not the bug is exploitable?
Didn't see a link to the fix. Here it is: <a href="https://github.com/torvalds/linux/commit/b29c457a6511435960115c0f548c4360d5f4801d" rel="nofollow">https://github.com/torvalds/linux/commit/b29c457a65114359601...</a>
> While the vulnerability was found by code auditing, it was also detected once by <a href="https://syzkaller.appspot.com/bug?id=a53b68e5178eec469534aca80fca1735fb3d8071" rel="nofollow">https://syzkaller.appspot.com/bug?id=a53b68e5178eec469534aca...</a>, however not with a reproducible C code.<p>This made me pause. I had naively assumed (well, actually, never thought about it) that fuzzing would always expose a clear and obvious error path, but apparently there's a lot of manual digging required to find the error mode?
For folks who are operating at this layer of bounds and overflows and such, are you relying entirely on a mental visualization of the data structures when reasoning about them? Or does this kind of exercise require some sketching etc? I find it to be quite challenging to explore these topics without having to draw on paper, so I was curious as to whether this is something that you eventually graduate from, or whether sketching things out remains a large part of the process.<p>Are there some tools that help this kind of thought process? Or do you have to use drawing primitives - rectangles, etc to do it?
Would the following steps prevent these exploits, and if yes, why haven't they been implemented yet?<p>1. When spraying, they make use of the fact that the same arena is used for all structs of similar sizes. This allows them to fill holes in arenas with arbitrary data, creating fake structs. Why not have an arena per struct, or at least per subsystem?<p>2. Have some secret tag stored before each struct allocated in the arena. When freeing, check that the tag is still intact. This should detect a write spanning multiple adjacent structs.<p>3. I didn't quite understand how they managed to create the malicious release function. They have control over a buffer where they can write code, but the memory page containing that buffer isn't executable, right?
Is it just me, or is this super similar to Dirty Cow [0] in terms of severity? Insane find!!<p>[0]: <a href="https://en.wikipedia.org/wiki/Dirty_COW" rel="nofollow">https://en.wikipedia.org/wiki/Dirty_COW</a>
I get that the terms of the contest stipulated payout limitations, but $10,000 really seems like chump change for this bug. Bypasses all protections, executes arbitrary code. That's worth a lot in the right hands.
Our regular reminder that the union of lifetimes of all known kernel exploits covers the entire history of Linux. There has never been a time during which your Linux systems were not vulnerable to takeover, somehow.
Vulnerabilities that require access to the machine are not really worth spending time on, because servers should rely on shell hardening and clients should not be trusted anyway.<p>What we need are ways to avoid the kernel for networking memory; I figured Oracle would have realized this for Java by now, but they are dragging their feet!