The paper [1] on AFLFast is, IMO, a great example of where academia shines: carefully looking at how and why something works, developing some theory and a working model, and then using that to get a substantial improvement on the state of the art (and doing a nice evaluation to show that it really works).

[1] https://www.comp.nus.edu.sg/~mboehme/paper/CCS16.pdf
In the first pass, 6 bugs were found and reported: 3 heap-use-after-free and 3 heap-buffer-overflow. Similar numbers in the second pass.

I'm so glad new programming languages are making strides that prevent this sort of thing outright. They don't prevent all bugs, but they sure prevent some of the most damaging ones.
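For readers not steeped in sanitizer output: "heap-use-after-free" is AddressSanitizer's label for a read or write through a pointer whose heap allocation has already been freed. A minimal C sketch of the bug class (illustrative only, not taken from the actual perl reports):

    /* Illustrative heap-use-after-free (not from the perl reports).
     * Build with: cc -g -fsanitize=address uaf.c && ./a.out
     * AddressSanitizer aborts with a "heap-use-after-free" report. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        char *buf = malloc(16);
        if (!buf) return 1;
        buf[0] = 'x';
        free(buf);
        /* Read through a dangling pointer: undefined behavior in C,
         * and the kind of damage memory-safe languages rule out. */
        printf("%c\n", buf[0]);
        return 0;
    }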
The published SEGVs are not security relevant. They only happen in DEBUGGING output, which is not compiled into production perls, unless you use an old Red Hat system, which shipped a 10x-slower debugging perl.

I fixed the publicly reported bugs in 2 minutes.
I cannot fix the other bugs, since they were not reported to cperl (the perl5 fork that is doing the actual development of perl5). The perl5 security team is doing horrible work, so I would prefer to receive the reports as well, for independent and usually better fixes.

Brian Carpenter and Dan Collins have provided excellent AFL work for perl5 lately.
My understanding is that fuzz testing uses pseudo-random variation of the seed code; given a different seed to the PRNG, how common is it for the same fuzz test to identify different flaws?
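Roughly: mutation-based fuzzers like AFL repeatedly mutate seed inputs (bit flips, byte substitutions, splices), and the PRNG decides which mutations get applied in what order, while coverage feedback steers the search toward inputs that reach new paths. A toy C sketch of a single PRNG-driven mutation step (illustrative only; this is not AFL's actual mutation engine):

    /* Toy PRNG-driven mutation step (illustrative only, not AFL's engine).
     * Reads a seed input, flips a handful of pseudo-randomly chosen bits,
     * and writes the mutant out; the PRNG seed decides which bytes change. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        if (argc != 4) {
            fprintf(stderr, "usage: %s <seed-input> <mutant-out> <prng-seed>\n", argv[0]);
            return 1;
        }

        FILE *in = fopen(argv[1], "rb");
        if (!in) { perror(argv[1]); return 1; }
        static unsigned char buf[1 << 16];
        size_t len = fread(buf, 1, sizeof buf, in);
        fclose(in);
        if (len == 0) { fprintf(stderr, "empty seed input\n"); return 1; }

        /* A different PRNG seed picks different positions and bits,
         * so it sends the target down different mutated inputs. */
        srand((unsigned)strtoul(argv[3], NULL, 10));
        for (int i = 0; i < 8; i++) {
            size_t pos = (size_t)rand() % len;
            buf[pos] ^= (unsigned char)(1u << (rand() % 8)); /* flip one bit */
        }

        FILE *out = fopen(argv[2], "wb");
        if (!out) { perror(argv[2]); return 1; }
        fwrite(buf, 1, len, out);
        fclose(out);
        return 0;
    }

Because coverage feedback dominates the raw PRNG ordering, runs with different PRNG seeds tend to converge on overlapping crash sets over long campaigns, but shorter runs commonly surface different subsets of flaws; that variance is one reason fuzzing evaluations typically average over multiple trials.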