
Evaluating Fuzz Testing

7 points by r9295 10 months ago

1 comment

PoignardAzur 10 months ago
> *Fuzz testing has enjoyed great success at discovering security critical bugs in real software. Recently, researchers have devoted significant effort to devising new fuzzing techniques, strategies, and algorithms. Such new ideas are primarily evaluated experimentally so an important question is: What experimental setup is needed to produce trustworthy results? We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we considered. We then performed our own extensive experimental evaluation using an existing fuzzer. Our results showed that the general problems we found in existing experimental evaluations can indeed translate to actual wrong or misleading assessments. We conclude with some guidelines that we hope will help improve experimental evaluations of fuzz testing algorithms, making reported results more robust.*

Oh, I've been looking for an overview like this!

There's a ton of fuzzing papers, and they all claim some speedup over AFL using intuitively reasonable optimizations, but if you start stacking them you lose AFL's main draw: its simplicity. And the community maintaining AFL++ seems skeptical of most of these optimizations. So an overview of the ecosystem is very welcome.

EDIT: Oh, it's from 2018. That's depressing. I don't think the ecosystem has improved much since this paper was published.
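The abstract's point about trustworthy experimental setups boils down to running many independent trials and comparing fuzzers with a statistical test rather than a single run. A minimal sketch of that kind of comparison is below; the trial counts and bug numbers are invented for illustration, and the Mann-Whitney U test (one test commonly recommended for this sort of non-parametric comparison) is implemented here with a normal approximation rather than an exact distribution.

```python
import math
import random

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test using the normal approximation.

    Returns (U, p). U counts pairs (x, y) with x from `a` exceeding y
    from `b` (ties count 0.5). Useful for comparing, e.g., bugs found
    per trial by two fuzzers across many independent runs.
    """
    n1, n2 = len(a), len(b)
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0
            for x in a for y in b)
    mu = n1 * n2 / 2.0                                  # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # std dev under H0
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))                # two-sided p-value
    return u, p

# Hypothetical example: bugs found in 30 independent 24h trials each by a
# baseline fuzzer and a "new" fuzzer (all numbers are made up).
random.seed(0)
baseline = [random.gauss(10, 2) for _ in range(30)]
improved = [random.gauss(12, 2) for _ in range(30)]
u, p = mann_whitney_u(improved, baseline)
print(f"U = {u:.1f}, p = {p:.4f}")
```

The point of the sketch is that a single head-to-head run says little: fuzzing is highly stochastic, so a claimed speedup should survive repeated trials and a significance test before it is believed.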