
Pixel-perfect timing attacks with HTML5

195 points, by leetreveil, almost 12 years ago

9 comments

jffry (almost 12 years ago)

This is a fascinating attack. Definitely read the bits on the SVG filter timing attacks. They construct something that allows distinguishing black pixels from white pixels, apply a threshold filter to an iframe, and then read out pixels from the contents of that iframe.

Then they turn this around, set an iframe's src to "view-source:https://example.com/", and read out information from there (in a more efficient manner).
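The pixel-readout idea can be sketched as a toy simulation. Everything here is invented for illustration (the constants, the `render_time`/`read_pixel`/`steal_row` names); the real attack measures frame timings of SVG-filtered iframes in the browser, but the statistics are the same: a black pixel makes the filter take measurably longer, and repeated samples plus a threshold recover the bit.

```python
import random
import statistics

BASE_MS = 5.0         # assumed baseline per-frame filter cost
BLACK_EXTRA_MS = 2.0  # assumed extra cost when the probed pixel is black
NOISE_MS = 1.5        # assumed magnitude of scheduling jitter

def render_time(pixel_is_black: bool, rng: random.Random) -> float:
    """One simulated frame timing: base cost, data-dependent cost, jitter."""
    extra = BLACK_EXTRA_MS if pixel_is_black else 0.0
    return BASE_MS + extra + rng.uniform(0.0, NOISE_MS)

def read_pixel(pixel_is_black: bool, rng: random.Random, repeats: int = 20) -> bool:
    """Classify one pixel by comparing the median of repeated timings to a threshold."""
    samples = [render_time(pixel_is_black, rng) for _ in range(repeats)]
    # Midpoint between the expected "white" and "black" medians.
    threshold = BASE_MS + NOISE_MS / 2 + BLACK_EXTRA_MS / 2
    return statistics.median(samples) > threshold

def steal_row(pixels: list[bool], seed: int = 0) -> list[bool]:
    """Recover a row of pixels purely from (simulated) rendering times."""
    rng = random.Random(seed)
    return [read_pixel(p, rng) for p in pixels]
```

With these (generous) parameters the median over 20 repeats separates the two classes cleanly; the real attack needs many more repeats because the timing difference per pixel is far smaller.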
zubspace (almost 12 years ago)

The paper describes how to prevent the sniffing attack:

"Website owners can protect themselves from the pixel reading attacks described in this paper by disallowing framing of their sites. This can be done by setting the following HTTP header:

X-Frame-Options: Deny

This header is primarily intended to prevent clickjacking attacks, but it is effective at mitigating any attack technique that involves a malicious site loading a victim site in an iframe. Any website that allows users to log in, or handles sensitive data, should have this header set."

I wonder why this option is opt-out and not opt-in. Shouldn't this be the default?
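Wiring up the header the paper recommends takes only a few lines. This is a minimal sketch using Python's standard http.server (the handler and helper names are my own, not from the paper):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DenyFramingHandler(BaseHTTPRequestHandler):
    """Serves a page that refuses to be embedded in any iframe."""

    def do_GET(self):
        body = b"<html><body>sensitive, logged-in content</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The mitigation from the paper: disallow all framing of this site.
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def make_server(port: int = 0) -> HTTPServer:
    """Bind the handler; port 0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), DenyFramingHandler)
```

A compliant browser that is told to load this page inside an iframe will refuse to render it, which blocks the pixel-reading attack along with classic clickjacking.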
randallu (almost 12 years ago)

These same guys had previously used WebGL to suck out text in the same way; unfortunately the demo is no longer at the same URL, but it is what's responsible for the fairly weird implementation of CSS Shaders: http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html

It's amazing that the same thing can be observed with the standard SVG software filters, though. I'd imagine that using X-Frame-Options: Deny as they suggest is a much better solution than killing all JS (because you just know some incompetent ad network will manage to flip the switch and break millions of pages with that ability...).
Someone (almost 12 years ago)

For those, like me, wondering why that 'detect visited' hack doesn't simply embolden visited links, or change their font or font size, and use getComputedStyle or getBoundingClientRect [1] to see whether that changes the bounds of the element: that trick was mitigated three years ago. See http://hacks.mozilla.org/2010/03/privacy-related-changes-coming-to-css-vistited/.

[1] Not explicitly mentioned there, but I think the solution described intends to plug that hole, too.
M4v3R (almost 12 years ago)

These attacks are getting more and more creative. I'm beginning to think there is no such thing as perfect security in a world that constantly demands new features.
mistercow (almost 12 years ago)

It seems to me that a web server ought to be able to send some signal to browsers, on either a single-page or subdomain basis, which disables JS for those pages. If another page includes such a JS-disabled page in an iframe, then at the very least all scripts on the parent page should be immediately terminated, and ideally loading of the iframe should fail if any scripts have executed (obviously an exception should be made for, e.g., Chrome extensions).

This should completely nullify a vast number of potential attacks on sites that are particularly sensitive. There's no reason, for example, that the logged-in portion of a banking site should need to use JS. That seems like a reasonable sacrifice for adding significant security to critical websites.
tripzilch (almost 12 years ago)

I have a soft spot for side-channel attacks; they are often a beautiful example of out-of-the-box thinking. This whitepaper is no exception, in particular the second part about (ab)using the SVG filters.

I was thinking: of course it doesn't help much in mitigating this attack, but they calculate average rendering times over several repeats of the same operation. When profiling performance timings, it's usually much more accurate to take the minimum execution time. The constant timing that you want to measure is part of the lower bound on the total time; any random OS process or timing glitch is going to add to that total time, but it will not somehow make the timespan you are interested in randomly run faster. There might be some exceptions to this, though (in which case I'd go for a truncated median, or a percentile-range average, or something).

I also had some ideas to improve performance on the pixel-stealing, as well as the OCR-style character reading. For the latter, one could use Bayesian probabilities instead of a strict decision tree; that way it'll be more resilient to accidental timing errors, so you don't need to repeat as often to ensure that every pixel is correct: just keep reading out high-entropy pixels and adjust the probabilities until there is sufficient "belief" in a particular outcome.

But as I understand from the concluding paragraphs of this paper, these vulnerabilities are already patched or very much on the way to being patched; otherwise I'd love to have a play with this :) :)
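The minimum-versus-mean point is easy to demonstrate with a toy model (my own sketch, not the paper's code; the 3.0 ms "true cost" and 0.8 ms noise scale are invented). Because scheduling delays and GC pauses can only ever add time, the noise is one-sided: the minimum of many samples converges on the true cost, while the mean stays biased upward by the average delay.

```python
import random

def compare_estimators(true_ms: float = 3.0, n: int = 200, seed: int = 42):
    """Return (mean, minimum) estimates of a timing with one-sided noise."""
    rng = random.Random(seed)
    # OS interference only *adds* delay, so model the noise as an
    # exponential with a mean of 0.8 ms on top of the true cost.
    samples = [true_ms + rng.expovariate(1 / 0.8) for _ in range(n)]
    return sum(samples) / len(samples), min(samples)

mean_est, min_est = compare_estimators()
# The minimum lands much nearer the true 3.0 ms than the mean does.
```

With 200 samples the minimum sits within a few microseconds of the true value, while the mean carries the full average noise as bias, which is exactly the commenter's argument for preferring it when the quantity of interest is a constant lower bound.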
ptolts (almost 12 years ago)

That was the most interesting thing I've read in a while.
Sephr (almost 12 years ago)

To mitigate the new detect-visited vectors, browsers could render everything as unvisited and then asynchronously render a 'visited' overlay (in a separate framebuffer) at a later time. SVG filters would have to be processed twice for the visited-sensitive data, so a vendor may wish to limit SVG filters to processing only the 'unvisited' framebuffer for the sake of performance.