This is a fascinating attack. Definitely read the section on the SVG filter timing attacks. They construct a filter whose rendering time depends on pixel values, which lets them distinguish black pixels from white pixels; they apply a threshold filter to an iframe and then read out pixels from the contents of that iframe.<p>Then they turn this around: they set an iframe's src to "view-source:<a href="https://example.com/" rel="nofollow">https://example.com/</a>" and read out information from there much more efficiently.
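To picture the measurement side, here's a rough sketch (TypeScript) of how a page could time frames with requestAnimationFrame while a filter is applied to a target element. The "#leaky" filter id and the target element are placeholders; the actual filter construction that makes render time depend on pixel values is the clever part of the paper and isn't reproduced here.

  // Time a single frame rendered with the (assumed) expensive filter applied.
  function timeFilteredFrame(target: HTMLElement): Promise<number> {
    return new Promise((resolve) => {
      requestAnimationFrame((t0) => {
        target.style.filter = "url(#leaky)"; // placeholder filter id
        requestAnimationFrame((t1) => {
          target.style.filter = "none";
          resolve(t1 - t0); // frame time correlates with filter cost
        });
      });
    });
  }

  // Average several samples, as the paper does, to smooth out noise.
  async function probe(target: HTMLElement, repeats = 10): Promise<number> {
    let total = 0;
    for (let i = 0; i < repeats; i++) {
      total += await timeFilteredFrame(target);
    }
    return total / repeats;
  }

  // Usage: probe(document.querySelector("iframe")!).then(console.log);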
The paper describes how to prevent the sniffing attack:<p><i>Website owners can protect themselves from the pixel reading attacks described in this paper by disallowing framing of their sites. This can be done by setting the following HTTP header:<p>X-Frame-Options: Deny<p>This header is primarily intended to prevent clickjacking attacks, but it is effective at mitigating any attack technique that involves a malicious site loading a victim site in an iframe. Any website that allows users to log in, or handles sensitive data should have this header set.</i><p>I wonder why being frameable is opt-out rather than opt-in. Shouldn't denying framing be the default, with sites that want to be framed opting in?
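For anyone wanting to try it, here's a minimal sketch of sending that header with Node's built-in http module (the port and response body are just placeholders; any server or reverse proxy can send the same header):

  import * as http from "http";

  http
    .createServer((req, res) => {
      // Refuse to be rendered inside a frame/iframe on any other site.
      res.setHeader("X-Frame-Options", "DENY");
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end("sensitive, non-frameable content");
    })
    .listen(8080);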
These same researchers had previously used WebGL to extract text in the same way; unfortunately the demo is no longer at the same URL, but that attack is what's responsible for the fairly odd implementation of CSS Shaders: <a href="http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html" rel="nofollow">http://www.schemehostport.com/2011/12/timing-attacks-on-css-...</a><p>It's amazing that the same thing can be observed with the standard SVG software filters, though. I'd imagine that setting X-Frame-Options: Deny as they suggest is a much better solution than killing all JS (because you just know some incompetent ad network would manage to flip that switch and break millions of pages with that ability...).
For those, like me, wondering why that 'detect visited' hack doesn't simply bold visited links (or change their font or font size) and use getComputedStyle or getBoundingClientRect [1] to see whether that changes the bounds of the element: that trick was mitigated three years ago. See <a href="http://hacks.mozilla.org/2010/03/privacy-related-changes-coming-to-css-vistited/" rel="nofollow">http://hacks.mozilla.org/2010/03/privacy-related-changes-com...</a>.<p>[1] not explicitly mentioned there, but I think the solution described is intended to plug that hole, too.
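For reference, the sort of check that change defeats looks roughly like this (TypeScript sketch; it assumes a stylesheet rule such as a:visited { font-weight: bold }). Since the 2010 change, getComputedStyle reports the unvisited style regardless of history, so this always returns false:

  // The pre-2010 style of history sniffing the comment alludes to.
  function looksVisited(url: string): boolean {
    const link = document.createElement("a");
    link.href = url;
    link.textContent = url;
    document.body.appendChild(link);
    // Browsers now lie here and report the :link (unvisited) style.
    const weight = getComputedStyle(link).fontWeight;
    link.remove();
    return weight === "bold" || weight === "700";
  }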
These attacks are getting more and more creative. I'm beginning to think there is no such thing as perfect security in a world that constantly demands new features.
It seems to me that a web server ought to be able to send some signal to browsers, on a per-page or per-subdomain basis, that disables JS for those pages. If another page includes such a JS-disabled page in an iframe, then at the very least all scripts on the parent page should be terminated immediately, and ideally loading of the iframe should fail if any scripts have already executed (with an obvious exception for, e.g., Chrome extensions).<p>This would nullify a vast number of potential attacks on sites that are particularly sensitive. There's no reason, for example, that the logged-in portion of a banking site should need to use JS. That seems like a reasonable sacrifice for adding significant security to critical websites.
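For what it's worth, half of this exists today: a page can disable its own scripts with CSP's script-src 'none'. The stronger "kill scripts in the embedding page" part would need a new mechanism; the second header in the sketch below is purely hypothetical, just to illustrate the idea:

  import * as http from "http";

  http
    .createServer((req, res) => {
      // Real, supported today: no scripts may execute on this page.
      res.setHeader("Content-Security-Policy", "script-src 'none'");
      // Hypothetical, illustration only: no browser implements this.
      res.setHeader("X-Terminate-Embedder-Scripts", "1");
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end("logged-in banking page, no JS required");
    })
    .listen(8081);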
I have a soft spot for side-channel attacks; they are often a beautiful example of out-of-the-box thinking. This whitepaper is no exception, in particular the second part about (ab)using the SVG filters.<p>One thing I noticed (not that it helps much in mitigating the attack): they calculate average rendering times over several repeats of the same operation. When profiling performance timings, it's usually much more accurate to take the <i>minimum</i> execution time. The constant cost you want to measure is part of the lower bound on the total time; any random OS process or timing glitch will <i>add</i> to that total, but it will not somehow make the timespan you are interested in run faster. There may be exceptions, though (in which case I'd go for a truncated median or a percentile-range average or something).<p>I also had some ideas to improve performance of the pixel-stealing and the OCR-style character reading. For the latter, one could use Bayesian probabilities instead of a strict decision tree; that way it's more resilient to occasional timing errors, so you don't need to repeat as often to ensure that <i>every</i> pixel is correct. Just keep reading out high-entropy pixels and adjust the probabilities until there is sufficient "belief" in a particular outcome.<p>But as I understand from the concluding paragraphs of the paper, these vulnerabilities are already patched or well on the way to being patched, otherwise I'd love to have a play with this :) :)
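To illustrate the minimum-vs-mean point, a quick sketch (the op callback stands in for whatever rendering step is being timed; nothing here is from the paper itself):

  function timeIt(op: () => void, repeats = 50): { mean: number; min: number } {
    const samples: number[] = [];
    for (let i = 0; i < repeats; i++) {
      const start = performance.now();
      op();
      samples.push(performance.now() - start);
    }
    const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
    // Scheduler hiccups and GC pauses only ever inflate samples, so the
    // minimum is a tighter estimate of the operation's intrinsic cost.
    const min = Math.min(...samples);
    return { mean, min };
  }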
To mitigate the new detect-visited vectors, browsers could render everything as unvisited and then asynchronously composite a 'visited' overlay (rendered in a separate framebuffer) at a later time. SVG filters would then have to be processed twice for visited-sensitive content, so a vendor might simply limit SVG filters to the 'unvisited' framebuffer for the sake of performance.