We've addressed the issues disclosed to us, and if you try any of the 5 POCs in the paper you will find they no longer work in the latest Safari. Details of the fixes here: <a href="https://webkit.org/blog/9661/preventing-tracking-prevention-tracking/" rel="nofollow">https://webkit.org/blog/9661/preventing-tracking-prevention-...</a><p>There may be room for more improvement here but be aware what the POCs illustrate is not an active vulnerability any more.<p>In addition, we don't believe this channel was ever exploited in the wild.<p>(If anyone is aware of other issues in this area, I encourage you to practice responsible disclosure and report to Apple or to the WebKit project.)
Reposting from the other [1] thread:<p>Basically Safari keeps track of which domains are being requested in a 3rd party context (i.e. I load example.com in my browser and the page loads the facebook sdk - Safari increments a counter for facebook by 1). Once a given domain reaches 3 hits, Safari will strip cookies and some other data in 3rd party requests to that domain.<p>The problem is that advertisers can use this to fingerprint users: register arbitrary domains, make 3rd party requests to them, and detect whether or not that request is having data stripped. Each domain is an additional "bit" of data.<p>This is similar to "HSTS Cookies" [2] and also to issues with Chrome's XSS auditor, which is why it was removed [3].<p>[1]: <a href="https://news.ycombinator.com/item?id=22120136" rel="nofollow">https://news.ycombinator.com/item?id=22120136</a><p>[2]: <a href="https://nakedsecurity.sophos.com/2015/02/02/anatomy-of-a-bro.." rel="nofollow">https://nakedsecurity.sophos.com/2015/02/02/anatomy-of-a-bro...</a>.<p>[3]: <a href="https://twitter.com/justinschuh/status/1220021377064849410" rel="nofollow">https://twitter.com/justinschuh/status/1220021377064849410</a>
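To make the "each domain is a bit" idea concrete, here is a toy simulation (not code from the paper, and deliberately abstracted away from real networking): an attacker who controls N domains deliberately pushes a chosen subset of them over ITP's strike threshold, then later reads the identifier back by observing which domains get their cookies stripped. Domain names, the threshold value, and the helper names are illustrative assumptions.

```python
# Toy model of the ITP fingerprinting channel described above.
# Each attacker-controlled domain acts as one bit of a user identifier:
# a domain classified by ITP (cookies stripped) reads as 1, otherwise 0.

NUM_BITS = 16          # 16 attacker domains -> a 16-bit identifier
STRIKE_THRESHOLD = 3   # hits in a third-party context before ITP strips cookies

def domains_to_promote(user_id: int) -> list[int]:
    """Indices of the domains whose bit is set in user_id."""
    return [i for i in range(NUM_BITS) if (user_id >> i) & 1]

def write_fingerprint(user_id: int, hit_counts: dict[int, int]) -> None:
    """Simulate making STRIKE_THRESHOLD third-party requests to each '1'
    domain, pushing it over the threshold so future requests are cookieless."""
    for i in domains_to_promote(user_id):
        hit_counts[i] = hit_counts.get(i, 0) + STRIKE_THRESHOLD

def read_fingerprint(hit_counts: dict[int, int]) -> int:
    """Recover the identifier: a cookie-stripped domain reads as bit 1."""
    user_id = 0
    for i in range(NUM_BITS):
        if hit_counts.get(i, 0) >= STRIKE_THRESHOLD:  # ITP would strip here
            user_id |= 1 << i
    return user_id

counts: dict[int, int] = {}
write_fingerprint(0xBEEF, counts)
assert read_fingerprint(counts) == 0xBEEF
```

In a real attack the "read" step would be a credentialed cross-site request whose response reveals (server-side) whether the cookie arrived; the point is simply that ITP's per-domain state is both writable and readable from web content.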
There is a fundamental difficulty when trying to implement privacy: A limit on the disclosure of information is <i>itself</i> a disclosure of information.<p>A good privacy design needs to confront this issue directly. Sometimes there's nothing to be done. I think in some cases it's mathematically unsolvable (cf. Cynthia Dwork's paper on Differential Privacy). But an explicit consideration can at least surface some trade-offs. The more fine-grained and selective your redactions, the more information they reveal.
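One formal way to confront this trade-off is differential privacy, mentioned above: instead of selectively redacting (which leaks through the redaction pattern), you add calibrated noise so that any single record's presence is statistically masked. A minimal sketch of the Laplace mechanism, with illustrative parameter choices:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via inverse-CDF from a uniform draw."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1, so noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(0)
# Each individual release is perturbed, so observing it reveals little
# about whether any one user is in the count.
print(noisy_count(100, epsilon=0.5))
```

The key contrast with ITP-style per-domain rules: here the privacy guarantee is probabilistic and explicit, rather than a deterministic threshold an observer can probe.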
Last time Google researchers made similar discoveries, 2012, it was used to ... track users :-)<p><a href="https://www.ghacks.net/2012/02/21/microsoft-google-is-also-bypassing-ie-privacy-settings/" rel="nofollow">https://www.ghacks.net/2012/02/21/microsoft-google-is-also-b...</a><p>"We used known Safari functionality to provide features that signed-in Google users had enabled. It’s important to stress that these advertising cookies do not collect personal information."<p>and bypassing IE third party cookie protection:
"impractical to comply with Microsoft’s request while providing modern web functionality." Google says complying with tracking protection is impractical!
Haven't read TFA yet, but at first glance this sounds similar to the approach used by the "Privacy Badger" browser extension - if it sees the same tracker on multiple sites, it "learns" and begins blocking it. Would it also be susceptible to similar information leaks with this threat model?
I’ve been following privacy issues and technology for a while, but haven’t come across a foundational discussion of (a) the merits of and (b) technical implementations of different approaches to avoid fingerprinting:<p>“hiding” vs “blending in”(making me look identical to countless others - maybe even randomizing who I look like in a smart way).<p>I wonder if any subject area experts reading this thread would be willing to share a summary of their knowledge and thoughts here.
Conversely, Chrome is heading in the right direction:<p>>Chrome plans to more aggressively restrict fingerprinting across the web. One way in which we’ll be doing this is reducing the ways in which browsers can be passively fingerprinted, so that we can detect and intervene against active fingerprinting efforts as they happen. [0]<p>This will include things like restricting the volume of Browser API checks allowed, etc, to reduce the number of bits that can be used in a fingerprint.<p>[0] <a href="https://blog.chromium.org/2019/05/improving-privacy-and-security-on-web.html" rel="nofollow">https://blog.chromium.org/2019/05/improving-privacy-and-secu...</a>
Wow. I understand ITP's high-level design, but didn't know its implementation was so naive. Maintaining a global database with a few rules that can be easily reverse engineered, and giving any document access to it? How did this get through the internal review process? Does Apple have any privacy/security review process for its major products?<p>I understand that privacy engineering is <i>very</i> hard and can sometimes get non-obvious with implicit statistical dependency chains, but this kind of direct problem could (or should?) have been caught at an early stage of design. Anyway, ITP is all about privacy and deserves attention from dedicated privacy engineers.