For those who feel a bit out of the loop, this excerpt pretty much sums up why the EFF feels that this is, if not worse than third-party cookies, not significantly better:

> The proposal rests on the assumption that people in “sensitive categories” will visit specific “sensitive” websites, and that people who aren’t in those groups will not visit said sites. But behavior correlates with demographics in unintuitive ways. It's highly likely that certain demographics are going to visit a different subset of the web than other demographics are, and that such behavior will not be captured by Google’s “sensitive sites” framing. For example, people with depression may exhibit similar browsing behaviors, but not necessarily via something as explicit and direct as, for example, visiting “depression.org.” Meanwhile, tracking companies are well-equipped to gather traffic from millions of users, link it to data about demographics or behavior, and decode which cohorts are linked to which sensitive traits. Google’s website-based system, as proposed, has no way of stopping that.
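To make the EFF's point concrete, here is a minimal sketch of the kind of tracker-side analysis they describe, assuming a tracker with pixels on many sites that can read each visitor's cohort ID. All names and the threshold are hypothetical:

    // Hypothetical tracker-side analysis: flag cohorts that over-index
    // on a known "sensitive" site the tracker has a pixel on.
    interface Hit {
      cohortId: string; // cohort ID reported by the visitor's browser
      site: string;     // site where the tracking pixel fired
    }

    function flagSensitiveCohorts(
      hits: Hit[],
      sensitiveSite: string,
      threshold = 0.05, // invented cutoff for "over-indexing"
    ): Set<string> {
      const totalByCohort = new Map<string, number>();
      const sensitiveByCohort = new Map<string, number>();

      for (const { cohortId, site } of hits) {
        totalByCohort.set(cohortId, (totalByCohort.get(cohortId) ?? 0) + 1);
        if (site === sensitiveSite) {
          sensitiveByCohort.set(cohortId, (sensitiveByCohort.get(cohortId) ?? 0) + 1);
        }
      }

      // Any cohort whose share of traffic to the sensitive site exceeds
      // the threshold is now effectively labeled with a sensitive trait.
      const flagged = new Set<string>();
      for (const [cohortId, total] of totalByCohort) {
        if ((sensitiveByCohort.get(cohortId) ?? 0) / total > threshold) {
          flagged.add(cohortId);
        }
      }
      return flagged;
    }

And the "sensitive site" here could just as well be any site whose audience merely correlates with a sensitive trait, which is the EFF's point: the cohort ID leaks whatever the tracker can correlate it with.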
The way I interpret this is that, based on your browsing history in Chrome (or any browser that implements this kind of functionality), you are placed into a number of categories (or, if one reverses the metaphor, a number of descriptive tags are attached to you). Google is aiming to ensure that certain categories/tags that might be considered sensitive (mental state, physical illnesses, etc.) will be blocked. (A toy sketch of what that local categorization might look like follows at the end of this comment.)

(To be clear, this is my interpretation of what they are stating, not an assertion of fact.)

The EFF is arguing that this isn't really that straightforward, since sensitive details can still be inferred from non-sensitive ones, as the sketch above tries to make concrete.

What I'm curious about is: who is doing all the ID generation, categorization, and data centralization? Or is Chrome just going to calculate everything itself, then send the data to sites that ask for it?
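Going back to the categorization step: here is a toy sketch of how a browser could derive a category locally, loosely modeled on the SimHash-style clustering floated in the FLoC explainer. The hash function and bit width are invented for illustration, not Google's actual choices:

    // Toy, browser-local cohort computation: reduce a set of visited
    // domains to a small cohort ID so that similar histories map to
    // similar IDs. SimHash-like; details invented for illustration.
    function hashDomain(domain: string): number {
      let h = 0x811c9dc5; // 32-bit FNV-1a, just to have something concrete
      for (let i = 0; i < domain.length; i++) {
        h ^= domain.charCodeAt(i);
        h = Math.imul(h, 0x01000193);
      }
      return h >>> 0;
    }

    function cohortId(visitedDomains: string[], bits = 16): number {
      // For each output bit, each domain votes +1/-1 based on its hash;
      // the sign of the tally decides the bit. Nothing leaves the device.
      const votes = new Array<number>(bits).fill(0);
      for (const domain of visitedDomains) {
        const h = hashDomain(domain);
        for (let b = 0; b < bits; b++) {
          votes[b] += ((h >>> b) & 1) === 1 ? 1 : -1;
        }
      }
      return votes.reduce((id, v, b) => (v > 0 ? id | (1 << b) : id), 0);
    }

    // usage: cohortId(["news.example", "recipes.example", "forum.example"])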
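And on that last question, my reading of the explainer is that the answer is roughly "yes": the browser computes the cohort itself, and a site just asks for the result through the proposed interestCohort() API, something like this (the cast is only because TypeScript's DOM typings don't include the proposal):

    // Site-side: ask the browser for its locally computed cohort.
    // interestCohort() is the API proposed in the FLoC explainer.
    async function readCohort(): Promise<void> {
      const doc = document as unknown as {
        interestCohort(): Promise<{ id: string; version: string }>;
      };
      const { id, version } = await doc.interestCohort();
      console.log(`visitor is in cohort ${id} (cohort version ${version})`);
    }

If that's accurate, the ID generation happens on-device; the open part of my question is then who maintains the mapping from cohort IDs to meanings, and the sensitive-category blocklist.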