Very interesting work, especially the "celebrity cab tracking" in the second article.<p>However interesting differential privacy and the injection of random noise into data sets to decrease privacy risks may be, it seems to me that the most likely result is an "arms race" between the "anonymizers" and the "deanonymizers": you inject random noise to protect those in the dataset, I develop more sophisticated algorithms to filter out random noise...<p>...you develop "intelligent random noise" so that the injected data looks more like real data, I make my algorithms that much more sophisticated, looking for ersatz data that isn't random enough (think of the six-sigma anecdote about sets of bolts or screws with "too sharp" cut-offs in their variation - indicating they'd been manually filtered).<p>I'm not suggesting the work is useless, but rather that one must do it with eyes wide open, knowing that one faces multiple talented, motivated "adversaries" who will apply their skills to deanonymizing what you have anonymized - so be careful about the breadth and depth of claims concerning the level of protection afforded by these techniques.
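For the curious, the standard noise-injection mechanism in the differential privacy literature is the Laplace mechanism: add noise whose scale is the query's sensitivity divided by the privacy parameter epsilon. A minimal sketch (function names are my own, not from any particular library):

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponentials with mean `scale`
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count, sensitivity=1.0, epsilon=0.1):
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    # Smaller epsilon means more noise and a stronger guarantee --
    # the protection is a tunable statistical bound, not absolute anonymity.
    return true_count + laplace_noise(sensitivity / epsilon)
```

The relevant point for the arms-race worry: the guarantee here is probabilistic and degrades as an adversary accumulates more queries or auxiliary data, which is exactly why claims about the protection level need careful qualification.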