FYI: if you remove the mailbox from your residence, the USPS will silently return every attempted delivery to the sender.<p>You can then get a PO Box, set up your <i>important</i> things (property taxes/DMV/utilities/banks) with it as the mailing address, and carry on giving your residence as your physical address to whoever appropriately asks, without even lying.<p>Most places just assume your physical address receives mail and send unsolicited spam to it. Critical services must support a mailing address distinct from a residence address, since it's common for those living at the end of a dirt road without mail service; a perfectly legal way to live.<p>I currently do this, my PO Box receives practically zero mail, and I must say it's a glorious signal:noise ratio.<p>There are some frustrations, though: some places refuse to ship to a PO Box, and some shippers that claim to use FedEx or UPS will go on to use USPS, so your purchase never arrives. Non-USPS deliveries will still arrive at the physical address without a mailbox, but USPS deliveries will not; those must go to the PO Box. YMMV
One idea is to use active disinformation: receive mail in your own name at a UPS Store or PO Box, and order things to your home -- magazines especially, as they aggressively resell subscription lists -- in the name of an alias or the former owner (once mail forwarding has expired). These are just a couple of ideas I picked up from listening to the "Privacy, Security, and OSINT" podcast. An episode dedicated to Advanced Disinformation can be found at:
<a href="https://soundcloud.com/user-98066669/105-advanced-disinformation-telephone-archives" rel="nofollow">https://soundcloud.com/user-98066669/105-advanced-disinforma...</a>
I think that with any measure a real individual can take, the adversaries can easily counteract it.<p>I'm a believer in building many separate, complex systems to feed manufactured data into the adversaries' systems. Let's make it an arms race, and a competition.
The techniques outlined in the article are naive and generally will not work. Robust methods exist for both detecting and filtering sophisticated data poisoning, and the kinds of organizations we are talking about here will already have that capability.<p>Defense against data poisoning isn't just about ad tech. State actor threats routinely engage in sophisticated data poisoning operations that require robust mitigations for system integrity purposes. A "simple" strategy is not remotely at the level of sophistication required to have a chance of bypassing these defenses, which need to withstand state actors.
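To make that concrete, here's a toy sketch of why naive poisoning is easy to flag (the click-through rates and the 5x-MAD threshold are made up for illustration; real ad-fraud pipelines are far more elaborate). A user who clicks on every ad served, as AdNauseam does, is a wild statistical outlier under even a crude robust-statistics filter:

```python
import statistics

# Hypothetical per-user ad click-through rates; a real pipeline would
# compute these from server-side logs. Normal users click ~1-3% of ads.
rates = [0.01, 0.02, 0.015, 0.03, 0.01, 0.025, 0.02, 0.018, 1.0]
#                                          AdNauseam-style user ^^^

# Median and median absolute deviation (MAD) are robust to the outlier
# itself, unlike mean/stdev, which the poisoner would drag upward.
med = statistics.median(rates)
mad = statistics.median(abs(r - med) for r in rates)

# Flag anyone far above the typical rate (5x MAD is an arbitrary cutoff).
flagged = [r for r in rates if r > med + 5 * mad]
print(flagged)  # the 100% clicker is the only user flagged
```

Once a user is flagged this way, their clicks can simply be dropped from the training data, which is why clicking <i>everything</i> is such a weak poisoning strategy.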
> Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.<p>Wait a minute: if every ad is clicked on, won't that give an enormous amount of money to the companies that created the problem in the first place?
FWIW, I changed my gender, first and last name, e-mail address, and phone number, and moved states, and they still connected me. It doesn’t help when every company you have an account with sells your information to the ad networks and DMPs. Ending third-party cookies should help wind this down, however.
I'm interested in learning if there are alternative credit cards available that do not track/resell your purchase history. [1]<p>To me, that seems like a major aspect of surveillance economy which can't be easily disrupted by something like Ad Nauseam or pi-hole.<p>[1] <a href="https://www.fastcompany.com/90490923/credit-card-companies-are-tracking-shoppers-like-never-before-inside-the-next-phase-of-surveillance-capitalism" rel="nofollow">https://www.fastcompany.com/90490923/credit-card-companies-a...</a>
Hey all, I'm one of the authors of the conference paper discussed here and was quoted in this. Glad to see it's interesting to HN!<p>Wanted to briefly highlight a couple of points that I think will be interesting to the HN audience.<p>One of the major goals of the paper is to describe a framework of three "data levers" (ways a group of people can hurt or help a data-dependent technology). Data poisoning (well known to ML people for a long time) is one of the three levers. The other two are "data strikes" (withhold future data and/or delete past data via deletion requests) and "conscious data contribution" (à la conscious consumerism: give data to a firm you support and want to see compete with incumbents).<p>A major point in the paper is that there are some big differences in terms of barrier to entry, legal considerations, ethical considerations, and ability for a data lever to be impactful. Basically, for any given company + technology, there's probably a particular data lever that's a "best fit". It might be hard to organize a "data strike" large enough to meaningfully hurt a huge company's search engine, but conscious data contribution could help improve a competitor (esp. if that competitor focuses on search verticals). On the other hand, data strikes could be really great vs. facial recognition, because there's precedent of forcing companies to delete actual <i>model weights</i> (<a href="https://www.theverge.com/2021/1/11/22225171/ftc-facial-recognition-ever-settled-paravision-privacy-photos" rel="nofollow">https://www.theverge.com/2021/1/11/22225171/ftc-facial-recog...</a>).<p>Another point is that there are some nice connections between levers.
On the topic of data poisoning defenses: if you've been feeding in poisoned data and get caught (quite likely for naive attacks, as noted below), the company deletes your poison and you've just been "reduced to a data strike".<p>A final point: the paper discusses implications for folks who work in ML, design, HCI, and policy. There are great opportunities to build tools that support data leverage, and for ML researchers to "bake in" data leverage (e.g. compute a performance vs. dataset size learning curve to characterize how "vulnerable" a system is to data strikes). Also, there's huge potential for win-wins with privacy regulation: data deletion and data portability both enhance the public's leverage.<p>I'll end this long comment now; curious to see what others think (and appreciate all the comments already here!)
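The learning-curve idea above is easy to prototype. Here's a minimal sketch, with everything made up for illustration: synthetic two-class data and a toy nearest-centroid classifier standing in for whatever data-dependent model a firm actually runs. The point is the shape of the curve: wherever accuracy has plateaued against training-set size, a data strike of that scale barely hurts the system.

```python
import random
import statistics

random.seed(0)

def make_point(label):
    # two synthetic 1-D classes centered at -1 and +1 with unit noise
    center = -1.0 if label == 0 else 1.0
    return (random.gauss(center, 1.0), label)

def accuracy(train, test):
    # nearest-centroid classifier: a stand-in for any data-dependent model
    c0 = statistics.mean(x for x, y in train if y == 0)
    c1 = statistics.mean(x for x, y in train if y == 1)
    correct = 0
    for x, y in test:
        pred = 0 if abs(x - c0) < abs(x - c1) else 1
        correct += pred == y
    return correct / len(test)

data = [make_point(i % 2) for i in range(2000)]  # alternating labels
test = [make_point(i % 2) for i in range(1000)]

# learning curve: model performance vs. training-set size
for n in (10, 50, 200, 1000, 2000):
    print(n, round(accuracy(data[:n], test), 3))
```

If the printed accuracies flatten out well before n = 2000, strikers would need to withdraw a large fraction of the data before the model degrades, which is exactly the "vulnerability to data strikes" measurement suggested above.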