I can't separate what actually happened from the sensationalizing in the article.<p>It says the websites had Google cookies and others. OK, anybody using Google Analytics has Google cookies. Anyone using FB plugins has FB cookies. Yes, websites certainly have a lot of cookies these days, and the information inferred from them can be used in a generic fashion to target and rank ads. That's worthy of scrutiny -- but this isn't what the article is about. What is the evidence of "mental health" being treated in some scary specific way?<p>The only substantial thing in the article supporting the headline is one charge that a site sent quiz answers to Player.qualifo.com -- which appears to be an unregistered domain.<p>OK, so what information was sold? When did money change hands? Is there any hard evidence that mental health information was "sold to advertisers", as claimed in the headline? Or is this just bullshit they made up to get clicks?<p>For that matter, even if you wanted to do some evil psychographic ad thing, why would anyone sell that info to advertisers? What are they gonna do with it? Like, why hand them the literal data, when you could instead let them bid on ads targeted/ranked using your painstakingly collected psychographic data (a la Cambridge Analytica)? Why sell the cow when you can sell the milk?<p>The fearmongering and disinformation around adtech and privacy numbs us to legitimately scary things being done that should be covered more.<p>Makes me think of this Blake Ross rant: <a href="https://medium.com/@blakeross/don-t-outsource-your-thinking-ad825a9b4653" rel="nofollow">https://medium.com/@blakeross/don-t-outsource-your-thinking-...</a>
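To make the cow-vs.-milk point concrete, here's a minimal sketch (in Python) of how a platform can monetize psychographic data without ever handing it over. Every name, segment, and price here is invented for illustration; real ad auctions are far more involved.<p><pre><code>
# Hypothetical "sell the milk, not the cow" model: the platform keeps
# its psychographic data private; advertisers only name a segment and
# a bid, and the platform matches internally.
PSYCHOGRAPHIC_PROFILES = {  # invented data, never exposed to bidders
    "user123": {"segments": {"anxious", "impulse_buyer"}},
}

def run_auction(user_id, bids):
    """Pick the highest bid whose target segment matches the user.

    bids maps advertiser -> (target_segment, bid_price). Advertisers
    never see the profile itself; they only learn whether they won.
    """
    user_segments = PSYCHOGRAPHIC_PROFILES[user_id]["segments"]
    eligible = [(price, adv) for adv, (seg, price) in bids.items()
                if seg in user_segments]
    return max(eligible)[1] if eligible else None

print(run_auction("user123", {
    "acme_pharma": ("anxious", 2.50),
    "fast_cars": ("car_enthusiast", 1.00),
}))  # -> acme_pharma
</code></pre>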
I have a rather specific condition, and I’ve seen YouTube advertisements for the drugs that treat it, despite never searching for its details except through an anonymizer. I’ve always wondered what the cost per click is for an advertisement for a $100,000/yr drug.
A member of the team behind this research posted an interesting walk-through of the traffic from one of the worst offenders: <a href="https://twitter.com/Bendineliot/status/1169259912184115206" rel="nofollow">https://twitter.com/Bendineliot/status/1169259912184115206</a>
This kind of thing makes me want to start a small independent ISP that automatically blocks trackers, a la Pi-Hole, unless the customer specifically opts in to them. Though I’m not sure whether there are legal hurdles in the US for doing this.
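The core of the blocking idea is just DNS filtering. A minimal sketch in Python, assuming a hypothetical blocklist (a real deployment would run a full resolver and sync published blocklists, the way Pi-Hole does):<p><pre><code>
import socket

# Hypothetical blocklist of tracker domains (illustration only).
BLOCKLIST = {"doubleclick.net", "google-analytics.com", "facebook.net"}

def is_blocked(hostname):
    """True if the hostname or any parent domain is blocklisted."""
    parts = hostname.lower().rstrip(".").split(".")
    # Check a.b.c, then b.c, then c, so subdomains are caught too.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

def resolve(hostname):
    """Resolve a hostname unless it is blocklisted, Pi-Hole style."""
    if is_blocked(hostname):
        return None  # a real resolver would answer NXDOMAIN or 0.0.0.0
    return socket.gethostbyname(hostname)

print(resolve("stats.google-analytics.com"))  # None -> blocked
</code></pre>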
Companies have practices for opioid abuse surveillance and prevention that highlight how available this data is. For example:<p><a href="https://www2.deloitte.com/us/en/pages/public-sector/solutions/solving-the-countrys-opioid-crisis.html" rel="nofollow">https://www2.deloitte.com/us/en/pages/public-sector/solution...</a><p>They use insurer datasets to model at-risk populations, especially high-cost populations, to identify intervention opportunities. I saw one model that could identify all pregnant women in a state with 98% accuracy and score them for risk of opioid dependency.<p>You can layer that type of data with online advertising from vendors like Google and target behavioral factors that, combined with medical risk factors, present opportunities. For example, a blue-collar worker whose back pain is treated with opioids has a baseline risk of abuse. If her address changes, or behaviors like online gambling appear, the modeled risk increases.<p>Similar tech has been developed to combat extremist behavior.<p><a href="http://chicagopolicyreview.org/2019/04/18/can-online-ads-help-prevent-violent-extremism/" rel="nofollow">http://chicagopolicyreview.org/2019/04/18/can-online-ads-hel...</a><p>In short, your health data is not meaningfully private.
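To illustrate what that kind of layering might look like in the simplest possible form -- the features and weights below are invented for illustration, not taken from any real model:<p><pre><code>
# Hypothetical sketch: layer behavioral signals on top of a baseline
# medical risk score. Real models are fit on insurer datasets; these
# numbers are made up.
BASELINE_RISK = {"opioid_prescription": 0.15}

BEHAVIORAL_WEIGHTS = {
    "recent_address_change": 0.10,
    "online_gambling": 0.20,
}

def abuse_risk(medical_flags, behavioral_flags):
    """Combine baseline medical risk with behavioral factors, capped at 1.0."""
    score = sum(BASELINE_RISK.get(f, 0.0) for f in medical_flags)
    score += sum(BEHAVIORAL_WEIGHTS.get(f, 0.0) for f in behavioral_flags)
    return min(score, 1.0)

# A worker on opioids for back pain whose address changed and who
# gambles online:
print(abuse_risk({"opioid_prescription"},
                 {"recent_address_change", "online_gambling"}))
# 0.15 + 0.10 + 0.20 -> ~0.45
</code></pre>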
Unfortunately, if you have the clout and money and a facile excuse, you can also get data on patients straight from the NHS itself.<p>'Revealed: Google AI has access to huge haul of NHS patient data' - <a href="https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data/" rel="nofollow">https://www.newscientist.com/article/2086454-revealed-google...</a><p>'Data deadlines loom large for the NHS' - <a href="https://www.bmj.com/content/360/bmj.k1215" rel="nofollow">https://www.bmj.com/content/360/bmj.k1215</a>
this is exactly why i view surveillance and advertising as attacks on my personal autonomy. advertisers will use anything and everything against me, whether or not i ever consented to anyone knowing or using any of my information.<p>if i'm surveilled, anything i do can be monetized, and that monetization directly and seriously harms me and exposes me to risk that i did not opt into. if an advertiser knows that i am mentally ill on the basis of my search queries or other data which they illegitimately procure without my consent and against my active objections, they can target me for exploitation with an arsenal of dirty psychological tricks designed to get me to buy their products. if i am like most people, in the long run, they will win, because i will buy at least one product which they have forced onto me.<p>in other words, if i am forced against my will to view a targeted advertisement, it is an inexcusable and unprovoked attack on my right to refrain from economic activity that i do not wish to undertake. it is an attempt at coercion using weaponized persuasion. it is not a good-faith attempt to improve my life or to help me.<p>this remains true whether that advertisement targets an area where people are exceptionally vulnerable, such as mental health, or something more mundane, like my love of fast cars and nice wine. however, in the case of mental health, it may be that the act of targeting someone with a relevant advertisement genuinely makes their condition worse. so, as we all knew all along, these advertisements are actively, maliciously, and viciously harming people for the sake of a few clicks.<p>the bottom line here is that advertisers and advertiser-enablers are long overdue for their comeuppance. i'd support a ban on targeted advertisements, but that won't happen legislatively. GDPR and similar laws are a start, but they don't go nearly far enough to punish transgressors. i'd be more satisfied with criminal liability for advertisements exploiting protected classes of PII, but we'll see how things evolve.
Oh awesome. So does that mean we can finally do away with the gigantic money sink that is HIPAA? Because if we're gonna have our health information leaked, then there's no reason to keep up this charade.
Always use a VPN and incognito mode when accessing any sites where your interest has value to others who could harm you.<p>It's not surprising at all that web sites covering medical issues are also tracking interest in those issues, correlating it with identities, and selling it to third parties, perhaps including life and health insurance companies, potential employers, etc.
I worked for a major DSP / advertising platform, and I was asked to set up a campaign advertising a particular drug to bipolar people in a manner that would specifically target that population.<p>I declined to do so and was later fired. They were within their rights to fire me, and I was within my rights to decline. That is all.
We need to start thinking ahead about who is going to run the federal agencies and "tobacco truth"-type organizations funded by the billions in damages from the inevitable settlements, so as to avoid regulatory capture.
As much as I value privacy and think America needs a new constitutional amendment enshrining it in our rights, this title is clickbait and the entire article is designed to whip you into a panic. Here's the crux:<p>"Privacy International (PI) investigated more than 100 mental health websites in France, Germany and the UK.<p>It found many shared user data with third parties, including advertisers and large technology companies."<p>Yes. Mental health websites use third-party ads just like everyone else. Case closed.<p>They're not "selling mental-health information" as if they're violating HIPAA or something. They're just ordinary websites with ads and other tracking cookies.<p>This is the same kind of silly half-truth that was used to put cookie warnings on nearly every site on the internet. Don't fall for it.