This Thursday I'm invited to a privacy roundtable with Facebook's Legal and Privacy Policy teams in Amsterdam. The roundtable will be with other entrepreneurs and experts in the privacy field. I'm invited because I'm the founder of Simple Analytics - a privacy-friendly analytics SaaS business [1] - and critical of Facebook on Twitter [2].<p>Some people advised me not to go because it would only harm my name and brand, but I think I should. The Facebook teams are going to give a presentation of some new plans they want feedback on. To generate internal pushback they need critical people from outside Facebook, which I'm happy to contribute to.<p>To make it more interesting for the outside world I'm going to ask Facebook a few general questions (privacy-wise). And that's where I need some help. What questions do you want Facebook to answer?<p>Facebook agreed I could use the answers outside of the meeting (with the exception of contributions from non-Facebook attendees).<p>[1] https://simpleanalytics.com<p>[2] https://twitter.com/adriaanvrossum
Well, the big one would be nice:
Facebook makes its money from harvesting and selling privacy-sensitive data, or at least that is the perception shared by nation states, the EU and the wider public. For any claim Facebook makes about respecting privacy to have at least face validity, it needs to show how it is going to make money without violating its users' privacy. So: how is Facebook going to make money if it has to respect users' privacy?<p>Somewhat more constructive:
Facebook seems to have an unhealthy appetite for collecting _all_ user data, including privacy-sensitive information. But let's be fair: it is definitely not the only company on the quest for Big Data insights, which always seem to be at least one data point away. Does Facebook have information on which data points it really needs to make a commercially viable user profile? Which data points are privacy-sensitive? Is Facebook looking into alternatives for those privacy-sensitive data points? If not, can Facebook enumerate them, ask its users for explicit consent to collect them, and ask for explicit consent in the future for any new data points?<p>Good luck this afternoon. I hope you get some insights.
> Some people advised me not to go there because it would only do harm to my name and brand, but I think I should<p>We need more people who are willing to try and solve problems, not just be critical. Thanks for being willing to have a conversation with them. You're making the right call whether you are able to have an impact or not.
When politicians do focus groups to fine-tune their speeches, they are not looking to change their platform, their opinions, or their actions.<p>They are just looking to fine-tune for optics. The knowledge they gain from the focus groups just helps them make their message more palatable.<p>I think of FB that way, because they are masters of doublespeak, weasel words, etc., which is the common behaviour of dishonest politicians.<p>Imo many of the questions posted here can be easily deflected, handled with conversational techniques that any politician or lawyer knows well.<p>You want an airtight position, built on a detailed understanding of how they have typically deflected in the past. And because you are asking, you are probably the right person to do this.<p>Harari tried, and despite being brilliant and knowledgeable, he was simply talked over: <a href="https://www.youtube.com/watch?v=Boj9eD0Wug8" rel="nofollow">https://www.youtube.com/watch?v=Boj9eD0Wug8</a> Though I suspect he was aiming for a softer approach.<p>Instead of a pile of disconnected questions, I would suggest developing a clear list of requirements: statements which must be true, as a set, in order for a social system to have an acceptable level of privacy.<p>The list should be iterated upon, and not sent to them prematurely. It should be built on best practices and the knowledge of privacy experts from leading institutions. Then it could be broadly endorsed. Then it could not be as easily weaseled around.
You should go. But be aware that they're likely using you to look like they legitimately care about their users' privacy. So just don't let yourself be used in that way unless you want to be.<p>The question I've always wanted to ask Facebook is: how much is their data worth? No discussion of privacy at Facebook is interesting unless the discussion concerns money and their bottom line. They undoubtedly have people inside Facebook calculating how much specific bits of PII are worth to them, and what it would do to their bottom line if they stopped collecting them. IMO any discussion of privacy that doesn't quantify it in terms of money is basically a waste of time. They're a company and money is all they care about.<p>As a corollary to value, ask them about risk. How much do they calculate the risk of holding all that PII to be? How much would their bottom line be hurt if they lost it in a breach?
1. Do your apps upload metadata and/or thumbnails from photos they have permission to access, but which aren’t explicitly selected by the user for posting/uploading?<p>2. Do your apps “skim” the contents of device clipboards and send this info off the device without the user intending it?<p>And one open-ended question to try to gauge how open they’re being about the whole process:<p>3. What information do you collect that would surprise or upset privacy-conscious individuals?
Before you decide to go, I'd evaluate exactly what you want out of the engagement and keep that in mind during the whole process. It's so easy to get used in this sort of scenario. Facebook obviously has an agenda of some kind, and so should you. If those two agendas don't mesh then you should probably disengage, or else be open to a one-sided benefit in their favour.<p>Be aware that their PR people could use your name to dilute your previous critical commentary once you have gotten involved and are part of their 'consulted expert' club. This could potentially leave you fighting their PR, which will likely just end up as a muddy mess.<p>Be prepared is what I'd say; your reputation is on the line, and not much of theirs is.
1. How can one find out about their shadow profiles that have been created by FB?<p>2. How can they delete the data associated with the above?<p>3. Info on how they group personal data from WhatsApp, FB and Instagram<p>4. Who do they share such data with?<p>5. Who within FB is responsible for privacy policies, etc.?
Try to investigate the background to this privacy roundtable initiative.<p>* Which part of Facebook did the initiative come from: privacy policy, or (maybe) PR? How many of the people in the room are from communications/PR/crisis management/some other related team?<p>* Is it genuinely an attempt to listen to critics and try to improve? (Can they point to examples of improvements they've already implemented?)<p>* What will the outcomes of this initiative be? How will they summarise and communicate their action points, and how will any such points be followed up?
You're being invited to Facebook Amsterdam. That's like speaking at a Walmart in Kentucky. They won't have any answers to any questions.
How can I completely delete a messenger conversation? Is it even possible? If I’m talking to someone on messenger, and we both decide we want to delete the entire conversation, that should be possible with two button clicks.<p>I talk to my significant other on messenger. It gives me nightmares that any employee at Facebook could access that conversation at any time in the next thirty years.<p>It’s going to be really interesting when people from my generation start running for office. It’s conceivable a Facebook employee might think it’s “worth it” to check a candidate’s private messages, since he’s a racist Nazi and deserves it, or whatever.
I have one.<p>1. How can someone who does <i>not</i> have an account prevent themselves from being tagged and/or identified in uploaded photos? Corollary: why isn't the tagging and identification of a person an opt-in feature only?
The one privacy control that everybody is waiting for is: automatically delete all my activity data older than N days, where N can be specified by the user.<p>Why isn't it implemented yet?
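To sharpen that question: the core of such a control is almost trivially small. Here is a minimal sketch, assuming a hypothetical `activity` table with a `created_at` timestamp; the names and the SQLite backend are illustrative only, not Facebook's actual storage layer:

    import sqlite3
    from datetime import datetime, timedelta

    def purge_old_activity(db_path: str, user_id: int, retention_days: int) -> int:
        """Delete one user's activity rows older than their chosen retention
        window. Hypothetical schema: activity(user_id, created_at ISO-8601).
        Returns the number of rows deleted."""
        cutoff = (datetime.utcnow() - timedelta(days=retention_days)).isoformat()
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(
                "DELETE FROM activity WHERE user_id = ? AND created_at < ?",
                (user_id, cutoff),
            )
            conn.commit()
            return cur.rowcount
        finally:
            conn.close()

The hard part is obviously not the query; it is applying the same deletion across replicas, caches, backups and derived datasets, which is exactly where a follow-up question should press them.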
Assuming you won't get actual answers to any of the critical questions, maybe take the opportunity to use your questions to make the employees more self-conscious about their jobs at FB. Maybe something like: If you decide on a privacy policy as a team here in Amsterdam, does it have any effect on how FB handles privacy overall? Do you, as a team and as individuals, feel empowered enough to have actual influence over privacy questions and concerns? Especially in light of FB saying one thing and then doing something completely different.
I’d take a different approach to your preparation:<p>Try to find videos of FB officers (Zuck, Sandberg) who have already been publicly grilled.<p>Most likely, on a corporate level, FB employees already know how to answer and respond to most of these privacy questions.<p>That means you need to figure out their initial canned responses, what assumptions they’re building on, and prepare a line of questioning/reasoning to chip away at their logic in follow-ups.
This one: If you don't use WhatsApp but a friend of yours does, he has to give WhatsApp access to his address book, which includes your name as well. So the question is: does Facebook/WhatsApp have information about such passive users (e.g. their names or phone numbers)?
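A follow-up worth preparing: the standard mitigation of hashing contacts client-side before upload protects very little here, because phone numbers are a tiny keyspace. A minimal sketch of that idea, with hypothetical function names; this is not WhatsApp's actual protocol:

    import hashlib

    def hash_contacts(phone_numbers: list[str]) -> list[str]:
        """Hash normalized phone numbers before upload so the server can match
        contacts against registered users without seeing raw numbers.

        Caveat: valid phone numbers span roughly 10^10 values, so an unsalted
        hash can be reversed by brute force. That is why hashing alone does
        not answer the passive-user question."""
        return [
            hashlib.sha256(number.strip().encode("utf-8")).hexdigest()
            for number in phone_numbers
        ]

So a good phrasing of the question is not just "do you store data on non-users?" but "what technique, if any, makes it impossible for you to?"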
Start with them defining, "What is privacy?"
- Privacy is the ability of an individual or group to seclude themselves, or information about themselves, and thereby express themselves selectively. - source <a href="https://en.wikipedia.org/wiki/Privacy" rel="nofollow">https://en.wikipedia.org/wiki/Privacy</a><p>- How does the average customer know they have achieved "privacy"? I have a feeling that they have many privacy features, but that they're turned off by default.<p>- If you start with the end in mind: what does success look like?
Not really privacy-related, but I would ask them why they allowed a fucking lunatic to livestream a mass killing spree, why they didn't do anything to shut down the stream despite numerous people alerting them to what was going on, why their systems couldn't detect near-duplicates of said content in the days, weeks and months that followed and, finally, why they would allow absolutely anyone to start live streaming to audiences of potentially tens of thousands to begin with. This was a disaster waiting to happen, and I'm betting Facebook knew <i>damn</i> well that their technology, processes and culture were in no way equipped to deal with it. This is a rant, but as someone who grew up in Christchurch I can't help but feel that they've learned <i>nothing</i> and done even less.
Ask them if they agree that the like and view counts next to every post/image/video have psychological effects on individuals and groups.<p>If they agree, ask them if there is anything blocking them from studying the cases where those effects on individuals and groups are negative.<p>If it is possible to identify the kinds of content where likes and views have negative consequences for society, that data (counts, not content) should not be stored on Facebook's servers or shown to Facebook users.<p>Right now there is too much emphasis during privacy debates on all data.<p>No distinction is being made between the like and view counts that produced the ALS challenge funding, a positive for society, and the like and view counts that reinforce my antivax aunt's beliefs.<p>Some of these counts are harmful, some are harmless and some are useful. Why store or display the harmful ones?
Hypothetically, some businesses should not exist. For instance, although a children's-heroin-selling business might be in great demand and turn a huge profit, such a business is not in the best interests of society. Simply optimizing the delivery of things people want is not sufficient to make a good business. "Heavy equipment rental for people under the influence of narcotics" is similar. Without getting into a discussion of social good, or what's moral or not, we can all agree that at times people are willing to make trades that they themselves would find stupid at other times.<p>Once data is captured it never goes away. As time passes and it aggregates with other similar data, it actually becomes much more valuable.<p>So, continuing along, hypothetically: what are you going to do if capturing personal data in exchange for "free" services is not a business that should exist?<p>I understand that right now you're engaged in a long and drawn-out split-the-baby campaign, where you try to assure privacy advocates of your intentions and that there's some magic sauce involving algorithms that will solve everything, but what if that is not the case? What if your business model is built on harming people by encouraging them to make trades for personal information that, once we all figure out what we're doing, none of us would agree to fifty years from now? How will you know? Will you tell us? Do you already know? What are your plans?<p>If you truly want to respect privacy, and are on the side of people living their lives without being constantly examined like lab rats and having every piece of their existence recorded for any hacker to see forevermore, what are your plans for knowing that it's not working out? What's your tripwire, your exit plan?<p>Because frankly, if you don't have one of those, then this is all just a PR exercise, right? You've already decided that you win; you just haven't figured the details out yet.<p>You can restate the question several different ways, but it all boils down to "How do we know you're serious about this?" Because so far it just looks like a bunch of the usual public relations BS.
I would ask a more generic question:<p>What's the right level of control users should have over their data?<p>Then, as a follow-up, I would ask what's keeping Facebook from implementing those controls.<p>Unless this was already covered in an acceptable way after the Cambridge Analytica f*ckup (I haven't followed what actions Facebook took afterwards to address the issue), I would also ask what they are doing about policing bad actors: companies trawling for, leaking or abusing users' private information. How are they going to better prevent that in the future? Once data is outside of Facebook, they've already lost control of the situation.
1. What sustainable business models will Facebook pursue that respect or even facilitate user privacy?<p>2. What simple-to-use mechanisms / technologies / standards will FB employ to allow users to identify and delete their private information?<p>3. Will those privacy control mechanisms be standardized across Facebook products / technologies?<p>4. Will there be an effort to open source privacy-related technologies / standards, so they can be peer-reviewed and, if good, implemented by others in the industry?<p>Thanks for your efforts!
I work on the assumption that everything I do on Facebook platforms, including WhatsApp, is secure from random hackers but not secure from the Five Eyes.<p>Years ago in the Snowden docs there was a diagram of a link into Google's infrastructure where they could take the SSL off and put it back on again, fooling people into thinking everything said about SSL and HTTPS implied actual privacy.<p>Since this is a taboo, 'not this again' type of question, can you think of ways to ask it such that they can only lie?<p>For instance: what guarantees can Facebook offer their users that their messages are not being mass intercepted by the Five Eyes?<p>I am fine with police with a job to do getting someone's texts; e.g. if someone is in a road traffic accident while texting on WhatsApp, I would gladly have the police get access to that person's data. However, mass surveillance and the chilling effects that go with it are not good for society. It is a breach of privacy. If governments do such things it is still illegal; even if they write laws that say it is okay, it is not. So rather than sweep this topic under the rug, I would like an answer from Facebook as to what they are doing, and what they would do, if their customers were subject to mass surveillance by the Five Eyes.<p>I don't think it is unreasonable to ask this.
If social media platforms do not legally provide an expectation of privacy, as Facebook has recently claimed in US court[1], why should users expect otherwise?<p>[1]<a href="https://www.nytimes.com/2019/06/18/opinion/facebook-court-privacy.html" rel="nofollow">https://www.nytimes.com/2019/06/18/opinion/facebook-court-pr...</a>
More of a request than a question:<p>If they want to show respect for privacy a user ought to be able to deep-delete (meaning, from backups too) any and all information they ever posted in any form on FB. This might even include information that was the result of inference from posted data.<p>I would like a setting that, by default, erases all of my posts older than, say, 30 days.<p>I would actually pay for this. Not a lot. A nominal amount, like $10 or $20 a year for “premium” options. No problem at all with that concept.<p>Privacy, amongst other things, should mean the user owns their information, not the service. If I can’t ensure my information is deleted I am one data breach or one disgruntled employee away from losing my privacy.<p>In this age of vindictive “the internet hates everything” polarization, privacy is critically important.
How can a user purge all data about them on fb, including shadow accounts and backups?
How can a non-user opt out of having a shadow profile about them?
How can they claim to respect privacy if they don't have flawless answers for these?
Tell them you want easy access to your "friends'" email addresses and other contact information, and a quick way to transfer it all to other social networks if you want. That is the "privacy" they say they are protecting.
Ask them if (and how) they intend to change their ad platform to sell ads to ethics- and privacy-conscious owners of small businesses.<p>I’m a small-business entrepreneur, and I’m frustrated that to compete well in my sector I would have to advertise on Facebook. Their ad system currently seems intractably unethical, because they know and actively use so much user data that users have not knowingly given away for the purpose of advertising. I don’t want to be asked in the Final Judgment why I paid into such a scheme of abuse — which is what it currently seems to be.
You don't have to agree with her politics, but I think Peggy Noonan had the right answer on this one: it's a show and there's no good to be found in taking part.<p>Just say no and hit send.<p>> In February 2018 Nicholas Thompson and Fred Vogelstein of Wired wrote a deeply reported piece that mentioned the 2016 meeting. It was called so that the company could “make a show of apologizing for its sins.” A Facebook employee who helped plan it said part of its goal—they are clever at Facebook and knew their mark!—was to get the conservatives fighting with each other. “They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would.” Another goal was to leave attendees “bored to death” by a technical presentation after Mr. Zuckerberg spoke.<p>(<a href="http://peggynoonan.com/overthrow-the-prince-of-facebook/" rel="nofollow">http://peggynoonan.com/overthrow-the-prince-of-facebook/</a>)
Privacy settings:<p>Would FB be willing to work with a neutral third party group of user experience designers? Let's call them the PWHUX Board for Privacy White Hat User Experience. (Or maybe something else, PWHUX sounds a bit rude in English.)<p>This PWHUX Board would create standardized user interface conventions for disclosing and controlling personal privacy settings. This same group might work with other datahoovering businesses to establish multi-vendor standards.
I'd assume the Legal and Privacy Policy teams can't give you answers about strategy from their C-level beyond what has already been made public through vague statements. So I wouldn't get angry if I couldn't get anything useful from them.<p>You could ask if they plan to let users know exactly where their data will end up (internal only? third parties? which ones?), whether users could select the purpose, and whether they can opt out.<p>And of course: GDPR, applied globally.
If you're seriously soliciting HN for questions, then make sure to record yourself asking questions you pick here so we can hear FB's response.
This is a privacy roundtable that's private?<p>...<p>You see what I'm getting at? They understand privacy just fine when it's their own privacy.<p>I think you would just be a fig leaf.
Ask them why the exposed moderators, who now live in constant fear for their lives in their own countries, were not offered serious compensation that could last them a significant chunk of their lifetimes (which would be on the order of hundreds of thousands of dollars). [1] Facebook will most likely respond that their threat assessments didn't warrant it. To which you'd ideally respond by asking why a victim should consider it fair or reasonable to be forced to trust Facebook's security chops when Facebook already failed him once and put his <i>life</i> in danger.<p>Seriously, it's ludicrous to offer just a "home alarm system" and a ride to work (which I also assume is to their <i>current</i> job... why the hell should they keep doing the same job?) for a moderator who's now going to be in perpetual fear of getting killed. Those people may well no longer be able to work like they used to, for <i>any</i> employer.<p>[1] <a href="https://www.theguardian.com/technology/2017/jun/16/facebook-moderators-identity-exposed-terrorist-groups" rel="nofollow">https://www.theguardian.com/technology/2017/jun/16/facebook-...</a>
Adjacent to pure "privacy" issues is a/the "data ownership" question, or maybe it should be framed as the public-vs-private data issue.<p>That is: if FB owns the data, advertisers buy it and states/others hack into it, maybe the right solution is to "push the arrow through" rather than extract it. Make the data (or most of it) public. Publish it. It's not really "private" in a meaningful way. The subject (object?) does not have control of, or knowledge of, the dataset describing them. Also (this relates to my last point) data is not the sum of its parts. A lot of what the data <i>is</i> only exists at the aggregate level, and without publication users can never have control, ownership or any rights over these crucial aspects of their data.<p>To put it in the form of a question: <i>Are there ways of arriving at a better state, with less distrust and paranoia, that involve opening data rather than just better protecting it?</i><p>I'm not suggesting that it's simple or that I know exactly how it should work. But if advertisers had the same access everyone has, I think it'd be less of an issue. If the default was "data is public," I suspect we'd find better ways of dealing with data that truly needs to stay secret.<p>As an aside, unconnected to privacy: data has become a new class of IP. We may legally consider it copyrighted (raw data) or patentable (trained NNs), but as a practical matter it is a new type of IP, of rapidly growing importance. There are massive, world-changing examples of what can happen when we manage to create cultures of "public IP" or sharing. The scientific revolution was (arguably) directly related to the new culture of publishing experimental results. CS was irrevocably changed by free and/or open source software, especially compilers, operating systems and libraries. The World Wide Web, in lots of ways. The pace of the current ML explosion is directly related to, and enabled by, open source, free software, scientific publishing and "open IP" generally.<p>Imagine how held back we would have been if those cultures of sharing hadn't emerged. I think data sharing is probably similar in this regard to compiler code or scientific experiments. Openness creates value, potentially a lot of it.<p>Privacy is a meaningful reason/excuse for closed data. I think it's worth trying to solve these two together. Dunno how to phrase a question for that.
Ask them what they think their impact on journalism and the news is. We are seeing the destruction of local news, specialty journalism, etc., and much of it is thought to be attributable to privacy violations by FB and Google. Who needs a local newspaper if a local business can target people on FB or on Candy Crush Saga? That's the bottom line.
What incentives do employees & engineers have for improving privacy or preventing privacy issues & bugs?<p>Kind of like how performance recently became part of FB's annual review & promotion rubrics for employees?<p>Can other employees spike projects started by anti-privacy Gordon Gekkos to improve short-term metrics?
Ask them whether they provide all the data they store on an individual when that data is requested, and if they don't, ask them why.<p>A reference read: <a href="https://news.ycombinator.com/item?id=19959064" rel="nofollow">https://news.ycombinator.com/item?id=19959064</a>
Why do you insist on switching back to Top Stories even when the user consistently switches to Most Recent?<p>Why are there not more granular privacy controls?<p>Why does a user see their friends' activity on any post whose audience includes them? I don't need to see what someone "commented on".
Why don’t they provide an API to easily use your personal data in other places? Why don’t they use federation services to let Facebook talk to ActivityPub services? If they truly cared about privacy they would give you a way to use your data outside of the platform.
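To make the federation point concrete: an ActivityPub actor document is just a small JSON-LD object that tells other servers where a user's inbox and outbox live. A minimal sketch follows; the username and domain are hypothetical, and this shows the shape of the open standard, not anything Facebook exposes:

    import json

    def actor_document(username: str, domain: str) -> str:
        """Build a minimal ActivityPub-style actor document: the object a
        federating server publishes so peers can discover a user's inbox
        and outbox. Illustrative values only."""
        base = f"https://{domain}/users/{username}"
        return json.dumps({
            "@context": "https://www.w3.org/ns/activitystreams",
            "type": "Person",
            "id": base,
            "preferredUsername": username,
            "inbox": f"{base}/inbox",
            "outbox": f"{base}/outbox",
        }, indent=2)

    print(actor_document("alice", "social.example"))

If Facebook published even this much per (consenting) user, "use your data outside the platform" would stop being hypothetical.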
Which team's budget is larger: privacy or legal?<p>If they won't answer that, then ask: does FB make no effort to guess at individuals' budgets (i.e. household income, rent/mortgage, monthly subscriptions, etc.)? Does FB grant people privacy for what’s in their bank accounts?
I have had quite a bit of trouble registering an Insta account via Tor. I get that the IPs are likely blacklisted for abuse, but I do not see a path to privacy on that platform. Would they be interested in supporting an onion URL for Insta?
What are they doing to lobby against the age verification procedures in the UK? These are initially being applied to porn, but the government has made clear its intent to extend them to all social media, and they are one of the biggest attacks on privacy in history.
Ask how Facebook actually measures its fake accounts. See <a href="https://www.plainsite.org/realitycheck/facebook.html" rel="nofollow">https://www.plainsite.org/realitycheck/facebook.html</a>.
“Given Facebook’s ability to track the amount of time a user spends on any given item in their newsfeed, do you also track how long users spend reading terms and conditions, privacy policies, and so forth? If not, why not?”
I think what everyone truly wants is the ability to be forgotten. "Like I was never there."<p>But Facebook has a strong monetary incentive to never forget anything, ever. They have an incentive to make it unclear just how much data they're keeping about their users. They have a strong incentive to be as opaque as possible. And even if they let users be forgotten, they've got a strong incentive to make that hard to do.<p>How can Facebook balance its responsibility to shareholders to earn profits with its responsibility, as ethical humans, to allow people to be forgotten? I do presume that, as people, they want to be ethical (and I'm sure someone will say I'm naive for believing that).<p>And how can Facebook make its decision on where it lies on that spectrum clear to its users, so people can make informed decisions about what they want to share and do on the platform?<p>The hardest decisions businesses have to make are about when to give up profit by doing the right thing. And the most profitable companies are the ones run by sociopaths for whom this is not a difficult problem.
> Some people advised me not to go there because it would only do harm to my name and brand, but I think I should.<p>Is there anything in particular that drives your participation? The reasoning is peculiar.
It's obvious to anyone that their current business models are hostile to privacy. Do they have a plan to fundamentally change the way they make money? If so, what is it?
What is the motivation for improving privacy? Do they aim to do just enough to get good PR or can they demonstrate a more fundamental change to their security culture?
I'd be interested in knowing if Facebook has any ties with the (Dutch) government, and if so, how far those collaborations go.
Will they honor GDPR-related requests? The last I saw, they have some checkbox they require European users to "agree" to in order to continue using FB, which basically waives their GDPR rights.<p>In addition, you might want to review the questions from when Zuckerberg was in front of the European Parliament. The MEPs asked some good questions and Zuck basically weaseled out of them. I'd love to see the same questions brought up again.<p>And also: info about shadow profiles.
A "simple" one,: Do they plan to fullfill GDPR requests?<p>Here is a good story of a guy who tried to get all the data the company had on him without anything close to a real answer:<p><a href="https://news.ycombinator.com/item?id=19959064" rel="nofollow">https://news.ycombinator.com/item?id=19959064</a>
Ask them what their main legal basis under the GDPR is for processing the personal data they collect. Also ask about their retention management: how can they ensure personal data is retained only as long as they have a basis for processing it?
You might ask if the FB organization is a parasitic operation, sucking down personal data any way it can and selling it to... who? Who knows who else. You know part of the answer. Watch the squirm ensue.