> And then, one day, they sent us a threat. A crazy threat. I remember it vividly. I was just finishing a run when the email came in. And my heart rate went up after I stopped running. That’s not what’s supposed to happen. They said that we had violated state and federal law. They threatened us with civil and criminal charges. 20 years in prison. They really just threw everything they could at us. And at the end of their threat they had a demand: don’t ever talk about your findings publicly. Essentially, if you agree to silence, we won’t pursue legal action. We had five days to respond.<p>This comes at a time when thousands or millions of people have their personal data leaked every other week, over and over, because companies don't want to cut into their profits.<p>Researchers who do the right thing face legal threats of 20 years in prison. Companies that cut corners on security face no consequences. This is backwards.<p>Remember when a journalist pressed F12 and saw that a Missouri state website was exposing the personal data of every teacher in the state (including SSNs)? He reported the security flaw responsibly, but it embarrassed the state, so the Governor attacked him and harassed him with legal threats. <a href="https://arstechnica.com/tech-policy/2021/10/missouri-gov-calls-journalist-who-found-security-flaw-a-hacker-threatens-to-sue/" rel="nofollow noreferrer">https://arstechnica.com/tech-policy/2021/10/missouri-gov-cal...</a><p>I once saw something similar: a government website exposing the personal data of licensed medical professionals. A REST API responded with <i>all</i> their personal data (SSN, home address, and more), but the HTML frontend wouldn't display those fields. All of it was one unauthenticated REST call away, for thousands of people in the state. What did I do? I closed the tab and never touched the site again.
It wasn't worth the personal risk to try to do the right thing, so I ignored it, and for all I know those people have had their data stolen multiple times over because of that flaw. I found it as part of my job at the time; I don't remember the details anymore. It has <i>probably</i> been fixed by now. Our legal system made doing the right thing a huge personal risk, so I didn't do the right thing.<p>Which brings me to my point: we need strong protections for people who expose security flaws in good faith. Even if someone is a grey hat who has done questionable things as part of their "research", as long as they report their findings responsibly, they should be protected.<p>Why have we prioritized the convenience of companies over everything else? If every American's data gets stolen in a massive breach, it's so sad, but there's nothing we can do (shrug). If one curious user or security researcher pokes an app and finds a flaw without authorization, OMG, that person needs to go to jail for decades, how dare they press F12!!!1<p>This is a national security issue. While the same stories of massive breaches cycle through the news over and over, and some of us get yet another free year of monitoring to make sure the credit agencies don't commit libel against us, remember that we have put the convenience of companies above all else. They get to opt in to having their security tested, and over and over they fail us.<p>Protect security researchers, and make it legal to test the security of an app even if the owning company does not consent. </rant>
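<p>For what it's worth, the flaw I described is a classic anti-pattern: the server returns the whole database row and trusts the frontend to hide the sensitive fields. A minimal sketch of it, with entirely hypothetical field names and handler functions (I don't remember the actual site's details):

```python
# Hypothetical illustration of the anti-pattern: the API serializes the
# full record and relies on the frontend not to display sensitive fields.

FULL_RECORD = {
    "name": "Jane Doe",
    "license_number": "MD-12345",
    "ssn": "123-45-6789",        # should never leave the server
    "home_address": "1 Main St",
}

def vulnerable_endpoint(record):
    # Returns everything; anyone with curl or the F12 dev tools sees
    # the SSN, even though the HTML page never renders it.
    return dict(record)

# The fix is server-side: allowlist exactly the fields the public
# page needs, so raw API calls reveal nothing extra.
PUBLIC_FIELDS = {"name", "license_number"}

def safe_endpoint(record):
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
```

The point being: hiding a field in the UI is not access control, and it takes one unauthenticated request to prove it.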