You can't leak API keys if there are no API keys to leak! The article recommends OIDC for apps, which is a step up, especially if you rotate the bearer token, but there is another option: short-lived certs.

Our project Machine ID is replacing API keys with short-lived certificates:

https://goteleport.com/docs/machine-id/introduction/

Another great option is SPIFFE: https://spiffe.io/

Adoption is slower than we'd like, because replacing API keys is not trivial, but we see more and more companies using mTLS + short-lived certs as an alternative to shared secrets.
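To make that concrete, here is a minimal Go sketch of an mTLS client that authenticates with a short-lived certificate instead of a bearer key. The file paths and the internal-api.example.com URL are hypothetical; in a real deployment an agent (Machine ID, a SPIFFE Workload API, etc.) would renew the client cert on a schedule measured in minutes or hours:

    // Minimal sketch: an HTTP client whose identity is a short-lived
    // client certificate, not an Authorization header. Paths and URL
    // are placeholders for this example.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // Short-lived client cert + key, freshly issued by the internal CA.
        cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
        if err != nil {
            log.Fatal(err)
        }

        // Pin the internal CA that signs both server and client certs.
        caPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        roots := x509.NewCertPool()
        roots.AppendCertsFromPEM(caPEM)

        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{
                    Certificates: []tls.Certificate{cert}, // presented to the server
                    RootCAs:      roots,                   // used to verify the server
                },
            },
        }

        // No API key anywhere: identity is proven by the TLS handshake.
        resp, err := client.Get("https://internal-api.example.com/v1/status")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }

The point of the design is that even if client.crt does leak, it expires on its own shortly afterwards.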
Even if you try to use best practices, the whole ship is just LEAKY!

For example, you store your secrets in the env. Then your program crashes and the log-capture software dumps the entire env, or it's included in some crash report.

Leaks from every corner.
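To illustrate the failure mode (and one crude mitigation), here is a Go sketch of a crash-report env dump that redacts anything that looks like a secret. The marker list and the variable name are made up for the example; real scrubbers also match provider-specific prefixes:

    // Sketch: a crash handler that naively dumps os.Environ() captures
    // every secret stored there. Redacting by key name is a crude,
    // illustrative heuristic, not an exhaustive defense.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // looksSensitive flags env var names containing common secret markers.
    func looksSensitive(key string) bool {
        for _, marker := range []string{"KEY", "TOKEN", "SECRET", "PASSWORD"} {
            if strings.Contains(strings.ToUpper(key), marker) {
                return true
            }
        }
        return false
    }

    func dumpEnvForCrashReport() {
        for _, kv := range os.Environ() {
            k, v, _ := strings.Cut(kv, "=")
            if looksSensitive(k) {
                v = "[REDACTED]"
            }
            fmt.Printf("%s=%s\n", k, v)
        }
    }

    func main() {
        os.Setenv("STRIPE_API_KEY", "sk-live-example") // stand-in secret for the demo
        dumpEnvForCrashReport()
    }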
Does this include deliberately sharing an API key, but in a 'not best practices' way?

I.e., "Here, just run document.cookie='SID=EB73542386AF235'." Then you'll be logged in as an account that can do what you're trying to do.
It's way higher than that if you count just leaving a key in an MR, or even checking it in. Or putting it on a relatively open internal file system. It's not a question of if you'll leak one; it's about having a plan for when you do.
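On the prevention half of that plan, the usual move is a scanner in the commit path. A hedged Go sketch of the idea follows; the regexes are illustrative key shapes only, and real tools like gitleaks or trufflehog ship far larger rule sets:

    // Sketch of a pre-commit-style secret scan: read a diff on stdin,
    // flag lines matching credential-shaped patterns, and exit nonzero
    // so the hook blocks the commit.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var suspicious = []*regexp.Regexp{
        regexp.MustCompile(`AKIA[0-9A-Z]{16}`),             // AWS access key ID shape
        regexp.MustCompile(`ghp_[0-9A-Za-z]{36}`),          // GitHub personal token shape
        regexp.MustCompile(`(?i)api[_-]?key\s*[:=]\s*\S+`), // generic key assignment
    }

    func main() {
        // Usage: git diff --cached | go run scan.go
        scanner := bufio.NewScanner(os.Stdin)
        lineNo := 0
        found := false
        for scanner.Scan() {
            lineNo++
            for _, re := range suspicious {
                if re.MatchString(scanner.Text()) {
                    fmt.Printf("possible secret on line %d: %s\n", lineNo, scanner.Text())
                    found = true
                }
            }
        }
        if found {
            os.Exit(1) // fail the hook so the commit is blocked
        }
    }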
I skimmed the report PDF and saw no mention of validating the data, so I assume pushing an example env file would be flagged as a leak? I understand that validation is tricky, even more so with millions of data points, but the method seems shaky. It's like all those automatic error analysers that repo authors tend to hate because of all the false positives.
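For what it's worth, one common way to cut that kind of false positive is entropy filtering: placeholder values like "your-api-key-here" score low, while real randomly generated keys score high. A minimal Go sketch, with an illustrative threshold (whether the report's methodology does anything like this is exactly the open question):

    // Sketch: Shannon entropy (bits per character) of a candidate string
    // as a cheap filter between template placeholders and real keys.
    package main

    import (
        "fmt"
        "math"
    )

    func shannonEntropy(s string) float64 {
        counts := map[rune]int{}
        for _, r := range s {
            counts[r]++
        }
        n := float64(len([]rune(s)))
        h := 0.0
        for _, c := range counts {
            p := float64(c) / n
            h -= p * math.Log2(p)
        }
        return h
    }

    func main() {
        for _, candidate := range []string{
            "your-api-key-here",                // example placeholder
            "AKIAXXXXXXXXXXXXXXXX",             // redacted template
            "q7Hf2kLp9RzXv4Nw8sYbT1mJ6cAeDgU3", // plausibly real random key
        } {
            h := shannonEntropy(candidate)
            verdict := "likely placeholder"
            if h > 4.0 { // illustrative threshold, not a published cutoff
                verdict = "worth flagging"
            }
            fmt.Printf("%-34s entropy=%.2f -> %s\n", candidate, h, verdict)
        }
    }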