There's a reasonable case for including internal actors in one's threat model for larger companies or ones working in extraordinarily sensitive product domains. Most startups probably don't need to prevent the team from being able to read credentials, because that's theatre when they have 15 different ways to get to any secret the company has.

We use Ansible's vault feature to decrypt a few centralized secret files onto machines at deploy time. This lets us commit the encrypted text of the files. (The source of truth for the key is in Trello, IIRC, but it could be anywhere you have to auth in as an employee to view.)

It's modestly annoying (operations like "check what changed in the secret configuration file as a result of a particular commit" are impossible) but seems like a reasonable compromise to ensure that e.g. nobody can insta-create an admin session if they happen to have a copy of our codebase and a working Internet connection.

Secrets are communicated to processes which need them in boring Linux-y ways like "file I/O" and "stuff it into an environment variable that the process has access to." If you're capable of doing file I/O or reading arbitrary memory, we're in trouble. Of course, if you can do either of those on our production infrastructure and also connect to our database, we've already lost, so I don't see too much additional gain in locking down our database password.

If you're starting from the position "I have a Rails app which has passwords in cleartext in database.yml", this is an easy thing to roll out incrementally: move the password from database.yml to ENV['RAILS_DB_PASSWORD'], spend ~5 minutes getting your deployment infrastructure to populate that from an encrypted file (details depend on your deployment infrastructure -- I am liking Ansible, a lot, for this), verify it works, then change passwords. Voilà: GitHub no longer knows your database password and your continuous integration system no longer knows your production credentials. One threat down; zero coordination required with any other system you use or any other team at the company. You can standardize on this across your entire deployment or not, your call, and it's exactly as easy to back out of as it was to get started.
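To make the "boring Linux-y ways" concrete, here's a minimal Python sketch of how a process might pick up a secret; the variable name and path are made up for illustration, and in a Rails app you'd just reference ENV['RAILS_DB_PASSWORD'] from database.yml, which Rails runs through ERB:

    import os

    def load_secret(name, fallback_path=None):
        """Fetch a secret the boring way: environment variable first, then a
        file dropped onto the box at deploy time (e.g. by an Ansible task
        that decrypted a vaulted copy)."""
        value = os.environ.get(name)
        if value is not None:
            return value
        if fallback_path is not None:
            with open(fallback_path) as f:
                return f.read().strip()
        raise KeyError(f"secret {name!r} not provided")

    db_password = load_secret("RAILS_DB_PASSWORD", "/etc/myapp/db_password")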
This article confuses me. The author tears down a strawman argument about running centralized key services ("The expensive solution"), then recommends exactly such a solution in Amazon KMS.

The only plausible way this can make sense to me is if he had said "running your own key service is a pain, use Amazon KMS". But that's a simple build-versus-buy question, and it probably wouldn't have taken up as much space.
An interesting article. I'm working on a side/long-term project that will hold medical data. It will be self-selected (i.e. people entering their own data rather than a gov dept etc.), but security is #1 on my list, since frankly the idea of leaking someone's medical data (even if they opted in and agreed to the license) scares the living shit out of me.

All my side reading recently has been on writing high(er)-security systems across the entire stack (I already follow best practices with my other stuff). It still frightens me, but I see a real need for the side project, so I'm going to do everything I can to make it as secure as possible and take a shot.
The recommended solution is still vulnerable to employee compromise: if an employee can push software that runs as a trusted role, they can steal any secrets that software has access to.
One solution we came up with was to encrypt data before it is submitted and let the user keep the private key. The private key is never transferred to our servers: it is generated in the browser, kept by the user, and used in the browser.
http://www.jotform.com/encrypted-forms/
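Mechanically it's the usual public-key pattern. Here's a rough sketch in Python using the cryptography package, purely for illustration (our actual implementation is in-browser, not Python):

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # User side: generate the keypair locally; only the public key is uploaded.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # "Browser" side: encrypt the submission before it leaves the user.
    ciphertext = public_key.encrypt(b"blood type: O-", oaep)

    # The server stores only `ciphertext`; reading it back requires the
    # private key, which never left the user's machine.
    assert private_key.decrypt(ciphertext, oaep) == b"blood type: O-"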
Every time I see someone tout AES as the reason their encryption is secure, I want to ask: in what mode? CBC, CFB, CTR, or (the best) GCM? How is the IV generated? Are there any potential padding oracles? If they don't even understand these questions, then it's obvious that AES alone cannot save them.
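For contrast, here's a minimal sketch of what a defensible answer looks like, using an AEAD mode via Python's cryptography package (key storage elided for brevity):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # persist this somewhere safe
    aesgcm = AESGCM(key)

    # Fresh 96-bit nonce per message; nonce reuse under one key is catastrophic.
    nonce = os.urandom(12)
    aad = b"record-id:42"  # authenticated but not encrypted (optional)
    ciphertext = aesgcm.encrypt(nonce, b"the actual secret", aad)

    # GCM has no padding and authenticates the ciphertext, so there is no
    # padding oracle; tampering raises InvalidTag instead of returning garbage.
    assert aesgcm.decrypt(nonce, ciphertext, aad) == b"the actual secret"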
Everything here except for the engineers' question can also be solved by simply hosting these things yourself. You don't need third-party code hosting on GitHub; just use GitLab or JIRA. You don't need some external CI service; run your own Jenkins node. Chat and email should also be internal (we use XMPP; a local Mattermost instance would be an alternative) and SSL-only. You can do all of this with basically one docker command per install, on your own dedicated hardware, with a fairly underpowered machine. And this prevents leaking of all sorts of information, not just production database passwords.
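To illustrate the "one docker command per install" claim (these are the official images, but the flags are indicative only; you'd still want volumes for persistence and TLS in front, so check each project's docs):

    docker run -d --name gitlab -p 80:80 -p 443:443 -p 2222:22 gitlab/gitlab-ce:latest
    docker run -d --name jenkins -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
    docker run -d --name mattermost -p 8065:8065 mattermost/mattermost-preview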
I was surprised when the author all of a sudden started talking about AWS and clicking some kind of button that creates a key.

(Besides, one would assume it has been backdoored by Amazon staffers anyway.)