This article has about as much insight as I would expect from a "nodejs-security.com" article.<p>This article spends a good deal of time conflating two things: putting stuff in .env and using environment variables.<p>The application-parsed .env file is one of the most poorly thought-through ideas that has taken hold in modern application development. It takes something you can do in literally a couple of lines of shell (as a container entrypoint) and adds a bunch of complexity for something that is actually just worse.<p>In local dev scenarios, app-parsed .env files suck because you often end up with some kind of dev-specific secret that you don't want committed to the app repo. In my experience this means developers figure out how to pass a .env file around.<p>If you use an actual shell instead, the local-dev .env (sourced as shell) can shell out to something like the AWS CLI to get secrets from Parameter Store. Or you could grab them from HashiCorp Vault if you run that.<p>And because the shell fetches secrets at run time, secret updates are seamless and properly access-controlled in one spot.<p>In proper deployment scenarios, .env sucks because your deployment system (container orchestrator, Lambda, etc.) will need to set those values appropriately for the current environment anyway. And by having a .env file the app loads, you now have two places for configuration.<p>Applications simply should not have any involvement in setting values for their own environment variables. Those are typically used for core infrastructure-level configuration. The source of truth for this is probably going to be available via something like Terraform, so the application should ultimately inherit that configuration through Terraform.<p>Additionally, this article is simply wrong about environment variables being readable by any user on a Linux system. On Linux, a process's environment can be read by the superuser and the user who owns the process. That's it.
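For concreteness, a minimal sketch of that kind of entrypoint, assuming the AWS CLI is available in the image (the parameter name is made up for illustration):<p><pre><code>  #!/bin/sh
  set -eu
  # Fetch the secret at start time; nothing is committed to the repo or baked into the image.
  DATABASE_PASSWORD="$(aws ssm get-parameter \
    --name /myapp/database_password \
    --with-decryption \
    --query Parameter.Value \
    --output text)"
  export DATABASE_PASSWORD
  # Hand off to the real application process.
  exec "$@"
</code></pre>Point the container entrypoint at that script and the application only ever sees ordinary environment variables, with the source of truth staying in one access-controlled place.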
“Even if exposed or leaked, it is one secret to rotate rather than all of your secrets, scattered across all of your services and their environment variables.”<p>I don’t think this is true. You’d still rotate every secret in the store, since any of them could have been accessed or compromised.
Don't use secrets in environment variables, but use <i>this</i> secret in environment variables, even though this one gives you access to all secrets.<p>Things like Vault, which was suggested, still require you to pass the Vault token to your app somehow. And even then, if your application does not have direct Vault support, you will still be using Vault to supply secrets via environment variables; it's even the recommended way with Nomad and their template system.<p>I really dislike this sort of article, because it has a catchy phrase, "Do not use secrets in environment variables", and that is all that will be remembered. Next thing you know, you will be at a company submitting a PR and some guy will say "Do not use secrets in environment variables", and then advise you to pass them as arguments on the command line (this happened to me).<p>Environment variables are, today, the safest way to pass secrets to a program.<p>.env files ARE NOT environment variables; they are files. A better title and write-up would be "Do not store secrets in files". Once you do that, all the weird problems described, with the exception of printing them in logs, go away. Then you need a new article: "Do not print secrets in your program".<p>But that is all moot, because you should already be filtering out secrets by configuring your log system to do so. I myself write wrappers and log systems that filter secrets out of logs within the same context as the application I run. It's super simple, it's fast (if you know what a trie is), and you can print secrets worry-free.<p>Edit: This article in fact has a more damaging impact on security as a whole, mostly because of the conflation and the 99/1 problem-to-solution ratio. This entire domain will now be blocked on all networks I have control over.
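As for the pass-them-on-the-command-line advice in particular, procfs makes the difference easy to check for yourself (the PID is just for illustration):<p><pre><code>  # Command-line arguments are world-readable through procfs:
  cat /proc/12345/cmdline | tr '\0' ' '      # readable by any local user
  # The environment is restricted to the process owner and root:
  cat /proc/12345/environ | tr '\0' '\n'     # "Permission denied" for everyone else
</code></pre>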
In what world does this work?<p><pre><code> $ curl http://your-website.com/public/../../../../proc/12345/environ
</code></pre>
If your server is serving up your whole filesystem, you likely have a lot of big problems.
Here, let me boil it all down for you. Basically, you can determine if it's safe to store secrets in a given place by feeding it to this Python function, which will return True if it's safe and False if it is not:<p><pre><code> def canIStoreMySecretsHere(location):
     return False
</code></pre>
Basically, for any location you might store a secret, a hacker might get access to it. Therefore, it is not safe there.<p>You might think I'm being sarcastic, but... perhaps less than you'd think. It has often seemed to me that secret management is a game of temporal arbitrage, where you stick them in some new sort of place and just pretend that that new place must be secure, until you realize some time later it is not, and then you stick it in a new place, a new "secrets manager" that is safe, until <i>that</i> gets popped, then you stick it somewhere else....<p>(Note this is about symmetric secrets, and things like passwords. Asymmetric things admit more interesting possibilities of bundling some computation with the storage with things like secure enclaves. One can debate the physical security of a secure enclave, but assuming its software is correctly implemented, a secret store where there simply is no API in theory or in practice to extract the secret back out is an actual improvement in secret storage that I am not sarcastic about.)
Aside from the .env points, which, fair enough, don't use "environment variables in a file", this always boils down to one actual security concern: Are you okay with every piece of code in your codebase having quick and easy access to all secrets?<p>The answer _feels_ like it should be no: zero-trust-by-default, etc. But you're fooling yourself - a compromised dependency isn't _just_ going to look at process.env. It's going to be installing a backdoor and having an agent log in and poke around. It's going to be netcatting for 3306 and finding the credentials eventually. Security through obscurity is no security at all.<p>Last thought: Kubernetes' `envFrom` is such a salve of simplicity in this day and age.
In my experience, juniors don't know how to do a secure setup, and busy seniors are often willing to cut corners to complete business-critical tasks. Application secrets management needs a better default setup. The standard for application auth should be more similar to an IAM system.<p>The dev community needs to find a better default than .env files for secrets. While there are plenty of alternatives, they generally all require knowledge of some third-party system, which most people, for many reasons, do not have the time or interest to learn, plus some third-party secret to unlock the rest.<p>We need better default abstractions around secrets management. The authentication step to fetch secrets should be pushed to something ephemeral, probably biometrics. Ideally, devs should almost never interact with secrets in any way. They should use secure and convenient MFA methods to authN/Z their access to services, while secrets management happens out of sight. And this should all happen automatically with default tooling.<p>It is fairly easy to authenticate between services without secrets in the context of a single platform like AWS using IAM policies and roles, but I think we need to solve the more general case of a secrets management abstraction across platforms and services. OSs, browsers, and dev tooling are becoming more mature with respect to auth methods. Secrets management should be mostly the domain of a select group of people, like any number of other complex computer-systems details.
I was surprised to see so many comments here upon reading the article: the article is full of bad takes, and all the commenters seem to agree.<p>This is simply bad advice; let's get this off the front page.
systemd 247+ now discourages using env vars for secrets, because it provides LoadCredential= and related options that are much more secure, with better isolation and encryption features.<p><a href="https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html" rel="nofollow">https://www.freedesktop.org/software/systemd/man/latest/syst...</a>
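A rough sketch of the mechanism with a transient unit, assuming a credential file already exists at a made-up path:<p><pre><code>  # The credential shows up as a file under $CREDENTIALS_DIRECTORY inside the unit,
  # not as an environment variable visible in /proc/&lt;pid&gt;/environ:
  sudo systemd-run --pipe --wait \
    --property=LoadCredential=db_password:/etc/credstore/db_password \
    sh -c 'cat "$CREDENTIALS_DIRECTORY/db_password"'
</code></pre>The same LoadCredential= line goes in the [Service] section of a regular unit file.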
Environment variables with names containing PASSWORD or SECRET should be ignored by logging and monitoring systems. Most of the web has been built on trust that conventions are followed.<p>Common secrets used on the server side: `JWT_SECRET`, `DATABASE_PASSWORD`, `PGPASSWORD`, `AWS_SECRET_TOKEN`, etc.<p>As a long-time developer, I see this as breaking the standard for backend apps, which mostly follow the 12 Factor App[1]. This approach introduces a new dependency for fetching secrets. I see all new open-source projects using "paid" or "hosted" solutions. It is no longer easy or simple to host a fully open-source app without external dependencies. (I understand -- things are getting complicated, with S3 for storage, etc.)<p>[1] <a href="https://12factor.net/" rel="nofollow">https://12factor.net/</a>
So don’t provide secrets to your service via environment variables, because they can leak?<p>Instead, expose all your secrets via a public API with IP filtering, then give the credentials for this service to your app - as an environment variable - and voila!<p>This just seems like increasing complexity in a part of your system that should be as simple and non-dynamic as humanly possible, for the upside of... a much larger attack surface?<p>We inject secrets, as env vars, into each Deployment: no runtime access to any secrets store, the ability to generate new secrets for each deployment, and minimal complexity for such a critical aspect of the system.
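For what it's worth, the wiring for that approach can stay small; a sketch with made-up resource names:<p><pre><code>  # Store the secret once, outside the application code and image:
  kubectl create secret generic app-secrets \
    --from-literal=DATABASE_PASSWORD='change-me'
  # Expose the secret's keys to the Deployment as environment variables:
  kubectl set env deployment/my-app --from=secret/app-secrets
</code></pre>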
> To call out a practical example: 1Password.<p>Of all the examples that could have been 'called out', this is the least practical one. Jumping from the problem statement straight to hosting with a third-party provider completely ignores the huge risk that comes with it. Using environment variables is risky, so just give your secrets to some third party... which then provides environment variables anyway. This entire section ought to be dropped; it almost reads like a sponsored bit, and there are much better and more widely used solutions, such as sops and Vault.
I believe this sort of sums it up: “To begin with, the hint lies in the title of this section: secrets management. Environment variables are hardly managed unless you explicitly use an integration or an orchestrator like Kubernetes to automatically inject environment variable configuration.”<p>It has nothing to do with environment variables… It is just an ephemeral way to inject variables into the process, if you do it right…
External "Secrets management services" are among the most attractive hacking targets. It is beyond me why you would have full trust in those.
Hang on a second. One of the reasons is that you might "do this", where "this" is rendering your environment variables into your HTML. First of all, just don't do that. Secondly, what's stopping you from doing the same thing with a secret stored in a properly managed secret repository?
I made a direnv extension for that purpose.<p>It loads env files and calls HashiCorp Vault if the value is a secret.<p>I find it pretty neat to have an env file that describes all environment variables.<p><a href="https://github.com/gerardnico/direnv-ext">https://github.com/gerardnico/direnv-ext</a>
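The underlying pattern is just shell that direnv evaluates, so a hand-rolled .envrc can already do something similar. A sketch with made-up names, not the extension itself:<p><pre><code>  # .envrc -- non-secret defaults come from the env file,
  # secrets are fetched from Vault at load time instead of being stored on disk:
  dotenv .env
  export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"
</code></pre>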
Environment variables are fine; just don’t bake them into your image. K8s and Docker support secrets that are just environment variables as far as the application’s business logic is concerned. You can also trivially read them from a secret and populate the environment at runtime.
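A sketch of that runtime population with Docker/Swarm secrets, which are mounted under /run/secrets (the secret name and app command are made up):<p><pre><code>  #!/bin/sh
  set -eu
  # Read the mounted secret into the environment just before exec'ing the app,
  # so nothing ends up in the image or the compose file.
  DATABASE_PASSWORD="$(cat /run/secrets/database_password)"
  export DATABASE_PASSWORD
  exec node server.js
</code></pre>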
The problem isn't the tech, it's the people.<p>If you allow these mistakes to be possible, they are inevitable. If you take basic precautions, you'll probably be fine.<p>I'd rather take a well-curated and trimmed-down .env over a poorly configured secrets manager that gives away the entire farm when its single secret leaks. Security isn't a single thing, nor is it bolstered by switching the one method you use to store your secrets.<p>The problem is failing to take precautions that prevent leaks from happening, not how you are managing your secrets. If your threat model begins with, or quickly reaches, "the attacker is logged in as root", just post your stuff on a public bucket to get it over with.
I actually liked this article. A great explainer on why environment variables are a terrible idea. Nike actually open-sourced their keystore solution if anyone is curious; it was called Cerberus.