Not so bad compared to what? Sure, compared to downloading a tarball from the website and running ./configure, make, etc., it's probably a similar risk. But who does that?<p>Every decent Linux distro has a package manager that covers 99% of the software you want to install, and compared to an apt-get install, pacman -S, yum install and so on, running a script off some website is far riskier. My package manager verifies the checksum of every file it fetches to make sure my mirror wasn't tampered with, and it works regardless of the state of some random project's website. If I have to choose between software that's packaged for my package manager and software I have to install with a script, I'll always choose the package manager. And we haven't even started talking about updates - as if those weren't a security concern.<p>The reason we should discourage people from installing scripts off the internet is that it would be much better if that software were simply packaged properly.
I disagree with some of this, e.g. paste-jacking.<p>Plenty of software projects put more care and focus into their software than into their website. If you're running a vulnerable version of Wordpress or whatever CMS, it'd be easy for someone to insert something malicious without being noticed, whereas something that modified your code would show up in git, code reviews, etc.
I'm surprised that no one has yet mentioned that piping curl to bash can be detected by the server (previous discussion at <a href="https://news.ycombinator.com/item?id=17636032" rel="nofollow">https://news.ycombinator.com/item?id=17636032</a>). This allows an attacker to send different code if it's being piped to bash instead of saved to disk.<p>IMHO, "curl to shell" is uniquely dangerous, since all the other installation vectors mentioned don't support the bait-and-switch.
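The usual defense is to break the pipe: save the script, read it, then run the copy you read - at that point the server has no way to serve one payload for inspection and another for execution. A minimal version (the URL is a placeholder):<p><pre><code>  curl -fsSL https://example.com/install.sh -o install.sh
  less install.sh    # review what was actually received
  sh install.sh      # run exactly the bytes you reviewed
</code></pre>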
> Not knowing what the script is going to do.<p>Yep, this is why I hate piping curl to sh. I much prefer how e.g. Go does this: it tells you to just run<p><pre><code>  tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
</code></pre>
It's not that I don't trust the installer script to not install malware. But I don't trust the installer script to not crap all over my system.
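That's the appeal of the tarball route: everything lands under /usr/local/go and nothing else is touched. The full flow looks roughly like this - the PATH line is from the Go install docs, while the checksum check is my own addition (the published hashes are on the Go downloads page):<p><pre><code>  sha256sum go1.13.4.linux-amd64.tar.gz      # compare against the published hash
  sudo tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
  export PATH=$PATH:/usr/local/go/bin        # documented follow-up step
</code></pre>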
My experience is that software that installs via curl|bash tends to ignore my preferences as expressed via $PREFIX/DESTDIR, $XDG_{CACHE,CONFIG,DATA}_HOME, etc. It'll install who-knows-where and probably leave dotfiles all over my home directory.<p>Maybe curl|bash is <i>functionally</i> equivalent to git clone && ./configure && make && make install, but my bet is that the one providing a standard install flow will be a better guest on my system.
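For comparison, the standard flow honors the usual knobs (repo URL and prefix here are placeholders):<p><pre><code>  git clone https://example.com/project.git && cd project
  ./configure --prefix="$HOME/.local"    # install under my home, not wherever the script fancies
  make
  make DESTDIR=/tmp/stage install        # or stage it first to see what it would touch
</code></pre>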
The points raised in the article are correct, and I'm much more concerned with the willingness of people to run arbitrary software on their primary computers in general than with the specific case of piping to sh. I think piping to sh just emphasises how insecure the entire practice is, and arguing about that is like closing your eyes to protect yourself from an attacking tiger.<p>The only system I've worked with that helps you truly deal with this is Qubes OS. Perhaps Fedora Silverblue will achieve this as well, once it comes out of beta.
Has running a curl-to-bash command found during normal user-initiated web browsing <i>ever</i> resulted in a malware infection? Even anecdotal evidence would be valuable at this point.
Yeah, I asked this question on SO - how to responsibly publish a script - but got no response, even earning a sarcastic "Tumbleweed" badge. My concern was that the script could easily be hosted elsewhere and we'd have multiple versions with potentially malicious modifications flying around. In the absence of alternatives, curl-bashing isn't so bad after all, because it promotes a canonical download location on a domain/site you control - even if, as a long-term Unix user, I hated it initially.
> There is no fundamental difference between curl .. | sh versus cloning a repo and building it from source.<p>Not true: when you clone a repo with signed commits, you have forensic evidence that the repo signer provided the code you ran, while when you use curl you have … just the code itself.<p>That's not a <i>lot</i>, but it's not <i>nothing</i>.
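A quick way to check that evidence, assuming you already trust the signer's key and it's in your keyring (repo URL is a placeholder; these are git's built-in verification commands):<p><pre><code>  git clone https://example.com/project.git && cd project
  git verify-commit HEAD           # checks the GPG signature on the tip commit
  git log --show-signature -5      # show signature status for recent history
</code></pre>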
I hate install scripts, period. They feel so Windows-ish. Just distribute a .deb, .rpm, .snap, homebrew package, npm package, or whatever is the most appropriate for your software. All the scripting you need to do should be done inside of the regular package installation process, and even that should be kept to a minimum.<p>The only software that has any right to rely on an ad-hoc install script on a Unix-like system is the package manager itself. It's awful enough that I have to do apt update and npm update separately. Please don't add even more ways to pollute my system.
The average non-technical user is never going to open up the terminal and run commands. The well-educated technical user is going to be wary of untrusted sites and various forms of attacks (which I'm assuming the author of this post falls under).<p>IMO this is good advice for those that fall in the middle of these two categories, i.e. <i>slightly</i> technical people who run into problems and copy-paste solutions from Stack Overflow hoping that something will work.<p>> you’re not running some random shell script from a random author<p>This is <i>exactly</i> what is happening in the vast majority of these cases. These users are going to be wary if linked to an executable or installer, but "hey just run this simple line of code" sounds like a very appealing solution.
Agreed. If I don’t trust the server, or don’t have a secure connection to it, it is not likely wise to run any non trivial code downloaded from it.<p>Verifying a hash that comes from the same server also doesn’t make that much sense. Verifying a PGP signature would be a compelling reason to not pipe to shell, and that’s really about it.
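A sketch of that PGP flow (filenames are placeholders, and for the verification to add anything, the key has to come from somewhere other than the same server - a keyserver, a prior release, a colleague):<p><pre><code>  curl -fsSLO https://example.com/install.sh
  curl -fsSLO https://example.com/install.sh.asc
  gpg --import author-key.asc                # key obtained out of band
  gpg --verify install.sh.asc install.sh     # only run the script if this succeeds
</code></pre>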
For the most part this is a problem with non-rolling-release distros.<p>There are very few instances in which I've had to even use an installer on Arch. For many of those cases, the AUR provides a package that verifies the hash of the downloaded file anyway.<p>I've constantly been frustrated when using Ubuntu because something basic like having 'vim' not be months out of date requires a PPA.<p>The 'official' Rust installation method is a curl | sh.
Or:<p><pre><code> $ pacman -Q rustup && rustup -V
rustup 1.20.2-1
rustup 1.20.2 (2019-10-16)</code></pre>
The problem is mainly that the script is executed without leaving a trace. If you downloaded the script and then executed it, you would have something to inspect in case something went wrong.<p>It's too easy, and people with very scarce knowledge can develop a habit of doing this without asking questions, leaving no trace for a senior to inspect when a problem does happen.
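tee splits the difference: still one line, but an artifact survives for later inspection (the URL and path are placeholders):<p><pre><code>  curl -fsSL https://example.com/install.sh | tee /tmp/install-trace.sh | sh
</code></pre>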
> There is no fundamental difference between curl .. | sh versus cloning a repo and building it from source<p>I would say it depends. If the commits are signed by a key you know, it's probably better. Even if that's not the case, cloning over SSH when you know the host key is also slightly better than downloading over HTTPS, where any (compromised) trusted CA can MITM your connection :) (you can argue that those two use cases are rare in practice, and I would agree with you ;))
> Not knowing what the script is going to do.<p>This is more like: not knowing what to do when it doesn't work.
And that's always the case until it works. And "it works" is a purely local phenomenon: I can't expect things that work for me to work for others. So why not write expressive installation documentation with multiple steps instead of a one-liner that either works or doesn't? There is no in between.<p>Take the installation instructions for syncthing, for example:<p><pre><code>  curl -s https://syncthing.net/release-key.txt | sudo apt-key add -
echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
</code></pre>
These two steps are hard to automate if you don't have an interactive shell.<p>The same goes for the saltstack bootstrap script: it doesn't work equally well on all platforms, which is not a reliable state. So in the end I'll stick with the normal way of installing things, which is very easy to automate.
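The "normal way" here being plain apt once the repo is configured - these follow-up commands are standard apt usage (the non-interactive frontend flag is my addition, not from the syncthing docs):<p><pre><code>  sudo apt-get update
  sudo DEBIAN_FRONTEND=noninteractive apt-get install -y syncthing
</code></pre>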
I ran into this recently at work. I wanted to write a script that you could curl into bash to quickly set up some common tools.<p>Firstly, I made sure that the script told you what it would do before doing it.<p>Secondly, my instructions are two lines. Curl to a file, then run it through bash. A compromise, but if you mistrust the script, you can inspect it yourself before running it.
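A sketch of that first point - announce, then ask (the tool names are made up; reading from /dev/tty keeps the prompt working even if someone pipes the script anyway):<p><pre><code>  #!/usr/bin/env bash
  set -euo pipefail
  echo "This will: install common tools via apt and write ~/.config/devtools/"
  read -rp "Continue? [y/N] " answer < /dev/tty
  [[ $answer == [yY] ]] || exit 1
  # ... actual setup steps go here ...
</code></pre>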
> Either way, it’s not a problem with just pipe-to-shell, it’s a problem with any code you retrieve without TLS.<p>Well, yes. But the <i>typical</i> alternative is a tarball and a GPG signature - both over insecure transport, but verifiable (much like TLS with a CA).<p>Git will typically go over SSH or HTTPS, so to a certain degree over a secure channel.
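The typical tarball verification, for reference (filenames are placeholders, and the signing key still has to be obtained out of band):<p><pre><code>  gpg --verify SHA256SUMS.asc SHA256SUMS     # signature over the checksum list
  sha256sum -c --ignore-missing SHA256SUMS   # tarball against the signed checksums
</code></pre>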
If curl loses its connection to the source website while downloading the script, the partially downloaded script gets executed anyway, and a truncated line can turn into a very different command (picture rm -rf /usr/local/share/foo cut off right after /usr). This is a major drawback of the curl-to-shell approach, and the original article misses it entirely.
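Well-written installers mitigate this by wrapping everything in a function and calling it only on the very last line (rustup's installer does this, if I recall correctly), so a truncated download is a parse error rather than a half-run script:<p><pre><code>  #!/bin/sh
  main() {
      echo "installing..."
      # ... all real work lives in here ...
  }
  main "$@"   # nothing runs unless this line arrived intact
</code></pre>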
I remember someone curling a Heroku CLI install script and, upon inspection, finding it would have tried to install a specific version of Ruby too, instead of just the client. Since then I always glance through the script first.
Is there a simple command you can use to read the contents of the script (pipe) before it's sent to sh? Something like:<p><pre><code> curl ... | less-and-maybe-cancel | sh</code></pre>
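vipe, from moreutils, is close to that hypothetical less-and-maybe-cancel: it buffers the stream into $EDITOR and forwards only what you save (deleting the contents effectively aborts, since sh then receives nothing; the URL is a placeholder):<p><pre><code>  curl -fsSL https://example.com/install.sh | vipe | sh
</code></pre>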
Sometimes I just want to download software without installing it. This is complicated by install scripts that obfuscate the real source or break it into dozens of parts.
I always install Docker with a simple command:<p><pre><code>  curl -fsSL get.docker.com | sh
</code></pre>
Instead of copy-pasting a dozen commands from the docs / SO.
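For what it's worth, that particular script supports a dry-run mode (per the Docker docs, as far as I know), so you can keep the convenience and still preview the steps:<p><pre><code>  curl -fsSL get.docker.com -o get-docker.sh
  sh get-docker.sh --dry-run   # print what would be done without doing it
</code></pre>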