For my usage:

* Wget's the interactive, end-user tool, and my go-to if I just need to download a file. For that purpose, its defaults are more sane, its command line usage is more straightforward, its documentation is better-organized, and it can continue incomplete downloads, which curl can't.

* Curl's the developer tool -- it's what I'd use if I were building a shell script that needed to download. The command line tool is more unix-y by default (outputs to stdout) and it's more flexible in terms of options. It's also present by default on more systems -- of note, OSX ships curl but *not* wget out of the box. Its backing library (libcurl) is also pretty nifty, but not really relevant to this comparison.

This doesn't really need to be an "emacs vs. vim" or "tabs vs. spaces"-type dichotomy: wget and curl do different things well and there's no reason why both shouldn't coexist in one's workflow.
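To make the difference in defaults concrete, a quick sketch (the URL is just a placeholder):

    # wget: saves the file under its remote name by default
    wget https://example.com/archive.tar.gz

    # curl: writes to stdout unless told otherwise
    curl https://example.com/archive.tar.gz > archive.tar.gz
    curl -O https://example.com/archive.tar.gz                 # -O keeps the remote name
    curl -o archive.tar.gz https://example.com/archive.tar.gz  # -o picks a name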
My favorite use of wget: mirroring web documentation to my local machine.

    wget -r -l5 -k -np -p https://docs.python.org/2/
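For reference, roughly what those flags do, as I remember them from the man page:

    # -r   recurse into links
    # -l5  ...but only 5 levels deep
    # -k   convert links for local viewing (--convert-links)
    # -np  never ascend to the parent directory (--no-parent)
    # -p   also fetch page requisites: CSS, images, etc. (--page-requisites)
    wget -r -l5 -k -np -p https://docs.python.org/2/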
Rewrites the links to point locally where appropriate, and the ones which are not local remain links to the online documentation. Makes for a nice, seamless experience while browsing documentation.

I also prefer wget to `curl -O` for general file downloads, simply because wget will handle redirects by default and `curl -O` will not. Yes, I could remember yet another argument to curl... but why?

That said, I love curl (combined with `jq`) for playing with REST interfaces.
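For completeness, the extra argument in question is -L, and the curl-plus-jq combination looks something like this (the API endpoint below is a made-up placeholder):

    # -L follows redirects, -O keeps the remote filename
    curl -L -O https://example.com/some/redirecting/download

    # pretty-print and filter a JSON response with jq
    curl -s https://api.example.com/v1/items | jq '.[].name'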
Also putting this out there -- for nicer REST API interaction on the CLI, and a little more user-friendliness, you might also want to add HTTPie[1] to your toolbelt.

It's not going to replace curl or wget usage, but it is a nicer interface in certain circumstances.

[1] https://github.com/jkbrzt/httpie
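A rough idea of what that looks like, if memory serves (httpbin.org is just a convenient test service here):

    # GET with a query parameter (== denotes query-string items in HTTPie)
    http GET httpbin.org/get search==wget

    # POST a JSON body; key=value pairs become JSON fields
    http POST httpbin.org/post name=curl rating=10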
Though only briefly mentioned in this article at the bottom, I'd like to give a huge shoutout to aria2. I use it all the time for quick torrent downloads, as it requires no daemon and just seeds until you C-c. It also does a damn good job at downloading a list of files, with multiple segments for each.
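Something like this, to sketch the two use cases (the torrent file and urls.txt are placeholders):

    # one-off torrent download; seeds until you hit Ctrl-C
    aria2c some-release.torrent

    # download a list of URLs, several segments/connections per file
    aria2c -i urls.txt -x4 -s4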
I instinctively go to `wget` when I need to, uhm, get the file into my computer[1]. `curl -O` is a lot more effort :P

Other than that, curl is always better.

[1] Aliasing `wget` to ~`curl -O` might be a good idea :)
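If anyone wants to try that, a rough approximation (it obviously won't understand wget's own flags):

    # not a real replacement for wget, just the common "grab this URL" case
    alias wget='curl -L -O'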
"Wget can be typed in using only the left hand on a qwerty keyboard!"<p>I love both of these, but wish that curl was just like wget in that the default behavior was to download a file, as opposed to pipe it to stdout. (Yes, aliases can help, I know.)
I use wget when I need to download things.

curl is for everything else (love it when it comes to debugging some API)... HTTPie is not bad either for debugging, but most of the time I forget to use it.
Since aria2 was only passingly mentioned, let me list some of its features:

- Supports splitting and parallelising downloads. Super handy if you're on a not-so-good internet connection.

- Supports BitTorrent.

- Can act as a server and has a really nice XML/JSON RPC interface over HTTP or WebSocket (I have a Chrome plugin that integrates with this pretty nicely).

They're not super important features, sure, but I stick with it because it's typically the fastest tool and I hate waiting.
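Roughly, for the first and third points (the segment counts and the secret token are just placeholders):

    # split a single download into 8 segments over up to 8 connections
    aria2c -x8 -s8 https://example.com/big.iso

    # run as a server exposing the JSON-RPC interface
    aria2c --enable-rpc --rpc-secret=changeme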
Curl gets another point for having better SNI support, as wget versions until relatively recently didn't support it.

This means you can't securely download content using relatively recent (but not the newest) versions of wget (such as any in the Ubuntu 12.04 repos) from a server which uses SNI, unless the domain you're requesting happens to be the default for the server.

As an example, I found the file https://redbot.org/static/style.css only accessible with SNI. Try `wget https://redbot.org/static/style.css` vs. `curl -O https://redbot.org/static/style.css` on Ubuntu 12.04. Domain names which point to S3 buckets (and likely other CDNs) will have similar issues.
For me defaults matter... 99% of the time when I want to use wget or curl, I want to download a file so I can keep working on it from the filesystem.

wget does that without any parameters. curl requires me to remember and provide parameters for this obvious use case.

So wget wins every time.
If nobody's tried it, *axel* -- mentioned in the report as possibly abandoned -- has the awesome feature of splitting a download into parts and then establishing that many concurrent TCP connections. Very useful on networks that rate-limit individual TCP flows.
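If I remember axel's flags right, it's as simple as asking for a connection count:

    # split the download across 8 connections (flag from memory; check `axel -h`)
    axel -n 8 https://example.com/large-file.iso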
We are forgetting our long lost cousin, fetch. http://www.unix.com/man-page/FreeBSD/1/FETCH/
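On FreeBSD it covers the basic case much the same way wget does, if I recall correctly:

    # downloads to the current directory using the remote filename
    fetch https://example.com/file.tar.gz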
wget has the amazing flag `--page-requisites` though, which downloads all of an HTML document's CSS and images that you might need to display it properly. Lifesaver.
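Combined with link conversion it makes a pretty good offline snapshot of a single page; something along these lines:

    # grab the page plus its CSS/images and rewrite links for local viewing
    wget --page-requisites --convert-links https://example.com/article.html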
Really interesting.
Under curl he has:

"Much more developer activity. While this can be debated, I consider three metrics here: mailing list activity, source code commit frequency and release frequency. Anyone following these two projects can see that the curl project has a lot higher pace in all these areas, and it has been so for 10+ years. Compare on openhub"

Under wget he has:
"GNU. Wget is part of the GNU project and all copyrights are assigned to FSF. The curl project is entirely stand-alone and independent with no organization parenting at all with almost all copyrights owned by Daniel."<p>Daniel seems pretty wrong here. Curl does not require copyright assignment to him to contribute, and so, really, 389 people own the copyright to curl if the openhub data he points to is correct :)<p>Even if you give it the benefit of the doubt, it's super unlikely that he owns "almost all", unless there really is not a lot of outside development activity (so this is pretty incongruous with the above statement).<p>(I'm just about to email him with some comments about this, i just found it interesting)
Unmentioned in the article: curl supports --resolve. This single feature helps us test all sorts of scenarios for HTTPS and hostname-based multiplexing where DNS isn't updated or consistent yet, e.g. transferring a site or bringing up cold standbys. Couldn't live without it (well, I could, if I wanted to edit /etc/hosts continuously).
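The IP below is just a documentation address; the point is that curl connects there while still sending the real hostname for SNI and the Host header:

    # pretend example.com already resolves to the new box at 203.0.113.10
    curl --resolve example.com:443:203.0.113.10 https://example.com/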
wget was the first one I learned how to use, by trying to recursively download a professor's course website for offline use, and then learning that they hosted the solutions to the assignments there as well...

I did well in that course, granted it was an easy intro-to-programming one. ;)
> Wget requires no extra options to simply download a remote URL to a local file, while curl requires -o or -O.

I think this is oddly the major reason why wget is more popular. Saving 3 chars, plus not having to remember the specific curl flag, seems to matter more than we might think.
Curl scripts can hold a connection open to view all new logs in a session as they arrive.

Can wget do something similar? I don't know whether it can, but from my point of view, if it can't, this is like comparing a Phillips-head screwdriver to a power tool with a 500-piece bit set.
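If that means streaming a long-lived HTTP response (e.g. a log-tailing endpoint), curl's no-buffer flag is the relevant bit; the endpoint below is hypothetical:

    # -N disables output buffering so log lines show up as they arrive
    curl -N https://logs.example.com/stream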
aria2 is much more reliable when downloading stuff, especially for links which involve redirections.

For example, here's a link to download 7zip for Windows from filehippo.com.

Results:

* curl doesn't download it at all:

    curl -O 'http://filehippo.com/download/file/bf0c7e39c244b0910cfcfaef2af45de88d8cae8cc0f55350074bf1664fbb698d/'

  gives:

    curl: Remote file name has no length!

* wget manages to download the file, but with the wrong name:

    wget 'http://filehippo.com/download/file/bf0c7e39c244b0910cfcfaef2af45de88d8cae8cc0f55350074bf1664fbb698d/'

  gives:

    2016-03-03 18:08:21 (75.9 KB/s) - ‘index.html’ saved [1371668/1371668]

* aria2 manages to download the file with the correct name with no additional switches:

    aria2c 'http://filehippo.com/download/file/bf0c7e39c244b0910cfcfaef2af45de88d8cae8cc0f55350074bf1664fbb698d/'

  gives:

    03/03 18:08:45 [NOTICE] Download complete: /tmp/7z1514-x64.exe
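For what it's worth, both tools can often be talked into using the server-suggested filename from the Content-Disposition header, though I haven't verified it against this particular server, and it does mean remembering yet more flags:

    URL='http://filehippo.com/download/file/bf0c7e39c244b0910cfcfaef2af45de88d8cae8cc0f55350074bf1664fbb698d/'

    # curl: -L follows redirects, -O saves to a file, -J honours Content-Disposition
    curl -OJL "$URL"

    # wget: take the output filename from Content-Disposition
    wget --content-disposition "$URL"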
For a certain case like creating a Telegram bot, which has no interaction with a browser, do you think we can make use of curl (POST requests) to make PHP sessions work?

As there's no browser interaction with a Telegram bot, the script just receives responses back from the Telegram server. This might help to keep track of user state without the need for a DB?
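If the idea is to reuse PHP's session cookie between curl calls, a cookie jar is probably the mechanism you want; a rough sketch against a hypothetical bot endpoint:

    # first request creates the PHP session; -c saves the PHPSESSID cookie
    curl -c /tmp/bot-cookies.txt -d 'chat_id=123&state=start' https://example.com/bot-webhook.php

    # later requests send it back with -b, so $_SESSION persists server-side
    curl -b /tmp/bot-cookies.txt -c /tmp/bot-cookies.txt -d 'chat_id=123&text=hello' https://example.com/bot-webhook.php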
I use curl because it is generally installed. I prefer not to install wget, especially on customer machines, because its absence stops 90% of script kiddies. For some reason wget is the only tool they will attempt to use to download their sploit.
I should probably write a "saldl vs. others" page someday.

> Wget supports the Public Suffix List for handling cookie domains, curl does not.

This is outdated info. (lib)curl can be built with libpsl support since 7.46.0.
Nowadays I just use HTTPie. It's in Python, so it's easy to install on Windows, and it lets me work easily with requests and responses, inspect the content, get colorized output, etc. Plus the syntax is much easier.
I like Wget's option to continue a file download if it gets interrupted. I believe you can achieve the same thing in curl, but it's not as simple as just setting a flag (-c).
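For the record, curl can do it too, it just needs the continue-at flag with an auto-detected offset rather than a single letter:

    # wget: resume a partial download
    wget -c https://example.com/big.iso

    # curl: -C - resumes from wherever the partial file left off
    curl -C - -O https://example.com/big.iso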
Wget is under GPLv3, so that's what I use more often. Sometimes I will use curl in certain cases, but yes, I will use a GPL product over a non-GPL product if given a choice.
There is no other industry where tools are debated as much as in IT. We literally waste tons of hours arguing over minor differences and nuances that really shouldn't matter that much.
Everything 6 years old is new again: https://news.ycombinator.com/item?id=1241479