I would also add at least "sane default options", "continues downloads" and "retries on error" to the Wget column. I recently had to write a script that downloads a very large file over a somewhat unreliable connection. The common wisdom among engineers is that you need to use Wget for this job. I tried using curl, but out of the box it could not resume or retry the download. I would have had to study the manual and specify multiple options with arguments for behaviour that really sounds like something that should just work out of the box.<p>Wget needed one option to enable resuming in all conditions, even after a crash: --continue<p>Wget's introduction in the manual page also states: "Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved."<p>I was sold. Even if by some miracle I managed to get all the curl options for reliable behaviour over a poor connection right, Wget seems to have those on by default, and the sane defaults make me believe it will also do the expected, correct thing even in error scenarios I did not think to test myself. Or - if the HTTP protocol ever receives updates, newer versions of Wget will also support those by default, but curl will require new switches to enable the enhanced behaviour - something I cannot add after the product has shipped.<p>To me it often seems like curl is a good and extremely versatile low-level tool, and the CLI reflects this. But for my everyday work, I prefer Wget, as it seems to work much better out of the box. And the manual page is much faster to navigate - probably in part because it just doesn't support all the obscure protocols called out on this page.
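For reference, a sketch of roughly equivalent "resilient download" invocations (the URL is a placeholder, and --retry-all-errors needs a reasonably recent curl):<p><pre><code> # wget: resume (--continue), retry indefinitely, back off between retries
 wget --continue --tries=0 --waitretry=10 https://example.com/big.iso

 # curl: follow redirects, keep the remote name, resume, retry on errors
 curl --location --remote-name --continue-at - \
      --retry 10 --retry-all-errors https://example.com/big.iso
</code></pre>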
For me the killer feature of wget is that by default it downloads a file with a name derived from the URL.<p>You do:<p><pre><code> wget url://to/file.htm
</code></pre>
and a file named "file.htm" appears in your cwd.<p>Using curl, you would have to do<p><pre><code> curl url://to/file.htm > file.htm
</code></pre>
or some other, less ergonomic, incantation.
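(For what it's worth, curl can get close with -O / --remote-name, which names the file after the last path segment of the URL, or -OJ, which additionally honours the server's Content-Disposition filename - but it still has to be asked for explicitly:)<p><pre><code> curl -O url://to/file.htm
 curl -OJ url://to/file.htm
</code></pre>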
Daniel Stenberg is among that rare breed of developers who put their heart and soul into their creation, a fading trait in the modern world of big tech, where developers seem to be faceless, replaceable cogs in a money-making machine.<p>It's as if he treats curl as his mark on the world of IT.
Seems maybe dated. For example, it excludes both of these from wget in the diagram<p>> HTTP PUT<p>wget --method=PUT --body-data=<STRING><p>> proxies ... HTTPS<p>wget -e use_proxy=yes -e https_proxy=<a href="https://example.com" rel="nofollow noreferrer">https://example.com</a><p>Curl consistently has more options and flexibility, but there are several things on the right side of the Venn diagram where wget does have some capability.
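A rough side-by-side sketch of the PUT case (the endpoint and body are placeholders; wget's --method/--body-data need a wget new enough to have them, around 1.15):<p><pre><code> # wget
 wget --method=PUT --body-data='{"name":"demo"}' \
      --header='Content-Type: application/json' \
      -O - https://example.com/api/items/1

 # curl
 curl -X PUT -d '{"name":"demo"}' \
      -H 'Content-Type: application/json' \
      https://example.com/api/items/1
</code></pre>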
Ok, wow, I didn't know that curl supported so many protocols - but the fact remains that that small intersection area is probably what > 90% of curl/Wget users are using the tools for. So, from a developer's perspective, the overlap is not that big, but from a user's perspective it <i>might</i> appear much bigger...
The best part of the post for me is:<p>"""I have contributed code to wget. Several wget maintainers have contributed to curl. We are all friends."""
Mandatory mention for the comparison made by Daniel Stenberg<p><a href="https://daniel.haxx.se/docs/curl-vs-wget.html" rel="nofollow noreferrer">https://daniel.haxx.se/docs/curl-vs-wget.html</a>
In the olden times we used wget when we wanted to mirror a website. It is a specialized tool.<p>Curl is a general-purpose request library with a CLI frontend (it is also embedded in other programs, and exposed as a standard library API in PHP, etc.).
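A typical mirroring invocation, as a sketch (the URL is a placeholder):<p><pre><code> # -m/--mirror: recursion + timestamping; -k: rewrite links for local
 # browsing; -p: fetch page requisites (CSS, images); -np: don't ascend
 # above the starting directory
 wget --mirror --convert-links --page-requisites --no-parent \
      https://example.com/docs/
</code></pre>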
I guess the most common usage is the overlap between the two. That's why I'd love to see a Venn diagram of where (OS and docker images) each is installed by default!
I've never seen them as competitors!<p>wget is my go-to if I need to download a file now, with the minimum of fuss.<p>curl is what I use when I need to do something fancy with a URL to make it work, or when I'm fiddling with params to make an API work/debug it.
A couple more things wget can do that curl can't.<p>1. wget can resolve onion links. curl can't (yet). You'll get a<p><pre><code> curl: (6) Not resolving .onion address (RFC 7686)
</code></pre>
2. curl has problems parsing unicode characters<p><pre><code> curl -s -A "Mozilla/5.0 (Windows NT 10.0; rv:102.0) Gecko/20100101 Firefox/102.0" https://old.reddit.com/r/GonewildAudible/comments/wznkop/f4m_mi_coño_esta_mojada_summer22tomboy/.json
</code></pre>
will give you a<p><pre><code> {"message": "Bad Request", "error": 400}
</code></pre>
wget, on the other hand, automatically percent-encodes the ñ as UTF-8 - %C3%B1 - and resolves the link perfectly.<p>I've searched the curl manpage and couldn't find a way to solve this. Please help.<p>I'm having to use `xh --curl` [1] to "fix" the links before I pass them to curl.<p>[1] <a href="https://github.com/ducaale/xh">https://github.com/ducaale/xh</a>
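Not a curl-native fix, but one possible workaround (a sketch with a placeholder URL): percent-encode the non-ASCII characters yourself before handing the URL to curl, e.g. with python3:<p><pre><code> url='https://example.com/path/coño/page.json'   # placeholder URL
 encoded=$(python3 -c 'import sys, urllib.parse as u; print(u.quote(sys.argv[1], safe=":/?&="))' "$url")
 curl -s "$encoded"
</code></pre>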
For the FreeBSD users out there also ‘fetch’ is available.<p>Don’t know what the advantages/disadvantages are, but it comes with the default install. It’s usually what I use.
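A minimal sketch of fetch usage, per the FreeBSD fetch(1) manual (the URL is a placeholder):<p><pre><code> fetch https://example.com/file.tar.gz          # saves file.tar.gz in the cwd
 fetch -o /tmp/out.tar.gz -r https://example.com/file.tar.gz   # -o output file, -r restart an interrupted transfer
</code></pre>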
curl is a connection tool while wget is an app. At a basic level they do the same thing, but they excel in different areas.<p>This diagram is clearly and unapologetically biased towards curl. Feels strange that the author of curl doesn’t know what wget actually offers.
I recently found [axel], which is a very impressive wget-like tool for larger files.<p>[axel]: <a href="https://github.com/axel-download-accelerator/axel">https://github.com/axel-download-accelerator/axel</a>
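A quick sketch of typical axel usage (the URL is a placeholder): -n sets the number of parallel connections, -o the output file.<p><pre><code> axel -n 8 -o big.iso https://example.com/big.iso
</code></pre>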
On the cURL side: a ridiculous manual.<p>I regularly forget the order of the values for <i>--resolve</i>; try searching for that word and figuring it out quickly.<p>I've been relegated to grepping a flippin' manual
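For anyone else who keeps forgetting: --resolve takes host:port:address, e.g. to pin a hostname to a specific IP without touching /etc/hosts (the IP below is a placeholder):<p><pre><code> curl --resolve example.com:443:203.0.113.7 https://example.com/
</code></pre>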
Another thing I forget that wasn't supported in wget (but worked in curl) last I checked: IPv6 link-local address scopes (interface names on Linux).
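For reference, the kind of URL meant here - a link-local address with an interface scope - looks roughly like this in curl (the interface name is a placeholder; %25 is the percent-encoded '%' separator from RFC 6874, and -g turns off URL globbing so the brackets pass through untouched):<p><pre><code> curl -g 'http://[fe80::1%25eth0]:8080/'
</code></pre>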
I find wget is more likely to be on a given system than curl by default so I usually reach for that first. But I am squarely in the middle of the venn.
Can anyone explain "happy eyeballs"? Did find one page about it, but wasn't 100% clear what the use case for it being an option was, or where on earth the name came from...
Neat. Love it.<p>Is there a feature matrix to Venn diagram converter?<p>(Deep down) on my To Do list is comparing Ansible, Puppet, Chef, Docker, etc.<p>Which ultimately means some kind of feature matrix, right?<p>With a converter, we'd get Venns for free.