I was really expecting a serious discussion about useless and dangerous flags, outdated encryption, expensive and dangerous renegotiations... I got a one-line complaint about "network traffic" (read up on the difference between latency and bandwidth!), caching, and bad tooling (better tooling exists; go learn it).<p>There are plenty of things to complain about in TLS, but the article touches none of them. What a bummer.
I really hope I'm not the only person who mentally groans whenever I see yet another "X considered Y" clickbait title. It's the tech equivalent of "this one weird trick" or "X Happened And You Won't Believe What Happened Next".
The problem with this argument is that there are <i>very</i> high-security pages on the Internet --- things that protect people's bank accounts or most sensitive personal information --- and they're not going away. The junction, at the protocol level, between insecure web sites and secure ones is a major design weakness; we would have fewer attack vectors in the long run if we could count on uniform encryption across the web.
> Seems to me a bit like equipping everyone with armour to make shooting them more difficult. Solving the problem the wrong way?<p>I don't know, making humans immune to bullets would be an elegant solution to the gun control debate which doesn't involve disagreements over the Second Amendment, and everyone would win.
The problem with SSL/TLS is that it is binary. There's currently a very strong pro-binary movement in the ranks of Internet infrastructure engineers, which probably originated at Google. Yes, binary protocols are marginally more efficient, but they are inherently harder to understand, debug, and generally see what's happening with, especially in high-stress conditions when something fails in production. Binary protocols are more complex than text protocols, and more complexity leads to negligence and security problems (e.g. the recent OpenSSL bugs). Secure systems are simple systems (OpenBSD gets this right).<p>Text-based protocols are the greatest thing that UNIX brought to the world. There should be more of them, especially in security-sensitive areas.
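<p>The debuggability point is concrete: a text protocol can be poked at by hand with nothing but a shell, while the encrypted equivalent already needs a tool to speak the binary handshake for you (the host below is just an example):<p><pre><code># plain HTTP: type the protocol by hand and read the reply
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc example.com 80

# HTTPS: you already need a helper that performs the TLS handshake
openssl s_client -connect example.com:443 -quiet
</code></pre>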
Amusingly, the "one-line" server is not only "not really one line", but also contains a number of errors and other incongruities:<p>1. there's no reason to put : at the start<p>2. z=aa is the same length as z=$r<p>3. there are double quotes where there shouldn't be and none where there should be<p>4. the sed quoting is wrong and only works because file names cannot be empty<p>5. useless use of subshells<p>6. it won't work with echo implementations that don't parse escape sequences or don't accept -e<p>7. it parses the output of ls<p>But most importantly, the whole first part can easily use TLS with "openssl req -x509 -newkey rsa:4096 -nodes -subj /CN=localhost -keyout server.pem -out server.pem; openssl s_server".
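<p>For readability, that last part split onto separate lines (s_server reads server.pem from the current directory by default and listens on 4433; the -WWW flag, which makes it serve local files over HTTPS, is an extra beyond the original one-liner):<p><pre><code># self-signed certificate and key, both written to server.pem
openssl req -x509 -newkey rsa:4096 -nodes -subj /CN=localhost \
    -keyout server.pem -out server.pem

# TLS server on port 4433, serving files from the current directory
openssl s_server -accept 4433 -WWW
</code></pre>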
There are several other very important reasons missing from this article, which I think invalidate part of the argument.<p>One is the widespread use of open wifi networks. I know many people don't bother to redirect traffic through a VPN when on open wifi, which means anyone on the network can monitor their traffic. This might be mostly innocuous, but at worst they can steal login credentials and personal info.<p>The second is ad/analytics tracking networks. By using SSL, you force your trackers to use SSL as well. Small comfort for those who despise this anyway, but it's better than these networks moving plaintext identifiers and info about you around, allowing it to be monitored as you surf around the web.<p>I believe the third is widespread government surveillance/mass spying. By using SSL you do two things: prevent (or at least complicate) third-party interception of data, and decrease the signal-to-noise ratio (making it less likely that any given encrypted stream is actually something valuable and worth breaking).
Hopefully the argument about back-and-forth traffic in SSL will soon be obsolete if zero-RTT handshakes are implemented in TLS 1.3. Surely it would then be comparable to standard HTTP requests?
Total clickbait. More like websites with black backgrounds and bright green monospace fonts considered unreadable.<p>No major browser will be supporting the insecure mode of http/2. I don't think I'm alone in thinking that is a good thing. I like to know that the page I'm interacting with hasn't been tampered with, whatever website I'm on. Nefarious certificate authorities aside, TLS is the way to do that.<p>Besides, connections (especially mobile) are getting faster all the time. I'd say encouraging better connectivity is a more worthwhile pursuit than allowing everyone to turn off TLS.
A counter to the author's "webserver in 1 line of code" - <a href="https://gist.github.com/denji/12b3a568f092ab951456#simple-golang-httpstls-server" rel="nofollow">https://gist.github.com/denji/12b3a568f092ab951456#simple-go...</a><p>I prefer proxying SSL (and automatic generation of Let's Encrypt certificates) in containers so that my web servers don't have to worry about that aspect of configuration.
This post focuses only on the technical costs of TLS. The reality that we currently live in contains a hostile network where unarmoured packets are the easiest of targets. The movement to put TLS on everything is a reaction to the hostility and is overwhelmingly driven by #1: A legitimate interest in security.
SSL/TLS is bloated but that's not a reason <i>not</i> to use it.<p>Rather, it's a reason we need some TLSv2 that removes the crap and focuses on only three encryption/authentication modes:<p>* Desktop: high throughput, lots of CPU, minimal latency
* IoT: small throughput, very little CPU, latency acceptable
* Mobile: small to medium throughput, some CPU, minimized latency<p>A lot of bloated protocols are still good; they're bloated because of backwards compatibility and because everyone and their kitchen sink needs to be able to decode them.
In my perfect world, you'd receive a certificate from your ISP when it assigns you one of the IP addresses it was itself assigned, and you'd receive a certificate from your registrar when you purchase a domain name. The former certificate would be good for the duration of your IP assignment; the latter for the duration of your domain ownership.<p>The IP-level certificate would be used for IPsec; the DNS-level certificate would be used for HTTP and other protocols; if you needed some other, stronger sort of identity verification then you'd need to take other measures.<p>This would solve the accessibility problem.<p>As for proxying, I think that HTTP had a really interesting idea with proxying, but it just doesn't work in practice. Proxies are untrustworthy, so it doesn't make sense to use them.<p>As for speed, I don't think SSL is noticeably slow from a modern phone.
The author lists legitimate motivations for why people want to see 100% SSL adoption.<p>The CA system also began with such good intentions. But the profit motive enveloped those good intentions. Certificates became a business, and the quality of the software became an afterthought.<p>The same may be, or already is, happening with SSL/TLS deployment. With a function such as encryption, one cannot ignore software quality. Poor quality can defeat the whole purpose of the software. There is no point in using bad encryption software.<p>One of the good intentions the author cites is that people want ubiquitous encryption. Is encryption synonymous with SSL? Why? SSL is not the only system ever written to encrypt internet traffic. And it is probably far from the best one that could be written.<p>Nothing wrong with the good intentions. But is SSL an asset or a liability? There is a cost to taking on SSL's baggage of complexity, and maybe it's only worth it if the benefit achieved is real and not illusory.<p>If SSL can so easily be exploited, then the false sense of security its name inspires may cause more problems
than SSL solves. But that's only for users. Others with purely commercial goals stand to profit immensely from SSL adoption, the same way businesses did from CA certificates.<p>SSL was not created with the intent to protect non-commercial communications. It was created in the 1990s by Netscape to allow for "e-commerce" in their Navigator browser. It served its purpose.<p>SSL is old and people are attempting to retrofit it with "improvements", such as being able to host multiple HTTPS sites, each with its own certificate, on one IP address. This is a hack. It's called SNI and it breaks a lot of software. People should consider why such a "feature" even needs to be implemented. Is it for the benefit of the user? The CA business has become nothing more than an impediment for many people.<p>Costs vs benefits. Not just for business but for users.
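<p>For reference, the client side of SNI is easy to see with openssl's s_client: the requested hostname travels unencrypted in the ClientHello, which is what lets one IP address choose between certificates (the address and hostname here are placeholders):<p><pre><code># without -servername the server has to guess which certificate to present;
# with it, the requested name is sent in the clear in the TLS ClientHello
openssl s_client -connect 203.0.113.10:443 -servername example.com
</code></pre>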
Missed the biggest point, which is cognitive overhead. HTTP is simple to understand, and it has thrived because of this. What a pain it is to get Wireshark to decode TLS traffic; that is not just cognitive overhead but debugging overhead too.
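<p>To give a sense of the extra steps involved, one common approach is to have the browser export its session keys and point Wireshark at them (a sketch; Firefox and Chrome honour SSLKEYLOGFILE, and the preference name varies between Wireshark versions):<p><pre><code># ask the browser to dump per-session TLS secrets,
# then launch the browser from this same shell
export SSLKEYLOGFILE=/tmp/tls-keys.log
firefox

# point Wireshark at the key log so it can decrypt captured TLS traffic
# (older Wireshark versions call the preference ssl.keylog_file)
wireshark -o tls.keylog_file:/tmp/tls-keys.log
</code></pre>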
Trusted (i.e. no warnings) HTTPS localhost for Mac requires about 10 minutes to set up. After that:<p><pre><code> https-server
</code></pre>
Will give you: <a href="https://certsimple.com/images/blog/localhost-ssl-fix/trusted-localhost.png" rel="nofollow">https://certsimple.com/images/blog/localhost-ssl-fix/trusted...</a><p>Details: <a href="https://certsimple.com/blog/localhost-ssl-fix" rel="nofollow">https://certsimple.com/blog/localhost-ssl-fix</a>
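<p>The rough shape of those 10 minutes, for the curious (a sketch only; the linked post has the exact steps and how to point https-server at the resulting files):<p><pre><code># self-signed certificate for localhost
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj /CN=localhost \
    -keyout localhost.key -out localhost.crt

# tell macOS to trust it system-wide
sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain localhost.crt
</code></pre>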
> It stops proxies from caching responses between different clients. There is no way to fix this.<p>There is, at least in corporate environments. We have, via proxy.pac, a couple of ordinary proxies which act as regular caches with a low TTL, and additionally a <i>huge</i> (read: multiple TB of storage) proxy which caches, with an extremely high TTL, the auto-update traffic from Apple, MS, Debian and Ubuntu, as well as the media CDNs of some major newspapers.<p>It works because our machines have its CA certificate locally installed.
Somewhat related: I went to check something on my home router for the first time in months and learned that:<p>a) it uses an old version of SSL to serve up its admin page<p>b) all modern browsers refuse to load that page and no longer offer an override<p>I had to dig up and load an old unpatched browser so I could turn off SSL completely on my router and continue to administer it. Am I more secure now? I'm not sure.
As a compromise between SSL and plain HTTP, wouldn't it be enough for most of the content to be signed? E.g. background images don't necessarily have to be encrypted. They can be sent in plain sight with a signature which ensures that the image hasn't been modified. The signature only has to be computed once, so the overhead is negligible.
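<p>Roughly the idea, sketched with openssl (site.key/site.pub are placeholder names for a keypair the site would publish; how browsers would discover and check the signature is the part that doesn't exist today):<p><pre><code># sign a static asset once, at publish time
openssl dgst -sha256 -sign site.key -out image.png.sig image.png

# anyone holding the site's public key can verify the asset wasn't modified in transit
openssl dgst -sha256 -verify site.pub -signature image.png.sig image.png
</code></pre>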