Large tech companies seriously do not care. They say they do, and they point to all these heuristics and optimizations, they point to Chrome's dev tools where you can simulate slow connections, etc. Great.<p>The problem is, they're taking an experience that is fundamentally, ridiculously heavy, and then spending thousands of man-hours trying to optimize it. No one even <i>considers</i> that maybe it's the experience itself that is too heavy, and no optimization can help that.<p>Take the YouTube home page. Load it up, and you'll find it's making over 200 requests, transferring megabytes of data. Google's most obvious solution: let's speed up TLS, make each request go faster, let's invent new image and video compression algorithms to shrink each response, let's batch requests to reduce latency. Technology, complexity, more code, more code.<p>No one actually takes a step back and asks whether the YouTube home page should make 200 requests at all. What if it only made 20? We have to load some thumbnails, so there's bound to be a lot of requests there, but otherwise what the heck is all this JS?<p>TLS on one request isn't the problem. The problem is the hundreds of requests a typical website leans on.<p>Uncomfortable opinion: the only reason the internet has survived this long is Moore's Law. We've developed all of this technology and SDLC process in an era where another 20% jump in performance was always just around the corner, so who cares if it's slow today. Yeah, that era is done. And we, as an industry, are completely fucked. It's not an overstatement to call it a "back to fundamentals" moment, and it's going to cost us billions of collective dollars to engineer for it.
The complaints seem to be two-fold:<p>* Websites are big and take a long time to download.<p>* They had previously solved this problem with a caching server, but theirs broke with TLS.<p>The author is apparently unaware of the options they have:<p>* Run a proxy server that caches pages. Basically all software supports proxies. Secure and well understood, with no PKI to manage. You can allow access to the proxy with no TLS or get a public cert. Works very well with those old devices too.<p>* Run an HTTPS caching server and add its CA to the client systems. A little more effort, but transparent.
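For the first option, pointing clients at the proxy really is a one-liner in most stacks; a Python sketch with a made-up proxy address:

```
# Point a client at a LAN caching proxy (the address is hypothetical).
# Most stacks also pick this up from the HTTP_PROXY / HTTPS_PROXY env vars.
import urllib.request

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": "http://cache.lan:3128",
                                 "https": "http://cache.lan:3128"})
)
print(opener.open("http://example.com/").status)
```

(For https:// URLs the client just tunnels through the proxy with CONNECT, so nothing gets cached -- which is exactly what the second option's CA is for.)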
> Lots of things along those long and lonely signal paths can cause the packets to get dropped. 50% packet loss is not uncommon; 80% is not unexpected.<p>TCP doesn't perform well even at 5% packet loss. 50% loss, coupled with the tremendous latency of the link, makes it close to useless. That long and lonely signal path needs a link layer doing its own retransmission between the terminals and the satellite. Unfortunately it's probably a bent-pipe transponder.
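For a rough sense of scale (my numbers, not the article's): the Mathis et al. approximation for steady-state TCP throughput is MSS/RTT * sqrt(3/2)/sqrt(p). It stops being meaningful anywhere near 50% loss, but even at 5% loss over a ~600 ms geostationary round trip it caps you at roughly 100 kbit/s no matter how fat the pipe is:

```
# Back-of-envelope TCP throughput via the Mathis et al. approximation:
#   throughput ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)
# Only a rough model; it breaks down entirely at very high loss rates.
from math import sqrt

MSS = 1460   # bytes, typical segment size
RTT = 0.6    # seconds, roughly a geostationary round trip

for loss in (0.0001, 0.01, 0.05):
    bps = (MSS / RTT) * sqrt(1.5) / sqrt(loss) * 8
    print(f"{loss:>7.2%} loss -> ~{bps / 1000:,.0f} kbit/s ceiling")
```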
"Even in the highly-wired world, you can still find older installs of operating systems and browsers: public libraries, to pick but one example. Securing the web literally made it less accessible to many, many people around the world."<p>I'm a librarian and I talk to hundreds of other librarians a year about technology and security and all this stuff. In my experience it is incredible rare to find a library that is running anything THAT old.
While I don't have to deal with high latency, I do have to use a 64 kbit link from time to time, so I've developed some processes to deal with it.<p>- wiki, old.reddit, most news sites, GitHub, etc.: CSS and JS are loaded from my Tampermonkey scripts, and I update them every half a year or so. Then in uMatrix I block loading CSS and JS from their servers so that only my own copies are loaded (which also has the benefit of allowing custom themes/fixes).<p>- Google (YouTube) and Facebook sites are a major pain. You can use youtube-dl to download videos, and you can even perform a basic search like `youtube-dl ytsearch5:keyword --get-title --get-description`, but I haven't researched whether there are better YouTube-alternative sites because on 64k it's unusable anyway. Otherwise, using the mobile apps instead of the sites is the only option here, because Google changes these assets quite a lot and the compression/obfuscation changes the names of CSS classes.<p>- Use RSS (Inoreader) as much as possible: with RSS you get all the updates, and Inoreader in particular has a neat feature called "Load mobilized content" which grabs only the text from the site and sends it back. I also use it for my YouTube subscriptions. (A quick sketch of the RSS idea follows below.)
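The RSS point generalises well: fetching just titles and links is orders of magnitude lighter than loading the sites themselves. A sketch using the third-party feedparser package (the feed URL is only an example):

```
# Fetch only titles/links from a feed instead of loading the full site.
# Uses the third-party "feedparser" package (pip install feedparser).
import feedparser

feed = feedparser.parse("https://news.ycombinator.com/rss")
for entry in feed.entries[:10]:
    print(entry.title, "->", entry.link)
```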
Middleboxes can intercept and cache HTTPS; you need to operate your own CA, however (some middleboxes can set this up fairly automatically, though it can still be touch-and-go).
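On the client side, that mostly amounts to getting the middlebox's root cert into each trust store; for a one-off test in Python (the CA path is made up):

```
# Trust an intercepting middlebox's CA for outbound HTTPS (path is hypothetical).
import ssl, urllib.request

ctx = ssl.create_default_context(cafile="/etc/ssl/middlebox-ca.pem")
with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
    print(resp.status, len(resp.read()))
```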
It is a real problem. While traveling last summer and working remotely, I experienced it first hand.<p>Is there a really easy way of mimicking all the effects of this type of latency, so I could periodically test the stuff I set up?<p>Also, if it is just HTTPS, then it is possible to proxy through something that downgrades the protocol, but it feels dirty.
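On Linux, the usual answer (not from the article; assumes iproute2/netem and root) is to degrade an interface with tc. A small wrapper so the rules always get torn down again:

```
# Minimal sketch: emulate a high-latency, lossy, thin link on a Linux
# interface with tc/netem (needs root; the iface and numbers are examples).
import subprocess
from contextlib import contextmanager

@contextmanager
def degraded_link(iface="eth0", delay="600ms", jitter="50ms", loss="5%", rate="512kbit"):
    subprocess.run(
        ["tc", "qdisc", "add", "dev", iface, "root", "netem",
         "delay", delay, jitter, "loss", loss, "rate", rate],
        check=True,
    )
    try:
        yield
    finally:
        # always restore the interface, even if the test blows up
        subprocess.run(["tc", "qdisc", "del", "dev", iface, "root", "netem"], check=True)

if __name__ == "__main__":
    with degraded_link():
        subprocess.run(["curl", "-so", "/dev/null", "-w", "%{time_total}\n",
                        "https://example.com/"])
```

Chrome's throttling simulates bandwidth and round-trip time but not packet loss, which is the part that really hurts here.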
The problem is caching, trust, and delegation. Too many proxy tools simply don't play well with SSL/TLS, and yes, there is good cause not to trust public infrastructure and ISPs, so HTTPS itself <i>is</i> desirable.<p>There's also the problem, generally, of one-size-fits-all security so far as websites are concerned: there's really very little content I receive that's specific to me, and much of that is Hacker News and a few other forum sites. The content itself is almost wholly public. But I cannot cache or otherwise proxy it.<p>Locally, I've set up both Squid and Privoxy, mostly for shins and grits, but also to explore the use and viability of proxies these days.<p>Squid caches less than 10% of my traffic.<p>Privoxy can filter by hostname, but little within pages -- no path or content actions work for HTTPS URLs.<p>I've looked at the SSL options of each -- Privoxy seems a lost cause, but Squid looks as if it <i>should</i> be able to MITM TLS traffic, though I can't sort out how, or sensibly verify it. And I understand browsers will start screaming bloody murder if they detect this as well.<p>The notion of a trusted delegated proxy seems potentially useful. As with the author of the article, I'm wondering whether there is any movement toward developing HTTPS-friendly proxy tools in a sane manner.
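For what it's worth, mitmproxy is the tool I know of that makes the MITM part tolerable: it generates its own CA (which you then install on your clients) and lets you script behaviour in Python. A naive sketch of a caching addon, ignoring Cache-Control, Vary, cookies, and everything else a real cache must honour:

```
# cache_addon.py -- naive mitmproxy addon that caches GET responses in memory.
# A sketch only: no expiry, no respect for caching headers, unbounded memory.
from mitmproxy import http

CACHE = {}  # url -> (status_code, body, headers)

def request(flow: http.HTTPFlow) -> None:
    if flow.request.method == "GET" and flow.request.pretty_url in CACHE:
        status, body, headers = CACHE[flow.request.pretty_url]
        # short-circuit: answer from cache without touching the network
        flow.response = http.Response.make(status, body, headers)

def response(flow: http.HTTPFlow) -> None:
    if flow.request.method == "GET" and flow.response.status_code == 200:
        CACHE[flow.request.pretty_url] = (
            flow.response.status_code,
            flow.response.content,
            dict(flow.response.headers),
        )
```

Run it with `mitmdump -s cache_addon.py`, point clients at it as a proxy, and install mitmproxy's generated CA cert on those clients; browsers only scream bloody murder when that CA isn't in their trust store.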
Couldn't a (relatively small) proxy server be set up that intercepts the request, checks URL+cookies against a cache, and makes its own HTTPS requests on cache misses? Maybe even whitelist cacheable domains so you reduce how fast the cache fills?<p>I haven't really caffeinated yet, so I may be missing something important here, but this seems like a few hours' worth of work?
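Roughly, yes. A sketch of the shape of it in Python (hostnames and port are made up; it ignores caching headers, expiry, errors, and anything that isn't a GET):

```
# Sketch: LAN-facing plain-HTTP proxy that re-issues requests over HTTPS
# and caches GET responses by URL + cookies for whitelisted hosts.
import hashlib
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse

CACHE = {}                                        # key -> (status, content_type, body)
CACHEABLE = {"en.wikipedia.org", "example.org"}   # hypothetical whitelist

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # proxy-style requests arrive as "GET http://host/path"; upgrade to https
        url = self.path.replace("http://", "https://", 1)
        host = urlparse(url).hostname or ""
        key = hashlib.sha256((url + self.headers.get("Cookie", "")).encode()).hexdigest()

        if host in CACHEABLE and key in CACHE:
            status, ctype, body = CACHE[key]
        else:
            with urllib.request.urlopen(url) as upstream:   # the real HTTPS fetch
                status = upstream.status
                ctype = upstream.headers.get("Content-Type", "application/octet-stream")
                body = upstream.read()
            if host in CACHEABLE:
                CACHE[key] = (status, ctype, body)

        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 3128), CachingProxy).serve_forever()
```

The catch, as others in the thread note, is that this only helps clients making plain http:// requests through the proxy; an https:// URL arrives as CONNECT and tunnels straight through, which is exactly the part that breaks caching.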
A bit of a clickbait title, but let's entertain the idea for a second.
Another one of those articles that claim that securing everything is a bad idea simply because the author's routine got shaken or altered. What is his alternative, then? Remove it so you have a 'slightly less slow' experience? (Although, to be fair, he states he does not know the solution.) It's like complaining that trams are less accessible because they have doors instead of just being a platform with wheels. Sure, through that lens you are right. But you are also willingly ignoring all the other facts.
The problem here is super slow internet, not encryption. And no matter how hard and how often the security naysayers repeat it, that does not make it a valid reason to roll back. Slow internet in Africa needs to be solved for a multitude of reasons; not one of them is 'experience'.
With the advent of LEO communication satellite constellations enabled by miniaturisation and reduced launch costs, hopefully the problem of satellite Internet access's extremely high latency should go away in the future. I expect the more localised signal could improve the bandwidth and cost too.<p>That said, I'm sure the weight of typical pages will grow leaps and bounds too, as they've been doing.
Plain text websites load just fine with HTTPS/TLS via very-small-aperture terminal (VSAT) ISPs. This is based on my experience with 1024k/256k Hughes service in rural US.
Making systems less accessible is the defining characteristic of computer security. Steve Yegge addresses this very point cogently in one of his old rants.
There's also the question of whether HTTPS even makes sense for most sites. Why bother with the extra security overhead for a simple blog with no user login and a basic comment system? Surely there's a middle ground between preventing man-in-the-middle attacks on simple content sites and creating a complete bidirectional encrypted connection, right? Signed content hashes, maybe?
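Something like that half-exists already: Subresource Integrity (plain hashes, verifiable without any confidentiality) for a page's assets, and the Signed HTTP Exchanges proposal for whole pages. An SRI value is just a hash plus base64 (sketch; the file path comes from the command line):

```
# Compute a Subresource Integrity (SRI) value for a static asset.
# SRI only covers subresources a page fetches, not the page itself;
# signed exchanges are the closer match to "signed content hashes".
import base64, hashlib, sys

data = open(sys.argv[1], "rb").read()
print("sha384-" + base64.b64encode(hashlib.sha384(data).digest()).decode())
# then: <script src="app.js" integrity="sha384-..." crossorigin="anonymous"></script>
```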