I don't buy this.<p>Hacker News is one of the most responsive websites I know, and it is run on a single server somewhere in the USA, while I am in Europe.<p><pre><code> If you have users in Sydney, Australia ...
... you are floored at 104ms of latency for your request
</code></pre>
When I open AirBnB with a clean browser cache, it takes several seconds until I see something useful. Those 104ms of latency don't make a dent.<p>Reddit takes over 5 seconds until the cookie banner is ready for me to click it away.<p>Twitter takes 6 seconds to load the homepage and display the single tweet which fits on it.<p><pre><code> preview images take a little longer to load
</code></pre>
Preview images of what? Images should usually be served through a CDN that caches them directly in the user's country. That's extremely cheap and easy to set up, nothing compared to running an application in multiple datacenters.
The speed of light in a fiber optic cable is slower than light in a vacuum, roughly two-thirds of it, about 2e8 m/s.<p>If you feel latency, it's probably not the one-way or round-trip latency, but rather the MANY round trips that are typically required for an HTTP request. DNS is probably 2 round trips (CNAME then A), and that has to cross the ocean via your resolver of choice (8.8.8.8 or whatever) to get to the authoritative server if it's not already cached (or distributed; big DNS providers will serve your zone in many regions). Then you have to set up a TCP session, which is 1.5 round trips. Then you have to set up TLS, which varies, and make an HTTP request, and wait for the response. (I count about 5 round trips on top of DNS until you see the response.)<p>So basically, take the one-way light travel time between the two points and multiply it by 2*(2+5) = 14 one-way trips in the worst case to estimate your time to first byte. Crossing the ocean 14 times is always going to be slow.<p>The underlying issue here is not so much the distance, but rather that TCP, TLS, and HTTP don't care about latency at all. (I'll ignore the application layer, which probably wants to redirect you to /verify-session-cookie and then /hey-you-logged-in for some reason. And yes, TLS 1.3 has 0-RTT handshakes now too, eliminating some trips.)<p>This is the problem that HTTP/3 aims to fix; one round trip replaces the TCP handshake, TLS handshake, and HTTP request. You shoot out a packet, you get back an HTTP response. (You still have to do the DNS lookup, so we'll call this 3 round trips total.)
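<p>A rough sketch of that arithmetic (TypeScript; the distance, fiber speed, and trip counts are illustrative assumptions, not measurements):<p><pre><code>
// Back-of-the-envelope time-to-first-byte estimate.
const FIBER_SPEED_M_PER_S = 2.0e8;      // ~2/3 of c in fiber (assumption)
const SYDNEY_TO_US_EAST_M = 16_000_000; // ~16,000 km path (assumption)

const oneWayMs = (SYDNEY_TO_US_EAST_M / FIBER_SPEED_M_PER_S) * 1000; // ~80ms

// Worst case from above: 2 DNS round trips + ~5 round trips for TCP + TLS + HTTP,
// i.e. 2 * (2 + 5) = 14 one-way crossings before the first byte arrives.
const worstCaseTtfbMs = oneWayMs * 2 * (2 + 5); // ~1120ms

// HTTP/3 collapses transport, TLS and the request into ~1 round trip, so with DNS
// that is roughly 3 round trips = 6 crossings instead of 14.
const http3TtfbMs = oneWayMs * 2 * (2 + 1); // ~480ms

console.log({ oneWayMs, worstCaseTtfbMs, http3TtfbMs });
</code></pre>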
I've been doing this long enough that I remember when all the big web sites were hosted in California. In fact, my company had its web farm in Sunnyvale which we managed over frame relay from Atlanta.<p>Whenever I'd visit the west coast, I was shocked how much faster the web seemed.<p>So I sympathize with the sentiment.<p>Thing is though, the entire web feels pretty sluggish to me these days. And that's with us-east-1 less than 300 miles away from me. Because most web sites aren't slow due to where they're hosted, but rather because of how bloated with crap most of them have become.
Good article! I always notice this same effect when I visit my parents in Argentina or I'm in Europe.<p>> Using a global CDN can help get your assets to your users quicker, and most companies by this point are using something like Cloudflare or Vercel, but many still only serve static or cached content this way. Very frequently the origin server will still be a centralized monolith deployed in only one location, or there will only be a single database cluster.<p>Notably: even if the source of truth is single-region, there's a lot that can be done to improve the experience by flushing parts of the page at the edge.<p>Check out <a href="https://how-is-this-not-illegal.vercel.app/" rel="nofollow noreferrer">https://how-is-this-not-illegal.vercel.app/</a> where the layout.tsx[1] file is edge-rendered right away with placeholders, and then the edge renderer streams the content when the single-region database responds.<p>Furthermore, consider that parts of the page (like the CMS content) can also be cached and pushed to the edge more easily than, say, a shipping estimate or personalized product recommendations, so you can have content as part of that initial placeholder flush. We have an e-commerce example that shows this[2].<p>[1] <a href="https://github.com/rauchg/how-is-this-not-illegal/blob/main/app/layout.tsx">https://github.com/rauchg/how-is-this-not-illegal/blob/main/...</a><p>[2] <a href="https://app-router.vercel.app/streaming/edge/product/1" rel="nofollow noreferrer">https://app-router.vercel.app/streaming/edge/product/1</a>
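<p>The pattern, roughly (a minimal App Router-style sketch; the endpoint and component names are made up, not the actual code from those repos):<p><pre><code>
// The page shell renders at the edge immediately; the slow, single-region
// data fetch streams in afterwards via Suspense.
import { Suspense } from 'react';

export const runtime = 'edge';

async function CmsContent() {
  // Hypothetical slow, single-region origin; the shell has already been
  // flushed to the browser by the time this resolves.
  const res = await fetch('https://example.com/api/content', { cache: 'no-store' });
  const { html } = await res.json();
  return <article dangerouslySetInnerHTML={{ __html: html }} />;
}

export default function Page() {
  return (
    <main>
      <h1>Instant shell, streamed body</h1>
      <Suspense fallback={<p>Loading content…</p>}>
        <CmsContent />
      </Suspense>
    </main>
  );
}
</code></pre>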
I think the peering agreements of the local ISP are likely to be a factor as well.<p>When I moved inside Europe I suddenly noticed slow connections to Github pages. I expected that it had something to do with the physical location of the Github pages servers. However, when I connected to the VPN of my previous location it was all snappy again. That eliminated the physical distance as a cause.
To counter the top comment at the moment, being from Sydney, Australia, I totally <i>do</i> buy it. It also works both ways: if you want to build something with global reach but host it locally, you’re immediately going to be penalised by the perceptions that come with latency. I might also add that the latency builds up non-linearly the more throughput you’re attempting to achieve (e.g. streaming video).<p>Disclaimer: I am currently working for a startup attempting to build a video creation and distribution platform with global reach.
As an Australian, I agree that I usually prefer when a service is hosted nearby. Yet… 200ms latency, that’s pretty good actually. For some real data, I just tried `ping ec2.us-east-1.amazonaws.com` and the time is 240ms. That’s in Tasmania, NBN over Wifi. I’m happy with that!<p>But the problem, as many of the other commenters are saying, is that a single request to us-east-1 is actually fine; a modern web app makes many requests, and that compounds real quick. I actually think living here is an advantage as a web developer because it’s like those athletes that only train at high altitudes — living in a high latency environment means you notice problems easily.
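<p>A quick sketch of how that compounds (hypothetical endpoints; only the arithmetic matters):<p><pre><code>
// With ~240ms RTT, each awaited request pays a full round trip.
const endpoints = ['/session', '/user', '/feed', '/notifications', '/flags'];

// Sequential: ~5 * 240ms ≈ 1.2s of pure latency before the page is usable.
for (const path of endpoints) {
  await fetch(`https://api.example.com${path}`);
}

// Parallel: all five in flight at once, so roughly one round trip (~240ms).
await Promise.all(endpoints.map((path) => fetch(`https://api.example.com${path}`)));
</code></pre>
<p>Same number of requests, very different feel from 240ms away.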
Whether you need a CDN shows up pretty clearly when you monitor uptime from around the world.<p>The response time for Bitbucket, for example, is:<p>100ms from us-east<p>300ms from us-west<p>400ms from eu-central<p>600ms from tokyo<p>800ms from sydney<p>(numbers from OnlineOrNot)
Related: <a href="https://news.ycombinator.com/item?id=36506865">https://news.ycombinator.com/item?id=36506865</a><p>Particularly the part quoted in this comment: <a href="https://news.ycombinator.com/item?id=36507013">https://news.ycombinator.com/item?id=36507013</a><p>But tbh I think this is mainly a problem for apps that have a long chain of dependent requests. If you make 3 requests one after the other, it's probably fine. If the DOM isn't stable until after a series of 10 requests, any amount of latency is noticeable.
As a European visiting the USA, you certainly find that most of the internet just works better.<p>However I think a big chunk of the effect is that European mobile networks seem to take a second or two to 'connect' - i.e. if you take your phone from your pocket and open an app, the first network request takes 2-3 seconds. Whereas for whatever reason, the same phone in the USA doesn't seem to have such a delay.
I usually get ~300ms ping from my home to us-east-1. You can absolutely feel the latency, especially on SPAs that perform many small XHRs sequentially, which compounds the latency even more. Apps that feel almost instant on a network with <10ms latency suddenly feel pretty sluggish.<p>Some of my worst experiences: being forced to use SFTP to transfer thousands of small files to a server in us-east-1, which took hours due to latency alone, compared to minutes for the same set of files via rsync or a compressed archive; and using RDP to access a remote Windows machine behind a VPN, then running putty there to reach a Linux server (the VPN only allows RDP traffic), and then needing to transfer thousands of files to that Linux server as well (my solution was to punch a hole through their shitty VPN via an intermediary bastion host I fully control, which allows me to use rsync).
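<p>Rough numbers for why the per-file protocol hurts so much (illustrative assumptions, not measurements):<p><pre><code>
// SFTP-style transfers pay several protocol round trips per file (open, write
// acks, close), so at high RTT the latency alone dominates.
const RTT_S = 0.3;               // ~300ms to us-east-1 (assumption)
const FILES = 5000;              // "thousands of small files" (assumption)
const ROUND_TRIPS_PER_FILE = 4;  // varies by client/server (assumption)

const latencyCostS = FILES * ROUND_TRIPS_PER_FILE * RTT_S; // 6000s ≈ 100 minutes
console.log(`latency cost alone: ~${(latencyCostS / 60).toFixed(0)} minutes`);

// A single compressed archive (or rsync's pipelining) pays that latency only a
// handful of times total, so the transfer is bandwidth-bound instead of RTT-bound.
</code></pre>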
Funny, I can pinpoint players' locations based on their pings pretty accurately too: 300ms+ is Asia, 350+ is Australia, Americans are 120+, South America 170+.<p>Ping to the USA has dropped the most; it used to be 225ms in the earlier online days.
This article brought to mind a different but related scenario. I live on an island that was recently affected by a typhoon. Internet speeds are usually pretty good, but in the aftermath of the storm cable internet has been up-and-down depending on the day, and the cell towers are very spotty. I've found that most modern apps depend on a high-speed connection, and give a very poor experience otherwise. Of course this seems obvious in hindsight, but it's a different experience living through it.
AFAIK, there is no generally available datastore that does multi-region with the ability to move the leader for a given subset of data. Something like what's described in the Spanner paper (microshards, and moving microshards around based on user access) would be amazing if it were accessible.
No mention, or realisation, of the storage and bandwidth requirements for hosting anything other than text. HTML, JS and database queries are cheap to stick on a global CDN, but when it comes to larger multimedia files, such as images and videos, the costs soon skyrocket.
I remember when we first moved to the cloud from a datacenter. It was in us-east-1, and literally the day after the switch-over (before we started configuring multi-region) was the first time us-east-1 had its major outage.<p>The owners were pissed that it had gone down, but it wasn't really that it went down; it was that we were basically sitting around with our thumbs up our ass. When things went down in our DC, we just fixed them, or at least we could give an ETA until things went back to normal. We had absolutely nothing. We couldn't access anything, and AWS was being slow in acknowledging an issue even existed.<p>That was a good lesson that day: the cloud is not necessarily better.
If you want a smooth experience that is easy to set up, you can provide a download link (gasp), serve that over a CDN, and just have your app be native.<p>You'll only pay for backend queries, not for every single button style.
It’s a solvable problem if you optimize for multiple regions from day 1 of the app, but migrating an existing stack to multi-region after the fact is often a large enough undertaking that you pick the region of the majority of your users and go with it.<p>Setting up an active/passive region for the database is becoming more common, but an active/active design is still relatively rare outside of apps designed for massive scale.
Even if you have gigabit in Australia, the latency when browsing YouTube and clicking through menus is a world of difference compared to the US.
The tab of the browser devtools that lets you simulate slow connections should probably add simulation of this kind of latency, as well as a 'simulate AWS outage' toggle if that's even possible (I don't know enough about DNS to know how hard the latter is).<p>I guessed from the title that this would focus on redundancy, but I guess that's rarely noticeable.
> In reality, the ping you’ll experience will be worse, at around 215ms (which is a pretty amazing feat in and of itself - all those factors above only double the time it takes to get from Sydney to the eastern US).<p>Isn't it double just because ping measures round trip time?
There’s also the device speed. It might provide a different reference point for different users.<p>If you’re opening a website on a low end smartphone with an outdated system, the network latency might be not noticeable (because the UX of the device is so slow anyway).
Does anyone know of an out-of-the-box solution for measuring regional latency? If not, please somebody make and productize it.<p>I'd love to know how my sites behave in Frankfurt, Mumbai, SF, Sydney, etc.
For more pings check out: <a href="https://wondernetwork.com/pings" rel="nofollow noreferrer">https://wondernetwork.com/pings</a><p>I believe that’s the source he’s using.