Note that the measurement on geostationary is almost certainly performed on a contended (TDMA) 32:1 oversubscribed consumer-grade link, or worse.

An actual 1:1 dedicated geostationary link, which is very expensive in $/Mbps, has a fixed, flat 492 to 495 ms RTT, plus or minus a tiny bit either way depending on the modem's encode/decode FEC type.

Consumer-grade geostationary could be anywhere from 495 ms in the middle of the night local time to 1350 ms or worse.

Re: the figure for terrestrial fiber service, I'm curious how the presumed residential last-mile "fiber" link in Geoff's example, which is not real gigabit service, would compare to one of the symmetric gigabit last-mile operators that exist in some cities, where you can see actual 980 x 980 Mbps speed test results from fast.com or speedtest.net in a browser.

I'm always very suspicious of anything that says it's fiber but is limited to something like 25 Mbps up. Either it's a totally artificial limit, or in reality it's a VDSL2 link, or "fiber" delivered over DOCSIS 3 copper cable with limited upstream RF channel allocation, etc.
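If you want to sanity-check the ~495 ms figure, the geometry alone gets you most of the way there: a bent-pipe GEO round trip is four ground-satellite traversals. A minimal sketch (the 37,000 km slant range is my assumption for a mid-latitude ground station, not a figure from the comment above; modem and FEC processing add on top of the raw path delay):

```typescript
// Back-of-the-envelope check on the ~495 ms GEO figure: a bent-pipe
// round trip is four ground<->satellite traversals.
const C_KM_S = 299_792.458;      // speed of light in vacuum, km/s
const GEO_ALTITUDE_KM = 35_786;  // GEO altitude above the equator

// Slant range grows as the ground station moves away from the
// sub-satellite point; 37,000 km is an assumed mid-latitude value.
function geoRttMs(slantRangeKm: number): number {
  return (4 * slantRangeKm / C_KM_S) * 1000;
}

console.log(geoRttMs(GEO_ALTITUDE_KM).toFixed(0)); // ~477 ms, absolute best case (nadir)
console.log(geoRttMs(37_000).toFixed(0));          // ~494 ms, typical mid-latitude path
```

The physics puts a hard floor around 477 ms at the sub-satellite point, and a realistic slant path lands right in the 492-495 ms band before any modem overhead, which is why the dedicated-link figure is so flat.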
One interesting upcoming latency twist will be when the Starlink inter-satellite optical links go online for the whole network. Version 0.9 is already in use (and required) for polar coverage, and new batches are all launching with them, but I don't think they've hit critical mass yet to bring the mesh up. Once they do, that will be a significant shift for anyone whose usage depends heavily on intercontinental servers. The speed of light in conventional fiber is only about 70% of c, and for the vast majority of people the actual path their packets take through the network is very far from the ideal great-circle path between two points on the globe (i.e., they first have to travel to the nearest hub, then to the nearest subsea link, which in some cases adds massive travel distance).

Within the Starlink network, though, signals will travel at essentially 100% of c, and as the constellation approaches design capacity the paths will get closer to ideal too (at least to the nearest ground station). At long enough range, the ~40% speed advantage alone makes up for the orbital RTT penalty, even before the path savings, which means Starlink will be able to offer much lower latency than fiber. I think it'll be the first time we see a weird split where your local connection speed is no longer the sole deciding factor, and you can see a radical latency difference between local and very long-range traffic for two different WAN types.
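The crossover distance falls out of a toy model. Assuming (my numbers, not the comment's) that each up/down hop adds roughly one 550 km shell altitude of extra path and the in-space laser path tracks the great circle:

```typescript
// Toy model: when does vacuum-speed laser routing beat fiber at 0.7c?
// Assumptions: each up/down hop adds ~550 km (Starlink shell altitude,
// nadir hop) of extra path; the in-space path follows the great circle.
const C_KM_S = 299_792.458;
const FIBER_FRACTION = 0.7;  // light in fiber travels at ~70% of c
const HOP_EXTRA_KM = 550;    // assumed extra path per up/down hop

function fiberOneWayMs(groundKm: number): number {
  return (groundKm / (FIBER_FRACTION * C_KM_S)) * 1000;
}

function starlinkOneWayMs(groundKm: number): number {
  return ((groundKm + 2 * HOP_EXTRA_KM) / C_KM_S) * 1000;
}

for (const d of [1000, 2500, 5000, 10000]) {
  console.log(`${d} km: fiber ${fiberOneWayMs(d).toFixed(1)} ms, ` +
              `starlink ${starlinkOneWayMs(d).toFixed(1)} ms`);
}
// Crossover where d / 0.7 = d + 1100  =>  d ≈ 2,570 km. Beyond that,
// the straight vacuum path wins even before fiber's routing detours.
```

Real hops are slant paths rather than straight up and down, which pushes the breakeven out somewhat, but the detour penalty on real fiber routes pulls it back in; either way, intercontinental distances are comfortably past it.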
> As of March 2022, a total of 2,335 Starlink spacecraft have been launched, with 2,110 still in orbit.

Wait, what happened to the missing 225 Starlink spacecraft?
I once downloaded a film over torrent via Starlink as a performance test. I saw up to 5 MB/s (~40 Mbit/s) in the torrent client, which is an impressive speed to have in the woods. Sadly, Starlink is not yet suitable for audio or video calls because of frequent pauses.
I worked on Google Fiber, and one of the things I did was write a pure-JS speed test. At the time, speedtest.net still used Flash. Why did we need this? Installers used Chromebooks to verify an installation, so we wanted to be able to tell if the install was successful. That means maxing out the connection (~940 Mbps for a gigabit connection). This speed test is still up [1].

Actually figuring out the max speed for a connection is a surprisingly hard problem. Here are some of the things I found:

1. Latency is absolutely everything. With sub-2 ms latency I could get 8.5 Gbps downloads in a browser, in JS, over 10GbE on a MacBook Pro. Bump that up to 100 ms and that plummets. I forget the exact numbers, but this has real-world consequences. Australia, for example, rolled out its ridiculous NBN network with a max speed of 100 Mbps. Australia has a built-in latency of 150-200 ms to the US just by distance, so the max effective download speed from a US server would be a mere fraction of that (see the sketch after this comment);

2. Larger blobs are better for overall throughput, but depending on your device this may blow up your browser. Unfortunately for the Internet, you're never really going to reliably get an MTU >1500 unless you control every node on the network;

3. This sort of traffic exposed a lot of weird browser bugs, even with Chrome. For example, Chrome could get into a state where, despite all my efforts, the temporary traffic would get cached, fill up your /tmp partition on Linux, and blow up with weird errors that don't really give you any clue that that's the problem; only restarting Chrome would solve the issue. I could never figure out why. Not sure if it's still an issue;

4. The author, I guess, was talking about Linux defaults, but there are a lot of kernel parameters that affect this (e.g. RPS [2] is absolutely essential for high-throughput TCP beyond a certain point);

5. BBR was in development at the time (ironically, I sat next to that team for a few months), so I can't really speak to how it changes things. I was doing this development back in 2016-2017;

6. Among people who knew more about this than me, the consensus seemed to be that BSD's TCP stack was superior to Linux's. Anecdotally this is backed up by real-world examples like Facebook having extreme difficulty moving WhatsApp away from FreeBSD to Linux. That took many years, apparently; and

7. I agree with the author here on the impact of packet loss. Its effect on throughput can be devastating, and (again, pre-BBR) the recovery time back to maximum throughput could be really long.

[1]: http://speed.googlefiber.net/

[2]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rps
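A rough way to see point 1: a single pre-BBR TCP stream can never exceed window / RTT, so a buffer size that saturates a LAN falls apart at intercontinental RTTs. A minimal sketch (the window sizes are illustrative assumptions, not the actual Google Fiber test parameters):

```typescript
// Bandwidth-delay product: a single TCP stream is capped at
// window / RTT, regardless of the link's line rate.
function maxThroughputMbps(windowBytes: number, rttMs: number): number {
  return (windowBytes * 8) / (rttMs / 1000) / 1e6;
}

const WINDOW_BYTES = 4 * 1024 * 1024; // illustrative 4 MiB receive window

console.log(maxThroughputMbps(WINDOW_BYTES, 2).toFixed(0));   // ~16777 Mbps: window isn't the bottleneck
console.log(maxThroughputMbps(WINDOW_BYTES, 100).toFixed(0)); // ~336 Mbps
console.log(maxThroughputMbps(WINDOW_BYTES, 175).toFixed(0)); // ~192 Mbps: the Australia->US case

// Inverting it gives the window needed to fill a pipe: a gigabit link
// at 175 ms RTT needs ~1e9 * 0.175 / 8 ≈ 21.9 MB in flight.
```

The same arithmetic is why packet loss (point 7) hurts so much: every loss-triggered window collapse resets you to the bottom of that curve, and at high RTT it takes a long time to climb back.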