When I saw this page a few years back I had an idea for a project. I want to create the lowest-latency typing terminal I possibly can, using an FPGA and an LED array. My initial results suggest that I can drive a 64x32 pixel LED array at 4.88kHz, for a roughly 0.2ms latency.<p>For the next step I want to make it capable of injecting artificial latency, and then do A/B testing to determine (1) the smallest amount of latency I can reliably perceive, and (2) the smallest amount of latency that actually bothers me.<p>This idea was also inspired by this work from Microsoft Research, where they do a similar experiment with touch screens: <a href="https://www.youtube.com/watch?v=vOvQCPLkPt4" rel="nofollow">https://www.youtube.com/watch?v=vOvQCPLkPt4</a>
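For the A/B step, a standard 2-down-1-up staircase is one way to estimate the detection threshold. A minimal sketch in Python, where present_trial() is a placeholder for the real FPGA trial (here it just simulates a tester with a made-up threshold):<p><pre><code>import random

SIMULATED_THRESHOLD_MS = 8.0   # made-up "true" sensitivity for the simulation

def present_trial(added_latency_ms):
    # Placeholder for the real A/B trial: one normal and one delayed typing
    # burst in random order, tester picks the laggier one. Here we simulate a
    # tester who is right more often as the injected latency grows.
    p_correct = 0.5 + 0.5 * min(1.0, added_latency_ms / (2 * SIMULATED_THRESHOLD_MS))
    return random.random() < p_correct

def staircase(start_ms=20.0, step_ms=2.0, reversals_wanted=8):
    # 2-down-1-up: step down after two correct answers, up after any miss.
    # Converges near the ~71%-correct point of the psychometric curve.
    latency, streak, direction, reversals = start_ms, 0, -1, []
    while len(reversals) < reversals_wanted:
        if present_trial(latency):
            streak += 1
            if streak == 2:
                streak = 0
                if direction == +1:
                    reversals.append(latency)
                direction, latency = -1, max(0.0, latency - step_ms)
        else:
            streak = 0
            if direction == -1:
                reversals.append(latency)
            direction, latency = +1, latency + step_ms
    return sum(reversals) / len(reversals)

print(f"estimated detection threshold ~{staircase():.1f} ms")
</code></pre>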
An anecdote that will probably sway no one: I was in a family-friendly barcade and noticed, inexplicably, a gaggle of kids, all 8-14, gathered around the Pong machine. Sauntering up so I could overhear their conversation, I heard all excited variants of "It's just a square! But it's real!", "You're touching it!", or "The knobs <i>really</i> move it."<p>If you wonder why we no longer have "twitch" games, this is why. Old-school games had a tactile aesthetic that's lost in the blur of modern lag.
FWIW, a quick ballpark test shows <30 ms minimum keyboard latency on my M1 Max MacBook, which has a 120 Hz display.<p><pre><code> Sublime Text:        17–29 ms
 iTerm (zsh4humans):  25–54 ms
 Safari address bar:  17–38 ms
 TextEdit:            25–46 ms
</code></pre>
Method: Record 240-fps slo-mo video. Press keyboard key. Count frames from key depress to first update on screen, inclusive. Repeat 3x for each app.
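For reference on the granularity: at 240 fps each frame is about 4.17 ms, so the count quantizes the estimate to roughly a frame on either end. A quick sketch of the conversion:<p><pre><code># Convert a 240 fps slow-mo frame count into a latency estimate.
FPS = 240
FRAME_MS = 1000 / FPS            # ~4.17 ms per frame

def latency_ms(frames_counted):
    mid = frames_counted * FRAME_MS
    # Key press and screen update each land somewhere within a frame,
    # so treat this as roughly a +/- one-frame window.
    return mid - FRAME_MS, mid + FRAME_MS

low, high = latency_ms(5)        # e.g. 5 frames -> roughly 17-25 ms
print(f"~{low:.0f} to {high:.0f} ms")
</code></pre>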
I wonder if a compositor, and possibly an entire compositing system, designed around adaptive sync could perform substantially better than current compositors.<p>Currently, there is a whole pile of steps to update a UI. The input system processes an event, some decision is made as to when to re-render the application, then another decision is made as to when to composite the screen, and hopefully this all finishes before a frame is scanned out, but not too far before, because that would add latency. It’s heuristics all the way down.<p>With adaptive sync, there is still a heuristic decision as to whether to process an input event immediately or to wait and aggregate more events into the same frame. But once that is done, an application can update its state, redraw itself, and trigger an <i>immediate</i> compositor update. The compositor will render as quickly as possible, but it doesn’t need to worry about missing scanout — scanout can begin as soon as the compositor finishes.<p>(There are surely some constraints on the intervals between frames sent to the display, but this seems quite manageable while still scanning out a frame immediately after compositing it nearly 100% of the time.)
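A minimal sketch of that flow as an event loop, with hypothetical wait_for_input/render_app/composite/scanout stand-ins and an assumed minimum frame interval for the panel:<p><pre><code>import time

MIN_FRAME_INTERVAL = 1 / 240          # assumed panel limit on frame pacing

# Placeholder stubs; a real compositor would hook these into evdev/Wayland/DRM.
def wait_for_input():  return input("key: ")
def render_app(event): return f"app frame for {event!r}"
def composite(buf):    return buf
def scanout(frame):    print("scanout:", frame)

def frame_loop():
    last_scanout = 0.0
    while True:
        event = wait_for_input()              # block until something happens
        frame = composite(render_app(event))  # redraw and composite immediately
        # The only pacing constraint is the display's minimum frame interval;
        # there is no fixed vblank deadline to aim for.
        wait = MIN_FRAME_INTERVAL - (time.monotonic() - last_scanout)
        if wait > 0:
            time.sleep(wait)
        scanout(frame)                        # panel begins scanning out now
        last_scanout = time.monotonic()

frame_loop()
</code></pre>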
Global Ping Data - <a href="https://wondernetwork.com/pings" rel="nofollow">https://wondernetwork.com/pings</a><p>We've got servers in 200+ cities around the world and ask them to ping each other every hour. Currently it takes our servers in Tokyo and London about 226 ms to ping each other.<p>We've got some downloadable datasets here if you want to play with them:
<a href="https://wonderproxy.com/blog/a-day-in-the-life-of-the-internet/" rel="nofollow">https://wonderproxy.com/blog/a-day-in-the-life-of-the-intern...</a>
I recently had some free time and used it to finish fixing up an Amiga 3000 (recapping the motherboard and repairing some battery damage on it). I installed AmigaOS 3.2.1 and started doing things with it like running a web browser and visiting modern web sites.<p>The usability is worlds better than what we have now, even comparing a 1990 computer with a 25 MHz m68030 and 16 megs of memory to a four-core, eight-thread Core i7 with 16 gigs of memory. Interestingly, the 1990 computer can have a datatype added which allows for WebP processing, whereas the Mac laptop running the latest Safari available for it can't do WebP.<p>We've lost something, and even when we're aware of it, that doesn't mean we can get it back.
Going through the list of what happens on iOS:<p>> UIKit introduced 1-2 ms event processing overhead, CPU-bound<p>I wonder if this is correct, and what's happening there if so - a modern CPU (even a mobile one) can do a <i>lot</i> in 1-2 ms. That's 6 to 12% of the per-frame budget of a game running at 60 fps, which is pretty mind-boggling for just processing an event.
Has anyone else used an IBM mainframe with a hardware 327x terminal?<p>They process all normal keystrokes locally and only send back to the host when Enter or a function key is pressed. This means very low latency for typing and most keystrokes, but much longer latency when you press Enter or page up/down, as the mainframe then processes all the on-screen changes and sends back the refreshed screen (yes, you are looking at a page at a time; there is no scrolling).<p>Of course, these days people use emulators instead of hardware terminals, so you get the standard GUI delays and the worst of both worlds.
Something I recently observed is that cutting-edge, current-generation gaming-marketed x86-64 motherboards for single-socket CPUs, both Intel and AMD, still come with a single PS/2 mouse port on the rear I/O plate.<p>I read something about this being intended for use with high-end wired gaming mice, where the end-to-end latency between mouse and cursor movement is theoretically lower if the signal doesn't go through the USB bus on the motherboard, but rather through whatever legacy PS/2 interface is talking to the equivalent-of-northbridge chipset.
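For a sense of scale, polling interval alone bounds what a polled bus can add; a back-of-the-envelope sketch (the polling rates are typical figures, not measurements of any particular board):<p><pre><code># Worst-case latency added purely by the polling interval. PS/2 is
# interrupt-driven, so it has no equivalent polling penalty.
for name, poll_hz in [("USB HID default", 125),
                      ("typical gaming mouse", 1000),
                      ("high-end gaming mouse", 8000)]:
    worst_case_ms = 1000 / poll_hz      # event arrives just after a poll
    print(f"{name:22s} {poll_hz:5d} Hz -> up to {worst_case_ms:.3f} ms added")
</code></pre>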
I'd like to see older MS-DOS and Windows on there for comparison; I remember dual-booting Windows 98 SE and XP for a while in the early 2000s, and the former was noticeably more responsive.<p>Another comparative anecdote I have is between Windows XP and OS X on the same hardware, wherein the latter was less responsive. After seeing what GUI apps on a Mac actually involve, I'm not too surprised: <a href="https://news.ycombinator.com/item?id=11638367" rel="nofollow">https://news.ycombinator.com/item?id=11638367</a>
PowerShell isn't a terminal (it's a shell, obviously), so the Windows results were most likely measured in conhost. If it's on Windows 11 it might be Windows Terminal, though conhost seems more likely since I think cmd is still the default on Windows 10.
iPads predict user input <a href="https://developer.apple.com/documentation/uikit/touches_presses_and_gestures/handling_touches_in_your_view/minimizing_latency_with_predicted_touches" rel="nofollow">https://developer.apple.com/documentation/uikit/touches_pres...</a>. Did they do this back when this article was written, or is this a newer thing that lets them get to even lower user-perceived latencies than 30 ms?<p>In general, predicting user input to reduce latency is a great idea and we should do more of it, as long as you have a good system for rolling back mispredictions. Branch prediction is such a fundamental thing for CPUs that it's surprising to me that it doesn't exist at every level of computing. The JavaScript REPL's (V8's) "eager evaluation", where it shows you the result of side-effect-free expressions before you execute them, is the kind of thing I'm thinking about: <a href="https://developer.chrome.com/blog/new-in-devtools-68/#eagerevaluation" rel="nofollow">https://developer.chrome.com/blog/new-in-devtools-68/#eagere...</a>
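Setting the UIKit API aside, here is a toy sketch of the general predict-then-roll-back idea for pointer or stylus input (SpeculativeStroke and predict_next are made up for illustration, not any framework's API):<p><pre><code># Extrapolate the next input sample from recent velocity, draw it
# speculatively, and discard the guess when the real sample arrives.
def predict_next(points):
    # Simple linear extrapolation from the last two (x, y, t_ms) samples.
    (x0, y0, t0), (x1, y1, t1) = points[-2], points[-1]
    return (2 * x1 - x0, 2 * y1 - y0, 2 * t1 - t0)

class SpeculativeStroke:
    def __init__(self):
        self.committed = []        # samples confirmed by real input
        self.speculative = None    # predicted point currently drawn, if any

    def on_real_sample(self, sample):
        self.speculative = None    # "rollback": drop the guess unconditionally
        self.committed.append(sample)
        if len(self.committed) >= 2:
            self.speculative = predict_next(self.committed)

    def points_to_draw(self):
        extra = [self.speculative] if self.speculative else []
        return self.committed + extra

stroke = SpeculativeStroke()
for sample in [(0, 0, 0), (2, 1, 10), (4, 2, 20)]:
    stroke.on_real_sample(sample)
print(stroke.points_to_draw())     # ends with the predicted (6, 3, 30)
</code></pre>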
There is hardware input latency (keyboard and mouse) as well as output latency, such as display latency. Unfortunately, the market and the industry as a whole don't care about latency at all.<p>While I am not a fan or proponent of AR/VR, one thing that will definitely be an issue there is latency. Hopefully there will be enough incentive for companies to look into it.
Isn't this experiment a bit bogus? Extrapolating a terminal emulator's behavior to represent a machine's latency /in general/... what if the terminal emulator just sucks? Dan Luu is of course aware of this but he's willing to swallow it as noise:<p>> Computer results were taken using the “default” terminal for the system (e.g., powershell on windows, lxterminal on lubuntu), which could easily cause 20 ms to 30 ms difference between a fast terminal and a slow terminal.<p>If that was the only source of noise in the measurements then ok, maybe, but compounded with other stuff? For example, I was thinking: the more time passes, the further we drift from the command-line being the primary interface through which we interact with our computer. So naturally older computers would take more care in optimizing their terminal emulator to work well, as it's the face of the computer, right? Somebody's anecdote about PowerShell performance in this thread makes me feel more comfortable assuming that maybe modern vendors don't care so much about terminal latency.<p>Using the "default browser" as the metric for mobile devices worries me even more...<p>I like Dan Luu and I SupportThismessage™ but I feel funny trying to take anything away from this post...
Should it be "personal computer latency"?<p>I wonder about that, as we just talked about the importance of sub-second response time in the 1990s (full-screen 3270 after hitting Enter; even with no IMS or DB2, how can it be done …). The terminal keyboard response is fine (on a 3270). The network (SNA) …<p>In 1977 we still had mainframes and workstations.
On my state-of-the-art desktop PC, Visual Studio has very noticeable cursor and scrolling lag. My C64 had the latter as well, but I used to assume the cursor moved as fast as I could type or tap the arrow keys.
I really found this valuable, particularly the slider at the top that enables you to visualize low-level latency times (the Jeff Dean numbers) over the years. tl;dr: not much has changed in the processor hardware numbers since 2012. So everything to the right of the processor is where the action is. And it sounds like people are starting to actually make progress.<p><a href="https://colin-scott.github.io/personal_website/research/interactive_latency.html" rel="nofollow">https://colin-scott.github.io/personal_website/research/inte...</a>
I wonder how this was all measured.<p>I didn't dig into the text blob to ferret that out.<p>Did anybody?<p>Because this doesn't pass the sniff test for data I want to trust.
Cynical comment ahead, beware!<p>---<p>Does this actually even matter today, when every click or key press triggers dozens of fat network requests going around the globe on top of a maximally inefficient protocol?<p>Or to summarize what we see here: we've built layers of madness. Now we just have to deal with the fallout…<p>The result is in no way surprising given that we haven't refactored our systems for over 50 years and have just put new things on top.