"Inspired by the gaming world, we realized that the only way to achieve the performance we needed was to build our own UI framework"<p>I'm surprised you did not look at "Dear ImGui", "Noesis", and "JUCE". All three of them are heavily used in gaming, are rather clean C++, use full GPU acceleration, and have visual editors available. JUCE especially is used for A LOT of hard-realtime professional audio applications.<p>"When we started building Zed, arbitrary 2D graphics rendering on the GPU was still very much a research project."<p>What are you talking about? JUCE has had GPU-accelerated spline shapes and SVG animations since 2012.<p>BTW, I like the explanations for how they use SDFs for rendering basic primitives. But that technique looks an awful lot like the 2018 GPU renderer from KiCad ;) And lastly, that glyph atlas for font rendering is only 1 channel? KiCad uses an RGB technique with gradients so that rendered glyphs can be anti-aliased without accidentally rounding sharp corners. Overall, this reads to me like they did not do much research before starting, which is totally OK, but then they shouldn't say stuff like "did not exist" or "was still a research project".
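For readers unfamiliar with the SDF technique mentioned above, here is a minimal sketch in Rust of the classic rounded-rectangle signed distance function (the same 2D formulation widely used in shader code; the function and parameter names are illustrative, not taken from Zed or KiCad):

```rust
// Signed distance from point (px, py) to a rounded rectangle centered at
// the origin, with half-extents (half_w, half_h) and corner radius r.
// Negative inside, positive outside; |value| is the distance to the edge.
// A fragment shader would evaluate this per pixel and use the result
// for anti-aliased coverage.
fn sdf_rounded_rect(px: f32, py: f32, half_w: f32, half_h: f32, r: f32) -> f32 {
    // Fold the point into the first quadrant and shift by the inner box
    // (the rectangle shrunk by the corner radius).
    let qx = px.abs() - (half_w - r);
    let qy = py.abs() - (half_h - r);
    // Distance outside the inner box's corner...
    let outside = (qx.max(0.0).powi(2) + qy.max(0.0).powi(2)).sqrt();
    // ...plus the (negative) distance when fully inside it...
    let inside = qx.max(qy).min(0.0);
    // ...minus the radius gives the rounded-rect distance.
    outside + inside - r
}
```

Evaluating this on the GPU is what lets one shader draw every rectangle, border, and shadow in the UI without tessellating geometry.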
The bottleneck of UI is not the rendering. A measly 60 fps is <i>plenty fast</i> for UI that feels immediate. We had this in the 90's with software rendering; you don't need a GPU for that today.<p>What causes user interfaces to hiccup is that it's too easy to do work in the main UI thread. At first it doesn't matter, but work accumulates, and eventually the UI begins to freeze briefly, for example after you press a button. The user interface gets intermingled with the program logic, and the execution of the program visibly relays its operations to the user.<p>It would be very much possible to keep the user interface running in a dedicated thread, pinned to a single CPU at high priority, updating at vsync rate as soon as there are dirty areas in the window, merely sending UI events to the processing thread and doing absolutely nothing more. This is closer to how games work: the rendering thread does rendering, and other, slower-paced threads run physics, simulation, and game logic at a suitable pace. With games it's obvious, because rendering is hard and needs to be fast, so anything that might slow it down must be moved away; UIs shouldn't be any different. An instantly reacting UI feels natural to a human; one that takes its time to act will slow down the brain.<p>But you don't need a GPU for that.
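The thread split described above can be sketched in a few lines of Rust (a toy example; `UiEvent` and the worker shape are invented for illustration, not taken from any real framework):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical event type; a real UI would carry much richer data.
#[derive(Debug)]
enum UiEvent {
    ButtonPressed(u32),
    Quit,
}

// The UI thread only forwards events over the channel and keeps
// painting; all slow application logic lives on this worker thread,
// so the UI can never be blocked by it.
fn run_worker(rx: mpsc::Receiver<UiEvent>) -> thread::JoinHandle<u32> {
    thread::spawn(move || {
        let mut presses = 0;
        for event in rx {
            match event {
                // Potentially slow program logic goes here.
                UiEvent::ButtonPressed(_) => presses += 1,
                UiEvent::Quit => break,
            }
        }
        presses
    })
}
```

The key design point is that the channel send is cheap and non-blocking, so the UI thread's per-frame cost stays bounded no matter how slow the logic is.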
While I do enjoy a nice and smooth GPU-accelerated UI, I never use a GPU UI framework for my own projects for one simple reason: almost none of them properly support accessibility.
Electron (and the web in general), despite its sluggishness, has very good support for accessibility. Most "traditional" native UI toolkits do too.<p>That would be my advice to anyone making a GPU-accelerated UI library in 2023: try to support accessibility, and even better, make it a first-class citizen.
Lots of negativity in here. I for one am excited about the prospect of an editor that is as responsive as I remember Sublime being back in the day, with the feature set I've come to expect from VS Code. An editor like this simply does not exist today, and betting on the Rust ecosystem is entirely the right choice for building something like this in 2023.
That's exactly the rabbit hole I'm in.<p>I love immediate feedback, but getting it ranges from hard to nigh impossible. E.g. I have a complex Emacs setup for rendering Pikchr diagrams, but there are a lot of problems to solve between diagram conception and the end result, so I thought: hey, why not make my own cool real-time editor — in Rust, obviously.<p>Unfortunately I learned that GUIs are a tough problem, especially for a hobby project with only one developer. Ultra-responsive GUIs are cool; I have a prototype in egui (not sure if that's as fast as Zed's premise, but it feels fast nonetheless), and yet it doesn't support multiple windows, which I wanted to have.<p>120 FPS with direct rendering sounds AWESOME just for the sake of it, but I believe that for the end user the layout will matter more than the refresh rate, and that's a different beast to tame.<p>Personally I "almost" settled for Dioxus (shameless plug: [1], there's a link to a YT video) and I'm quite happy with it. Having the editor in a WebView feels really quirky though (e.g. no textareas; I'm intercepting key events and rendering glyph by glyph directly in a div).<p>[1]: <a href="https://github.com/exlee/dioxus-editor">https://github.com/exlee/dioxus-editor</a>
This seems like the wrong portion of the problem on which to spend time. This is a <i>text editor</i>. Performance problems with text editors tend to involve long files and multiple tabs. Refresh speed isn't the problem, although keyboard response speed can be.<p>I'd like to see "gedit", for Linux, fixed. It can stall on large files, and, in long edit sessions, will sometimes mess up the file name in the tab. Or "notepad++" for Linux.
I don't understand. Why would you need to render a user interface constantly at 120 fps, instead of just updating it when something changes? Laptop batteries last too long these days? Electricity too cheap?
Looking forward to trying this, VSCode is great but I really miss the performance of Sublime Text. I hope they get the plugin system right, killer feature would be if it could load VSCode plugins (incredibly hard to pull off, yes)
My rui library can render UIs at 120fps and uses similar SDF techniques (though it uses a single shader for all rendering): <a href="https://github.com/audulus/rui">https://github.com/audulus/rui</a><p>Is their GPUI library open source?
Beyond the rendering, which as noted is nothing that hasn't been done before (in general), the inherent OT/multi-user + tree-sitter functionality is what entices me.<p>I'm surprised nobody has pointed out lite/lite-xl here either; its UI rendering is very similar (although fonts go via a texture, like a game would do) and it doesn't focus overly on the GPU, but optimises those paths like games did circa DirectX 9 / OpenGL 1.3.<p>There are great details of the approach taken with lite at <a href="https://rxi.github.io" rel="nofollow">https://rxi.github.io</a><p>Lite-xl might have evolved the renderer, but the code here is very approachable for me.
Nathan Sobo<p>It should be noted that the main person behind Zed is Nathan Sobo, who created Atom while he was at GitHub, the editor for which Electron (the foundation of Visual Studio Code today) was originally built.<p>As such, I have high hopes that Zed will be a much faster take on Visual Studio Code, and I'm excited to see what he and his team make.
It's surprising that they jump immediately from the problem description into shader details.<p>IME the main theme in achieving high performance, not just in games and not just in rendering, is to avoid frequent 'context switches' and instead queue/batch a lot of work (e.g. all rendering commands for one frame) and then submit it all at once "to the other side" in a single call. This is not just how modern 3D APIs work; it's the same basic idea behind 'miracle APIs' like io_uring.<p>This takes care of the 'throughput problem', but it can easily lead to a 'latency problem' if the work needs to travel through several such queues 'pipelined' together.
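The queue-and-submit idea can be illustrated with a toy batch in Rust (all names are illustrative; a real renderer would encode the recorded commands into GPU buffers rather than just counting them):

```rust
// Cheap, POD-style draw commands recorded during the frame.
enum DrawCmd {
    Quad { x: f32, y: f32, w: f32, h: f32 },
    Glyph { id: u32, x: f32, y: f32 },
}

struct Batch {
    cmds: Vec<DrawCmd>,
}

impl Batch {
    fn new() -> Self {
        Batch { cmds: Vec::new() }
    }

    // Recording is just a Vec push: no API call, no context switch.
    fn push(&mut self, cmd: DrawCmd) {
        self.cmds.push(cmd);
    }

    // One crossing "to the other side" per frame instead of one per
    // draw call. Here the stub just reports how many commands were
    // batched; a real implementation would upload them to the GPU.
    fn submit(self) -> usize {
        self.cmds.len()
    }
}
```

The throughput win comes from amortizing the expensive boundary crossing over the whole frame's worth of commands; the latency risk the comment mentions appears when several such batches are chained end to end.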
Lots of people nit-picking the 120 FPS but I think Zed looks super promising. The native support for collaborative editing looks fantastic, and I'm excited to try it out.<p>Curious if you guys have thought about VR / AR possibilities with GPUI?
Wow, that’s some low-level stuff. Most would just use an established UI framework, because rendering performance is left to the window manager. I’m not sure I understand the need to go about it like this. Windows is not considered the epitome of performant interfaces, but it has no trouble rendering UIs at 120 fps. When people go and buy a 120 Hz display, they are wowed by the smooth scrolling in a heavy application like Google Chrome. The window manager is already hardware accelerated (on Windows since Vista) and the apps draw widgets on their surface.
First, this looks awesome. Can't wait to try zed.<p>Second, forgive a naive question since I know nothing about graphics, but would the method described in the article perform better than Alacritty + Neovim?
This sounds really similar to the story we were hearing about Servo back in 2016: <a href="https://www.youtube.com/watch?v=erfnCaeLxSI">https://www.youtube.com/watch?v=erfnCaeLxSI</a><p>I was really excited when I saw that demo. Why didn't this turn into a final product that people could use?
This guy BeRo1985 wrote a 3D library/engine some years ago that has extensive 2D features, including a UI that uses SDFs among other things [0].<p>[0] - <a href="https://github.com/BeRo1985/pasvulkan">https://github.com/BeRo1985/pasvulkan</a>
What API are they using for interfacing with the GPU (i.e. OpenGL, Vulkan, or something else)?<p>I suspect a lot of time is likely to be spent on the CPU side updating vertex and other data and pushing it to the GPU, so it would be useful to have some more detail on how they are handling that.
According to Wikipedia, the term was coined in 2005 by Casey Muratori: <a href="https://en.wikipedia.org/wiki/Immediate_mode_GUI" rel="nofollow">https://en.wikipedia.org/wiki/Immediate_mode_GUI</a>
I cannot stress enough how much I do not want my 300 W GPU to be used to render text that changes at most three times per second.<p>And it's not just about electricity cost and heat stress; it will conflict with everything else that requires the GPU, including watching 4K videos on the second monitor, which does have a legitimate case for hardware acceleration, since it moves a lot of data 60 times per second, and your editor doesn't.<p>And the limited resource is not the GPU itself; its onboard memory is scarce in its own right. I'd be real mad at software that prevents me from multitasking.