Does anyone know how this compares to Pathfinder [1]?

1. https://github.com/servo/pathfinder
The image previews look *very* crisp - looks great, and the multicolored emoji is a nice touch.

It seems like this would have the same challenges as multi-channel signed distance fields [0], where for broad Unicode coverage (e.g. Chinese characters) you need to generate glyph textures on the fly and ship them to the GPU for the fragment shader to work off of.

[0] HN discussion: https://news.ycombinator.com/item?id=20020664
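To make the on-the-fly part concrete, here's a minimal sketch of a demand-loaded glyph atlas. It assumes WebGL2 and a hypothetical rasterizeGlyph() helper (e.g. msdfgen compiled to WASM) that returns an RGBA tile; the texture for a codepoint is only generated and uploaded the first time it's needed:

    // Demand-loaded glyph atlas sketch (assumes WebGL2 and a hypothetical
    // rasterizeGlyph() that returns a CELL x CELL RGBA tile, e.g. an MSDF
    // produced by msdfgen compiled to WASM).
    const CELL = 64;          // pixels per glyph tile
    const COLS = 32;          // tiles per atlas row
    const ATLAS_SIZE = CELL * COLS;

    interface GlyphSlot { u: number; v: number; }

    class GlyphAtlas {
      private slots = new Map<number, GlyphSlot>(); // codepoint -> uv origin
      private next = 0;
      private tex: WebGLTexture;

      constructor(private gl: WebGL2RenderingContext,
                  private rasterizeGlyph: (cp: number) => Uint8Array) {
        this.tex = gl.createTexture()!;
        gl.bindTexture(gl.TEXTURE_2D, this.tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, ATLAS_SIZE, ATLAS_SIZE, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, null); // allocate empty atlas
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
      }

      // Returns the uv origin of the glyph, rasterizing and uploading it
      // the first time the codepoint is seen (the CJK case).
      get(codepoint: number): GlyphSlot {
        let slot = this.slots.get(codepoint);
        if (slot) return slot;

        const x = (this.next % COLS) * CELL;
        const y = Math.floor(this.next / COLS) * CELL;
        this.next++;

        const pixels = this.rasterizeGlyph(codepoint); // CPU-side work
        const gl = this.gl;
        gl.bindTexture(gl.TEXTURE_2D, this.tex);
        gl.texSubImage2D(gl.TEXTURE_2D, 0, x, y, CELL, CELL,
                         gl.RGBA, gl.UNSIGNED_BYTE, pixels); // ship to GPU
        slot = { u: x / ATLAS_SIZE, v: y / ATLAS_SIZE };
        this.slots.set(codepoint, slot);
        return slot;
      }
    }

The caching pattern is the same whichever scheme you use; what changes is how expensive rasterizeGlyph() is per glyph.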
I wonder if it would be plausible to use this with WebGL via WASM... or if anyone's working on a JavaScript implementation of the algorithm.

Good text rendering/layout is about the only thing I feel is missing from being able to create 3D apps simply and rapidly with web tech these days. I still typically end up just overlaying DOM elements over the scene :/
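For the DOM-overlay workaround, here's a minimal sketch (assuming a column-major view-projection matrix, e.g. from gl-matrix) of projecting a 3D anchor point into CSS pixel coordinates and pinning a label element there each frame:

    // Project a world-space point through a column-major view-projection
    // matrix into CSS pixel coordinates, then pin an absolutely positioned
    // DOM element there each frame.
    function projectToScreen(
      point: [number, number, number],
      viewProj: Float32Array,          // 4x4, column-major
      width: number, height: number
    ): { x: number; y: number; visible: boolean } {
      const [x, y, z] = point;
      const clipX = viewProj[0] * x + viewProj[4] * y + viewProj[8]  * z + viewProj[12];
      const clipY = viewProj[1] * x + viewProj[5] * y + viewProj[9]  * z + viewProj[13];
      const clipW = viewProj[3] * x + viewProj[7] * y + viewProj[11] * z + viewProj[15];

      const ndcX = clipX / clipW;
      const ndcY = clipY / clipW;
      return {
        x: (ndcX * 0.5 + 0.5) * width,
        y: (1 - (ndcY * 0.5 + 0.5)) * height, // flip: NDC is y-up, CSS is y-down
        visible: clipW > 0 && Math.abs(ndcX) <= 1 && Math.abs(ndcY) <= 1,
      };
    }

    // Usage: call once per frame for each label element.
    function placeLabel(el: HTMLElement, worldPos: [number, number, number],
                        viewProj: Float32Array, canvas: HTMLCanvasElement) {
      const p = projectToScreen(worldPos, viewProj, canvas.clientWidth, canvas.clientHeight);
      el.style.display = p.visible ? "block" : "none";
      el.style.transform = `translate(${p.x}px, ${p.y}px)`;
    }

It works, but the labels live outside the GL depth buffer and compositing, which is exactly why in-scene text rendering like this is appealing.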
Could we get a nice way to mark this as "Not Open Source"?

I spent far too long looking for technical details and a GitHub repo or similar on that page before I realized that this wasn't open.

I'm not opposed to non-open-source software, but it would be nicer if this were a bit more up front about it.
I do recall that somewhat recently a great improvement was made to SDF text rendering by using multiple channels of information: https://github.com/Chlumsky/msdfgen

I wonder how the two compare. I'm guessing Slug is more accurate, but certainly also much more computationally intensive, if it's actually rendering outlines.
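For reference, the sampling side of the multi-channel trick is tiny: the fragment shader takes the median of the three channels to recover sharp corners that a single-channel SDF rounds off. A sketch of that step, with the GLSL embedded as a TypeScript string (uniform names are illustrative, not from any particular library):

    // MSDF sampling sketch: GLSL fragment shader embedded as a TypeScript
    // string. Taking the median of the three channels reconstructs sharp
    // corners that a single-channel SDF would round off.
    export const msdfFragmentShader = `#version 300 es
    precision mediump float;

    uniform sampler2D uGlyphAtlas;  // RGB multi-channel distance field
    uniform float uPxRange;         // distance-field range in screen pixels
    in  vec2 vUv;
    out vec4 outColor;

    float median3(float r, float g, float b) {
      return max(min(r, g), min(max(r, g), b));
    }

    void main() {
      vec3 msd = texture(uGlyphAtlas, vUv).rgb;
      float sd = median3(msd.r, msd.g, msd.b) - 0.5;    // signed distance
      float alpha = clamp(sd * uPxRange + 0.5, 0.0, 1.0);
      outColor = vec4(vec3(1.0), alpha);                // white text
    }`;

The generation side (deciding which edge goes into which channel) is where msdfgen does the real work; the shader cost is barely more than a plain SDF.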
This kind of reminds me of http://wdobbie.com/, since the only two posts he has are about rendering fonts directly from the font's Bézier curve data.
How difficult would it be for this library, or something akin to it, to become the first thing loaded when a PC boots? For several years now I have thought that the ancient default of text mode as the initial display was destined to become only more and more useless as pixel densities continue to climb. We need the first display mode available to an OS to support vector output; raster images can come later. I don't know what's involved in initializing a GPU and handing the display over to it, though, so I'm curious how hard it would be. Would it absolutely require a new BIOS?
It looks like a great piece of code, and I wish I could use it. Maybe the author would consider a free-for-open-source licensing model, like many other companies offer? It would have the benefit of letting developers become familiar with the technology and possibly encourage wider use, too. To be clear, I'm not saying the author needs to open-source it, but he might wish to consider distributing a static lib.