This needs a 2011 on it, which would explain why it is missing the much more recent Ryū algorithm, which I believe is the fastest algorithm at this point:<p><a href="https://dl.acm.org/doi/10.1145/3192366.3192369" rel="nofollow">https://dl.acm.org/doi/10.1145/3192366.3192369</a> (PLDI 2018 paper)<p><a href="https://github.com/ulfjack/ryu" rel="nofollow">https://github.com/ulfjack/ryu</a> (C source code)
I have to deal with this all the time. Unfortunately I wrote my programs in FreePascal, because Pascal was supposed to be the best, fastest, and safest language, but it is very bad at solving these kinds of problems.<p>I wrote my own rendering function. It was not so hard. I just wrote functions to calculate with arbitrary-precision decimal numbers and then put the float in there. Really slow, but perfectly accurate, unlike FreePascal's.<p>Now FreePascal has implemented Grisu3, so it is fine to use its functions, although it prints weird numbers like 1.0000000000000001E-1 for 0.1. With my arbitrary-precision calculation, I get that 0.1 is exactly 0.1000000000000000055511151231257827021181583404541015625 and can round it to the shortest representation, 0.1.<p>But the reverse problem -- parsing floating-point numbers -- is also really hard, and I cannot just brute-force it by printing arbitrary-precision binary floats. FreePascal's parsing function still rounds wrongly ( <a href="https://bugs.freepascal.org/view.php?id=29531" rel="nofollow">https://bugs.freepascal.org/view.php?id=29531</a> ), so it is risky to use.<p>For fast parsing of 99% of cases, the Eisel-Lemire algorithm can be used. Unfortunately, there is no FreePascal implementation. I hope someone will port it to FreePascal, so I do not have to do it.<p>And that still leaves the hard 1% of cases. How do you parse them? Is there a simple way to do it with some arbitrary-precision calculations?
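For what it's worth, both halves of this (the exact decimal expansion, and correctly rounded parsing via exact rational arithmetic) can be sketched in Python, whose standard library happens to expose the needed arbitrary-precision pieces. This is just an illustration of the idea, not the FreePascal code in question:

```python
from decimal import Decimal
from fractions import Fraction

# Decimal(float) converts the stored binary value exactly, with no
# rounding, so this reproduces the full decimal expansion of the
# double nearest to 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# CPython's repr() already produces the shortest round-tripping
# decimal form:
print(repr(0.1))  # 0.1

def parse_decimal(s: str) -> float:
    """Correctly rounded decimal-string-to-double conversion.

    Fraction parses the decimal string exactly, and CPython's
    integer true division is correctly rounded to the nearest
    double, so the result is the correctly rounded float.
    """
    q = Fraction(s)
    return q.numerator / q.denominator

print(parse_decimal("0.1") == 0.1)  # True
```

The `parse_decimal` approach answers the "hard 1%" question in principle: reduce the input to an exact rational and perform one correctly rounded division. It is slow, which is exactly why it is usually used only as the fallback behind a fast path like Eisel-Lemire.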
This is the kind of stuff that’s got me hooked on CS: small problems that no one thinks about much but have dramatic impact on developers’ lives and the things they build. There’s a lot of societal problems that we’ve inadvertently created, (see <i>Technopoly</i> by Postman or <i>The Social Dilemma</i> on Netflix) but it’s nice to be reminded about what makes this field <i>fun</i> and worthwhile.
This is in the running for the best HN post I've seen this year. I've read all about floating point precision, never about rendering.<p>The existence of this problem is so obvious in hindsight and I was so completely unaware of it previously.
Here is the reference for the most recent work (2019) of note in this domain. Ulf Adams extended his 2018 Ryū algorithm into Ryū printf, which is up to 4x faster than the best competing implementations tested on Linux, macOS, and Windows. <a href="https://dl.acm.org/doi/pdf/10.1145/3360595" rel="nofollow">https://dl.acm.org/doi/pdf/10.1145/3360595</a>
Unless you actually need to output human-readable numbers for some reason, the sensible approach when serializing floating point numbers is to just use hexfloats and skip the problematic "rendering" part altogether.
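As a concrete illustration of the hexfloat idea (shown here in Python; C's <i>printf("%a", x)</i> and <i>strtod</i> do the same thing): the hex form writes the mantissa and exponent digits directly, so there is no shortest-representation search and the round trip is exact by construction.

```python
# float.hex() serializes the exact bits of the double; no decimal
# rounding is involved in either direction.
x = 0.1
s = x.hex()
print(s)  # 0x1.999999999999ap-4

# The round trip is lossless by construction.
print(float.fromhex(s) == x)  # True
```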