Floating Point Visually Explained (2017)

423 points by bmease about 5 years ago

27 comments

Veserv about 5 years ago
For another visual explanation in words, floating point numbers (ignoring subnormals) are just a piecewise linear approximation of 2^x [1], with one linear piece per integer interval (x = 4 to x = 5, etc.). As an example, draw a straight line between 2^4 (16) and 2^5 (32). The floating point numbers in that range are evenly spaced along that line.

Another explanation, using the window + offset terminology from the post, is that the offset is a percentage of the way through the window. So, for a window of 2^x, the difference between an offset of y and y + 1 is 2^(x-23), i.e. 2^(-23) of 2^x. Put another way, floating point numbers do not have absolute error bounds like integers (where each number is within 1 of a representable value) but percentage error bounds: each number is within a relative distance of 2^(-23) of a representable value. Essentially, floating point numbers use percentage error bars instead of absolute error bars.

Using this model you can even see how to create your own floating point format. Just pick the relative precision you want (for single precision that is 2^(-23), for double precision 2^(-52)); that defines the range of your mantissa (offset). Then pick the range of x values you want to represent; that defines the range of your exponent (window).

As an aside, subnormal numbers do not respect this principle. They extend the expressible range for very small numbers by sacrificing relative precision on those numbers. In the very worst case, the smallest subnormal number can carry 25% error (it might actually be 50%). As might be imagined, this plays havoc with error propagation: if you ever multiply by a number that happens to be the smallest subnormal, your result might suddenly be off by 25% instead of the usual 100 * 2^(-23)%, which is about 2,000,000 times the relative error and quite a bit harder to compensate for. This is why many people consider subnormals a blemish.

[1] The approximation is actually offset in the x direction by the bias. To be more accurate, you are graphing 2^(x - 127).
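
A quick Python sketch to check the "evenly spaced within each window" claim (float32 via the struct module; the helper name is mine):

    import struct

    def next_f32(x: float) -> float:
        # Step to the adjacent float32 by incrementing the raw bit pattern.
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        return struct.unpack('>f', struct.pack('>I', bits + 1))[0]

    # Within the window [2^4, 2^5) every step has the same absolute size...
    print(next_f32(16.0) - 16.0)  # 1.9073486328125e-06 (= 2^4 * 2^-23)
    print(next_f32(24.0) - 24.0)  # same spacing mid-window
    # ...and the step doubles in the next window, keeping the % error constant.
    print(next_f32(32.0) - 32.0)  # 3.814697265625e-06 (= 2^5 * 2^-23)
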
commandlinefan about 5 years ago
Wow, that's a much easier way to convert from decimal to floating point than any I had seen before. He doesn't mention why biased notation is used (i.e. why the exponent is stored as 127 + E): it's so that if you sort positive floats as if they were integers, they still end up in the right order.
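
That sorting property is easy to check (a sketch; float32 bit patterns via struct, variable names mine):

    import struct

    def f32_bits(x: float) -> int:
        # Raw 32-bit pattern of x as an unsigned integer.
        return struct.unpack('>I', struct.pack('>f', x))[0]

    xs = [0.1, 1.0, 1.5, 2.0, 1e10]
    # For positive floats, numeric order and raw-bit-pattern order agree.
    assert sorted(f32_bits(x) for x in xs) == [f32_bits(x) for x in sorted(xs)]
    print("bit-pattern order matches numeric order")
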
SupriseAnxiety about 5 years ago
This is pretty much the only explanation that has given me a more precise understanding of the patterns that were floating around in my head. I crave precision, and this explained more than a whole year covering basic C. Thank you so much! My head doesn't have to hurt over this anymore. I never understood floats either, but this representation truly helps clear some of the fog!
GolDDranks about 5 years ago
Another visualization that blew my mind; I learned it from a presentation about doing floats on FPGAs:

Think of a 32-bit float as a 256-bit buffer interpreted as a fixed-point number, with the binary point right in the middle, but with the limitation that only a contiguous 24-bit window of that buffer can be set to non-zero bits.

The 8 bits of the exponent then determine where in the buffer that window sits, and the 23 (+1) bits of the mantissa are the contents of the window.
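
That picture can be rendered directly (a rough sketch for normal float32 values only; the buffer layout and the printed slice are my own choices):

    import struct

    def show_window(x: float) -> None:
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        E = (bits >> 23) & 0xFF  # biased exponent: where the window sits
        M = bits & 0x7FFFFF      # 23 mantissa bits: the window contents
        mant = M | (1 << 23)     # prepend the implicit leading 1 -> 24 bits
        # 256-bit buffer, binary point in the middle (128 integer bits,
        # 128 fraction bits); the window's top bit lands at position E - 127.
        buf = mant << (128 + (E - 127) - 23)
        s = format(buf, '0256b')
        print(f"{x}: ...{s[100:128]}.{s[128:156]}...")

    show_window(6.1)   # window starts 2 bits left of the binary point
    show_window(0.75)  # window starts just right of the binary point
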
saagarjha about 5 years ago
Related, https://float.exposed/ is a great resource, both when trying to see how floating point is laid out and when having to convert between the bit representation and the number for "actual work" ;)
0-_-0 about 5 years ago
Fabien Sanglard is the same guy who wrote two books that are deep dives into two game engines, Wolfenstein 3D and Doom. They are great reading, I really recommend them to the HN crowd, and they can be downloaded for free:

https://fabiensanglard.net/gebbwolf3d/

https://fabiensanglard.net/gebbdoom/
zoomablemind about 5 years ago
It'd be nice to also mention ulp [0], the unit of least precision. Floating point is by concept an approximation, and the ulp is one of the properties of a given binary implementation of the floating point representation.

[0]: https://stackoverflow.com/questions/43965347/ulp-unit-of-least-precision
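
Python's standard library exposes this directly (math.ulp exists since Python 3.9):

    import math

    # The ulp grows with magnitude: absolute precision degrades while
    # relative precision stays roughly constant (~2^-52 for float64).
    print(math.ulp(1.0))   # 2.220446049250313e-16 (= 2^-52)
    print(math.ulp(1e6))   # 1.1641532182693481e-10
    print(math.ulp(1e15))  # 0.125
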
splittingTimes about 5 years ago
That's why, when I did numerical simulations of electron dynamics in semiconductors during my PhD, we never used straight SI units (m, s, kg, etc.) but instead expressed all physical constants in nm, fs, eV, etc. That way all relevant constants had numerical values between 1 and 10, which stabilized the simulations a lot.
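
A toy illustration of one way this helps (my own numbers, float64; ħ ≈ 1.0545718e-34 J·s ≈ 0.6582 eV·fs):

    hbar_SI = 1.0545718e-34  # J*s
    hbar_nat = 0.6582119569  # eV*fs

    # Repeated products of tiny SI-scale constants drift toward the float64
    # underflow boundary; natural-unit values of order 1 stay comfortable.
    print(hbar_SI ** 4)    # ~1.2e-136, already dangerously small
    print(hbar_SI ** 10)   # 0.0: silently underflowed to zero
    print(hbar_nat ** 10)  # ~0.0152, perfectly representable
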
bchociej about 5 years ago
Is all of that really easier to understand than exponential notation? It's a great tool for visualizing floating point precision, but it's a lot more circuitous a route to understanding what a floating point number actually means, IMO.
seanalltogether about 5 years ago
I always found the Wikipedia examples for 16-bit floating point helpful, since the numbers are smaller. You can really see how the exponent and fraction affect each other in a very simple way.

https://en.wikipedia.org/wiki/Half-precision_floating-point_format#Half_precision_examples
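
You can generate such examples yourself; struct has supported the half-precision 'e' format since Python 3.6 (the formatting is mine):

    import struct

    def half_bits(x: float) -> str:
        # sign | 5-bit exponent | 10-bit fraction of an IEEE float16.
        (bits,) = struct.unpack('>H', struct.pack('>e', x))
        s = format(bits, '016b')
        return f"{s[0]} | {s[1:6]} | {s[6:]}"

    for x in (1.0, 1.5, 2.0, 0.5, 65504.0):  # 65504 is the largest float16
        print(f"{x:>8}: {half_bits(x)}")
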
dylan604 about 5 years ago
"(-1)^S * 1.M * 2^(E-127): how everybody hates having floating point explained to them."

Now I know I'm weird, as that formula makes sense to me.
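
For anyone in the same camp, the formula transcribes almost literally into code (a sketch that handles normal float32 values only; zero, subnormals, infinities, and NaNs are special cases):

    import struct

    def decode_f32(x: float) -> float:
        bits = struct.unpack('>I', struct.pack('>f', x))[0]
        S = bits >> 31           # sign bit
        E = (bits >> 23) & 0xFF  # biased exponent
        M = bits & 0x7FFFFF      # 23 mantissa bits
        return (-1) ** S * (1 + M / 2 ** 23) * 2.0 ** (E - 127)

    x = struct.unpack('>f', struct.pack('>f', 3.14))[0]  # 3.14 as a float32
    assert decode_f32(x) == x
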
rwallace about 5 years ago
Excellent article! Just one thing surprised me:

> While I was writing a book about Wolfenstein 3D [1], I wanted to vividly demonstrate how much of a handicap it was to work without floating point

I would've expected fixed point to work fine for games, because that's a domain where you know the data, and in particular the dynamic range, in advance, so the "automatically adjust to whatever dynamic range happens to be in the data" feature of floating point isn't needed. What am I missing? (If the answer is "it would take too long to explain here, but he does actually explain it in the book", I'm prepared to accept that.)
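
For readers unfamiliar with the alternative being discussed, a minimal 16.16 fixed-point sketch (a common layout in games of that era; the helper names are mine):

    FRAC = 16  # 16 integer bits, 16 fraction bits

    def to_fix(x: float) -> int:
        return round(x * (1 << FRAC))

    def fix_mul(a: int, b: int) -> int:
        return (a * b) >> FRAC  # rescale after multiplying

    def from_fix(a: int) -> float:
        return a / (1 << FRAC)

    a, b = to_fix(1.5), to_fix(2.25)
    print(from_fix(fix_mul(a, b)))  # 3.375; the range/precision split is fixed
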
jonplackett about 5 years ago
Whenever I read something about how computers _really_ work (as in, not just a nice, easy-to-comprehend programming language), I realise just how much smarter some people in the world are than me.
adrianmonk about 5 years ago
Here's how I like to think of it.

*Floating point numbers are just fractions*, with one extra condition: the denominator is a power of two.

The normal rules of fractions apply. If you want to add them, you have to make the denominators match, which involves scaling the numerators too.

Just like fractions, there are multiple ways of writing the same value: 3/2 is the same as 6/4.

You can't write 1/3 exactly because, hey, look at your denominator: it's 3. Which isn't a power of 2, is it? So that can't be a floating point value.
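
Python can show this view directly, since every finite float converts exactly to a Fraction:

    from fractions import Fraction

    print(Fraction(1.5))    # 3/2
    print(Fraction(0.375))  # 3/8
    # 0.1 can't have a power-of-two denominator, so the stored float is
    # the nearest fraction that does:
    print(Fraction(0.1))    # 3602879701896397/36028797018963968 (= 2^55)
    num, den = (1 / 3).as_integer_ratio()
    print(den & (den - 1) == 0)  # True: the denominator is a power of two
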
zozbot234 about 5 years ago
I believe that this "window and offset" intuition, while indeed true and useful in the radix-2 ("binary") case, does not cleanly extend to the general case, where no hidden bit is used even for non-subnormal numbers and some numbers may thus have multiple representations. This shows up perhaps most clearly in the case of decimal floating point, but ISTR that a non-2 radix was also used on some mainframes.
biddlesby about 5 years ago
I saw a very simple visualisation of floating point that just plotted the representable values along the number line and zoomed out. That gets the idea across very quickly!
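
In the same spirit, counting half-precision values per power-of-two interval shows how the density thins out (a sketch; float16 keeps the loop small):

    import math
    import struct
    from collections import Counter

    counts = Counter()
    for bits in range(0x0400, 0x7C00):  # every positive normal float16
        (x,) = struct.unpack('>e', struct.pack('>H', bits))
        counts[math.floor(math.log2(x))] += 1

    # Each octave [2^e, 2^(e+1)) holds the same number of values (1024),
    # so the gaps between neighbours double from one octave to the next.
    for e in (0, 1, 10):
        print(f"[2^{e}, 2^{e + 1}): {counts[e]} values")
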
cmrdporcupine about 5 years ago
This is a bit odd:

"Trivia: People who really wanted a hardware floating point unit in 1991 could buy one. The only people who could possibly want one back then would have been scientists (as per Intel's understanding of the market). They were marketed as 'Math CoProcessors'."

The 486DX (1989) was already common in 91/92 and came with a floating point unit. I had a 50 MHz 486DX, and I was not by any means wealthy. The FP unit was certainly used by lots of software, especially things like Excel, and C compilers certainly produced code for it if you had one.

Likewise, the 68040 (1990) had onboard FP. The Macintosh Quadra, Amiga 4000, and various NeXT models had it.

Yes, if you bought a 386 you often had to get a floating point co-processor as an upgrade, but it wasn't _that_ uncommon. Same on the 68k series; I knew people with Atari MegaSTEs (68000) who bought an FP co-processor. They weren't astronomers :-)

This feels like recent history. Am I really that old?
jscheel about 5 years ago
I remember wanting a math co-processor as a kid so I could use AutoCAD at home.
baby about 5 years ago
Interesting, because I think about it the other way around: the window tells you between which powers of two you are:

* [2^-126, 2^-125]
* ...
* [1, 2]
* [2, 2^2]
* [2^2, 2^3]
* ...
* [2^127, 2^128]

and the offset tells you where you are within that window. You have 23 bits to figure out where you are in there, so it's obvious that you have much more precision in the window [1, 2] than in the window [2^127, 2^128].
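
A quick check that every window holds the same number of values (float32 via struct; the helper name is mine):

    import struct

    def f32_bits(x: float) -> int:
        return struct.unpack('>I', struct.pack('>f', x))[0]

    # Exactly 2^23 float32 values per window: the same 23 offset bits get
    # stretched over ranges of very different absolute sizes.
    print(f32_bits(2.0) - f32_bits(1.0))                # 8388608 = 2^23
    print(f32_bits(2.0 ** 101) - f32_bits(2.0 ** 100))  # 8388608 again
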
jiveturkey about 5 years ago
https://opencores.org/projects/fpuvhdl
dang about 5 years ago
A thread from 2019: https://news.ycombinator.com/item?id=19084773

Discussed at the time: https://news.ycombinator.com/item?id=15359574

(Reposts are ok after a year or so.)
boomlinde about 5 years ago
I wonder why the significand is represented as a number >= 1. It complicates representing zero, making it a special case (not covered by the article) where exponent = 0, mantissa = 0 is defined to mean zero. Is it for the purpose of simplifying arithmetic operations, or is it to minimize the number of redundant representations of zero?
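
For what it's worth, the zero special case is at least tidy in the encoding: the all-zero bit pattern decodes as 0.0 (a quick check):

    import struct

    bits = struct.unpack('>I', struct.pack('>f', 0.0))[0]
    print(format(bits, '032b'))  # all 32 bits are zero
    print(struct.unpack('>f', struct.pack('>I', 0))[0])  # 0.0
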
jimmaswell about 5 years ago
I remember wondering at first how you know this won't produce multiple representations and will still cover every number. It's because the mantissa can't reach 2 (if it could, that would be the same as adding one to the exponent), so you get the full range between any two exponents exactly once.
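
You can watch that boundary behave by stepping the raw float32 bit pattern across a power of two (a sketch; helper names mine):

    import struct

    def f32_bits(x: float) -> int:
        return struct.unpack('>I', struct.pack('>f', x))[0]

    def bits_f32(b: int) -> float:
        return struct.unpack('>f', struct.pack('>I', b))[0]

    # The largest float below 2.0 has an all-ones mantissa; adding 1 to the
    # raw pattern carries into the exponent and lands exactly on 2.0.
    below_two = bits_f32(f32_bits(2.0) - 1)
    print(below_two)                          # 1.9999998807907104
    print(bits_f32(f32_bits(below_two) + 1))  # 2.0
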
jimbob45 about 5 years ago
This guy's Quake 1/2/3 source code deep dives are well worth checking out as well.
barbs about 5 years ago
Awesome. Maybe now I can finally understand the fast inverse square root hack.

https://en.wikipedia.org/wiki/Fast_inverse_square_root
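
For the curious, the trick ports to a few lines of Python (a sketch that mimics float32 reinterpretation via struct; accuracy differs slightly from the original C):

    import struct

    def q_rsqrt(x: float) -> float:
        # Reinterpret the float's bits as an integer, apply the magic
        # constant, reinterpret back, then one Newton-Raphson step.
        i = struct.unpack('>I', struct.pack('>f', x))[0]
        i = 0x5F3759DF - (i >> 1)
        y = struct.unpack('>f', struct.pack('>I', i))[0]
        return y * (1.5 - 0.5 * x * y * y)

    print(q_rsqrt(4.0))  # ~0.499, vs the exact 1/sqrt(4) = 0.5
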
voz_ about 5 years ago
This is beautifully written. Bravo to the author.
augustt about 5 years ago
Didn't everyone learn scientific notation in high school? It's pretty much exactly that, and you could put the coefficient/exponent into whatever bit pattern you'd like.
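
Python's float.hex prints exactly that base-2 scientific notation (a quick illustration):

    # Significand in hex, exponent in decimal: base-2 scientific notation.
    print((6.1).hex())  # 0x1.8666666666666p+2 (~1.525 * 2^2)
    print((0.1).hex())  # 0x1.999999999999ap-4
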