Floating-Point Formats and Deep Learning

39 points by _eigenfoo almost 5 years ago

4 comments

alecmg almost 5 years ago

Hoped to see opinions on Unum [1].

Correct me if I'm wrong, but most machine learning does happen around 1.0. Unums should give more precision for the same number of bits, or the same precision with fewer bits, around 0 and 1. And some other interesting features.

But it would require new hardware and software.

[1] https://en.wikipedia.org/wiki/Unum_(number_format)
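A minimal sketch, assuming NumPy, of the precision budget this comment is talking about: machine epsilon is the gap between 1.0 and the next representable value in each format. Posits/unums taper precision so that this gap can be smaller near 1.0 for the same bit width.

    # Gap between 1.0 and the next representable value for IEEE formats.
    import numpy as np

    for dtype in (np.float16, np.float32, np.float64):
        eps = np.finfo(dtype).eps
        print(f"{np.dtype(dtype).name:8s} eps at 1.0 = {eps:.3e}")

    # bfloat16 isn't a NumPy builtin; with 8 significand bits its gap
    # at 1.0 is 2**-7, coarser than float16 despite the same bit width.
    print(f"bfloat16 eps at 1.0 = {2**-7:.3e}")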
sdenton4 almost 5 years ago

There's also the whole world of fixed-point inference, which isn't discussed here but is quite important. All of the hardware supports fast integer operations, and with fewer platform-specific caveats, so you get a better guarantee of consistent behavior across deployments.
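A minimal sketch of what fixed-point inference rests on, using symmetric int8 quantization (the function names and scheme here are illustrative assumptions, not from the comment): real values map to integers through a single scale factor, so the heavy math can run on portable integer ops, and the round-trip error is bounded by half the scale.

    import numpy as np

    def quantize_int8(x: np.ndarray):
        # Symmetric scheme: scale chosen so that max |x| maps to 127.
        scale = float(np.abs(x).max()) / 127.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    x = np.random.randn(5).astype(np.float32)
    q, scale = quantize_int8(x)
    print(x)
    print(dequantize(q, scale))  # close to x; error <= scale / 2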
_5659 almost 5 years ago

> Floating point? In MY deep learning?

It's more likely than you think.

Maybe not the most appropriate place for an "X? In MY y?" meme, despite its relatively innocuous presentation.

It's kind of gross, so I'll refrain from linking it.
_y5hn almost 5 years ago

The moment floating-point precision errors become significant in your model, know that you're dealing with algorithmic BS.