Floating-Point Formats and Deep Learning

39 points | by _eigenfoo | almost 5 years ago

4 comments

alecmg | almost 5 years ago
Hoped to see opinions on Unum [1].

Correct me if I'm wrong, but most machine learning does happen around 1.0. Unum should give more precision for the same number of bits, or the same precision for fewer bits, around 0 and 1. And it has some other interesting features.

But it would require new hardware and software.

[1] https://en.wikipedia.org/wiki/Unum_(number_format)
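As a rough illustration of the precision-near-1.0 point (a sketch added here, not part of the comment above; it only shows the floating-point side of the comparison with unums/posits), NumPy's machine-epsilon values show how the gap above 1.0 widens as the format shrinks:

    import numpy as np

    # eps is the spacing between 1.0 and the next representable value.
    for name, dtype in [("float64", np.float64),
                        ("float32", np.float32),
                        ("float16", np.float16)]:
        eps = np.finfo(dtype).eps
        print(f"{name:8s} eps(1.0) = {eps}")

    # Typical output:
    #   float64  eps(1.0) = 2.220446049250313e-16
    #   float32  eps(1.0) = 1.1920929e-07
    #   float16  eps(1.0) = 0.000977
    # bfloat16 (8 exponent / 7 fraction bits, no native NumPy dtype)
    # has eps = 2**-7 ~= 0.0078, i.e. even coarser near 1.0.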
sdenton4 | almost 5 years ago
There's also the whole world of fixed-point inference, which isn't discussed here but is quite important. All of the hardware supports fast integer operations, and with fewer platform-specific caveats, so you can get better guarantees of consistent behavior in deployments.
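For context on the fixed-point route, here is a minimal sketch (my illustration, not the commenter's code; the quantize_int8/dequantize_int8 helper names are made up) of symmetric int8 quantization, the basic building block of fixed-point inference: a float tensor is mapped onto integers with a single scale factor, and math then runs on the integers.

    import numpy as np

    def quantize_int8(x: np.ndarray):
        # One scale for the whole tensor; the small floor avoids divide-by-zero.
        scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    x = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(x)
    x_hat = dequantize_int8(q, scale)
    # Rounding error is bounded by about scale / 2 for in-range values.
    print("max abs error:", np.max(np.abs(x - x_hat)))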
_5659 | almost 5 years ago
> Floating point? In MY deep learning?

It's more likely than you think.

Maybe not the most appropriate place for an "X? In MY Y?" meme, despite its relatively innocuous presentation.

It's kind of gross, so I'll refrain from linking it.
_y5hn | almost 5 years ago
The moment floating-point precision errors become significant in your model, know that you're dealing with algorithmic BS.