
Machine Learning for Systems and Systems for Machine Learning [pdf]

408 points by andrew3726 over 7 years ago

10 comments

cs702 over 7 years ago
TPUs are only one part of this eye-opening presentation. Skip to page 28, where Jeff starts talking about:

* Using reinforcement learning so the computer can figure out how to parallelize code and models on its own. In experiments, the machine beats human-designed parallelization.

* Replacing B-tree indices, hash maps, and Bloom filters with *data-driven indices* learned by deep learning models. In experiments, the learned indices outperform the usual stalwarts by a large margin in both computing cost and performance, and are auto-tuning.

* Using reinforcement learning to manage datacenter power. Machine intelligence outperforms human-designed energy-management policies.

* Using machine intelligence to replace user-tunable performance options in all software systems, eliminating the need to tweak them with command-line parameters like --num-threads=16, --max-memory-use=104876, etc. Machine intelligence outperforms hand-tuning.

* Using machine intelligence for all tasks currently managed with heuristics. For example, in compilers: instruction scheduling, register allocation, loop nest parallelization strategies, etc.; in networking: TCP window size decisions, backoff for retransmits, data compression, etc.; in operating systems: process scheduling, buffer cache insertion/replacement, file system prefetching, etc.; in job scheduling systems: which tasks/VMs to co-locate on the same machine, which tasks to pre-empt, etc.; in ASIC design: physical circuit layout, test case selection, etc. Machine intelligence outperforms human heuristics.

IN SHORT: machine intelligence (today, that means deep learning and reinforcement learning) is going to penetrate and ultimately control EVERY layer of the software stack, replacing human engineering with auto-tuning, self-improving, better-performing code.

Eye-opening.
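The "learned index" bullet is easy to make concrete: fit a model that maps a key to its position in the sorted array (i.e., approximates the key CDF), then correct the prediction with a bounded local search. A minimal sketch in NumPy, using a single linear fit and an empirically computed error bound -- the actual paper uses a staged hierarchy of models, and the class and variable names here are illustrative:

    import numpy as np

    class LearnedIndex:
        """Sketch: model predicts a position in a sorted array, then bounded search."""
        def __init__(self, keys):
            self.keys = np.asarray(keys)              # must be sorted
            positions = np.arange(len(self.keys))
            # Fit position ~ a*key + b: a one-parameter "model" of the key CDF.
            self.a, self.b = np.polyfit(self.keys, positions, deg=1)
            preds = np.clip(self.a * self.keys + self.b, 0, len(self.keys) - 1)
            # Worst-case prediction error fixes the search window size.
            self.err = int(np.ceil(np.max(np.abs(preds - positions))))

        def lookup(self, key):
            guess = int(np.clip(self.a * key + self.b, 0, len(self.keys) - 1))
            lo = max(0, guess - self.err)
            hi = min(len(self.keys), guess + self.err + 1)
            # Binary search only inside the guaranteed window around the guess.
            i = lo + np.searchsorted(self.keys[lo:hi], key)
            return i if i < len(self.keys) and self.keys[i] == key else None

    idx = LearnedIndex(np.sort(np.random.randint(0, 10**9, 10**6)))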
cobookman over 7 years ago
Nvidia Titan V can do 110 TFLOPS, 12 GB of 1.7 Gb/s memory [1], and sells for $3,000. TPU v2 does 180 TFLOPS, 64 GB of 19.2 Gb/s memory [2].

That's a heck of a performance boost for a chip that's likely costing Google way less than the Nvidia flagship.

[1] http://www.tomshardware.com/news/nvidia-titan-v-110-teraflops,36085.html
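A quick back-of-the-envelope on the figures quoted above -- note that Google has not published a per-board TPU v2 price, so the $3,000 used for the TPU side is purely a hypothetical break-even point:

    # Compare throughput per dollar using the numbers from the comment.
    titan_v = {"tflops": 110, "price_usd": 3000}
    tpu_v2 = {"tflops": 180}

    print(titan_v["tflops"] / titan_v["price_usd"])  # ~0.037 TFLOPS per dollar
    # If a TPU v2 board hypothetically cost the same $3,000:
    print(tpu_v2["tflops"] / 3000)                   # ~0.060 TFLOPS per dollar, ~1.6x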
jamesblonde over 7 years ago
Great talk, with lots of new insights into what's happening at Google. I really think his point that ImageNet is the new MNIST now holds true. Even research labs should be buying DeepLearning11 servers (10 x 1080Ti) for $15k and training large models in a reasonable amount of time. It may seem that Google is way ahead, but they are just doing synchronous SGD, and it was interesting to see the drop in prediction accuracy from 128 TPU2 cores to 256 TPU2 cores for ImageNet (76% -> 75% accuracy). So the algorithms for distributed training aren't unknown, and with cheap hardware like the DL11 server, many well-financed research groups can compete with this.
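The synchronous SGD being referred to is conceptually simple: every worker computes gradients on its own data shard, the gradients are averaged (an all-reduce), and all replicas apply the identical update. A minimal single-process sketch in NumPy -- real setups shard across machines with an all-reduce library, and the function names here are illustrative:

    import numpy as np

    def sync_sgd_step(w, shards, grad_fn, lr=0.01):
        """One synchronous data-parallel SGD step.

        w       -- model parameters (replicated on every worker)
        shards  -- list of per-worker minibatches
        grad_fn -- grad_fn(w, batch) -> gradient of the loss on that batch
        """
        # Each "worker" computes its local gradient on its own shard.
        local_grads = [grad_fn(w, batch) for batch in shards]
        # All-reduce: average gradients so every replica applies the same update.
        g = np.mean(local_grads, axis=0)
        return w - lr * g

    # Toy usage: linear regression, gradient of 0.5*||Xw - y||^2 per shard.
    def grad_fn(w, batch):
        X, y = batch
        return X.T @ (X @ w - y) / len(y)

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(256, 4)), rng.normal(size=256)
    shards = [(X[i::8], y[i::8]) for i in range(8)]  # 8 simulated workers
    w = np.zeros(4)
    for _ in range(100):
        w = sync_sgd_step(w, shards, grad_fn)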
larelli over 7 years ago
It looks like this paper has more information: https://arxiv.org/pdf/1712.01208v1.pdf
EvgeniyZh over 7 years ago
Was it filmed? If so, when will the video be available?
nickpsecurity over 7 years ago
Great presentation. As far as applications go, I already thought this might be useful in lightweight, formal methods: spotting problems and suggesting corrections for failures in Rust's borrow checker, separation logic on C programs, proof tactics, and static analysis tooling. For the Rust example, a person might try to express a solution in the language that fails the borrow checker. If they can't understand why, they submit it to a system that attempts to spot where the problem is. The system might start with humans spotting the issue and restructuring the code to pass the borrow checker. Every instance of those would feed into the learning system, which might eventually do it on its own. There's also potential to use automated equivalence checks/tests between user-submitted code and the AI's suggestions, to help the human-in-the-loop decide if a suggestion is worth review before passing it on to the other person.

In hardware, both digital and analog designers seem to use lots of heuristics in how they design things. This could certainly help there. It might be especially useful in analog design, given the small number of experienced engineers available.
yeukhon over 7 years ago
While this is a collective work, honestly, after hearing about JD for so many years: is there anything he CAN’T do?
1024core over 7 years ago
This is some really cool stuff; I hope this submission gets more upvotes and reaches a wider audience.
novaRom over 7 years ago
I speculate that Google will sell the TPUv2 for as little as $500 per PCIe card as early as 2018. Nvidia's Volta TensorCores are essentially the same: 32-bit accumulators and 16-bit multipliers. But GPUs are more general-purpose, which is not necessary for deep learning, since the most intensive operation is the dot product (y += w*x).
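The multiply-accumulate pattern described here -- 16-bit multiplies feeding a 32-bit accumulator -- can be illustrated directly in NumPy. This is a sketch of the numerics only, not of any actual hardware datapath:

    import numpy as np

    def mixed_precision_dot(w, x):
        """Dot product with fp16 inputs/multiplies and an fp32 accumulator."""
        w16 = w.astype(np.float16)
        x16 = x.astype(np.float16)
        acc = np.float32(0.0)
        for wi, xi in zip(w16, x16):
            # Product of two fp16 values, accumulated in fp32 (y += w*x).
            acc += np.float32(wi) * np.float32(xi)
        return acc

    rng = np.random.default_rng(0)
    w, x = rng.normal(size=1024), rng.normal(size=1024)
    print(mixed_precision_dot(w, x), w @ x)  # close, despite fp16 inputs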
nl over 7 years ago
That "Learned Index Structures" work makes it pretty clear that Karpathy was right in his widely criticized "Software 2.0" piece.