
Nvidia R&D chief on how AI is improving chip design

225 points by chuckjchen about 3 years ago

9 comments

maxwells-daemon about 3 years ago
I work on this team! (Specifically: applied deep learning research, chip design.)

It's a shame to see so many people dismissing this work as marketing. I see lots of clever people working hard on really novel and interesting stuff, and I really do think that ML has real potential to customize a design much more "deeply" than traditional automation tools.
rektide about 3 years ago
Did nvidia just promise us singularity? :)

Hard to read a talk like this from a pulpit and not see shout-outs to the incredibly super-fantastic open-source innovative projects like OpenROAD, which have been shipping amazingly well-routed-by-AI chips for a while now. There are papers you can cite galore, and many open-source designs[1].

It's not like Nvidia is promising anyone else will benefit from this work. This seems to be very high-level coverage of what their R&D department is looking at, and perhaps (or perhaps not) using. The article makes it hard to find out what is available and what has been published or otherwise deeply discussed (which I think is the best we can hope for from Nvidia, short of real participation). There's only one paper linked, on NVCell[2], described as:

> *The first is a system we have called NVCell, which uses a combination of simulated annealing and reinforcement learning to basically design our standard cell library.*

This just feels like so much else going on in computing: WSL coming to Windows, the recent Unity vs. Unreal topic[3]. It's hard to imagine refusing to participate with others. It's hard to imagine not being part of the open-source community working shoulder to shoulder to push for better. Nvidia patently doesn't get it, patently isn't participating, patently isn't there. It's cool we can hear what they're up to, but it's also extremely Nvidia that they're doing it all on their own. Anyhow, looking forward to more AI-based chip power-system design starting to emerge; that sounds like a good idea, NV.

[1] https://theopenroadproject.org/

[2] https://research.nvidia.com/publication/2021-12_nvcell-standard-cell-layout-advanced-technology-nodes-reinforcement-learning

[3] https://news.ycombinator.com/item?id=31064552 (412 points, 3 days ago, 311 comments)
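A minimal sketch of the first half of that combination, simulated annealing, purely for illustration: nothing here is NVCell's actual code, and the toy cost and neighbor functions are hypothetical stand-ins for a real placement objective.

    import math
    import random

    def simulated_annealing(initial, cost, neighbor, t0=1.0, cooling=0.995, steps=10_000):
        # Always accept an improvement; accept a worse candidate with
        # probability exp(-delta/T), so the search can escape local minima
        # while the temperature T is still high.
        current, best = initial, initial
        t = t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
            t *= cooling  # geometric cooling schedule
        return best

    # Toy problem: order 8 cells on a row to minimize total wire length.
    wires = [(0, 7), (1, 3), (2, 6), (4, 5)]
    cost = lambda order: sum(abs(order.index(a) - order.index(b)) for a, b in wires)

    def neighbor(order):
        i, j = random.sample(range(len(order)), 2)
        order = list(order)  # copy, then swap two cells
        order[i], order[j] = order[j], order[i]
        return order

    print(simulated_annealing(list(range(8)), cost, neighbor))

In NVCell's stated combination, a reinforcement learning policy would replace or guide the random neighbor move; the annealing loop above is only the generic scaffold.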
ImportOllie about 3 years ago
I don't understand the backlash here. The gist seemed to be that traditional tools, which are exact, take a long time to process complex designs. Deep learning offers a statistical approach that can give a 'coarse' prediction, and they're using this to reduce development time. That seems to make sense to me, especially in the earlier verification phases of the hardware design lifecycle.

To me this sounds like a good use case for AI and neural nets. It doesn't appear to be looking to replace the traditional tools, just augment them.
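A sketch of the augment-don't-replace pattern described above, with hypothetical names (surrogate, exact_check): a cheap learned model gives the coarse ranking, and the slow exact tool runs only on the shortlist, so it still has the final word.

    def screen_candidates(candidates, surrogate, exact_check, keep_fraction=0.1):
        # Coarse pass: the learned model scores every candidate cheaply,
        # lowest predicted cost first.
        ranked = sorted(candidates, key=surrogate)
        shortlist = ranked[: max(1, int(len(ranked) * keep_fraction))]
        # Exact pass: the slow, signoff-grade tool is invoked only on the
        # most promising fraction, so its verdict stays authoritative.
        return [c for c in shortlist if exact_check(c)]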
b20000 about 3 years ago
The last time I checked, autorouters were still not capable of doing all the routing on a multi-layer PCB properly, and manual work was still required to produce a decent design.
bsder about 3 years ago
What is extremely telling is what is missing: Design Rule Checking (DRC) and Layout vs. Schematic (LVS).

These require:

1) Longer bit-length arithmetic

32-bit float simply isn't enough. 64-bit float is close, but limited. You really want 128-bit integers. And nVidia isn't delivering that.

2) Real algorithmic improvements

We're still stuck with computational geometry algorithms that don't parallelize. It would be awfully useful if nVidia would actually research some new algorithms instead of just waving around the ML/AI marketing wand.

But, then, this is the company that built itself on benchmarketing, so ...
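A quick illustration of point 1 above, assuming a 1 nm grid on a roughly 26 mm die as a worked example (Python's arbitrary-precision ints stand in for the 128-bit integer arithmetic the comment calls for):

    import struct

    # A ~26 mm die on a 1 nm grid spans 26_000_000 units, but a 32-bit float
    # has a 24-bit mantissa: integers above 2**24 = 16_777_216 are no longer
    # exact. Round-trip a coordinate through an actual float32 to see it:
    f32 = struct.unpack("f", struct.pack("f", 16_777_217))[0]
    print(int(f32))            # 16777216 -- already off by one grid unit

    # Exact geometry kernels multiply coordinates. A cross product of n-bit
    # inputs needs up to ~2n+1 bits, so 64-bit coordinates demand a 128-bit
    # intermediate; Python's arbitrary-precision ints absorb it here:
    x1, y1, x2, y2 = 2**62, 3, 5, 2**62
    print(x1 * y2 - x2 * y1)   # ~2**124: overflows any 64-bit integer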
productceo about 3 years ago
Please keep up the processing-power progress!

The economics of the software industry (or at least of the products I work on) depend on the assumption that the cost of computing (including storage) diminishes exponentially over time! <3
W-Stool about 3 years ago
I've got a whole "HAL 9000" feeling going here right now.

"Sorry Dave - I can't quite do that ..."
urmish about 3 years ago
I remember him from the VLSI text by Dally and Poulton.
orangebeet about 3 years ago
I really hope that they can apply some of these AI approaches to the driver situation on Linux as well. I will never buy an Nvidia product after the nightmares they've put me through.