
The Tesla Dojo Chip Is Impressive, but There Are Some Major Technical Issues

82 points | by tobijkl | over 3 years ago

20 comments

jvanderbot · over 3 years ago

As Hamming suggested in "The Art of Doing Science and Engineering", when you want to make something autonomous, you usually have to build a completely different device that solves the same problem, rather than automating the same device.

I wonder. For all the money thrown into self-driving car *research*, could we have had an autonomous rail system by now? The technology for mostly-autonomous rail is well understood. Most of the financial cost is in infrastructure to support the system. Seems to me self-driving cars try to short-circuit that infrastructure build-up. They try to "automate the device" rather than "produce an automated system that solves the problem of moving people and goods".

Specifically, I wonder whether, for the cost and time spent on CPU-and-engineer-driven research and development of autonomous cars, we could have had *nationwide* autonomous rail *rolled out* by now.
dragontamer · over 3 years ago

Somehow, I'm reminded of the Tsar Tank from WW1. The Russians knew that a new weapon of war, an armored car, was necessary to break the stalemate of trench warfare.

This hypothetical armored car needed many features; the most important was that it must be able to move across the muddy no-man's-land reliably.

Tests had shown that regular-sized wheels would get stuck in the mud. A bigger wheel has more surface area and greater contact area. So the Russians built an armored car with the largest wheels possible. Russian tests were outstanding: the Tsar Tank rolled over a tree!

https://en.m.wikipedia.org/wiki/Tsar_Tank

The French design used caterpillar tracks instead. We know what works now, since we have a century of hindsight.

--------

Spending the most money to make the biggest wheel isn't necessarily the path to victory. I think it's more likely that the tech (i.e., the caterpillar-track equivalent) hasn't been invented yet for robotaxis. Hitting the problem with bigger and more expensive neural-network computers doesn't seem to be the right way to solve it.
justapassenger · over 3 years ago

> Of this competition, only Google and Nvidia have supercomputers that stand toe to toe with the Tesla's

Even assuming that's true (which I very much doubt; anyone willing to spend enough money with Nvidia can have a powerful supercomputer fairly quickly), it's a very dishonest statement. It's comparing a deployed system with a lab prototype of a single component of a potential supercomputer, one that may be fully operational in a few years (software is a really, really, really big deal here).
thesausageking · over 3 years ago

The Q&A section on their compiler and software that the author links to is very interesting:

https://www.youtube.com/watch?v=j0z4FweCy4M&t=8047s

It sounds like they're going to have to write a ton of custom software in order to use this hardware at scale. And, based on the team being speechless when asked a follow-up question, it doesn't sound like they know (yet) how they're going to solve this.

Nvidia gets a lot of credit for their hardware advances, but what really made their chips work so well for deep learning was the huge software stack they created around CUDA.

Underestimating the software investment required has plagued a lot of AI chip startups. It doesn't sound like Tesla is immune to this.
ggoo · over 3 years ago

Tesla's claim-to-delivery ratio is abysmal. I'm not sure why anybody even bothers deconstructing these presentations anymore; they're just fluff.
michelpp · over 3 years ago

Clearly a shot across the bow for Cerebras, and another excellent target for the GraphBLAS.

Dense numeric processing for image recognition is a key foundation for what Tesla is trying to do, but tagging the object is just the beginning of the process. What is the object going to do? What are its trajectories? What is the degree of belief that an unleashed dog, versus a stationary baby carriage, is going to jump out?

We are just beginning to scratch the surface of counterfactual and other belief-propagation models, which are hypersparse graph problems at their core. This kind of chip, and what Cerebras is working on, are the future platforms for the possibility of true machine reasoning.
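The "hypersparse graph" workload the comment contrasts with dense image processing can be sketched with a toy example. This uses `scipy.sparse` as a stand-in for a GraphBLAS implementation; the graph and shapes are made up for illustration, and real hypersparse problems have billions of vertices with far fewer edges than a dense matrix could hold.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Adjacency matrix of a tiny directed graph: 0 -> 1, 0 -> 2, 2 -> 1.
rows = np.array([0, 0, 2])
cols = np.array([1, 2, 1])
vals = np.ones(3)
A = csr_matrix((vals, (rows, cols)), shape=(3, 3))

# One step of "which vertices are reachable from vertex 0", expressed
# as a sparse vector-matrix product -- the core GraphBLAS primitive.
# Only the nonzero entries are ever touched, unlike a dense matmul.
frontier = csr_matrix(([1.0], ([0], [0])), shape=(1, 3))
reached = frontier @ A

print(sorted(reached.indices.tolist()))  # vertices one hop from 0
```

Iterating this product to a fixed point gives breadth-first search; swapping the (+, *) semiring for others (min-plus, logical or-and) yields shortest paths, reachability, and the belief-propagation-style traversals the comment alludes to.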
2bitencryption · over 3 years ago

From the article:

> but the short of it is that their unique system-on-wafer packaging and chip design choices potentially allow an order-of-magnitude advantage over competing AI hardware in training of massive multi-trillion-parameter networks.

I kind of wonder if Tesla is building the Juicero of self-driving. [0]

Beautifully designed. An absolute marvel of engineering. The result of brilliant people with tons of money using every ounce of their knowledge to create something wonderful.

Except... you could just squeeze the bag. You could just use LIDAR. You could just use your hands to squish the fruit and get something just as good. You could just (etc., etc.).

No doubt future Teslas will be supercomputers on wheels. But what if all those trillions of parameters spent trying to compose 3D worlds out of 2D images are pointless if you can just get a scanner that operates in 3D space to begin with?

[0] https://www.theguardian.com/technology/2017/sep/01/juicero-silicon-valley-shutting-down
modeless · over 3 years ago

I'm glad people are exploring the design space. To some extent, training techniques and neural-net architectures need to be tailored to the hardware. Nvidia isn't on top just because they're good at chip design, but because people have chosen to focus research effort on techniques that work well on Nvidia hardware. New hardware may allow new techniques to shine.

New hardware architectures can't really be used to their full potential without years of research into techniques suited for them. The more people who have access to the hardware, the faster we can discover those techniques. If Tesla is serious about their hardware project, they need to offer it to the public as some kind of cloud training system. They don't have enough people internally to develop everything themselves in a short enough time to remain competitive with the rest of the industry.
cr4zy · over 3 years ago

Trillion-parameter networks are mentioned a few times, but Tesla is deploying much smaller networks than that (tens of millions of parameters, IIUC). Trillion-param networks are mostly transformers like GPT-3 (actually 175B) etc. that are particularly heavy versus conv nets, as they have no weight sharing. Tesla is definitely starting to use transformers though, e.g. for camera fusion, as evidenced by their focus on matrix multiply in the Dojo ASICs versus the conv ASICs they have in the on-vehicle chips.
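The weight-sharing gap the comment describes can be made concrete with back-of-envelope parameter counts: a conv layer reuses one small kernel at every spatial position, while a transformer-style dense projection has a weight per connection. The layer shapes below are hypothetical, not Tesla's actual network dimensions.

```python
def conv2d_params(in_ch: int, out_ch: int, k: int) -> int:
    """Parameters in a k x k conv layer: the kernel is shared across
    every pixel, so the count is independent of image size."""
    return in_ch * out_ch * k * k + out_ch  # weights + biases

def dense_params(d_in: int, d_out: int) -> int:
    """Parameters in a fully connected projection: one weight per
    input-output pair, plus biases."""
    return d_in * d_out + d_out

conv = conv2d_params(256, 256, 3)   # a typical backbone conv block
proj = dense_params(4096, 4096)     # one transformer-sized projection

print(f"conv layer:  {conv:,} params")
print(f"dense proj:  {proj:,} params")
```

A single 4096-wide projection already carries ~28x the parameters of the conv block, and a transformer stacks four or more such projections per attention layer, which is why trillion-parameter models are almost all transformers.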
m3kw9 · over 3 years ago

All I see is: [techno terms].. impressive engineering.. lots of problems need to be solved first.. 2022.. on paper, toe to toe with Nvidia.. calm the hype.
thunkshift1 · over 3 years ago

What a BS fanboy article. The author is going gaga over something that isn't even out in silicon yet, and has no credible plan for a software ecosystem on top of the hardware (if it ever materializes). Unbelievable hype.
Const-me · over 3 years ago

> they have 1.25MB of SRAM and 1TFlop of FP16/CFP8… This is woefully unequipped for the level of performance they want to achieve.

Any idea how the OP reached that conclusion?

My GeForce 1080 Ti has 1.3MB of in-core L1 caches (28 streaming multiprocessors, 48 KB L1 each). It also has L2, but not much: slightly under 3MB for the whole chip.

The GPU delivers about 10 TFlops of FP32, which needs 2x the RAM bandwidth of FP16. I'm generally OK with that level of performance, at least until the GPU shortage is fixed.
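The "2x the RAM bandwidth" remark is just element-width arithmetic: FP32 operands are 4 bytes and FP16 operands are 2, so streaming the same number of operands per second costs half the memory traffic in FP16. A minimal sketch, with a hypothetical fetch rate (the 1080 Ti figures in the comment are not used here):

```python
# Bytes per element for each floating-point width.
BYTES = {"fp32": 4, "fp16": 2}

def traffic_gb_per_s(operands_per_second: float, dtype: str) -> float:
    """Memory traffic (GB/s) needed to stream this many operands/s."""
    return operands_per_second * BYTES[dtype] / 1e9

fetches = 100e9  # hypothetical: 100 billion operand fetches per second
fp32 = traffic_gb_per_s(fetches, "fp32")
fp16 = traffic_gb_per_s(fetches, "fp16")

print(f"FP32: {fp32:.0f} GB/s, FP16: {fp16:.0f} GB/s")
```

The same halving applies to on-chip SRAM: a fixed 1.25 MB buffer holds twice as many FP16 values as FP32 ones, which is part of why low-precision formats are attractive for training chips.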
neolefty · over 3 years ago

> This chip is not Tesla designing something that is better than everyone else all by themselves. We are not at the liberty to reveal the name of their partner(s), but the astute readers will know exactly who we are talking about when we reference the external SerDes and photonics IP.

Any "astute readers" here who know who the partner would be?
_nalply · over 3 years ago

My curiosity was piqued by the mention of CFP8 (configurable floating point 8), but googling it didn't yield any usable information.

What exactly is CFP8? How many bits does one instance of CFP8 use? What mathematical operations are supported? How does one configure the floating point?
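CFP8's actual layout is not public, so as an illustration only, here is a decoder for a generic 8-bit float with a configurable split between exponent and mantissa bits — the kind of knob "configurable" plausibly refers to. This is a guess at the concept (modeled on the common E4M3/E5M2 FP8 variants), not Tesla's documented format.

```python
def decode_fp8(byte: int, exp_bits: int = 4) -> float:
    """Decode an 8-bit float: 1 sign bit, exp_bits of exponent,
    and (7 - exp_bits) mantissa bits, with an IEEE-style bias."""
    man_bits = 7 - exp_bits
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    bias = (1 << (exp_bits - 1)) - 1
    if exp == 0:  # subnormal: no implicit leading 1
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# The same byte reads differently under each configuration: more
# exponent bits trade precision for dynamic range.
print(decode_fp8(0b00111000, exp_bits=4))  # E4M3 reading
print(decode_fp8(0b00111000, exp_bits=5))  # E5M2 reading
```

E4M3-style layouts are typically used where precision matters (weights, activations) and E5M2-style where range matters (gradients); a "configurable" hardware format would let the split be chosen per tensor.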
scardycat · over 3 years ago

This is a step in the right direction. I witnessed the semiconductor industry abandoning its own designs in favor of Intel/x86. Better diversity in chip design is always a good thing, even if it's in closed ecosystems (Google TPU, Tesla Dojo).
rektide · over 3 years ago

Some discussion yesterday: https://news.ycombinator.com/item?id=28361807

It's interesting because it's clearly exciting, leading-edge tech. Unlike most Tesla tech, which consumers ultimately use and where we all get to assess strengths and weaknesses, this tech is going to remain inside the Tesla castle: unviewable, unassessable. We'll probably never know what real strengths or weaknesses it has, never understand all the ways it doesn't work well, or as well as competitors'. It's going to remain an esoteric dollop of computing.
danso · over 3 years ago

I confess I have a reflexive skepticism toward the idea that Tesla's achievements (and struggles) in car manufacturing would translate into any kind of lead in chip design and manufacturing. How long did it take Apple from planning to rollout for the M1? And the Tesla chip seems to be making bigger, revolution-sized claims?
lpapez · over 3 years ago

2022 will surely be the year of Linux on the desktop and fully self-driving cars.
immmmmm · over 3 years ago

Not fully related, but I was doing some reading on various "new sustainable ways of transportation" and, since they're building the biggest hyperloop test track near my place, I found this interesting video on some of the problems you can run into trying to pull a vacuum in a pipe:

https://youtu.be/Zz95_VvTxZM
CasillasQT · over 3 years ago

"We believe it makes sense for Tesla to pour as much capital as needed into winning the Robotaxi race and catch up to these two." That has to be a joke, right?