"... which outperforms the Nvidia H100 <i>in energy efficiency</i>".<p>Very unlike what the headline was saying.<p>The article doesn't seem to really state how much real world performance this thing actually has. Seems pretty dodgy?
This is the FFM paper, one of the papers in question (since there is no mention of TAICHI here):<p><a href="https://www.nature.com/articles/s41586-024-07687-4" rel="nofollow">https://www.nature.com/articles/s41586-024-07687-4</a><p>A Tsinghua page mentions only this paper:<p><a href="https://media.au.tsinghua.edu.cn/info/1016/1419.htm" rel="nofollow">https://media.au.tsinghua.edu.cn/info/1016/1419.htm</a><p>And this is one from a year ago from the same group, also a photonic chip:<p><a href="https://www.nature.com/articles/s41586-023-06558-8" rel="nofollow">https://www.nature.com/articles/s41586-023-06558-8</a><p>I haven't read any of these yet.
Can any expert in this domain validate the claimed performance over the H100? Any pointers on how optical processors work and how mature they are compared to traditional processors?
I would be shocked if Nvidia didn't use optical interconnects in their next iteration. My understanding is that the big power loss in these racks comes from having to bus the data around.<p>The article only seems to mention power savings and speed, both of which are obvious benefits over traditional electrical circuits. The bottleneck for optical computing isn't the speed of light or the efficiency of a fiber; it's that the components are orders of magnitude larger and have little to no manufacturing support.<p>Don't get me wrong, I'm hyped for the eventual PIC revolution, I just don't think this is it.
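For scale, here's a back-of-envelope sketch of why data movement dominates. Every number below is an assumed ballpark (typical figures quoted for long-reach electrical SerDes vs. co-packaged optics), not anything from the article or a specific product:

```python
# Rough energy cost of shuffling activations across a rack:
# electrical SerDes vs. an optical link, per inference step.
# All constants are assumptions for illustration only.

ELECTRICAL_PJ_PER_BIT = 5.0  # assumed: long-reach electrical SerDes, ~1-10 pJ/bit
OPTICAL_PJ_PER_BIT = 1.0     # assumed: co-packaged optics, ~0.5-2 pJ/bit

bytes_moved = 100e9          # assumed: 100 GB moved between chips per step
bits_moved = bytes_moved * 8

for name, pj_per_bit in [("electrical", ELECTRICAL_PJ_PER_BIT),
                         ("optical", OPTICAL_PJ_PER_BIT)]:
    joules = bits_moved * pj_per_bit * 1e-12  # pJ -> J
    print(f"{name}: {joules:.1f} J per step")
# electrical: 4.0 J per step
# optical:    0.8 J per step
```

Under those assumptions the interconnect energy drops several-fold just from swapping the link, which is why optical <i>interconnects</i> look like the near-term win even if optical <i>compute</i> is still far off.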