This general spine-leaf construction, with a super-spine layer on top, comes up frequently in recent networking conference talks. ECMP on top of OSPF/BGP is a very well established way to build super-switches that scale out to very large fabrics.<p>I'd be really interested in the specifics they don't describe very well: cable layouts and the automated configuration of pods.<p>Also, for anybody stuck in the old paradigm of super-expensive, inflexible switches from the traditional network vendors, be sure to check out the commodity gear that was mentioned earlier in this HN thread:<p><a href="https://news.ycombinator.com/item?id=8400953" rel="nofollow">https://news.ycombinator.com/item?id=8400953</a>
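To make the ECMP part concrete, here's a minimal sketch (my own illustration, not anything from the article) of how a switch with several equal-cost uplinks typically picks a path: hash the flow's 5-tuple and take the result modulo the number of next hops, so one flow sticks to one spine while many flows spread across all of them. Switch ASICs do this in hardware with their own hash functions; SHA-256 here is just for a deterministic demo.

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Pick one of several equal-cost next hops by hashing the flow's
    5-tuple. Every packet of a flow hashes the same way (no reordering),
    while distinct flows spread across all uplinks."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return next_hops[digest % len(next_hops)]

# Hypothetical leaf switch with four spine uplinks.
spines = ["spine-1", "spine-2", "spine-3", "spine-4"]

# The same flow always lands on the same spine:
a = ecmp_next_hop("10.0.0.1", "10.0.1.1", 51000, 443, "tcp", spines)
b = ecmp_next_hop("10.0.0.1", "10.0.1.1", 51000, 443, "tcp", spines)
assert a == b

# Many distinct flows (varying source port) spread across spines:
paths = {ecmp_next_hop("10.0.0.1", "10.0.1.1", p, 443, "tcp", spines)
         for p in range(50000, 50200)}
```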
> What’s different is the much smaller size of our new unit – each pod has only 48 server racks<p>48 racks seems pretty darn large by itself, and that's the smallest unit they deal with. At only 20 servers per rack, that's 960 servers in their smallest unit. And they make it sound like there are hundreds of these pods in a single datacenter...<p>A single pod is bigger than the vast majority of the Top500 supercomputers...
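The back-of-envelope arithmetic above works out like this (the 100-pod figure is my own illustrative assumption for "hundreds of pods"; only the 48 racks and 20 servers/rack come from the quote):

```python
# Numbers quoted in the comment above.
SERVERS_PER_RACK = 20
RACKS_PER_POD = 48

servers_per_pod = SERVERS_PER_RACK * RACKS_PER_POD
print(servers_per_pod)  # 960 servers in the smallest unit

# Assumed: 100 pods as a conservative reading of "hundreds of pods".
servers_per_datacenter = servers_per_pod * 100
print(servers_per_datacenter)  # 96000
```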
Their datacenter schematic looks a lot like the schematic of a big server from 15 years ago, with server racks in place of CPU boards. "The datacenter is the computer."