Oh, I was hoping this would be something built more directly over Ethernet, rather than on top of UDP/IP (if I'm understanding the layer diagram correctly).<p>I've been working with Ethernet devices a lot lately, using the network as a communication bus, essentially. I find that there's a lot of complexity that we simply don't need: ARP, DHCP, DNS... So many points of failure. We know all the devices on our LAN and their unique MAC addresses, and could do everything we need to addressing-wise at Layer 2. But everything's built on Layer 3 and up, so we're effectively working backward to map devices to IP addresses and vice versa. It's unsatisfying.
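The Layer-2-only addressing idea above can be sketched in a few lines: if every device on the LAN is known by its MAC address, a frame can be built and addressed directly, with no ARP/DHCP/DNS involved. This is a hypothetical illustration (the MACs and the IEEE local-experimental EtherType 0x88B5 are made up for the example, and the send path assumes Linux AF_PACKET sockets), not anything from the article.

```python
import struct

def build_l2_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build a raw Ethernet II frame: 6B dst MAC, 6B src MAC, 2B EtherType, payload."""
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    # Pad payload to the 46-byte minimum (60B frame before the FCS).
    padded = payload.ljust(46, b"\x00")
    return header + padded

# Known peer on the LAN, addressed purely by MAC -- no ARP/DHCP/DNS needed.
dst = bytes.fromhex("02a1b2c3d4e5")   # locally administered addresses (hypothetical)
src = bytes.fromhex("02a1b2c3d4e6")
frame = build_l2_frame(dst, src, 0x88B5, b"hello device")  # 0x88B5: IEEE local experimental EtherType

# Sending on Linux would use an AF_PACKET socket (requires root), e.g.:
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
#   s.bind(("eth0", 0))
#   s.send(frame)
```

The frame itself is trivial; what Layer 3 and up actually buy you (routing beyond the broadcast domain, fragmentation, well-known service discovery) is exactly what a closed, fully-enumerated LAN doesn't need.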
Hmmm, so much of this looks like an attempt to solve the problems that were solved with Fibre Channel a couple decades back. Which I guess is standard NIH, with the advantage of not having to pay the FC consortium 95% HW margins.<p>But still, you would think that some of those lessons could be learned before replacing it. E.g., FC routes IP as one of its many protocols on top of the lower levels, which provide far more service guarantees than one normally gets with Ethernet. Much of the QoS/latency/etc. metrics were designed into FC from the beginning for use on storage area networks (SANs). It just never took off as an IP transport because it cost 10x as much as Ethernet, including a decade ago when these same groups tried to dump it on an Ethernet MAC, only to discover that it requires special switches which were $$$$ because "enterprise markup", defeating the whole point of cheap Ethernet PHYs. See FCoE.<p>And yet today, there is NVMeoF on FC, which is what one runs when it's important that someone scp'ing a file on your network doesn't cause your database queries to slow down.<p>What I don't get is why OCP doesn't just actually build some of these adapters/etc. with a "we won't be greedy" take and sell them not only to the hyperscalers but on the open market. That way someone could actually build, say, an FC adapter that has a price similar to an Ethernet adapter.
Is it just me getting older / less smart, or did articles about products really start to sound like a jumbled mess of buzzwords lately?<p>What is "hardware transport", what is "the ecosystem"? And then there are dozens of random products and technologies that I've never heard of...<p>This sounds more like a humblebrag than an article trying to inform people about technologies that might actually be useful to them.
When you have enough scale, you can claim that one particular way of doing things is better than the others, when in most cases it's just one way of doing things. That's what we see here.
To this day I still haven't seen a more sensible API for low-latency Ethernet than Exablaze (was the market leader in low-latency trading, then got bought by Cisco).<p>The only thing blocking these from becoming standard is that it means userland has direct control of hardware.
I'm confused by this because we've been using Falcon at work for over a year now, perhaps longer, as I just started a year ago. What are they making available that wasn't already?
I don’t understand networking all that well. Is it interesting that the telcos and non-tech companies are moving away from specialized hardware toward software defined networks while the hyperscalers are using hardware acceleration?
It sounds like this builds on top of Ethernet to provide a higher performance alternative to UDP/TCP, with some sort of hardware acceleration.<p>I may be in over my head since I’m not an HPC/datacenter expert, but not sure I understand how you’d use this on the software side. Maybe someone is aware of specific examples? (beyond the vague “HPC/AI”)<p>edit: as another comment mentioned, the diagram shows it’s on top of UDP/IP, so it’s mostly an alternative to TCP/IP
I normally like Google blog announcements, as they are usually heavy on technical details. But not this one. Quoting, the meat of it is:<p>> Fine-grained hardware-assisted round-trip time (RTT) measurements with flexible, per-flow hardware-enforced traffic shaping, and fast and accurate packet retransmissions, are combined with multipath-capable and PSP-encrypted Falcon connections ... flexible ordering semantics and graceful error handling ... hardware and software are co-designed to work together to help achieve the desired attributes of high message rate, low latency, and high bandwidth<p>So like QUIC, but designed for low latency. Maybe. If it is, there is no indication of how they achieve it, nor is there a link to further details. The bulk of the article is literally name-dropping: protocol names, FAANG company names, standards-organisation names. It reads like C-suite bait. "Come join us boys - all the big guys already have. So it's a sure winner."
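For what the quoted ingredients amount to in the abstract: a reliable transport layered over UDP needs, at minimum, per-packet sequence numbers, ACKs, RTT measurement, and retransmission on loss. Below is a generic toy sketch of that pattern over loopback UDP; it is emphatically not Falcon's wire format or API (and it omits everything interesting in the quote: hardware offload, per-flow shaping, multipath, and PSP encryption).

```python
import socket
import struct
import threading
import time

def serve_once(sock: socket.socket) -> None:
    """Receive one data packet and echo an ACK carrying its sequence number."""
    data, addr = sock.recvfrom(2048)
    (seq,) = struct.unpack("!I", data[:4])
    sock.sendto(struct.pack("!I", seq), addr)  # ACK

def send_reliable(sock, dest, seq: int, payload: bytes,
                  timeout: float = 0.2, retries: int = 5) -> float:
    """Send until ACKed; return the measured round-trip time in seconds."""
    pkt = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        start = time.monotonic()
        sock.sendto(pkt, dest)
        try:
            ack, _ = sock.recvfrom(2048)
            if struct.unpack("!I", ack[:4])[0] == seq:
                return time.monotonic() - start  # RTT sample
        except socket.timeout:
            continue  # lost packet or lost ACK: retransmit
    raise TimeoutError("no ACK after retries")

# Demo over loopback: one receiver thread, one sender.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t = threading.Thread(target=serve_once, args=(rx,))
t.start()
rtt = send_reliable(tx, rx.getsockname(), seq=1, payload=b"query")
t.join()
```

The announcement's pitch, as far as one can tell, is doing this loop (and the shaping/encryption around it) in NIC hardware rather than in a software stack like the one above.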
I was confused by the reference to “lossy” networks in this page. Does this have a different meaning in this context than something like lossy compression where data is actually discarded?
I guarantee that there will eventually be a vaguely similar (but different!) stack published by each of: Netflix, Microsoft, Amazon, and Apple. Just kidding, Apple won't publish anything.<p>The IT ecosystem has fragmented into mutually incompatible cliques. You are either in the Google ecosystem, the Amazon ecosystem, or some other one, but there are no more truly open and industry-wide standards.<p>Look at WebAuthn: it enables a mobile device from "any" vendor to sign on to web pages without a password. Great! Can I transfer secrets from an Apple iPhone to a Google Android phone? Yes? No? Hello? Anyone there?<p>I just got a new camera. It can take HDR still images, which look <i>astonishingly</i> good. Can I send that to an Apple device? Sure! Can I send it to a Google device? Err... not without transcoding it first... on a Microsoft Windows box. Can I send it to a mailing list of people with mixed-vendor devices? Ha-ha... no.<p>This is the best argument I've seen for splitting up the FAANGs + Microsoft + NVIDIA. Once they get to this behemoth trillion-dollar scale, they become nations unto themselves and no longer need to cooperate, no longer need to use any open standards at all, and can start dictating and pushing third parties around.<p>Another random example is HTTP/3, which is basically the "what's best for Google" protocol.<p>Or gRPC, which is "what Google needs in their data centre".<p>And now Falcon, which is "the transport Google needs for their workloads".<p>Does it work for anyone else? I don't know, but it's a certainty that Google doesn't care and never will, because <i>they don't need to</i>.