What's missing is a 2021 perspective on why this is or is not useful at scale. Note that Google transforms edge IP into some mythic internal protocol, which in part explains why Google won't do IPv6 direct to some things like GCP: they couldn't, for a given generation of hardware.

Basically: yes, within a DC where traffic is heading to some kind of Traefik, HAProxy, or other redirector/sharder, this could make sense. So... how does this 2018 approach stack up in 2021?
It targets the properties that the Aeron protocol aims for too: https://github.com/real-logic/aeron
I have a feeling the vast majority of traffic should be pub/sub or live-query rather than request/response, and yet it is currently the latter, which might mean this is aiming at the wrong goalposts.
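For the shape of that difference, here's a minimal C sketch (the multicast group and port are made up): a subscriber performs one control operation and then just receives pushed updates, whereas request/response pays a round trip per message.

```c
/* Illustrative only: the network-level shape of pub/sub.
 * Join a multicast group once, then receive published updates;
 * no per-message request ever leaves the host. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local = {
        .sin_family = AF_INET,
        .sin_port = htons(5000),                 /* assumed feed port */
        .sin_addr.s_addr = htonl(INADDR_ANY),
    };
    bind(s, (struct sockaddr *)&local, sizeof local);

    /* the "subscription": one control operation, then a stream of pushes */
    struct ip_mreq mreq = { 0 };
    inet_pton(AF_INET, "239.1.2.3", &mreq.imr_multiaddr); /* assumed group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    char buf[1500];
    for (;;) {
        ssize_t n = recv(s, buf, sizeof buf, 0);
        if (n > 0)
            printf("update: %.*s\n", (int)n, buf);
    }
}
```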
In a similar vein, Bell Labs developed the IL protocol in the '90s to get better 9P performance on Plan 9. It did not work well over the internet due to latency, but was beneficial on local networks. It was most useful for disk servers serving a root fs to CPU servers and terminals/workstations.

http://doc.cat-v.org/plan_9/4th_edition/papers/il/

(edit: forgot to mention IL is still usable on Plan 9)
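As a rough illustration of how little changes at the application layer, here is a Plan 9 C sketch (hypothetical server name; "9fs" is the service name resolved via /lib/ndb): moving 9P from TCP to IL is essentially a dial-string change.

```c
/* Plan 9 sketch: mount a 9P file server over IL instead of TCP.
 * "fileserver" is a placeholder host; error handling kept minimal. */
#include <u.h>
#include <libc.h>

void
main(void)
{
	int fd;

	/* the TCP equivalent would be netmkaddr("fileserver", "tcp", "9fs") */
	fd = dial(netmkaddr("fileserver", "il", "9fs"), 0, 0, 0);
	if(fd < 0)
		sysfatal("dial: %r");
	/* attach the 9P connection into the namespace */
	if(mount(fd, -1, "/n/fs", MREPL, "") < 0)
		sysfatal("mount: %r");
	exits(0);
}
```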
If one uses internal IPs (128.x, 10.x, etc.), why can't one use another protocol for internal transfer? The internal traffic would then be cut off from the internet not just by IP range but also by protocol.

It would be like OSPF vs. BGP: a different protocol within the organisation, so why not …
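A hedged sketch of what that could look like on Linux: IP carries protocol numbers other than TCP (6) and UDP (17) just fine, and 253/254 are reserved for experimentation (RFC 3692). Inside your own network, where you control the routers, this works; public-internet middleboxes will mostly drop it. (The destination is a made-up internal host; requires CAP_NET_RAW.)

```c
/* Hypothetical: an internal-only transport riding directly on IP
 * with its own protocol number (253 = experimental, RFC 3692). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, 253);   /* IP proto 253, not TCP/UDP */
    if (s < 0)
        return 1;

    struct sockaddr_in dst = { .sin_family = AF_INET };
    inet_pton(AF_INET, "10.0.0.2", &dst.sin_addr);  /* assumed internal host */

    const char payload[] = "hello over a non-TCP/UDP protocol";
    sendto(s, payload, sizeof payload, 0,
           (struct sockaddr *)&dst, sizeof dst);
    close(s);
    return 0;
}
```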
QUIC is similar and is standardized by the IETF: https://en.wikipedia.org/wiki/QUIC
Deterministic Ethernet varieties have been around for a very long time.

If you have fabric determinism, as with InfiniBand, and capacity reservation on the receiving side, you can simply dispose of the connection paradigm and flow control, and thus gain a great deal of performance while simplifying everything at the same time.

I do not see much use for it, though, unless you are building something like an airplane.

The uncounted PhD hours spent on getting networks to work well do amount to something.

Dealing with RDMA-aware networking is far beyond the ability of typical web developers.

Deterministic Ethernet switches cost a fortune, and lag behind the broader Ethernet standard by many years.

Making a working capacity-reservation setup takes years to perfect as well.

99.9999...% of web software will most likely *lose* performance if blindly ported to an RDMA-enabled database, message queue server, or cache. If you don't know how epoll- or io_uring-like mechanisms work, you cannot get any benefit out of RDMA whatsoever.

I once worked for a subcontractor for Alibaba's first RDMA-enabled datacentre.
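To make the epoll/io_uring point concrete: RDMA completions are reaped from a completion queue rather than read from a socket, so the application has to be structured around an asynchronous reaping loop. A minimal libibverbs sketch (assuming a cq set up elsewhere; error handling omitted):

```c
/* Minimal sketch of the RDMA completion-queue loop (libibverbs). */
#include <infiniband/verbs.h>
#include <stdio.h>

void drain_completions(struct ibv_cq *cq)
{
    struct ibv_wc wc[16];
    int n;

    /* Like epoll_wait()/io_uring CQE reaping: completions arrive
     * asynchronously and must be polled, not read from a stream. */
    while ((n = ibv_poll_cq(cq, 16, wc)) > 0) {
        for (int i = 0; i < n; i++) {
            if (wc[i].status != IBV_WC_SUCCESS) {
                fprintf(stderr, "wr %llu failed: %s\n",
                        (unsigned long long)wc[i].wr_id,
                        ibv_wc_status_str(wc[i].status));
                continue;
            }
            /* handle the completed send/recv/RDMA op identified by wr_id */
        }
    }
}
```

If your application is written around blocking read()/write() calls, none of this maps onto it without a rewrite, which is the point above.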