This is a bit of a shot in the dark, but my guess is that they're doing this because their stack can't properly deal with ICMPv6 packets on the return path. In ICMPv6, for some reason, the designers saw fit to include the IP header information (a pseudo-header containing the source and destination addresses) in the ICMP checksum, so if you're doing NAT or any other address rewrite you need to recompute the checksum for the ICMP packet, and if it's an error packet you need to do this for the inner packet as well.<p>It seems plausible that their network stack wasn't up to that task, so they jury-rigged this odd connection forwarding instead.<p>That's the only thing I can think of, because otherwise there's just no planet where this makes any sense. That said, NAT for IPv6 is a generally problematic concept, and they were probably flying a bit blind on how to implement it since there's no real standard way to do it. IPv6 was really designed around the idea that every endpoint would have a unique, globally routable address.
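For anyone curious why address rewriting breaks things, here's a minimal sketch (my own illustration, not anything from VMware's stack) of the RFC 4443 ICMPv6 checksum, computed over a pseudo-header that includes the source and destination addresses - change either address and the old checksum no longer verifies:

    # Illustration only: ICMPv6 checksum over the RFC 2460 pseudo-header.
    import ipaddress
    import struct

    def ones_complement_sum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\x00"                      # pad to a 16-bit boundary
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        while total > 0xFFFF:                    # fold carries back into the low 16 bits
            total = (total & 0xFFFF) + (total >> 16)
        return total

    def icmpv6_checksum(src: str, dst: str, icmp_message: bytes) -> int:
        # Pseudo-header: src addr, dst addr, upper-layer length, zeros, next header 58
        pseudo = (ipaddress.IPv6Address(src).packed
                  + ipaddress.IPv6Address(dst).packed
                  + struct.pack("!I", len(icmp_message))
                  + b"\x00\x00\x00\x3a")
        return 0xFFFF ^ ones_complement_sum(pseudo + icmp_message)

    # An Echo Request with its checksum field zeroed; rewriting the source
    # address changes the required checksum, which is the fix-up a v6 NAT must do.
    echo = b"\x80\x00\x00\x00\x00\x01\x00\x01"
    print(hex(icmpv6_checksum("fd00::1", "2001:db8::1", echo)))
    print(hex(icmpv6_checksum("fd00::2", "2001:db8::1", echo)))

And for error messages the rewritten original packet is embedded in the payload, so the same fix-up has to be applied one level down as well.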
Did they lay off the whole team to the point where they can't even push updates? Their second-to-last blog post is about a hotfix that still hasn't made it into a proper 8.1.1 patch release after several months; you have to download a random file from their blog and manually patch it in via the terminal...?!<p><a href="http://blogs.vmware.com/teamfusion/2016/01/workaround-of-nat-port-forwarding-issue-in-fusion-8-1.html" rel="nofollow">http://blogs.vmware.com/teamfusion/2016/01/workaround-of-nat...</a>
It's not just NAT that's broken. On both Windows and Linux hosts, with bridged networking, SLAAC doesn't work for Linux or FreeBSD guests. It does work <i>eventually</i>, after somewhere between 5 and 30 minutes, but for machines on the physical LAN it's virtually instantaneous. Something is dropping the router advertisements, but eventually one gets through. Once the guest has an address, it then works just fine.<p>Not so great when all the systems you want to talk to are v6-only and the v4 NAT address is just for legacy use.
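If you want to confirm it's the RAs that are getting dropped, here's a rough sketch (assuming a Linux guest and root, nothing official) that listens on a raw ICMPv6 socket and prints Router Advertisements as they arrive:

    # Print Router Advertisements (ICMPv6 type 134) and how long they took to show up.
    import socket
    import time

    sock = socket.socket(socket.AF_INET6, socket.SOCK_RAW, socket.IPPROTO_ICMPV6)
    start = time.time()
    while True:
        data, addr = sock.recvfrom(2048)     # raw ICMPv6 message, IPv6 header already stripped
        if data and data[0] == 134:          # type 134 = Router Advertisement
            print("RA from %s after %.1fs" % (addr[0], time.time() - start))

On the physical LAN this prints something within seconds; in the bridged guest, going by the behaviour above, it can take many minutes before the first one appears.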
I totally understand that the observed behavior may not be what was intended, but there's clearly some complexity of the sort that doesn't happen by accident. What was VMWare <i>trying</i> to do, and which parts of this mess were unintentional? Is this an experimental feature that was correctly disabled for IPv4 but accidentally left on for IPv6, or was it intended to be released and on for both?
I thought this was posted not that long ago.<p>But in any case, I was wondering if this had anything to do with Happy Eyeballs, but never got any further input on that.<p>EDIT: On rereading, this is the follow-up post.
Yeah, it's very unlikely this will be resolved, given that the team that developed Fusion was retrenched.<p>VMware are no longer, in my view, a particularly innovative company.
This sort of thing isn't all that uncommon - enterprise "network optimiser" appliances like <a href="http://www.riverbed.com/" rel="nofollow">http://www.riverbed.com/</a> also terminate and re-originate connections in the middle like this. Hopefully not as buggy, though.