I'm suspicious about the IP 169.150.221.147
My guess: there is some misconfigured bogon IP filter, and instead of 169.254.0.0/16 (RFC 3927) something like 169.0.0.0/8 is configured to be blocked on some firewall.<p>I was once a customer of an ISP that mistakenly blocked the whole 192.0.0.0/8 net, which caused some confusion, but they fixed it after I pointed it out.
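For illustration only, a minimal sketch of how such a slip could look in an iptables-based bogon filter (the chain and rules here are hypothetical, not anything confirmed on Berkeley's network):

    # Intended: drop RFC 3927 link-local traffic
    iptables -A FORWARD -s 169.254.0.0/16 -j DROP
    # Fat-fingered variant: the mask is off, and suddenly every public
    # 169.x.x.x address (including 169.150.221.147) gets dropped too
    iptables -A FORWARD -s 169.0.0.0/8 -j DROP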
What really grinds my gears is that a networking team believes the culprit is a static DNS that "conflicts" with their DNS.<p>Like...<p>"My car won't start."<p>"Oh, OK, have you tried waiting for the traffic lights to go green, as designed by the Principal Road Engineer?"
Think I was able to reproduce it. I configured my router to drop established connections for IP 169.150.221.147 in the policy attached to my WAN interface for outgoing traffic (important detail: inbound would drop the SYN/ACK instead). For reference, it's a Ubiquiti EdgeRouter that uses iptables to filter traffic.<p>In the linked picture [0] I have packet #436 selected; it's a retransmission of the handshake SYN/ACK with seq=0 ack=1, repeating a few times later, same as OP.<p>So as others suggested, likely a misconfigured BOGON rule with 169.0.0.0/8, but also matching outbound established connections rather than new/any state for some reason.<p>[0] <a href="https://i.imgur.com/AwJGI3W.png" rel="nofollow">https://i.imgur.com/AwJGI3W.png</a>
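Roughly equivalent raw iptables, if anyone wants to reproduce this without an EdgeRouter (the WAN interface name is a placeholder; adjust for your box):

    # Drop only packets belonging to an already-established flow toward that IP,
    # leaving the initial SYN and the returning SYN/ACK alone - this reproduces
    # the "handshake looks fine, then nothing" symptom described above
    iptables -A FORWARD -o eth0 -d 169.150.221.147 \
        -m conntrack --ctstate ESTABLISHED -j DROP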
I'm rather surprised that Berkeley Student Tech Services would keep people around who either don't know how DNS works, or do know but make up excuses to dismiss a problem.<p>The problem really should be escalated and the nonsense answer pointed out, because if they care (and they should), they'll want to educate the person who gave that response.
Feels like some stateful device within someone's network mishandling the connection state, like the author guesses.<p>It's interesting that your side thinks the three-way handshake worked, but the remote side continues to resend the [SYN, ACK] packets, as if it never received the final [ACK] from you.<p>Had a hellish time troubleshooting a similar problem several years ago with F5 load balancers - there was a bug in the hashing implementation used to assign TCP flows to different CPUs. If you hit this bug (parts per thousand), your connection would be assigned to a CPU with no record of that flow existing, so the connection would appear alive but would no longer pass packets. It would take a long time for the local TCP stack to go through its exponential retries and finally decide to drop the connection and start over.
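If you want to confirm that retransmission pattern from a capture without clicking around Wireshark, something like this works (assumes a reasonably recent tshark and a capture saved as trace.pcap - both are just illustrative names):

    # List every SYN/ACK coming from the server; more than one for the same
    # tcp.stream means the server never saw (or never accepted) our final ACK
    tshark -r trace.pcap \
        -Y 'ip.src == 169.150.221.147 && tcp.flags.syn == 1 && tcp.flags.ack == 1' \
        -T fields -e frame.number -e tcp.stream -e tcp.seq -e tcp.ack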
99% it's MTU size. Had this recently, specifically with TLS, due to large initial packets containing certificates. Results could even depend on the user agent: some fail, some will work.<p>Try reducing the MTU on the client; 1280 is a good starting point.
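A quick, reversible way to test that on a Linux client (the interface name is a placeholder; pick yours from `ip link`):

    # Temporarily clamp the interface MTU; revert afterwards (usually 1500)
    ip link set dev eth0 mtu 1280
    # Or probe where fragmentation breaks without touching the interface:
    # 1252 bytes of payload + 28 bytes of IP/ICMP headers = 1280 on the wire
    ping -M do -s 1252 berkeley.edu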
A raw packet capture would be useful to look deeper. Actually two: one of the IP in question and one of any other site, both from the problem source network. I would wager one of these things is not like the other, but I need the .cap files, as there is not enough information in the screenshot. The output of <i>ss -emoian</i> as text, and not a screenshot, may also be useful to grab just after the connections are attempted to both destinations.
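Concretely, something along these lines for each destination (run as root; interface name and the control site are placeholders), then attempt the connections and grab the socket state right after:

    # Full-packet capture of everything to/from the problem IP
    tcpdump -i eth0 -s 0 -w problem-site.pcap host 169.150.221.147
    # Repeat against any known-good site for comparison
    tcpdump -i eth0 -s 0 -w control-site.pcap host example.com
    # Immediately after both connection attempts
    ss -emoian > ss-output.txt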
My guess would be something related to your campus having more than one external connection available.<p>Maybe from the server's point of view the SYN and ACK are coming from distinct addresses and this is tripping it up?<p>I have two internet connections at home and would encounter some strange bugs whenever I used both connections at the same time. I never debugged these cases, but they always disappeared when I just used one connection and left the second as a backup.
I wouldn't be surprised if someone (your Uni) is mistakenly blocking some 169.x.x.x traffic, since 169.254.0.0/16 is used for link-local IPs. Someone put the wrong subnet mask in a firewall rule or ACL someplace.
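Worth noting that 169.150.221.147 sits well outside the link-local /16, in ordinarily routable space. A quick whois makes that obvious (the exact output fields vary by registry, so the grep below is just a convenience):

    # Show which allocation the address actually belongs to;
    # only 169.254.0.0/16 is the RFC 3927 link-local range
    whois 169.150.221.147 | grep -iE 'inetnum|netrange|cidr|netname|org'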
First off, the HTTP site 301s to the HTTPS site, so HTTPS is still the likely trigger.<p>Second, I see that whatever client he's using is specifying a very old TLS 1.0. If it's not MTU (which others have mentioned), then my guess would be a firewall with a policy specifying a minimum TLS version, dropping this connection on the floor.
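Easy enough to test from the affected network with openssl, assuming your build still allows TLS 1.0 (the SNI hostname below is a stand-in; use whatever name actually resolves to that IP):

    # Force an old protocol version, then a modern one; if only the old one
    # hangs, a minimum-TLS-version policy on a middlebox looks more likely
    openssl s_client -connect 169.150.221.147:443 -tls1   -servername example.org </dev/null
    openssl s_client -connect 169.150.221.147:443 -tls1_2 -servername example.org </dev/null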
My guess is that your original SYN did not go to the target, but was redirected somewhere close by. I'd look at the TTL value in the IP header of your first SYN-ACK, and play with such things as traceroute.<p>Such redirection is often done on a specific port basis, so trying to access different ports might produce a different result, such as an RST packet coming back from port 1234 with a different TTL than port 443.<p>There is so much cheating going on with Internet routing that the TTL is usually the first thing I check, to make sure things are what they claim to be.
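A couple of practical ways to poke at that (TCP traceroute needs root on most systems; trace.pcap is just the illustrative capture name from earlier comments):

    # TCP traceroute to the misbehaving port and to an arbitrary other port;
    # a path that ends noticeably earlier on 443 suggests an interception point
    traceroute -T -p 443  169.150.221.147
    traceroute -T -p 1234 169.150.221.147
    # The TTL of the returning SYN/ACK is also visible in a capture
    tshark -r trace.pcap -Y 'ip.src == 169.150.221.147 && tcp.flags.syn == 1' -T fields -e ip.ttl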
Sounds to me as if they have a Palo Alto NGFW at the edge, filtering the traffic. UC Berkeley appears to be running a Palo Alto for at least part of their infrastructure.<p><a href="https://security.berkeley.edu/services/bsecure/bsecure-remote-access-vpn" rel="nofollow">https://security.berkeley.edu/services/bsecure/bsecure-remot...</a>
Wow. What an embarrassing answer by the "Berkeley Student Tech Services"...<p>That is on the same level as, e.g., the customer hotline at a phone company ("did you try turning it off and on again?"). I would have thought that Berkeley, of all universities, has higher standards than that.
The symptoms match my experience with a mid-network firewall/router that is not aware of TCP window scaling and strips out the scaling factor while leaving the window scaling option enabled. See <a href="https://lwn.net/Articles/92727/" rel="nofollow">https://lwn.net/Articles/92727/</a>
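One crude way to test that hypothesis from the client side is to turn window scaling off for a single attempt (Linux sysctl shown; remember to re-enable it, since it hurts throughput, and substitute the real URL):

    # Disable TCP window scaling globally, retry the connection, then restore
    sysctl -w net.ipv4.tcp_window_scaling=0
    curl -v https://the-problem-site.example/
    sysctl -w net.ipv4.tcp_window_scaling=1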
"Adventures with asymmetric routing and firewalls"[1] might provide some useful insite/information[1].<p>[1] : <a href="http://www.growse.com/2020/01/23/adventures-with-asymmetric-routing-and-firewalls.html" rel="nofollow">http://www.growse.com/2020/01/23/adventures-with-asymmetric-...</a>
Maybe it's not a network fault at all - it might be a purposeful action taken by a network device (IPS, web filter, etc.) that is killing the connection based on some rule set.
Lots of good things to investigate already in the thread. I would throw in the potential for an anycast routing issue. TCP is stateful, and if there is asymmetric routing, maybe the packets are coming from one anycast device but the returning packets are routed to a different one.<p>I would suspect some of the other responses first, but if they don't help, this could be a possibility if they are using anycast.
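A rough way to check for that is to compare the forward path from a couple of vantage points (mtr in TCP mode shown here, purely as a sketch; any TCP traceroute would do):

    # TCP-mode mtr toward the problem service from the affected network,
    # report mode, wide output, 20 probes per hop
    mtr -T -P 443 -rwc 20 169.150.221.147
    # Run the same from a host outside campus and compare the last few hops;
    # diverging tails can hint that different edge/anycast instances answer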
Somewhat related anecdote:<p>Some 10 years back I was working for a solar company doing SCADA stuff (monitoring remote power plant equipment, reporting generation metrics, handling grid interconnect stuff, etc.).<p>We had a big room with lots of monitors that looked like a set in a Hollywood film, no doubt inspired by them. You could see all the solar installations around the world that we monitored. The monitoring crew put out a call for engineers, stat, and as I walked into the monitoring room I could see perhaps 1/10th of the power plant icons on the wall were red with "lost communication"; one plant went from green to red right in front of me.<p>This started a shitstorm, with all hands being summoned. Long story short, somebody decided the best way to get an external IP for one of our remote gateways was to use a curl command against a whatismyip.com-type service, but instead of targeting Google (or, you know, a server under our control), it hit some random ISP in Italy. The ISP must have eventually realized they were getting pinged by thousands of devices 24/7, so they decided to silently drop some percentage of incoming requests, and of course the curl call was blocking without a timeout. When the remote gateway's request was dropped, it blocked indefinitely.<p>I skipped a lot in between, but it was definitely a fun firefighting session. It was particularly hampered by a couple of engineers quite high up on the food chain getting led in the wrong direction (as to the root cause) at the beginning and fighting particularly hard against any opposing theories. It was the one time I basically got to drop the "I'm right and I bet my job on it." Fun times.
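The boring lesson, for anyone else shelling out to curl from an embedded gateway: always bound the call. The endpoint below is just a stand-in, but the flags are standard curl options:

    # Never let a health-check/metadata fetch block forever:
    # fail on HTTP errors, cap connect time and total time
    curl -fsS --connect-timeout 5 --max-time 10 https://ip.example.com/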
Are we ruling out content filtering? Any content filter that is going to filter HTTPS without SSL decryption is going to look at the SNI, which is sent in the clear in the ClientHello.
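Easy to check whether the ClientHello (and its SNI) even makes it onto the wire intact, from the same kind of capture others have suggested (field names assume a reasonably recent tshark; trace.pcap is illustrative):

    # Print the server name each TLS ClientHello advertises
    tshark -r trace.pcap -Y 'tls.handshake.type == 1' \
        -T fields -e ip.dst -e tls.handshake.extensions_server_name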
Here is the author's latest update and the revelation of the mystery:<p><a href="https://devnonsense.com/posts/asymmetric-routing-around-the-firewall" rel="nofollow">https://devnonsense.com/posts/asymmetric-routing-around-the-...</a>