I'll go on record as saying I am an ardent <i>hater</i> of U-Verse and AT&T due to personal experience with their service, and would like nothing more than for this to be a purposeful act that results in backlash against that company...<p>... that said, I'm going to fall in the camp of saying this is likely an unintentional bug. If they truly wanted to block 1.1.1.1 (and its backup), doing so via firmware would seem to be the most difficult and unreliable way of doing it. The benefits would also be limited: (a) If the motivation was to avoid losing the ability to spy on their customers via DNS requests, well ... they can still do that. Yes, Cloudflare supports encrypted DNS, but the half of one percent of folks who have it set up wouldn't be worth the effort[0]. (b) If there was some <i>other</i> reason to want customers using their DNS (i.e. redirection to advertising pages when a lookup fails), they could simply rewrite packets (of unencrypted DNS lookups) to send them over to AT&T's infrastructure -- that approach would be far more likely to go unnoticed[1]. (c) There are several <i>other</i>, far more popular and just as well publicized public DNS services that they haven't messed with -- why pick on a new entrant? Why not break 8.8.8.8 or OpenDNS?<p>More likely is the explanation that 1.1.1.1 was being used as a de facto 10.x.x.x address for other purposes. It had a few advantages -- it was far less likely to collide with a customer's internal addressing (being ... <i>not</i> a traditional non-routable address), and until recently it was unlikely to host legitimate services. Or ... it's something else entirely. Firmware bugs are <i>everywhere</i>, and having had their service and the particular brand of modem they're using, I'm not the least bit surprised. I had to root my modem to make my service work reliably[2].
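(Re: point (b) above -- that kind of rewrite is a one-liner at the network edge. A hedged sketch, using a documentation-range IP in place of any real AT&T resolver:)

```shell
# Illustrative only: DNAT plaintext DNS at the gateway so every port-53
# lookup silently lands on the ISP's own resolver. 192.0.2.53 is a
# placeholder from the documentation range, not a real AT&T address.
# Encrypted DNS (DoT/DoH) sails right past rules like these.
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.0.2.53
iptables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to-destination 192.0.2.53
```

The customer's resolver setting still <i>says</i> 1.1.1.1, but the answers come from the ISP -- which is why this would be hidden that extra minute.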
Heck, I worked for a telecom for 17 years, and for the first half of that, the guy who set our network up used 1-10.x.x.x as internal addresses.<p>[0] It's not terribly difficult to do, but few make the effort. I've got an internal DNS server configured (for AD purposes) which forwards to another internal DNS server that sends all DNS requests out to Cloudflare via encrypted DNS. It was a 5-minute change to my internal setup, most of which was the time it took to download the container, reboot the host for testing, and validate everything.<p>[1] It probably would have managed to stay hidden an entire <i>minute</i> longer than this debacle.<p>[2] On their DSL (re-labeled U-Verse despite having nothing to do with their U-Verse TV/Internet -- it's the <i>old</i> DSL, limited to 12Mb down <i>if you're lucky</i>), my modem would randomly display the "Internet is down" page for all requests despite everything being fine. I forget exactly what I had to do to resolve it, but it required hitting their ping page to trigger a buffer overflow, which got me console access so I could run some command. I also wanted to be able to ping the modem remotely (something they disable, with no customer-facing option to correct) to correlate the outages with weather, so as to prove to customer service (...and at least a little to myself) that this bizarre happenstance wasn't all in my head. My next-door neighbors also had the problem, so I suspected something in the wiring (expansion/contraction) up the street, but it was hard to track down <i>where</i> because all but two households on that street (including us) used those homes as summer vacation homes and were rarely there in the winter -- many didn't have service, and those who did were unlikely to be around when the weather hit about 40 degrees, so AT&T wasn't getting outage reports frequently enough to do anything about it. Two years ago, they sent a truck, took everyone down, and re-did a pole 8 houses down.
Since then, the problem hasn't happened.
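(For anyone curious what the setup in [0] looks like: I'm not saying which software I used, but as one illustration, an Unbound resolver forwarding everything to Cloudflare over DNS-over-TLS is about this much config:)

```
# unbound.conf -- illustrative sketch only; paths/software are assumptions.
server:
    # CA bundle location varies by distro (this is the Debian/Ubuntu path)
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."                      # forward all queries
    forward-tls-upstream: yes      # DNS-over-TLS, port 853
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
```

Point the AD-facing DNS server's forwarders at the box running this, and every lookup leaves the network encrypted.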