Maybe for loosely coupled systems. It's unavoidable in tightly coupled systems, because it's a convenient way to do things unless you already have elaborate HA infrastructure and protocols in place.<p>For example, if you offer an "entrypoint" that you can guarantee (and technically engineer) to be stable, then use longish TTLs. Anycast IPs are the extreme case, but in between there are many useful ways of exploiting longish, but not too long, TTLs.<p>On the other hand, if you implement failover in a locally redundant system and want to exploit DNS so you don't have to manage additional technology to make the "entrypoint" HA (VRRP, other IP movements, ...), low TTLs are nice. AWS, I believe, uses 5s TTLs on ElastiCache nodes' primary DNS names.<p>Finally, 15m is the longest I'm comfortable with. Any longer, and ANY mistake can easily leave you in a world of hurt. It's no fun sitting out a DNS mistake propagating around the world while the fix lags behind.<p>And this is only a view on "respectable TTL" values. DNS services like Google's public DNS probably ignore some or all TTLs on the records they pull and refresh them as fast as possible anyway, at least in my observation. In that sense, I doubt that most of the internet is still using "respectable" TTLs --- I suspect most systems will RACE to get new data ASAP.
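<p>(For what it's worth, a quick way to see what TTL a resolver is actually handing back: a minimal sketch with dnspython, assuming it's installed; "example.com" is just a stand-in for your own entrypoint name.)

    # Minimal sketch using dnspython: ask the configured resolver for an A record
    # and print the TTL it returns. Run it twice a few seconds apart to see
    # whether the TTL counts down (cached) or gets refreshed right away.
    import dns.resolver

    name = "example.com"  # stand-in; use your own "entrypoint" name here
    answer = dns.resolver.resolve(name, "A")
    print(answer.rrset.ttl, [rdata.address for rdata in answer])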