Lots of other comments have torn this article apart (and justifiably so), but I still feel the need to pile on.

In their docs, Loggly gives out only one API endpoint: logs-01.loggly.com. It is referenced as the endpoint for HTTP, HTTPS, syslog, and syslog TLS, and these seem to be the only methods available for sending log data to them.

There is the obvious problem that a hostname with a 60s TTL cannot redirect traffic instantly in the event of a server failure: every resolver that cached the record keeps sending packets to the dead IP until the TTL expires. Even if the returned IP address is an Elastic IP, it takes a substantial amount of time to move it to another instance in AWS.

I don't know why you would use the same service hostname for all of these endpoints. Separate names for each protocol, even if they all pointed to the same pool of hosts, would at least give some flexibility in the future, when they have enough traffic to get desperate about capacity. I would also think they would want to segregate native syslog from HTTP traffic, since I presume the two are handled by different processes on the backend.

It's also curious that they chose to return only one A record (a quick way to check this yourself is sketched at the end of this comment). DNS round-robin is a poor substitute for real load balancing, but it's better than nothing. With multiple A records there is at least a chance that some of their traffic will land on other servers, rather than all of it potentially piling onto one as it does now.

While they made no claims about using Route 53 for its geo DNS capabilities, I still found it amusing that I was sent to a US East IP from California. Not that it's super critical that my log lines get delivered quickly, but shortening the path of an insecure, unreliable transport does improve durability, since fewer hops mean fewer chances to drop a packet. I would never ship syslog out to some host on the Internet in the first place; shipping it to a host 16 hops away is even more ludicrous.

I think their article says a lot more about how poorly ELBs function once you exceed the low traffic levels they are seemingly designed for than about how well Route 53 works (and it is a decent static DNS service). The inability to robustly direct incoming traffic is the Achilles heel of AWS.
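
For anyone who wants to verify the single-A-record and TTL claims themselves, here is a minimal sketch in Python, assuming the dnspython package is installed; the hostname is the one from Loggly's docs, and everything else is just illustrative:

    # Inspect how many A records a hostname returns and the record TTL,
    # the two numbers that bound how quickly clients can fail over or
    # spread load across servers.
    import dns.resolver

    def inspect_endpoint(hostname):
        answer = dns.resolver.resolve(hostname, "A")
        ips = [r.address for r in answer]
        print("%s: %d A record(s), TTL=%ds"
              % (hostname, len(ips), answer.rrset.ttl))
        for ip in ips:
            print("  " + ip)
        # With a single A record, every resolver that cached the answer
        # keeps sending to that one IP until the TTL expires; round-robin
        # only helps when multiple records are returned.

    inspect_endpoint("logs-01.loggly.com")

(dns.resolver.resolve is the dnspython 2.x name; 1.x called it query. A plain `dig logs-01.loggly.com A` shows the same information.)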