[Full disclosure: I've worked on Amazon Route 53]

It's always neat to see data collection like this, but unfortunately the average speed to the authoritative name servers isn't a very meaningful measurement. Real-world resolvers bias heavily towards the fastest name server for your zone, and they are so latency-sensitive that they'll do things like issue concurrent queries to several name servers at once.

The upshot is that what really matters is the latency to the closest name server, or at worst the latency to the third-fastest server for the rare bootstrapping cases. BIND, by far the most common resolver, will issue up to three concurrent queries to different name servers as part of its SRTT algorithm. The next most common resolvers (Unbound, OpenDNS, and Google Public DNS) perform pre-fetching, so those latencies don't contribute to the user experience except for extreme outlier queries.

Some large DNS providers design to this behaviour and deliberately increase the average distance to their DNS servers by operating the name servers for each domain in different data centers. That gives routing and path diversity for the DNS queries and responses. Since network path diversity increases with distance, this works best when you include a location or two that are quite far away, which increases the average latency to those servers, but thanks to resolver behaviour doesn't do much to the user experience.

A write-up of Route 53's consideration of the trade-offs is here: http://www.awsarchitectureblog.com/2014/05/a-case-study-in-global-fault-isolation.html (there's also a video about the role this plays in withstanding DDoS attacks: https://www.youtube.com/watch?v=V7vTPlV8P3U around the 10-minute mark).

Where the average latencies are low, all of the name servers are in close proximity to the measurement point, and I would wager that the network path diversity is probably quite low. A small number of link failures or DDoS/congestion events, maybe even one, might make all of the servers unreachable.

A more meaningful measurement of the speed itself is to perform regular DNS resolutions using real-world DNS resolvers spread out across your users. In-browser tests like Google Analytics go a long way here, and it's fairly easy to A/B test different providers. The differences tend to be very small; caching dominates, as others here have mentioned.

Apologies if I seemed to rain on dnsperf's parade here; it's a neat visualization and measuring this stuff is tough. It's always good to see someone take an interest in measuring DNS!
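
To make the server-selection point above a bit more concrete, here's a rough sketch of the kind of latency-biased selection a resolver does. This is not BIND's actual implementation; the class name, decay factor, and probe probability are all made up for illustration:

    import random

    # Hypothetical illustration of SRTT-style name server selection.
    # Not BIND's real algorithm; constants and structure are invented.
    class SrttSelector:
        def __init__(self, nameservers, initial_rtt_ms=50.0):
            # Start every server with the same optimistic estimate.
            self.srtt = {ns: initial_rtt_ms for ns in nameservers}

        def pick(self, probe_probability=0.05):
            # Mostly query the fastest-known server, but occasionally
            # probe another one so stale estimates get refreshed.
            if random.random() < probe_probability:
                return random.choice(list(self.srtt))
            return min(self.srtt, key=self.srtt.get)

        def record(self, ns, measured_rtt_ms, alpha=0.3):
            # Exponentially weighted moving average of observed RTTs.
            self.srtt[ns] = (1 - alpha) * self.srtt[ns] + alpha * measured_rtt_ms

    selector = SrttSelector(["ns1.example.com", "ns2.example.com",
                             "ns3.example.com", "ns4.example.com"])
    selector.record("ns2.example.com", 12.0)   # fast, nearby server
    selector.record("ns4.example.com", 180.0)  # far-away server
    print(selector.pick())  # almost always ns2.example.com

The far-away server barely ever gets queried once its estimate settles, which is why adding a distant location costs the resolver population very little.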
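
And for the measurement side, a minimal sketch of timing full resolutions through recursive resolvers, assuming dnspython 2.x; the resolver IPs and query name here are placeholders, and real measurements should come from vantage points near your users (e.g. in-browser beacons) rather than a single machine:

    import time
    import statistics
    import dns.resolver  # pip install dnspython

    # Placeholder recursive resolvers and test name; swap in whatever
    # your users actually use and the zone you care about.
    RESOLVERS = {"google": "8.8.8.8", "cloudflare": "1.1.1.1"}
    QNAME = "www.example.com"

    def timed_lookup(server_ip, qname):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server_ip]
        r.lifetime = 2.0
        start = time.monotonic()
        r.resolve(qname, "A")
        return (time.monotonic() - start) * 1000.0  # milliseconds

    for name, ip in RESOLVERS.items():
        samples = [timed_lookup(ip, QNAME) for _ in range(10)]
        print(f"{name}: median {statistics.median(samples):.1f} ms "
              f"(min {min(samples):.1f}, max {max(samples):.1f})")

Note that repeated lookups like this will mostly hit the resolver's cache after the first query, which is exactly the point: cached resolution time is what users actually experience.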