<i>The next steps towards standardization began when Google, Yahoo, and Microsoft came together to define and support the sitemap protocol in 2006. Then in 2007, they announced that all three of them would support the Sitemap directive in robots.txt files. And yes, that important piece of internet history from the blog of a formerly $125 billion company now only exists because it was archived by Archive.org.</i><p>The Internet Archive (archive.org) is currently running its end-of-year donation drive; if you value the work they do, now is a good time to donate: <a href="https://archive.org/donate/" rel="nofollow">https://archive.org/donate/</a><p>(and on the topic of robots.txt, it sounds like they're moving in the direction of no longer letting robots.txt files indiscriminately block access to valuable archival materials: <a href="https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/" rel="nofollow">https://blog.archive.org/2017/04/17/robots-txt-meant-for-sea...</a> )
I also wrote up an analysis of the top 1M robots.txt files:
<a href="http://www.benfrederickson.com/robots-txt-analysis/" rel="nofollow">http://www.benfrederickson.com/robots-txt-analysis/</a><p>I ended up analyzing very different things from this article though, so this article was still pretty interesting to me.
<p><pre><code> “traditionally used for
vague attempts at humor
which signal to twenty-something
white males that this is
a “cool” place to work.”</code></pre>
WTF with the casual sexism/ageism?
"The web servers might not have cared about the traffic, but it turns out that you can only look up domains so quickly before a DNS server starts to question your intentions!"<p>s/DNS server/third party open resolver/<p>IME, querying an authoritative server for the desired name triggers no such limitations.<p>One does not even need to use DNS to get the IP addresses for those authoritative servers, if the zone file is made available for free to the public as most are, under the ICANN rules.<p>I have thought about building a database of robots.txt many times. IMO, robots.txt has an important role besides thwarting "bots". It can thwart humans as well. It can be used to make entire websites "disappear" from the Internet Archive Wayback Machine.<p>Perhaps others are making mirrors of the IA.<p>However, I have thought it could be useful to monitor the robots.txt of important websites on a more frequent basis than IA, in order to (if possible) preemptively archive the IA's collections if robots.txt changes are ever detected that would effectively "erase" them from the IA.<p>Perhaps the greatest thing about robots.txt is that it is "plain text". This "rule" <i>seems</i> to be ubiquitously honoured. Did the author ever find any html, css, javascript or other surprises in any robots.txt file?
The history presented in this post was very interesting, but the analysis ended up disappointing. The article ends just after they manage to narrow their sample of robots.txt files to exclude duplicates and derivatives; they don't even present any summary statistics for this filtered sample.
Honestly, I'm kind of surprised that Turnitin's bot listens to robots.txt, or that the 'anti-copyright-infringement' bots do the same. It seems to provide a very simple way for a cheating site to thwart their entire 'system'.<p>But hey, I guess it's one of those cases where the law and basic ethics clash a bit: with certain laws saying 'unauthorised' access to a server is illegal, ignoring robots.txt would leave them under fire for that instead.
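For illustration, here is roughly what "listening to robots.txt" looks like on the crawler side, as a minimal sketch using Python's standard-library urllib.robotparser (the domain and path are made up; "TurnitinBot" is how Turnitin's crawler identifies itself, though the exact string any given bot sends is an assumption here):<p><pre><code>import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the file

# A "User-agent: TurnitinBot" / "Disallow: /" pair in robots.txt makes
# this False: one line in a plain-text file is enough to switch the
# whole system off for a site.
if rp.can_fetch("TurnitinBot", "https://example.com/essays/paper.html"):
    print("allowed to crawl")
else:
    print("disallowed by robots.txt")
</code></pre>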