Doesn't this passage suggest that Google ignores robots.txt in its own cross-comparisons of search relevancy:<p><i>Google said in October that it found statistical evidence that Bing suddenly became more Google-like. More listings in the first page of results of both search engines seemed to match, as did more of the number one results.</i><p>How would you get statistically significant results for such things, over time, without constant automated probe queries against Bing?<p>I think such probes are both legal and wise... but Google should drop the pretense that robots.txt is a sacred barrier across which no analysis can be done, no matter how indirect or for what purpose.<p>Also, I'd wager at some time in its history – if not constantly even today – Google has shown panels of users results from Google and its competitors in various combinations – side-by-side, with and without branding, intermixed randomly – and used their reactions to detect areas where the competitors are doing well, and Google could improve.<p>Further, either human eyes or algorithms then tried to determine adjustments to close any gaps in user satisfaction. The net effect of any such process is – surprise, surprise! – leveraging strengths of <i>other</i> engines to patch weaknesses in Google. This is normal, expected behavior by any serious search competitor.
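For concreteness, here is a minimal sketch of the kind of cross-engine comparison the quoted statement implies: given first-page result lists already collected from two engines for the same probe queries, compute the mean first-page overlap and the rate of matching number-one results. The fetch step is omitted, and all names and toy data below are hypothetical; the point is only that these statistics presuppose repeated automated queries against the other engine.

```python
def first_page_overlap(results_a, results_b):
    """Fraction of engine A's first-page URLs also on engine B's first page."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a) if a else 0.0

def compare_engines(serps_a, serps_b):
    """serps_*: dict mapping query -> ordered list of first-page result URLs."""
    queries = serps_a.keys() & serps_b.keys()
    overlaps = [first_page_overlap(serps_a[q], serps_b[q]) for q in queries]
    # Count queries where both engines return the same #1 result.
    top1_matches = sum(
        1 for q in queries
        if serps_a[q] and serps_b[q] and serps_a[q][0] == serps_b[q][0]
    )
    return {
        "mean_first_page_overlap": sum(overlaps) / len(overlaps),
        "top1_agreement": top1_matches / len(queries),
    }

# Toy data standing in for probe-query results (entirely made up).
google = {"q1": ["a.com", "b.com", "c.com"], "q2": ["x.com", "y.com"]}
bing   = {"q1": ["a.com", "c.com", "d.com"], "q2": ["x.com", "z.com"]}
stats = compare_engines(google, bing)
```

Tracking these two numbers over time, with enough queries for statistical significance, is exactly the sort of analysis that would show one engine "suddenly becoming more like" the other.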