This gives the illusion of being in control, but if enough people block the bot, they'll just scrape differently (if they don't already). Too much money is at stake: more than whatever fine they might face if they get caught and can't settle out of court. And they may well figure it will be someone else's problem by then.

It's more pragmatic to assume that any data that can be accessed one way or another will be scraped, because the interests of content authors and scrapers aren't aligned.

Robots.txt, on the other hand, benefited both search engines and content authors: it signaled data that wasn't useful to show in search results, so search engines had an incentive to follow its rules.
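
And compliance is entirely opt-in on the crawler's side, which is exactly why incentives matter. A minimal sketch with Python's standard urllib.robotparser (the bot name and URLs are hypothetical):

    import urllib.robotparser

    # A cooperative crawler volunteers to check robots.txt before fetching.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # hypothetical site
    rp.read()

    # can_fetch() is purely advisory: the server returns the page either
    # way, so honoring the answer is the crawler's choice.
    if rp.can_fetch("ExampleBot", "https://example.com/private/page"):
        print("allowed: fetch it")
    else:
        print("disallowed: a compliant crawler skips it")

Nothing in the protocol enforces the Disallow rules; the file only works when the crawler's incentives already point the same way.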