I've been inadvertently working on this topic and I'd like to share some findings.<p>* Do not confuse bots with DDoS. While bot traffic may end up overwhelming your server, your DDoS SaaS will not stop that traffic unless you have some kind of bot protection enabled, for example the product described in this post.<p>* A lot of bots announce themselves via user agents; some don't.<p>* If you're running an ecom shop with a lot of product pages, expect a large portion of traffic to be bots and scrapers. In our case it was up to 50%, which was surprising.<p>* Some bots accept cookies, and these skew your product analytics.<p>* We enabled automatic bot protection, and a lot of our third-party integrations ended up being marked as bots and their traffic was blocked. We eventually turned that off.<p>* (EDIT) Any sophisticated self-implemented bot protection isn't worth the effort for most companies out there. But I have to admit, it's very exciting to think about all the ways to block bots.<p>What's our current status? We've enabled monitoring to keep a lookout for DDoS attempts, but we're taking the hit on bot traffic. The data on our website isn't really private info, except maybe pricing, and we're really unsure how to think about the new AI bots scraping this information. ChatGPT already gives a summary of what our company does. We don't know if that's a good thing or not. I'd be happy to hear anyone's thoughts on how to think about this topic.
It says "Declare your independence", but your independence is exactly what you stand to lose if you channel your traffic through Cloudflare. You already have your independence; don't give it up to those who appeal to desperation to fool you into believing the opposite of what's true.
Does Google effectively get a pass, because they (can) use the same bot to index websites for search and to scrape data for AI model training at the same time?
I find it slightly ironic that they're only able to do this effectively because they've been able to train their own detection model on traffic, mostly from users that have never agreed to anything.<p>I don't have strong opinions on this either way really, I just found that a bit funny.
There are so many things sites need to protect against these days that it's making independent self-hosting quite annoying. As bots get better at hiding, only companies with huge scale like Cloudflare will be able to identify and block them. DDoS and bot operators are unintentionally creating a monopoly.
For those not using Cloudflare but who have access to web server config files and want to block AI bots, I put together a set of prebuilt configs[0] (for Apache, Nginx, Lighttpd, and Caddy) that will block most AI bots from scraping content. The configs are built on top of public data sources[1] with various adjustments.<p>[0] <a href="https://github.com/anthmn/ai-bot-blocker">https://github.com/anthmn/ai-bot-blocker</a><p>[1] <a href="https://darkvisitors.com/" rel="nofollow">https://darkvisitors.com/</a>
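To give a flavor of what configs like these do, here's a minimal nginx sketch of the user-agent-matching approach. The bot names shown are a handful of real, self-announcing AI crawler user agents; an actual blocklist would be much longer (see darkvisitors.com for a maintained set), and this does nothing against bots that spoof a browser user agent:

```nginx
# Flag requests whose User-Agent matches a known AI crawler.
# Example names only -- not an exhaustive list.
map $http_user_agent $is_ai_bot {
    default        0;
    ~*GPTBot       1;   # OpenAI crawler
    ~*ClaudeBot    1;   # Anthropic crawler
    ~*CCBot        1;   # Common Crawl
    ~*Bytespider   1;   # ByteDance crawler
}

server {
    listen 80;
    server_name example.com;
    root /var/www/html;

    # Reject flagged crawlers before serving any content.
    if ($is_ai_bot) {
        return 403;
    }
}
```

The `map` block has to live at `http` level, outside the `server` block, which is why prebuilt snippets like the linked repo's are usually split into an include file plus a one-line `if` per vhost.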
It'll be so interesting to see what sorts of "biases" future AI models will manifest when they're only trained on a fraction of the web. All any group with an agenda has to do is make their content available for training, with the knowledge/hope that many of those with balancing content will have it blocked. And then there will be increased complaints re said "biases" by the same ones who endorse blocking, without a thought that the issue was amplified by said blocking. And of course use cases for AI will continue to broaden, in most cases without a care for those spouting about "biases". It'll be a wonderful world.