
LWN sluggish due to DDoS onslaughts from AI-scraper bots

39 points by sohkamyung 4 months ago

5 comments

LinuxBender 4 months ago
User-agent aside, there are usually small details bots leave out, unless of course they are using headless Chrome. Most bots can't do HTTP/2.0, yet all common browsers support it. Most bots will not send the cors, no-cors, or navigate values in the Sec-Fetch-Mode header, whereas browsers do. Some bots do not send an Accept-Language header. Those are just a few things one can look for and handle in simple web server ACLs. Some bots do not support HTTP keep-alive, though dropping connections that lack keep-alive can knock out some poorly behaved middleboxes.

At the TCP layer, some bots do not set MSS options or use very strange values. This can produce false positives, so I just don't publish IPv6 records for my web servers and then limit IPv4 to an MSS range of 1280 to 1460, which knocks out many bots.

There is always the possibility of false positives, but they can be logged, reviewed, and written off as acceptable losses should the load on the servers get too high. Another mitigating control is to analyze previous logs and build maps that exclude people who post regularly or have logins on the site, assuming none of them are part of the problem. If a registered user is part of the problem, give them an error page after {n} requests.
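The header tells described here translate almost directly into a request filter. Below is a minimal sketch, assuming a Python WSGI stack; the header set, scoring threshold, and 403 response are illustrative rather than anyone's production configuration, and the TCP-layer MSS limit would live in firewall rules, not application code.

```python
# Minimal sketch of the header heuristics above, as a WSGI middleware.
# Threshold and header set are illustrative assumptions.

BROWSER_SEC_FETCH_MODES = {"navigate", "cors", "no-cors", "same-origin"}

def looks_like_bot(environ):
    """Count the small details scrapers tend to leave out."""
    tells = 0
    # Browsers send Sec-Fetch-Mode on every request; many bots omit it.
    if environ.get("HTTP_SEC_FETCH_MODE") not in BROWSER_SEC_FETCH_MODES:
        tells += 1
    # Browsers always send Accept-Language; some bots do not.
    if not environ.get("HTTP_ACCEPT_LANGUAGE"):
        tells += 1
    # All common browsers speak HTTP/2; many bot libraries still cannot.
    # (Only meaningful if the front-end server passes the real protocol
    # through instead of terminating HTTP/2 itself.)
    if not environ.get("SERVER_PROTOCOL", "").startswith("HTTP/2"):
        tells += 1
    return tells >= 2

class BotFilter:
    """Reject requests that trip two or more of the heuristics above."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if looks_like_bot(environ):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden\n"]
        return self.app(environ, start_response)
```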
iamwpj 4 months ago
We have been suffering this. It's easy enough to weather high traffic loads on pages, but our issue is targeted applications. Things like website search bars are getting targeted with functional searches for sub-pages, content by labels, etc. It causes the web server to run out of handles for the pending database lookups.

A real mess. The problem is that these searches are valid, and the page will return a 200 result with "Nothing in that search found!" types of messages. Why would the crawler ever stop? It's going to work and work until we all die, and there will still be another epoch of search-term combinations left to try.

We solve problems like this all the time, but we're hitting another level and really exposing some issues. Ideally our WAF can start to kick the traffic. It's good to see other people having this issue. We first started addressing this last fall -- around November.
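One way to shed this kind of search load before it exhausts database handles is to throttle the endpoint itself. A rough sketch, assuming Flask and simple per-IP counting; the window and limit are made-up numbers, and in practice this belongs in a WAF or reverse proxy rather than the application:

```python
# Sketch: per-client throttling for an expensive search endpoint.
# WINDOW_SECONDS and MAX_SEARCHES_PER_WINDOW are illustrative guesses.
import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)
WINDOW_SECONDS = 60
MAX_SEARCHES_PER_WINDOW = 10
recent = defaultdict(deque)  # client IP -> recent request timestamps

@app.route("/search")
def search():
    now = time.monotonic()
    hits = recent[request.remote_addr]
    # Expire timestamps that have aged out of the window.
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    # Reject before touching the database, so pending lookups can't pile up.
    if len(hits) >= MAX_SEARCHES_PER_WINDOW:
        abort(429)
    hits.append(now)
    return f"results for {request.args.get('q', '')!r}"  # stand-in for the real search
```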
teeray 4 months ago
We need some kind of fail2ban for AI scrapers. Fingerprint them, then share the fingerprint databases via torrent or something.
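A back-of-the-envelope sketch of what such a shareable fingerprint could look like: hash a few stable request traits into a digest that sites could publish and compare, fail2ban-style. The chosen fields are assumptions, and real-world fingerprints (JA3/JA4 at the TLS layer, for example) go much deeper than headers:

```python
# Sketch: reduce a client's stable traits to a digest that can be
# shared as a blocklist entry. Field choice is an assumption.
import hashlib

def request_fingerprint(headers: dict[str, str]) -> str:
    """Digest a request's stable characteristics."""
    parts = [
        ",".join(sorted(headers)),           # which headers are present at all
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
        headers.get("User-Agent", ""),
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

# A shared blocklist is then just a set of hashes, easy to distribute
# via torrent or any other channel.
blocklist = {request_fingerprint({"Accept-Encoding": "gzip"})}
```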
jimrandomh 4 months ago
We see similar issues on LessWrong. We're constantly being hit by bots that are egregiously badly behaved. Common behaviors include making far more requests per second than our entire userbase combined, distributing those requests between many IPs in order to bypass the rate limit on our firewall, and making each request with a unique user-agent string randomly drawn from a big list of user agents, to prevent blocking them that way. They ignore robots.txt. Other than the IP address, there's no way to identify them or find an abuse contact.
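One way to surface this rotation pattern is to count distinct User-Agent strings per network prefix, since a real office or household behind a single /24 presents only a handful. A sketch under those assumptions; the prefix size and threshold are guesses, and IPv6 would need a wider prefix such as a /48:

```python
# Sketch: flag prefixes whose User-Agent diversity exceeds anything a
# real LAN produces. Threshold of 50 is an illustrative guess.
import ipaddress
from collections import defaultdict

uas_by_prefix = defaultdict(set)  # network prefix -> distinct UA strings seen

def record(ip: str, user_agent: str) -> None:
    prefix = ipaddress.ip_network(f"{ip}/24", strict=False)  # IPv4 assumed
    uas_by_prefix[prefix].add(user_agent)

def suspicious_prefixes(min_distinct_uas: int = 50):
    """Prefixes presenting implausibly many distinct user agents."""
    return [p for p, uas in uas_by_prefix.items() if len(uas) >= min_distinct_uas]
```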
halJordan 4 months ago
It's inappropriate to usurp the language of cyberattacks just to denigrate certain traffic. It may be true that this category of traffic is too voluminous for their current capacity to handle, resulting in bad service.

However, cyberattacks, especially distributed ones, require intentionality, and they require coordination.