
OpenAI crawler burning money for nothing

12 points by babuskov 4 months ago
I have a bunch of blog posts, with URLs like these:

    https://mywebsite/1-post-title
    https://mywebsite/2-post-title-second
    https://mywebsite/3-post-title-third
    https://mywebsite/4-etc

For some reason, it tries every combination of numbers, so the requests look like this:

    https://mywebsite/1-post-title/2-post-title-second
    https://mywebsite/1-post-title/3-post-title-third

etc.

Since the blog engine simply discards everything after the number (1, 2, 3, ...) and serves the content for blog post #1, #2, #3, ..., the web server returns a valid page. However, all those pages are the same.

The main problem here is that no page on the website contains compound links like https://mywebsite/1-post-title/2-post-title-second, so it's clearly a bug in the crawler.

Maybe OpenAI is using AI-written code for their crawler, because it has bugs so dumb you cannot believe a human would write them.

It will make 90,000 requests (300 × 300 URL combinations) to crawl my small blog of 300 posts. I cannot imagine what happens to larger websites with thousands of blog posts.
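
To illustrate why the server answers these compound URLs at all, here is a minimal sketch of the routing behavior described above, assuming a Flask-style catch-all route (the actual blog engine is unknown): the handler keys on the leading number and ignores the rest of the path, so /1-post-title/2-post-title-second returns the same valid page as /1-post-title.

    import re
    from flask import Flask, abort

    app = Flask(__name__)

    POSTS = {1: "First post", 2: "Second post", 3: "Third post"}  # placeholder content

    @app.route("/<path:slug>")            # <path:...> also matches slugs containing slashes
    def show_post(slug):
        match = re.match(r"(\d+)", slug)  # read the leading post number
        if not match or int(match.group(1)) not in POSTS:
            abort(404)
        # Everything after the number is discarded, so any compound URL
        # starting with a valid post number returns a 200 page.
        return POSTS[int(match.group(1))]

One defensive fix is to compare the requested slug against the post's canonical slug and return a 404, or a 301 redirect to the canonical URL, when they differ, so a misbehaving crawler gets an unambiguous signal instead of endless duplicate pages.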

6 comments

readyplayernull 4 months ago
They have decided to set the web on fire: https://news.ycombinator.com/item?id=42660377
markus_zhang 4 months ago
I wonder if one could build maze webpages to trap these AI crawlers: a human visitor never sees them, but once a client is identified as a crawler, the server dynamically generates page after page of garbage. The server doesn't need to store any of that garbage, but the crawler does.
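
As a minimal sketch of that idea (a crawler "tarpit"), assuming a Flask app and naive User-Agent matching (real bot identification is harder than this): pages are generated deterministically from a seed, so the server stores nothing, while every page links to five more.

    import random
    from flask import Flask, request

    app = Flask(__name__)

    BOT_AGENTS = ("GPTBot", "CCBot", "Bytespider")  # illustrative list only

    def looks_like_crawler() -> bool:
        ua = request.headers.get("User-Agent", "")
        return any(bot in ua for bot in BOT_AGENTS)

    @app.route("/maze/<int:seed>")
    def maze(seed: int):
        if not looks_like_crawler():
            return "Not found", 404       # human visitors are never trapped
        rng = random.Random(seed)         # deterministic output, nothing stored
        words = " ".join(rng.choice(["lorem", "ipsum", "dolor", "sit"]) for _ in range(200))
        links = " ".join(f'<a href="/maze/{rng.randrange(10**9)}">more</a>' for _ in range(5))
        return f"<html><body><p>{words}</p> {links}</body></html>"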
codemusings 4 months ago
For what it's worth: they do honor the robots.txt file. I had the same problem with a client's CMS, and denying all AI crawler user agents did the trick.

It's clear they've all gone mad. The traffic spiked 400% overnight and made the CMS unresponsive a few times a day.
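
For reference, a robots.txt along these lines is one way to deny AI crawler user agents. GPTBot and ChatGPT-User are OpenAI's publicly documented agents and CCBot is Common Crawl's; treat the list as a starting point and check each vendor's documentation for current names:

    User-agent: GPTBot
    Disallow: /

    User-agent: ChatGPT-User
    Disallow: /

    User-agent: CCBot
    Disallow: /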
gbertb 4 months ago
How are the links structured in the a href tags? Are they relative or absolute? If relative, that's probably why.
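
For context on why relative links would produce exactly this crawl pattern: a relative href is resolved against the URL of the page it appears on, so, assuming the posts are served under trailing-slash URLs, a relative link to post #2 found on post #1's page yields the compound URLs from the original report. Python's standard library shows the resolution:

    >>> from urllib.parse import urljoin
    >>> # relative href found on the first post's page
    >>> urljoin("https://mywebsite/1-post-title/", "2-post-title-second")
    'https://mywebsite/1-post-title/2-post-title-second'
    >>> # an absolute path avoids the ambiguity
    >>> urljoin("https://mywebsite/1-post-title/", "/2-post-title-second")
    'https://mywebsite/2-post-title-second'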
thiago_fm 4 months ago
They believe they can take market share from Google, which currently has a market cap of over $2T. With that much money on the line, they don't care how hard they hammer the internet or how much hate and how many lawsuits they attract.

The issue is that they don't understand that the search business took decades to develop into what it is today, and that it is only so profitable for Google because they hold a monopoly, one they keep because the US is an oligarchy.

What OpenAI is building has proven easy (and expensive) to replicate, with many competitors posting similar results even though they started later.

Whatever new iteration of the search business they develop will likely be less profitable, but nobody cares as long as billions are being invested in this space.

Not to mention their AGI goals, when you can't even reliably trust their software to answer basic questions.

So we are currently in the internet-of-trash age: trash content being generated, trash bots hammering your tiny website, and trash ambitions.

I doubt this CAPEX will go on for more than two years. Once the bubble bursts, companies will start reviewing what they built, and they will fix the crawler bug you've just mentioned.
101008 4 months ago
Cloudflare should provide a service (paid or free) to block AI crawlers.