
Avoiding bot detection: How to scrape the web without getting blocked?

586 points, by proszkinasenne2, over 3 years ago

35 comments

bsamuels, over 3 years ago
> I need to make a general remark to people who are evaluating (and/or) planning to introduce anti-bot software on their websites. Anti-bot software is nonsense. It's snake oil sold to people without technical knowledge for heavy bucks.

If this guy got to experience how systemically bad the credential stuffing problem is, he'd probably take down the whole repository.

None of these anti-bot providers give a shit about invading your privacy, tracking your every movement, or whatever other power fantasy can be imagined. Nobody pays those vendors $10m/year to frustrate web crawler enthusiasts; they do it to stop credential stuffing.
ChuckMcM, over 3 years ago
I am always amazed when otherwise intelligent people assert without data that the marginal cost of serving web traffic to scrapers/bots is zero. It is kind of like people who say "Why don't they put more fuel in the rocket so it can get all the way into orbit with just one stage?"

It sounds great, but it is a completely ignorant thing to say.
ufmace, over 3 years ago
What I really enjoy about this thread is all of the completely different perspectives. Lots of people doing anti-abuse research bemoaning that this stuff exists, and lots of people working against what are, from their perspective, ham-handed anti-abuse measures blocking legitimate useful automation, trading tips on how to do it better. I guess we don't see much of the other sides of those: people doing actual black-hat work probably don't post about it on public forums, and most of the over-broad anti-abuse is probably a side effect of taking some anti-abuse tech and blindly applying it to the whole site just because that's simpler; often no technical people are really involved at all.
marginalia_nu, over 3 years ago
If someone is signalling to you that they do not want your bot on their site, then maybe respect that? Trying to circumvent it is, besides being legally questionable, a serious pain in the ass for the site owner, and it makes websites more prone to attempt to block bots in general.

Also, in my experience, most websites that block your bot do so because your bot is too aggressive, or because you are fetching some expensive resource that bots in general refuse to lay off. Bots with seconds between their requests rarely get blocked, even by CDNs.
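A minimal sketch of the well-behaved crawling this comment describes: an honest User-Agent and several seconds between requests. The URLs, contact address, and delay are illustrative.

```python
import time

import requests

# Hypothetical list of pages to fetch; a real crawler would discover these.
urls = [
    "https://example.com/page/1",
    "https://example.com/page/2",
]

session = requests.Session()
# Identify the bot honestly so the site owner can tell it apart from abuse.
session.headers["User-Agent"] = "example-crawler/0.1 (+https://example.com/about-bot)"

for url in urls:
    response = session.get(url, timeout=30)
    print(url, response.status_code)
    # Several seconds between requests: slow, well-behaved bots are rarely
    # blocked even by CDNs, as the comment above notes.
    time.sleep(5)
```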
al2o3cr, over 3 years ago

    You use this software at your own risk. Some of them contain malwares just fyi

LOL, why post LINKS to them then? Flat-out irresponsible...

    you build a tool to automate social media accounts to manage ads more efficiently

If by "manage" you mean "commit click fraud".
abadger9, over 3 years ago
I'm a lead engineer on the search team of a publicly traded company whose bread and butter is this domain. I was curious about this list; candidly, it misses the mark. The tech mentioned in this blog is what you might get if you hired a competent consultant to build out a service without having domain knowledge. In my experience, what's being used on the bleeding edge is two steps ahead of this.
curun1r, over 3 years ago
There's one technique that can be very useful in some circumstances that isn't mentioned. Put simply, some sites try to block all bots except those from the major search engines. They don't want their content scraped, but they want the traffic that comes from search. In those cases, it's often possible to scrape the search engines instead, using specialized queries designed to get the content you want into the blurb for each search result.

This kind of indirect scraping can be useful for getting almost all the information you want from sites like LinkedIn that do aggressive scraping detection.
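A rough sketch of that indirect approach: run a site-restricted query against a search engine and read only the result snippets. The endpoint and CSS selectors below are placeholders rather than any real engine's markup, and most engines rate-limit automated queries themselves.

```python
import requests
from bs4 import BeautifulSoup

# Ask the search engine for pages from the target site and read the snippets
# instead of fetching the pages themselves.  The endpoint and CSS classes
# below are placeholders; every engine uses its own markup.
SEARCH_URL = "https://search.example.com/search"

resp = requests.get(
    SEARCH_URL,
    params={"q": 'site:linkedin.com/in "data engineer" "Berlin"'},
    headers={"User-Agent": "Mozilla/5.0"},
    timeout=30,
)
soup = BeautifulSoup(resp.text, "html.parser")

for result in soup.select(".result"):            # placeholder selector
    title = result.select_one(".result-title")   # placeholder selector
    snippet = result.select_one(".result-snippet")
    if title and snippet:
        print(title.get_text(strip=True), "->", snippet.get_text(strip=True))
```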
rp1, over 3 years ago
It's very easy to install Chrome on a Linux box and launch it with a whitelisted extension. You can run Xorg using the dummy driver and get a full Chrome instance (i.e. not headless). You can even enable the DevTools API programmatically. I don't see how this would be detectable, and it's probably a lot safer than downloading a random browser package from an unknown developer.
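A sketch of that setup, assuming Chrome and a virtual X server are installed (Xvfb is used here for brevity where the comment suggests the Xorg dummy driver); the display number, port, and paths are illustrative.

```python
import os
import subprocess
import time

import requests

# Start a virtual display and a full, non-headless Chrome with the DevTools
# protocol exposed on a local port.
xvfb = subprocess.Popen(["Xvfb", ":99", "-screen", "0", "1920x1080x24"])
chrome = subprocess.Popen(
    ["google-chrome", "--remote-debugging-port=9222", "--user-data-dir=/tmp/profile"],
    env={**os.environ, "DISPLAY": ":99"},
)

time.sleep(5)  # give Chrome a moment to start

# The DevTools HTTP endpoint lists what is available; any CDP client
# (Playwright, pyppeteer, a raw websocket) can attach from here.
info = requests.get("http://localhost:9222/json/version", timeout=10).json()
print(info["Browser"], info["webSocketDebuggerUrl"])

chrome.terminate()
xvfb.terminate()
```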
walrus01, over 3 years ago
Google "residential proxies for sale" if you want to see the weird, shady grey market for proxies when you need your traffic to come from things like cable-modem operator ASNs' DHCP pools.
welanes, over 3 years ago
Another great resource is incolumitas.com. A list of detection methods is here: https://bot.incolumitas.com/

I run a no-code web scraper (https://simplescraper.io) and we test against these.

Having scraped millions of webpages, I find dynamic CSS selectors a bigger time sink than most anti-scraping tech encountered so far (if your goal is to extract structured data).
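A small illustration of why dynamic selectors hurt: generated class names change between deploys, so it is safer to anchor extraction on stable attributes or nearby text. The HTML and attribute names here are made up for the example.

```python
from bs4 import BeautifulSoup

html = """
<div class="css-1x2y3z4">
  <span class="css-9q8w7e">Price</span>
  <span class="css-5t6y7u" data-testid="price-value">$19.99</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Generated class names like "css-1x2y3z4" change between deploys, so anchor
# on something more stable: data attributes, element text, or structure.
price = soup.find(attrs={"data-testid": "price-value"})
if price is None:
    # Fallback: find the label text, then take the sibling element.
    label = soup.find("span", string="Price")
    price = label.find_next_sibling("span") if label else None

print(price.get_text(strip=True) if price else "not found")
```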
peterburkimsher, over 3 years ago
2 of my social media accounts have fallen victim to bot detection, despite not using scripts. There are other websites for which I have used scripts, and I sometimes ran into CAPTCHA restrictions, but I was able to adjust the rate to stay within limits.

CouchSurfing blocked me after I manually searched for the number of active hosts in each country (191 searches) and posted the results on Facebook. Basically, I questioned their claim that they have 15 million users - although that may be their total number of registered accounts, the real number of users is about 350k. They didn't like that I said that (on Facebook), so they banned my CouchSurfing account. They refused to give a reason, but it was a month after gathering the data, so I know that it was retaliation for publication.

LinkedIn blocked me 10 days ago, and I'm still trying to appeal to get my account back.

A colleague was leaving, and his manager asked me to ask people around the company to sign his leaving card. Rather than go to 197 people directly, I intentionally wanted to target those who could also help with the software language translation project (my actual work). So I read the list of names, cut it down to 70 "international" people, and started searching for their names on Google. Then I clicked on the first result, usually LinkedIn or Facebook.

The data was useful, and I was able to find willing volunteers for Malay, Russian, and Brazilian Portuguese!

After finding the languages of 55 colleagues over 2 hours, LinkedIn asked for identity verification: upload a photo of my passport. No problem, I uploaded it. I also sent them a full explanation of what I was doing, why, how it was useful, and proof of my Google search history.

But rather than reactivate my account, LinkedIn has permanently banned me, and will not explain why.

"We appreciate the time and effort behind your response to us. However, LinkedIn has reviewed your request to appeal the restriction placed on your account and will be maintaining our original decision. This means that access to the account will remain restricted.

We are not at liberty to share any details around investigations, or interpret the terms of service for you."

So when the CAPTCHA says "Are you a robot?", I'm really not sure. Like Pinocchio, "I'm a real boy!"
nocturnial, over 3 years ago
I knew there was a reason why I used client certificates and alternate ports.

Why is it so difficult to just respect robots.txt? Maybe there's an idea for a browser plugin that determines whether the data can easily be scraped or not. If not, then the website is blocked and traffic will drop. I know this is a naive idea...
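For reference, presenting a client certificate on an alternate port is straightforward with requests; the paths and port below are examples, and the server side would of course have to be configured to require the certificate.

```python
import requests

# Fetch a resource that is only served on an alternate port and only to
# clients that present a certificate.
response = requests.get(
    "https://example.com:8443/private/feed.xml",
    cert=("/etc/ssl/client/me.crt", "/etc/ssl/client/me.key"),
    timeout=30,
)
print(response.status_code)
```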
teeray, over 3 years ago
Never underestimate the scraping technique of last resort: paying people on Mechanical Turk or equivalent to browse to the site and get the data you want.
adinosaur123, over 3 years ago
Are there any court cases that provide precedent regarding the legality of web scraping?

I'm currently looking for ways to get real estate listings in a particular area, and apparently the only real solution is to scrape the few big online listing sites.
IceWreck, over 3 years ago
Half of the short links to cutt.ly aren't working. Why use short links in Markdown?
dpryden, over 3 years ago
It always amazes me how people believe they have a right to retrieve data from a website. The HTTP protocol calls it a request for a reason: you are asking for data. The server is allowed to say no, for any reason it likes, even a reason you don't agree with.

This whole field of scraping and anti-bot technology is an arms race: one side gets better at something, the other side gets better at countering it. An arms race benefits no one but the arms dealers.

If we translate this behavior into the real world, it ends up looking like https://xkcd.com/1499
connectsnk, over 3 years ago
For the row "Long-lived sessions after sign-in", the author mentions that this solution is for social media automation, i.e. you build a tool to automate social media accounts to manage ads more efficiently.

I am curious what the author means by automating social media accounts to manage ads more efficiently.
kseifried, over 3 years ago
Trying to stop credential stuffing by blocking bots will not work, and it can often severely impact people depending on assistive technologies.

I think a better solution is to implement 2FA/MFA (even bad 2FA/MFA like SMS or email will block the mass attacks; for people worried about targeted attacks, let them use a token or a software token app) or SSO (e.g. sign in with Google/Microsoft/Facebook/LinkedIn/Twitter, who can generally do a better job securing accounts than some random website). SSO is also a lot less hassle in the long term than 2FA/MFA for most users (major caveat: public-use computers, but that's a tough problem to solve security-wise no matter what).

Better account security is, well, better, regardless of the bot/credential-stuffing/etc. problem.
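A minimal sketch of adding a TOTP second factor, using the third-party pyotp library as one possible implementation (the comment does not prescribe a particular library or flow).

```python
import pyotp

# Enrolment: generate a per-user secret and show it as a provisioning URI
# (usually rendered as a QR code) that an authenticator app can import.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="alice@example.com",
                                          issuer_name="ExampleApp")
print(uri)

# Login: after the password check, require the current 6-digit code.
totp = pyotp.TOTP(secret)
code = input("Enter the code from your authenticator app: ")
print("second factor ok" if totp.verify(code, valid_window=1) else "rejected")
```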
softwaredoug, over 3 years ago
A lot of web scraping is annoying because often there is *an explicit API built for the scraper's needs*. Instead of looking for an API, many think to use web scraping first. This in turn puts load and complexity on the user-facing web app, which must now tell scrapers from real users.
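For example, Hacker News itself exposes a documented public API, so its front page never needs to be scraped.

```python
import requests

API = "https://hacker-news.firebaseio.com/v0"

# Fetch the current top stories through the documented API instead of
# scraping the HTML front page.
top_ids = requests.get(f"{API}/topstories.json", timeout=30).json()[:5]
for item_id in top_ids:
    item = requests.get(f"{API}/item/{item_id}.json", timeout=30).json()
    print(item.get("score"), item.get("title"))
```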
greeklish, over 3 years ago
Here's a good resource about web scraping: https://bot.incolumitas.com/#:~:text=more%20sources%2Finformation
kinderjaje, over 3 years ago
I am running a no-code web automation and data extraction tool called https://automatio.co. In my experience, most of the time you will be fine when using quality residential proxies. But that comes at a cost, since they are far more expensive than data center proxies.

But for some websites, even residential IPs don't let you pass.

I noticed there is something like a premium reCAPTCHA service, which just works differently than the standard one and does not let you pass. It's mostly shown with a Cloudflare anti-bot page.
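For context, a request routed through an authenticated residential proxy usually looks something like this with requests; the gateway hostname and credentials are placeholders for whatever a provider hands out.

```python
import requests

# Placeholder credentials/endpoint; most providers expose an authenticated
# HTTP(S) gateway in roughly this shape.
proxy = "http://username:password@residential-gateway.example.net:8000"

response = requests.get(
    "https://httpbin.org/ip",
    proxies={"http": proxy, "https": proxy},
    timeout=30,
)
print(response.json())  # should report the proxy's exit IP, not yours
```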
intricatedetail, over 3 years ago
By the way - is it possible to stop Google's bot from scraping without maintaining a list of IP addresses? Google doesn't publish these, and it's not good to run reverse DNS, as it slows down legitimate clients. I know you can put a meta tag, but the bot still has to make a request to read it. I would like to completely cut off Google from scraping.
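Google's documented way to verify Googlebot without an IP list is a reverse DNS lookup to a googlebot.com/google.com hostname, followed by a forward lookup that must resolve back to the same address. The latency concern can be softened by running the check only once per previously unseen IP and caching the verdict; a sketch:

```python
import socket

def is_googlebot(ip: str) -> bool:
    """Reverse-then-forward DNS check, as documented by Google for
    verifying Googlebot without maintaining an IP allow-list."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    # Forward-confirm: the claimed hostname must resolve back to the same IP.
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False

# Call this once per new client IP and cache the result, so the lookup cost
# is not paid for every legitimate request.
print(is_googlebot("66.249.66.1"))
```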
rfraile, over 3 years ago
Datadome, PerimeterX: has anyone tried one of them?
navels, over 3 years ago
I've had a lot of success just with Selenium and this custom version of Chromedriver: https://github.com/ultrafunkamsterdam/undetected-chromedriver
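Its basic usage is a drop-in replacement for Selenium's own Chrome driver; a minimal sketch:

```python
import undetected_chromedriver as uc

# Drop-in replacement for selenium.webdriver.Chrome that patches the
# fingerprintable parts of ChromeDriver before launching the browser.
driver = uc.Chrome()
driver.get("https://example.com")
print(driver.title)
driver.quit()
```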
Jenk, over 3 years ago
In a previous venture, my team successfully circumvented bot detection for a price comparison project simply by using apify.com. It wasn't that expensive, either. We were drilling sites with 500k+ hits per day for months.
janmo, over 3 years ago
Somewhat related: https://news.ycombinator.com/item?id=29062027
egberts1, over 3 years ago
A couple of things for unblockable scraping:

1. Plenty of VPSes with many IP addresses (this is easier with an IPv6 subnet)
2. HTTP header rearranging
3. Fuzzing the user-agent
4. Pseudo-PKBOE algorithm
5. Office-hours, break-time, and lunch-time activity emulation
6. ????
7. Profit

I am looking at you, SSH port bashers.
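A sketch of items 3 and 5 from that list (user-agent fuzzing and office-hours emulation); the user-agent strings and the schedule are illustrative.

```python
import datetime
import random
import time

import requests

# Illustrative pool; real rotation would use a larger, up-to-date list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
]

def within_office_hours(now=None):
    # Only fetch Monday-Friday, 09:00-17:00, with a lunch break, to mimic
    # the activity pattern described above.
    now = now or datetime.datetime.now()
    if now.weekday() >= 5:
        return False
    if 12 <= now.hour < 13:
        return False
    return 9 <= now.hour < 17

def fetch(url):
    if not within_office_hours():
        return None
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    time.sleep(random.uniform(2, 10))  # jittered pacing between requests
    return requests.get(url, headers=headers, timeout=30)

print(fetch("https://example.com"))
```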
completelylegit, over 3 years ago
* Scrape open-proxy websites for open proxies, then use those proxies; cycle which proxies you use frequently.

* Change your user-agent to a real user-agent; cycle it frequently.

* Done.
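A minimal sketch of that proxy- and user-agent-cycling loop; both lists are placeholders, and open-proxy lists in particular churn quickly, so they would need frequent re-validation.

```python
import itertools
import random

import requests

PROXIES = [
    "http://203.0.113.10:3128",   # placeholder open proxies
    "http://198.51.100.7:8080",
]
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:115.0) Gecko/20100101 Firefox/115.0",
]

proxy_pool = itertools.cycle(PROXIES)

def fetch(url):
    # Each request goes out through the next proxy with a randomly chosen UA.
    proxy = next(proxy_pool)
    return requests.get(
        url,
        proxies={"http": proxy, "https": proxy},
        headers={"User-Agent": random.choice(USER_AGENTS)},
        timeout=15,
    )

print(fetch("https://httpbin.org/ip").json())
```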
billpg, over 3 years ago
You could ask first. The site's robots.txt file might have some information.

Put your email address in your User-Agent string so they can get in touch if needed.
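A sketch of asking first: check robots.txt with the standard library before fetching, and put a contact address in the User-Agent. The addresses and URLs are examples.

```python
import urllib.robotparser

import requests

USER_AGENT = "example-scraper/0.1 (contact: ops@example.com)"
TARGET = "https://example.com/products"

# Honour robots.txt before fetching anything else.
robots = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
robots.read()

if robots.can_fetch(USER_AGENT, TARGET):
    response = requests.get(TARGET, headers={"User-Agent": USER_AGENT}, timeout=30)
    print(response.status_code)
else:
    print("robots.txt disallows this path; not fetching")
```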
lavezzi, over 3 years ago
The proxy service recommendations are pretty expensive. Does anyone have alternatives they'd suggest to keep costs down?
hk1337, over 3 years ago
Not to forget the most important rule: don't be an asshole to the site hosting the content.
0xlwj, over 3 years ago
Pretty useful crash course on what is out there in the web scraping universe.
lifeisstillgood, over 3 years ago
What if we solved it by replacing passwords with client HSMs?
firerfly, over 3 years ago
plivo.com is good at anti-bot. I tried many methods and some residential proxies; they still blocked me out.
nuker, over 3 years ago
Will I scrape faster with an RTX 3080 Ti?