At my company, we scrape our Webflow marketing website and host it ourselves on Cloudflare to avoid their crazy enterprise plan pricing. I have a little Node.js script that gets the job done, but it's really slow (5 to 10 minutes).<p>For the life of me I cannot figure out how to speed up the scraping process. For example, when I scrape it locally I max out at around 300 KB/s no matter how much I try to parallelize requests, even though I have 200 Mbps of bandwidth. It's annoying for our marketing team to have such a long delay between publishing changes and seeing them deployed live.<p>Am I getting hit with some sort of CloudFront rate limiting by IP address? Is there some low-level socket limit I'm hitting on both my local Mac and the Linux box I do the scraping on?<p>What are the best ways I can speed things up?
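For reference, a minimal sketch of one way to do bounded-concurrency fetching in Node.js (assumes Node 18+ for the global fetch; the URL list and concurrency value below are placeholders, not the OP's actual setup):<p>

  // Sketch: fetch many pages with a fixed concurrency limit.
  // Assumes Node 18+ (global fetch). URLs and the limit are placeholders.
  const urls = ['https://example.com/', /* ...rest of the page URLs... */];
  const CONCURRENCY = 10;

  async function fetchAll(urls, limit) {
    const results = [];
    let next = 0;
    // Each worker keeps claiming the next unfetched URL until none remain.
    const workers = Array.from({ length: limit }, async () => {
      while (next < urls.length) {
        const url = urls[next++];
        const res = await fetch(url);
        results.push({ url, status: res.status, body: await res.text() });
      }
    });
    await Promise.all(workers);
    return results;
  }

  fetchAll(urls, CONCURRENCY).then((pages) => console.log(`fetched ${pages.length} pages`));

If throughput stays pinned at roughly the same number no matter what the limit is, the cap is probably being imposed on the server/CDN side rather than by the script itself.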
Idk how Cloudflare rate limiting can impact this unless the Webflow site is behind Cloudflare? Can it be removed?<p>It may also be that Webflow rate limits bot traffic? Try spoofing the user agent with a popular browser's[1].<p>But why scrape at all? Webflow lets you export the code[2], though that may still require a premium subscription; I haven't looked thoroughly.<p>[1] <a href="https://techblog.willshouse.com/2012/01/03/most-common-user-agents/" rel="nofollow">https://techblog.willshouse.com/2012/01/03/most-common-user-...</a><p>[2] <a href="https://university.webflow.com/lesson/code-export" rel="nofollow">https://university.webflow.com/lesson/code-export</a>
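For what it's worth, a tiny sketch of sending a browser-like User-Agent from a Node.js fetch call; the UA string below is just an example of a common desktop Chrome string, not necessarily current (pick one from a list like [1] above):<p>

  // Sketch: send a browser-like User-Agent in case bot-looking traffic is throttled.
  // The UA string is only an example; substitute a current one.
  async function fetchAsBrowser(url) {
    return fetch(url, {
      headers: {
        'User-Agent':
          'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 ' +
          '(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
      },
    });
  }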
Hard to know what's going on without seeing your code setup and knowing more details, but it could be related to rate limits. You could consider using rotating proxies to get around per-IP throttling. If you want to keep your own setup, you can integrate rotating proxies through a service; dropping a link to one that we used for a couple of projects at my workplace.<p>[1] <a href="https://get.brightdata.com/bd-solutions-rotating-proxies" rel="nofollow">https://get.brightdata.com/bd-solutions-rotating-proxies</a>
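A minimal sketch of what routing requests through a rotating-proxy endpoint could look like in Node.js using undici's ProxyAgent; the proxy URL and credentials are placeholders for whatever the provider supplies:<p>

  // Sketch: send requests through a proxy endpoint with undici's ProxyAgent.
  // Requires: npm install undici, run as an ES module (for top-level await).
  // The proxy URL/credentials are placeholders; a rotating-proxy service
  // supplies its own endpoint that swaps exit IPs behind the scenes.
  import { fetch, ProxyAgent } from 'undici';

  const dispatcher = new ProxyAgent('http://username:password@proxy.example.com:8080');

  const res = await fetch('https://www.example.com/', { dispatcher });
  console.log(res.status);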
1) Compare it with the HTTrack website copier.<p>2) Maybe your scraping is effectively synchronous, with no parallelism across pages at the same depth.<p>3) Use your code or a sitemap to collect all the URLs into a text file, then loop through them with bash/curl (a rough sketch of this step follows below).
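Point 3 above suggests bash/curl; since the OP already has a Node.js script, here is a hedged sketch of the same URL-collection step in Node instead (the sitemap URL is a placeholder, and a real sitemap index would need an extra round of fetches):<p>

  // Sketch: dump every <loc> URL from a sitemap into urls.txt.
  // Assumes Node 18+ and an ES module (for top-level await); the sitemap URL
  // is a placeholder.
  import { writeFile } from 'node:fs/promises';

  const res = await fetch('https://www.example.com/sitemap.xml');
  const xml = await res.text();

  // Naive <loc> extraction; good enough for a simple single-file sitemap.
  const urls = [...xml.matchAll(/<loc>\s*(.*?)\s*<\/loc>/g)].map((m) => m[1]);

  await writeFile('urls.txt', urls.join('\n') + '\n');
  console.log(`wrote ${urls.length} URLs to urls.txt`);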