
Web Scraping a Javascript Heavy Website: Keeping Things Simple

49 points by kuhn over 11 years ago

8 comments

rgarcia over 11 years ago
I used to use the network tab for stuff like this, but now I almost exclusively use mitmproxy [0]. Once things get sufficiently complicated, the constant scrolling and clicking around in the network tab feels tedious. Plus it's difficult to capture activity if a site has popups or multiple windows. mitmproxy solves these problems and also has a ton more features, like replaying requests and saving to files. My ideal tool involves something that translates mitmdump into code that performs the equivalent raw HTTP requests (e.g. using Python's requests). Sort of like Selenium's IDE but for super lightweight scraping.

[0] http://mitmproxy.org/
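A minimal sketch of the kind of code such a tool would emit: replaying a captured XHR as a raw HTTP request with Python's requests. The URL, headers, and parameters here are hypothetical stand-ins for values you would copy out of a mitmproxy flow (or the network tab).

    import requests

    # Values below are placeholders; in practice you copy them
    # from a flow captured in mitmproxy.
    url = "https://example.com/api/items"
    headers = {
        "User-Agent": "Mozilla/5.0",           # mimic the browser that made the call
        "X-Requested-With": "XMLHttpRequest",   # many XHR endpoints check for this
    }
    params = {"page": 1, "per_page": 50}

    resp = requests.get(url, headers=headers, params=params, timeout=10)
    resp.raise_for_status()
    print(resp.json())  # structured data back, no HTML parsing needed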
Comment #6294406 not loaded
hazz over 11 years ago
In many cases websites that load data asynchronously through an API are much nicer to scrape, as the data is already structured for you. You don't have to go through the pain of extracting data from a mess of tables, divs and spans.
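To illustrate the point: hitting a (hypothetical) JSON endpoint the page itself calls is a couple of lines, versus picking the same data out of rendered markup.

    import requests

    # Hypothetical endpoint discovered by watching the page's XHR traffic.
    data = requests.get("https://example.com/api/products?page=1", timeout=10).json()

    for product in data["items"]:  # assumed response shape
        print(product["name"], product["price"])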
bdcravens over 11 years ago
I've done a lot of scraping. Some sites use heavy JavaScript frameworks that generate session IDs and request IDs that the XHR requests use to "authenticate" the request. In these situations, the amount of work to reverse engineer that workflow is pretty large, so I lean on headless Selenium. I know there are some lighter solutions, but Selenium offers some distinct advantages:

1) lots of library support, in multiple languages

2) without having to fake UAs, etc., the requests look more like a regular user's (all media assets downloaded, normal browser UA, etc.)

3) simple clustering: setting up a Selenium grid is very easy, and switching from a local instance of Selenium to the grid requires very little code change (one line in most cases)
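For point 3, the switch is essentially swapping the local driver constructor for webdriver.Remote pointed at the grid hub. A sketch using the current Selenium Python API (the hub URL is a placeholder, and the API has changed since this comment was written):

    from selenium import webdriver

    # Local instance:
    # driver = webdriver.Firefox()

    # Switching to a Selenium grid is roughly the one-line change described
    # above (hub URL below is hypothetical):
    driver = webdriver.Remote(
        command_executor="http://my-grid-hub:4444/wd/hub",
        options=webdriver.FirefoxOptions(),
    )

    driver.get("https://example.com/js-heavy-page")
    html = driver.page_source  # fully rendered DOM, scripts already executed
    driver.quit()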
Comment #6294989 not loaded
hayksaakian over 11 years ago
Before any naysayers complain about the idea of using undocumented endpoints, keep in mind that this is all in the context of web scraping.
timscott over 11 years ago
I've recently been learning all this the hard way.

1. Documented API. Failing that...

2. HTTP client fetching structured data (XHR calls). Failing that...

3. HTTP client fetching and scraping HTML documents. Failing that...

4. Headless browser

I recently found myself pushed to #4 to handle sites with over-complex JS or anti-automation techniques.
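Tiers 2 and 3 of this ladder translate naturally into a fallback in code. A minimal sketch, assuming a hypothetical site whose page pulls its data from an /api/items endpoint and a made-up CSS selector for the HTML fallback:

    import requests
    from bs4 import BeautifulSoup

    def fetch_items(base="https://example.com"):  # hypothetical site
        # Tier 2: structured data from the XHR endpoint the page uses.
        resp = requests.get(f"{base}/api/items", timeout=10)
        if resp.ok and "json" in resp.headers.get("Content-Type", ""):
            return resp.json()["items"]

        # Tier 3: fall back to fetching and scraping the HTML document.
        resp = requests.get(f"{base}/items", timeout=10)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        return [el.get_text(strip=True) for el in soup.select(".item-name")]
        # Tier 4 (a headless browser) is the escape hatch when both fail.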
wslh over 11 years ago
If you liked this article, you might also be interested in "Scraping Web Sites which Dynamically Load Data": http://blog.databigbang.com/scraping-web-sites-which-dynamically-load-data/
corford over 11 years ago
For JS-heavy sites, I've found proxying the traffic through Fiddler is the easiest way to discover the API endpoints I need to hit.
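Once Fiddler is listening, you can also route your own scraper's traffic through it to compare against what the browser sends. A sketch with requests, assuming Fiddler's default listener on 127.0.0.1:8888:

    import requests

    # Route traffic through a local debugging proxy such as Fiddler,
    # which listens on 127.0.0.1:8888 by default.
    proxies = {
        "http": "http://127.0.0.1:8888",
        "https": "http://127.0.0.1:8888",
    }

    # verify=False because the proxy re-signs TLS with its own root cert;
    # alternatively, trust Fiddler's certificate and keep verification on.
    resp = requests.get("https://example.com/api/data", proxies=proxies, verify=False)
    print(resp.status_code)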
cpayne over 11 years ago
I'm getting a 404 - Page not found
Comment #6294598 not loaded