Ask HN: What are best tools for web scraping?

502 points, by pydox, over 7 years ago

100 comments

sharmi, over 7 years ago
If you are a programmer, scrapy [0] will be a good bet. It can handle robots.txt, request throttling by IP, request throttling by domain, proxies and all the other common nitty-gritties of crawling. The only drawback is handling pure JavaScript sites. We have to manually dig into the API or add a headless browser invocation within the scrapy handler.

Scrapy also has the ability to pause and restart crawls [1], run the crawlers distributed [2], etc. It is my go-to option.

[0] https://scrapy.org/
[1] https://doc.scrapy.org/en/latest/topics/jobs.html
[2] https://github.com/rmax/scrapy-redis
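A minimal sketch of the kind of Scrapy spider described above; the target site, selectors and throttling settings are placeholder assumptions rather than a specific recipe:

    import scrapy

    class ExampleSpider(scrapy.Spider):
        name = "example"
        start_urls = ["https://example.com/"]
        custom_settings = {
            "ROBOTSTXT_OBEY": True,        # respect robots.txt
            "DOWNLOAD_DELAY": 1.0,         # per-domain throttling
            "AUTOTHROTTLE_ENABLED": True,
        }

        def parse(self, response):
            # extract fields with CSS selectors
            for item in response.css("article"):
                yield {
                    "title": item.css("h2::text").get(),
                    "url": item.css("a::attr(href)").get(),
                }
            # follow pagination links
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

Running it with `scrapy runspider example_spider.py -o items.json` writes the scraped items out as JSON.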
jackschultz, over 7 years ago
I've actually written about this! General tips that I've found from doing more than a few projects [0], and then an overview of the Python libraries I use [1].

If you don't want to click on the links: requests and BeautifulSoup / lxml are all you need 90% of the time. Throw gevent in there and you can get a lot of scraping done in not as much time as you think it would take.

And as long as we're talking about web scraping, I'm a huge fan of it. There's so much data out there that's not easily accessible and needs to be cleaned and organized. When running a learning algorithm, for example, a very hard part that isn't talked about a lot is getting the data before throwing it into a learning function or library. Of course, there's the legal side of it if companies are not happy with people being able to scrape, but that's a different topic.

I'll keep going. The best way to learn what the best tools are is to do a project on your own and test them all out. Then you'll know what suits you. That's absolutely the best way to learn something about programming -- doing it instead of reading about it.

[0] https://bigishdata.com/2017/05/11/general-tips-for-web-scraping-with-python/
[1] https://bigishdata.com/2017/06/06/web-scraping-with-python-part-two-library-overview-of-requests-urllib2-beautifulsoup-lxml-scrapy-and-more/
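For reference, the 90% case mentioned above usually looks roughly like this (the URL and selectors are placeholders):

    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/articles", timeout=30)
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "lxml")   # lxml as the parser backend
    for link in soup.select("article h2 a"):
        print(link.get_text(strip=True), link["href"])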
samtc, over 7 years ago
I maintain ~30 different crawlers. Most of them use Scrapy. Some use PhantomJS/CasperJS, but those are called from Scrapy via a simple web service.

All the data (zip files, pdf, html, xml, json) we collect is stored as-is (/path/to/<dataset name>/<unique key>/<timestamp>) and processed later using a Spark pipeline. lxml.html is WAY faster than beautifulsoup and less prone to exceptions.

We have cron jobs (cron + jenkins) that trigger dataset updates and discovery. For example, we scrape a corporate registry, so every day we update the 20k oldest company records. We also implement "discovery" logic in all of our crawlers so they can find new data (e.g. a newly registered company). We use Redis to send tasks (update / discovery) to our crawlers.
danso, over 7 years ago
Always fascinated by how diverse the discussion and answers are for HN threads on web scraping. Goes to show that "web scraping" has a ton of connotations, everything from automated fetching of URLs via wget or cURL, to data management via something like scrapy.

Scrapy is a whole framework that may be worthwhile, but if I were just starting out for a specific task, I would use:

- requests http://docs.python-requests.org/en/master/
- lxml http://lxml.de/
- cssselect https://cssselect.readthedocs.io/en/latest/

Python 3, AFAIK, doesn't have anything as handy as Ruby/Perl's Mechanize. But using the web developer tools you can usually figure out the requests made by the browser, and then use the Session object in the Requests library to deal with stateful requests:

http://docs.python-requests.org/en/master/user/advanced/

I usually just download pages/data/files as raw files and worry about parsing/collating them later. I try to focus on the HTTP mechanics and, if needed, the HTML parsing, before worrying about data extraction.
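A small sketch of that requests/lxml/cssselect combination, with a Session for stateful requests; the login URL, form fields and selectors are assumptions for illustration:

    import requests
    import lxml.html

    with requests.Session() as session:
        # the Session carries cookies across requests, e.g. after a login POST
        session.post("https://example.com/login",
                     data={"user": "me", "password": "secret"})
        resp = session.get("https://example.com/dashboard")

    doc = lxml.html.fromstring(resp.text)
    # .cssselect() uses the cssselect package under the hood
    for cell in doc.cssselect("table.report td.value"):
        print(cell.text_content().strip())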
marvinpinto, over 7 years ago
I would recommend using Headless Chrome along with a library like puppeteer [0]. You get the advantage of using a real browser, with which you can run pages' JavaScript, load custom extensions, etc.

[0]: https://github.com/GoogleChrome/puppeteer
beernutz, over 7 years ago
The absolute best tool I have found for scraping is Visual Web Ripper.

It is not open source, and runs on Windows only, but it is one of the easiest-to-use tools that I have found. I can set up scrapes entirely visually, and it handles complex cases like infinite-scroll pages, highly JavaScript-dependent pages and the like. I really wish there were an open source solution that was as good as this one.

I use it with one of my clients professionally. Their support is VERY good, btw.

http://visualwebripper.com/
hydragit, over 7 years ago
WebOOB [0] is a good Python framework for scraping websites. It's mostly used to aggregate data from multiple websites by having each site backend implement an abstract interface (for example the CapBank abstract interface for parsing banking sites), but it can be used without that part.

On the pure scraping side, it has "declarative parsing" to avoid painful plain-old procedural code [1]. You can parse pages by simply specifying a bunch of XPaths and indicating a few filters from the library to apply on those XPath elements, for example CleanText to remove whitespace nonsense, Lower (to lower-case), Regexp, CleanDecimal (to parse as a number) and a lot more. URL patterns can be associated with a Page class of such declarative parsing. If declarative becomes too verbose, it can always be replaced locally by writing a plain-old Python method.

A set of applications is provided to visualize the extracted data, and other niceties are provided to ease debugging. Simply put: « Wonderful, Efficient, Beautiful, Outshining, Omnipotent, Brilliant: meet WebOOB ».

[0] http://weboob.org/
[1] http://dev.weboob.org/guides/module.html#parsing-of-pages
zapperdapper, over 7 years ago
No one has mentioned it so I will: consider Lynx, the text-mode web browser. Being command-line, you can automate it with Bash or even Python. I have used it quite happily to crawl largish static sites (10,000+ web pages per site). Do a `man lynx`; the options of interest are -crawl, -traversal, and -dump. Pro tip: use it in conjunction with HTML Tidy prior to the parsing phase (see below).

I have also used custom-written Python crawlers in a lot of cases.

The other thing I would emphasize is that a web scraper has multiple parts, such as crawling (downloading pages) and then actually parsing the pages for data. The systems I've set up in the past are typically structured like this:

1. crawl - download pages to the file system
2. clean, then parse (extract data)
3. ingest extracted data into a database
4. query - run ad hoc queries on the database

One of the trickiest things in my experience is managing updates. When new articles/content are added to the site you only want to fetch and add those to your database, rather than crawl the whole site again. Detecting updated content can also be tricky. The brute-force approach of course is just to crawl the whole site again and rebuild the database - not ideal though!

Of course, this all depends really on what you are trying to do!
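One plausible way to drive lynx from Python for the crawl step, assuming lynx is installed; -dump and -nolist are standard lynx options, but treat this as a sketch rather than the poster's exact setup:

    import subprocess

    def lynx_dump(url: str) -> str:
        # return the rendered plain-text version of a page via lynx
        result = subprocess.run(
            ["lynx", "-dump", "-nolist", url],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(lynx_dump("https://example.com/")[:500])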
phsource, over 7 years ago
For someone on a JavaScript stack, I highly recommend combining a requester (e.g., "request" or "axios") with Cheerio, a server-side jQuery clone. Having a familiar, well-known interface for selection helps a lot.

We use this stack at WrapAPI (https://wrapapi.com), which we highly recommend as a tool to turn webpages into APIs. It doesn't completely do all the scraping (you still need to write a script), but it does make turning an HTML page into a JSON structure much easier.
mping, over 7 years ago
I use nightmarejs https://github.com/segmentio/nightmare which is based on Electron; I recommend it if you're on JS.
deathemperor, over 7 years ago
I've just finished my research on web scraping for my company (took me about 7 days). I started with import.io and scrapinghub.com for point-and-click scraping, to see if I could do it without writing code. Ultimately, UI point-and-click scraping is for non-technical users. There is plenty of data you would find hard to scrape that way. For example, lazada.com.my stores the product's SKU inside an attribute that looks like <div data-sku-simple="SKU11111"></div>, which I couldn't get at. import.io's pricing is also something: having to pay $999 a month to access API data is just too high.

So I decided to use scrapy, the core of scrapinghub.com.

I hadn't written much Python before, but scrapy was very easy to learn. I wrote 2 spiders and ran them on scrapinghub (their serverless cloud). Scrapinghub supports job scheduling and many other things, at a cost. I prefer scrapinghub because in my team we don't have DevOps. It also supports Crawlera to prevent IP banning, Portia for point-and-click (still in beta; it was still hard to use), and Splash for SPA websites, but that is buggy and the GitHub repo is not under active maintenance.

For DOM querying I use BeautifulSoup4. I love it. It's jQuery for Python.

For SPA websites I wrote a scrapy middleware which uses puppeteer. The puppeteer part is deployed on AWS Lambda (1M free requests for the first 365 days, more than enough for scraping) using https://github.com/sambaiz/puppeteer-lambda-starter-kit

I am planning to use Amazon RDS to store the scraped data.
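For what it's worth, an attribute like data-sku-simple is straightforward to reach with BeautifulSoup; the HTML snippet here is a stand-in:

    from bs4 import BeautifulSoup

    html = '<div data-sku-simple="SKU11111"></div>'
    soup = BeautifulSoup(html, "html.parser")

    # find any div that carries the attribute, then read its value
    div = soup.find("div", attrs={"data-sku-simple": True})
    print(div["data-sku-simple"])   # -> SKU11111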
baldfat, over 7 years ago
I use R, since that is the language I use most: httr and rvest. Edit: I missed typing rvest; thanks for the comments, you use the two together.

https://cran.r-project.org/web/packages/httr/vignettes/quickstart.html
indescions_2017, over 7 years ago
Headless Chrome, Puppeteer, NodeJS (jsdom), and MongoDB. Fantastic stack for web data mining. Async-based, using promises for explicit user-input flow automation.
Risse, over 7 years ago
If you use PHP, Simple HTML DOM [0] is an awesome and simple scraping library.

[0] http://simplehtmldom.sourceforge.net/
levi_n, over 7 years ago
I use a combination of Selenium and Python packages (BeautifulSoup). I'm primarily interested in scraping data that is supplied via JavaScript, and I find Selenium to be the most reliable way to scrape that info. I use BS when the scraped page has a lot of data, thereby slowing down Selenium, and I pipe the page source from Selenium, with all JavaScript rendered, into BS.

I use explicit waits exclusively (no direct calls like `driver.find_foo_by_bar`), and find it vastly improves Selenium reliability. (Shameless plug) I have a Python package, Explicit [1], that makes it easier to use explicit waits.

[1] https://pypi.python.org/pypi/explicit
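A sketch of the explicit-wait pattern described above, handing the rendered page source to BeautifulSoup; the URL and selectors are placeholders:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from bs4 import BeautifulSoup

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/js-heavy-page")
        # wait until the JS-rendered content is actually present
        WebDriverWait(driver, 15).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "div.results"))
        )
        soup = BeautifulSoup(driver.page_source, "lxml")
        for row in soup.select("div.results .item"):
            print(row.get_text(strip=True))
    finally:
        driver.quit()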
giarc, over 7 years ago
For non-coders, import.io is great. However, they used to have a generous free plan that has since gone away (you are limited to 500 records now). Still a great product; the problem is they don't have a small plan (starts at $299/month and goes up to $9,999).
cholmon, over 7 years ago
I recently stumbled across http://go-colly.org/, which looks well thought out and simple to use. It seems like a slimmed-down Go version of Scrapy.
elchief, over 7 years ago
Anyone who suggests a tool that can't understand JavaScript doesn't know what they are talking about.

You should be using headless Chrome or headless Firefox with a library that can control them in a user-friendly manner.
jmkni, over 7 years ago
I've had a surprising amount of success with the HTML Agility Pack in .NET; if you have a decent understanding of HTML it's pretty usable.
khuknows, over 7 years ago
Shameless plug - I built this tiny API for scraping and it works a treat for my uses: https://jsonify.link/

A few similar tools also exist, like https://page.rest/.
ravenstine, over 7 years ago
It depends on what you're trying to do.

For most things, I use Node.js with the Cheerio library, which is basically a stripped-down version of jQuery without the need for a browser environment. I find using the jQuery API far more desirable than the clunky, hideous Beautiful Soup or Nokogiri APIs.

For something that requires an actual DOM or code execution, PhantomJS with Horseman works well, though everyone is talking about headless Chrome these days so IDK. I've not had nearly as many bad experiences with PhantomJS as others have purportedly had.
Doctor_Fegg, over 7 years ago
If you speak Ruby, mechanize is good: https://github.com/sparklemotion/mechanize
polote, over 7 years ago
I maintain about 8 crawlers and I use only vanilla Python.

I have a helper function to search:

    def find_r(value, ind, array, stop_word):
        # walk through the markers in `array` in order, then return the text
        # up to `stop_word` together with the index where it ends
        indice = ind
        for i in array:
            indice = value.find(i, indice) + 1
        end = value.find(stop_word, indice)
        return value[indice:end], end

You can use it like this:

    resulting_text, end_index = find_r(string, start_index, ["<td", ">"], "</td")

For finding text it is quite fast, and you don't need to master a framework.
CGamesPlay, over 7 years ago
If you can get away without a JS environment, do so. Something like scrapy will be much easier than a full browser environment. If you cannot, don't bother going halfway and just go straight for headless Chrome or Firefox. Unfortunately Selenium seems to be past its useful life, as Firefox dropped support and Chrome has a chromedriver which wraps around it. PhantomJS is woefully out of date, and since it's a different environment than your target site was designed for, it just leads to problems.
dsacco, over 7 years ago
I've done this professionally in an infrastructure processing several terabytes per day. A robust, scalable scraping system comprises several distinct parts:

1. A crawler, for retrieving resources over HTTP, HTTPS and sometimes other protocols a bit higher or lower on the network stack. This handles data ingestion. It will need to be sophisticated these days - sometimes you'll need to emulate a browser environment, sometimes you'll need to perform a JavaScript proof of work, and sometimes you can just do regular curl commands the old-fashioned way.

2. A parser, for correctly extracting specific data from JSON, PDF, HTML, JS, XML (and other) formatted resources. This handles data processing. Naturally you'll want to parse JSON wherever you can, because parsing HTML and JS is a pain. But sometimes you'll need to parse images, or outdated protocols like SOAP.

3. An RDBMS, with databases for both the raw and normalized data, and columns that provide some sort of versioning of the data at a particular point in time. This is quite important, because if you collect the raw data and store it, you can re-parse it in perpetuity instead of needing to retrieve it again. This will happen somewhat frequently if you come across new data while scraping that you didn't realize you'd need or could use. Furthermore, if you're updating the data on a regular cadence, you'll need to maintain some sort of "retrieved_at" / "updated_at" awareness in your normalized database. MySQL or PostgreSQL are both fine.

4. A server and event management system, like Redis. This is how you'll allocate scraping jobs across available workers and handle outgoing queuing for resources. You want a centralized terminal for viewing and managing a) the number of outstanding jobs and their resource allocations, b) the ongoing progress of each queue, c) problems or blockers for each queue.

5. A scheduling system, assuming your data is updated in batches. Cron is fine.

6. Reverse engineering tools, so you can find mobile APIs and scrape from them instead of using web targets. This is important because mobile API endpoints a) change far less frequently than web endpoints, and b) are far more likely to be JSON formatted, instead of HTML or JS, because the user interface code is offloaded to the mobile client (iOS or Android app). The mobile APIs will be private, so you'll typically have to reverse engineer the HMAC request signing algorithm, but that is virtually always trivial, with the exception of companies that really put effort into obfuscating the code. apktool, jadx and dex2jar are typically sufficient for this if you're working with an Android device.

7. A proxy infrastructure, so that you're not constantly pinging a website from the same IP address. Even if you're being fairly innocuous with your scraping, you probably want this, because many websites have been burned by excessive spam and will conscientiously and automatically ban any IP address that issues anything nominally more than a regular user, regardless of volume. Your proxies come in several flavors: datacenter, residential and private. Datacenter proxies are the first to be banned, but they're cheapest. These are proxies resold from datacenter IP ranges. Residential IP addresses are IP addresses that are not associated with spam activity and which come from ISP IP ranges, like Verizon Fios. Private IP addresses are IP addresses that have not been used for spam activity before and which are reserved for use by only your account. Naturally this is in order from lower to greater expense; it's also in order from most likely to least likely to be banned by a scraping target. NinjaProxies, StormProxies, Microleaf, etc. are all good options. Avoid Luminati, which offers residential IP addresses contributed by users who don't realize their IP addresses are being leased through the use of Hola VPN.

Each website you intend to scrape is given a queue. Each queue is assigned a specific allotment of workers for processing scraping jobs in that queue. You'll write a bunch of crawling, parsing and database querying code in an "engine" class to manage the bulk of the work. Each scraping target will then have its own file which inherits functionality from the core class, with the specific crawling and parsing requirements in that file. For example, implementations of the POST requests, user agent requirements, which type of parsing code needs to be called, which database to write to and read from, which proxies should be used, asynchronous and concurrency settings, etc. should all be in here.

Once triggered in a job, the individual scraping functions will call to the core functionality, which will build the requests and hand them off to one of a few possible functions. If your code is scraping a target that has sophisticated requirements, like a JavaScript proof-of-work system or browser emulation, it will be handed off to functionality that implements those requirements. Most of the time, this won't be needed and you can just make your requests look as human as possible - then it will be handed off to what is basically a curl script.

Each request to the endpoint is a job, and the queue will manage them as such: the request is first sent to the appropriate proxy vendor via the proxy's API, then the response is sent back through the proxy. The raw response data is stored in the raw database, then normalized data is processed out of the raw data and inserted into the normalized database, with corresponding timestamps. Then a new job is sent to a free worker. Updates to the normalized data will be handled by something like cron, where each queue is triggered at a specific time on a specific cadence.

You'll want to optimize your workflow to use endpoints which change infrequently and which use lighter resources. If you are sending millions of requests, loading the same boilerplate HTML or JS data is a waste. JSON resources are preferable, which is why you should invest some amount of time, before choosing your endpoint, into seeing if you can identify a usable mobile endpoint. For the most part, your custom code is going to be in middleware and the parsing particularities of each target; BeautifulSoup, QueryPath, Headless Chrome and JSDOM will take you 80% of the way in terms of pure functionality.
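A heavily simplified sketch of the per-target queueing idea above, using Redis lists; the queue names, job fields and storage layout are assumptions, not the poster's actual design:

    import json
    import redis
    import requests

    r = redis.Redis()

    def enqueue(target: str, url: str) -> None:
        # one list per scraping target acts as its job queue
        r.lpush(f"queue:{target}", json.dumps({"url": url}))

    def worker(target: str) -> None:
        while True:
            # blocking pop: wait for the next job on this target's queue
            _, raw = r.brpop(f"queue:{target}")
            job = json.loads(raw)
            resp = requests.get(job["url"], timeout=30)
            # store the raw response as-is; parsing/normalization happens later
            r.hset(f"raw:{target}", job["url"], resp.text)

    enqueue("example", "https://example.com/companies/123")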
austincheney, over 7 years ago
This is perhaps the fastest way to screen-scrape a dynamically executed website.

1. First go get and run this code, which allows immediate gathering of all text nodes from the DOM: https://github.com/prettydiff/getNodesByType/blob/master/getNodesByType.js

2. Extract the text content from the text nodes and ignore nodes that contain only white space:

    let text = document.getNodesByType(3),
        a = 0,
        b = text.length,
        output = [];
    do {
        if ((/^(\s+)$/).test(text[a].textContent) === false) {
            output.push(text[a].textContent);
        }
        a = a + 1;
    } while (a < b);
    output;

That will gather ALL text from the page. Since you are working from the DOM directly you can filter your results by various contextual and stylistic factors. Since this code is small and executes stupid fast it can be executed by bots easily.

Test this out in your browser console.
jacinda, over 7 years ago
If you're specifically looking at news articles, go for the Python library Newspaper: http://newspaper.readthedocs.io/en/latest/

Auto-detection of languages, and it will automatically give you things like the following:

    >>> article.parse()

    >>> article.authors
    [u'Leigh Ann Caldwell', 'John Honway']

    >>> article.text
    u'Washington (CNN) -- Not everyone subscribes to a New Year's resolution...'

    >>> article.top_image
    u'http://someCDN.com/blah/blah/blah/file.png'

    >>> article.movies
    [u'http://youtube.com/path/to/link.com', ...]
mmmnt, over 7 years ago
For very simple tasks, Listly seems to be a fast and good solution: http://www.listly.io/

If you need more power, I heard good things about http://80legs.com/ though I never tried them myself.

If you really need to do crazy stuff like crawling the iOS App Store really fast and keeping things up to date, I suggest using AWS Lambda and a custom Python parser. Though Lambda is not meant for this kind of thing, it works really well and is super scalable at a reasonable price.
jppope, over 7 years ago
Headless Chrome in the form of puppeteer (https://github.com/GoogleChrome/puppeteer) or Chromeless (https://github.com/graphcool/chromeless), or for smaller gigs use nightmare.js (http://www.nightmarejs.org/).

scrapy is fine, but selenium, phantom, etc. are all outdated IMO.
btb, over 7 years ago
We have been using Kapow Robosuite for close to 10 years now. It's a commercial GUI-based tool which has worked well for us; it saves us a lot of maintenance time compared to our previous hand-rolled code extraction pipeline. The only problem is that it's very expensive (pricing seems catered towards very large enterprises).

So I was really hoping this thread would have revealed some newer commercial GUI-based alternatives (on-premise, not SaaS), because I don't really ever want to go back to the maintenance hell of hand-rolled robots again :)
kanishkalinux, over 7 years ago
For mostly static pages, requests/pycurl + beautifulsoup is more than sufficient. For advanced scraping, take a look at scrapy.

For JavaScript-heavy pages most people rely on Selenium WebDriver. However, you can also try hlspy (https://github.com/kanishka-linux/hlspy), which is a little utility I made a while ago for dealing with JavaScript-heavy pages in simple cases.
bootcat, over 7 years ago
One of the important avenues for scraping AJAX-heavy websites that block PhantomJS is Google Chrome extension support. An extension can mirror the DOM and send it to an external server for processing, where we can use Python lxml to XPath to the appropriate nodes. This worked for me to scrape Google, before we hit the captcha. If anyone is interested, I can share code I wrote to scrape websites!

As for whether you can scrape the findthecompany database - I have done it successfully!!
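The server-side lxml/XPath step could look roughly like this once the extension has posted the mirrored DOM; the markup and the XPath expression are placeholders:

    from lxml import html

    def extract_titles(dom_html: str):
        tree = html.fromstring(dom_html)
        # XPath down to the nodes of interest
        return tree.xpath('//div[@class="result"]/h3/a/text()')

    print(extract_titles('<div class="result"><h3><a>Example hit</a></h3></div>'))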
etatoby, over 7 years ago
If you need to scrape content from complex JS apps (e.g. React) where it doesn't pay to reverse engineer their backend API (or worse, it's encrypted/obfuscated), you may want to look at CasperJS.

It's a very easy-to-use frontend to PhantomJS. You can code your interactions in JS or CoffeeScript and scrape virtually anything with a few lines of code.

If you need crawling, just pair a CasperJS script with any spider library like the ones mentioned around here.
theden, over 7 years ago
I've had good success with scrapy (https://scrapy.org/) for my personal projects.
Jeaye, over 7 years ago
I've written a bit on web scraping with Clojure and Enlive here: https://blog.jeaye.com/2017/02/28/clojure-apartments/

That's what I'd use if I had to scrape again (no JS support).
mrskitch, over 7 years ago
I'd recommend puppeteer or some other Chrome driver. It's fast and resilient, even on single-page apps.

If you're looking to run it on a Linux machine, also take a look at https://browserless.io (full disclosure: I'm the creator of that site).
riekus, over 7 years ago
Depends on your skillset and the data you want to scrape. I am testing the waters for a new business that relies on scraped data. As a non-programmer I had good success testing stuff with ContentGrabber. Import.io also gets mentioned a lot. I tried out Octoparse but it wasn't stable for the scraping.
vrathee, over 7 years ago
If you are looking for SaaS or managed services, try https://www.agenty.com/

Agenty is a cloud-hosted web scraping app. You can set up scraping agents using their point-and-click CSS selector Chrome extension to extract anything from HTML, with these 3 modes:

- TEXT: simple clean text
- HTML: outer or inner HTML
- ATTR: any attribute of an HTML tag, like image src, hyperlink href...

Or advanced modes like REGEX, XPATH etc.

Then save the scraping agent to execute on the cloud-hosted app, with advanced features like batch crawling, scheduling, and scraping multiple websites simultaneously, without worrying about IP-address blocks or speed.
doominasuit, over 7 years ago
If you need to interpret JavaScript, or otherwise simulate regular browsing as closely as possible, you may consider running a browser inside a container and controlling it with Selenium. I have found it's necessary to run inside a container if you do not have a desktop environment. This is better suited for specific use cases rather than mass collection, because it is slower to run a full browsing stack than to operate only at the HTTP layer. I have found that alternatives like PhantomJS are hard to debug. Consider opening VNC on the container for debugging. Containers like this that I know of are SeleniumHQ and elgalu/selenium.
hmottestad, over 7 years ago
If you know Java, then my go-to library is Jsoup: https://jsoup.org/

It lets you use jQuery-like selectors to extract data. Like this:

    Elements newsHeadlines = doc.select("#mp-itn b a");
cdolan, over 7 years ago
Outwit Hub, specifically the advanced or enterprise levels.

It has a GUI on it that is not designed very well, and documentation that is complete but hard to search...

But it can do just about any type of scrape, including being started from a command-line script.
jpetersonmn, over 7 years ago
I used to use a combo of Python tools, mostly Requests and BeautifulSoup. However, the last few things I've built used Selenium to drive headless Chrome browsers. This allows me to run the JavaScript most sites use these days.
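The headless-Chrome setup referred to above typically looks something like this with Selenium; exact flags vary by Chrome version, so treat it as a sketch:

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    options.add_argument("--window-size=1920,1080")

    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.com/")
        print(driver.title)   # JavaScript has been executed by the real browser
    finally:
        driver.quit()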
jancurn, over 7 years ago
Apify (https://www.apify.com) is a web scraping and automation platform where you can extract data from any website using a few simple lines of JavaScript. It's using headless browsers, so that people can extract data from pages that have complex structure, dynamic content or employ pagination.

Recently the platform added support for headless Chrome and Puppeteer; you can even run jobs written in Scrapy or any other library as long as it can be packaged as a Docker container.

Disclaimer: I'm a co-founder of Apify.
servitor, over 7 years ago
I agree with others: with curl and the like you will hit insurmountable roadblocks sooner or later. It's better to go full headless browser from the start.

I use a python -> selenium -> chrome stack. The Page Object Model [0] has been a revelation for me. My scripts went from being a mess of spaghetti code to something that's a pleasure to write and maintain.

[0] https://www.guru99.com/page-object-model-pom-page-factory-in-selenium-ultimate-guide.html
sl0wik, over 7 years ago
I had a great experience with www.apify.com.
mfontani, over 7 years ago
Whatever you end up using for scraping, I beg you to pick a unique user-agent which allows a webmaster to understand which crawler it is, to better allow it to pass through (or be banned, depending).

Don't stick with the default "scrapy" or "Ruby" or "Jakarta Commons-HttpClient/...", which end up (justly) being banned more easily than unique ones, like "ABC/2.0 - https://example.com/crawler" or the like.
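Setting an identifiable User-Agent is a one-liner in most HTTP clients; with requests it might look like this (the crawler name and contact URL are of course placeholders):

    import requests

    HEADERS = {"User-Agent": "ExampleCrawler/2.0 (+https://example.com/crawler)"}

    resp = requests.get("https://example.com/some-page", headers=HEADERS, timeout=30)
    print(resp.status_code)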
Softcadbury, over 7 years ago
With Node, you can use cheerio [0]. It allows you to parse HTML pages with a jQuery-like syntax. I use it in production on my project [1].

[0] https://github.com/cheeriojs/cheerio
[1] https://github.com/Softcadbury/football-peek/blob/master/server/updaters/scorersUpdater.js
colinchartier, over 7 years ago
We had a really tough time scraping dynamic web content using scrapy, and both scrapy and selenium require you to write a program (and maintain it) for every separate website that you have to scrape. If the website's structure changes you need to debug your scraper. Not fun if you need to manage more than 5 scrapers.

It was so hard that we made our own company JUST to scrape stuff easily without requiring programming. Take a look at https://www.parsehub.com
256cats, over 7 years ago
I use Node and either puppeteer [0] or plain Curl [1]. IMO Curl is years ahead of any Node.js request lib. For proxies I use (shameless plug!) https://gimmeproxy.com .

[0] https://github.com/GoogleChrome/puppeteer
[1] https://github.com/JCMais/node-libcurl
mitchtbaum, over 7 years ago
I made https://www.drupal.org/project/example_web_scraper and produced the underlying code many years ago. The idea is to map XPath queries to your data model and use some reusable infrastructure to simply apply it. It was very good, imho (for what it was). (I'm writing this comment since I don't see any other comments with the words map or model :/ )
bbayer, over 7 years ago
I am really surprised nobody has mentioned pyspider. It is simple, has a web dashboard and can handle JS pages. It can store data to a database of your choice. It can handle scheduling and recrawling. I have used it to crawl Google Play. A $5 Digital Ocean VPS with pyspider installed on it could handle millions of pages crawled, processed and saved to a database.

http://docs.pyspider.org/en/latest/
OzzyB, over 7 years ago
A good host xD

Preferably one that doesn't mind giving you a bunch of IPs, and if they do, doesn't charge a fortune for them.

Then you can worry about what software you're gonna use.
mrkeen, over 7 years ago
I made a crawler: https://github.com/jahaynes/crawler

It outputs to the WARC file format (https://en.wikipedia.org/wiki/Web_ARChive), in case your workflow is to gather web pages and then process them afterwards.
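If you adopt that gather-then-process workflow, one common way to read WARC output in Python is the warcio package (my suggestion, not something the author names); a minimal sketch:

    from warcio.archiveiterator import ArchiveIterator

    with open("crawl.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                url = record.rec_headers.get_header("WARC-Target-URI")
                body = record.content_stream().read()
                print(url, len(body))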
ngneer, over 7 years ago
https://github.com/featurist/coypu is nice for browser automation. A related question: what are good tools for database scraping, meaning replicating a backend database via a web interface (not referring to compromising the application, rather using allowed queries to fully extract the database)?
dineshr93, over 7 years ago
If you know Java then Jsoup will be very handy. [1] https://jsoup.org/
charlus, over 7 years ago
For a little diversity in tools: if you're looking for something quick that lets others access the data easily, Google Apps Script in a Google Sheet can be quite useful.

https://sites.google.com/site/scriptsexamples/learn-by-example/parsing-html
buildops, over 7 years ago
Why are you looking to scrape? Here's a list of some scraper bots: https://www.incapsula.com/blog/web-scraping-bots.html

What about Botscraper: http://www.botscraper.com/
wiradikusuma, over 7 years ago
I tinkered with Apache Nutch (http://nutch.apache.org/), but I found it overkill. In the end, since I use Scala, I use https://github.com/ruippeixotog/scala-scraper
laktek, over 7 years ago
One of the challenges with modern-day scraping is that you need to account for client-side JS rendering.

If you prefer an API as a service that can pre-render pages, I built Page.REST (https://www.page.rest). It allows you to get rendered page content via CSS selectors as a JSON response.
blueadept111, over 7 years ago
Jaunt [http://jaunt-api.com] is a good Java tool.
0xdeadbeefbabe, over 7 years ago
The best tool for web scraping, for me, is something easy to deploy and redeploy, and something that doesn't rely on three working programs -- eliminating Selenium sounds great.

For those reasons I like https://github.com/knq/chromedp
ksahin, over 7 years ago
I wrote a blog post about Java web scraping here: https://ksah.in/introduction-to-web-scraping-with-java/

As others said, PhantomJS (and now headless Chrome) are good tools for dealing with JS-heavy websites.
teremin, over 7 years ago
I use Colly [0][1], which is a young but decent scraping framework for Golang.

[0] http://go-colly.org/
[1] https://github.com/gocolly/colly
tmaly, over 7 years ago
I just tried puppeteer yesterday for the first time. It seems to work very well. My only complaint is that it is very new and does not have a plethora of examples.

I previously used WWW::Mechanize in the Perl world, but single-page applications with JavaScript really require something with a browser engine.
RandomBookmarks, over 7 years ago
The "best tool" is different for web developers and non-coders. If you are a non-technical person who just needs some data, there are:

(1) hosted services like Mozenda

(2) visual automation tools like Kantu Web Automation (which includes OCR)

(3) and last but not least, outsourcing the scraping on sites like Freelancer.com
thallian, over 7 years ago
I used CasperJS [0] in the past to scrape a JavaScript-heavy forum (ProBoards) and it worked well. But that was a few years ago; I have no idea what new strategies have come up in the meantime.

[0] http://casperjs.org/
tn_, over 7 years ago
Check out Heritrix if you're looking for an open-source web scraping archival tool: https://webarchive.jira.com/wiki/spaces/Heritrix
brycematheson, over 7 years ago
Shameless plug. I wrote a blog post on how I use PowerShell to scrape sites: http://brycematheson.io/webscraping-with-powershell/
frausto, over 7 years ago
Been getting blocked by reCAPTCHA more and more. Do any of these tools handle dealing with that, or have workarounds by default? Tried routing through proxies and swapping IP addresses, slowing down, etc... Any specific ways people get around that?
jschuur, over 7 years ago
If you want to extract content and specific metadata, you might find the Mercury Web Parser useful:

https://mercury.postlight.com/web-parser/
Karupan, over 7 years ago
I've had some success using Portia [1]. It's a visual wrapper over scrapy, but is actually quite useful.

[1] https://github.com/scrapinghub/portia
traviswingo, over 7 years ago
I've been using puppeteer to scrape and it's been fantastic. Since it's a headless browser, it can handle SPAs just as well as traditional server-side-rendered websites. It's also incredibly easy to use with async/await.
askz, over 7 years ago
A friend released a little tool to scrape just the HTML from websites, with Tor and proxy chaining:

https://github.com/AlexMili/Scraptory
freeslugs, over 7 years ago
If you need simple scraping, I like a traditional HTTP request lib. For more robust scraping (i.e. clicking buttons / filling text), use Capybara and either PhantomJS or chromedriver - easy to install using Homebrew!
mateuszf, over 7 years ago
`clj-http`, `enlive` and `cheshire`, in the case of Clojure, worked fine for me.
thegrif, over 7 years ago
A ton of people recommended Scrapy - and I am always looking for senior Scrapy resources that have experience scraping at scale. Please feel free to reach out - contact info is in my profile.
sananth12, over 7 years ago
If you are looking for image scraping: https://github.com/sananth12/ImageScraper
pudo, over 7 years ago
We're about to announce a new Python scraping toolkit, memorious: https://github.com/alephdata/memorious - it's a pretty lightweight toolkit, using YAML config files to glue together pre-built and custom-made components into flexible and distributed pipelines. A simple web UI helps track errors, and execution can be scheduled via celery.

We looked at scrapy, but it just seemed like the wrong type of framing for the type of scrapers we build: requests, some html/xml parser, and output into a service API or a SQL store.

Maybe some people will enjoy it.
kbd, over 7 years ago
For simple tasks, curl into pup is very convenient.

https://github.com/ericchiang/pup
kopos, over 7 years ago
Scrapy [https://github.com/scrapy/scrapy] works really well.
vinitagr, over 7 years ago
https://github.com/matthewmueller/x-ray
Lxr, over 7 years ago
Python requests + lxml, with Selenium as a last resort.
bantersaurus, over 7 years ago
beautifulsoup
fazkan, over 7 years ago
scrapy and BS4 for serious stuff. Selenium for automating logins and other UI-related stuff; you can even play games with it.
kazinator, over 7 years ago
TXR: http://www.nongnu.org/txr
crispytx, over 7 years ago
I did a little web scraping project a few years ago using:

* cURL
* regex
thejosh, over 7 years ago
If you are scraping specific pages on a site, curl. Then transform that into the language you use.
cm2012, over 7 years ago
For non-developers dexi.io is great.
novaleaf, over 7 years ago
I wrote a tool: PhantomJsCloud.com

It's getting a little long in the tooth, but I will be updating it soon to use a Chrome-based renderer. If you have any suggestions, you can leave them here or PM me :)
aaronhoffman, over 7 years ago
This tool takes a list of URIs and crawls each site for contact info: phone, email, twitter, etc.

https://github.com/aaronhoffman/WebsiteContactHarvester
jpepinho, over 7 years ago
WebDriver.io using Selenium and PhantomJS would be a good way to go!
kzisme, over 7 years ago
So in general, what do most people use web scraping for? Is it building up their own database of things not available via an API, or something else? It always sounds interesting, but the need for it is what confuses me.
greyfox, over 7 years ago
I did a quick search and didn't see this listed here:

https://www.httrack.com/
etattva, over 7 years ago
Scrapy and Jsoup are the best combination.
tomc1985, over 7 years ago
Perl or Ruby and Regular Expressions
herbst, over 7 years ago
Nokogiri
vsupalov, over 7 years ago
That really depends on your project and tech stack. If you're into Python and are going to deal with relatively static HTML, then the Python modules Scrapy [1], BeautifulSoup [2] and the whole Python data-crunching ecosystem are at your disposal. There are lots of great posts about getting such a stack off the ground and using it in the wild [3]. It can get you pretty darn far, the architecture is solid, and there are lots of services and plugins which probably do everything you need.

Here's where I hit the limit with that setup: dynamic websites. If you're looking at something like Discourse-powered communities or similar, and don't feel a bit too lazy to dig into all the ways requests are expected to look, it's no fun anymore. Luckily, there's lots of JS goodness which can handle dynamic websites, inject your JavaScript for convenience and more [4].

The recently published Headless Chrome [5] and puppeteer [6] (a Node API for it) are really promising for many kinds of tasks - scraping among them. You can get a first impression in this article [7]. The ecosystem does not seem to be as mature yet, but I think this will be the foundation of the next go-to scraping tech stack.

If you want to try it yourself, I've written a brief intro [8] and published a simple dockerized development environment [9], so you can give it a go without cluttering your machine or having to find out what dependencies you need and how the libraries are called.

[1] https://scrapy.org/
[2] https://www.crummy.com/software/BeautifulSoup/bs4/doc/
[3] http://sangaline.com/post/advanced-web-scraping-tutorial/
[4] https://franciskim.co/dont-need-no-stinking-api-web-scraping-2016-beyond/
[5] https://developers.google.com/web/updates/2017/04/headless-chrome
[6] https://github.com/GoogleChrome/puppeteer
[7] https://blog.phantombuster.com/web-scraping-in-2017-headless-chrome-tips-tricks-4d6521d695e8
[8] https://vsupalov.com/headless-chrome-puppeteer-docker/
[9] https://github.com/vsupalov/docker-puppeteer-dev
21stio, over 7 years ago
golang
pwaai, over 7 years ago
Hey, I'm working on this thing called BAML (browser automation markup language) and it looks something like this:

    OPEN http://asdf.com
    CRAWL a
    EXTRACT {'title': '.title'}

It's meant to be super simple and built from the ground up to support crawling Single Page Applications.

I'm also creating a terminal client (early version: https://imgur.com/a/RYx5g) for it which will launch a Chrome browser and scrape everything. http://export.sh is still very early in the works; I'd appreciate any feedback (email in profile, contact form doesn't work).
dor_jack, over 7 years ago
If you need to perform a web-scale crawl I strongly recommend https://www.mixnode.com.