TechEcho
A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Web scraping with Ruby

55 points by hecticjeff over 10 years ago

10 comments

boie0025 over 10 years ago
I had to write scrapers in Ruby for a very large application that scraped all kinds of government information from various states. We found (after a lot of pain working with very procedural scrapers) that a modified producer/consumer pattern worked well. We found that making classes for the producers (they were classes that described each page to be scraped, with methods that matched the modeled data) allowed for easy maintenance. We then created consumers that could be passed any of the page-specific producer classes, and knew how to persist the scraped data.

Once I had a good pattern in place I could easily create subclasses of the data type I was trying to scrape, basically pointing each of the modeled data methods to an XPath that was specific to that page.
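A minimal, self-contained sketch of the producer/consumer split described above (the class and field names are hypothetical, not from the commenter's codebase). In a real scraper the producer methods would run XPath queries via something like Nokogiri; here the "document" is a plain Hash keyed by XPath strings so the structure is visible without a network dependency.

```ruby
# Base producer: describes a page, one method per piece of modeled data.
class PageProducer
  def initialize(document)
    @doc = document
  end

  # Subclasses override these to point at page-specific selectors.
  def agency_name
    raise NotImplementedError
  end

  def filing_date
    raise NotImplementedError
  end
end

# One subclass per page layout: only the selectors change.
class StateAPageProducer < PageProducer
  def agency_name
    @doc["//div[@id='agency']"]
  end

  def filing_date
    @doc["//span[@class='date']"]
  end
end

# The consumer accepts any producer; it only knows the shared
# interface and how to persist what it reads.
class RecordConsumer
  def initialize(store)
    @store = store
  end

  def consume(producer)
    @store << { agency: producer.agency_name, date: producer.filing_date }
  end
end

# Usage: the "document" stands in for a parsed page.
store = []
doc = { "//div[@id='agency']" => "Dept. of Records",
        "//span[@class='date']" => "2015-01-20" }
RecordConsumer.new(store).consume(StateAPageProducer.new(doc))
```

The payoff is the one the commenter describes: adding a new state means adding one small subclass with new selectors, while the persistence side never changes.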
Doctor_Fegg over 10 years ago
I'd suggest going with Mechanize from the off - not just, as the article says, "[when] the site you're scraping requires you to login first, for those instances I recommend looking into mechanize".

Mechanize allows you to write clean, efficient scraper code without all the boilerplate. It's the nicest scraping solution I've yet encountered.
wnm over 10 years ago
I recommend having a look at Capybara [0]. It is built on top of Nokogiri, and is actually a tool to write acceptance tests. But it can also be used for web scraping: you can open websites, click on links, fill in forms, find elements on a page (via XPath or CSS), get their values, etc. I prefer it over Nokogiri because of its nice DSL and good documentation [1]. It can also execute JavaScript, which is sometimes handy for scraping.

I've spent a lot of time working on web scrapers for two of my projects, http://themescroller.com (dead) and http://www.remoteworknewsletter.com, and I think the holy grail is to build a Rails app around your scraper. You can write your scrapers as libs, and then make them executable as rake tasks, or even cron jobs. And because it's a Rails app you can save all scraped data as actual models and have them persisted in a database. With Rails it's also super easy to build an API around your data, or build a quick backend for it via Rails scaffolds.

[0] https://github.com/jnicklas/capybara
[1] http://www.rubydoc.info/github/jnicklas/capybara/
joshmn over 10 years ago
I always see people using something like HTTParty or open-uri for pulling down the page. My preferred (by far) is Typhoeus, as it supports parallel requests and wraps around libcurl.

https://github.com/typhoeus/typhoeus
jstoiko over 10 years ago
I'd suggest taking a look at Scrapy (http://scrapy.org). It is built on top of Twisted (asynchronous) and uses XPath, which makes your "scraping" code a lot more readable.
pkmishra over 10 years ago
Scraping is generally easy, but the challenges come when you are scraping large amounts of unstructured data and need to respond to page changes proactively. Scrapy is very good. I couldn't find a similar tool in Ruby though.
k__ over 10 years ago
Can anyone list some good resources about scraping, with gotchas etc.?
programminggeek over 10 years ago
Why not just use something like Watir or Selenium?
richardpetersen over 10 years ago
How do you get the script to save the JSON file?
mychaelangelo over 10 years ago
Thanks for sharing this - a great scraping intro for us newbies (I'm new to Ruby and RoR).