I just finished my talk at Mashed Library 2009 – an event for librarians who want to mash and mix their data. Judging by the backchannel, my talk was a bit overwhelming, so I thought I’d bang out a quick blog post to try to help those I managed to confuse.
My talk was entitled “Scraping, Scripting and Hacking your way to API-less data”, and was intended to give a high-level overview of some of the techniques you can use to “get at data” on the web when the “nice” options of feeds and APIs aren’t available to you.
The context of the talk was this: almost everything we’re talking about with regard to mashups, visualisations and so on relies on data being available to us. At the cutting edge of Web 2.0 apps, everything has an API, a feed, a developer community. In the world of museums, libraries and government, this just isn’t the case. Data is usually held on-page as HTML (XHTML if we’re lucky), and programmatic access is nowhere to be found. If we want to use that data, we need to find other ways to get at it.
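To make that concrete, here’s a minimal sketch of the crudest kind of screen-scraping: pulling structured records out of raw HTML with a regular expression. The markup below is a made-up stand-in for a museum catalogue page (in practice you’d fetch the real page with something like `urllib.request.urlopen`):

```python
import re

# A stand-in for HTML fetched from a museum catalogue page.
# In a real script you would download this with urllib.
html = """
<table>
  <tr><td class="title">Roman coin</td><td class="date">AD 120</td></tr>
  <tr><td class="title">Bronze brooch</td><td class="date">AD 250</td></tr>
</table>
"""

# Pull out each title/date pair with a regular expression.
# This is brittle -- any change to the page markup breaks it --
# but it illustrates the basic idea.
pattern = re.compile(r'<td class="title">(.*?)</td><td class="date">(.*?)</td>')
records = pattern.findall(html)

for title, date in records:
    print(f"{title} ({date})")
```

Fragile, yes, but when there’s no API this is often where you start, and the tools listed below are mostly more robust variations on this theme.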
My slides are here:
[slideshare id=1690990&doc=scrapingscriptinghacking-090707060418-phpapp02]
A few people asked that I provide the URLs I mentioned together with a bit of context. Many of the slides above have links to examples, but here’s a simple list for those who’d prefer that:
- http://hoard.it as an example of “intelligent scraping” being used to take on-page content and re-deliver it as “nicer” machine-accessible content
- Yahoo! Pipes being used to scrape segments of this page using the Fetch Page module
- Google Docs being used to scrape this page using the importHTML() function (see Tony Hirst’s excellent blog post for a better example)
- Dapper being used to scrape pages with this shape and display extracted data
- YQL being used to scrape this page and deliver search results into a REST query
- HTTrack for downloading entire websites, or sections of websites
- A visual RegEx tool for building and testing regular expressions
- HTML Tidy, a tool for cleaning up “bad” HTML, available both as a download and as a COM object for use in your scripts
- Using OpenCalais for natural-language text parsing (example form here)
- Yahoo! Term Extraction – example form here
- Yahoo! Geo
- Freedom of Information (example from Frankie Roberto), OCR (from me) and Amazon Mechanical Turk
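Several of the tools above (hoard.it, Pipes, Dapper, YQL) share the same pattern: scrape the on-page content, then re-deliver it in a “nicer” machine-accessible format such as JSON or RSS. Here’s a hedged sketch of that pattern using nothing but the Python standard library; the `class="record"` markup is a hypothetical stand-in for whatever the real page uses:

```python
import json
from html.parser import HTMLParser

# Hypothetical on-page markup, standing in for a fetched page.
PAGE = '<ul><li class="record">Roman coin</li><li class="record">Bronze brooch</li></ul>'

class RecordParser(HTMLParser):
    """Collects the text of every <li class="record"> element."""

    def __init__(self):
        super().__init__()
        self.records = []
        self._in_record = False

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs.
        if tag == "li" and ("class", "record") in attrs:
            self._in_record = True

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_record = False

    def handle_data(self, data):
        if self._in_record:
            self.records.append(data.strip())

parser = RecordParser()
parser.feed(PAGE)

# Re-deliver the scraped data as JSON -- the "machine-accessible" half.
print(json.dumps({"records": parser.records}))
```

Using a proper HTML parser rather than a regex makes this slightly less brittle, and once the data is out you can serve it up however you like.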
Phew. Now I can see why it was slightly overwhelming!