Screen scraping
Screen scraping is a technique in which a computer program extracts data from the display output of another program. The program doing the scraping is called a screen scraper. The key element that distinguishes screen scraping from regular parsing is that the output being scraped was intended for final display to a human user, rather than as input to another program, and is therefore usually neither documented nor structured for convenient parsing. Screen scraping often involves ignoring binary data (usually images or multimedia data) and formatting elements that obscure the essential, desired text data. Optical character recognition software is a kind of visual scraper.
Synonyms for screen scraping include data scraping, data extraction, web scraping, page scraping, web page wrapping, and HTML scraping (the last four being specific to scraping web pages).
Description
Normally, data transfer between programs is accomplished using data structures suited for automated processing by computers, not people. Such interchange formats and protocols are typically rigidly structured, well documented, easily parsed, and compact, and they keep ambiguity and duplication to a minimum. Very often, these transmissions are not human-readable at all.
In contrast, output intended to be human-readable is often the antithesis of this, with display formatting, redundant labels, superfluous commentary, and other information which is either irrelevant or inimical to automated processing. However, when the only output available is such a human-oriented display, screen scraping becomes the only automated way of accomplishing a data transfer.
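To illustrate the contrast, consider the same information travelling between programs as a compact structured record versus being shown to a human with labels and decoration (a minimal sketch in Python; the account record and the report layout are invented for this example):

import json

# A machine-oriented interchange record: compact, unambiguous, trivially parsed.
record = {"account": "1234-5678", "balance": 1523.75, "currency": "USD"}
print(json.dumps(record))
# {"account": "1234-5678", "balance": 1523.75, "currency": "USD"}

# The same information as a human-oriented display: labels, padding and
# decoration that a screen scraper would have to recognize and strip away.
print("*" * 40)
print(" ACCOUNT SUMMARY ".center(40, "*"))
print(f"Account number ....: {record['account']}")
print(f"Current balance ...: {record['balance']:,.2f} {record['currency']}")
print("*" * 40)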
Originally, screen scraping referred to the practice of reading text data from a computer display terminal's screen. This was generally done by reading the terminal's memory through its auxiliary port, or by connecting the terminal output port of one computer system to an input port on another. By analogy, screen scraping has also come to mean computerized parsing of the HTML text in web pages. In all cases, the screen scraper has to be programmed to not only process the text data of interest, but also to recognize and discard unwanted data, images, and display formatting.
Screen scraping is most often done either (1) to interface to a legacy system that has no other mechanism compatible with current hardware, or (2) to interface to a third-party system that does not provide a more convenient API. In the second case, the operator of the third-party system may even see screen scraping as unwanted, for reasons such as increased system load, the loss of advertisement revenue, or the loss of control over the information content.
Screen scraping is generally considered an ad hoc, inelegant technique, often used only as a "last resort" when no other mechanism is available. Aside from the higher programming and processing overhead, output displays intended for human consumption often change structure frequently. Humans can cope with this easily, but computer programs will often crash or produce incorrect results.
Screen scraping generally requires intensive text parsing algorithms. Computer languages that have strong support for regular expressions and other text processing are thus a popular choice for writing screen scraping programs.
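For instance, a scraper written in a language with strong regular-expression support can pull the wanted fields out of a human-oriented report line in a few statements (a minimal sketch in Python; the report line format is hypothetical):

import re

# A line as it might appear on a human-oriented report screen.
line = "CUST: 00421  NAME: SMITH, JOHN        BALANCE:  $1,523.75"

# One regular expression recognizes the wanted fields while ignoring the
# labels, padding and currency formatting around them.
pattern = re.compile(
    r"CUST:\s*(?P<id>\d+)\s+NAME:\s*(?P<name>.+?)\s+BALANCE:\s*\$(?P<balance>[\d,]+\.\d{2})"
)
match = pattern.search(line)
if match:
    customer_id = int(match.group("id"))
    name = match.group("name").strip()
    balance = float(match.group("balance").replace(",", ""))
    print(customer_id, name, balance)  # 421 SMITH, JOHN 1523.75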
Web scraping
Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human consumption, and frequently mix content with presentation. Thus, screen scrapers were reborn in the web era to extract machine-friendly data from HTML and other markup. Even general-purpose search engines and other web crawlers use many techniques in the same vein as web scraping.
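As a rough illustration (the markup fragment is invented), a web scraper walks markup that was written for browsers and keeps only the items of interest, here using Python's standard-library html.parser to collect link targets while discarding the presentation:

from html.parser import HTMLParser

# A fragment of markup written for human readers: content mixed with presentation.
page = """
<div class="product"><b>Widget</b> <span style="color:red">$9.99</span>
<a href="/buy/widget">Buy now!</a></div>
"""

class LinkExtractor(HTMLParser):
    """Collects href attributes while ignoring all formatting markup."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/buy/widget']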
Scraping by design: towards the Semantic Web
The emergence of XML and web services has encouraged the creation of technologies that make it easier to extract machine-friendly data from web pages. Indeed, an explicit goal of the Semantic Web project is to enable the creation of documents that are easily read by both humans and machines. While this approach is seen as less efficient in terms of computer resources, it is asserted that computer technology has advanced to the point where such efficiency arguments are no longer a primary concern.
Extracting data from a web page or service explicitly designed to be machine-readable differs somewhat from the traditional meaning of screen scraping, which implies a preferred mechanism is not available. However, the techniques used in traditional web scraping are so similar that the same tools are often usable in both situations.
Screen scraping has thus recently taken on a new dimension with tools such as Piggy Bank, part of the SIMILE project run jointly by W3C and MIT. The purpose of such technologies is to give the Internet community tools that increase the interoperability of disparate digital resources by adding a new semantic layer to online information. Some of these tools rely on user-designed scrapers; others analyze the data structure of web pages and store the resulting structures and annotations as metadata, sometimes publishing them back online as shared repositories that link to the original sources.
Tools such as Kapow RoboMaker and the web-based Dapper enable wrappers to be created for all kinds of web sites, so that data can be harvested from them and converted to XML. More advanced tools such as EasyWrap Mashup Studio automate the creation of web wrappers and even allow the creation of RESTful APIs for accessing web sites programmatically.
Technical measures to stop scraping
With the prevalence of web scraping, many website owners have begun using anti-scraping techniques; see Web scraping.
Examples
As a concrete example of a classic screen scraper, consider a hypothetical legacy system dating from the 1960s, the dawn of computerized data processing. Computer-to-user interfaces from that era were often simply text-based dumb terminals which were not much more than virtual teleprinters. (Such systems are still in use today, for various reasons.) The desire to interface such a system to more modern systems is common. An elegant solution will often require things no longer available, such as source code, system documentation, APIs, or programmers with experience in a 45-year-old computer system. In such cases, the only feasible solution may be to write a screen scraper which "pretends" to be a user at a terminal. The screen scraper might connect to the legacy system via Telnet, emulate the keystrokes needed to navigate the old user interface, process the resulting display output, extract the desired data, and pass it on to the modern system.
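A rough sketch of such a scraper in Python follows; the host name, login prompts, menu keystrokes and report layout are all hypothetical, and the standard-library telnetlib module used for the connection has been deprecated in recent Python releases:

import re
import telnetlib

# Connect to the legacy host exactly as a terminal user would (hypothetical host).
tn = telnetlib.Telnet("legacy.example.com", 23, timeout=10)

# Navigate the old text-based UI by sending the keystrokes an operator would type.
tn.read_until(b"login: ")
tn.write(b"operator\n")
tn.read_until(b"Password: ")
tn.write(b"secret\n")
tn.read_until(b"MAIN MENU")
tn.write(b"3\n")  # hypothetical menu option: "3 - account report"

# Capture the resulting screen output and scrape the wanted fields out of it.
screen = tn.read_until(b"END OF REPORT", timeout=10).decode("ascii", "replace")
tn.close()

for match in re.finditer(r"ACCT\s+(\d+)\s+BAL\s+([\d.]+)", screen):
    account, balance = match.group(1), float(match.group(2))
    print(account, balance)  # hand the extracted data to the modern system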
Modern web scrapers are much easier to find. For example, there are numerous programs and utilities which query commercial web sites (e.g., Google Product Search) to retrieve product information and display it outside the context of the commercial service. Such usage is also an example of why some website operators see web scraping as undesirable. A popular method of protecting a site from being scraped is the use of a CAPTCHA, which attempts to block automated access to the website.
Implementations
The Perl language, and modules from the Comprehensive Perl Archive Network, contain many features suitable for screen scraping, some purpose-built for it.
Microsoft's implementation of web services includes the ability to create a web service that extracts its data from a web page, with the help of an extension to the WSDL standard and the use of regular expressions.
The PHP programming language is well suited to creating web scraping applications. The release of PHP 5 added many new XML and DOM features, including functions that parse badly formed HTML documents into DOM trees and operate on them as if they were well-formed XML.
Java offers support for web scraping techniques by leveraging the W3C's XQuery specification.
Python and Ruby also have libraries for web scraping.
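For example, a few lines of Python can turn an HTML table meant for display into structured values (a minimal sketch; the markup is invented, and the third-party Beautiful Soup library used here is only one of several options):

from bs4 import BeautifulSoup  # third-party library: pip install beautifulsoup4

page = """
<table>
  <tr><td class="name">Widget</td><td class="price">$9.99</td></tr>
  <tr><td class="name">Gadget</td><td class="price">$24.50</td></tr>
</table>
"""

soup = BeautifulSoup(page, "html.parser")
for row in soup.find_all("tr"):
    name = row.find("td", class_="name").get_text()
    price = row.find("td", class_="price").get_text()
    print(name, price)
# Widget $9.99
# Gadget $24.50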
Scroogle is a screen scraping proxy that allows users to perform Google searches without receiving Google advertisements.
Many Greasemonkey or Opera user scripts work by interpreting and adapting website code.
The OutWit platform is a Web Collection Engine and development platform for Web automation. A library of recognition and extraction functions (the OutWit Kernel) is available as a Firefox extension for use in specific collection applications.
In Unix-like environments, a web page can be rendered to plain text for further processing with, for example:
$ lynx -dump URL
$ w3m -dump URL
External links
- PHP & cURL Screen Scraping Tutorials
- PHP scraping - Web site about web scraping using PHP
- Data extraction for Web 2.0: Screen scraping in Ruby/Rails - Article about web scraping using Ruby
- Screen-scraping with WWW::Mechanize - Article about web scraping using Perl
- How to write screen scrapers - Article on writing JavaScript-based screen scrapers
- Creating XML Web Services That Parse the Contents of a Web Page - Microsoft MSDN article
- Three common methods for data extraction - Article from a blog about screen scraping
- FEAR-less Site Scraping - An article about how to do screen scraping using FEAR::API
- Web scraping with Java - Article about web scraping using the Java programming language (requires commercial library)
- Web scraping with PHP and Tcl - Articles about web scraping using PHP and Tcl
- TTSS - Rapid implementation of scanning systems; innovators in airline and tour operator scanning systems since 1991
- Techreform - web scraping - A commercial provider of web scraping services based in the United Kingdom
- OutWit Technologies - Publishers of a Web Collection Engine for Firefox
- Piggy Bank - A joint project by W3C and MIT