Screen scraping

Screen scraping is a technique in which a computer program extracts text data from the display output of another program, ignoring any binary data (usually images or other multimedia). The program doing the scraping is called a screen scraper. The key element that distinguishes screen scraping from regular parsing is that the output being scraped is usually neither documented nor structured, and was never intended for data transmission.

There are a number of synonyms for screen scraping, including data scraping, data extraction, web scraping, page scraping, web page wrapping, and HTML scraping (the last four being specific to scraping web pages).

Description

Normally, data transfer between programs is accomplished using data structures suited for automated processing by computers, not people. Such interchange formats and protocols are typically rigidly structured, well-documented, easily parsed, compact, and keep ambiguity and duplication to a minimum. Very often, these transmissions are not human-readable at all.

In contrast, output intended to be human-readable is often the antithesis of this, with display formatting, redundant labels, superfluous commentary, and other information which is either irrelevant or inimical to automated processing. However, when the only output available is such a human-friendly display, screen scraping becomes the only automated way of accomplishing a data transfer.

Originally, screen scraping referred to the practice of reading text data from a computer display terminal's screen. This was generally done by reading the terminal's memory through its auxiliary port, or by connecting the terminal output port of one computer system to an input port on another. By analogy, screen scraping has also come to mean computerized parsing of the HTML text in web pages. In all cases, the screen scraper has to be programmed to not only process the text data of interest, but also to recognize and discard unwanted data, images, and display formatting.

Screen scraping is most often done either (1) to interface to a legacy system which has no other mechanism compatible with current hardware, or (2) to interface to a third-party system which does not provide a more sophisticated API. In the second case, the operator of the third-party system may even see screen scraping as unwelcome, for reasons such as increased system load, the loss of advertising revenue, or the loss of control over the information content.

Screen scraping is generally considered an ad-hoc, inelegant technique, often used only as a "last resort" when no other mechanism is available. Aside from the higher programming and processing overhead, output displays intended for human consumption often change structure frequently. Humans can cope with this easily, but computer programs will often crash or produce incorrect results.

Screen scraping generally requires intensive text parsing algorithms. Computer languages that have strong support for regular expressions and other text processing are thus a popular choice for writing screen scraping programs.
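The following is a minimal sketch of such regex-driven extraction, written in Python purely for illustration; the report text, labels, and field names are all invented:

    import re

    # Hypothetical human-oriented report text, as it might appear on a screen
    # or printout; the layout and labels exist for readers, not for programs.
    report = """
    ACCOUNT SUMMARY                    PAGE 1 OF 1
    Customer: ACME CORP        Account No: 00412-7
    Balance as of 2006-06-30:      $12,405.17
    """

    # A regular expression picks out just the fields of interest and discards
    # the headings, labels, and page decorations around them.
    pattern = re.compile(
        r"Account No:\s*(?P<account>[\d-]+).*?"
        r"Balance as of (?P<date>\d{4}-\d{2}-\d{2}):\s*\$(?P<balance>[\d,.]+)",
        re.DOTALL,
    )

    match = pattern.search(report)
    if match:
        record = {
            "account": match.group("account"),
            "date": match.group("date"),
            "balance": float(match.group("balance").replace(",", "")),
        }
        print(record)  # {'account': '00412-7', 'date': '2006-06-30', 'balance': 12405.17}

The same approach carries over directly to Perl or any other language with strong regular expression support.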

Web scraping

Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human consumption, and frequently mix content with presentation. Thus, screen scrapers were reborn in the web era to extract machine-friendly data from HTML and other markup. Even general-purpose search engines and other web crawlers use many techniques in the same vein as web scraping.
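A minimal sketch of such HTML extraction, using only the Python standard library for illustration, might look like the following; the markup fragment, class names, and product data are invented:

    from html.parser import HTMLParser

    # A fragment of markup as a browser might receive it; the structure and
    # class names here are hypothetical.
    page = """
    <table>
      <tr><td class="name">Widget</td><td class="price">$9.99</td></tr>
      <tr><td class="name">Gadget</td><td class="price">$24.50</td></tr>
    </table>
    """

    class ProductScraper(HTMLParser):
        """Collects (name, price) pairs by tracking which <td> is open."""

        def __init__(self):
            super().__init__()
            self.current = None   # class attribute of the <td> currently open
            self.row = {}
            self.products = []

        def handle_starttag(self, tag, attrs):
            if tag == "td":
                self.current = dict(attrs).get("class")

        def handle_data(self, data):
            if self.current in ("name", "price"):
                self.row[self.current] = data.strip()

        def handle_endtag(self, tag):
            if tag == "td":
                self.current = None
            elif tag == "tr" and self.row:
                self.products.append((self.row.get("name"), self.row.get("price")))
                self.row = {}

    scraper = ProductScraper()
    scraper.feed(page)
    print(scraper.products)  # [('Widget', '$9.99'), ('Gadget', '$24.50')]

In practice, dedicated HTML parsing libraries are usually preferred, since real-world pages are frequently malformed and their structure can change without notice.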

With the prevalence of web scraping, many website owners have begun developing anti-screen-scraping techniques. These include blocking individual IP addresses and entire address ranges, which stops the majority of "cookie cutter" screen scraping applications.
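A simple sketch of such address blocking, assuming a Python application that can see the client's IP address (the addresses and ranges below are placeholder values taken from documentation ranges):

    import ipaddress

    # A hypothetical block list mixing individual addresses and CIDR ranges.
    # Real deployments usually maintain such rules in a firewall or web
    # server configuration rather than in application code.
    BLOCKED = [
        ipaddress.ip_network("203.0.113.0/24"),   # an entire range
        ipaddress.ip_network("198.51.100.7/32"),  # a single address
    ]

    def is_blocked(client_ip: str) -> bool:
        """Return True if the client address falls inside any blocked range."""
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in BLOCKED)

    print(is_blocked("203.0.113.42"))  # True  (inside the /24 range)
    print(is_blocked("192.0.2.10"))    # False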

Scraping by design

The emergence of XML and web services has lent itself to the creation of technologies that improve the process of extracting machine-friendly data from web pages. Indeed, an explicit goal of the Semantic Web project is to enable the creation of documents which are easily read by both humans and machines. While this is seen as less efficient in terms of computer resources, it is asserted that computer technology has advanced to the point where such efficiency arguments are no longer a primary concern.

Extracting data from a web page or service explicitly designed to be machine-readable differs somewhat from the traditional meaning of screen scraping, which implies a preferred mechanism is not available. However, the techniques used in traditional web scraping are so similar that the same tools are often usable in both situations.

Examples

As a concrete example of a classic screen scraper, consider a hypothetical legacy system dating from the 1960s, the dawn of computerized data processing. Computer-to-user interfaces from that era were often simply text-based dumb terminals which were not much more than virtual teleprinters. (Such systems are still in use today, for various reasons.) The desire to interface such a system to more modern systems is common. An elegant solution will often require things no longer available, such as source code, system documentation, APIs, or programmers with experience in a 45-year-old computer system. In such cases, the only feasible solution may be to write a screen scraper which "pretends" to be a user at a terminal. The screen scraper might connect to the legacy system via Telnet, emulate the keystrokes needed to navigate the old user interface, process the resulting display output, extract the desired data, and pass it on to the modern system.
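A minimal sketch of such a terminal-emulating scraper, written in Python with the standard telnetlib module (deprecated in recent Python versions and removed in Python 3.13), might look like the following; the host name, prompts, menu keystrokes, and screen layout are all hypothetical:

    import re
    import telnetlib  # deprecated in Python 3.11 and removed in 3.13

    # All details below -- host, login prompts, menu keystrokes, and screen
    # layout -- are hypothetical; a real scraper must mirror the specific
    # legacy application it is automating.
    HOST = "legacy.example.com"

    tn = telnetlib.Telnet(HOST)

    # Emulate the keystrokes a human operator would type at the terminal.
    tn.read_until(b"Username: ")
    tn.write(b"batchuser\n")
    tn.read_until(b"Password: ")
    tn.write(b"secret\n")
    tn.read_until(b"MAIN MENU")
    tn.write(b"3\n")          # hypothetical menu option: account inquiry
    tn.write(b"00412-7\n")    # hypothetical account number

    # Capture the resulting display output and pull out the one field of
    # interest, discarding the rest of the screen.
    screen = tn.read_until(b"PRESS ENTER TO CONTINUE", timeout=10).decode("ascii", "replace")
    match = re.search(r"CURRENT BALANCE:\s*\$([\d,.]+)", screen)
    balance = match.group(1) if match else None
    print(balance)

    tn.write(b"\n")
    tn.close()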

Modern web scrapers are much easier to find. For example, there are numerous programs and utilities which query commercial web sites (e.g., Froogle) to get product information and display it out of the context of the commercial service. Such usage is also an example of why some website operators see web scraping as undesirable. A popular method of protecting a site from web scraping is the use of CAPTCHAs, which attempt to block automated access to a website.

Implementations

The Perl language, and modules from the Comprehensive Perl Archive Network, contain many features suitable for screen scraping, some purpose-built for it.

Microsoft has built into its web services implementation the ability to create a web service which extracts its data from a web page, with the help of an extension to the WSDL standard and the use of regular expressions.

The PHP programming language has also developed in directions suited to creating web scraping applications. The release of PHP 5 included many new XML and DOM additions, including functions that parse badly formed HTML documents into DOM trees and allow them to be worked on as if they were well-formed XML.

Java offers support for web scraping techniques by leveraging the W3C's XQuery specification.

Scroogle is a screen scraping proxy that allows users to perform Google searches without receiving Google advertisements.

Dapper is a web-based GUI tool for extracting content from any website.

Many Greasemonkey or Opera user scripts work by interpreting and adapting website code.

There are also several implementations that aim to provide user-friendly wrappers of web scraping technologies, like iMacros or The Easy Bee.
