Distributed web crawling
Distributed web crawling is a distributed computing technique whereby Internet search engines employ many computers to index the Internet via web crawling. The idea is to spread the required computation and bandwidth across many computers and networks.
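How the URL space is divided among the participating machines varies by system; one common and simple scheme is to hash each URL's hostname so that every machine owns a fixed slice of the Web. The sketch below illustrates that idea in Python. The number of crawler machines and the hash-by-hostname rule are assumptions made for the example, not a description of any particular engine.

    import hashlib
    from urllib.parse import urlparse

    NUM_CRAWLERS = 4  # hypothetical number of crawler machines

    def assign_crawler(url: str) -> int:
        # Hash the hostname rather than the full URL so that all pages of a
        # site land on the same machine; per-site politeness (rate limiting)
        # then stays local to one crawler.
        host = urlparse(url).netloc.lower()
        digest = hashlib.sha1(host.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_CRAWLERS

    for u in ("http://example.com/a", "http://example.com/b", "http://example.org/"):
        print(u, "-> crawler", assign_crawler(u))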
Implementations
As of 2003, most modern commercial search engines use this technique. Google uses thousands of individual computers in multiple locations to crawl the Web.
Newer projects are attempting to use a less structured, more ad hoc form of collaboration by enlisting volunteers to join the effort using, in many cases, their home or personal computers. LookSmart is the largest search engine to use this technique, through its Grub distributed web-crawling project.
In this approach, volunteer computers connected to the Internet crawl Internet addresses in the background. The downloaded web pages are compressed and sent back, together with a status flag (e.g. changed, new, down, redirected), to powerful central servers. The servers, which manage a large database of URLs, send out new URLs to the clients for testing.
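The exact protocol used by Grub's clients is not documented here, so the following Python sketch only illustrates the general loop described above: request a batch of URLs from a central server, fetch each page, compress it, and upload it with a status flag. The coordinator address, its endpoints, and the message format are hypothetical placeholders.

    import zlib
    import requests  # third-party HTTP client: pip install requests

    # Hypothetical address of the central servers.
    COORDINATOR = "http://coordinator.example.net"

    def crawl_once() -> None:
        # 1. Ask the central servers for a batch of URLs to test.
        urls = requests.get(f"{COORDINATOR}/urls", timeout=30).json()

        results = []
        for url in urls:
            try:
                resp = requests.get(url, timeout=30, allow_redirects=False)
                if resp.is_redirect:
                    status, body = "redirected", b""
                elif resp.ok:
                    # Telling "changed" apart from "new" needs state held by
                    # the central database, so this client just reports the fetch.
                    status, body = "fetched", resp.content
                else:
                    status, body = "down", b""
            except requests.RequestException:
                status, body = "down", b""

            # 2. Compress the page and pair it with its status flag.
            results.append({
                "url": url,
                "status": status,
                "content": zlib.compress(body).hex(),
            })

        # 3. Upload the batch; the servers update their database and hand out
        #    fresh URLs on the next request.
        requests.post(f"{COORDINATOR}/results", json=results, timeout=30)

    if __name__ == "__main__":
        crawl_once()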
Many of the people behind Grub, including founding members, appear to have left the project. As a consequence, bugs are not being fixed, and even after four years the project does not offer any way to search the crawled results.
Drawbacks
According to the FAQ of Nutch, an open-source search engine, the bandwidth savings from distributed web crawling are not significant, since "A successful search engine requires more bandwidth to upload query result pages than its crawler needs to download pages...".
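As a rough illustration of the comparison being made, the snippet below contrasts the bandwidth a crawler spends downloading pages with the bandwidth a search engine spends uploading result pages; every figure is a hypothetical placeholder, not a measurement.

    # Purely hypothetical figures, only to illustrate the FAQ's comparison
    # between crawl traffic and query-serving traffic.
    pages_crawled_per_day = 10_000_000
    avg_page_kb = 20                 # assumed average size of a fetched page

    queries_per_day = 50_000_000
    avg_result_page_kb = 10          # assumed size of one result page served

    crawl_download_gb = pages_crawled_per_day * avg_page_kb / 1_000_000
    result_upload_gb = queries_per_day * avg_result_page_kb / 1_000_000

    print(f"crawler download:    ~{crawl_download_gb:,.0f} GB/day")
    print(f"query-result upload: ~{result_upload_gb:,.0f} GB/day")

With these placeholder numbers the query-serving traffic already exceeds the crawl traffic, which is the point the FAQ is making: distributing the crawler alone does not remove the dominant bandwidth cost.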
See also
- Distributed computing
- Web crawler
- YaCy - a peer-to-peer web search engine with distributed crawling