Apache Nutch


Apache Nutch is a highly extensible and scalable open-source web crawler software project.

Features

Nutch is written entirely in Java, but data is stored in language-independent formats. It has a highly modular architecture, allowing developers to create plug-ins for media-type parsing, data retrieval, querying, and clustering.
The fetcher was written from scratch specifically for this project.
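The plug-in architecture is declarative: a plug-in ships a descriptor that binds an implementation class to one of Nutch's extension points, such as the parser interface. A minimal sketch of such a descriptor, using hypothetical plug-in ids and class names, might look like:

```xml
<!-- plugin.xml: sketch of a Nutch plug-in descriptor; ids, jar, and
     class names below are hypothetical, only the extension point is real -->
<plugin id="parse-example" name="Example Parser Plug-in" version="1.0.0">
  <runtime>
    <!-- jar containing the plug-in's classes -->
    <library name="parse-example.jar">
      <export name="*"/>
    </library>
  </runtime>
  <!-- bind our class to Nutch's parser extension point -->
  <extension id="org.example.nutch.parse"
             name="Example Parser"
             point="org.apache.nutch.parse.Parser">
    <implementation id="ExampleParser"
                    class="org.example.nutch.parse.ExampleParser"/>
  </extension>
</plugin>
```

Nutch discovers plug-ins by scanning for such descriptors at startup; which plug-ins are actually activated is controlled by the plugin.includes property in the configuration.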

History

Nutch originated with Doug Cutting, creator of both Lucene and Hadoop, and Mike Cafarella.
In June 2003, a successful 100-million-page demonstration system was developed. To meet the multi-machine processing needs of the crawl and index tasks, the Nutch project also implemented a MapReduce facility and a distributed file system. The two facilities were later spun out into their own subproject, Hadoop.
In January 2005, Nutch joined the Apache Incubator, from which it graduated to become a subproject of Lucene in June of that same year. Since April 2010, Nutch has been an independent, top-level project of the Apache Software Foundation.
In February 2014, the Common Crawl project adopted Nutch for its open, large-scale web crawl.
Although building a global large-scale web search engine was once a goal of the Nutch project, that is no longer the case.

Release history

Scalability

IBM Research studied the performance of Nutch/Lucene as part of its Commercial Scale Out project. It found that a scale-out system such as Nutch/Lucene could achieve a level of performance on a cluster of blades that was not achievable on any scale-up computer such as the POWER5.
The ClueWeb09 dataset was gathered using Nutch, with an average speed of 755.31 documents per second.

Related projects