Introduction
Apache Nutch is an open source Web crawler written in Java. With it we can find Web page hyperlinks in an automated manner, reduce a lot of maintenance work (for example, checking for broken links), and create a copy of all the visited pages for searching over. That is where Apache Solr comes in: Solr is an open source full-text search framework, and with it we can search the pages Nutch has visited. Luckily, integration between Nutch and Solr is fairly straightforward, as explained below.
Apache Nutch supports Solr out of the box, which greatly simplifies Nutch-Solr integration. It also removes the legacy dependence on Apache Tomcat for running the old Nutch Web Application and on Apache Lucene for indexing. Just download a binary release from the Apache Nutch download page.
Steps
This tutorial describes the installation and use of Nutch 1.x (the current release is 1.7). For how to compile and set up Nutch 2.x with HBase, see Nutch2Tutorial.
1. Setup Nutch from binary distribution
Download a binary package (apache-nutch-1.X-bin.zip) from the Apache Nutch download page.
Unzip your binary Nutch package. There should be a folder apache-nutch-1.X.
cd apache-nutch-1.X/
From now on, we are going to use ${NUTCH_RUNTIME_HOME} to refer to the current directory (apache-nutch-1.X/).
Set up from the source distribution
Advanced users may also use the source distribution:
- Download a source package (apache-nutch-1.X-src.zip)
- Unzip it
- cd apache-nutch-1.X/
- Run ant in this folder (cf. RunNutchInEclipse)
- Now there is a directory runtime/local which contains a ready-to-use Nutch installation.

When the source distribution is used, ${NUTCH_RUNTIME_HOME} refers to apache-nutch-1.X/runtime/local/. Note that:
- config files should be modified in apache-nutch-1.X/runtime/local/conf/
- ant clean will remove this directory (keep copies of modified config files)
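For reference, the whole source setup might look like the following sequence (a minimal sketch; the exact archive name depends on the release you downloaded):

unzip apache-nutch-1.X-src.zip
cd apache-nutch-1.X/
ant
# the ready-to-use installation now lives in runtime/local
cd runtime/local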
2. Verify your Nutch installation
run "bin/nutch" - You can confirm a correct installation if you seeing similar to the following:
Usage: nutch COMMAND
where COMMAND is one of:
  crawl             one-step crawler for intranets (DEPRECATED)
  readdb            read / dump crawl db
  mergedb           merge crawldb-s, with optional filtering
  readlinkdb        read / dump link db
  inject            inject new urls into the database
  generate          generate new segments to fetch from crawl db
  freegen           generate new segments to fetch from text files
  fetch             fetch a segment's pages
Some troubleshooting tips:
- Run the following command if you are seeing "Permission denied":
chmod +x bin/nutch
- Set up JAVA_HOME if you see a "JAVA_HOME not set" error. On Mac, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
On Debian or Ubuntu, you can run the following command or add it to ~/.bashrc:
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
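To confirm the setting took effect, a quick sanity check in the same shell is (not required by Nutch itself, just a convenience):

echo $JAVA_HOME
java -version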
3. Crawl your first website
Add your agent name in the value field of the http.agent.name property in conf/nutch-site.xml, for example:
<property>
 <name>http.agent.name</name>
 <value>My Nutch Spider</value>
</property>
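For orientation, a minimal conf/nutch-site.xml containing only this property might look like the sketch below (any further properties you need can be added alongside it):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- identify your crawler to the sites it visits -->
  <property>
    <name>http.agent.name</name>
    <value>My Nutch Spider</value>
  </property>
</configuration>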
mkdir -p urls
cd urls
touch seed.txt
This creates a text file seed.txt under urls/. Fill it with the following content (one URL per line for each site you want Nutch to crawl):
http://nutch.apache.org/
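Equivalently, the seed file can be created in one step from ${NUTCH_RUNTIME_HOME} (a convenience sketch; adjust the URL list to the sites you want to crawl):

mkdir -p urls
echo "http://nutch.apache.org/" > urls/seed.txt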
Edit the file conf/regex-urlfilter.txt and replace
# accept anything else
+.
with a regular expression matching the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.apache.org domain, the line should read:
+^http://([a-z0-9]*\.)*nutch.apache.org/
This will include any URL in the domain nutch.apache.org.
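If you want to allow more than one domain, you can list several accept rules, one per line. For example (an illustrative sketch; example.org is a hypothetical second domain):

# accept URLs from the two domains below
+^http://([a-z0-9]*\.)*nutch.apache.org/
+^http://([a-z0-9]*\.)*example.org/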
3.1 Using the Crawl Command
The crawl command is deprecated. Please see section 3.3 on how to use the crawl script that is intended to replace the crawl command.
Now we are ready to initiate a crawl. The crawl command takes the following parameters:
-dir dir names the directory to put the crawl in.
-threads threads determines the number of threads that will fetch in parallel.
-depth depth indicates the link depth from the root page that should be crawled.
-topN N determines the maximum number of pages that will be retrieved at each level up to the depth.
- Run the following command:
bin/nutch crawl urls -dir crawl -depth 3 -topN 5
- Now you should be able to see the following directories created:
crawl/crawldb
crawl/linkdb
crawl/segments
NOTE: If you have a Solr core already set up and wish to index to it, you are required to add the -solr <solrUrl> parameter to your crawl command e.g.
bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5
If not, please skip ahead to section 4 below for how to set up your Solr instance and index your crawl data.
Typically one starts testing one's configuration by crawling at shallow depths, sharply limiting the number of pages fetched at each level (-topN), and watching the output to check that desired pages are fetched and undesirable pages are not. Once one is confident of the configuration, then an appropriate depth for a full crawl is around 10. The number of pages per level (-topN) for a full crawl can be from tens of thousands to millions, depending on your resources.
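For example, a first test run at a shallow depth might look like this (a sketch using the same deprecated crawl command as above; crawl_test is an arbitrary directory name):

bin/nutch crawl urls -dir crawl_test -depth 1 -topN 10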
3.2 Using Individual Commands for Whole-Web Crawling
NOTE: If you previously modified the file conf/regex-urlfilter.txt as covered in section 3 above, you will need to change it back.
Whole-Web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines. It also permits more control over the crawl process, as well as incremental crawling. It is important to note that whole-Web crawling does not necessarily mean crawling the entire World Wide Web: we can limit a whole-Web crawl to just the list of URLs we want to crawl. This is done by using a filter, just like the one we used for the crawl command above.
Step-by-Step: Concepts
Nutch data is composed of:
- The crawl database, or crawldb. This contains information about every URL known to Nutch, including whether it was fetched, and, if so, when.
- The link database, or linkdb. This contains the list of known links to each URL, including both the source URL and anchor text of the link.
- A set of segments. Each segment is a set of URLs that are fetched as a unit. Segments are directories with the following subdirectories:
  - a crawl_generate names a set of URLs to be fetched
  - a crawl_fetch contains the status of fetching each URL
  - a content contains the raw content retrieved from each URL
  - a parse_text contains the parsed text of each URL
  - a parse_data contains outlinks and metadata parsed from each URL
  - a crawl_parse contains the outlink URLs, used to update the crawldb
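You can see this layout on disk once a crawl has completed a fetch and parse cycle, for example (a sketch; the timestamped segment name will differ on your machine):

ls crawl/segments/
ls crawl/segments/20131108063838/
# expect: content  crawl_fetch  crawl_generate  crawl_parse  parse_data  parse_text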
Step-by-Step: Seeding the crawldb with a list of URLs
Option 1: Bootstrapping from the DMOZ database.
The injector adds URLs to the crawldb. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+ MB file, so this will take a few minutes.)
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Next we select a random subset of these pages. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We select one out of every 5,000, so that we end up with around 1,000 URLs:
mkdir dmoz
bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 -subset 5000 > dmoz/urls
The parser also takes a few minutes, as it must parse the full file. Finally, we initialize the crawldb with the selected URLs.
bin/nutch inject crawl/crawldb dmoz
Now we have a Web database with around 1,000 as-yet unfetched URLs in it.
Option 2. Bootstrapping from an initial seed list.
This option simply injects the seed list created earlier (the urls/ directory containing seed.txt, as covered in section 3):
bin/nutch inject crawl/crawldb urls
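Whichever option you choose, you can check how many URLs ended up in the crawldb with the readdb command (a quick sanity check):

bin/nutch readdb crawl/crawldb -stats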
Step-by-Step: Fetching
To fetch, we first generate a fetch list from the database:
bin/nutch generate crawl/crawldb crawl/segments
This generates a fetch list for all of the pages due to be fetched. The fetch list is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:
s1=`ls -d crawl/segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment with:
bin/nutch fetch $s1
Then we parse the entries:
bin/nutch parse $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb crawl/crawldb $s1
Now the database contains both updated entries for all initial pages as well as new entries that correspond to newly discovered pages linked from the initial set.
Now we generate and fetch a new segment containing the top-scoring 1,000 pages:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s2=`ls -d crawl/segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch parse $s2
bin/nutch updatedb crawl/crawldb $s2
Let's fetch one more round:
bin/nutch generate crawl/crawldb crawl/segments -topN 1000
s3=`ls -d crawl/segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch parse $s3
bin/nutch updatedb crawl/crawldb $s3
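If you prefer, such repeated rounds can be wrapped in a small shell loop (an illustrative sketch only; the bin/crawl script described in section 3.3 is the supported way to automate this):

# run three generate/fetch/parse/update rounds
for i in 1 2 3; do
  bin/nutch generate crawl/crawldb crawl/segments -topN 1000
  s=`ls -d crawl/segments/2* | tail -1`
  bin/nutch fetch $s
  bin/nutch parse $s
  bin/nutch updatedb crawl/crawldb $s
done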
By this point we've fetched a few thousand pages. Let's invert links and index them!
Step-by-Step: Invertlinks
Before indexing we first invert all of the links, so that we may index incoming anchor text with the pages.
bin/nutch invertlinks crawl/linkdb -dir crawl/segments
We are now ready to search with Apache Solr.
Step-by-Step: Indexing into Apache Solr
Note: For this step you need a working Solr installation. If you have not yet set up Solr and integrated it with Nutch, see sections 4 to 6 below.
Now we are ready to go on and index all the crawled resources into Solr. The indexer usage is as follows:
Usage: bin/nutch solrindex <solr url> <crawldb> [-linkdb <linkdb>] [-params k1=v1&k2=v2...] (<segment> ... | -dir <segments>) [-noCommit] [-deleteGone] [-filter] [-normalize]
Example: bin/nutch solrindex http://localhost:8983/solr crawl/crawldb/ -linkdb crawl/linkdb/ crawl/segments/20131108063838/ -filter -normalize
Step-by-Step: Deleting Duplicates
Once the entire contents have been indexed, duplicate URLs must be removed so that every URL in the index is unique. The dedup job works as follows:
Map: Identity map where keys are digests and values are SolrRecord instances (which contain id, boost and timestamp)
Reduce: After the map phase, SolrRecords with the same digest are grouped together. Of these documents with the same digest, all are deleted except the one with the highest score (boost field). If two or more documents have the same score, the document with the latest timestamp is kept; every other document is deleted from the Solr index.
Usage: bin/nutch solrdedup <solr url>
Example: bin/nutch solrdedup http://localhost:8983/solr
Step-by-Step: Cleaning Solr
The solrclean job scans the crawldb directory looking for entries with status DB_GONE (404) and sends delete requests to Solr for those documents. Once Solr receives the requests, the documents are duly deleted. This maintains a healthier Solr index.
Usage: bin/nutch solrclean <crawldb> <solrurl>
Example: bin/nutch solrclean crawl/crawldb/ http://localhost:8983/solr
3.3. Using the crawl script
If you have followed section 3.2 above on how crawling can be done step by step, you might be wondering how a bash script can be written to automate the whole process described there.
Nutch developers have written one for you :), and it is available at bin/crawl.
Usage: bin/crawl <seedDir> <crawlID> <solrURL> <numberOfRounds>
Example: bin/crawl urls/seed.txt TestCrawl http://localhost:8983/solr/ 2
Or you can use:
Example: bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5
The crawl script has a lot of parameters set, and you can modify them to suit your needs. It is a good idea to understand the parameters before setting up big crawls.
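Since those parameters live inside the script itself, a quick way to review them is simply to open it (a sketch; the exact variable names vary between Nutch releases):

less bin/crawl
# or list the top-level variable assignments for a quick overview
grep -n '^[A-Za-z_]*=' bin/crawl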
4. Setup Solr for search
download a binary Solr release from the Apache Solr download page
unzip it to $HOME/apache-solr-3.X; we will now refer to this directory as ${APACHE_SOLR_HOME}
cd ${APACHE_SOLR_HOME}/example
java -jar start.jar
5. Verify Solr installation
After Solr has started, you should be able to access the admin console at the following link:
http://localhost:8983/solr/#/
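If you prefer the command line, you can also check that Solr is answering queries (a sketch; assumes curl is installed, and the empty example index simply returns zero results):

curl "http://localhost:8983/solr/select?q=*:*&rows=0"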
6. Integrate Solr with Nutch
We now have both Nutch and Solr installed and set up correctly, and Nutch has already created crawl data from the seed URL(s). Below are the steps to delegate searching to Solr so that the crawled links become searchable:
- mv ${APACHE_SOLR_HOME}/example/solr/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/conf/schema.xml.org
cp ${NUTCH_RUNTIME_HOME}/conf/schema.xml ${APACHE_SOLR_HOME}/example/solr/conf/
- vi ${APACHE_SOLR_HOME}/example/solr/conf/schema.xml
Add exactly the following line (at line 351): <field name="_version_" type="long" indexed="true" stored="true"/>
restart Solr with the command “java -jar start.jar” under ${APACHE_SOLR_HOME}/example
- run the Solr Index command:
bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/*
The call signature for running solrindex has changed: the linkdb is now optional, so if you want to include it you need to specify it explicitly with the "-linkdb" flag on the command line.
This will send all crawl data to Solr for indexing. For more information please see bin/nutch solrindex
If all has gone to plan, we are now ready to search with http://localhost:8983/solr/admin/. If you want to see the raw HTML indexed by Solr, change the content field definition in schema.xml to:
<field name="content" type="text" stored="true" indexed="true"/>
http://wiki.apache.org/nutch/NutchTutorial