Published on May 8th, 2015
ACHE — focused Web Crawler
ACHE is a focused Web crawler that can be customized to search for pages that belong to a given topic or have a given property. To configure ACHE, you need to: define a topic of interest (e.g., Ebola, terrorism, cooking recipes); create a model to detect Web pages that belong to this topic; and identify seeds that will serve as a starting point for the crawl. Starting from the seeds, ACHE will crawl the Web, attempting to maximize the number of relevant pages retrieved while avoiding unproductive regions of the Web. At the end of this process, you will have your own collection of webpages related to your topic of interest.
Build focused Web Crawler: ACHE
$git clone git@github.com:ViDA-NYU/ache.git
To compile ACHE from the source code, run the compile_crawler.sh script.
Build a model for ACHE’s page classifier.
To focus on a certain topic, ACHE needs access to a model of that topic's content. This model is then used by a classifier to decide, given a newly crawled page, whether it is on-topic or not. Assume that you store positive and negative examples in two directories named positive and negative, respectively, and that both are placed inside a training_data directory. Here is how you build a model from these examples:
$./script/build_model.sh <training data path> <output path>
<training data path> is the path to the directory containing the positive and negative examples.
<output path> is the new directory in which the generated model will be saved; the model consists of two files, pageclassifier.model and pageclassifier.features.
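For instance, the training data might be laid out as below. The file names and HTML contents here are illustrative placeholders, not files shipped with ACHE:

```shell
# Lay out the training data directories expected by build_model.sh.
mkdir -p training_data/positive training_data/negative

# Each file holds the raw HTML of one example page (file names are arbitrary).
printf '<html><body>On-topic example page</body></html>'  > training_data/positive/page1.html
printf '<html><body>Off-topic example page</body></html>' > training_data/negative/page1.html

# Then build the model into a new directory, e.g. "my_model":
# ./script/build_model.sh training_data my_model
```

The more representative the example pages are of your topic, the better the resulting classifier will separate on-topic from off-topic pages.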
After you have generated a model, you need to prepare a seed file, in which each line is a URL. To start the crawler, run:
$./build/install/bin/ache startCrawl <data output path> <config path> <seed path> <model path> <lang detect profile path>
<data output path> is the path to the data output directory.
<config path> is the path to the config directory.
<seed path> is the seed file.
<model path> is the path to the model directory (containing pageclassifier.model and pageclassifier.features).
<lang detect profile path> is the path to the language detection profile: “libs/langdetect-03-03-2014.jar”
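A seed file is plain text with one URL per line. As a sketch (the URLs below are illustrative examples for an Ebola-focused crawl, not seeds shipped with ACHE):

```shell
# Create a seed file with one URL per line (URLs are illustrative).
cat > sample.seeds <<'EOF'
http://en.wikipedia.org/wiki/Ebola_virus_disease
http://www.cdc.gov/vhf/ebola/
EOF

# Count the seeds.
wc -l < sample.seeds
```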
Example of running ACHE:
$./build/install/bin/ache startCrawl output config/sample_config config/sample.seeds config/sample_model libs/langdetect-03-03-2014.jar
After the crawl starts, the data output directory will contain the following subdirectories:
data_target contains the relevant pages.
data_negative contains the irrelevant pages; by default, the crawler does not save them.
data_monitor contains the current status of the crawler.
data_backlinks is where persistent storage keeps information about the frontier and the crawled graph.
Unless you stop it, the crawler exits when the number of crawled pages exceeds the limit in the settings, which is 9 million by default. You can look at the file data_monitor/harvestinfo.csv to see how many pages have been downloaded and decide whether you want to stop the crawler. Its first, second, and third columns are the number of relevant pages, the number of visited pages, and a timestamp, respectively.
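As a quick sketch of checking progress, assuming comma-separated fields in harvestinfo.csv (the sample numbers below are made up for illustration):

```shell
# Create an illustrative harvestinfo.csv (in a real crawl, ACHE writes this file).
mkdir -p data_monitor
printf '120,1500,1431043200\n' > data_monitor/harvestinfo.csv

# Print the most recent relevant/visited page counts.
tail -n 1 data_monitor/harvestinfo.csv |
  awk -F',' '{printf "relevant=%s visited=%s\n", $1, $2}'
# -> relevant=120 visited=1500
```

Comparing the relevant count to the visited count over time gives a rough sense of the harvest rate, i.e., how productive the crawl still is.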
Source & Download