
📄 WriterPoolProcessor.java

📁 Heritrix is an open-source, extensible web crawler project. Heritrix is designed to strictly honor the exclusion directives in robots.txt files and META robots tags.
💻 JAVA
📖 Page 1 of 2
/* WriterPoolProcessor
 *
 * $Id: WriterPoolProcessor.java 5029 2007-03-29 23:53:50Z gojomo $
 *
 * Created on July 19th, 2006
 *
 * Copyright (C) 2006 Internet Archive.
 *
 * This file is part of the Heritrix web crawler (crawler.archive.org).
 *
 * Heritrix is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or
 * any later version.
 *
 * Heritrix is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Lesser Public License for more details.
 *
 * You should have received a copy of the GNU Lesser Public License
 * along with Heritrix; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 */
package org.archive.crawler.framework;

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.StringWriter;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Logger;

import javax.management.AttributeNotFoundException;
import javax.management.MBeanException;
import javax.management.ReflectionException;
import javax.xml.transform.SourceLocator;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

import org.archive.crawler.Heritrix;
import org.archive.crawler.datamodel.CoreAttributeConstants;
import org.archive.crawler.datamodel.CrawlHost;
import org.archive.crawler.datamodel.CrawlOrder;
import org.archive.crawler.datamodel.CrawlURI;
import org.archive.crawler.datamodel.FetchStatusCodes;
import org.archive.crawler.deciderules.recrawl.IdenticalDigestDecideRule;
import org.archive.crawler.event.CrawlStatusListener;
import org.archive.crawler.settings.SimpleType;
import org.archive.crawler.settings.StringList;
import org.archive.crawler.settings.Type;
import org.archive.crawler.settings.XMLSettingsHandler;
import org.archive.io.ObjectPlusFilesInputStream;
import org.archive.io.WriterPool;
import org.archive.io.WriterPoolMember;

/**
 * Abstract implementation of a file pool processor.
 * Subclass to implement for a particular {@link WriterPoolMember} instance.
 * @author Parker Thompson
 * @author stack
 */
public abstract class WriterPoolProcessor extends Processor
implements CoreAttributeConstants, CrawlStatusListener, FetchStatusCodes {
    private final Logger logger = Logger.getLogger(this.getClass().getName());

    /**
     * Key to use asking settings for file compression value.
     */
    public static final String ATTR_COMPRESS = "compress";

    /**
     * Default as to whether we do compression of files.
     */
    public static final boolean DEFAULT_COMPRESS = true;

    /**
     * Key to use asking settings for file prefix value.
     */
    public static final String ATTR_PREFIX = "prefix";

    /**
     * Key to use asking settings for arc path value.
     */
    public static final String ATTR_PATH = "path";

    /**
     * Key to use asking settings for file suffix value.
     */
    public static final String ATTR_SUFFIX = "suffix";

    /**
     * Key to use asking settings for file max size value.
     */
    public static final String ATTR_MAX_SIZE_BYTES = "max-size-bytes";

    /**
     * Key to get maximum pool size.
     *
     * This key is for maximum files active in the pool.
     */
    public static final String ATTR_POOL_MAX_ACTIVE = "pool-max-active";

    /**
     * Key to get maximum wait on pool object before we give up and
     * throw IOException.
     */
    public static final String ATTR_POOL_MAX_WAIT = "pool-max-wait";

    /**
     * Key for the maximum bytes to write attribute.
     */
    public static final String ATTR_MAX_BYTES_WRITTEN =
        "total-bytes-to-write";

    /**
     * Key for whether to skip writing records of content-digest repeats.
     */
    public static final String ATTR_SKIP_IDENTICAL_DIGESTS =
        "skip-identical-digests";

    /**
     * CrawlURI annotation indicating no record was written.
     */
    protected static final String ANNOTATION_UNWRITTEN = "unwritten";

    /**
     * Default maximum file size.
     * TODO: Check that subclasses can set a different MAX_FILE_SIZE and
     * it will be used in the constructor as default.
     */
    private static final int DEFAULT_MAX_FILE_SIZE = 100000000;

    /**
     * Default path list.
     *
     * TODO: Confirm this one gets picked up.
     */
    private static final String [] DEFAULT_PATH = {"crawl-store"};

    /**
     * Reference to pool.
     */
    transient private WriterPool pool = null;

    /**
     * Total number of bytes written to disc.
     */
    private long totalBytesWritten = 0;

    /**
     * Calculate metadata once only.
     */
    transient private List<String> cachedMetadata = null;

    /**
     * @param name Name of this processor.
     */
    public WriterPoolProcessor(String name) {
        this(name, "Pool of files processor");
    }

    /**
     * @param name Name of this processor.
     * @param description Description for this processor.
     */
    public WriterPoolProcessor(final String name,
            final String description) {
        super(name, description);
        Type e = addElementToDefinition(
            new SimpleType(ATTR_COMPRESS, "Compress files when " +
                "writing to disk.", new Boolean(DEFAULT_COMPRESS)));
        e.setOverrideable(false);
        e = addElementToDefinition(
            new SimpleType(ATTR_PREFIX,
                "File prefix. " +
                "The text supplied here will be used as a prefix naming " +
                "writer files.  For example if the prefix is 'IAH', " +
                "then file names will look like " +
                "IAH-20040808101010-0001-HOSTNAME.arc.gz " +
                "...if writing ARCs (The prefix will be " +
                "separated from the date by a hyphen).",
                WriterPoolMember.DEFAULT_PREFIX));
        e = addElementToDefinition(
            new SimpleType(ATTR_SUFFIX, "Suffix to tag onto " +
                "files. If value is '${HOSTNAME}', will use hostname for " +
                "suffix. If empty, no suffix will be added.",
                WriterPoolMember.DEFAULT_SUFFIX));
        e.setOverrideable(false);
        e = addElementToDefinition(
            new SimpleType(ATTR_MAX_SIZE_BYTES, "Max size of each file",
                new Long(DEFAULT_MAX_FILE_SIZE)));
        e.setOverrideable(false);
        e = addElementToDefinition(
            new StringList(ATTR_PATH, "Where to files. " +
                "Supply absolute or relative path.  If relative, files " +
                "will be written relative to " +
                "the " + CrawlOrder.ATTR_DISK_PATH + "setting." +
                " If more than one path specified, we'll round-robin" +
                " dropping files to each.  This setting is safe" +
                " to change midcrawl (You can remove and add new dirs" +
                " as the crawler progresses).", getDefaultPath()));
        e.setOverrideable(false);
        e = addElementToDefinition(new SimpleType(ATTR_POOL_MAX_ACTIVE,
            "Maximum active files in pool. " +
            "This setting cannot be varied over the life of a crawl.",
            new Integer(WriterPool.DEFAULT_MAX_ACTIVE)));
        e.setOverrideable(false);
        e = addElementToDefinition(new SimpleType(ATTR_POOL_MAX_WAIT,
            "Maximum time to wait on pool element" +
            " (milliseconds). This setting cannot be varied over the life" +
            " of a crawl.",
            new Integer(WriterPool.DEFAULT_MAXIMUM_WAIT)));
        e.setOverrideable(false);
        e = addElementToDefinition(new SimpleType(ATTR_MAX_BYTES_WRITTEN,
            "Total file bytes to write to disk." +
            " Once the size of all files on disk has exceeded this " +
            "limit, this processor will stop the crawler. " +
            "A value of zero means no upper limit.", new Long(0)));
        e.setOverrideable(false);
        e.setExpertSetting(true);
        e = addElementToDefinition(new SimpleType(ATTR_SKIP_IDENTICAL_DIGESTS,
            "Whether to skip the writing of a record when URI " +
            "history information is available and indicates the " +
            "prior fetch had an identical content digest. " +
            "Default is false.", new Boolean(false)));
        e.setOverrideable(true);
        e.setExpertSetting(true);
    }

    protected String [] getDefaultPath() {
        return DEFAULT_PATH;
    }

    public synchronized void initialTasks() {
        // Add this class to crawl state listeners and setup pool.
        getSettingsHandler().getOrder().getController().
            addCrawlStatusListener(this);
        setupPool(new AtomicInteger());
        // Run checkpoint recovery code.
        if (getSettingsHandler().getOrder().getController().
                isCheckpointRecover()) {
            checkpointRecover();
        }
    }

    protected AtomicInteger getSerialNo() {
        return ((WriterPool)getPool()).getSerialNo();
    }

    /**
     * Set up pool of files.
     */
    protected abstract void setupPool(final AtomicInteger serialNo);

    /**
     * Writes a CrawlURI and its associated data to store file.
     *
     * Currently this method understands the following uri types: dns, http,
     * and https.
     *
     * @param curi CrawlURI to process.
     */
    protected abstract void innerProcess(CrawlURI curi);

    protected void checkBytesWritten() {
        long max = getMaxToWrite();
        if (max <= 0) {
            return;
        }
        if (max <= this.totalBytesWritten) {
            getController().requestCrawlStop("Finished - Maximum bytes (" +
                Long.toString(max) + ") written");
        }
    }

    /**
     * Whether the given CrawlURI should be written to archive files.
     * Annotates CrawlURI with a reason for any negative answer.
     *
     * @param curi CrawlURI
     * @return true if URI should be written; false otherwise
     */
    protected boolean shouldWrite(CrawlURI curi) {
        // check for duplicate content write suppression
        if (((Boolean)getUncheckedAttribute(curi, ATTR_SKIP_IDENTICAL_DIGESTS))
                && IdenticalDigestDecideRule.hasIdenticalDigest(curi)) {
            curi.addAnnotation(ANNOTATION_UNWRITTEN + ":identicalDigest");
            return false;
        }
        String scheme = curi.getUURI().getScheme().toLowerCase();
        // TODO: possibly move this sort of isSuccess() test into CrawlURI
        boolean retVal;
        if (scheme.equals("dns")) {
            retVal = curi.getFetchStatus() == S_DNS_SUCCESS;
        } else if (scheme.equals("http") || scheme.equals("https")) {
            retVal = curi.getFetchStatus() > 0 && curi.isHttpTransaction();
        } else if (scheme.equals("ftp")) {
            retVal = curi.getFetchStatus() == 200;
        } else {
            // unsupported scheme
            curi.addAnnotation(ANNOTATION_UNWRITTEN + ":scheme");
            return false;
        }
        if (retVal == false) {
            // status not deserving writing
            curi.addAnnotation(ANNOTATION_UNWRITTEN + ":status");
            return false;
        }
        return true;
    }

    /**
     * Return IP address of given URI suitable for recording (as in a
     * classic ARC 5-field header line).
     *
     * @param curi CrawlURI
     * @return String of IP address
     */
    protected String getHostAddress(CrawlURI curi) {
        // special handling for DNS URIs: want address of DNS server
        if (curi.getUURI().getScheme().toLowerCase().equals("dns")) {
            return curi.getString(A_DNS_SERVER_IP_LABEL);
        }
        // otherwise, host referenced in URI
        CrawlHost h = getController().getServerCache().getHostFor(curi);
        if (h == null) {
            throw new NullPointerException("Crawlhost is null for " +
                curi + " " + curi.getVia());
        }
        InetAddress a = h.getIP();
        if (a == null) {
            throw new NullPointerException("Address is null for " +
                curi + " " + curi.getVia() + ". Address " +
                ((h.getIpFetched() == CrawlHost.IP_NEVER_LOOKED_UP) ?
                    "was never looked up." :
                    (System.currentTimeMillis() - h.getIpFetched()) +
                        " ms ago."));
        }
        return h.getIP().getHostAddress();
    }
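The listing is cut off here at the end of page 1; methods referenced above but not shown (getPool(), getMaxToWrite(), checkpointRecover(), and any further abstract members) continue on page 2. Two ideas from the settings are worth illustrating: writer file names take the form PREFIX-TIMESTAMP-SERIAL-SUFFIX (e.g. IAH-20040808101010-0001-HOSTNAME.arc.gz), and when several ATTR_PATH directories are configured, files are dropped to each in round-robin fashion. Below is a minimal, self-contained sketch of that naming scheme. It is not Heritrix code: the class name, helper method, and second directory are invented for illustration; only the "crawl-store" default path, the "IAH" prefix, and the shared AtomicInteger serial (cf. getSerialNo() above) come from the listing.

// RoundRobinNamingDemo.java -- illustrative sketch only, not Heritrix code.
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinNamingDemo {
    // Hypothetical configured paths; "crawl-store" is the default from the
    // listing, "crawl-store2" is invented to show round-robin behavior.
    private static final String[] PATHS = {"crawl-store", "crawl-store2"};

    // Shared serial number, analogous to WriterPoolProcessor.getSerialNo(),
    // so concurrent writers never collide on file names.
    private static final AtomicInteger SERIAL = new AtomicInteger();

    static String nextFileName(String prefix, String hostname) {
        int serial = SERIAL.incrementAndGet();
        // Round-robin over the configured paths, one file per call.
        String dir = PATHS[serial % PATHS.length];
        String timestamp =
            new SimpleDateFormat("yyyyMMddHHmmss").format(new Date());
        // PREFIX-TIMESTAMP-SERIAL-SUFFIX, as described for ATTR_PREFIX.
        return String.format("%s/%s-%s-%04d-%s.arc.gz",
            dir, prefix, timestamp, serial, hostname);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            System.out.println(nextFileName("IAH", "localhost"));
        }
    }
}

Successive calls alternate between the two directories while the zero-padded serial keeps names unique, which is what makes the ATTR_PATH setting "safe to change midcrawl" as the description claims.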
