
📄 StripWWWNRule.java

📁 Collection: combining a web crawler with Lucene
💻 JAVA
/* StripWWWRule
 *
 * Created on Oct 5, 2004
 *
 * Copyright (C) 2004 Internet Archive.
 *
 * This file is part of the Heritrix web crawler (crawler.archive.org).
 *
 * Heritrix is free software; you can redistribute it and/or modify
 * it under the terms of the GNU Lesser Public License as published by
 * the Free Software Foundation; either version 2.1 of the License, or
 * any later version.
 *
 * Heritrix is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Lesser Public License for more details.
 *
 * You should have received a copy of the GNU Lesser Public License
 * along with Heritrix; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 */
package org.archive.crawler.url.canonicalize;

import java.util.regex.Pattern;

/**
 * Strip any 'www[0-9]*' found on http/https URLs IF they have some
 * path/query component (content after third slash). Top 'slash page'
 * URIs are left unstripped: we prefer crawling redundant
 * top pages to missing an entire site only available from either
 * the www-full or www-less hostname, but not both.
 * @author stack
 * @version $Date: 2006-09-18 20:32:47 +0000 (Mon, 18 Sep 2006) $, $Revision: 4634 $
 */
public class StripWWWNRule extends BaseRule {
    private static final long serialVersionUID = 3619916990307308590L;

    private static final String DESCRIPTION = "Strip any 'www[0-9]*' found. " +
        "Use this rule to equate 'http://www.archive.org/index.html' and " +
        "'http://www0001.archive.org/index.html' with " +
        "'http://archive.org/index.html'.  The resulting canonicalization " +
        "returns 'http://archive.org/index.html'.  It removes any www's " +
        "or wwwNNN's found, where 'N' is one or more numerics, EXCEPT " +
        "on URIs that have no path/query component " +
        ". Top-level 'slash page' URIs are left unstripped: we prefer " +
        "crawling redundant top pages to missing an entire site only " +
        "available from either the www-full or www-less hostname, but not " +
        "both.  Operates on http and https schemes only. " +
        "Use StripWWWRule to strip a lone 'www' only (This rule is a " +
        "more general version of StripWWWRule).";

    private static final Pattern REGEX =
        Pattern.compile("(?i)^(https?://)(?:www[0-9]*\\.)([^/]*/.+)$");

    public StripWWWNRule(String name) {
        super(name, DESCRIPTION);
    }

    public String canonicalize(String url, Object context) {
        return doStripRegexMatch(url, REGEX.matcher(url));
    }
}
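To make the canonicalization behaviour concrete, here is a minimal standalone sketch that exercises the same REGEX outside of Heritrix. The StripWWWNDemo class and its stripWWWN helper are illustrative stand-ins (not Heritrix code) for the doStripRegexMatch(..) logic inherited from BaseRule, assumed here to simply rebuild the URL from the captured scheme and host/path remainder when a 'www[0-9]*.' prefix matches, and to return the input unchanged otherwise.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone demo of the stripping behaviour encoded in REGEX above.
// stripWWWN is a hypothetical stand-in for BaseRule.doStripRegexMatch(..).
public class StripWWWNDemo {
    private static final Pattern REGEX =
        Pattern.compile("(?i)^(https?://)(?:www[0-9]*\\.)([^/]*/.+)$");

    static String stripWWWN(String url) {
        Matcher m = REGEX.matcher(url);
        // Group 1 is the scheme ("http://" or "https://"); group 2 is the
        // host plus path/query with the leading 'www[0-9]*.' dropped by the
        // non-capturing group.
        return m.matches() ? m.group(1) + m.group(2) : url;
    }

    public static void main(String[] args) {
        // Stripped: there is content after the host's slash.
        System.out.println(stripWWWN("http://www0001.archive.org/index.html"));
        // -> http://archive.org/index.html

        // Left alone: a top-level 'slash page' with nothing after the slash.
        System.out.println(stripWWWN("http://www.archive.org/"));
        // -> http://www.archive.org/
    }
}

Note how the trailing "/.+" in the pattern is what spares the top-level slash pages: with no path or query component after the third slash, the pattern fails to match and the URL is returned as-is.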
