/**
 * Copyright 2004-2005 The Apache Software Foundation.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.lucene.search.similar;

import org.apache.lucene.util.PriorityQueue;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermFreqVector;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.DefaultSimilarity;
import org.apache.lucene.search.Similarity;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Hits;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;

import java.util.Set;
import java.util.HashMap;
import java.util.Map;
import java.util.Collection;
import java.util.Iterator;
import java.io.IOException;
import java.io.Reader;
import java.io.File;
import java.io.PrintStream;
import java.io.StringReader;
import java.io.FileReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;


/**
 * Generate "more like this" similarity queries. 
 * Based on this mail:
 * <code><pre>
 * Lucene does let you access the document frequency of terms, with IndexReader.docFreq().
 * Term frequencies can be computed by re-tokenizing the text, which, for a single document,
 * is usually fast enough.  But looking up the docFreq() of every term in the document is
 * probably too slow.
 * 
 * You can use some heuristics to prune the set of terms, to avoid calling docFreq() too much,
 * or at all.  Since you're trying to maximize a tf*idf score, you're probably most interested
 * in terms with a high tf. Choosing a tf threshold even as low as two or three will radically
 * reduce the number of terms under consideration.  Another heuristic is that terms with a
 * high idf (i.e., a low df) tend to be longer.  So you could threshold the terms by the
 * number of characters, not selecting anything less than, e.g., six or seven characters.
 * With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms
 * that do a pretty good job of characterizing a document.
 * 
 * It all depends on what you're trying to do.  If you're trying to eke out that last percent
 * of precision and recall regardless of computational difficulty so that you can win a TREC
 * competition, then the techniques I mention above are useless.  But if you're trying to
 * provide a "more like this" button on a search results page that does a decent job and has
 * good performance, such techniques might be useful.
 * 
 * An efficient, effective "more-like-this" query generator would be a great contribution, if
 * anyone's interested.  I'd imagine that it would take a Reader or a String (the document's
 * text), an Analyzer, and return a set of representative terms using heuristics like those
 * above.  The frequency and length thresholds could be parameters, etc.
 * 
 * Doug
 * </pre></code>
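 *
 * A minimal sketch of the cheap pruning step the mail describes, applied before any
 * docFreq() lookup (the thresholds are the mail's illustrative numbers; in this class
 * they correspond to {@link #setMinTermFreq} and {@link #setMinWordLen}):
 * <code><pre>
 * // keep a candidate term only if it passes both cheap filters
 * boolean keep = tf >= 2 && term.length() >= 6;
 * </pre></code>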
 *
 *
 * <p>
 * <h3>Initial Usage</h3>
 *
 * This class has lots of options to try to make it efficient and flexible.
 * See the body of {@link #main main()} below in the source for real code, or
 * if you want pseudo code, the simplest possible usage is as follows. The bold
 * fragment is specific to this class.
 *
 * <code><pre>
 *
 * IndexReader ir = ...
 * IndexSearcher is = ...
 * <b>
 * MoreLikeThis mlt = new MoreLikeThis(ir);
 * Reader target = ... </b><em>// orig source of doc you want to find similarities to</em><b>
 * Query query = mlt.like( target);
 * </b>
 * Hits hits = is.search(query);
 * <em>// now the usual iteration through 'hits' - the only thing to watch for is to make sure
 * you ignore the doc if it matches your 'target' document, as it should be similar to itself </em>
 *
 * </pre></code>
 *
 * Thus you:
 * <ol>
 * <li> do your normal, Lucene setup for searching,
 * <li> create a MoreLikeThis,
 * <li> get the text of the doc you want to find similarities to,
 * <li> then call one of the like() methods to generate a similarity query,
 * <li> call the searcher to find the similar docs
 * </ol>
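 *
 * For instance, a compilable version of those steps might look like the following sketch
 * (the index path is a hypothetical placeholder; "contents" is this class's default field):
 * <code><pre>
 * IndexReader ir = IndexReader.open("/path/to/index");
 * IndexSearcher is = new IndexSearcher(ir);
 * MoreLikeThis mlt = new MoreLikeThis(ir);
 * Reader target = new StringReader("text of the target document");
 * Query query = mlt.like(target);
 * Hits hits = is.search(query);
 * for (int i = 0; i < hits.length(); i++) {
 *     // skip the hit if it is the target document itself
 *     System.out.println(hits.score(i) + " " + hits.doc(i).get("contents"));
 * }
 * </pre></code>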
 *
 * <h3>More Advanced Usage</h3>
 *
 * You may want to use {@link #setFieldNames setFieldNames(...)} so you can examine
 * multiple fields (e.g. body and title) for similarity.
 * <p>
 *
 * Depending on the size of your index and the size and makeup of your documents you
 * may want to call the other set methods to control how the similarity queries are
 * generated:
 * <ul>
 * <li> {@link #setMinTermFreq setMinTermFreq(...)}
 * <li> {@link #setMinDocFreq setMinDocFreq(...)}
 * <li> {@link #setMinWordLen setMinWordLen(...)}
 * <li> {@link #setMaxWordLen setMaxWordLen(...)}
 * <li> {@link #setMaxQueryTerms setMaxQueryTerms(...)}
 * <li> {@link #setMaxNumTokensParsed setMaxNumTokensParsed(...)}
 * <li> {@link #setStopWords setStopWords(...)}
 * </ul> 
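 *
 * For example, a sketch of a tuning pass for a small index (the values and the
 * "title"/"body" field names are illustrative assumptions, not recommendations):
 * <code><pre>
 * mlt.setFieldNames(new String[] { "title", "body" });
 * mlt.setMinTermFreq(1);   // consider terms appearing even once in the source doc
 * mlt.setMinDocFreq(2);    // but require them in at least two docs in the index
 * mlt.setMaxQueryTerms(50);
 * </pre></code>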
 *
 * <hr>
 * <pre>
 * Changes: Mark Harwood 29/02/04
 * Some bugfixing, some refactoring, some optimisation.
 *  - bugfix: retrieveTerms(int docNum) was not working for indexes without a termvector - added missing code
 *  - bugfix: no significant terms were being created for fields with a termvector, because
 *            only one occurrence per term/field pair was counted (i.e. frequency info from the TermVector was not included)
 *  - refactor: moved common code into isNoiseWord()
 *  - optimise: when no termvector support is available, use maxNumTokensParsed to limit the amount of tokenization
 * </pre>
 * 
 * @author David Spencer
 * @author Bruce Ritchie
 * @author Mark Harwood
 */
public final class MoreLikeThis {

	/**
	 * Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
	 * @see #getMaxNumTokensParsed
	 */
    public static final int DEFAULT_MAX_NUM_TOKENS_PARSED = 5000;

	/**
     * Default analyzer to parse source doc with.
	 * @see #getAnalyzer
     */
    public static final Analyzer DEFAULT_ANALYZER = new StandardAnalyzer();

    /**
     * Ignore terms with less than this frequency in the source doc.
	 * @see #getMinTermFreq
	 * @see #setMinTermFreq	 
     */
    public static final int DEFAULT_MIN_TERM_FREQ = 2;

    /**
     * Ignore words which do not occur in at least this many docs.
	 * @see #getMinDocFreq
	 * @see #setMinDocFreq	 
     */
    public static final int DEFAULT_MIN_DOC_FREQ = 5;

    /**
     * Boost terms in query based on score.
	 * @see #isBoost
	 * @see #setBoost 
     */
    public static final boolean DEFAULT_BOOST = false;

    /**
     * Default field names. Null is used to specify that the field names should be looked
     * up at runtime from the provided reader.
     */
    public static final String[] DEFAULT_FIELD_NAMES = new String[] { "contents"};

    /**
     * Ignore words shorter than this length; if 0, this has no effect.
	 * @see #getMinWordLen
	 * @see #setMinWordLen	 
     */
    public static final int DEFAULT_MIN_WORD_LENGTH = 0;

    /**
     * Ignore words longer than this length; if 0, this has no effect.
	 * @see #getMaxWordLen
	 * @see #setMaxWordLen	 
     */
    public static final int DEFAULT_MAX_WORD_LENGTH = 0;

	/**
	 * Default set of stopwords.
	 * If null, no stop-word filtering is applied.
	 *
	 * @see #setStopWords
	 * @see #getStopWords
	 */
	public static final Set DEFAULT_STOP_WORDS = null;

	/**
	 * Current set of stop words.
	 */
	private Set stopWords = DEFAULT_STOP_WORDS;

    /**
     * Return a Query with no more than this many terms.
     *
     * @see BooleanQuery#getMaxClauseCount
	 * @see #getMaxQueryTerms
	 * @see #setMaxQueryTerms	 
     */
    public static final int DEFAULT_MAX_QUERY_TERMS = 25;

    /**
     * Analyzer that will be used to parse the doc.
     */
    private Analyzer analyzer = DEFAULT_ANALYZER;

    /**
     * Ignore words less frequent than this.
     */
    private int minTermFreq = DEFAULT_MIN_TERM_FREQ;

    /**
     * Ignore words which do not occur in at least this many docs.
     */
    private int minDocFreq = DEFAULT_MIN_DOC_FREQ;

    /**
     * Should we apply a boost to the Query based on the scores?
     */
    private boolean boost = DEFAULT_BOOST;

    /**
     * Field names we'll analyze.
     */
    private String[] fieldNames = DEFAULT_FIELD_NAMES;

	/**
	 * The maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
	 */
	private int maxNumTokensParsed = DEFAULT_MAX_NUM_TOKENS_PARSED;

    /**
     * Ignore words shorter than this length.
     */
    private int minWordLen = DEFAULT_MIN_WORD_LENGTH;

    /**
     * Ignore words longer than this length.
     */
    private int maxWordLen = DEFAULT_MAX_WORD_LENGTH;

    /**
     * Don't return a query longer than this.
     */
    private int maxQueryTerms = DEFAULT_MAX_QUERY_TERMS;

    /**
     * For idf() calculations.
     */
    private Similarity similarity = new DefaultSimilarity();

    /**
     * IndexReader to use
     */
    private final IndexReader ir;

    /**
     * Constructor requiring an IndexReader.
     */
    public MoreLikeThis(IndexReader ir) {
        this.ir = ir;
    }

    /**
     * Returns the analyzer that will be used to parse the source doc. The default analyzer
     * is the {@link #DEFAULT_ANALYZER}.
     *
     * @return the analyzer that will be used to parse the source doc.
	 * @see #DEFAULT_ANALYZER
     */
    public Analyzer getAnalyzer() {
        return analyzer;
    }

    /**
     * Sets the analyzer to use. An analyzer is not required for generating a query with the
     * {@link #like(int)} method; all other 'like' methods require an analyzer.
     *
     * @param analyzer the analyzer to use to tokenize text.
     */
    public void setAnalyzer(Analyzer analyzer) {
        this.analyzer = analyzer;
    }