⭐ 虫虫下载站

📄 Paste.java

📁 MG4J (Managing Gigabytes for Java) is a free full-text search engine for large document collections
💻 JAVA
package it.unimi.dsi.mg4j.tool;

/*
 * MG4J: Managing Gigabytes for Java
 *
 * Copyright (C) 2005-2007 Sebastiano Vigna
 *
 *  This library is free software; you can redistribute it and/or modify it
 *  under the terms of the GNU Lesser General Public License as published by the Free
 *  Software Foundation; either version 2.1 of the License, or (at your option)
 *  any later version.
 *
 *  This library is distributed in the hope that it will be useful, but
 *  WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
 *  or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License
 *  for more details.
 *
 *  You should have received a copy of the GNU Lesser General Public License
 *  along with this program; if not, write to the Free Software
 *  Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 */

import it.unimi.dsi.fastutil.ints.AbstractIntComparator;
import it.unimi.dsi.fastutil.ints.IntHeapPriorityQueue;
import it.unimi.dsi.fastutil.ints.IntIterator;
import it.unimi.dsi.mg4j.index.BitStreamIndex;
import it.unimi.dsi.mg4j.index.CachingOutputBitStream;
import it.unimi.dsi.mg4j.index.Index;
import it.unimi.dsi.mg4j.index.IndexIterator;
import it.unimi.dsi.mg4j.index.CompressionFlags.Coding;
import it.unimi.dsi.mg4j.index.CompressionFlags.Component;
import it.unimi.dsi.io.InputBitStream;
import it.unimi.dsi.io.OutputBitStream;
import it.unimi.dsi.Util;

import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.net.URISyntaxException;
import java.util.Map;

import org.apache.commons.configuration.ConfigurationException;
import org.apache.log4j.Logger;

import com.martiansoftware.jsap.JSAPException;

/** Pastes several indices.
 *
 * <p>Pasting is a very slow way of combining indices: we assume
 * that not only documents, but also document occurrences might be scattered
 * throughout several indices. When a document appears in several indices,
 * its occurrences in a given index are combined by renumbering them starting
 * from the sum of the sizes for the document in the previous indices.
 *
 * <p>Conceptually, this operation is equivalent to splitting a collection
 * <em>vertically</em>: each document is divided into a fixed number <var>n</var>
 * of consecutive segments (possibly of length 0), and a set of <var>n</var> indices
 * is created using the <var>k</var>-th segment of all documents. Pasting the
 * resulting indices will produce an index that is identical to the index generated
 * by the original collection. The behaviour is analogous to that of the UN*X
 * <samp>paste</samp> command if documents are single-line lists of words.
 *
 * <p>In practice, pasting is usually applied to indices obtained from
 * a {@linkplain it.unimi.dsi.mg4j.document.DocumentFactory.FieldType#VIRTUAL virtual field}
 * (e.g., indices containing anchor text fragments).
 *
 * <p>Note that in case every document appears in at most one index, pasting
 * is equivalent to {@linkplain it.unimi.dsi.mg4j.tool.Merge merging}. It is, however,
 * significantly slower, as the presence of the same document in several lists makes
 * it necessary to scan the inverted lists to be pasted completely to compute the
 * frequency.
 *
 * @author Sebastiano Vigna
 * @since 1.0
 */
public final class Paste extends Combine {
	@SuppressWarnings("unused")
	private static final Logger LOGGER = Util.getLogger( Paste.class );

	/** The default size of the temporary bit stream buffer used while pasting. Posting lists larger
	 * than this size will be precomputed on disk and then added to the index. */
	public final static int DEFAULT_MEMORY_BUFFER_SIZE = 16 * 1024 * 1024;

	/** The reference array of the document queue. */
	protected int[] doc;
	/** The queue containing document pointers (for remapped indices). */
	protected IntHeapPriorityQueue documentQueue;
	/** The temporary cache file for {@link #combine(int)}. */
	private File tempFile;
	/** The temporary output bit stream for {@link #combine(int)}. */
	private CachingOutputBitStream cacheBitStreamOut;
	/** The temporary input bit stream for {@link #combine(int)}. */
	private InputBitStream cacheBitStreamIn;
	/** The input bit stream used to wrap directly {@link #cacheBitStreamOut}'s buffer. */
	private InputBitStream cacheBitStreamInWrapper;
	/** The size of the size list for each index. */
	private final int[] sizesSize;

	public Paste( final String outputBasename,
			final String[] inputBasename,
			final boolean metadataOnly,
			final int bufferSize,
			final File tempFileDir,
			final int tempBufferSize,
			final Map<Component,Coding> writerFlags,
			final boolean interleaved,
			final boolean skips,
			final int quantum,
			final int height,
			final int skipBufferSize,
			final long logInterval ) throws IOException, ConfigurationException, URISyntaxException, ClassNotFoundException, SecurityException, InstantiationException, IllegalAccessException, InvocationTargetException, NoSuchMethodException {
		super( outputBasename, inputBasename, metadataOnly, bufferSize, writerFlags, interleaved, skips, quantum, height, skipBufferSize, logInterval );
		tempFile = File.createTempFile( "MG4J", ".data", tempFileDir );
		cacheBitStreamOut = new CachingOutputBitStream( tempFile, tempBufferSize );
		cacheBitStreamIn = new InputBitStream( tempFile, bufferSize );
		cacheBitStreamInWrapper = new InputBitStream( cacheBitStreamOut.buffer() );

		/* In this case, we must reallocate position, as by merging occurrences we might
		 * obtain an occurrence list as large as the concatenation of all largest
		 * lists. We use this estimate to allocate position, and update maxCount in
		 * combine() to get the real maxCount. */
		int estimateForMaxCount = 0, tempSize = 0;
		sizesSize = new int[ numIndices ];
		for( int i = 0; i < numIndices; i++ ) {
			if ( index[ i ].hasPayloads ) throw new IllegalArgumentException( "You cannot paste indices with payloads" );
			sizesSize[ i ] = index[ i ].sizes.size();
			estimateForMaxCount += index[ i ].maxCount;
			tempSize = Math.max( tempSize, index[ i ].maxCount );
		}
		position = new int[ estimateForMaxCount ];
		doc = new int[ numIndices ];
		documentQueue = new IntHeapPriorityQueue( numIndices, new DocumentIndexComparator( doc ) );
	}

	/** A comparator making an integer priority queue work much like an indirect
	 * priority queue, with the additional property of using the reference index as secondary key.
	 */
	private final static class DocumentIndexComparator extends AbstractIntComparator {
		private final int[] refArray;

		public DocumentIndexComparator( final int[] refArray ) {
			this.refArray = refArray;
		}

		public int compare( final int i, final int j ) {
			final int t = refArray[ i ] - refArray[ j ];
			return t != 0 ? t : i - j;
		}
	}

	/** Returns an index with given basename, loading document sizes.
	 *
	 * @param basename an index basename.
	 * @return an index loaded with document sizes.
	 */
	protected BitStreamIndex getIndex( final CharSequence basename ) throws ConfigurationException, IOException, URISyntaxException, ClassNotFoundException, SecurityException, InstantiationException, IllegalAccessException, InvocationTargetException, NoSuchMethodException {
		return (BitStreamIndex)Index.getInstance( basename, false, true, false );
	}

	protected int combineNumberOfDocuments() {
		int n = 0;
		for( int i = 0; i < numIndices; i++ ) n = Math.max( n, index[ i ].numberOfDocuments );
		return n;
	}

	protected int combineSizes() throws IOException {
		int currDoc = 0, maxDocSize = 0;
		for( int i = 0; i < numIndices; i++ ) {
			final IntIterator sizes = sizes( i );
			int s = 0;
			int j = index[ i ].numberOfDocuments;
			currDoc = 0;
			while( j-- != 0 ) {
				s = ( size[ currDoc++ ] += sizes.nextInt() );
				if ( s > maxDocSize ) maxDocSize = s;
			}
			if ( sizes instanceof Closeable ) ((Closeable)sizes).close();
		}
		return maxDocSize;
	}

	protected int combine( final int numUsedIndices ) throws IOException {
		/* If we're merging just one list, merging is fine, and moreover
		 * maxCount need not be updated, as it is already initialised to
		 * the maximum over all indices. */
		int currIndex, prevDoc = -1, currDoc, count;
		int temp[];
		OutputBitStream obs;
		Index i;
		IndexIterator ii;

		// Note that the total frequency can be computed only during the merge.
		for( int k = numUsedIndices; k-- != 0; ) {
			currIndex = usedIndex[ k ];
			frequency[ currIndex ] = indexIterator[ currIndex ].frequency();
			doc[ currIndex ] = indexIterator[ currIndex ].nextDocument();
			documentQueue.enqueue( currIndex );
		}

		// First phase: we write the inverted list using a quick-and-dirty format in the cache.
		cacheBitStreamOut.position( 0 );
		int totalFrequency = 0, increment, totalCount, prevIndex;

		while( ! documentQueue.isEmpty() ) {
			// We extract the smallest document pointer, and enqueue it in the new index.
			currDoc = doc[ currIndex = documentQueue.firstInt() ];
			cacheBitStreamOut.writeDelta( currDoc - prevDoc - 1 );
			totalFrequency++;

			totalCount = 0;
			increment = 0;
			prevIndex = 0;

			do {
				while( prevIndex < currIndex ) {
					/* Note that some virtual documents might not exist at all in some index (in which
					 * case we extend the size list with zeroes). */
					if ( sizesSize[ prevIndex ] > currDoc ) increment += index[ prevIndex ].sizes.getInt( currDoc );
					prevIndex++;
				}
				i = index[ currIndex ];
				ii = indexIterator[ currIndex ];

				if ( i.hasCounts ) {
					count = ii.count();
					if ( i.hasPositions ) {
						temp = ii.positionArray();
						for( int k = count; k-- != 0; ) position[ totalCount + k ] = temp[ k ] + increment;
					}
					totalCount += count;
				}

				// If we just wrote the last document pointer of this term in index j, we dequeue it.
				if ( --frequency[ currIndex ] == 0 ) documentQueue.dequeue();
				else {
					doc[ currIndex ] = ii.nextDocument();
					documentQueue.changed();
				}
			} while( ! documentQueue.isEmpty() && doc[ currIndex = documentQueue.firstInt() ] == currDoc );

			if ( totalCount > maxCount ) maxCount = totalCount;

			if ( hasCounts ) {
				cacheBitStreamOut.writeGamma( totalCount );
				if ( hasPositions ) {
					cacheBitStreamOut.writeDelta( position[ 0 ] );
					for( int k = 1; k < totalCount; k++ ) cacheBitStreamOut.writeDelta( position[ k ] - position[ k - 1 ] - 1 );
				}
			}

			prevDoc = currDoc;
		}

		// Finally, we pour the data into the actual index.
		indexWriter.newInvertedList();
		indexWriter.writeFrequency( totalFrequency );
		cacheBitStreamOut.align();
		final InputBitStream ibs;

		if ( cacheBitStreamOut.buffer() != null ) ibs = cacheBitStreamInWrapper;
		else {
			cacheBitStreamOut.flush();
			ibs = cacheBitStreamIn;
			ibs.flush();
		}

		ibs.position( 0 );

		currDoc = -1;
		for( int j = totalFrequency; j-- != 0; ) {
			obs = indexWriter.newDocumentRecord();
			indexWriter.writeDocumentPointer( obs, currDoc = ibs.readDelta() + currDoc + 1 );
			// The gamma-coded count is present in the cache only if counts were written above.
			if ( hasCounts ) {
				count = ibs.readGamma();
				indexWriter.writePositionCount( obs, count );
				if ( hasPositions ) {
					position[ 0 ] = ibs.readDelta();
					for( int k = 1; k < count; k++ ) position[ k ] = position[ k - 1 ] + ibs.readDelta() + 1;
					indexWriter.writeDocumentPositions( obs, position, 0, count, size != null ? size[ currDoc ] : -1 );
				}
			}
		}
		return totalFrequency;
	}

	public void run() throws ConfigurationException, IOException {
		super.run();
		cacheBitStreamOut.close();
		tempFile.delete();
	}

	public static void main( String arg[] ) throws ConfigurationException, SecurityException, JSAPException, IOException, URISyntaxException, ClassNotFoundException, InstantiationException, IllegalAccessException, InvocationTargetException, NoSuchMethodException {
		Combine.main( arg, Paste.class );
	}
}
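The renumbering rule from the class javadoc (occurrences in a given index are shifted by the sum of the document's sizes in the previous indices) can be sketched in isolation. This is an illustration only, not MG4J's internal API; the class and method names are invented for the demo.

```java
import java.util.Arrays;

public class RenumberDemo {
	/** Shifts the positions of one segment by the total size of the
	 * preceding segments of the same document, as pasting does. */
	static int[] renumber( final int[] positions, final int precedingSizes ) {
		final int[] out = new int[ positions.length ];
		for ( int k = 0; k < positions.length; k++ ) out[ k ] = positions[ k ] + precedingSizes;
		return out;
	}

	public static void main( String[] args ) {
		// Suppose a document has 5 tokens in the first index, and a term
		// occurs at positions 0 and 2 of the same document in the second
		// index: after pasting, those occurrences sit at positions 5 and 7.
		System.out.println( Arrays.toString( renumber( new int[] { 0, 2 }, 5 ) ) ); // prints [5, 7]
	}
}
```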
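The first phase of `combine()` writes document pointers as gaps (`currDoc - prevDoc - 1`) and the pour loop reconstructs them with the inverse formula (`ibs.readDelta() + currDoc + 1`). The gap transform itself, stripped of the bit-stream machinery, looks like this; the helper names are invented for the sketch.

```java
public class GapCodingDemo {
	/** Turns strictly increasing document pointers into the gaps that
	 * combine() feeds to writeDelta(). */
	static int[] toGaps( final int[] pointers ) {
		final int[] gaps = new int[ pointers.length ];
		int prev = -1;
		for ( int k = 0; k < pointers.length; k++ ) {
			gaps[ k ] = pointers[ k ] - prev - 1;
			prev = pointers[ k ];
		}
		return gaps;
	}

	/** Inverse transform, mirroring the pour loop's readDelta() arithmetic. */
	static int[] fromGaps( final int[] gaps ) {
		final int[] pointers = new int[ gaps.length ];
		int curr = -1;
		for ( int k = 0; k < gaps.length; k++ ) pointers[ k ] = curr = gaps[ k ] + curr + 1;
		return pointers;
	}

	public static void main( String[] args ) {
		// Pointers 3, 5, 9 become gaps 3, 1, 3: small numbers that
		// delta coding compresses well.
		System.out.println( java.util.Arrays.toString( toGaps( new int[] { 3, 5, 9 } ) ) ); // prints [3, 1, 3]
	}
}
```

Round-tripping `fromGaps( toGaps( p ) )` recovers the original pointer list, which is exactly the invariant the cache format relies on.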
