
📄 BackingStoreHashtable.java

📁 Derby database source code.
💻 JAVA
📖 Page 1 of 2
/*

   Derby - Class org.apache.derby.iapi.store.access.BackingStoreHashtable

   Copyright 1999, 2004 The Apache Software Foundation or its licensors, as applicable.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

*/

package org.apache.derby.iapi.store.access;

import org.apache.derby.iapi.services.sanity.SanityManager;
import org.apache.derby.iapi.services.io.Storable;
import org.apache.derby.iapi.error.StandardException;
import org.apache.derby.iapi.types.CloneableObject;
import org.apache.derby.iapi.types.DataValueDescriptor;
import org.apache.derby.iapi.services.cache.ClassSize;

import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Properties;
import java.util.Vector;
import java.util.NoSuchElementException;

/**
A BackingStoreHashtable is a utility class which will store a set of rows into
an in-memory hash table, or overflow the hash table to a temporary on-disk
structure.
<p>
All rows must contain the same number of columns, and the column at position N
of all the rows must have the same format id.  If the BackingStoreHashtable
needs to be overflowed to disk, then an arbitrary row will be chosen and used
as a template for creating the underlying overflow container.
<p>
The hash table will be built logically as follows (the actual implementation
may differ).  The important points are that the hash value is the standard
java hash value on row[key_column_numbers[0]] if key_column_numbers.length
is 1, or on row[key_column_numbers[0, 1, ...]] if key_column_numbers.length
is greater than 1, and that duplicate detection is done by the standard java
duplicate detection provided by java.util.Hashtable.
<p>
<pre>
import java.util.Hashtable;

hash_table = new Hashtable();

Object[] row;
Object   duplicate_value;
boolean  needsToClone = rowSource.needsToClone();

while ((row = rowSource.getNextRowFromRowSource()) != null)
{
    if (needsToClone)
        row = clone_row_from_row(row);

    Object key = KeyHasher.buildHashKey(row, key_column_numbers);

    if ((duplicate_value = hash_table.put(key, row)) != null)
    {
        Vector row_vec;

        // inserted a duplicate
        if (duplicate_value instanceof Vector)
        {
            row_vec = (Vector) duplicate_value;
        }
        else
        {
            // allocate vector to hold duplicates
            row_vec = new Vector(2);

            // insert original row into vector
            row_vec.addElement(duplicate_value);
        }

        // insert new row into vector
        row_vec.addElement(row);

        // put the vector as the data rather than the row; the put above
        // replaced the mapping with the bare new row, so re-put the vector
        hash_table.put(key, row_vec);
    }
}
</pre>
**/

public class BackingStoreHashtable
{
    /**************************************************************************
     * Fields of the class
     **************************************************************************
     */
    private TransactionController tc;
    private Hashtable   hash_table;
    private int[]       key_column_numbers;
    private boolean     remove_duplicates;
    private boolean     skipNullKeyColumns;
    private Properties  auxillary_runtimestats;
    private RowSource   row_source;

    /* If max_inmemory_rowcnt > 0 then use that to decide when to spill to
     * disk.  Otherwise compute max_inmemory_size based on the JVM memory
     * size when the BackingStoreHashtable is constructed and use that to
     * decide when to spill to disk.
     */
    private long max_inmemory_rowcnt;
    private long inmemory_rowcnt;
    private long max_inmemory_size;
    private boolean keepAfterCommit;

    /**
     * The estimated number of bytes used by Vector(0)
     */
    private final static int vectorSize =
        ClassSize.estimateBaseFromCatalog(java.util.Vector.class);

    private DiskHashtable diskHashtable;

    /**************************************************************************
     * Constructors for This class:
     **************************************************************************
     */
    private BackingStoreHashtable() {}

    /**
     * Create the BackingStoreHashtable from a row source.
     * <p>
     * This routine drains the RowSource.  The performance characteristics
     * depend on the number of rows inserted and the parameters to the
     * constructor.
     * <p>
     * If the number of rows is &lt;= "max_inmemory_rowcnt", then the rows are
     * inserted into a java.util.Hashtable.  In this case no
     * TransactionController is necessary, a "null" tc is valid.
     * <p>
     * If the number of rows is &gt; "max_inmemory_rowcnt", then the rows will
     * all be placed in some sort of Access temporary file on disk.  This
     * case requires a valid TransactionController.
     *
     * @param tc                An open TransactionController to be used if the
     *                          hash table needs to overflow to disk.
     *
     * @param row_source        RowSource to read rows from.
     *
     * @param key_column_numbers The column numbers of the columns in the
     *                          scan result row to be the key to the Hashtable.
     *                          "0" is the first column in the scan result
     *                          row (which may be different from the first
     *                          row in the table of the scan).
     *
     * @param remove_duplicates Should the Hashtable automatically remove
     *                          duplicates, or should it create the Vector of
     *                          duplicates?
     *
     * @param estimated_rowcnt  The estimated number of rows in the hash table.
     *                          Pass in -1 if there is no estimate.
     *
     * @param max_inmemory_rowcnt
     *                          The maximum number of rows to insert into the
     *                          in-memory hash table before overflowing to disk.
     *                          Pass in -1 if there is no maximum.
     *
     * @param initialCapacity   If not "-1", used to initialize the java
     *                          Hashtable.
     *
     * @param loadFactor        If not "-1", used to initialize the java
     *                          Hashtable.
     *
     * @param skipNullKeyColumns Skip rows with a null key column, if true.
     *
     * @param keepAfterCommit   If true the hash table is kept after a commit;
     *                          if false the hash table is dropped on the next
     *                          commit.
     *
     * @exception  StandardException  Standard exception policy.
     **/
    public BackingStoreHashtable(
    TransactionController   tc,
    RowSource               row_source,
    int[]                   key_column_numbers,
    boolean                 remove_duplicates,
    long                    estimated_rowcnt,
    long                    max_inmemory_rowcnt,
    int                     initialCapacity,
    float                   loadFactor,
    boolean                 skipNullKeyColumns,
    boolean                 keepAfterCommit)
        throws StandardException
    {
        this.key_column_numbers  = key_column_numbers;
        this.remove_duplicates   = remove_duplicates;
        this.row_source          = row_source;
        this.skipNullKeyColumns  = skipNullKeyColumns;
        this.max_inmemory_rowcnt = max_inmemory_rowcnt;

        if (max_inmemory_rowcnt > 0)
            max_inmemory_size = Long.MAX_VALUE;
        else
            max_inmemory_size = Runtime.getRuntime().totalMemory() / 100;

        this.tc = tc;
        this.keepAfterCommit = keepAfterCommit;

        Object[] row;

        // Use the passed-in capacity and load factor if they are not -1; you
        // must specify a capacity if you want to specify a load factor.
        if (initialCapacity != -1)
        {
            hash_table =
                ((loadFactor == -1) ?
                     new Hashtable(initialCapacity) :
                     new Hashtable(initialCapacity, loadFactor));
        }
        else
        {
            /* We want to create the hash table based on the estimated row
             * count if a) we have an estimated row count (i.e. it's greater
             * than zero) and b) we think we can create a hash table to
             * hold the estimated row count without running out of memory.
             * The check for "b" is required because, for deeply nested
             * queries and/or queries with a high number of tables in
             * their FROM lists, the optimizer can end up calculating
             * some very high row count estimates--even up to the point of
             * Double.POSITIVE_INFINITY.  In that case attempts to
             * create a Hashtable of size estimated_rowcnt can cause
             * OutOfMemory errors when we try to create the Hashtable.
             * So as a "red flag" for that kind of situation, we check to
             * see if the estimated row count is greater than the max
             * in-memory size for this table.  Unit-wise this comparison
             * is relatively meaningless: rows vs bytes.  But if our
             * estimated row count is greater than the max number of
             * in-memory bytes that we're allowed to consume, then
             * it's very likely that creating a Hashtable with a capacity
             * of estimated_rowcnt will lead to memory problems.  So in
             * that particular case we leave hash_table null here and
             * initialize it further below, using the estimated in-memory
             * size of the first row to figure out what a reasonable size
             * for the Hashtable might be.
             */
            hash_table =
                (((estimated_rowcnt <= 0) || (row_source == null)) ?
                     new Hashtable() :
                     (estimated_rowcnt < max_inmemory_size) ?
                         new Hashtable((int) estimated_rowcnt) :
                         null);
        }

        if (row_source != null)
        {
            boolean needsToClone = row_source.needsToClone();

            while ((row = getNextRowFromRowSource()) != null)
            {
                // If we haven't initialized the hash_table yet then that's
                // because a Hashtable with capacity estimated_rowcnt would
                // probably cause memory problems.  So look at the first row
                // that we found and use that to create the hash table with
                // an initial capacity such that, if it was completely full,
                // it would still satisfy the max_inmemory condition.  Note
                // that this isn't a hard limit--the hash table can grow if
                // needed.
                if (hash_table == null)
                {
                    // Check to see how much memory we think the first row
                    // is going to take, and then use that to set the initial
                    // capacity of the Hashtable.
                    double rowUsage = getEstimatedMemUsage(row);
                    hash_table = new Hashtable((int) (max_inmemory_size / rowUsage));
                }

                if (needsToClone)
                {
                    row = cloneRow(row);
                }

                Object key =
                    KeyHasher.buildHashKey(row, key_column_numbers);

                add_row_to_hash_table(hash_table, key, row);
            }
        }

        // In the (unlikely) event that we received a "red flag" estimated_rowcnt
        // that is too big (see comments above), it's possible that, if row_source
        // was null or else didn't have any rows, hash_table could still be null
        // at this point.  So we initialize it to an empty hashtable (representing
        // an empty result set) so that calls to other methods on this
        // BackingStoreHashtable (ex. "size()") will have a working hash_table
        // on which to operate.
        if (hash_table == null)
            hash_table = new Hashtable();
    }

    /**************************************************************************
     * Private/Protected methods of This class:
     **************************************************************************
     */

    /**
     * Call method to either get the next row, or the next row with non-null
     * key columns.
     *
     * @exception  StandardException  Standard exception policy.
     */
    private Object[] getNextRowFromRowSource()
        throws StandardException
    {
        Object[] row = row_source.getNextRowFromRowSource();

        if (skipNullKeyColumns)
        {
            while (row != null)
            {
                // Are any key columns null?
                int index = 0;
                for ( ; index < key_column_numbers.length; index++)
                {
                    if (SanityManager.DEBUG)
                    {
                        if (! (row[key_column_numbers[index]] instanceof Storable))
                        {
                            SanityManager.THROWASSERT(
                                "row[key_column_numbers[index]] expected to be Storable, not " +
                                row[key_column_numbers[index]].getClass().getName());
                        }
                    }
                    Storable storable = (Storable) row[key_column_numbers[index]];
                    if (storable.isNull())
                    {
                        break;
                    }
                }

                // No null key columns
                if (index == key_column_numbers.length)
                {
                    return row;
                }

                // 1 or more null key columns
                row = row_source.getNextRowFromRowSource();
            }
        }
        return row;
    }

    /**
     * Return a cloned copy of the row.
     *
     * @return The cloned row to use.
     *
     * @exception  StandardException  Standard exception policy.
     **/
    static Object[] cloneRow(Object[] old_row)
        throws StandardException
    {
        Object[] new_row = new DataValueDescriptor[old_row.length];

        // The only difference between getClone and cloneObject is that
        // cloneObject does not objectify a stream.  We use getClone here.
        // Beetle 4896.
        for (int i = 0; i < old_row.length; i++)
        {
            if (old_row[i] != null)
                new_row[i] = ((DataValueDescriptor) old_row[i]).getClone();
        }

        return(new_row);
    }

    /**
     * Do the work to add one row to the hash table.
     * <p>
     *
     * @param hash_table        The java Hashtable to load into.
     * @param key               Key to store the row under.
     * @param row               Row to add to the hash table.
     *
     * @exception  StandardException  Standard exception policy.
     **/
    private void add_row_to_hash_table(
    Hashtable   hash_table,
    Object      key,
    Object[]    row)
        throws StandardException
    {
        if (spillToDisk(hash_table, key, row))
            return;

        Object duplicate_value = null;

        if ((duplicate_value = hash_table.put(key, row)) == null)
            doSpaceAccounting(row, false);
        else
        {
            if (!remove_duplicates)
            {
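The duplicate-chaining scheme sketched in the class javadoc relies on the fact that `java.util.Hashtable.put` returns the value it displaced (or `null`). Below is a minimal, self-contained sketch of that pattern, separate from the Derby code; `DuplicateChainDemo` and `insertRow` are hypothetical names for illustration, not Derby's API.

```java
import java.util.Hashtable;
import java.util.Vector;

// Standalone sketch of the duplicate-chaining scheme from the class javadoc:
// the first row for a key is stored directly; on a duplicate key, both rows
// are moved into a Vector that replaces the bare row under the same key.
public class DuplicateChainDemo {

    // Insert a row under key.  Hashtable.put returns the previously mapped
    // value (or null), which is how a duplicate key is detected.
    @SuppressWarnings("unchecked")
    static void insertRow(Hashtable<Object, Object> table, Object key, Object[] row) {
        Object displaced = table.put(key, row);
        if (displaced == null)
            return;                             // first row for this key

        Vector<Object> chain;
        if (displaced instanceof Vector) {
            chain = (Vector<Object>) displaced; // existing duplicate chain
        } else {
            chain = new Vector<>(2);
            chain.addElement(displaced);        // the original single row
        }
        chain.addElement(row);                  // the new duplicate
        table.put(key, chain);                  // chain replaces the bare row
    }
}
```

Note that the final `put` is required in both branches: the first `put` already replaced whatever was mapped (the original row or the chain) with the bare new row, so the `Vector` has to be re-installed afterwards.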
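The class javadoc also notes that when `key_column_numbers.length > 1`, the hash key must combine several columns at once. `KeyHasher.buildHashKey` itself is not shown on this page, so the following is only a hedged stand-in illustrating the behavior such a key needs (`KeySketch` and `buildKey` are hypothetical names): `equals()` and `hashCode()` must span every selected column, which `java.util.Arrays.asList` provides via the `List` contract.

```java
import java.util.Arrays;

// Hypothetical stand-in for the multi-column key building described in the
// javadoc: for a single key column the column value itself is the key; for
// several columns, the key combines all of them so that equals()/hashCode()
// cover every selected column.
public class KeySketch {
    static Object buildKey(Object[] row, int[] keyCols) {
        if (keyCols.length == 1)
            return row[keyCols[0]];      // single column: the value is the key

        Object[] parts = new Object[keyCols.length];
        for (int i = 0; i < keyCols.length; i++)
            parts[i] = row[keyCols[i]];
        return Arrays.asList(parts);     // List equals/hashCode span all parts
    }
}
```

Two rows that agree on all key columns then produce equal keys with equal hash codes, which is exactly what `java.util.Hashtable` needs for duplicate detection.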