/*
 * Copyright (c) 2002-2003 by OpenSymphony
 * All rights reserved.
 */
/*
        File: AbstractConcurrentReadCache

        Written by Doug Lea. Adapted from JDK1.2 HashMap.java and Hashtable.java
        which carries the following copyright:

                 * Copyright 1997 by Sun Microsystems, Inc.,
                 * 901 San Antonio Road, Palo Alto, California, 94303, U.S.A.
                 * All rights reserved.
                 *
                 * This software is the confidential and proprietary information
                 * of Sun Microsystems, Inc. ("Confidential Information").  You
                 * shall not disclose such Confidential Information and shall use
                 * it only in accordance with the terms of the license agreement
                 * you entered into with Sun.

        This class is a modified version of ConcurrentReaderHashMap, which was written
        by Doug Lea (http://gee.cs.oswego.edu/dl/). The modifications were done
        by Pyxis Technologies. This is a base class for the OSCache module of the
        OpenSymphony project (www.opensymphony.com).

        History:
        Date       Who                What
        28oct1999  dl               Created
        14dec1999  dl               jmm snapshot
        19apr2000  dl               use barrierLock
        12jan2001  dl               public release
        Oct2001    abergevin@pyxis-tech.com
                   Integrated persistence and outer algorithm support
*/
package com.opensymphony.oscache.base.algorithm;


/** OpenSymphony BEGIN */
import com.opensymphony.oscache.base.CacheEntry;
import com.opensymphony.oscache.base.persistence.CachePersistenceException;
import com.opensymphony.oscache.base.persistence.PersistenceListener;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import java.io.IOException;
import java.io.Serializable;

import java.util.*;

/**
 * A version of Hashtable that supports mostly-concurrent reading, but exclusive writing.
 * Because reads are not limited to periods
 * without writes, a concurrent reader policy is weaker than a classic
 * reader/writer policy, but is generally faster and allows more
 * concurrency. This class is a good choice especially for tables that
 * are mainly created by one thread during the start-up phase of a
 * program, and from then on, are mainly read (with perhaps occasional
 * additions or removals) in many threads.  If you also need concurrency
 * among writes, consider instead using ConcurrentHashMap.
 * <p>
 *
 * Successful retrievals using get(key) and containsKey(key) usually
 * run without locking. Unsuccessful ones (i.e., when the key is not
 * present) do involve brief synchronization (locking).  Also, the
 * size and isEmpty methods are always synchronized.
 *
 * <p> Because retrieval operations can ordinarily overlap with
 * writing operations (i.e., put, remove, and their derivatives),
 * retrievals can only be guaranteed to return the results of the most
 * recently <em>completed</em> operations holding upon their
 * onset. Retrieval operations may or may not return results
 * reflecting in-progress writing operations.  However, the retrieval
 * operations do always return consistent results -- either those
 * holding before any single modification or after it, but never a
 * nonsense result.  For aggregate operations such as putAll and
 * clear, concurrent reads may reflect insertion or removal of only
 * some entries. In those rare contexts in which you use a hash table
 * to synchronize operations across threads (for example, to prevent
 * reads until after clears), you should either encase operations
 * in synchronized blocks, or instead use java.util.Hashtable.
 *
 * <p>
 *
 * This class also supports optional guaranteed
 * exclusive reads, simply by surrounding a call within a synchronized
 * block, as in <br>
 * <code>AbstractConcurrentReadCache t; ... Object v; <br>
 * synchronized(t) { v = t.get(k); } </code> <br>
 *
 * But this is not usually necessary in practice. For
 * example, it is generally inefficient to write:
 *
 * <pre>
 *   AbstractConcurrentReadCache t; ...            // Inefficient version
 *   Object key; ...
 *   Object value; ...
 *   synchronized(t) {
 *     if (!t.containsKey(key)) {
 *       t.put(key, value);
 *       // other code if not previously present
 *     }
 *     else {
 *       // other code if it was previously present
 *     }
 *   }
 * </pre>
 * Instead, just take advantage of the fact that put returns
 * null if the key was not previously present:
 * <pre>
 *   AbstractConcurrentReadCache t; ...                // Use this instead
 *   Object key; ...
 *   Object value; ...
 *   Object oldValue = t.put(key, value);
 *   if (oldValue == null) {
 *     // other code if not previously present
 *   }
 *   else {
 *     // other code if it was previously present
 *   }
 *</pre>
 * <p>
 *
 * Iterators and Enumerations (i.e., those returned by
 * keySet().iterator(), entrySet().iterator(), values().iterator(),
 * keys(), and elements()) return elements reflecting the state of the
 * hash table at some point at or since the creation of the
 * iterator/enumeration.  They will return at most one instance of
 * each element (via next()/nextElement()), but might or might not
 * reflect puts and removes that have been processed since they were
 * created.  They do <em>not</em> throw ConcurrentModificationException.
 * However, these iterators are designed to be used by only one
 * thread at a time. Sharing an iterator across multiple threads may
 * lead to unpredictable results if the table is being concurrently
 * modified.  Again, you can ensure interference-free iteration by
 * enclosing the iteration in a synchronized block.  <p>
 *
 * This class may be used as a direct replacement for any use of
 * java.util.Hashtable that does not depend on readers being blocked
 * during updates. Like Hashtable but unlike java.util.HashMap,
 * this class does NOT allow <tt>null</tt> to be used as a key or
 * value.  This class is also typically faster than ConcurrentHashMap
 * when there is usually only one thread updating the table, but
 * possibly many retrieving values from it.
 * <p>
 *
 * Implementation note: A slightly faster implementation of
 * this class will be possible once planned Java Memory Model
 * revisions are in place.
 *
 * <p>[<a href="http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html"> Introduction to this package. </a>]
 **/
public abstract class AbstractConcurrentReadCache extends AbstractMap implements Map, Cloneable, Serializable {
    /**
     * The default initial number of table slots for this table (32).
     * Used when not otherwise specified in constructor.
     **/
    public static final int DEFAULT_INITIAL_CAPACITY = 32;

    /**
     * The minimum capacity.
     * Used if a lower value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two.
     */
    private static final int MINIMUM_CAPACITY = 4;

    /**
     * The maximum capacity.
     * Used if a higher value is implicitly specified
     * by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     */
    private static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The default load factor for this table.
     * Used when not otherwise specified in constructor, the default is 0.75f.
     **/
    public static final float DEFAULT_LOAD_FACTOR = 0.75f;

    //OpenSymphony BEGIN (pretty long!)
    protected static final String NULL = "_nul!~";
    
    private static final Log log = LogFactory.getLog(AbstractConcurrentReadCache.class);

    /*
      The basic strategy is an optimistic-style scheme based on
      the guarantee that the hash table and its lists are always
      kept in a consistent enough state to be read without locking:

      * Read operations first proceed without locking, by traversing the
         apparently correct list of the apparently correct bin. If an
         entry is found, but not invalidated (value field null), it is
         returned. If not found, operations must recheck (after a memory
         barrier) to make sure they are using both the right list and
         the right table (which can change under resizes). If
         invalidated, reads must acquire main update lock to wait out
         the update, and then re-traverse.

      * All list additions are at the front of each bin, making it easy
         to check changes, and also fast to traverse.  Entry next
         pointers are never assigned. Remove() builds new nodes when
         necessary to preserve this.

      * Remove() (also clear()) invalidates removed nodes to alert read
         operations that they must wait out the full modifications.

    */
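The front-insertion, copy-on-remove list discipline described above can be sketched in isolation. The following is an illustrative stand-alone class (its names are not from this file): node `next` pointers are final and never reassigned, so a reader traversing concurrently never observes a mutated link; `remove()` instead rebuilds the prefix of the chain in front of the removed node, leaving the old chain intact for in-flight readers. A `volatile` head stands in for this class's `barrierLock` memory-barrier trick.

```java
import java.util.Objects;

// Sketch of one hash bin's list under the strategy described above.
final class BinList {
    static final class Node {
        final Object key;
        final Object value;
        final Node next;          // never reassigned after construction
        Node(Object key, Object value, Node next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private volatile Node head;   // volatile used here in place of barrierLock

    void put(Object key, Object value) {
        head = new Node(key, value, head);   // additions go at the front
    }

    Object get(Object key) {
        for (Node n = head; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) return n.value;
        }
        return null;
    }

    void remove(Object key) {
        Node h = head;
        for (Node n = h; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) {
                // Copy every node in front of the removed one; the old
                // chain, still visible to in-flight readers, is untouched.
                Node rebuilt = n.next;
                for (Node p = h; p != n; p = p.next) {
                    rebuilt = new Node(p.key, p.value, rebuilt);
                }
                head = rebuilt;
                return;
            }
        }
    }
}
```

Note that rebuilding by prepending reverses the order of the copied prefix; as in the real cache, ordering within a bin carries no meaning, so this is harmless.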

    /**
     * Lock used only for its memory effects. We use a Boolean
     * because it is serializable, and we create a new one because
     * we need a unique object for each cache instance.
     **/
    protected final Boolean barrierLock = new Boolean(true);

    /**
     * Field written to only to guarantee lock ordering.
     **/
    protected transient Object lastWrite;

    /**
     * The hash table data.
     */
    protected transient Entry[] table;

    /**
     * The total number of mappings in the hash table.
     */
    protected transient int count;

    /**
     * Persistence listener.
     */
    protected transient PersistenceListener persistenceListener = null;

    /**
     * Use memory cache or not.
     */
    protected boolean memoryCaching = true;

    /**
     * Use unlimited disk caching.
     */
    protected boolean unlimitedDiskCache = false;

    /**
     * The load factor for the hash table.
     *
     * @serial
     */
    protected float loadFactor;

    /**
     * Default cache capacity (number of entries).
     */
    protected final int DEFAULT_MAX_ENTRIES = 100;

    /**
     * Maximum number of elements in the cache when it is considered unlimited
     * (Integer.MAX_VALUE - 1).
     */
    protected final int UNLIMITED = 2147483646;
    protected transient Collection values = null;

    /**
     * A HashMap containing the group information.
     * Each entry uses the group name as the key, and holds a
     * <code>Set</code> containing the keys of all
     * the cache entries that belong to that particular group.
     */
    protected HashMap groups = new HashMap();
    protected transient Set entrySet = null;

    // Views
    protected transient Set keySet = null;

    /**
     * Cache capacity (number of entries).
     */
    protected int maxEntries = DEFAULT_MAX_ENTRIES;

    /**
     * The table is rehashed when its size exceeds this threshold.
     * (The value of this field is always (int)(capacity * loadFactor).)
     *
     * @serial
     */
    protected int threshold;

    /**
     * Use overflow persistence caching.
     */
    private boolean overflowPersistence = false;

    /**
     * Constructs a new, empty map with the specified initial capacity and the specified load factor.
     *
     * @param initialCapacity the initial capacity.
     *  The actual initial capacity is rounded up to the nearest power of two.
     * @param loadFactor  the load factor of the AbstractConcurrentReadCache
     * @throws IllegalArgumentException  if the load factor is nonpositive.
     */
    public AbstractConcurrentReadCache(int initialCapacity, float loadFactor) {
        if (loadFactor <= 0) {
            throw new IllegalArgumentException("Illegal Load factor: " + loadFactor);
        }

        this.loadFactor = loadFactor;

        int cap = p2capacity(initialCapacity);
        table = new Entry[cap];
        threshold = (int) (cap * loadFactor);
    }
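    The rounding step above is delegated to p2capacity(), whose body is not part of this excerpt. Assuming it behaves as the javadoc and the MINIMUM_CAPACITY/MAXIMUM_CAPACITY fields imply, it can be sketched as follows (the class and method names here are illustrative, not from this file):

```java
// Illustrative power-of-two capacity rounding, matching the documented
// behavior of the (not shown) p2capacity() helper: the result is the
// smallest power of two >= the requested size, clamped to [min, max].
final class CapacityMath {
    static int roundToPowerOfTwo(int requested, int min, int max) {
        int cap = min;                           // min is a power of two
        while (cap < requested && cap < max) {
            cap <<= 1;                           // stays a power of two
        }
        return cap;
    }
}
```

    For example, with min = 4 and max = 1 &lt;&lt; 30 as in this class, a requested capacity of 33 rounds up to 64, and any nonpositive request clamps to the minimum of 4.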

    /**
     * Constructs a new, empty map with the specified initial capacity and the default load factor.
     *
     * @param initialCapacity the initial capacity of the
     *                        AbstractConcurrentReadCache.
     *                        The actual initial capacity is rounded up to the
     *                        nearest power of two.
     */
    public AbstractConcurrentReadCache(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs a new, empty map with a default initial capacity and load factor.
     */
    public AbstractConcurrentReadCache() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs a new map with the same mappings as the given map.
     * The map is created with a capacity of twice the number of mappings in
     * the given map or 11 (whichever is greater), and a default load factor.
     *
     * @param t the map whose mappings are to be placed in this map.
     */
    public AbstractConcurrentReadCache(Map t) {
        this(Math.max(2 * t.size(), 11), DEFAULT_LOAD_FACTOR);
        putAll(t);
    }
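    The put-returns-null idiom recommended in the class javadoc works against any java.util.Map. Since this class is abstract, the sketch below uses java.util.Hashtable as a stand-in, and the class and method names are illustrative, not from this file:

```java
import java.util.Hashtable;
import java.util.Map;

public class PutIdiomDemo {
    // put() returns the previous value, or null if the key was absent, so a
    // single call both stores the mapping and reports prior presence --
    // no separate containsKey() check (and no lock around the pair) needed.
    static boolean putAndCheckNew(Map<String, String> t, String key, String value) {
        Object oldValue = t.put(key, value);
        return oldValue == null;
    }

    public static void main(String[] args) {
        Map<String, String> t = new Hashtable<>();
        System.out.println(putAndCheckNew(t, "k", "v1")); // newly added
        System.out.println(putAndCheckNew(t, "k", "v2")); // was already present
    }
}
```

    Unlike ConcurrentMap.putIfAbsent(), this idiom always overwrites the existing value; it only tells you afterwards whether one was there.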

    /**
     * Returns <tt>true</tt> if this map contains no key-value mappings.
     *
     * @return <tt>true</tt> if this map contains no key-value mappings.
     */
    public synchronized boolean isEmpty() {
        return count == 0;
    }

    /**
     * Returns a set of the cache keys that reside in a particular group.
     *
     * @param   groupName The name of the group to retrieve.
     * @return  a set containing all of the keys of cache entries that belong
     * to this group, or <code>null</code> if the group was not found.
     * @exception  NullPointerException if the groupName is <code>null</code>.
     */
    public Set getGroup(String groupName) {
        if (log.isDebugEnabled()) {
            log.debug("getGroup called (group=" + groupName + ")");
        }

        Set groupEntries = null;

        if (memoryCaching && (groups != null)) {
