/** File Name: ConcurrentHashMap.java
* Company: China Mobile Group
* Date: 2004-1-9
*/
package com.cmcc.mm7.vasp.common;
import java.util.Map;
import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.AbstractCollection;
import java.util.Collection;
import java.util.Set;
import java.util.Iterator;
import java.util.Enumeration;
import java.util.NoSuchElementException;
import java.io.Serializable;
import java.io.IOException;
/**
* A version of Hashtable supporting
* concurrency for both retrievals and updates:
*
* <dl>
* <dt> Retrievals
*
* <dd> Retrievals may overlap updates. (This is the same policy as
* ConcurrentReaderHashMap.) Successful retrievals using get(key) and
* containsKey(key) usually run without locking. Unsuccessful
* retrievals (i.e., when the key is not present) do involve brief
* synchronization (locking). Because retrieval operations can
* ordinarily overlap with update operations (i.e., put, remove, and
* their derivatives), retrievals can only be guaranteed to return the
* results of the most recently <em>completed</em> operations holding
* upon their onset. Retrieval operations may or may not return
* results reflecting in-progress writing operations. However, the
* retrieval operations do always return consistent results -- either
* those holding before any single modification or after it, but never
* a nonsense result. For aggregate operations such as putAll and
* clear, concurrent reads may reflect insertion or removal of only
* some entries.
* <p>
*
* Iterators and Enumerations (i.e., those returned by
* keySet().iterator(), entrySet().iterator(), values().iterator(),
* keys(), and elements()) return elements reflecting the state of the
* hash table at some point at or since the creation of the
* iterator/enumeration. They will return at most one instance of
* each element (via next()/nextElement()), but might or might not
* reflect puts and removes that have been processed since they were
* created. They do <em>not</em> throw ConcurrentModificationException.
* However, these iterators are designed to be used by only one
* thread at a time. Passing an iterator across multiple threads may
* lead to unpredictable results if the table is being concurrently
* modified. <p>
*
*
* <dt> Updates
*
* <dd> This class supports a hard-wired preset <em>concurrency
* level</em> of 32. This allows a maximum of 32 put and/or remove
* operations to proceed concurrently. This level is an upper bound on
* concurrency, not a guarantee, since it interacts with how
* well-strewn elements are across bins of the table. (The preset
* value in part reflects the fact that even on large multiprocessors,
* factors other than synchronization tend to be bottlenecks when more
* than 32 threads concurrently attempt updates.)
* Additionally, operations triggering internal resizing and clearing
* do not execute concurrently with any operation.
* <p>
*
* There is <em>NOT</em> any support for locking the entire table to
* prevent updates. This makes it impossible, for example, to
* add an element only if it is not already present, since another
* thread may be in the process of doing the same thing (see the
* sketch after this list).
* If you need such capabilities, consider instead using the
* ConcurrentReaderHashMap class.
*
* </dl>
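*
* For example, a check-then-act sequence such as the following (illustrative
* only; <tt>map</tt> denotes any instance of this class) is <em>not</em>
* atomic:
* <pre>
*   if (!map.containsKey(key))   // another thread may insert key here
*       map.put(key, value);
* </pre>
* <p>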
*
* Because of how concurrency control is split up, the
* size() and isEmpty() methods require accumulations across 32
* control segments, and so might be slightly slower than you expect.
* <p>
*
* This class may be used as a direct replacement for
* java.util.Hashtable in any application that does not rely
* on the ability to lock the entire table to prevent updates.
* As of this writing, it performs much faster than Hashtable in
* typical multi-threaded applications with multiple readers and writers.
* Like Hashtable but unlike java.util.HashMap,
* this class does NOT allow <tt>null</tt> to be used as a key or
* value.
* <p>
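* A minimal usage sketch (the variable names are illustrative only):
* <pre>
*   Map cache = new ConcurrentHashMap();
*   cache.put("session-42", "payload");    // locks only one of the 32 segments
*   Object v = cache.get("session-42");    // usually completes without locking
*   cache.remove("session-42");
* </pre>
* <p>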
*
* Implementation note: A slightly
* faster implementation of this class will be possible once planned
* Java Memory Model revisions are in place.
*
* <p>[<a href="http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html"> Introduction to this package. </a>]
**/
public class ConcurrentHashMap
        extends AbstractMap
        implements Map, Cloneable, Serializable {
/*
The basic strategy is an optimistic-style scheme based on
the guarantee that the hash table and its lists are always
kept in a consistent enough state to be read without locking:
* Read operations first proceed without locking, by traversing the
apparently correct list of the apparently correct bin. If an
entry is found, but not invalidated (value field null), it is
returned. If not found, operations must recheck (after a memory
barrier) to make sure they are using both the right list and
the right table (which can change under resizes). If
invalidated, reads must acquire main update lock to wait out
the update, and then re-traverse.
* All list additions are at the front of each bin, making it easy
to check changes, and also fast to traverse. Entry next
pointers are never assigned. Remove() builds new nodes when
necessary to preserve this.
* Remove() (also clear()) invalidates removed nodes to alert read
operations that they must wait out the full modifications.
* Locking for puts, removes (and, when necessary gets, etc)
is controlled by Segments, each covering a portion of the
table. During operations requiring global exclusivity (mainly
resize and clear), ALL of these locks are acquired at once.
Note that these segments are NOT contiguous -- they are based
on the least 5 bits of hashcodes. This ensures that the same
segment controls the same slots before and after resizing, which
is necessary for supporting concurrent retrievals. This
comes at the price of a mismatch of logical vs physical locality,
but this seems not to be a performance problem in practice.
*/
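/*
   An illustrative sketch of the indexing described above (the fields and the
   hash() helper are declared below; the local variables are hypothetical):

       int h = hash(key);
       Segment seg = segments[h & SEGMENT_MASK];   // low 5 bits of the hash
       int index = h & (table.length - 1);         // bin in the power-of-two table

   Doubling the table adds one bit to the bin index but leaves the low 5 bits,
   and hence the guarding segment, unchanged.
*/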
/**
* The hash table data.
*/
protected transient Entry[] table;
/**
* The number of concurrency control segments.
* The value can be at most 32 since ints are used
* as bitsets over segments. Empirically, it doesn't
* seem to pay to decrease it either, so the value should be at least 32.
* In other words, do not redefine this :-)
**/
protected static final int CONCURRENCY_LEVEL = 32;
/**
* Mask value for indexing into segments
**/
protected static final int SEGMENT_MASK = CONCURRENCY_LEVEL - 1;
/**
* Bookkeeping for each concurrency control segment.
* Each segment contains a local count of the number of
* elements in its region.
* However, the main use of a Segment is for its lock.
**/
protected final static class Segment {
    /**
     * The number of elements in this segment's region.
     * It is always updated within synchronized blocks.
     **/
    protected int count;

    /**
     * Get the count under synch.
     **/
    protected synchronized int getCount() { return count; }

    /**
     * Force a synchronization
     **/
    protected synchronized void synch() {}
}
/**
* The array of concurrency control segments.
**/
protected final Segment[] segments = new Segment[CONCURRENCY_LEVEL];
/**
* The default initial number of table slots for this table (32).
* Used when not otherwise specified in constructor.
**/
public static int DEFAULT_INITIAL_CAPACITY = 32;
/**
* The minimum capacity, used if a lower value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two.
*/
private static final int MINIMUM_CAPACITY = 32;
/**
* The maximum capacity, used if a higher value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two <= 1<<30.
*/
private static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The default load factor for this table (0.75)
* Used when not otherwise specified in constructor.
**/
public static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* The load factor for the hash table.
*
* @serial
*/
protected final float loadFactor;
/**
* Per-segment resize threshold.
*
* @serial
*/
protected int threshold;
/**
* Number of segments voting for resize. The table is
* doubled when 1/4 of the segments reach threshold.
* Volatile but updated without synch since this is just a heuristic.
**/
protected transient volatile int votesForResize;
/**
* Return the number of set bits in w.
* For a derivation of this algorithm, see
* "Algorithms and data structures with applications to
* graphics and geometry", by Jurg Nievergelt and Klaus Hinrichs,
* Prentice Hall, 1993.
* See also notes by Torsten Sillke at
* http://www.mathematik.uni-bielefeld.de/~sillke/PROBLEMS/bitcount
**/
protected static int bitcount(int w) {
    w -= (0xaaaaaaaa & w) >>> 1;
    w = (w & 0x33333333) + ((w >>> 2) & 0x33333333);
    w = (w + (w >>> 4)) & 0x0f0f0f0f;
    w += w >>> 8;
    w += w >>> 16;
    return w & 0xff;
}
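// Example values (illustrative): bitcount(0x0000000F) == 4,
// bitcount(0xFFFFFFFF) == 32.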
/**
* Returns the appropriate capacity (power of two) for the specified
* initial capacity argument.
*/
private int p2capacity(int initialCapacity) {
    int cap = initialCapacity;
    // Compute the appropriate capacity
    int result;
    if (cap > MAXIMUM_CAPACITY || cap < 0) {
        result = MAXIMUM_CAPACITY;
    } else {
        result = MINIMUM_CAPACITY;
        while (result < cap)
            result <<= 1;
    }
    return result;
}
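// Example values (illustrative): p2capacity(10) == 32 (the minimum),
// p2capacity(33) == 64, p2capacity(-1) == MAXIMUM_CAPACITY.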
/**
* Return hash code for Object x. Since we are using power-of-two
* tables, it is worth the effort to improve hashcode via
* the same multiplicative scheme as used in IdentityHashMap.
*/
protected static int hash(Object x) {
    int h = x.hashCode();
    // Multiply by 127 (quickly, via shifts), and mix in some high
    // bits to help guard against bunching of codes that are
    // consecutive or equally spaced.
    return ((h << 7) - h + (h >>> 9) + (h >>> 17));
}
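// Worked example (illustrative): for h = 120 ("x".hashCode()) the high-bit
// terms are zero, so the result is (120 << 7) - 120 == 15240, which maps to
// segment 15240 & SEGMENT_MASK == 8.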
/**
* Check for equality of non-null references x and y.
**/
protected boolean eq(Object x, Object y) {
    return x == y || x.equals(y);
}
/** Create table array and set the per-segment threshold **/
protected Entry[] newTable(int capacity) {
    threshold = (int)(capacity * loadFactor / CONCURRENCY_LEVEL) + 1;
    return new Entry[capacity];
}
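// Example (illustrative): with capacity 256 and the default load factor 0.75,
// threshold = (int)(256 * 0.75 / 32) + 1 == 7, so a segment counts toward the
// resize vote once its count reaches 7.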
/**
* Constructs a new, empty map with the specified initial
* capacity and the specified load factor.
*
* @param initialCapacity the initial capacity.
* The actual initial capacity is rounded to the nearest power of two.
* @param loadFactor the load factor threshold, used to control resizing.
* This value is used in an approximate way: When at least
* a quarter of the segments of the table reach per-segment threshold, or
* one of the segments itself exceeds overall threshold,
* the table is doubled.
* This will on average cause resizing when the table-wide
* load factor is slightly less than the threshold. If you'd like
* to avoid resizing, you can set this to a ridiculously large
* value.
* @throws IllegalArgumentException if the load factor is nonpositive.
*/
public ConcurrentHashMap(int initialCapacity,
                         float loadFactor) {
    if (!(loadFactor > 0))
        throw new IllegalArgumentException("Illegal Load factor: " +
                                           loadFactor);
    this.loadFactor = loadFactor;
    for (int i = 0; i < segments.length; ++i)
        segments[i] = new Segment();
    int cap = p2capacity(initialCapacity);
    table = newTable(cap);
}
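// Worked example (a sketch): new ConcurrentHashMap(1000, 0.75f) rounds the
// capacity up to 1024, so the per-segment threshold is
// (int)(1024 * 0.75 / 32) + 1 == 25, and the table doubles once a quarter
// of the 32 segments (8 of them) have reached that threshold.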
/**
* Constructs a new, empty map with the specified initial
* capacity and default load factor.
*
* @param initialCapacity the initial capacity of the
* ConcurrentHashMap.
* @throws IllegalArgumentException if the initial maximum number
* of elements is less
* than zero.
*/
public ConcurrentHashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
/**
* Constructs a new, empty map with a default initial capacity
* and default load factor.
*/
public ConcurrentHashMap() {
    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
}
/**
* Constructs a new map with the same mappings as the given map. The
* map is created with a capacity sufficient to hold the mappings in
* the given map, or 32 (whichever is greater), and a default load factor.
*/
public ConcurrentHashMap(Map t) {
    this(Math.max((int) (t.size() / DEFAULT_LOAD_FACTOR) + 1,
                  MINIMUM_CAPACITY),
         DEFAULT_LOAD_FACTOR);
    putAll(t);
}
/**
* Returns the number of key-value mappings in this map.
*
* @return the number of key-value mappings in this map.
*/
public int size() {
    int c = 0;
    for (int i = 0; i < segments.length; ++i)
        c += segments[i].getCount();
    return c;
}
/**
* Returns <tt>true</tt> if this map contains no key-value mappings.
*
* @return <tt>true</tt> if this map contains no key-value mappings.
*/
public boolean isEmpty() {
    for (int i = 0; i < segments.length; ++i)
        if (segments[i].getCount() != 0)
            return false;
    return true;
}