/*
 * @(#)AbstractQueuedSynchronizer.java	1.4 07/01/04
 *
 * Copyright 2004 Sun Microsystems, Inc. All rights reserved.
 * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
 */

package java.util.concurrent.locks;

import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
import sun.misc.Unsafe;

/**
 * Provides a framework for implementing blocking locks and related
 * synchronizers (semaphores, events, etc) that rely on
 * first-in-first-out (FIFO) wait queues.  This class is designed to
 * be a useful basis for most kinds of synchronizers that rely on a
 * single atomic <tt>int</tt> value to represent state. Subclasses
 * must define the protected methods that change this state, and which
 * define what that state means in terms of this object being acquired
 * or released.  Given these, the other methods in this class carry
 * out all queuing and blocking mechanics. Subclasses can maintain
 * other state fields, but only the atomically updated <tt>int</tt>
 * value manipulated using methods {@link #getState}, {@link
 * #setState} and {@link #compareAndSetState} is tracked with respect
 * to synchronization.
 *
 * <p>Subclasses should be defined as non-public internal helper
 * classes that are used to implement the synchronization properties
 * of their enclosing class.  Class
 * <tt>AbstractQueuedSynchronizer</tt> does not implement any
 * synchronization interface.  Instead it defines methods such as
 * {@link #acquireInterruptibly} that can be invoked as
 * appropriate by concrete locks and related synchronizers to
 * implement their public methods.
 *
 * <p>This class supports either or both a default <em>exclusive</em>
 * mode and a <em>shared</em> mode. When acquired in exclusive mode,
 * attempted acquires by other threads cannot succeed. Shared mode
 * acquires by multiple threads may (but need not) succeed. This class
 * does not &quot;understand&quot; these differences except in the
 * mechanical sense that when a shared mode acquire succeeds, the next
 * waiting thread (if one exists) must also determine whether it can
 * acquire as well. Threads waiting in the different modes share the
 * same FIFO queue. Usually, implementation subclasses support only
 * one of these modes, but both can come into play for example in a
 * {@link ReadWriteLock}. Subclasses that support only exclusive or
 * only shared modes need not define the methods supporting the
 * unused mode.
 *
 * <p>This class defines a nested {@link ConditionObject} class that
 * can be used as a {@link Condition} implementation by subclasses
 * supporting exclusive mode for which method {@link
 * #isHeldExclusively} reports whether synchronization is exclusively
 * held with respect to the current thread, method {@link #release}
 * invoked with the current {@link #getState} value fully releases
 * this object, and {@link #acquire}, given this saved state value,
 * eventually restores this object to its previous acquired state.  No
 * <tt>AbstractQueuedSynchronizer</tt> method otherwise creates such a
 * condition, so if this constraint cannot be met, do not use it.  The
 * behavior of {@link ConditionObject} depends of course on the
 * semantics of its synchronizer implementation.
 *
 * <p>This class provides inspection, instrumentation, and monitoring
 * methods for the internal queue, as well as similar methods for
 * condition objects. These can be exported as desired into classes
 * using an <tt>AbstractQueuedSynchronizer</tt> for their
 * synchronization mechanics.
 *
 * <p>Serialization of this class stores only the underlying atomic
 * integer maintaining state, so deserialized objects have empty
 * thread queues. Typical subclasses requiring serializability will
 * define a <tt>readObject</tt> method that restores this to a known
 * initial state upon deserialization.
 *
 * <h3>Usage</h3>
 *
 * <p>To use this class as the basis of a synchronizer, redefine the
 * following methods, as applicable, by inspecting and/or modifying
 * the synchronization state using {@link #getState}, {@link
 * #setState} and/or {@link #compareAndSetState}:
 *
 * <ul>
 * <li> {@link #tryAcquire}
 * <li> {@link #tryRelease}
 * <li> {@link #tryAcquireShared}
 * <li> {@link #tryReleaseShared}
 * <li> {@link #isHeldExclusively}
 * </ul>
 *
 * Each of these methods by default throws {@link
 * UnsupportedOperationException}.  Implementations of these methods
 * must be internally thread-safe, and should in general be short and
 * not block. Defining these methods is the <em>only</em> supported
 * means of using this class. All other methods are declared
 * <tt>final</tt> because they cannot be independently varied.
 *
 * <p>Even though this class is based on an internal FIFO queue, it
 * does not automatically enforce FIFO acquisition policies.  The core
 * of exclusive synchronization takes the form:
 *
 * <pre>
 * Acquire:
 *     while (!tryAcquire(arg)) {
 *        <em>enqueue thread if it is not already queued</em>;
 *        <em>possibly block current thread</em>;
 *     }
 *
 * Release:
 *     if (tryRelease(arg))
 *        <em>unblock the first queued thread</em>;
 * </pre>
 *
 * (Shared mode is similar but may involve cascading signals.)
 *
 * <p>Because checks in acquire are invoked before enqueuing, a newly
 * acquiring thread may <em>barge</em> ahead of others that are
 * blocked and queued. However, you can, if desired, define
 * <tt>tryAcquire</tt> and/or <tt>tryAcquireShared</tt> to disable
 * barging by internally invoking one or more of the inspection
 * methods. In particular, a strict FIFO lock can define
 * <tt>tryAcquire</tt> to immediately return <tt>false</tt> if {@link
 * #getFirstQueuedThread} does not return the current thread.  A
 * normally preferable non-strict fair version can immediately return
 * <tt>false</tt> only if {@link #hasQueuedThreads} returns
 * <tt>true</tt> and <tt>getFirstQueuedThread</tt> is not the current
 * thread; or equivalently, that <tt>getFirstQueuedThread</tt> is both
 * non-null and not the current thread.  Further variations are
 * possible.
 *
 * <p>Throughput and scalability are generally highest for the
 * default barging (also known as <em>greedy</em>,
 * <em>renouncement</em>, and <em>convoy-avoidance</em>) strategy.
 * While this is not guaranteed to be fair or starvation-free, earlier
 * queued threads are allowed to recontend before later queued
 * threads, and each recontention has an unbiased chance to succeed
 * against incoming threads.  Also, while acquires do not
 * &quot;spin&quot; in the usual sense, they may perform multiple
 * invocations of <tt>tryAcquire</tt> interspersed with other
 * computations before blocking.  This gives most of the benefits of
 * spins when exclusive synchronization is only briefly held, without
 * most of the liabilities when it isn't. If so desired, you can
 * augment this by preceding calls to acquire methods with
 * "fast-path" checks, possibly prechecking {@link #hasContended}
 * and/or {@link #hasQueuedThreads} to only do so if the synchronizer
 * is likely not to be contended.
 *
 * <p>This class provides an efficient and scalable basis for
 * synchronization in part by specializing its range of use to
 * synchronizers that can rely on <tt>int</tt> state, acquire, and
 * release parameters, and an internal FIFO wait queue. When this does
 * not suffice, you can build synchronizers from a lower level using
 * {@link java.util.concurrent.atomic atomic} classes, your own custom
 * {@link java.util.Queue} classes, and {@link LockSupport} blocking
 * support.
 *
 * <h3>Usage Examples</h3>
 *
 * <p>Here is a non-reentrant mutual exclusion lock class that uses
 * the value zero to represent the unlocked state, and one to
 * represent the locked state. It also supports conditions and exposes
 * one of the instrumentation methods:
 *
 * <pre>
 * class Mutex implements Lock, java.io.Serializable {
 *
 *    // Our internal helper class
 *    private static class Sync extends AbstractQueuedSynchronizer {
 *      // Report whether in locked state
 *      protected boolean isHeldExclusively() {
 *        return getState() == 1;
 *      }
 *
 *      // Acquire the lock if state is zero
 *      public boolean tryAcquire(int acquires) {
 *        assert acquires == 1; // Otherwise unused
 *        return compareAndSetState(0, 1);
 *      }
 *
 *      // Release the lock by setting state to zero
 *      protected boolean tryRelease(int releases) {
 *        assert releases == 1; // Otherwise unused
 *        if (getState() == 0) throw new IllegalMonitorStateException();
 *        setState(0);
 *        return true;
 *      }
 *
 *      // Provide a Condition
 *      Condition newCondition() { return new ConditionObject(); }
 *
 *      // Deserialize properly
 *      private void readObject(ObjectInputStream s)
 *          throws IOException, ClassNotFoundException {
 *        s.defaultReadObject();
 *        setState(0); // reset to unlocked state
 *      }
 *    }
 *
 *    // The sync object does all the hard work. We just forward to it.
 *    private final Sync sync = new Sync();
 *
 *    public void lock()                { sync.acquire(1); }
 *    public boolean tryLock()          { return sync.tryAcquire(1); }
 *    public void unlock()              { sync.release(1); }
 *    public Condition newCondition()   { return sync.newCondition(); }
 *    public boolean isLocked()         { return sync.isHeldExclusively(); }
 *    public boolean hasQueuedThreads() { return sync.hasQueuedThreads(); }
 *    public void lockInterruptibly() throws InterruptedException {
 *      sync.acquireInterruptibly(1);
 *    }
 *    public boolean tryLock(long timeout, TimeUnit unit)
 *        throws InterruptedException {
 *      return sync.tryAcquireNanos(1, unit.toNanos(timeout));
 *    }
 * }
 * </pre>
 *
 * <p>Here is a latch class that is like a {@link CountDownLatch}
 * except that it only requires a single <tt>signal</tt> to
 * fire. Because a latch is non-exclusive, it uses the <tt>shared</tt>
 * acquire and release methods.
 *
 * <pre>
 * class BooleanLatch {
 *
 *    private static class Sync extends AbstractQueuedSynchronizer {
 *      boolean isSignalled() { return getState() != 0; }
 *
 *      protected int tryAcquireShared(int ignore) {
 *        return isSignalled() ? 1 : -1;
 *      }
 *
 *      protected boolean tryReleaseShared(int ignore) {
 *        setState(1);
 *        return true;
 *      }
 *    }
 *
 *    private final Sync sync = new Sync();
 *    public boolean isSignalled() { return sync.isSignalled(); }
 *    public void signal()         { sync.releaseShared(1); }
 *    public void await() throws InterruptedException {
 *      sync.acquireSharedInterruptibly(1);
 *    }
 * }
 * </pre>
 *
 * @since 1.5
 * @author Doug Lea
 */
public abstract class AbstractQueuedSynchronizer
    implements java.io.Serializable {

    private static final long serialVersionUID = 7373984972572414691L;

    /**
     * Creates a new <tt>AbstractQueuedSynchronizer</tt> instance
     * with initial synchronization state of zero.
     */
    protected AbstractQueuedSynchronizer() { }

    /**
     * Wait queue node class.
     *
     * <p>The wait queue is a variant of a "CLH" (Craig, Landin, and
     * Hagersten) lock queue. CLH locks are normally used for
     * spinlocks.  We instead use them for blocking synchronizers, but
     * use the same basic tactic of holding some of the control
     * information about a thread in the predecessor of its node.  A
     * "status" field in each node keeps track of whether a thread
     * should block.  A node is signalled when its predecessor
     * releases.  Each node of the queue otherwise serves as a
     * specific-notification-style monitor holding a single waiting
     * thread. The status field does NOT control whether threads are
     * granted locks etc though.  A thread may try to acquire if it is
     * first in the queue. But being first does not guarantee success;
     * it only gives the right to contend.  So the currently released
     * contender thread may need to rewait.
     *
     * <p>To enqueue a node into a CLH lock, you atomically splice it
     * in as the new tail. To dequeue, you just set the head field.
     * <pre>
     *      +------+  prev +-----+       +-----+
     * head |      | <---- |     | <---- |     |  tail
     *      +------+       +-----+       +-----+
     * </pre>
     *
     * <p>Insertion into a CLH queue requires only a single atomic
     * operation on "tail", so there is a simple atomic point of
     * demarcation from unqueued to queued. Similarly, dequeuing
     * involves only updating the "head". However, it takes a bit
     * more work for nodes to determine who their successors are,
     * in part to deal with possible cancellation due to timeouts
     * and interrupts.
     *
     * <p>The "prev" links (not used in original CLH locks) are mainly
     * needed to handle cancellation. If a node is cancelled, its
     * successor is (normally) relinked to a non-cancelled
     * predecessor. For explanation of similar mechanics in the case
     * of spin locks, see the papers by Scott and Scherer at
     * http://www.cs.rochester.edu/u/scott/synchronization/
     *
     * <p>We also use "next" links to implement blocking mechanics.
     * The thread id for each node is kept in its own node, so a
     * predecessor signals the next node to wake up by traversing
     * the next link to determine which thread it is.  Determination
     * of successor must avoid races with newly queued nodes to set
     * the "next" fields of their predecessors.  This is solved
     * when necessary by checking backwards from the atomically
     * updated "tail" when a node's successor appears to be null.
     * (Or, said differently, the next-links are an optimization
     * so that we don't usually need a backward scan.)
     *
     * <p>Cancellation introduces some conservatism to the basic
     * algorithms.  Since we must poll for cancellation of other
     * nodes, we can miss noticing whether a cancelled node is
     * ahead or behind us. This is dealt with by always unparking
     * successors upon cancellation, allowing them to stabilize on
     * a new predecessor.
     *
     * <p>CLH queues need a dummy header node to get started. But
     * we don't create them on construction, because it would be wasted
     * effort if there is never contention. Instead, the node
     * is constructed and head and tail pointers are set upon first
     * contention.
     *
     * <p>Threads waiting on Conditions use the same nodes, but
     * use an additional link. Conditions only need to link nodes
     * in simple (non-concurrent) linked queues because they are
     * only accessed when exclusively held.  Upon await, a node is
     * inserted into a condition queue.  Upon signal, the node is
     * transferred to the main queue.  A special value of the status
     * field is used to mark which queue a node is on.
     *
     * <p>Thanks go to Dave Dice, Mark Moir, Victor Luchangco, Bill
     * Scherer and Michael Scott, along with members of the JSR-166
     * expert group, for helpful ideas, discussions, and critiques
     * on the design of this class.
     */
    static final class Node {
        /** waitStatus value to indicate thread has cancelled */
        static final int CANCELLED =  1;

        /** waitStatus value to indicate thread needs unparking */
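(The source continues on the following pages.) The shared-mode usage described in the Javadoc can be seen end to end by driving the `BooleanLatch` from the usage example above. The latch class below is reproduced from that example; the `LatchDemo` driver class is our own illustration, not part of the JDK source:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// BooleanLatch is copied from the Javadoc usage example above:
// a one-shot latch built on AQS shared mode. State 0 = not yet
// signalled, state 1 = signalled (permanently open).
class BooleanLatch {
    private static class Sync extends AbstractQueuedSynchronizer {
        boolean isSignalled() { return getState() != 0; }

        // Succeed (positive result) only once the latch has fired.
        protected int tryAcquireShared(int ignore) {
            return isSignalled() ? 1 : -1;
        }

        // Fire the latch; all current and future waiters proceed.
        protected boolean tryReleaseShared(int ignore) {
            setState(1);
            return true;
        }
    }

    private final Sync sync = new Sync();
    public boolean isSignalled() { return sync.isSignalled(); }
    public void signal()         { sync.releaseShared(1); }
    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }
}

// Our own driver: one worker fires the latch, main() waits on it.
public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        final BooleanLatch latch = new BooleanLatch();
        Thread worker = new Thread(new Runnable() {
            public void run() { latch.signal(); }
        });
        worker.start();
        latch.await();   // blocks until signal() has been called
        System.out.println("signalled: " + latch.isSignalled());
        // prints "signalled: true"
    }
}
```

Because `tryReleaseShared` leaves the state at 1 permanently, `await()` returns immediately for any thread that arrives after the first `signal()`, which is exactly the single-shot behavior the Javadoc contrasts with `CountDownLatch`.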
