// ThreadPoolExecutor.java
// From the OAA framework software released by SRI International (Java, 1,576 lines)
/*
* Written by Doug Lea with assistance from members of JCP JSR-166
* Expert Group and released to the public domain, as explained at
* http://creativecommons.org/licenses/publicdomain
*/
package edu.emory.mathcs.backport.java.util.concurrent;
//import java.util.concurrent.locks.*;
import java.util.*;
import edu.emory.mathcs.backport.java.util.Assert;
import edu.emory.mathcs.backport.java.util.concurrent.locks.*;
import edu.emory.mathcs.backport.java.util.concurrent.helpers.*;
/**
* An {@link ExecutorService} that executes each submitted task using
* one of possibly several pooled threads, normally configured
* using {@link Executors} factory methods.
*
* <p>Thread pools address two different problems: they usually
* provide improved performance when executing large numbers of
* asynchronous tasks, due to reduced per-task invocation overhead,
* and they provide a means of bounding and managing the resources,
* including threads, consumed when executing a collection of tasks.
* Each <tt>ThreadPoolExecutor</tt> also maintains some basic
* statistics, such as the number of completed tasks.
*
* <p>To be useful across a wide range of contexts, this class
* provides many adjustable parameters and extensibility
* hooks. However, programmers are urged to use the more convenient
* {@link Executors} factory methods {@link
* Executors#newCachedThreadPool} (unbounded thread pool, with
* automatic thread reclamation), {@link Executors#newFixedThreadPool}
* (fixed size thread pool) and {@link
* Executors#newSingleThreadExecutor} (single background thread), that
* preconfigure settings for the most common usage
* scenarios. Otherwise, use the following guide when manually
* configuring and tuning this class:
*
* <dl>
*
* <dt>Core and maximum pool sizes</dt>
*
* <dd>A <tt>ThreadPoolExecutor</tt> will automatically adjust the
* pool size
* (see {@link ThreadPoolExecutor#getPoolSize})
* according to the bounds set by corePoolSize
* (see {@link ThreadPoolExecutor#getCorePoolSize})
* and
* maximumPoolSize
* (see {@link ThreadPoolExecutor#getMaximumPoolSize}).
* When a new task is submitted in method {@link
* ThreadPoolExecutor#execute}, and fewer than corePoolSize threads
* are running, a new thread is created to handle the request, even if
* other worker threads are idle. If there are more than
* corePoolSize but less than maximumPoolSize threads running, a new
* thread will be created only if the queue is full. By setting
* corePoolSize and maximumPoolSize the same, you create a fixed-size
* thread pool. By setting maximumPoolSize to an essentially unbounded
* value such as <tt>Integer.MAX_VALUE</tt>, you allow the pool to
* accommodate an arbitrary number of concurrent tasks. Most typically,
* core and maximum pool sizes are set only upon construction, but they
* may also be changed dynamically using {@link
* ThreadPoolExecutor#setCorePoolSize} and {@link
 * ThreadPoolExecutor#setMaximumPoolSize}. </dd>
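 *
 * <p>The sizing rules above can be observed directly. The sketch below is an
 * illustrative demo, not part of this file; it uses the standard
 * <tt>java.util.concurrent</tt> package (whose API this backport mirrors), and
 * the pool and queue sizes are arbitrary choices:
 *
```java
import java.util.concurrent.*;

public class PoolSizeDemo {
    public static void main(String[] args) throws Exception {
        // corePoolSize = 2, maximumPoolSize = 4, 60s keep-alive,
        // and a bounded queue of capacity 2 (all illustrative values).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4, 60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(2));
        final CountDownLatch release = new CountDownLatch(1);
        // Submit 6 blocking tasks: 2 start core threads, 2 fill the queue,
        // and the last 2 force extra threads up to maximumPoolSize.
        for (int i = 0; i < 6; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { release.await(); } catch (InterruptedException e) { }
                }
            });
        }
        if (pool.getPoolSize() != 4)
            throw new AssertionError("expected 4 threads, got " + pool.getPoolSize());
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```
 *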
*
 * <dt>On-demand construction</dt>
*
* <dd> By default, even core threads are initially created and
* started only when needed by new tasks, but this can be overridden
* dynamically using method {@link
* ThreadPoolExecutor#prestartCoreThread} or
* {@link ThreadPoolExecutor#prestartAllCoreThreads}. </dd>
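 *
 * <p>A minimal sketch of prestarting (not part of this file; it uses the
 * standard <tt>java.util.concurrent</tt> package, and the pool size of 3 is an
 * arbitrary choice):
 *
```java
import java.util.concurrent.*;

public class PrestartDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            3, 3, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
        // No tasks have been submitted, so no threads exist yet.
        if (pool.getPoolSize() != 0) throw new AssertionError();
        // prestartCoreThread starts a single idle core thread...
        pool.prestartCoreThread();
        if (pool.getPoolSize() != 1) throw new AssertionError();
        // ...and prestartAllCoreThreads starts the rest, returning the count.
        int started = pool.prestartAllCoreThreads();
        if (started != 2 || pool.getPoolSize() != 3) throw new AssertionError();
        pool.shutdown();
    }
}
```
 *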
*
* <dt>Creating new threads</dt>
*
* <dd>New threads are created using a {@link
* edu.emory.mathcs.backport.java.util.concurrent.ThreadFactory}. If not otherwise specified, a
 * an {@link Executors#defaultThreadFactory} is used, which creates threads
 * that all belong to the same {@link ThreadGroup}, with the same
 * <tt>NORM_PRIORITY</tt> priority and non-daemon status. By supplying
* a different ThreadFactory, you can alter the thread's name, thread
* group, priority, daemon status, etc. If a <tt>ThreadFactory</tt> fails to create
* a thread when asked by returning null from <tt>newThread</tt>,
* the executor will continue, but might
* not be able to execute any tasks. </dd>
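 *
 * <p>An illustrative custom <tt>ThreadFactory</tt> (not part of this file;
 * it uses the standard <tt>java.util.concurrent</tt> package, and the
 * "worker-" naming scheme and daemon status are demo choices):
 *
```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedFactoryDemo {
    public static void main(String[] args) throws Exception {
        final AtomicInteger counter = new AtomicInteger();
        ThreadFactory factory = new ThreadFactory() {
            public Thread newThread(Runnable r) {
                // Name, daemon status, and priority are all adjustable here.
                Thread t = new Thread(r, "worker-" + counter.incrementAndGet());
                t.setDaemon(true);
                t.setPriority(Thread.NORM_PRIORITY);
                return t;
            }
        };
        ExecutorService pool = Executors.newFixedThreadPool(2, factory);
        final String[] name = new String[1];
        // Record the name of the thread that actually ran the task.
        pool.submit(new Runnable() {
            public void run() { name[0] = Thread.currentThread().getName(); }
        }).get();
        if (!name[0].startsWith("worker-")) throw new AssertionError(name[0]);
        pool.shutdown();
    }
}
```
 *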
*
* <dt>Keep-alive times</dt>
*
* <dd>If the pool currently has more than corePoolSize threads,
* excess threads will be terminated if they have been idle for more
* than the keepAliveTime (see {@link
* ThreadPoolExecutor#getKeepAliveTime}). This provides a means of
* reducing resource consumption when the pool is not being actively
* used. If the pool becomes more active later, new threads will be
* constructed. This parameter can also be changed dynamically using
* method {@link ThreadPoolExecutor#setKeepAliveTime}. Using a value
* of <tt>Long.MAX_VALUE</tt> {@link TimeUnit#NANOSECONDS} effectively
* disables idle threads from ever terminating prior to shut down. By
* default, the keep-alive policy applies only when there are more
 * than corePoolSize threads. But method {@link
* ThreadPoolExecutor#allowCoreThreadTimeOut} can be used to apply
* this time-out policy to core threads as well. </dd>
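 *
 * <p>A sketch of core-thread timeout (not part of this file; it uses the
 * standard <tt>java.util.concurrent</tt> package, and the 50ms keep-alive is
 * deliberately short for demonstration):
 *
```java
import java.util.concurrent.*;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2, 50L, TimeUnit.MILLISECONDS,  // 50ms keep-alive, illustrative
            new LinkedBlockingQueue<Runnable>());
        // Let idle *core* threads time out too (off by default).
        pool.allowCoreThreadTimeOut(true);
        pool.execute(new Runnable() { public void run() { } });
        // Wait (with a generous deadline) for the idle thread to be reclaimed.
        long deadline = System.currentTimeMillis() + 5000;
        while (pool.getPoolSize() != 0 && System.currentTimeMillis() < deadline)
            Thread.sleep(20);
        if (pool.getPoolSize() != 0)
            throw new AssertionError("pool size " + pool.getPoolSize());
        pool.shutdown();
    }
}
```
 *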
*
* <dt>Queuing</dt>
*
* <dd>Any {@link BlockingQueue} may be used to transfer and hold
* submitted tasks. The use of this queue interacts with pool sizing:
*
* <ul>
*
* <li> If fewer than corePoolSize threads are running, the Executor
* always prefers adding a new thread
* rather than queuing.</li>
*
* <li> If corePoolSize or more threads are running, the Executor
* always prefers queuing a request rather than adding a new
* thread.</li>
*
* <li> If a request cannot be queued, a new thread is created unless
* this would exceed maximumPoolSize, in which case, the task will be
* rejected.</li>
*
* </ul>
*
* There are three general strategies for queuing:
* <ol>
*
* <li> <em> Direct handoffs.</em> A good default choice for a work
* queue is a {@link SynchronousQueue} that hands off tasks to threads
* without otherwise holding them. Here, an attempt to queue a task
* will fail if no threads are immediately available to run it, so a
* new thread will be constructed. This policy avoids lockups when
* handling sets of requests that might have internal dependencies.
* Direct handoffs generally require unbounded maximumPoolSizes to
 * avoid rejection of newly submitted tasks. This in turn admits the
* possibility of unbounded thread growth when commands continue to
* arrive on average faster than they can be processed. </li>
*
* <li><em> Unbounded queues.</em> Using an unbounded queue (for
* example a {@link LinkedBlockingQueue} without a predefined
* capacity) will cause new tasks to be queued in cases where all
* corePoolSize threads are busy. Thus, no more than corePoolSize
* threads will ever be created. (And the value of the maximumPoolSize
* therefore doesn't have any effect.) This may be appropriate when
* each task is completely independent of others, so tasks cannot
 * affect each other's execution; for example, in a web page server.
* While this style of queuing can be useful in smoothing out
* transient bursts of requests, it admits the possibility of
* unbounded work queue growth when commands continue to arrive on
* average faster than they can be processed. </li>
*
* <li><em>Bounded queues.</em> A bounded queue (for example, an
* {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
* used with finite maximumPoolSizes, but can be more difficult to
* tune and control. Queue sizes and maximum pool sizes may be traded
* off for each other: Using large queues and small pools minimizes
* CPU usage, OS resources, and context-switching overhead, but can
* lead to artificially low throughput. If tasks frequently block (for
* example if they are I/O bound), a system may be able to schedule
* time for more threads than you otherwise allow. Use of small queues
* generally requires larger pool sizes, which keeps CPUs busier but
* may encounter unacceptable scheduling overhead, which also
* decreases throughput. </li>
*
* </ol>
*
* </dd>
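 *
 * <p>The unbounded-queue strategy can be demonstrated as follows (an
 * illustrative sketch, not part of this file; it uses the standard
 * <tt>java.util.concurrent</tt> package, and the counts are arbitrary):
 *
```java
import java.util.concurrent.*;

public class QueueStrategyDemo {
    public static void main(String[] args) throws Exception {
        // Unbounded queue: the maximumPoolSize of 10 never takes effect.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 10, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
        final CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 20; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { release.await(); } catch (InterruptedException e) { }
                }
            });
        }
        // Only corePoolSize threads ever run; the other 18 tasks just queue.
        if (pool.getPoolSize() != 2) throw new AssertionError();
        if (pool.getQueue().size() != 18) throw new AssertionError();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```
 *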
*
* <dt>Rejected tasks</dt>
*
* <dd> New tasks submitted in method {@link
* ThreadPoolExecutor#execute} will be <em>rejected</em> when the
* Executor has been shut down, and also when the Executor uses finite
* bounds for both maximum threads and work queue capacity, and is
* saturated. In either case, the <tt>execute</tt> method invokes the
* {@link RejectedExecutionHandler#rejectedExecution} method of its
* {@link RejectedExecutionHandler}. Four predefined handler policies
* are provided:
*
* <ol>
*
* <li> In the
* default {@link ThreadPoolExecutor.AbortPolicy}, the handler throws a
* runtime {@link RejectedExecutionException} upon rejection. </li>
*
* <li> In {@link
* ThreadPoolExecutor.CallerRunsPolicy}, the thread that invokes
* <tt>execute</tt> itself runs the task. This provides a simple
* feedback control mechanism that will slow down the rate that new
* tasks are submitted. </li>
*
* <li> In {@link ThreadPoolExecutor.DiscardPolicy},
* a task that cannot be executed is simply dropped. </li>
*
* <li>In {@link
* ThreadPoolExecutor.DiscardOldestPolicy}, if the executor is not
* shut down, the task at the head of the work queue is dropped, and
* then execution is retried (which can fail again, causing this to be
* repeated.) </li>
*
* </ol>
*
* It is possible to define and use other kinds of {@link
* RejectedExecutionHandler} classes. Doing so requires some care
* especially when policies are designed to work only under particular
* capacity or queuing policies. </dd>
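 *
 * <p>The <tt>CallerRunsPolicy</tt> feedback behavior can be seen in a small
 * demo (not part of this file; it uses the standard
 * <tt>java.util.concurrent</tt> package, with a deliberately tiny pool and
 * queue so that saturation occurs immediately):
 *
```java
import java.util.concurrent.*;

public class RejectionDemo {
    public static void main(String[] args) throws Exception {
        // One thread, a one-slot queue, and CallerRunsPolicy instead of the
        // default AbortPolicy: overflow tasks run in the submitting thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(1),
            new ThreadPoolExecutor.CallerRunsPolicy());
        final CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = new Runnable() {
            public void run() {
                try { release.await(); } catch (InterruptedException e) { }
            }
        };
        pool.execute(blocker);          // occupies the single worker thread
        pool.execute(blocker);          // fills the one-slot queue
        final Thread caller = Thread.currentThread();
        final Thread[] ranOn = new Thread[1];
        pool.execute(new Runnable() {   // rejected by the pool, so it runs here
            public void run() { ranOn[0] = Thread.currentThread(); }
        });
        if (ranOn[0] != caller) throw new AssertionError();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```
 *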
*
* <dt>Hook methods</dt>
*
* <dd>This class provides <tt>protected</tt> overridable {@link
* ThreadPoolExecutor#beforeExecute} and {@link
* ThreadPoolExecutor#afterExecute} methods that are called before and
* after execution of each task. These can be used to manipulate the
* execution environment; for example, reinitializing ThreadLocals,
* gathering statistics, or adding log entries. Additionally, method
* {@link ThreadPoolExecutor#terminated} can be overridden to perform
* any special processing that needs to be done once the Executor has
* fully terminated.
*
* <p>If hook or callback methods throw
* exceptions, internal worker threads may in turn fail and
* abruptly terminate.</dd>
*
* <dt>Queue maintenance</dt>
*
* <dd> Method {@link ThreadPoolExecutor#getQueue} allows access to
* the work queue for purposes of monitoring and debugging. Use of
* this method for any other purpose is strongly discouraged. Two
* supplied methods, {@link ThreadPoolExecutor#remove} and {@link
* ThreadPoolExecutor#purge} are available to assist in storage
* reclamation when large numbers of queued tasks become
* cancelled.</dd> </dl>
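 *
 * <p>The effect of {@link ThreadPoolExecutor#purge} on cancelled tasks can be
 * sketched as follows (an illustrative demo, not part of this file; it uses
 * the standard <tt>java.util.concurrent</tt> package):
 *
```java
import java.util.concurrent.*;

public class PurgeDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
        final CountDownLatch release = new CountDownLatch(1);
        // Block the single worker so queued tasks stay queued.
        pool.execute(new Runnable() {
            public void run() {
                try { release.await(); } catch (InterruptedException e) { }
            }
        });
        // Queue several tasks behind the blocked worker, then cancel them.
        for (int i = 0; i < 5; i++) {
            Future<?> f = pool.submit(new Runnable() { public void run() { } });
            f.cancel(false);
        }
        // Cancelled tasks still occupy queue slots until purge() reclaims them.
        if (pool.getQueue().size() != 5) throw new AssertionError();
        pool.purge();
        if (!pool.getQueue().isEmpty()) throw new AssertionError();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```
 *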
*
* <p> <b>Extension example</b>. Most extensions of this class
* override one or more of the protected hook methods. For example,
* here is a subclass that adds a simple pause/resume feature:
*
* <pre>
* class PausableThreadPoolExecutor extends ThreadPoolExecutor {
* private boolean isPaused;
* private ReentrantLock pauseLock = new ReentrantLock();
* private Condition unpaused = pauseLock.newCondition();
*
* public PausableThreadPoolExecutor(...) { super(...); }
*
* protected void beforeExecute(Thread t, Runnable r) {
* super.beforeExecute(t, r);
* pauseLock.lock();
* try {
* while (isPaused) unpaused.await();
* } catch(InterruptedException ie) {
* t.interrupt();
* } finally {
* pauseLock.unlock();
* }
* }
*
* public void pause() {
* pauseLock.lock();
* try {
* isPaused = true;
* } finally {
* pauseLock.unlock();
* }
* }
*
* public void resume() {
* pauseLock.lock();
* try {
* isPaused = false;
* unpaused.signalAll();
* } finally {
* pauseLock.unlock();
* }
* }
* }
* </pre>
* @since 1.5
* @author Doug Lea
*/
public class ThreadPoolExecutor extends AbstractExecutorService {
/**
* Only used to force toArray() to produce a Runnable[].
*/
private static final Runnable[] EMPTY_RUNNABLE_ARRAY = new Runnable[0];
/**
* Permission for checking shutdown
*/
private static final RuntimePermission shutdownPerm =
new RuntimePermission("modifyThread");
/**
* Queue used for holding tasks and handing off to worker threads.
*/
private final BlockingQueue workQueue;
/**
* Lock held on updates to poolSize, corePoolSize, maximumPoolSize, and
* workers set.
*/
private final ReentrantLock mainLock = new ReentrantLock();
/**
* Wait condition to support awaitTermination
*/
private final Condition termination = mainLock.newCondition();
/**
* Set containing all worker threads in pool.
*/
private final HashSet workers = new HashSet();
/**
* Timeout in nanoseconds for idle threads waiting for work.
* Threads use this timeout only when there are more than
* corePoolSize present. Otherwise they wait forever for new work.
*/
private volatile long keepAliveTime;
/**
* If false (default) core threads stay alive even when idle.
* If true, core threads use keepAliveTime to time out waiting for work.
*/
private boolean allowCoreThreadTimeOut;
/**
* Core pool size, updated only while holding mainLock,
* but volatile to allow concurrent readability even
* during updates.
*/
private volatile int corePoolSize;
/**
* Maximum pool size, updated only while holding mainLock
* but volatile to allow concurrent readability even
* during updates.
*/
private volatile int maximumPoolSize;
/**
* Current pool size, updated only while holding mainLock
* but volatile to allow concurrent readability even
* during updates.
*/
private volatile int poolSize;
/**
* Lifecycle state
*/
volatile int runState;
// Special values for runState
/** Normal, not-shutdown mode */
static final int RUNNING = 0;
/** Controlled shutdown mode */
static final int SHUTDOWN = 1;
/** Immediate shutdown mode */
static final int STOP = 2;
/** Final state */
static final int TERMINATED = 3;
/**
* Handler called when saturated or shutdown in execute.
*/
private volatile RejectedExecutionHandler handler;
/**
* Factory for new threads.
*/
private volatile ThreadFactory threadFactory;
/**
* Tracks largest attained pool size.
*/
private int largestPoolSize;
/**
* Counter for completed tasks. Updated only on termination of
* worker threads.
*/
private long completedTaskCount;
/**
* The default rejected execution handler
*/
private static final RejectedExecutionHandler defaultHandler =
new AbortPolicy();
/**
* Invoke the rejected execution handler for the given command.
*/