Reading the Source: the Thread Pool (ThreadPoolExecutor)
Thread pool execution flow diagram

A slightly cleaner version of the same flow is shown below

The constructor of the core class:
// Public constructors and methods
/**
* Creates a new {@code ThreadPoolExecutor} with the given initial
* parameters and default thread factory and rejected execution handler.
* It may be more convenient to use one of the {@link Executors} factory
* methods instead of this general purpose constructor.
*
* @param corePoolSize the number of threads to keep in the pool, even
* if they are idle, unless {@code allowCoreThreadTimeOut} is set
- Core pool size: these threads are kept alive even when idle, unless allowCoreThreadTimeOut is set.
* @param maximumPoolSize the maximum number of threads to allow in the
* pool
- Maximum number of threads allowed in the pool.
* @param keepAliveTime when the number of threads is greater than
* the core, this is the maximum time that excess idle threads
* will wait for new tasks before terminating.
- Only takes effect once there are more threads than the core size: it is the maximum time an excess idle thread waits for new work before terminating. Think of it as keeping a worker around a little longer after the job is done, in case new work arrives the moment you send the worker home and you have to hire them all over again.
* @param unit the time unit for the {@code keepAliveTime} argument
- The time unit for the keepAliveTime argument.
* @param workQueue the queue to use for holding tasks before they are
* executed. This queue will hold only the {@code Runnable}
* tasks submitted by the {@code execute} method.
- The queue that holds the Runnable tasks submitted through execute() until a worker thread picks them up.
* @throws IllegalArgumentException if one of the following holds:<br>
* {@code corePoolSize < 0}<br>
* {@code keepAliveTime < 0}<br>
* {@code maximumPoolSize <= 0}<br>
* {@code maximumPoolSize < corePoolSize}
* @throws NullPointerException if {@code workQueue} is null
*/
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue) {
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
Executors.defaultThreadFactory(), defaultHandler);
}
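Before moving on to execute(), here is a minimal usage sketch of this five-argument constructor. The sizes, timeout and queue capacity are made-up values for illustration only.
import java.util.concurrent.*;

public class ConstructorDemo {
    public static void main(String[] args) {
        // Illustrative values: 2 core threads, at most 4 threads in total,
        // excess idle threads die after 60s, and at most 100 queued tasks.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100));

        pool.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}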
The core of task submission, execute():
public void execute(Runnable command) {
if (command == null)
throw new NullPointerException();
/*
* Proceed in 3 steps:
*
* 1. If fewer than corePoolSize threads are running, try to
* start a new thread with the given command as its first
* task. The call to addWorker atomically checks runState and
* workerCount, and so prevents false alarms that would add
* threads when it shouldn't, by returning false.
When fewer than corePoolSize threads are running, start a new thread with the given task as its first job. addWorker atomically rechecks the run state and worker count, so it returns false instead of adding a thread when one should not be added.
*
* 2. If a task can be successfully queued, then we still need
* to double-check whether we should have added a thread
* (because existing ones died since last checking) or that
* the pool shut down since entry into this method. So we
* recheck state and if necessary roll back the enqueuing if
* stopped, or start a new thread if there are none.
*
Even if the task is queued successfully, we still need to double-check whether a thread should be added
(because an existing worker may have died since the last check) or whether the pool was shut down after we entered this method. So we recheck the state: if the pool has stopped, we roll back the enqueue and reject; if there are no workers left, we start a new one so the queued tasks still get run.
* 3. If we cannot queue task, then we try to add a new
* thread. If it fails, we know we are shut down or saturated
* and so reject the task.
*/
int c = ctl.get();
if (workerCountOf(c) < corePoolSize) {
if (addWorker(command, true))
return;
c = ctl.get();
}
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
else if (!addWorker(command, false))
reject(command);
}
When is the rejection policy triggered?
1. When the pool is no longer in the RUNNING state and the task is successfully removed from the queue again (the roll-back path in the recheck above).
2. When the task cannot be queued (the queue is full) and a new worker cannot be added, because the pool has already reached maximumPoolSize or has been shut down. A minimal demo of this second case follows.
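A small sketch of that saturation case: the single worker is busy, the one-slot queue is full, so the third submission is rejected by the default handler. The sizes and sleep time are arbitrary.
import java.util.concurrent.*;

public class RejectDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1));

        Runnable sleepy = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        };

        pool.execute(sleepy);      // occupies the only worker
        pool.execute(sleepy);      // waits in the one-slot queue
        try {
            pool.execute(sleepy);  // queue full and pool at maximumPoolSize -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e);
        }
        pool.shutdown();
    }
}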
/**
* Checks if a new worker can be added with respect to current
* pool state and the given bound (either core or maximum). If so,
* the worker count is adjusted accordingly, and, if possible, a
* new worker is created and started, running firstTask as its
* first task. This method returns false if the pool is stopped or
* eligible to shut down. It also returns false if the thread
* factory fails to create a thread when asked. If the thread
* creation fails, either due to the thread factory returning
* null, or due to an exception (typically OutOfMemoryError in
* Thread.start()), we roll back cleanly.
Checks whether a new worker can be added given the current pool state and the given bound (core or maximum). If so, the worker count is adjusted accordingly and, if possible, a new worker is created and started, running firstTask as its first task.
The method returns false if the pool is stopped or eligible to shut down, or if the thread factory fails to create a thread when asked. If thread creation fails, either because the factory returned null or because of an exception (typically an OutOfMemoryError in Thread.start()), we roll back cleanly.
* @param firstTask the task the new thread should run first (or
* null if none). Workers are created with an initial first task
* (in method execute()) to bypass queuing when there are fewer
* than corePoolSize threads (in which case we always start one),
* or when the queue is full (in which case we must bypass queue).
* Initially idle threads are usually created via
* prestartCoreThread or to replace other dying workers.
*
* @param core if true use corePoolSize as bound, else
* maximumPoolSize. (A boolean indicator is used here rather than a
* value to ensure reads of fresh values after checking other pool
* state).
* @return true if successful
*/
firstTask is the task the new thread should run first (null if none). Workers are created with an initial first task (in execute()) to bypass queuing when there are fewer than corePoolSize threads (in which case we always start one) or when the queue is full (in which case we must bypass the queue). Initially idle threads are usually created via prestartCoreThread or to replace other dying workers.
private boolean addWorker(Runnable firstTask, boolean core) {
retry:
for (;;) {
int c = ctl.get();
int rs = runStateOf(c);
// Check if queue empty only if necessary.
if (rs >= SHUTDOWN &&
! (rs == SHUTDOWN &&
firstTask == null &&
! workQueue.isEmpty()))
return false;
for (;;) {
int wc = workerCountOf(c);
if (wc >= CAPACITY ||
wc >= (core ? corePoolSize : maximumPoolSize))
return false;
if (compareAndIncrementWorkerCount(c))
break retry;
c = ctl.get(); // Re-read ctl
if (runStateOf(c) != rs)
continue retry;
// else CAS failed due to workerCount change; retry inner loop
}
}
boolean workerStarted = false;
boolean workerAdded = false;
Worker w = null;
try {
w = new Worker(firstTask);
final Thread t = w.thread;
if (t != null) {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
// Recheck while holding lock.
// Back out on ThreadFactory failure or if
// shut down before lock acquired.
int rs = runStateOf(ctl.get());
if (rs < SHUTDOWN ||
(rs == SHUTDOWN && firstTask == null)) {
if (t.isAlive()) // precheck that t is startable
throw new IllegalThreadStateException();
workers.add(w);
int s = workers.size();
if (s > largestPoolSize)
largestPoolSize = s;
workerAdded = true;
}
} finally {
mainLock.unlock();
}
if (workerAdded) {
t.start();
workerStarted = true;
}
}
} finally {
if (! workerStarted)
addWorkerFailed(w);
}
return workerStarted;
}
Walkthrough
if (rs >= SHUTDOWN &&
! (rs == SHUTDOWN &&
firstTask == null &&
! workQueue.isEmpty()))
return false;
rs > SHUTDOWN means the pool has already moved past SHUTDOWN (STOP or beyond), so of course no new worker may be added.
rs == SHUTDOWN &&
firstTask == null && ! workQueue.isEmpty()
In the SHUTDOWN state, as long as the queue still holds tasks, a worker (with a null firstTask) may still be added to finish draining them.
for (;;) {
int wc = workerCountOf(c);
if (wc >= CAPACITY ||
wc >= (core ? corePoolSize : maximumPoolSize))
return false;
if (compareAndIncrementWorkerCount(c))
break retry;
c = ctl.get(); // Re-read ctl
if (runStateOf(c) != rs)
continue retry;
// else CAS failed due to workerCount change; retry inner loop
}
This loop is the CAS-based bookkeeping that admits one more worker under either the corePoolSize bound or the maximumPoolSize bound; a standalone sketch of the same idiom follows.
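The inner loop is the classic "bounded CAS increment" pattern. As a standalone sketch (the class and method names below are mine, not from the JDK), it looks like this:
import java.util.concurrent.atomic.AtomicInteger;

class BoundedCounter {
    private final AtomicInteger count = new AtomicInteger();

    // Increment with CAS, but never beyond 'bound'; returns false when the bound is hit.
    boolean tryIncrement(int bound) {
        for (;;) {
            int c = count.get();
            if (c >= bound)
                return false;                  // like wc >= corePoolSize / maximumPoolSize
            if (count.compareAndSet(c, c + 1))
                return true;                   // like compareAndIncrementWorkerCount(c)
            // CAS lost a race with another thread: loop, re-read, retry
        }
    }
}
Back in addWorker, once the count has been reserved, the actual Worker is created and started: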
w = new Worker(firstTask);
final Thread t = w.thread;
if (workerAdded) {
t.start();
workerStarted = true;
}
Once the worker has been added successfully, we call t.start().
So what is w.thread? Look at the Worker constructor:
Worker(Runnable firstTask) {
setState(-1); // inhibit interrupts until runWorker
this.firstTask = firstTask;
this.thread = getThreadFactory().newThread(this);
}
Passing this to the factory tells us that the Worker itself is the Runnable the new thread runs.
As for why setState(-1), we will come back to that shortly.
Worker's run() method is:
public void run() {
runWorker(this);
}
A closer look at runWorker()
/**
* Main worker run loop. Repeatedly gets tasks from queue and
* executes them, while coping with a number of issues:
*
* 1. We may start out with an initial task, in which case we
* don't need to get the first one. Otherwise, as long as pool is
* running, we get tasks from getTask. If it returns null then the
* worker exits due to changed pool state or configuration
* parameters. Other exits result from exception throws in
* external code, in which case completedAbruptly holds, which
* usually leads processWorkerExit to replace this thread.
*
* 2. Before running any task, the lock is acquired to prevent
* other pool interrupts while the task is executing, and then we
* ensure that unless pool is stopping, this thread does not have
* its interrupt set.
*
* 3. Each task run is preceded by a call to beforeExecute, which
* might throw an exception, in which case we cause thread to die
* (breaking loop with completedAbruptly true) without processing
* the task.
*
* 4. Assuming beforeExecute completes normally, we run the task,
* gathering any of its thrown exceptions to send to afterExecute.
* We separately handle RuntimeException, Error (both of which the
* specs guarantee that we trap) and arbitrary Throwables.
* Because we cannot rethrow Throwables within Runnable.run, we
* wrap them within Errors on the way out (to the thread's
* UncaughtExceptionHandler). Any thrown exception also
* conservatively causes thread to die.
*
* 5. After task.run completes, we call afterExecute, which may
* also throw an exception, which will also cause thread to
* die. According to JLS Sec 14.20, this exception is the one that
* will be in effect even if task.run throws.
*
* The net effect of the exception mechanics is that afterExecute
* and the thread's UncaughtExceptionHandler have as accurate
* information as we can provide about any problems encountered by
* user code.
*
* @param w the worker
*/
Main worker run loop: repeatedly take tasks from the queue and execute them, while dealing with several issues:
1. We may start with an initial task, in which case we do not need to fetch the first one. Otherwise, as long as the pool is running, we get tasks from getTask(). If it returns null, the worker exits because the pool state or configuration changed. Other exits are caused by exceptions thrown in external code, in which case completedAbruptly stays true, which usually leads processWorkerExit to replace this thread.
2. Before running any task, the worker's lock is acquired to prevent other pool interrupts while the task is executing, and we make sure this thread does not have its interrupt flag set unless the pool is stopping.
3. Each task run is preceded by a call to beforeExecute, which may throw; in that case the thread dies (the loop breaks with completedAbruptly true) without processing the task.
4. Assuming beforeExecute completes normally, we run the task, gathering any thrown exception to pass to afterExecute.
RuntimeException, Error (both of which the specs guarantee we trap) and arbitrary Throwables are handled separately; because we cannot rethrow Throwables from Runnable.run, we wrap them in an Error on the way out (to the thread's UncaughtExceptionHandler). Any thrown exception also conservatively causes the thread to die.
5. After task.run completes we call afterExecute, which may also throw and likewise causes the thread to die.
According to JLS Sec 14.20, that exception is the one that takes effect even if task.run also threw.
The net effect of this exception machinery is that afterExecute and the thread's UncaughtExceptionHandler get information about problems in user code that is as accurate as we can provide.
final void runWorker(Worker w) {
Thread wt = Thread.currentThread();
Runnable task = w.firstTask;
w.firstTask = null;
w.unlock(); // allow interrupts
boolean completedAbruptly = true;
try {
while (task != null || (task = getTask()) != null) {
w.lock();
// If pool is stopping, ensure thread is interrupted;
// if not, ensure thread is not interrupted. This
// requires a recheck in second case to deal with
// shutdownNow race while clearing interrupt
if ((runStateAtLeast(ctl.get(), STOP) ||
(Thread.interrupted() &&
runStateAtLeast(ctl.get(), STOP))) &&
!wt.isInterrupted())
wt.interrupt();
try {
beforeExecute(wt, task);
Throwable thrown = null;
try {
task.run();
} catch (RuntimeException x) {
thrown = x; throw x;
} catch (Error x) {
thrown = x; throw x;
} catch (Throwable x) {
thrown = x; throw new Error(x);
} finally {
afterExecute(task, thrown);
}
} finally {
task = null;
w.completedTasks++;
w.unlock();
}
}
completedAbruptly = false;
} finally {
processWorkerExit(w, completedAbruptly);
}
}
Let's start with
w.unlock(); // allow interrupts
unlock() without a prior lock()? That looks odd at first.
The unlock method is simply:
public void unlock() { release(1); }
and release() is a method of AQS (AbstractQueuedSynchronizer):
public final boolean release(int arg) {
if (tryRelease(arg)) {
Node h = head;
if (h != null && h.waitStatus != 0)
unparkSuccessor(h);
return true;
}
return false;
}
Worker overrides tryRelease() as follows:
protected boolean tryRelease(int unused) {
setExclusiveOwnerThread(null);
setState(0);
return true;
}
Also note this other method in Worker:
void interruptIfStarted() {
Thread t;
if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
try {
t.interrupt();
} catch (SecurityException ignore) {
}
}
}
So a worker can only be interrupted after it has started, which is what getState() >= 0 checks. The w.unlock() call above therefore really means "interrupts are allowed from now on" (it moves the state from -1 to 0); only the method name looks odd, it is not releasing a lock that someone holds.
Now go back to where the Worker is constructed:
Worker(Runnable firstTask) {
setState(-1); // inhibit interrupts until runWorker
this.firstTask = firstTask;
this.thread = getThreadFactory().newThread(this);
}
setState(-1): interrupts are inhibited until the worker is actually running; runWorker() re-enables them with the unlock() call we just saw.
Back in runWorker(), continue with this line:
while (task != null || (task = getTask()) != null) {
and recall the code in execute() that put the task on the queue:
if (isRunning(c) && workQueue.offer(command)) {
int recheck = ctl.get();
if (! isRunning(recheck) && remove(command))
reject(command);
else if (workerCountOf(recheck) == 0)
addWorker(null, false);
}
When a task is only enqueued, addWorker is called with firstTask == null: the queue is just a container, and it is the worker threads (core or non-core) that pull tasks out of it. The division of labour is clear.
Next, getTask():
private Runnable getTask() {
boolean timedOut = false; // Did the last poll() time out?
for (;;) {
int c = ctl.get();
int rs = runStateOf(c);
// Check if queue empty only if necessary.
if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
decrementWorkerCount();
return null;
}
int wc = workerCountOf(c);
// Are workers subject to culling?
boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
if ((wc > maximumPoolSize || (timed && timedOut))
&& (wc > 1 || workQueue.isEmpty())) {
if (compareAndDecrementWorkerCount(c))
return null;
continue;
}
try {
Runnable r = timed ?
workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
workQueue.take();
if (r != null)
return r;
timedOut = true;
} catch (InterruptedException retry) {
timedOut = false;
}
}
}
Note this line:
boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
combined with
try {
Runnable r = timed ?
workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
workQueue.take();
if (r != null)
return r;
timedOut = true;
} catch (InterruptedException retry) {
timedOut = false;
}
This is what implements keepAliveTime:
workQueue.poll() takes a timeout, which is exactly the allowed idle time of a worker thread.
It also shows that, by default, keepAliveTime only applies once there are more threads than corePoolSize, but we can set
allowCoreThreadTimeOut so that the core threads are allowed to time out as well (see the small sketch below).
You can think of keepAliveTime as having borrowed a few extra workers: when the work is done you do not send them back right away, in case a new task arrives the moment they leave.
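A small sketch of turning the core-thread timeout on (the pool sizes and timeout are illustrative values):
import java.util.concurrent.*;

public class CoreTimeoutDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        // By default only the threads above the 4 core ones die after 30s of idleness.
        // This lets the core threads time out as well (keepAliveTime must be > 0):
        pool.allowCoreThreadTimeOut(true);

        pool.execute(() -> System.out.println("hello"));
        pool.shutdown();
    }
}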
Continuing with the code inside the while loop:
// If pool is stopping, ensure thread is interrupted;
// if not, ensure thread is not interrupted. This
// requires a recheck in second case to deal with
// shutdownNow race while clearing interrupt
If the pool is stopping, make sure the thread is interrupted; if not, make sure it is not interrupted. The second case needs a recheck to deal with a shutdownNow race while the interrupt flag is being cleared.
if ((runStateAtLeast(ctl.get(), STOP) ||
(Thread.interrupted() &&
runStateAtLeast(ctl.get(), STOP))) &&
!wt.isInterrupted())
wt.interrupt();
beforeExecute(wt, task);
afterExecute(task, thrown);
Both are empty protected hooks left for subclasses to override and extend; a small sketch follows.
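For example, a subclass might use the two hooks for logging. A rough sketch (the logging itself is just an illustration, not anything the JDK prescribes):
import java.util.concurrent.*;

class LoggingThreadPool extends ThreadPoolExecutor {
    LoggingThreadPool(int core, int max, long keepAlive, TimeUnit unit,
                      BlockingQueue<Runnable> queue) {
        super(core, max, keepAlive, unit, queue);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        System.out.println(t.getName() + " about to run " + r);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t != null)                       // the exception (if any) thrown by the task
            System.err.println(r + " failed with " + t);
    }
}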
Also note this flag in runWorker():
boolean completedAbruptly = true;
completedAbruptly means "the worker terminated abruptly". It defaults to true, i.e. we assume the worker died because a task threw, and it is set to false only after the while loop finishes normally.
Finally, look at the tail of runWorker():
finally {
processWorkerExit(w, completedAbruptly);
}
Its Javadoc reads:
/**
* Performs cleanup and bookkeeping for a dying worker. Called
* only from worker threads. Unless completedAbruptly is set,
* assumes that workerCount has already been adjusted to account
* for exit. This method removes thread from worker set, and
* possibly terminates the pool or replaces the worker if either
* it exited due to user task exception or if fewer than
* corePoolSize workers are running or queue is non-empty but
* there are no workers.
*
* @param w the worker
* @param completedAbruptly if the worker died due to user exception
*/
Performs cleanup and bookkeeping for a dying worker, and is called only from worker threads. Unless completedAbruptly is set, it assumes workerCount has already been adjusted to account for the exit. The method removes the thread from the worker set, and possibly terminates the pool, or replaces the worker if it exited because of a user task exception, or if fewer than corePoolSize workers are running, or if the queue is non-empty but there are no workers left.
private void processWorkerExit(Worker w, boolean completedAbruptly) {
if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
decrementWorkerCount();
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
completedTaskCount += w.completedTasks;
workers.remove(w);
} finally {
mainLock.unlock();
}
tryTerminate();
int c = ctl.get();
if (runStateLessThan(c, STOP)) {
if (!completedAbruptly) {
int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
if (min == 0 && ! workQueue.isEmpty())
min = 1;
if (workerCountOf(c) >= min)
return; // replacement not needed
}
addWorker(null, false);
}
}
Notice the call to
addWorker(null, false); in other words, when one worker dies, the pool gives you another one back (subject to the checks above). The short demo below shows that replacement in action.
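A quick demo of the replacement (the sleep only gives the first worker time to die before the second task is submitted; expect a stack trace on stderr from the uncaught exception):
import java.util.concurrent.*;

public class ReplacementDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        // This task throws, so its worker dies abruptly (completedAbruptly == true)...
        pool.execute(() -> { throw new RuntimeException("boom"); });
        Thread.sleep(200);

        // ...but processWorkerExit adds a replacement worker, so the pool keeps serving tasks
        // (the printed thread name will differ from the first worker's).
        pool.execute(() -> System.out.println("still alive on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}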
Lastly, a look at the work queues.
BlockingQueue is an interface; its common implementations include:

ArrayBlockingQueue - a bounded queue
LinkedBlockingQueue - a linked queue, unbounded unless you give it a capacity
SynchronousQueue - a direct hand-off queue with no capacity: every insert must wait for a matching take (see the small sketch below)
DelayQueue - a queue whose elements become available only after their delay has expired
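The difference between a hand-off queue and an unbounded queue is easy to see from offer(), which is exactly what execute() calls: a SynchronousQueue refuses the task when no consumer is waiting (forcing the pool to add a thread), while an unbounded LinkedBlockingQueue always accepts it. A tiny sketch:
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class OfferDemo {
    public static void main(String[] args) {
        Runnable task = () -> { };

        // No thread is blocked in take(), so the hand-off fails immediately:
        System.out.println(new SynchronousQueue<Runnable>().offer(task));    // false
        // An unbounded linked queue simply stores the task:
        System.out.println(new LinkedBlockingQueue<Runnable>().offer(task)); // true
    }
}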
Now the factory methods that the Executors utility class provides by default:
public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
}
Core pool size 0, maximum pool size Integer.MAX_VALUE, a 60-second idle timeout, and a SynchronousQueue for hand-off.
What can go wrong here?
Since every task that finds no idle thread causes a new thread to be created, newCachedThreadPool() can create threads without bound.
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
We supply the core and maximum sizes ourselves, but the queue is unbounded, so maximumPoolSize effectively never comes into play.
Keep submitting tasks and the queue simply grows until memory is exhausted, which is also risky. A bounded, hand-rolled alternative is sketched below.
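Because of these two risks, it is usually safer to build the pool yourself with explicit bounds, a named ThreadFactory and an explicit rejection policy. Everything below (sizes, queue capacity, thread-name prefix, choice of policy) is an illustrative assumption, not something prescribed by the JDK:
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadFactory named = new ThreadFactory() {
            private final AtomicInteger seq = new AtomicInteger();
            public Thread newThread(Runnable r) {
                return new Thread(r, "biz-pool-" + seq.incrementAndGet());
            }
        };

        // Bounded queue + bounded maximumPoolSize + explicit policy for overload.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(512),
                named,
                new ThreadPoolExecutor.CallerRunsPolicy());

        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}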
Next, the rejection policies.
The interface:
RejectedExecutionHandler

DiscardPolicy - a handler that silently discards the rejected task.
CallerRunsPolicy - a handler that runs the rejected task directly in the thread that called execute(), unless the executor has been shut down, in which case the task is discarded.
AbortPolicy - a handler that throws a RejectedExecutionException.
DiscardOldestPolicy - a handler that discards the oldest unhandled request at the head of the queue and then retries execute(), unless the executor has been shut down, in which case the task is discarded.
The pool's default rejection policy is to throw:
private static final RejectedExecutionHandler defaultHandler =
new AbortPolicy();
Of course, we can also implement the RejectedExecutionHandler interface to define our own policy; one common variant is sketched below.
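For instance, a custom policy could block the submitting thread until the queue has room, instead of discarding the task or throwing. This is only a sketch of the idea (not one of the four built-in policies):
import java.util.concurrent.*;

class BlockWhenFullPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (executor.isShutdown())
            throw new RejectedExecutionException("pool has been shut down");
        try {
            // Apply back-pressure: wait for queue space instead of dropping the task.
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("interrupted while waiting to enqueue", e);
        }
    }
}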
Shutting down the pool
/**
* Initiates an orderly shutdown in which previously submitted
* tasks are executed, but no new tasks will be accepted.
* Invocation has no additional effect if already shut down.
*
* <p>This method does not wait for previously submitted tasks to
* complete execution. Use {@link #awaitTermination awaitTermination}
* to do that.
*
* @throws SecurityException {@inheritDoc}
*/
Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks are accepted. Invocation has no additional effect if the pool is already shut down. The method does not wait for previously submitted tasks to finish executing; use awaitTermination for that.
public void shutdown() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
checkShutdownAccess();
advanceRunState(SHUTDOWN);
interruptIdleWorkers();
onShutdown(); // hook for ScheduledThreadPoolExecutor
} finally {
mainLock.unlock();
}
tryTerminate();
}
/**
* Attempts to stop all actively executing tasks, halts the
* processing of waiting tasks, and returns a list of the tasks
* that were awaiting execution. These tasks are drained (removed)
* from the task queue upon return from this method.
*
* <p>This method does not wait for actively executing tasks to
* terminate. Use {@link #awaitTermination awaitTermination} to
* do that.
*
* <p>There are no guarantees beyond best-effort attempts to stop
* processing actively executing tasks. This implementation
* cancels tasks via {@link Thread#interrupt}, so any task that
* fails to respond to interrupts may never terminate.
*
* @throws SecurityException {@inheritDoc}
*/
Attempts to stop all actively executing tasks, halts the processing of waiting tasks, and returns the list of tasks that were awaiting execution; those tasks are drained (removed) from the task queue on return. The method does not wait for the running tasks to terminate; use awaitTermination for that.
There are no guarantees beyond best-effort attempts to stop the running tasks: the implementation cancels tasks via Thread.interrupt, so any task that fails to respond to interrupts may never terminate.
public List<Runnable> shutdownNow() {
List<Runnable> tasks;
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
try {
checkShutdownAccess();
advanceRunState(STOP);
interruptWorkers();
tasks = drainQueue();
} finally {
mainLock.unlock();
}
tryTerminate();
return tasks;
}
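Putting the two methods together, the usual orderly-shutdown pattern (adapted from the ExecutorService Javadoc; the 60-second timeouts are arbitrary) looks like this:
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    static void shutdownAndAwaitTermination(ExecutorService pool) {
        pool.shutdown();                     // stop accepting new tasks
        try {
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                // Still not done: interrupt running tasks and drain the queue.
                List<Runnable> neverRun = pool.shutdownNow();
                System.out.println(neverRun.size() + " queued tasks were never started");
                if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                    System.err.println("pool did not terminate");
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();              // re-cancel if we were interrupted while waiting
            Thread.currentThread().interrupt();
        }
    }
}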
Translation of the class-level Javadoc
/**
* An {@link ExecutorService} that executes each submitted task using
* one of possibly several pooled threads, normally configured
* using {@link Executors} factory methods.
(An ExecutorService that executes each submitted task using one of a pool of threads, normally configured via the Executors factory methods.)
* <p>Thread pools address two different problems: they usually
* provide improved performance when executing large numbers of
* asynchronous tasks, due to reduced per-task invocation overhead,
* and they provide a means of bounding and managing the resources,
* including threads, consumed when executing a collection of tasks.
(Thread pools address two different problems: they usually give better performance when executing large numbers of asynchronous tasks, because the per-task invocation overhead is reduced, and they provide a way to bound and manage the resources, including threads, consumed when executing a collection of tasks.)
* Each {@code ThreadPoolExecutor} also maintains some basic
* statistics, such as the number of completed tasks.
(Each ThreadPoolExecutor also keeps some basic statistics, such as the number of completed tasks.)
* <p>To be useful across a wide range of contexts, this class
* provides many adjustable parameters and extensibility
* hooks.
(To be useful across a wide range of contexts, the class provides many adjustable parameters and extensibility hooks.)
* However, programmers are urged to use the more convenient
* {@link Executors} factory methods {@link
* Executors#newCachedThreadPool} (unbounded thread pool, with
* automatic thread reclamation), {@link Executors#newFixedThreadPool}
* (fixed size thread pool) and {@link
* Executors#newSingleThreadExecutor} (single background thread), that
* preconfigure settings for the most common usage
* scenarios. Otherwise, use the following guide when manually
* configuring and tuning this class:
(However, programmers are urged to use the more convenient Executors factory methods: newCachedThreadPool (an unbounded pool with automatic thread reclamation), newFixedThreadPool (a fixed-size pool) and newSingleThreadExecutor (a single background thread), which preconfigure the most common settings. Otherwise, the following notes help when configuring and tuning the class by hand.)
*
* <dl>
*
* <dt>Core and maximum pool sizes</dt>
*
* <dd>A {@code ThreadPoolExecutor} will automatically adjust the
* pool size (see {@link #getPoolSize})
* according to the bounds set by
* corePoolSize (see {@link #getCorePoolSize}) and
* maximumPoolSize (see {@link #getMaximumPoolSize}).
(A ThreadPoolExecutor automatically adjusts the pool size within the bounds set by corePoolSize and maximumPoolSize.)
* When a new task is submitted in method {@link #execute(Runnable)},
* and fewer than corePoolSize threads are running, a new thread is
* created to handle the request, even if other worker threads are
* idle.
(When a new task is submitted via execute() and fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle.)
* If there are more than corePoolSize but less than
* maximumPoolSize threads running, a new thread will be created only
* if the queue is full.
(If more than corePoolSize but fewer than maximumPoolSize threads are running, a new thread is created only when the queue is full.)
* By setting corePoolSize and maximumPoolSize
* the same, you create a fixed-size thread pool.
(Setting corePoolSize and maximumPoolSize to the same value gives you a fixed-size thread pool.)
* By setting
* maximumPoolSize to an essentially unbounded value such as {@code
* Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary
* number of concurrent tasks.
(Setting maximumPoolSize to an essentially unbounded value such as Integer.MAX_VALUE allows the pool to accommodate an arbitrary number of concurrent tasks.)
* Most typically, core and maximum pool
* sizes are set only upon construction, but they may also be changed
* dynamically using {@link #setCorePoolSize} and {@link
* #setMaximumPoolSize}. </dd>
(Most typically, the core and maximum sizes are set only at construction time, but they can also be changed dynamically with setCorePoolSize and setMaximumPoolSize.)
*
* <dt>On-demand construction</dt>
*
* <dd>By default, even core threads are initially created and
* started only when new tasks arrive, but this can be overridden
* dynamically using method {@link #prestartCoreThread} or {@link
* #prestartAllCoreThreads}.
(By default, even core threads are only created and started when new tasks arrive, but this can be overridden dynamically with prestartCoreThread or prestartAllCoreThreads.)
* You probably want to prestart threads if
* you construct the pool with a non-empty queue.
(You probably want to prestart threads if you construct the pool with a non-empty queue.)
</dd>
* <dt>Creating new threads</dt>
*
* <dd>New threads are created using a {@link ThreadFactory}.
(New threads are created through a ThreadFactory.)
* If not
* otherwise specified, a {@link Executors#defaultThreadFactory} is
* used, that creates threads to all be in the same {@link
* ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
* non-daemon status.
(If nothing else is specified, Executors.defaultThreadFactory is used, which creates threads that all belong to the same ThreadGroup, have NORM_PRIORITY priority and are non-daemon.)
* By supplying a different ThreadFactory, you can
* alter the thread's name, thread group, priority, daemon status,
* etc.
(By supplying a different ThreadFactory, you can change the thread name, thread group, priority, daemon status, and so on.)
* If a {@code ThreadFactory} fails to create a thread when asked
* by returning null from {@code newThread}, the executor will
* continue, but might not be able to execute any tasks.
(If a ThreadFactory fails to create a thread, i.e. newThread returns null, the executor keeps going but may not be able to execute any tasks.)
* Threads
* should possess the "modifyThread" {@code RuntimePermission}.
(Threads should possess the "modifyThread" RuntimePermission.)
* If
* worker threads or other threads using the pool do not possess this
* permission, service may be degraded: configuration changes may not
* take effect in a timely manner, and a shutdown pool may remain in a
* state in which termination is possible but not completed.
(If worker threads or other threads using the pool lack this permission, service may degrade: configuration changes may not take effect in a timely manner, and a shut-down pool may remain in a state where termination is possible but not completed.)
</dd>
* <dt>Keep-alive times</dt>
*
* <dd>If the pool currently has more than corePoolSize threads,
* excess threads will be terminated if they have been idle for more
* than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}).
(If the pool currently has more than corePoolSize threads, excess threads are terminated once they have been idle longer than keepAliveTime.)
* This provides a means of reducing resource consumption when the
* pool is not being actively used.
(This provides a way to reduce resource consumption when the pool is not being used actively.)
* If the pool becomes more active
* later, new threads will be constructed.
(If the pool becomes more active later, new threads are constructed.)
* This parameter can also be
* changed dynamically using method {@link #setKeepAliveTime(long,
* TimeUnit)}.
(The parameter can also be changed dynamically with setKeepAliveTime.)
* Using a value of {@code Long.MAX_VALUE} {@link
* TimeUnit#NANOSECONDS} effectively disables idle threads from ever
* terminating prior to shut down.
(A value of Long.MAX_VALUE nanoseconds effectively keeps idle threads from ever terminating before shutdown.)
* By default, the keep-alive policy
* applies only when there are more than corePoolSize threads.
(By default, the keep-alive policy only applies once there are more than corePoolSize threads.)
* But
* method {@link #allowCoreThreadTimeOut(boolean)} can be used to
* apply this time-out policy to core threads as well, so long as the
* keepAliveTime value is non-zero.
(But allowCoreThreadTimeOut(boolean) can apply the same timeout policy to core threads as well, as long as keepAliveTime is non-zero.)
*</dd>
*
* <dt>Queuing</dt>
*
* <dd>Any {@link BlockingQueue} may be used to transfer and hold
* submitted tasks. The use of this queue interacts with pool sizing:
(Any BlockingQueue may be used to hand off and hold submitted tasks. The choice of queue interacts with the pool sizing:)
* <ul>
*
* <li> If fewer than corePoolSize threads are running, the Executor
* always prefers adding a new thread
* rather than queuing.
(If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread over queuing.)
</li>
* <li> If corePoolSize or more threads are running, the Executor
* always prefers queuing a request rather than adding a new
* thread.
(If corePoolSize or more threads are running, the Executor always prefers queuing the request over adding a new thread.)
</li>
* <li> If a request cannot be queued, a new thread is created unless
* this would exceed maximumPoolSize, in which case, the task will be
* rejected.
(If the request cannot be queued, a new thread is created unless that would exceed maximumPoolSize, in which case the task is rejected.)
</li>
* </ul>
*
* There are three general strategies for queuing:
(There are three general queuing strategies:)
* <ol>
* <li> <em> Direct handoffs.</em>
* A good default choice for a work
* queue is a {@link SynchronousQueue} that hands off tasks to threads
* without otherwise holding them.
(A good default choice of work queue is a SynchronousQueue, which hands tasks off to threads without otherwise holding them.)
* Here, an attempt to queue a task
* will fail if no threads are immediately available to run it, so a
* new thread will be constructed.
(Here, an attempt to queue a task fails if no thread is immediately available to run it, so a new thread is constructed.)
* This policy avoids lockups when
* handling sets of requests that might have internal dependencies.
(This policy avoids lockups when handling sets of requests that might have internal dependencies.)
* Direct handoffs generally require unbounded maximumPoolSizes to
* avoid rejection of new submitted tasks.
(Direct handoffs generally require an unbounded maximumPoolSize to avoid rejecting newly submitted tasks.)
* This in turn admits the
* possibility of unbounded thread growth when commands continue to
* arrive on average faster than they can be processed.
(That in turn admits the possibility of unbounded thread growth when tasks keep arriving faster, on average, than they can be processed.)
</li>
*
* <li><em> Unbounded queues.</em>
* Using an unbounded queue (for
* example a {@link LinkedBlockingQueue} without a predefined
* capacity) will cause new tasks to wait in the queue when all
* corePoolSize threads are busy.
(Using an unbounded queue, for example a LinkedBlockingQueue without a predefined capacity, makes new tasks wait in the queue whenever all corePoolSize threads are busy.)
* Thus, no more than corePoolSize
* threads will ever be created.
(Therefore, no more than corePoolSize threads are ever created.)
* (And the value of the maximumPoolSize
* therefore doesn't have any effect.)
(And the value of maximumPoolSize therefore has no effect.)
* This may be appropriate when
* each task is completely independent of others, so tasks cannot
* affect each others execution;
(This can be appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution,)
* for example, in a web page server.
* While this style of queuing can be useful in smoothing out
* transient bursts of requests, it admits the possibility of
* unbounded work queue growth when commands continue to arrive on
* average faster than they can be processed.
(for example in a web page server. While this style of queuing helps smooth out transient bursts of requests, it admits the possibility of unbounded work-queue growth when tasks keep arriving faster, on average, than they can be processed.)
</li>
* <li><em>Bounded queues.</em>
* A bounded queue (for example, an
* {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
* used with finite maximumPoolSizes, but can be more difficult to
* tune and control.
(A bounded queue, for example an ArrayBlockingQueue, helps prevent resource exhaustion when used with a finite maximumPoolSize, but can be harder to tune and control.)
* Queue sizes and maximum pool sizes may be traded
* off for each other: Using large queues and small pools minimizes
* CPU usage, OS resources, and context-switching overhead, but can
* lead to artificially low throughput.
(Queue size and maximum pool size can be traded off against each other: large queues and small pools minimize CPU usage, OS resources and context-switch overhead, but can lead to artificially low throughput.)
* If tasks frequently block (for
* example if they are I/O bound), a system may be able to schedule
* time for more threads than you otherwise allow.
(If tasks frequently block, for example because they are I/O bound, the system may be able to schedule time for more threads than you would otherwise allow.)
* Use of small queues
* generally requires larger pool sizes, which keeps CPUs busier but
* may encounter unacceptable scheduling overhead, which also
* decreases throughput.
(Small queues generally require larger pool sizes, which keeps CPUs busier but may incur unacceptable scheduling overhead, which also lowers throughput.)
</li>
*
* </ol>
*
* </dd>
*
* <dt>Rejected tasks</dt>
*
* <dd>New tasks submitted in method {@link #execute(Runnable)} will be
* <em>rejected</em>
* when the Executor has been shut down, and also when
* the Executor uses finite bounds for both maximum threads and work queue
* capacity, and is saturated.
(New tasks submitted via execute() are rejected when the Executor has been shut down, or when the Executor uses finite bounds for both the maximum threads and the work-queue capacity and is saturated.)
* In either case, the {@code execute} method
* invokes the {@link
* RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)}
* method of its {@link RejectedExecutionHandler}.
(In either case, execute() invokes the rejectedExecution method of its RejectedExecutionHandler.)
* Four predefined handler
* policies are provided:
(Four predefined handler policies are provided:)
* <ol>
*
* <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
* handler throws a runtime {@link RejectedExecutionException} upon
* rejection.
(In the default policy, AbortPolicy, the handler throws a runtime RejectedExecutionException upon rejection.)
</li>
* <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
* that invokes {@code execute} itself runs the task.
(With CallerRunsPolicy, the thread that called execute() runs the rejected task itself.)
* This provides a
* simple feedback control mechanism that will slow down the rate that
* new tasks are submitted.
(This provides a simple feedback-control mechanism that slows down the rate at which new tasks are submitted.)
</li>
* <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
* cannot be executed is simply dropped.
(DiscardPolicy simply drops a task that cannot be executed.)
</li>
* <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
* executor is not shut down, the task at the head of the work queue
* is dropped, and then execution is retried (which can fail again,
* causing this to be repeated.)
(With DiscardOldestPolicy, if the executor is not shut down, the task at the head of the work queue is dropped and execution is retried, which can fail again and cause this to be repeated.)
</li>
*
* </ol>
*
* It is possible to define and use other kinds of {@link
* RejectedExecutionHandler} classes. Doing so requires some care
* especially when policies are designed to work only under particular
* capacity or queuing policies. </dd>
* <dt>Hook methods</dt>
*
* <dd>This class provides {@code protected} overridable
* {@link #beforeExecute(Thread, Runnable)} and
* {@link #afterExecute(Runnable, Throwable)} methods that are called
* before and after execution of each task.
(The protected beforeExecute(Thread, Runnable) and afterExecute(Runnable, Throwable) methods can be overridden and are called before and after each task executes.)
* These can be used to
* manipulate the execution environment;
(They can be used to manipulate the execution environment,)
* for example, reinitializing
* ThreadLocals, gathering statistics, or adding log entries.
(for example to reinitialize ThreadLocals, gather statistics or add log entries.)
* Additionally, method {@link #terminated} can be overridden to perform
* any special processing that needs to be done once the Executor has
* fully terminated.
(In addition, terminated can be overridden to perform any special processing once the Executor has fully terminated.)
*
* <p>If hook or callback methods throw exceptions, internal worker
* threads may in turn fail and abruptly terminate.
(If hook or callback methods throw exceptions, the internal worker threads may in turn fail and terminate abruptly.)
</dd>
*
* <dt>Queue maintenance</dt>
*
* <dd>Method {@link #getQueue()} allows access to the work queue
* for purposes of monitoring and debugging.
(getQueue() gives access to the work queue for monitoring and debugging.)
* Use of this method for
* any other purpose is strongly discouraged.
(Using the method for any other purpose is strongly discouraged.)
* Two supplied methods,
* {@link #remove(Runnable)} and {@link #purge} are available to
* assist in storage reclamation when large numbers of queued tasks
* become cancelled.
(Two supplied methods, remove(Runnable) and purge, can assist with storage reclamation when large numbers of queued tasks become cancelled.)
</dd>
* <dt>Finalization</dt>
*
* <dd>A pool that is no longer referenced in a program <em>AND</em>
* has no remaining threads will be {@code shutdown} automatically.
(A pool that is no longer referenced in a program and has no remaining threads is shut down automatically.)
* If
* you would like to ensure that unreferenced pools are reclaimed even
* if users forget to call {@link #shutdown}, then you must arrange
* that unused threads eventually die, by setting appropriate
* keep-alive times, using a lower bound of zero core threads and/or
* setting {@link #allowCoreThreadTimeOut(boolean)}.
(If you want unreferenced pools to be reclaimed even when users forget to call shutdown, you must arrange for unused threads to eventually die, by setting appropriate keep-alive times, using a lower bound of zero core threads and/or setting allowCoreThreadTimeOut(boolean).)
* here is a subclass that adds a simple pause/resume feature:
*
* <pre> {@code
* class PausableThreadPoolExecutor extends ThreadPoolExecutor {
* private boolean isPaused;
* private ReentrantLock pauseLock = new ReentrantLock();
* private Condition unpaused = pauseLock.newCondition();
*
* public PausableThreadPoolExecutor(...) { super(...); }
*
* protected void beforeExecute(Thread t, Runnable r) {
* super.beforeExecute(t, r);
* pauseLock.lock();
* try {
* while (isPaused) unpaused.await();
* } catch (InterruptedException ie) {
* t.interrupt();
* } finally {
* pauseLock.unlock();
* }
* }
*
* public void pause() {
* pauseLock.lock();
* try {
* isPaused = true;
* } finally {
* pauseLock.unlock();
* }
* }
*
* public void resume() {
* pauseLock.lock();
* try {
* isPaused = false;
* unpaused.signalAll();
* } finally {
* pauseLock.unlock();
* }
* }
* }}</pre>
*
* @since 1.5
* @author Doug Lea
*/
/**
* The main pool control state, ctl, is an atomic integer packing
* two conceptual fields
* workerCount, indicating the effective number of threads
* runState, indicating whether running, shutting down etc
(ctl, the main pool control state, is an AtomicInteger packing two conceptual fields:
workerCount, the effective number of threads, and
runState, indicating whether the pool is running, shutting down, and so on.)
* In order to pack them into one int, we limit workerCount to
* (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
* billion) otherwise representable.
(To pack them into a single int, workerCount is limited to (2^29)-1 threads, about 500 million, rather than the (2^31)-1, about 2 billion, that would otherwise be representable.)
* If this is ever an issue in
* the future, the variable can be changed to be an AtomicLong,
* and the shift/mask constants below adjusted.
(If that ever becomes a problem, the variable can be changed to an AtomicLong and the shift/mask constants below adjusted.)
* But until the need
* arises, this code is a bit faster and simpler using an int.
*
* The workerCount is the number of workers that have been
* permitted to start and not permitted to stop.
(workerCount is the number of workers that have been permitted to start and not permitted to stop.)
* The value may be
* transiently different from the actual number of live threads,
(The value may be transiently different from the actual number of live threads,)
* for example when a ThreadFactory fails to create a thread when
* asked, and when exiting threads are still performing
* bookkeeping before terminating.
(for example when a ThreadFactory fails to create a thread when asked, or when exiting threads are still performing bookkeeping before terminating.)
* The user-visible pool size is
* reported as the current size of the workers set.
* The runState provides the main lifecycle control, taking on values:
*
* RUNNING: Accept new tasks and process queued tasks
(accept new tasks and process queued tasks)
* SHUTDOWN: Don't accept new tasks, but process queued tasks
(do not accept new tasks, but still process queued tasks)
* STOP: Don't accept new tasks, don't process queued tasks,
* and interrupt in-progress tasks
(do not accept new tasks, do not process queued tasks, and interrupt in-progress tasks)
* TIDYING: All tasks have terminated, workerCount is zero,
* the thread transitioning to state TIDYING
* will run the terminated() hook method
(all tasks have terminated and workerCount is zero; the thread transitioning to TIDYING will run the terminated() hook method)
* TERMINATED: terminated() has completed
(the terminated() hook has completed)
*
* The numerical order among these values matters, to allow
* ordered comparisons. The runState monotonically increases over
* time, but need not hit each state. The transitions are:
(The numerical order of these values matters, because it allows ordered comparisons. runState increases monotonically over time, but need not hit every state. The transitions are:)
* RUNNING -> SHUTDOWN
* On invocation of shutdown(), perhaps implicitly in finalize()
(on invocation of shutdown(), perhaps implicitly in finalize())
* (RUNNING or SHUTDOWN) -> STOP
* On invocation of shutdownNow()
(on invocation of shutdownNow())
* SHUTDOWN -> TIDYING
* When both queue and pool are empty
(when both the queue and the pool are empty)
* STOP -> TIDYING
* When pool is empty
(when the pool is empty)
* TIDYING -> TERMINATED
* When the terminated() hook method has completed
(when the terminated() hook method has completed)
* Threads waiting in awaitTermination() will return when the
* state reaches TERMINATED.
(Threads waiting in awaitTermination() return once the state reaches TERMINATED.)
*
* Detecting the transition from SHUTDOWN to TIDYING is less
* straightforward than you'd like because the queue may become
* empty after non-empty and vice versa during SHUTDOWN state, but
* we can only terminate if, after seeing that it is empty, we see
* that workerCount is 0 (which sometimes entails a recheck -- see
* below).
(Detecting the transition from SHUTDOWN to TIDYING is less straightforward than you might like, because during the SHUTDOWN state the queue may become empty after being non-empty and vice versa; we can only terminate when, after seeing the queue empty, we also see workerCount equal to 0, which sometimes requires a recheck.)
*/
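For reference, the packing itself is done with a handful of shift/mask helpers. The snippet below is quoted from memory of the JDK 8 ThreadPoolExecutor source, so it is worth double-checking against your own JDK:
// High 3 bits of ctl hold the run state, low 29 bits hold the worker count.
private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
private static final int COUNT_BITS = Integer.SIZE - 3;           // 29
private static final int CAPACITY   = (1 << COUNT_BITS) - 1;      // max worker count, (2^29)-1

// runState values, stored in the high-order bits
// (note RUNNING < SHUTDOWN < STOP < TIDYING < TERMINATED, which enables ordered comparisons)
private static final int RUNNING    = -1 << COUNT_BITS;
private static final int SHUTDOWN   =  0 << COUNT_BITS;
private static final int STOP       =  1 << COUNT_BITS;
private static final int TIDYING    =  2 << COUNT_BITS;
private static final int TERMINATED =  3 << COUNT_BITS;

// Packing and unpacking ctl
private static int runStateOf(int c)     { return c & ~CAPACITY; }
private static int workerCountOf(int c)  { return c & CAPACITY; }
private static int ctlOf(int rs, int wc) { return rs | wc; }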