[concurrency-interest] ThreadPoolExecutor

David Holmes dholmes at dltech.com.au
Wed Oct 5 20:17:37 EDT 2005


Patrick,

Your requirements can't be met directly by the current implementation. The
current design works, as you know, by having a core-pool that can either be
pre-started or lazily created, and which always stays around (in 1.6 idle
core threads can timeout but that just takes you back to the lazy startup
mode). Once you have your core pool you only create more threads (up to max)
if your queue fills up - if the queue never fills then no more threads will
be created. If you reach the maximum size with a full queue the task is
rejected.
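To make that concrete, here is a small sketch of the behaviour just described (the sizes are illustrative: core=1, max=2, queue capacity 1, chosen so the outcome is deterministic):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    // Submits four blocking tasks and reports how the pool reacted.
    static String demo() throws InterruptedException {
        final CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = new Runnable() {
            public void run() {
                try { release.await(); } catch (InterruptedException e) { }
            }
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1));
        pool.execute(blocker);       // task 1: creates the core thread
        pool.execute(blocker);       // task 2: core busy -> queued
        pool.execute(blocker);       // task 3: queue full -> non-core thread
        boolean rejected = false;
        try {
            pool.execute(blocker);   // task 4: at max, queue full -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        String result = "poolSize=" + pool.getPoolSize()
                + " rejected=" + rejected;
        release.countDown();
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // poolSize=2 rejected=true
    }
}
```

Note that the second thread only appears because the queue is bounded and full; with an unbounded queue the pool never grows past core.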

> 1) {infinite | bounded} linked list request queue, where the bound is
> really high to avoid OOM (100000+)
> 2) MIN of 0 threads in pool
> 3) MAX # of threads in pool, configurable on pool create
> 4) reuse existing threads if available
> 5) allow pool to shrink to zero threads if there are no outstanding
> requests

Requirement (4) requires a way to detect if threads are "available". But
what does this mean? A worker thread is either executing a task or blocked
waiting for a task to appear in the BlockingQueue. If it is blocked then it
is "available", but to use it you have to submit your task to the queue. To
know if it is blocked you need to keep track of it, which is presumably what
this code is doing:

> //NOTE: Added this for less thread-crazy behaviour
> if (workersBlocked.get() > 0) {
>     if (workQueue.offer(command)) {
>         //HACK: this should protect against the pool shrinking,
>         //should be very rare...
>         if (poolSize == 0)
>             addIfUnderCorePoolSize(new Runnable() {
>                 public void run() {}
>             });
>
>         return;
>     }
> }

However this sort of check requires atomicity that isn't normally present in
the ThreadPoolExecutor. So to do this right requires additional locking;
otherwise two incoming tasks can each see one available worker and assume
that worker will run their task, when in fact one task will remain in the
queue and the pool can end up with queued tasks but fewer than max (or even
core) threads.

So if you really want this you have to pay a price to get it.
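Roughly what that price could look like, as a sketch only - the names here
(workersBlocked, workerWaiting, trySubmitToIdleWorker) are hypothetical and
not part of ThreadPoolExecutor; this shows just the locking shape, not a
complete pool:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: the "is a worker idle?" check and the hand-off to
// the queue become one atomic step, so two submitters cannot both claim
// the same idle worker.
public class GuardedSubmit {
    private final ReentrantLock submitLock = new ReentrantLock();
    private final AtomicInteger workersBlocked = new AtomicInteger();
    private final BlockingQueue<Runnable> workQueue =
            new LinkedBlockingQueue<Runnable>();

    // A worker would call this just before blocking on the queue.
    void workerWaiting() {
        workersBlocked.incrementAndGet();
    }

    // Returns true if the task was handed to an idle worker; if false,
    // the caller should create a new thread (up to max) instead.
    boolean trySubmitToIdleWorker(Runnable command) {
        submitLock.lock();
        try {
            if (workersBlocked.get() > 0 && workQueue.offer(command)) {
                workersBlocked.decrementAndGet(); // claim that worker
                return true;
            }
            return false;
        } finally {
            submitLock.unlock();
        }
    }
}
```

Every submission now contends on one lock, which is exactly the serialisation
cost the stock implementation avoids.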

> Everything is working correctly as per the docs AFAIK, just seemingly
> counterintuitively. It seems quite pointless and lower performant to be
> creating new threads while existing ones are idle and available to
> process work.

The assumption is that the core pool will be quite steady, so even if you
don't create a core thread this time, expected usage means you are going to
create it very soon anyway. If you pre-start the core then you don't create
new threads until your queue is full.
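Pre-starting is directly supported; a tiny sketch (the sizes are arbitrary):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    static String demo() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        // Creates all core threads up front instead of lazily,
        // returning how many were actually started.
        int started = pool.prestartAllCoreThreads();
        String result = "started=" + started
                + " poolSize=" + pool.getPoolSize();
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // started=4 poolSize=4
    }
}
```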

> Is this just a bad interaction between ThreadPoolExecutor
> and LinkedBlockingQueue?  Is there another queue type that will work for
> me or thread pool option I am missing?

It seems to me - and I could be misunderstanding things - that what you want
is a "dual-mode" queue. Set the core size to zero; then you want to submit
the task to the queue if a thread is waiting, and create a thread otherwise.
This is where you need a synchronous queue: it has no capacity, so if no
thread is waiting then offer() will fail, from the pool's perspective the
queue is "full", and a new thread will be created if under max. But once max
is reached you now want a queue that does have capacity. You could use the
RejectedExecutionHandler to make your queue (you'd need to define a custom
queue for this) switch to acting as a (finite) linked blocking queue, and
when that queue detects it is empty it switches back to synchronous mode. I
*think* that would meet your requirements *but* I don't know if what I just
described can actually be implemented. Interesting to think about it
anyway :-)
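The synchronous-queue half of this is easy to see in isolation: with core
size zero and a SynchronousQueue, offer() only succeeds when a worker is
already waiting, so idle threads get reused and new ones are created
otherwise. (The switch to a capacity queue at max is the part that would
need the custom dual-mode queue.) A sketch:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SyncHandoffDemo {
    static String demo() throws InterruptedException {
        Runnable noop = new Runnable() { public void run() { } };
        // core=0, max=4, zero-capacity hand-off queue
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 4, 5L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        pool.execute(noop);                  // no waiter -> new thread
        int afterFirst = pool.getPoolSize();
        Thread.sleep(300);                   // let that worker go idle
        pool.execute(noop);                  // waiter present -> reused
        int afterSecond = pool.getPoolSize();
        pool.shutdown();
        return afterFirst + "," + afterSecond;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // "1,1": the idle thread was reused
    }
}
```

With keep-alive set, the pool also shrinks back to zero threads once the
idle timeout passes, which covers requirements (2), (4) and (5).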

Cheers,
David Holmes


