[concurrency-interest] ThreadPoolExecutor

Patrick Eger peger at automotive.com
Wed Oct 5 15:24:18 EDT 2005


Hi, great work on the util.concurrent package. We have converted from the
Oswego package entirely; however, there remains one missing piece that I
could not figure out how to do. Basically, we need a ThreadPoolExecutor
with the following properties (a constructor sketch follows the list):
 
1) {infinite | bounded} linked list request queue, where the bound is
really high to avoid OOM (100000+)
2) MIN of 0 threads in pool
3) MAX # of threads in pool, configurable on pool create
4) reuse existing threads if available
5) allow pool to shrink to zero threads if there are no outstanding
requests
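
For reference, here is roughly what we would like the constructor call to
look like, with the parameters mapped to the numbered properties above
(the wrapper class name and the 60-second keep-alive are illustrative
only, not our real code); the open question is what to pass for
corePoolSize, since #2 and #5 hinge on that choice:

------------------------------------------------------------------------
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DesiredPoolSketch {
    // Illustrative factory for the pool described above.
    static ThreadPoolExecutor newPool(int corePoolSize, int maxThreads) {
        return new ThreadPoolExecutor(
                corePoolSize,           // ??? -- the problematic choice, see below
                maxThreads,             // #3: MAX, configurable at pool create
                60L, TimeUnit.SECONDS,  // keep-alive for idle threads above corePoolSize
                new LinkedBlockingQueue<Runnable>(100000)); // #1: huge bound to avoid OOM
    }
}
------------------------------------------------------------------------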
 

This was possible in the old Oswego package, but with the JDK's
ThreadPoolExecutor all of these seem possible except for #4, which
conflicts with the class documentation here:

"When a new task is submitted in method execute(java.lang.Runnable)
<http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ThreadPool
Executor.html#execute%28java.lang.Runnable%29> , and fewer than
corePoolSize threads are running, a new thread is created to handle the
request, even if other worker threads are idle. If there are more than
corePoolSize but less than maximumPoolSize threads running, a new thread
will be created only if the queue is full"



Because the LinkedBlockingQueue is never full (or by the time it is, it
is already HUGE), no combination of corePoolSize and maximumPoolSize
seems to allow this.

Basically, the problem is:

If corePoolSize == MAX, idle threads are ignored, and we will always
climb up to MAX threads very quickly, even though we only want MAX
threads under heavy load.

If corePoolSize < MAX, with an infinite (or effectively infinite)
request queue, we have effectively reduced MAX to corePoolSize.
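
The reason for both outcomes is the documented submission policy;
paraphrasing the javadoc quoted above (a toy sketch, not the actual JDK
source):

------------------------------------------------------------------------
// Toy paraphrase of the documented execute() decision order
// (illustrative only, NOT the JDK implementation):
class SubmissionPolicySketch {
    static String decide(int poolSize, int corePoolSize,
                         int maxPoolSize, boolean queueFull) {
        if (poolSize < corePoolSize)
            return "start new thread (even if existing workers are idle)";
        if (!queueFull)
            return "enqueue the task";
        if (poolSize < maxPoolSize)
            return "start new thread";
        return "reject";
    }
}
------------------------------------------------------------------------

Under that policy, an idle worker is never preferred over creating a new
thread while the pool is below corePoolSize, which is exactly what bites
us in both cases above.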
 

Currently we have a hacked-up JDK 1.6 ThreadPoolExecutor to give us #4
(with corePoolSize == MAX and allowCoreThreadTimeout == true), i.e. reuse
existing idle threads before trying to create new ones. I'm not certain
our implementation is correct, and it is most certainly ugly:


Hacked into "public void execute(Runnable command)" right after the "if
(runState != RUNNING)" block:
------------------------------------------------------------------------
//NOTE: Added this for less thread-crazy behaviour
if (workersBlocked.get() > 0) {
    if (workQueue.offer(command)) {
        //HACK: this should protect against the pool shrinking; should be very rare...
        if (poolSize == 0)
            addIfUnderCorePoolSize(new Runnable() { public void run() {} });

        return;
    }
}
------------------------------------------------------------------------
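
For completeness, the hacked pool itself is then constructed roughly like
this (assuming the 1.6 allowCoreThreadTimeOut(boolean) setter; the class
name and concrete numbers are example values only):

------------------------------------------------------------------------
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class HackedPoolSetupSketch {
    static ThreadPoolExecutor newPool(int max) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                max, max,                   // corePoolSize == MAX
                60L, TimeUnit.SECONDS,      // keep-alive (example value)
                new LinkedBlockingQueue<Runnable>(100000));
        pool.allowCoreThreadTimeOut(true);  // 1.6: core threads may time out,
                                            // so the pool can shrink to zero
        return pool;
    }
}
------------------------------------------------------------------------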
 
 

Everything is working correctly as per the docs AFAIK; it just seems
counterintuitive. It seems quite pointless and less performant to be
creating new threads while existing ones are idle and available to
process work. Is this just a bad interaction between ThreadPoolExecutor
and LinkedBlockingQueue? Is there another queue type that will work for
me, or a thread pool option I am missing?

 

Thanks in advance and please let me know if I am off base or incorrect
here.


Best regards,

Patrick



