[concurrency-interest] java.util.concurrent.ThreadPoolExecutor doesnot execute jobs in some cases

Gregg Wonderly gregg at cytetech.com
Fri Apr 24 09:54:22 EDT 2009

David Holmes wrote:
> Gregg Wonderly writes:
>> The current strategy defers overload handling until the queue is
>> full.
> Yes. As I explained the queue is to buffer transient overflow; the increase
> to max threads is to try and recover from excessive overflow. The queue
> being full is the signal that the overload is now excessive and additional
> threads are needed to try and recover. (Whether they will or not depends on
> whether the arrival rate drops back to normal levels).
> But throwing more threads at the problem only gets more jobs "in progress"
> it doesn't mean that they will necessarily complete in a reasonable or
> expected time. As you state, by throwing in more threads you can simply
> create contention, both in the TPE/Queue and in the application code that
> gets executed. Hence the programmer has to choose that this is the right
> thing by setting core, max and the queue size appropriately.
>> The purpose of the queue should be to defer work that is beyond
>> maxPoolSize, it seems to me.
> Hmmm I think that is because you have a different view of what maxPoolSize
> represents. It seems to me that in your view of how things should be,
> corePoolSize is really minPoolSize and there is no distinction between a
> core and non-core thread. So the pool lets threads get created and idle-out
> between min and max, but once max is reached it starts buffering. The way
> "max" operates here actually corresponds to present corePoolSize, but we
> don't have a direct equivalent of minPoolSize.
> That's certainly an alternative model.
>> The current strategy really does favor late response to an already bad
>> situation.
> I don't agree.
>> Instead, I believe a thread pool should be working
>> hard to avoid a bad situation by running all the threads that can be run
>> as soon as possible, and then enqueuing only the work that is truly
>> overflow.
> But if the pool always ramps up to max threads before doing any buffering
> then you are more likely to induce contention and additional latencies, and
> cause more harm.

I think you might be saying this because you are thinking of thread pools 
as a solution for mostly CPU-bound tasks, whereas in my world, most of the time 
I am using them to overlap high-latency operations that are long running 
(relative to the time it takes to start a thread), such as interactions 
with databases and network round trips to other parts of the system.

In the CPU-bound case, you want maxPoolSize to be the limit set by the 
available CPU resources on the local machine, and corePoolSize to represent 
your best guess at what it takes to get the normal work load done.  Instead, 
I have 100 thread pools talking to tens of different databases, and I need 
exactly as many threads running as are needed to get the job done.
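For what it's worth, TPE's current ordering (core threads first, then the queue, then extra threads up to max) is easy to observe directly.  A small sketch, with the pool and queue sizes chosen purely for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueBeforeMaxDemo {
    // Returns {poolSizeWhileQueueFilling, poolSizeAfterQueueFull}.
    public static int[] run() throws InterruptedException {
        // core=1, max=4, bounded queue of 2: the pool only grows past
        // corePoolSize once the queue rejects an offer.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 4, 10, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        CountDownLatch release = new CountDownLatch(1);
        Runnable task = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };
        pool.execute(task);          // starts the single core thread
        pool.execute(task);          // queued; pool size stays at 1
        pool.execute(task);          // queued; queue is now full
        int whileFilling = pool.getPoolSize();
        pool.execute(task);          // queue full -> non-core thread added
        int afterFull = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[] { whileFilling, afterFull };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] sizes = run();
        System.out.println("while queue filling: " + sizes[0]);  // prints 1
        System.out.println("after queue full:    " + sizes[1]);  // prints 2
    }
}
```

Even with four submissions blocked and max set to 4, only the second thread ever appears, and only after the queue is completely full.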

The maxPoolSize value would then represent the load that the remote system 
can absorb while handling the work I am distributing to it.

This is largely why I don't use TPE for these kinds of tasks, and instead use 
pools that I've created over the years that aggressively create threads to 
get work started.  The tasks never come in "together", but they do come in "fast 
enough" (25/second) that I need to dispatch them ASAP, or things degrade because 
of simultaneous contention for database resources and other things in other 
parts of the system.  When no work is happening, I'd like the thread 
resources to go away.
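Incidentally, TPE itself can be coaxed into roughly this aggressive shape with a SynchronousQueue, which is essentially how Executors.newCachedThreadPool is built: with no queue capacity, every submission either hands off to an idle worker or spawns a new thread up to max, and core=0 plus a keep-alive lets every thread idle out when the work stops.  A sketch; the 60-second keep-alive mirrors the cached-pool default, and maxThreads is whatever the remote system can take:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AggressivePool {
    // core=0: all threads may idle out when work stops.  The
    // SynchronousQueue has no capacity, so a submission that finds no
    // idle worker creates a new thread immediately, up to maxThreads.
    public static ThreadPoolExecutor create(int maxThreads) {
        return new ThreadPoolExecutor(
                0, maxThreads,
                60, TimeUnit.SECONDS,        // idle threads die after 60s
                new SynchronousQueue<Runnable>());
    }
}
```

The trade-off is that once maxThreads are all busy, execute() rejects rather than buffering; getting "queue only the true overflow" behavior out of TPE on top of this would take a custom RejectedExecutionHandler.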

It's a different need from what TPE was targeted at, I think, but I still 
contend that if you are hearing about problems with too many threads being 
created, it is because people don't understand how corePoolSize and 
maxPoolSize interact.

Gregg Wonderly
