[concurrency-interest] ThreadPoolExecutor

David Holmes dholmes at dltech.com.au
Wed Oct 5 23:44:31 EDT 2005


Patrick,

> There are definite race issues here as well, but I think it is safe in
> that the only problem would be if the pool shrinks to zero size after we
> offer it to the queue.  The "if (poolSize == 0)" clause below protects
> against this case, I think...

The approximation involved in using the workersBlocked count would
invalidate one of the basic guarantees regarding the creation of core
threads - i.e. that one will always be created if needed. Without this, for
example, barrier designs that use a pool wouldn't work: tasks could be
left in the queue with all existing workers blocked on the barrier, waiting
for the other tasks to arrive - which they won't, because those tasks are
still in the queue and fewer than coreSize threads have been created. You
would have to guarantee atomicity using locks.
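To make the barrier scenario concrete, here is a small sketch (class and
sizes are my own, purely illustrative): with an unbounded queue the pool
never grows past coreSize, so when coreSize workers all block on a barrier
that needs one more party, the remaining task sits in the queue forever.

```java
import java.util.concurrent.*;

public class BarrierStallDemo {
    static int waitingAtBarrier;
    static boolean completed;

    public static void main(String[] args) throws Exception {
        // 2 core threads, unbounded queue: the pool never exceeds 2 workers,
        // because an unbounded queue never reports itself as "full".
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());

        // The barrier needs 3 parties, but only 2 tasks can ever be running.
        final CyclicBarrier barrier = new CyclicBarrier(3);
        final CountDownLatch done = new CountDownLatch(3);

        for (int i = 0; i < 3; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        barrier.await();   // the first two workers block here
                        done.countDown();  // never reached
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }

        // The third task stays queued: no thread beyond coreSize is created,
        // so the barrier never trips and the tasks never complete.
        completed = done.await(2, TimeUnit.SECONDS);
        waitingAtBarrier = barrier.getNumberWaiting();
        System.out.println("completed=" + completed
                + " waitingAtBarrier=" + waitingAtBarrier);
        pool.shutdownNow();
    }
}
```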

People have enough trouble understanding the relationship between coreSize,
queue length and maxSize as it is! Adding another variable to the mix would
be a recipe for potential disaster. That's not to say that there couldn't be
a different TPE that worked as you would like, but it seems unlikely to be a
j.u.c class. I wish there were a clean way to abstract these policy choices
out so that subclasses could modify them, but I can't see a way of doing
this. So cut-paste-modify seems the simplest solution.

> Your queue approach is also interesting but sounds much more complex and
> error prone, plus it would really complicate the (currently simple)
> interface between the thread pool and the queue.

Hmmm. There wouldn't be any change in the interface between them - as far as
I can see. You give the pool your special queue and install the queue's
rejected execution handler, then everything "just works". I'm not sure how
the task that triggers the switch from synchronous to non-synchronous mode
gets resubmitted: maybe a simple MyQueue.this.put(), or if necessary hook
the queue to the TPE and let the handler reinvoke execute. I don't *think*
it is that complicated but I won't know unless I try to implement it.
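For what it's worth, one possible sketch of the special-queue idea (all
names here are my invention, not a settled design): the queue's offer()
refuses to buffer while the pool can still grow, which forces execute() to
add a worker; once the pool is at maxPoolSize, execute() rejects the task
and the installed handler falls back to a blocking put() into the same
queue - roughly the MyQueue.this.put() resubmission mentioned above.

```java
import java.util.concurrent.*;

public class GrowFirstPool {
    // A queue that refuses to buffer while the pool can still add a worker,
    // so the TPE creates threads up to maxPoolSize *before* queueing.
    static class GrowFirstQueue extends LinkedBlockingQueue<Runnable> {
        volatile ThreadPoolExecutor pool;  // wired up after construction

        public boolean offer(Runnable r) {
            if (pool != null && pool.getPoolSize() < pool.getMaximumPoolSize()) {
                return false;  // makes execute() try to add a worker instead
            }
            return super.offer(r);  // at max size: buffer normally
        }
    }

    public static ThreadPoolExecutor create(int core, int max) {
        final GrowFirstQueue queue = new GrowFirstQueue();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, 60L, TimeUnit.SECONDS, queue,
                new RejectedExecutionHandler() {
                    // Pool is at maxPoolSize (or raced past it): resubmit
                    // the task by putting it straight into the queue.
                    // Note LinkedBlockingQueue.put() does not go through
                    // our offer() override, so this cannot loop.
                    public void rejectedExecution(Runnable r,
                                                  ThreadPoolExecutor e) {
                        try {
                            queue.put(r);
                        } catch (InterruptedException ie) {
                            Thread.currentThread().interrupt();
                            throw new RejectedExecutionException(ie);
                        }
                    }
                });
        queue.pool = pool;
        return pool;
    }
}
```

The wiring is the only awkward part: the queue needs a reference back to
the pool that is constructed with it, hence the mutable field.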

> P.S. I have confirmed via a small test case that a corePoolSize of zero
> will result in submitted tasks *never* executing.

As you would expect - threads beyond the core size only get created when the
queue is full. If the queue can never fill (for instance, because it is
unbounded) then no threads ever get created. Do you think this combination
should be detected and prohibited?
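The default policy is easy to observe with a bounded queue (the sizes below
are arbitrary): with core=1, max=3 and queue capacity 2, extra threads only
appear once the queue has filled.

```java
import java.util.concurrent.*;

public class CoreVsMaxDemo {
    static int poolSizeAfterSubmits;
    static int queuedAfterSubmits;

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2));

        final CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = new Runnable() {
            public void run() {
                try { hold.await(); } catch (InterruptedException e) { }
            }
        };

        // Task 1: creates the single core thread.
        // Tasks 2-3: queued (queue capacity is 2).
        // Tasks 4-5: the queue is now full, so threads 2 and 3 are created.
        for (int i = 0; i < 5; i++) pool.execute(blocker);

        poolSizeAfterSubmits = pool.getPoolSize();   // 3
        queuedAfterSubmits = pool.getQueue().size(); // 2
        System.out.println(poolSizeAfterSubmits + " threads, "
                + queuedAfterSubmits + " queued");

        hold.countDown();
        pool.shutdown();
    }
}
```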

> P.P.S. Am I still confused and not knowing what I want?  I assumed this
> behaviour is what most people would want for a dynamic work queue
> (0-1000 threads) with bursty request patterns (0-1 million+ on the queue
> at any given point), but I cannot find much in the archives...

The current design assumes that the core is set such that the expected load
is accommodated, with room for increasing the worker set under heavy load,
by limiting the buffering done by the queue. Having large numbers of threads
in the pool is generally detrimental to overall performance, but may be
necessary if the tasks are expected to block for reasonable periods.
Otherwise, with a small core size, you would not generally be concerned
about the startup overhead of these threads - whether they are created
eagerly or on demand.

Do you really expect 1000 active threads? How many CPU's are you running on?
Even on a 384-way Azul box I wouldn't expect to need a pool that large. :)

Cheers,
David Holmes

PS. Doug Lea is obviously incommunicado right now or else I'm certain he
would have chipped in - and I expect he will when he can.


