[concurrency-interest] ThreadPoolExecutor

Patrick Eger peger at automotive.com
Thu Oct 6 17:34:02 EDT 2005

Sorry, forgot to include the list.

... Snipped
> > P.S. I have confirmed via a small test case that a corePoolSize of 
> > zero will result in submitted tasks *never* executing.
> As you would expect - threads beyond the core size only get created 
> when the queue is full. If the queue can't fill then no threads ever 
> get created. Do you think this combination should be detected and 
> prohibited?

Just a little surprising is all, though I'm not sure how the
ThreadPoolExecutor could know that a queue is non-synchronous.  Actually,
isn't corePoolSize==0 an invalid usage pattern if the queue has *any*
capacity at all?  Otherwise the risk of tasks sitting on the queue never
being executed (or being executed only after a possibly lengthy wait)
is very high, as I see it.  Once the queue overflows though (i.e. offer()
returns false), a thread beyond the core size will be created and will
flush out the queue.  Not sure if this has bitten anyone, but perhaps it
has and no one has noticed...
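(For illustration, a minimal sketch of the safe variant of this configuration: corePoolSize=0 is fine when the queue is synchronous, because offer() fails whenever no worker is waiting, which forces a thread to be created on demand. Class and variable names here are my own, not from the original test case.)

```java
import java.util.concurrent.*;

public class ZeroCoreDemo {
    public static void main(String[] args) throws Exception {
        // corePoolSize = 0 combined with a SynchronousQueue: the queue has
        // no capacity, so offer() returns false unless a worker is already
        // waiting, and the pool then spins up a thread (up to max) on demand.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 10, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        Future<String> f = pool.submit(() -> "ran");
        // The task executes promptly even though there are no core threads.
        System.out.println(f.get(5, TimeUnit.SECONDS)); // prints "ran"
        pool.shutdown();
    }
}
```

With a *capacity-bearing* queue instead, offer() would succeed, no thread would be created, and the task would sit there, which is exactly the surprise described above.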

> > P.P.S. Am I still confused and not knowing what I want?  I assumed 
> > this behaviour is what most people would want for a dynamic work queue 
> > (0-1000 threads) with bursty request patterns (0-1 million+ on the 
> > queue at any given point), but I cannot find much in the archives...
> The current design assumes that the core is set such that the expected 
> load is accommodated, with room for increasing the worker set under 
> heavy load, by limiting the buffering done by the queue. Having large 
> numbers of threads in the pool is generally detrimental to overall 
> performance, but may be necessary if the tasks are expected to block for 
> reasonable periods.
> Otherwise with a small core size you would not generally be concerned 
> about the startup overhead of these threads - whether done eagerly or 
> on demand.
> Do you really expect 1000 active threads? How many CPU's are you 
> running on?
> Even on a 384-way Azul box I wouldn't expect to need a pool that 
> large. :)

Would love to play with such a beast but unfortunately no, we're working
on bog-standard ia32 & amd64 ;-)

What we have is lots of threads active at a time doing things like
waiting for a database or network response, so our pools need to be
pretty big to accommodate all these IO-blocking calls.  I'd love to move
to an event-driven non-blocking IO model, but unfortunately key
technologies such as JDBC don't provide any such options.  Hence my only
real way of not artificially limiting throughput is lots of threads.
The problem with the current thread pool is that the first thousand (for
example, with corePoolSize=1000) requests will each get their own new
thread, regardless of whether these requests ever needed to execute
concurrently or not.  I.e. I'm wasting a bunch of threads that other
request queues in the system may potentially need (we have multiple
pools per instance).  We'd like the multiple pools within the system to
be reasonably bounded by the number of concurrent requests (with a
little leeway for thread keepalives, of course), while still maintaining
maximum concurrency in the face of a large spike of requests or a
slowdown in our dependent IO operations.
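(One way to approximate the behaviour described above with the stock ThreadPoolExecutor, a sketch rather than the poster's actual modified version: pair corePoolSize=0 with a SynchronousQueue and a high maximum, as Executors.newCachedThreadPool does, so the thread count tracks the number of concurrently blocked tasks and idle workers are reaped after the keep-alive. All names and the 30-second/1000-thread figures here are illustrative assumptions.)

```java
import java.util.concurrent.*;

public class BurstyPoolSketch {
    public static void main(String[] args) throws Exception {
        // A SynchronousQueue hands each task straight to a worker, creating
        // a new thread only when every existing worker is busy; idle workers
        // exit after the keep-alive, so pool size follows actual concurrency.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 1000,                      // no pre-started core threads
                30L, TimeUnit.SECONDS,        // reap idle threads after 30s
                new SynchronousQueue<Runnable>());

        CountDownLatch done = new CountDownLatch(50);
        for (int i = 0; i < 50; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(100);        // stand-in for a blocking IO call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        done.await();
        // Threads were created on demand, roughly one per concurrent task.
        System.out.println("largest pool size: " + pool.getLargestPoolSize());
        pool.shutdown();
    }
}
```

The trade-off is the one noted earlier in the thread: with no capacity in the queue, a burst beyond maximumPoolSize is rejected rather than buffered, which is presumably why the poster wants a modified executor instead.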

I hope this all made sense & thanks for all the help.  If you don't see
generic usefulness in the core libs, it's no problem, I'll just maintain a
modified version for myself (and anyone else who's interested, just let me
know).

> Cheers,
> David Holmes
> PS. Doug Lea is obviously incommunicado right now or else I'm certain 
> he would have chipped in - and I expect he will when he can.

Thanks again!

Best regards,

