[concurrency-interest] java.util.concurrent.ThreadPoolExecutor does not execute jobs in some cases

David Holmes davidcholmes at aapt.net.au
Thu Apr 23 22:36:23 EDT 2009

I should add that Gregg's scheme could probably be realized just by
introducing a minCoreSize. (Though then you'd have to decide whether
prestartAllCoreThreads() starts minCoreSize or corePoolSize threads.)


> -----Original Message-----
> From: concurrency-interest-bounces at cs.oswego.edu
> [mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of David Holmes
> Sent: Friday, 24 April 2009 12:29 PM
> To: gregg.wonderly at pobox.com
> Cc: Ashwin Jayaprakash; concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] java.util.concurrent.ThreadPoolExecutor does not execute jobs in some cases
>
> Gregg Wonderly writes:
> > The current strategy defers overload handling until the queue is full.
> Yes. As I explained, the queue is to buffer transient overflow; the increase
> to max threads is to try to recover from excessive overflow. The queue being
> full is the signal that the overload is now excessive and additional threads
> are needed to try to recover. (Whether they will or not depends on whether
> the arrival rate drops back to normal levels.)
> But throwing more threads at the problem only gets more jobs "in progress";
> it doesn't mean that they will necessarily complete in a reasonable or
> expected time. As you state, by throwing in more threads you can simply
> create contention, both in the TPE/queue and in the application code that
> gets executed. Hence the programmer has to decide that this is the right
> thing by setting core, max and the queue size appropriately.
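> To make that concrete, here is a minimal sketch of a pool configured along
> those lines. The specific numbers (4 core threads for the expected load, a
> 100-slot queue for transient overflow, 8 max threads for excessive overflow)
> are illustrative only:
>
>     import java.util.concurrent.*;
>
>     public class BufferThenGrowSketch {
>         public static void main(String[] args) {
>             // Assuming every submitted task runs long enough to keep its thread
>             // busy: tasks 1-4 each start a core thread, tasks 5-104 are queued
>             // (transient overflow), tasks 105-108 force creation of threads 5-8
>             // (excessive overflow), and anything beyond that is rejected.
>             ThreadPoolExecutor pool = new ThreadPoolExecutor(
>                     4,                                      // corePoolSize: expected load
>                     8,                                      // maximumPoolSize: recovery limit
>                     60, TimeUnit.SECONDS,                   // idle-out for non-core threads
>                     new ArrayBlockingQueue<Runnable>(100)); // bounded buffer
>             pool.execute(new Runnable() {
>                 public void run() { System.out.println(Thread.currentThread().getName()); }
>             });
>             pool.shutdown();
>         }
>     }
>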
> > The purpose of the queue should be to defer work that is beyond
> > maxPoolSize, it seems to me.
> Hmmm, I think that is because you have a different view of what maxPoolSize
> represents. It seems to me that in your view of how things should be,
> corePoolSize is really minPoolSize and there is no distinction between a
> core and non-core thread. So the pool lets threads get created and idle out
> between min and max, but once max is reached it starts buffering. The way
> "max" operates here actually corresponds to the present corePoolSize, but we
> don't have a direct equivalent of minPoolSize.
> That's certainly an alternative model.
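> As a rough sketch, that alternative model (minus a non-zero minPoolSize,
> which effectively becomes zero here) can be approximated with the existing
> API by setting core == max, enabling allowCoreThreadTimeOut, and using a
> bounded queue. The 8-thread limit, 60s timeout and queue size below are
> illustrative only:
>
>     import java.util.concurrent.*;
>
>     public class AlternativeModelSketch {
>         public static void main(String[] args) {
>             // core == max, so "max" behaves like the present corePoolSize:
>             // a new thread is created per task up to 8, then tasks are buffered.
>             ThreadPoolExecutor pool = new ThreadPoolExecutor(
>                     8, 8, 60, TimeUnit.SECONDS,
>                     new LinkedBlockingQueue<Runnable>(1000));
>             // Let idle threads die off; there is no way to keep a non-zero
>             // floor, i.e. no direct minPoolSize.
>             pool.allowCoreThreadTimeOut(true);
>             pool.shutdown();
>         }
>     }
>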
> > The current strategy really does favor late response to an already bad
> > situation.
> I don't agree.
> > Instead, I believe a thread pool should be working hard to avoid a bad
> > situation by running all threads that can be run as soon as possible, and
> > then enqueuing the work that is truly overflow work.
> But if the pool always ramps up to max threads before doing any buffering
> then you are more likely to induce contention and additional latencies, and
> cause more harm. No, let me re-state that, because it depends on how you've
> determined suitable values for core and max. In the model I described, core
> is the number of threads you need to adequately handle the expected load. If
> that number fully utilizes your resources then going beyond it will be
> detrimental. So in that view, ramping up to max is to use a number of
> threads beyond what the system can handle. Of course, if you designate max
> as the limit of utilization (or, equivalently, core is well below the limit
> of the system), then continuing to ramp up to max should, in the absence of
> other factors, improve things. So it really depends on your understanding of
> core and max, and on defining their values appropriately. If the system can
> handle max threads then buffering can certainly seem inappropriate, but then
> you want a next stage to handle your "truly overflow work".
> I think that your scheme can be achieved by chaining pools together (a
> rough sketch follows). The initial pool uses a SynchronousQueue so that it
> effectively grows past coreSize to maxSize. When maxSize is reached, the
> RejectedExecutionHandler kicks in and submits the job to an "overflow" pool.
> This "overflow" pool could set a core size of zero, or one, and so
> effectively start buffering straightaway.
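> Here is a compilable sketch of that chaining; the core/max sizes, queue
> capacities and 1-second task duration are assumptions for illustration only:
>
>     import java.util.concurrent.*;
>
>     public class ChainedPools {
>         public static void main(String[] args) {
>             // Second-stage pool: a single thread and an unbounded queue, so it
>             // starts buffering the "truly overflow" work straightaway.
>             final ThreadPoolExecutor overflow = new ThreadPoolExecutor(
>                     1, 1, 60, TimeUnit.SECONDS,
>                     new LinkedBlockingQueue<Runnable>());
>
>             // First-stage pool: the SynchronousQueue does no buffering, so the
>             // pool grows from core (2) to max (8) threads as tasks arrive. Once
>             // all 8 threads are busy, execute() is rejected and the handler
>             // hands the job to the overflow pool.
>             ThreadPoolExecutor primary = new ThreadPoolExecutor(
>                     2, 8, 60, TimeUnit.SECONDS,
>                     new SynchronousQueue<Runnable>(),
>                     new RejectedExecutionHandler() {
>                         public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
>                             overflow.execute(r);
>                         }
>                     });
>
>             for (int i = 0; i < 20; i++) {
>                 primary.execute(new Runnable() {
>                     public void run() {
>                         System.out.println(Thread.currentThread().getName());
>                         try { Thread.sleep(1000); } catch (InterruptedException ie) { }
>                     }
>                 });
>             }
>             primary.shutdown();
>             overflow.shutdown();
>         }
>     }
>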
> Cheers,
> David
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
