[concurrency-interest] java.util.concurrent.ThreadPoolExecutor doesnot execute jobs in some cases

Gregg Wonderly gregg at cytetech.com
Thu Apr 23 21:46:28 EDT 2009


David Holmes wrote:
> Gregg,
> 
> With the right choices of corePoolSize, maxPoolSize, maximum queue length,
> and the idle timeouts (for core and non-core threads) you can achieve a vast
> range of policies - not all of course, but many. For example, if you want
> aggressive thread creation then increase the core size and/or reduce the
> queue size - and if I recall correctly using a SynchronousQueue effectively
> removes the queue and allows you to grow to maxPoolSize without buffering.

The current strategy defers overload handling until the queue is full.  The 
purpose of the queue, it seems to me, should be to defer work that is beyond 
maxPoolSize.  Having a fixed number of threads (corePoolSize) ready to run and 
handle normal inflow is an okay concept.  But if new threads were created 
early, instead of waiting for the queue to overflow, the application would not 
see a huge workload dumped on the available threads as it does now.
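To make David's SynchronousQueue suggestion concrete, here is a small sketch of the eager-creation behavior it produces; the class name, pool sizes, and task count are mine, purely for illustration:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AggressivePool {
    // Submit 'tasks' slow jobs and report how many threads the pool
    // actually created.  With a SynchronousQueue there is no buffering:
    // each submission must be handed straight to a thread, so the pool
    // grows toward maximumPoolSize instead of queueing behind the core.
    static int runBurst(int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 16,                      // small core, larger max
                60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        for (int i = 0; i < tasks; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(500);  // simulate latency-bound work
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        int created = pool.getPoolSize();   // read while all jobs are busy
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return created;
    }

    public static void main(String[] args) throws InterruptedException {
        // With 8 slow tasks and no queue, the pool starts a thread per task.
        System.out.println("threads created: " + runBurst(8));
    }
}
```

With a LinkedBlockingQueue in the same constructor, only the 2 core threads would run and the other 6 tasks would sit in the queue.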

Some time is always required for the threads to run to completion. 
Overlapping that time with available bandwidth (network, CPU or other I/O) 
really is the best choice, or we're all writing multi-threaded apps for no 
reason.  Delaying execution for any length of time is always going to be a 
direct impetus for a queue overflow on the next addition to the executor.

When the queue overflows and all the threads are started at once, you have 
even more latency injected as they all fight for the locks and resources they 
typically share, so the startup latency is amplified into a sequential delay 
which is often quite visible.

The current strategy really does favor a late response to an already bad 
situation.  Instead, I believe a thread pool should work hard to avoid a bad 
situation by running all the threads that can be run as soon as possible, and 
then enqueuing only the work that is truly overflow work.  I.e., if it makes 
sense to run the work in a thread pool, in parallel at all, then starting 
sooner, rather than later, can only help manage resource usage for optimal 
results.

If you pick too large a maxPoolSize, then you can see a problem.  I'm 
guessing that most people really don't understand this behavior of the TPE, 
and so instead of increasing corePoolSize to get better responsiveness, they 
increase maxPoolSize, and every time the queue overflows, they see the life 
sucked out of their application as far too many threads end up running.
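One way to get the eager behavior without the maxPoolSize trap is to raise corePoolSize itself and, since Java 6, let even core threads time out when idle.  A hedged sketch; the class name and sizes are mine, not from the discussion:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerCorePool {
    // core == max: threads are created eagerly, before any queueing,
    // so the unbounded queue only ever holds true overflow work.
    static ThreadPoolExecutor newEagerPool(int threads, long idleSeconds) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                threads, threads,
                idleSeconds, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        // Java 6+: idle core threads exit too, so a large corePoolSize
        // no longer means threads sit around forever when load drops.
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newEagerPool(8, 30);
        System.out.println(pool.getCorePoolSize() + " threads, core timeout="
                + pool.allowsCoreThreadTimeOut());
        pool.shutdown();
    }
}
```

This also answers the "scary" part of Ashwin's workaround below: core == max is safe once core threads are allowed to time out.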

Gregg Wonderly

> The main complaint with the current TPE is that it creates threads too
> aggressively - preferring to create a core thread rather than hand off to an
> idle core thread. This is being addressed for JDK7.
> 
> The overall strategy is all based on queuing theory and expected workload
> given the nature of tasks and their expected arrival rates etc:
> - core threads represent the steady-state number of threads needed to
> maintain response times to an acceptable level given the expected workload
> - the queue allows buffering of work under transient overload conditions
> - the expansion to maxPoolSize when the queue is full is a recovery attempt
> to deal with excessive overload
> - rejected execution is the ultimate overload defence
> 
> Cheers,
> David Holmes
> 
>> -----Original Message-----
>> From: Gregg Wonderly [mailto:gregg at cytetech.com]
>> Sent: Friday, 24 April 2009 8:46 AM
>> To: dholmes at ieee.org
>> Cc: Ashwin Jayaprakash; concurrency-interest at cs.oswego.edu
>> Subject: Re: [concurrency-interest]
>> java.util.concurrent.ThreadPoolExecutor doesnot execute jobs in some
>> cases
>>
>>
>>
>> For high latency threads, this strategy doesn't allow aggressive
>> thread creation
>> to occur to try and make sure things work as responsively as
>> possible.  All of
>> the thread pools that I've ever created have always aimed at
>> maxPoolSize until
>> all threads were running, and corePoolSize would just be the
>> minimum threads
>> that we'd fall back to in an idle state.
>>
>> For network applications where there is database and server latency to
>> deal with, a very aggressive thread creation strategy has always helped
>> me to minimize latency.  For short-lived, CPU-bound work, the current
>> TPE strategy will work, but it still favors some delays that don't have
>> to exist, and creates queue overflow situations more often than if
>> maxPoolSize were always the target.
>>
>> Gregg Wonderly
>>
>> David Holmes wrote:
>>> I don't think the issue here is "corePoolSize is less than
>>> maximumPoolSize" I think this is a particular issue with
>>> corePoolSize==0, and looking at the current JDK7 code I'm pretty sure
>>> this has been addressed so there is always at least one thread even if
>>> corePoolSize is zero.
>>>
>>> Remember the way this is supposed to work is that on a submission the
>>> executor creates a thread if there are less than corePoolSize threads,
>>> else the task is queued. If the queue is bounded and is full then a
>>> thread is again created, provided the number of threads is less than
>>> maximumPoolSize.
>>>
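[The submission order David describes can be sketched as follows; this is a paraphrase of the documented behavior, not the actual JDK source, and the names are mine:]

```java
// Paraphrase of the documented ThreadPoolExecutor.execute() decision
// order (pre-JDK7 semantics); not the actual JDK implementation.
public class SubmitDecision {
    public enum Action { NEW_CORE_THREAD, QUEUED, NEW_EXTRA_THREAD, REJECTED }

    static Action decide(int poolSize, int corePoolSize, int maxPoolSize,
                         boolean queueHasRoom) {
        if (poolSize < corePoolSize) {
            return Action.NEW_CORE_THREAD;   // below core: always start a thread
        }
        if (queueHasRoom) {
            return Action.QUEUED;            // at/above core: buffer the task
        }
        if (poolSize < maxPoolSize) {
            return Action.NEW_EXTRA_THREAD;  // queue full: grow toward max
        }
        return Action.REJECTED;              // saturated: RejectedExecutionHandler
    }

    public static void main(String[] args) {
        // The reported case: corePoolSize == 0 with an unbounded queue means
        // every submission is QUEUED and no thread is ever started.
        System.out.println(decide(0, 0, 512, true));
    }
}
```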
>>> So unless you use a bounded queue that gets full, you will not see the
>>> number of threads grow from corePoolSize to maxPoolSize.
>>>
>>> David Holmes
>>>
>>>     -----Original Message-----
>>>     *From:* concurrency-interest-bounces at cs.oswego.edu
>>>     [mailto:concurrency-interest-bounces at cs.oswego.edu]*On Behalf Of
>>>     *Ashwin Jayaprakash
>>>     *Sent:* Thursday, 23 April 2009 3:33 AM
>>>     *To:* concurrency-interest at cs.oswego.edu
>>>     *Subject:* [concurrency-interest]
>>>     java.util.concurrent.ThreadPoolExecutor doesnot execute jobs in some
>>>     cases
>>>
>>>     Hi, I've raised a bug in the Sun Bug Database. It's still under
>>>     review. But here it is:
>>>
>>>     java.util.concurrent.ThreadPoolExecutor does not execute jobs in
>>>     some cases.
>>>
>>>     If the corePoolSize is less than the maximumPoolSize, then the
>>>     thread pool just does not execute the submitted jobs. The jobs just
>>>     sit there.
>>>
>>>     If the corePoolSize is set to 1 instead of 0, then only 1 thread
>>>     executes the jobs sequentially.
>>>
>>>     ==================================================
>>>     When the corePoolSize is 0:
>>>     ==================================================
>>>
>>>     Submitting job 1
>>>     Submitting job 2
>>>     Submitting job 3
>>>     Submitting job 4
>>>     Shutting down...
>>>     Waiting for job to complete.
>>>
>>>     (Program never exits)
>>>
>>>
>>>     ==================================================
>>>     When the corePoolSize is 1:
>>>     ==================================================
>>>     Submitting job 1
>>>     Submitting job 2
>>>     Submitting job 3
>>>     Submitting job 4
>>>     Shutting down...
>>>     Waiting for job to complete.
>>>     Starting job: temp.Temp$StuckJob at 140c281
>>>     Finished job: temp.Temp$StuckJob at 140c281
>>>     Waiting for job to complete.
>>>     Starting job: temp.Temp$StuckJob at a1d1f4
>>>     Finished job: temp.Temp$StuckJob at a1d1f4
>>>     Waiting for job to complete.
>>>     Starting job: temp.Temp$StuckJob at 1df280b
>>>     Finished job: temp.Temp$StuckJob at 1df280b
>>>     Waiting for job to complete.
>>>     Starting job: temp.Temp$StuckJob at 1be0f0a
>>>     Finished job: temp.Temp$StuckJob at 1be0f0a
>>>     Shut down completed.
>>>
>>>     REPRODUCIBILITY :
>>>     This bug can be reproduced always.
>>>
>>>     ---------- BEGIN SOURCE ----------
>>>     import java.util.LinkedList;
>>>     import java.util.concurrent.*;
>>>
>>>     public class Temp {
>>>        public static void main(String[] args) throws ExecutionException,
>>>     InterruptedException {
>>>            ThreadPoolExecutor tpe = new ThreadPoolExecutor(0, 512,
>>>                    3 * 60, TimeUnit.SECONDS,
>>>                    new LinkedBlockingQueue<Runnable>(),
>>>                    new SimpleThreadFactory("test"));
>>>
>>>            LinkedList<Future> futures = new LinkedList<Future>();
>>>
>>>            System.out.println("Submitting job 1");
>>>            futures.add(tpe.submit(new StuckJob()));
>>>
>>>            System.out.println("Submitting job 2");
>>>            futures.add(tpe.submit(new StuckJob()));
>>>
>>>            System.out.println("Submitting job 3");
>>>            futures.add(tpe.submit(new StuckJob()));
>>>
>>>            System.out.println("Submitting job 4");
>>>            futures.add(tpe.submit(new StuckJob()));
>>>
>>>            System.out.println("Shutting down...");
>>>
>>>            for (Future future : futures) {
>>>                System.out.println("Waiting for job to complete.");
>>>                future.get();
>>>            }
>>>
>>>            tpe.shutdown();
>>>            System.out.println("Shut down completed.");
>>>        }
>>>
>>>        public static class StuckJob implements Runnable {
>>>            public void run() {
>>>                try {
>>>                    System.out.println("Starting job: " + this);
>>>                    Thread.sleep(5000);
>>>                    System.out.println("Finished job: " + this);
>>>                }
>>>                catch (InterruptedException e) {
>>>                    e.printStackTrace();
>>>                }
>>>            }
>>>        }
>>>
>>>        // Minimal stand-in for the reporter's SimpleThreadFactory,
>>>        // which was not included in the post.
>>>        public static class SimpleThreadFactory implements ThreadFactory {
>>>            private final String name;
>>>            public SimpleThreadFactory(String name) { this.name = name; }
>>>            public Thread newThread(Runnable r) { return new Thread(r, name); }
>>>        }
>>>     }
>>>
>>>     ---------- END SOURCE ----------
>>>
>>>     Workaround:
>>>     Set the corePoolSize to be equal to the maximumPoolSize. But this is
>>>     scary because if the pool ever reaches its max limit then all those
>>>     threads will just sit there instead of shutting down after the idle
>>>     time out.
>>>
>>>     Ashwin (http://javaforu.blogspot.com)
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>> _______________________________________________
>>> Concurrency-interest mailing list
>>> Concurrency-interest at cs.oswego.edu
>>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest