[concurrency-interest] ForkJoinPool spawning much more workers than its target parallelism

Antoine CHAMBILLE ach at quartetfs.com
Mon Oct 4 03:27:47 EDT 2010

Hello Doug,

Thank you for looking into that issue.

The test platform is a Dell Inspiron T7500 workstation:

2x Intel Xeon 5560 CPU (Nehalem architecture, quad-core, hyperthreading, 16
hardware threads)
48GB of DDR3 memory (1333MHz)
Windows 7 Professional
JDK 1.6.0_21 x64

I will contact you directly to follow up on the troubleshooting.

While I am still on the concurrency mailing list: has anyone else noticed
this new behaviour, where the fork join pool creates a lot of spare worker
threads?
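For anyone who wants to check on their own setup, here is a minimal sketch
(the class name and the parallelism value are placeholders, not from our
code) that samples the pool's live worker count against its target:

```java
import java.util.concurrent.ForkJoinPool;

public class PoolSizeMonitor {
    public static void main(String[] args) throws Exception {
        // Hypothetical: a pool sized for the 16 hardware threads above.
        ForkJoinPool pool = new ForkJoinPool(16);

        // Sample the live worker count a few times; a count persistently
        // far above the target would show the over-spawning described here.
        for (int i = 0; i < 3; i++) {
            System.out.println("target=" + pool.getParallelism()
                    + " workers=" + pool.getPoolSize()
                    + " running=" + pool.getRunningThreadCount());
            Thread.sleep(100);
        }
        pool.shutdown();
    }
}
```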


-----Original Message-----
From: concurrency-interest-bounces at cs.oswego.edu
[mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of Doug Lea
Sent: 01 October 2010 14:21
To: concurrency-interest at cs.oswego.edu
Subject: Re: [concurrency-interest] ForkJoinPool spawning much more workers
than its target parallelism

On 10/01/10 07:57, Antoine CHAMBILLE wrote:
> We test for performance on an Intel Xeon Nehalem platform (2 sockets, 
> 4-cores CPUs, hyperthreading, so 16 hardware threads).

Could you tell me exactly what OS platform and JDK version?

> We have always seen the fork join pool spawning one or two more 
> workers than the target parallelism. It has been explained that this 
> is the way the fork join pool keeps close to its target parallelism. 
> On our test platform we usually see 18 running workers for our target
> parallelism of 16.

Yes. Creating or restarting enough internal spares to maintain target
parallelism and avoid starvation is intrinsically heuristic (it is
impossible to know for sure whether lack of progress is due to insufficient
threads vs momentary unrelated stalls), so will often transiently overshoot.
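As a hedged illustration of that compensation heuristic (the blocker and the
sleep durations below are invented for the demo, not taken from any real
workload): a worker that blocks cooperatively through
ForkJoinPool.managedBlock tells the pool it is stalled, and the pool may
start spare workers so the running count stays near the target.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class SpareWorkersDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical pool with a target parallelism of 4.
        final ForkJoinPool pool = new ForkJoinPool(4);

        // Saturate the pool with tasks that block via managedBlock, so the
        // pool knows its workers are stalled and may compensate with spares.
        for (int i = 0; i < 4; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
                            public boolean block() throws InterruptedException {
                                Thread.sleep(200); // simulated blocking call
                                return true;       // done blocking
                            }
                            public boolean isReleasable() {
                                return false;      // always needs to block
                            }
                        });
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }

        Thread.sleep(100); // give compensation a chance to kick in
        System.out.println("target parallelism = " + pool.getParallelism());
        System.out.println("current pool size  = " + pool.getPoolSize());

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Because the compensation is heuristic, the printed pool size may equal or
exceed the target depending on timing and platform.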

> But the last time we updated from the jsr166y trunk (just after the 
> recent openJDK synchronization) we notice a strong change in this 
> behaviour: The fork join pool now creates tons of workers, up to 3 
> times the target parallelism.
> Is that an expected behaviour? Do you have an idea of the code change 
> that creates it?

This is surprising; thanks for reporting it!
It would be helpful if you could send me (off-list) a test case showing
this. The changes last month mainly entail using a backoff/timeout to smooth
over false alarms due to transient loads, so they should (and for our tests
do) generally result in fewer spare threads, not more. However, they do
introduce new dependencies on JVM timed-wait mechanics, which might account
for this.

Concurrency-interest mailing list
Concurrency-interest at cs.oswego.edu
