[concurrency-interest] The need for a default ForkJoinPool

Gregg Wonderly gregg at cytetech.com
Wed Aug 18 17:06:55 EDT 2010

Doug Lea wrote:
> A few notes on this:
> ForkJoinPool is entirely geared to maximizing throughput,
> not responsiveness/latency/fairness. In default configuration,
> it tries to fully employ/saturate all CPUs/cores on the machine.
> Because of the FIFO submission admission policy, it will normally
> perform submissions (which are expected to burst into many
> subtasks) one at a time unless there is insufficient parallelism
> or stalls with the current submission, in which case another
> may be accepted to overlap. In general, this will maximize
> overall throughput.
> So, in pure throughput-oriented programming, you really do
> want only one ForkJoinPool per program. Having more than one
> would waste resources and create useless contention. (Note
> however that the work-stealing techniques used internally
> are still better than alternatives even when there is CPU
> contention, since they adapt to cases where some worker
> threads progress much more quickly than others, so long
> as users use fine-grained enough tasks for this to kick in.)
> Even if there were one global pool, we'd still allow
> construction of others for niche purposes of using
> ForkJoinWorkerThread subclasses that add execution context,
> or special UncaughtException handlers, or locally-fifo
> processing (thus these non-default constructors.)
> But otherwise, the only user-level decision I know of
> is whether to configure the ForkJoinPool to use all available
> CPUs/cores, or whether to use a smaller number to
> improve chances of better responsiveness of other parts of a
> system. I'm not positive that even this decision is best
> left to users though. In the future (with better JVM/OS
> support) we might be able to do a better job than
> users could by internally automating dynamic sizing/throttling
> of default-constructed or global ForkJoinPools. If
> JDK libraries start using ForkJoinPool for parallel
> apply-to-all, map, reduce, etc, we will surely need
> to further explore doing this.
> Note: it is possible, and even common, for ForkJoinPool
> to create more threads than the given parallelism
> level to ensure liveness while preserving join dependencies.
> (As I've said before, this is a feature, not a bug!)
> However, this results in only transient periods in which
> more than the target parallelism level number of threads
> are actually running.
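
[Editorial sketch] Doug's one-shared-pool recommendation can be illustrated with a minimal example; the class names, the threshold, and the array-sum workload are mine, not anything from the jsr166 sources. The point is that fine-grained recursive splitting gives the work-stealing deques something to balance:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical fine-grained task: split until the slice is small enough
// that sequential work beats further forking.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // assumed cutoff, tune per workload
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        left.fork();                          // pushed onto this worker's deque; stealable
        long rightSum = new SumTask(data, mid, hi).compute();
        return rightSum + left.join();
    }
}

public class SharedPoolDemo {
    // One pool per program, sized to all available cores (the default).
    static final ForkJoinPool POOL = new ForkJoinPool();

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = POOL.invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 4999950000
    }
}
```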

So, the specific use case that I think makes the most sense to stare at is one 
where an existing API makes no use of FJ, but decides to change to use it in the 
background for work dispatch.  When you drop such code into an existing app that 
already uses FJ, along with other threads, there is no difference between FJ use 
and thread-pool use today from the perspective of unmanageable CPU contention 
(unmanageable because the problem can grow to be intractably complex).

Doug is correct that there doesn't seem to be a "general rule" that simply 
solves the problem of CPU contention.  But I think that falls out of all the 
control Java provides for letting threads be created, and out of how we design 
APIs with continuous execution as a "mechanism" for managing the complexity of 
multi-staged operations.

If FJPool did provide a default pool, it still seems that it could look at 
which pool the current thread is in, and use that pool when it could tell.

enum FJPoolSelectionMode { FJ_DEFAULT_POOL, FJ_CONTEXT_POOL }

private static volatile FJPoolSelectionMode selMode = FJPoolSelectionMode.FJ_DEFAULT_POOL;

public static FJPool getContextPool() {
	return poolCurrentThreadIsIn();
}

public static FJPool getDefaultPool() {
	return selMode == FJPoolSelectionMode.FJ_DEFAULT_POOL ?
		defaultPool :
		getContextPool();
}

public static void setPoolSelectionMode( FJPoolSelectionMode mode ) {
	selMode = mode;
}
Some other choices, or fewer methods with the enum as a parameter, would work as 
well.

It just seems to me that choosing to return a singleton as the only 
implementation is not really a solution at all, because it doesn't let the 
application segregate "general computation" from "real work".

It could divide the CPU usage between multiple pools, and then use the context 
mechanism to keep related work in the same pool, where stealing can be most 
effective.

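[Editorial sketch] Dividing the machine between pools is straightforward with the existing parallelism-level constructor; the 50/50 split between "general computation" and "real work", and both field names, are assumptions of mine:

```java
import java.util.concurrent.ForkJoinPool;

public class SplitPools {
    static final int CORES = Runtime.getRuntime().availableProcessors();

    // Hypothetical split: background computation gets half the cores,
    // "real work" gets the rest, so neither pool saturates the machine.
    static final ForkJoinPool GENERAL =
        new ForkJoinPool(Math.max(1, CORES / 2));
    static final ForkJoinPool REAL_WORK =
        new ForkJoinPool(Math.max(1, CORES - CORES / 2));

    public static void main(String[] args) {
        System.out.println("general=" + GENERAL.getParallelism()
            + " realWork=" + REAL_WORK.getParallelism());
    }
}
```

Per Doug's note above, the pools may still transiently run extra threads to preserve join liveness, so this bounds target parallelism rather than hard-capping thread counts.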
Gregg Wonderly

More information about the Concurrency-interest mailing list