[concurrency-interest] Should old ForkJoinWorkerThread die if starting a new thread fails?
jarkko.miettinen at relex.fi
Tue Jun 6 11:32:10 EDT 2017
This does seem like something that would've been discussed before here,
but I could not find anything in the archives or a bug report.
In any case, currently if starting a new thread in
ForkJoinPool#createWorker fails with an exception (OutOfMemoryError
being the most common), the thread that tries to start that new thread
dies too. In specific situations this can lead to all threads in the
ForkJoinPool dying out, which does seem strictly worse than just keeping
the existing threads running and not spawning new ones.
I think OutOfMemoryError is generally considered something that
should not be recovered from. But we might make a different choice here,
as Thread#start can throw an OOM when it runs into process limits that
prevent starting new threads (why, oh why). This can also happen in a very
tightly controlled situation where we might want to just continue working
on the tasks. At least if Thread#start has not been overridden.
As the code in ForkJoinPool is a bit dense, I am not quite sure what the
exact required conditions are. I just know that there should be both tasks
in the pool and still room for additional threads in the pool.
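As an aside, one way to observe whether a pool is still spawning workers is to wrap the default thread factory with a counting one. This is a hedged sketch of my own (the CountingFactory name and counter are mine, not anything in the JDK); it only counts factory calls, it does not catch the Thread#start failure itself, which happens later in the pool's code:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinPool.ForkJoinWorkerThreadFactory;
import java.util.concurrent.ForkJoinWorkerThread;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical diagnostic wrapper: counts how many worker threads the
// pool asks the factory for, delegating actual creation to the default
// factory. Useful for spotting when a pool stops growing.
public class CountingFactory implements ForkJoinWorkerThreadFactory {
    static final AtomicInteger created = new AtomicInteger();

    @Override
    public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
        created.incrementAndGet();
        return ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
    }

    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool(2, new CountingFactory(), null, false);
        pool.submit(() -> 42).join(); // forces at least one worker to be created
        System.out.println("workers created: " + created.get());
        pool.shutdown();
    }
}
```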
The problem will then manifest in stack traces such as this (Oracle JDK):
Exception in thread "ForkJoinPool-3983-worker-33"
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
From the little I looked at the latest jsr166 version in CVS, the
situation seems to be the same even though the methods have changed quite a
bit.
My question is: Is there any way to prevent this, and would such
prevention be beneficial in some or all cases?
At least naively, it would seem that if Thread#start fails with OOM, we
could just return false and let the existing thread continue. But this
is probably not something that's always wanted, and it can mask other, more
serious problems.
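A minimal sketch of that idea (the helper and its name are mine, purely for illustration; the real change would live inside ForkJoinPool's worker-creation code): catch the OutOfMemoryError thrown by Thread#start and report failure to the caller instead of letting the error propagate and kill the calling worker.

```java
// Hypothetical helper illustrating the proposed behavior; not the
// actual ForkJoinPool code.
public class StartSketch {
    static boolean tryStartWorker(Thread t) {
        try {
            t.start();
            return true; // worker is running
        } catch (OutOfMemoryError oom) {
            // e.g. "unable to create new native thread": give up on this
            // worker and let the existing workers keep draining the queue.
            return false;
        }
    }
}
```

As noted, blanket-catching the error here could hide genuine heap exhaustion, so any real fix would have to weigh that trade-off.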