[concurrency-interest] ForkJoinPool cap on number of threads

Michael Spiegel michael.m.spiegel at gmail.com
Wed Mar 25 13:40:24 EDT 2015


Maybe I am misunderstanding the question, but a simple solution is to use a
semaphore with a fixed number of permits to limit the number of tasks
submitted.
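
For example, here is a minimal, untested sketch of that approach (the pool
size of 8 and the permit count of 64 are arbitrary illustrative values, and
the class name is made up):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Semaphore;

// Sketch: bound the number of in-flight tasks with a Semaphore.
public class BoundedSubmitter {
    private final ExecutorService pool = new ForkJoinPool(8);
    private final Semaphore permits = new Semaphore(64);

    public void submit(Runnable task) throws InterruptedException {
        permits.acquire();               // blocks the caller once 64 tasks are outstanding
        try {
            pool.execute(() -> {
                try {
                    task.run();
                } finally {
                    permits.release();   // free a slot when the task completes
                }
            });
        } catch (RuntimeException e) {
            permits.release();           // don't leak a permit if execute() rejects the task
            throw e;
        }
    }
}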

On Wed, Mar 25, 2015 at 11:24 AM, Luke Sandberg <lukeisandberg at gmail.com>
wrote:

> Thanks!
>
> Looks like I can have my ForkJoinWorkerThreadFactory return null to limit active
> threads. (It would be nice if that were documented on ForkJoinPool.ForkJoinWorkerThreadFactory.)
>
> In that snapshot it looks like the maxSpares limitation only applies to
> the common pool. Is there a plan to add a standard maxSize mechanism for
> user pools?
>
> On Wed, Mar 25, 2015 at 2:06 AM, Viktor Klang <viktor.klang at gmail.com>
> wrote:
>
>> Hi Luke,
>>
>> The newest version of FJP has a tighter bound on the maximum number of threads
>> (search for maxSpares here:
>> http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/main/java/util/concurrent/ForkJoinPool.java?view=markup
>> )
>>
>> Another way of limiting is to have the ThreadFactory return null when no
>> more threads should be created.
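>>
>> A rough, untested sketch of such a factory (the cap is passed in; note that
>> this simple version never reclaims a slot when a worker thread exits):
>>
>> import java.util.concurrent.ForkJoinPool;
>> import java.util.concurrent.ForkJoinWorkerThread;
>> import java.util.concurrent.atomic.AtomicInteger;
>>
>> // Sketch: decline to create more than 'cap' worker threads by returning null.
>> public class CappedWorkerThreadFactory
>>         implements ForkJoinPool.ForkJoinWorkerThreadFactory {
>>     private final AtomicInteger created = new AtomicInteger();
>>     private final int cap;
>>
>>     public CappedWorkerThreadFactory(int cap) { this.cap = cap; }
>>
>>     @Override
>>     public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
>>         if (created.incrementAndGet() > cap) {
>>             created.decrementAndGet();   // over the cap: tell the pool "no new thread"
>>             return null;
>>         }
>>         return ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
>>     }
>> }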
>>
>> On Tue, Mar 24, 2015 at 8:35 PM, Luke Sandberg <lukeisandberg at gmail.com>
>> wrote:
>>
>>> Hi,
>>> I'm trying to use ForkJoinPool, and I am concerned that the maximum
>>> number of threads it will create is capped at 32767, whereas I would like
>>> to set a much lower cap (4-16).
>>>
>>> You can configure the FJP with a target parallelism, but I don't know if
>>> it is possible to actually put a cap on the maximum parallelism.
>>>
>>> In my specific case I will just be using it as an ExecutorService
>>> (basically execute() and shutdown()) with the 'asyncMode' bit set to
>>> 'true', and will _not_ be using the Fork/Join framework.  (I want to see if
>>> my application can get better performance via decreased contention and
>>> better cache locality.)  Based on my reading of the docs, I think this means
>>> that it will not create any compensating threads, since I would not be using
>>> the managed blocking facilities.
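>>>
>>> For reference, the construction I have in mind is roughly the following
>>> (the parallelism of 8 is just an example value):
>>>
>>> import java.util.concurrent.ForkJoinPool;
>>>
>>> // Fixed target parallelism, default worker factory, no uncaught-exception
>>> // handler, asyncMode = true (FIFO scheduling for execute()-only use).
>>> ForkJoinPool pool = new ForkJoinPool(
>>>     8,                                                // target parallelism (example value)
>>>     ForkJoinPool.defaultForkJoinWorkerThreadFactory,
>>>     null,
>>>     true);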
>>>
>>> Does anyone have any experience with using this in production?  I am
>>> concerned that 'compensating' threads could cause my servers to run out of
>>> memory for thread stacks (at 1MB apiece they aren't cheap), and my
>>> applications run in a shared hosting environment with strict caps on
>>> per-process RAM usage (if I go over my reservation, my server will be killed).
>>> Currently, we manage this with ThreadPoolExecutor by exposing our maxSizes
>>> in our configuration and using that to set reservations.
>>>
>>> Thanks,
>>> Luke
>>>
>>
>>
>> --
>> Cheers,
>> √

