[concurrency-interest] ForkJoinPool seems to lead to worse latency than traditional ExecutorServices

Gregg Wonderly gregg at cytetech.com
Tue Apr 17 10:03:54 EDT 2012


On 4/17/2012 8:51 AM, √iktor Ҡlang wrote:
>
>
> 2012/4/17 Gregg Wonderly <gregg at cytetech.com>
>
>     On 4/17/2012 8:22 AM, √iktor Ҡlang wrote:
>
>
>
>         On Tue, Apr 17, 2012 at 2:51 PM, David Holmes
>         <davidcholmes at aapt.net.au> wrote:
>
>
>             Sorry that was somewhat terse.
>             ForkJoinPool is not a drop-in replacement for an arbitrary
>         ExecutorService.
>             It is specifically designed to efficiently execute tasks that implement
>             fork/join parallelism. If your tasks don't perform fork/join
>         parallelism but
>         parallelism but
>             are plain old Runnables/Callables that do blocking I/O and other
>         "regular"
>             programming operations then they will not likely see any benefit
>         from using
>             a ForkJoinPool.
>
>
>         I disagree:
>
>         http://letitcrash.com/post/20397701710/50-million-messages-per-second-on-a-single-machine
>
>
>     That has nothing to do with the use of ForkJoin, it appears to me.  It is
>     simply a thread-use efficiency change: a scheduled thread does enough work
>     that the latency of scheduling becomes a small enough component that it
>     disappears from view, because another thread (two are available per core)
>     is running while scheduling occurs.
>
>     This would happen no matter what kind of thread pool was used, given
>     appropriate timing and thread scheduling that created the same effect.
>
>
> No, the scalability of ForkJoinPool is far better than that of other
> j.u.c implementations:

The example you pointed at had nothing to do with using ForkJoin.  It merely 
demonstrated that if you have twice as many threads as cores, you can start to 
hide the scheduling overhead/context-switch latency by performing work in one 
thread while another thread has encountered some form of scheduling latency.  
That example showed that somewhere around 50 messages could be processed in the 
time it took to switch to another thread.  Once you hit that wall, no more 
progress is made.
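
As a rough sketch of that effect (illustrative only: the class name, task body, 
and counts below are made up, not taken from the benchmark), an ordinary fixed 
pool sized at two threads per core keeps each core busy while its sibling 
thread is waiting to be scheduled:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TwoThreadsPerCore {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();

        // Two workers per core: while one worker is parked in the scheduler
        // (or briefly blocked), its sibling keeps the core busy.
        ExecutorService pool = Executors.newFixedThreadPool(2 * cores);

        long start = System.nanoTime();
        for (int i = 0; i < 200_000; i++) {
            pool.execute(TwoThreadsPerCore::doEnoughWork);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.printf("elapsed: %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }

    // Stand-in for "enough work per dispatch": each dispatch must do enough
    // that the hand-off/context-switch cost is small relative to the work.
    private static void doEnoughWork() {
        long sum = 0;
        for (int j = 0; j < 50_000; j++) {
            sum += j * 31L;
        }
        if (sum == 42) {            // never true; defeats dead-code elimination
            System.out.println(sum);
        }
    }
}

The same effect shows up with any pool implementation once each dispatch 
carries enough work to amortize the hand-off cost.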

 > http://letitcrash.com/post/17607272336/scalability-of-fork-join-pool

Yes, this page does say:

"When using thread pool executor (java.util.concurrent.ThreadPoolExecutor) the 
benchmark didn’t scale beyond 12 parallel actors. "

and I will agree that the statement indicates efficiency issues, but without 
source code to look at, it's not really possible to understand where the 
problems in the benchmark code might be.

ForkJoin is about efficiency for many classes of problems, but this problem in 
particular is not one that I would use ForkJoin for.  I would instead have used 
my own thread pool, specifically because I know all about the many 
inefficiencies and undesirable side effects of using TPE just to schedule a 
bunch of threads for parallelism.  I only use TPE as a means to throttle thread 
use against bursty loads, which work much better with TPE and a queue that 
blocks on full insert attempts.
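
Roughly, the throttling arrangement I mean looks like the sketch below (a 
sketch only: the class name, worker count, and queue capacity are placeholders, 
and the blocking rejection handler is just one way to get a queue that blocks 
on full insert attempts):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottledPool {
    public static ThreadPoolExecutor create(int workers, int queueCapacity) {
        // Bounded queue: once it is full, further submissions are handed to
        // the rejection handler instead of piling up without limit.
        ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(queueCapacity);

        // Handler that blocks the submitting thread until there is room in
        // the queue; this is what throttles a bursty producer.
        RejectedExecutionHandler blockOnFull = (task, executor) -> {
            try {
                executor.getQueue().put(task);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException("interrupted while queueing task", e);
            }
        };

        return new ThreadPoolExecutor(workers, workers,
                0L, TimeUnit.MILLISECONDS, queue, blockOnFull);
    }
}

The bounded queue plus a handler that does a blocking put() makes a bursty 
producer stall until a worker drains the queue, instead of growing an unbounded 
backlog; note that a handler like this only makes sense while the executor is 
not shutting down.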

Gregg

 > Cheers,
 > √

