[concurrency-interest] Class striped/ordered Thread Pool

Aleksey Shipilev aleksey.shipilev at gmail.com
Sat May 12 13:39:32 EDT 2012


On Sat, May 12, 2012 at 8:53 PM, √iktor Ҡlang <viktor.klang at gmail.com> wrote:
>
>
> On Sat, May 12, 2012 at 6:48 PM, Aleksey Shipilev
> <aleksey.shipilev at gmail.com> wrote:
>>
>> Yes, that is why I have a general issue with solving this problem with
>> map-queue designs. Why would one mess with shared state and instantiate
>> queues per striped object, when tasks can just be multiplexed over
>> single-threaded executors? The OP did not force the constraint that
>> tasks be executed in a batch; there is only the constraint that
>> same-class tasks be executed in order. That is why I believe the
>> stateless design is better: it is much more fool-proof and just as
>> efficient. Is there something I am missing here?
>
>
> A consistent-hashing approach as you described is a good one as well; the
> problem is that when the stripes are unevenly distributed you end up with
> an effectively single-threaded program.

Granted, I am from the "thou shalt have a good hashcode" camp.
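
To be concrete, the hashing design I have in mind is nothing more than
multiplexing tasks over a fixed set of single-threaded executors by the
hash of their stripe key. A rough sketch only (class and method names
are made up for illustration):

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  // Sketch of the "stateless" hashing design: N single-threaded lanes,
  // tasks routed by the hash of their stripe key. Tasks sharing a key
  // always land on the same lane, hence execute in submission order.
  public class HashStripedExecutor {
      private final ExecutorService[] lanes;

      public HashStripedExecutor(int parallelism) {
          lanes = new ExecutorService[parallelism];
          for (int i = 0; i < parallelism; i++) {
              lanes[i] = Executors.newSingleThreadExecutor();
          }
      }

      public void execute(Object stripeKey, Runnable task) {
          // relies on a decent hashCode() on the stripe key, as noted above
          int h = stripeKey.hashCode();
          h ^= (h >>> 16);  // spread the bits a little
          lanes[(h & 0x7fffffff) % lanes.length].execute(task);
      }

      public void shutdown() {
          for (ExecutorService lane : lanes) {
              lane.shutdown();
          }
      }
  }

No shared mutable state beyond what the executors themselves maintain,
which is what I mean by "stateless".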

In the other case, when stripes are unevenly distributed, e.g. when
tasks from a single class dominate the execution, the map-queue design
is effectively serial as well. That is not the fault of either
implementation; the task stream itself does not expose enough
parallelism. One could do a quick back-of-the-envelope calculation of
the maximum parallelism available in the incoming stream, and of how
far each design can capitalize on it. I'd speculate the map-queue
design would run at the optimum, while the hashing design would be
negligibly slower than optimal.
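
For the record, by "map-queue design" I mean roughly the usual
serial-executor-per-key pattern: a concurrent map from the stripe key
to a small FIFO that keeps at most one drain task in flight on the
shared "real" executor. A sketch of the shape only, not anyone's actual
code (names invented):

  import java.util.ArrayDeque;
  import java.util.Queue;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.ConcurrentMap;
  import java.util.concurrent.Executor;

  // Sketch of the map-queue design: one serial queue per stripe key,
  // drained one task per submission to the backing executor.
  public class OrderedExecutor {
      private final Executor backend;
      private final ConcurrentMap<Object, SerialQueue> queues =
              new ConcurrentHashMap<Object, SerialQueue>();

      public OrderedExecutor(Executor backend) {
          this.backend = backend;
      }

      public void execute(Object key, Runnable task) {
          // (cleanup of idle keys omitted; that is part of the
          // shared-state bookkeeping I mentioned above)
          SerialQueue queue = queues.get(key);
          if (queue == null) {
              SerialQueue fresh = new SerialQueue();
              queue = queues.putIfAbsent(key, fresh);
              if (queue == null) {
                  queue = fresh;
              }
          }
          queue.submit(task);
      }

      // Per-key queue; at most one drain is ever in flight for a key,
      // so same-key tasks never run concurrently or out of order.
      private class SerialQueue implements Runnable {
          private final Queue<Runnable> tasks = new ArrayDeque<Runnable>();
          private boolean scheduled = false;

          synchronized void submit(Runnable task) {
              tasks.add(task);
              if (!scheduled) {
                  scheduled = true;
                  backend.execute(this);
              }
          }

          public void run() {
              Runnable next;
              synchronized (this) {
                  next = tasks.poll();
              }
              try {
                  next.run();
              } finally {
                  synchronized (this) {
                      if (tasks.isEmpty()) {
                          scheduled = false;
                      } else {
                          // hand the next task back to the backend
                          backend.execute(this);
                      }
                  }
              }
          }
      }
  }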

> Also worth mentioning is that the map-queue approach does not need to do
> batch-execution; I only added that because it was trivial and usually gives
> better performance, as it removes pressure from the submission queue of the
> "real" Executor.

Yup, I can see that. Neat trick. It trades off strict inter-class
FIFO a bit, but that seems a fair price for the performance
advantages of batching.
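
In case it helps, the batching variant I picture is just a drain loop
in the per-key worker, e.g. replacing run() in the sketch above (again,
only an illustration):

  public void run() {
      for (;;) {
          Runnable next;
          synchronized (this) {
              next = tasks.poll();
              if (next == null) {
                  scheduled = false;  // nothing left for this key, go idle
                  return;
              }
          }
          try {
              next.run();
          } catch (Throwable t) {
              // a failing task should not stall the rest of its stripe;
              // real code would log this or call an exception handler
          }
      }
  }

One backend submission now drains everything queued for the key, which
is where the submission-queue pressure goes away, and also where a busy
key can hold on to a worker thread for a while.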

-Aleksey.


