[concurrency-interest] Backpressure in ExecutorService with uneven length jobs
henri.tremblay at gmail.com
Tue Oct 3 11:04:51 EDT 2017
Short of going reactive, I see two ways to go.
One way is indeed to manage the backpressure some other way, with a
blocking queue for instance. There might be a better solution; I'm not
an expert on the topic. But you seem to already do that, so it would just
mean removing the CallerRunsPolicy (and losing a bit of throughput).
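One common way to do that (a sketch, not necessarily the best option; the class and method names below are illustrative, not from this thread) is to keep the bounded queue but replace CallerRunsPolicy with a RejectedExecutionHandler that blocks the submitter via the queue's put() until space frees up:

```java
import java.util.concurrent.*;

// Sketch: a handler that blocks the submitting thread until the bounded
// queue has room, instead of running the rejected task in the caller.
public class BlockingSubmitDemo {
    static ExecutorService newBoundedPool(int threads, int queueCapacity) {
        return new ThreadPoolExecutor(
            threads, threads,
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(queueCapacity),
            (task, pool) -> {
                try {
                    // Blocks the caller until the queue has space.
                    pool.getQueue().put(task);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new RejectedExecutionException("interrupted", e);
                }
            });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = newBoundedPool(2, 4);
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try { Thread.sleep(5); } catch (InterruptedException ignored) {}
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

One caveat with this pattern: a task put() into the queue after shutdown() may be discarded silently, so submissions should stop before shutting the pool down.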
The other way would be to pause during long-running tasks: if a task is
run by the caller thread, have it pause after a while to dispatch some other
tasks to the worker threads, so they don't starve.
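The blocking-submit idea Shevek asks about below (wrapping the executor in a semaphore) can be sketched roughly like this; the class and method names are made up for illustration:

```java
import java.util.concurrent.*;

// Sketch: cap the number of in-flight tasks with a Semaphore so that
// submit() blocks the caller once 'maxInFlight' tasks are queued or running.
public class ThrottledExecutor {
    private final ExecutorService delegate;
    private final Semaphore permits;

    public ThrottledExecutor(ExecutorService delegate, int maxInFlight) {
        this.delegate = delegate;
        this.permits = new Semaphore(maxInFlight);
    }

    public void submit(Runnable task) throws InterruptedException {
        permits.acquire();               // blocks the caller when saturated
        try {
            delegate.execute(() -> {
                try {
                    task.run();
                } finally {
                    permits.release();   // free a slot when the task finishes
                }
            });
        } catch (RejectedExecutionException e) {
            permits.release();           // never leak a permit on rejection
            throw e;
        }
    }

    public void shutdownAndWait() throws InterruptedException {
        delegate.shutdown();
        delegate.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

With maxInFlight a bit larger than the pool size (say ncpus * 10), the caller never runs a task itself, so one huge job can't stall submission while the workers idle.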
On 3 October 2017 at 04:36, Shevek <shevek at anarres.org> wrote:
> Dear list, I cannot be the first to ask...
> In a previous discussion, it was suggested that using CallerRunsPolicy
> with a queue of (say) ncpus * 10 is the canonical way to achieve
> backpressure. However, when the jobs are of particularly uneven sizes,
> sometimes the caller gets a job which runs for so long that all the worker
> threads exhaust the queue and sit idle. I don't have a good bound for the
> unevenness of jobs, and I have a 32-way Xeon.
> Is there a good solution to this? Perhaps blocking the caller in submit()
> until space is available in the queue? Would I better write my own queue
> implementation, or wrap the executor in a semaphore?
> It's not obviously amenable to an FJP: It's 3 million independent tasks,
> one per node in a graph, although if the answer is "Yes, make an FJP, and
> this is roughly how to make task-stealing do re-balancing", I'll be equally
> happy.
> Thank you.
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu