[concurrency-interest] Quest for the optimal queue

Viktor Klang viktor.klang at gmail.com
Sun May 13 07:25:17 EDT 2012

On Sun, May 13, 2012 at 1:00 PM, Michael Barker <mikeb01 at gmail.com> wrote:

> >> With most of the experimentation done on the Disruptor, the thing that
> >> has the biggest impact on latency is how you notify the consumer
> >> thread.
> >
> >
> > There's no notification needed. If the consumer is currently active,
> i.e. is
> > being executed by a thread, we want handoffs to be as cheap as possible.
> Cool, you get to avoid the hard problem :-).
> One possible thing to try would be to have some sort of batch based
> dequeue operation rather than a single element dequeue.  If you
> process a mailbox and there are 10 messages in the queue then you could
> remove all 10 and update the head pointer only once.  I'm assuming
> that the CAS/Volatile operations will be the biggest cost.  It won't
> bring much of a single message latency reduction, but will have a
> better profile under a heavy load or burst conditions.

The problem with batch dequeues is that if one of the messages fails we need
to return to a stable state, and doing that would mean storing away the
remaining batch, which would bump the processor's size by at least a
reference, which can be expensive if you have millions of them.
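For illustration, the batch idea Mike describes could be sketched roughly like this: the consumer reads the tail once, processes everything visible, and publishes its progress with a single head update per batch instead of one per element. This is a hypothetical single-producer/single-consumer ring buffer, not the Disruptor's or Akka's actual implementation; the class and method names are made up for the example.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.Consumer;

// Hypothetical SPSC ring buffer illustrating batch dequeue:
// the consumer updates the head index once per batch, so the
// per-message cost of the volatile write is amortized.
final class BatchDrainQueue<E> {
    private final AtomicReferenceArray<E> buffer;
    private final int mask;
    private final AtomicLong head = new AtomicLong(); // consumer index
    private final AtomicLong tail = new AtomicLong(); // producer index

    BatchDrainQueue(int capacityPow2) {
        buffer = new AtomicReferenceArray<>(capacityPow2);
        mask = capacityPow2 - 1;
    }

    boolean offer(E e) {
        long t = tail.get();
        if (t - head.get() == buffer.length()) return false; // full
        buffer.set((int) t & mask, e);
        tail.lazySet(t + 1); // publish the element to the consumer
        return true;
    }

    // Drain everything currently visible, touching head only once.
    int drain(Consumer<E> handler) {
        long h = head.get();
        long t = tail.get(); // one volatile read for the whole batch
        int n = 0;
        for (long i = h; i < t; i++) {
            int idx = (int) i & mask;
            E e = buffer.get(idx);
            buffer.set(idx, null); // allow GC of drained elements
            handler.accept(e);
            n++;
        }
        if (n > 0) head.lazySet(t); // single head update per batch
        return n;
    }
}
```

Note that this sketch sidesteps the stable-state problem above only if the handler cannot fail; if it can, the undrained remainder would indeed have to be stashed somewhere, which is exactly the per-processor memory cost in question.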

It's an interesting problem though. I've been thinking about how to handle
producer conflicts as cheap as possible, as there are no consumer conflicts.
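One way to make producer conflicts cheap when there are no consumer conflicts is a Vyukov-style MPSC linked queue, where producers resolve contention with a single getAndSet (an XCHG on x86) on the tail and the consumer just follows next links without any CAS. This is a minimal sketch under those assumptions, with made-up names, not a claim about how Akka's mailbox is actually implemented:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical MPSC queue: many producers, one consumer.
// Producers contend only on one getAndSet of the tail;
// the consumer owns the head and never contends with anyone.
final class MpscQueue<E> {
    private static final class Node<E> {
        final E value;
        volatile Node<E> next;
        Node(E v) { value = v; }
    }

    private final AtomicReference<Node<E>> tail;
    private Node<E> head; // consumer-only, needs no synchronization

    MpscQueue() {
        Node<E> stub = new Node<>(null); // sentinel node
        tail = new AtomicReference<>(stub);
        head = stub;
    }

    void offer(E e) {
        Node<E> n = new Node<>(e);
        Node<E> prev = tail.getAndSet(n); // the only producer conflict point
        prev.next = n;                    // link in (volatile write publishes)
    }

    E poll() {
        Node<E> next = head.next;         // volatile read
        if (next == null) return null;    // empty, or a producer mid-link
        head = next;
        return next.value;
    }
}
```

Because poll only moves a plain field and does one volatile read, a failed message leaves the queue in a stable state for free, which is what makes the single-element dequeue attractive despite the batching argument above.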


> Mike.

Viktor Klang

Akka Tech Lead
Typesafe <http://www.typesafe.com/> - The software stack for applications
that scale

Twitter: @viktorklang
