[concurrency-interest] Queue quest

√iktor Ҡlang viktor.klang at gmail.com
Tue Apr 15 09:02:20 EDT 2014


Fair enough!

I'll take it for a spin :)


On Tue, Apr 15, 2014 at 2:48 PM, Stanimir Simeonoff <stanimir at riflexo.com> wrote:

> The 1st real queue should do fine then. Cut off the unnecessary
> notification mechanism and it should be good enough; just pad the head and
> tail to avoid the false sharing.
> If you feel the queue doesn't perform well enough under contention, you
> can try the version with a slack tail, and I don't think it can be done
> any better.
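[The "pad the head and tail" advice can be sketched roughly as follows. This is a hypothetical minimal MPSC linked queue, not the code from the posts being discussed: producers swing the tail with an atomic exchange, the single consumer advances the head with plain writes, and dummy long fields keep the two hot references apart. Caveat: the JVM may reorder fields within one class, so real implementations enforce the layout with an inheritance chain or @Contended.]

```java
import java.util.concurrent.atomic.AtomicReference;

final class MpscQueue<E> {
    static final class Node<E> {
        final E value;
        volatile Node<E> next;
        Node(E value) { this.value = value; }
    }

    // Dummy fields intended to push `head` and `tail` onto different
    // cache lines. HotSpot may reorder fields within a class; production
    // code uses an inheritance chain or @Contended to pin the layout.
    @SuppressWarnings("unused")
    private long p00, p01, p02, p03, p04, p05, p06;
    private Node<E> head;                         // touched only by the consumer
    @SuppressWarnings("unused")
    private long p10, p11, p12, p13, p14, p15, p16;
    private final AtomicReference<Node<E>> tail;  // exchanged by producers

    MpscQueue() {
        Node<E> stub = new Node<>(null);          // sentinel node
        head = stub;
        tail = new AtomicReference<>(stub);
    }

    // Multi-producer: wait-free exchange of the tail, then link.
    public void offer(E e) {
        Node<E> n = new Node<>(e);
        Node<E> prev = tail.getAndSet(n);
        prev.next = n;                            // link may briefly lag behind
    }

    // Single-consumer: plain read/write of head, no CAS.
    public E poll() {
        Node<E> next = head.next;
        if (next == null) return null;            // empty, or link not yet visible
        head = next;
        return next.value;
    }
}
```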
> There was a small bug in last night's post (it was 1am for me too) which
> has since been fixed (during offer, the real head was read instead of the
> local variable).
>
> Technically, drain can work fine too. Since there is a single consumer,
> you can store the drained result on the consumer's side (in an ArrayDeque,
> for instance): if the deque is empty, drain everything, store it in the
> deque, and return deque.poll(). You'd need another volatile dequeSize
> added alongside Node.size. In the end it still takes a volatile write on
> the consumer side to update dequeSize, so there is no real benefit
> compared to the 1st queue approach.
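[A minimal sketch of the drain-into-a-local-deque idea described above. All names here are illustrative, not from the pastebin code, and a ConcurrentLinkedQueue stands in for the shared multi-producer structure:]

```java
import java.util.ArrayDeque;
import java.util.concurrent.ConcurrentLinkedQueue;

final class DrainingConsumer<E> {
    private final ConcurrentLinkedQueue<E> shared = new ConcurrentLinkedQueue<>();
    private final ArrayDeque<E> local = new ArrayDeque<>(); // consumer-thread-only

    public void offer(E e) { shared.offer(e); }             // called by producers

    // Single consumer: refill the local deque only when it runs dry,
    // then serve elements from it with no shared-memory traffic.
    public E poll() {
        if (local.isEmpty()) {
            for (E e; (e = shared.poll()) != null; ) {
                local.addLast(e);
            }
        }
        return local.pollFirst();                           // null when both are empty
    }
}
```

[As the post notes, a size spanning both structures would still need a volatile write per consumed element, which erases the benefit.]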
>
>
> Stanimir
>
>
>
> On Tue, Apr 15, 2014 at 3:32 PM, √iktor Ҡlang <viktor.klang at gmail.com> wrote:
>
>> Hi Stanimir,
>>
>> On Tue, Apr 15, 2014 at 2:23 PM, Stanimir Simeonoff <stanimir at riflexo.com> wrote:
>>
>>> If you can tolerate a bulk drain(), i.e. "Collection<E> drain()" instead
>>> of "E poll()", there is an even simpler implementation: use a stack and
>>> reverse it on poll(). The stack would perform worse under contention, as
>>> the "head" is contended by both the producers and the consumer, but the
>>> implementation is super simple.
>>> http://pastebin.com/4FpvHPEy
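[The stack-plus-reverse idea could look roughly like this (a hypothetical sketch, not the pastebin code): producers push with a CAS loop on a single top reference, and the consumer detaches the whole chain with one getAndSet(null), then reverses it to restore FIFO order.]

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicReference;

final class MpscStack<E> {
    static final class Node<E> {
        final E value;
        final Node<E> next;
        Node(E value, Node<E> next) { this.value = value; this.next = next; }
    }

    // Contended by producers (CAS) and by the consumer (exchange) alike.
    private final AtomicReference<Node<E>> top = new AtomicReference<>();

    // Multi-producer: classic Treiber push.
    public void push(E e) {
        Node<E> t, n;
        do {
            t = top.get();
            n = new Node<>(e, t);
        } while (!top.compareAndSet(t, n));
    }

    // Single consumer: take everything in one atomic exchange and
    // reverse the LIFO chain into FIFO order.
    public Deque<E> drain() {
        Deque<E> out = new ArrayDeque<>();
        for (Node<E> n = top.getAndSet(null); n != null; n = n.next) {
            out.addFirst(n.value);   // prepending reverses the chain
        }
        return out;
    }
}
```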
>>>
>>> I've been using bounded SC/MP queues/stacks with blocking on the
>>> consumer for quite some time, hence the inclusion of park/unpark.
>>>
>>
>> We've already got one approach with bulk drains, but it introduces a lot
>> of extra complication for the consumer: you need to store away the batch
>> in case processing one of its elements fails. So I'd like to avoid it.
>>
>> I know when there's something in the queue (I get signalled when the
>> "first" element is added, idempotently), so a poll should always return
>> something on the first try; subsequent ones aren't required to (given
>> that the queue is empty).
>>
>>
>>>
>>>
>>> Stanimir
>>>
>>>
>>>
>>>
>>> On Tue, Apr 15, 2014 at 2:04 PM, √iktor Ҡlang <viktor.klang at gmail.com> wrote:
>>>
>>>> Alright, cool, I won't need that.
>>>>
>>>>
>>>> On Tue, Apr 15, 2014 at 12:07 AM, Stanimir Simeonoff <
>>>> stanimir at riflexo.com> wrote:
>>>>
>>>>> Park/unpark is only on the consumer side, if you specify a timeout to
>>>>> wait for an element. The producer never waits, apart from the CAS loop.
>>>>>
>>>>> Stanimir
>>>>>
>>>>>
>>>>> On Tue, Apr 15, 2014 at 1:05 AM, √iktor Ҡlang <viktor.klang at gmail.com> wrote:
>>>>>
>>>>>> Hi Stanimir,
>>>>>>
>>>>>> It's really late here, but the presence of park/unpark tells me that
>>>>>> this isn't non-blocking.
>>>>>>
>>>>>>
>>>>>> On Mon, Apr 14, 2014 at 11:59 PM, Stanimir Simeonoff <
>>>>>> stanimir at riflexo.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Mon, Apr 14, 2014 at 4:00 PM, Oleksandr Otenko <
>>>>>>>> oleksandr.otenko at oracle.com> wrote:
>>>>>>>>
>>>>>>>>>  Yes, but capacity availability is tricky to define.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Absolutely, I am open to suggestions!
>>>>>>>>
>>>>>>>>
>>>>>>> Here is an attempt with the described counter:
>>>>>>> http://pastebin.com/fY1AudYY
>>>>>>>
>>>>>>> Basically it is a linked queue without CAS on the head, due to the
>>>>>>> single consumer; each node has an extra field *long counter* and the
>>>>>>> current size is 1+tail.counter-head.counter.
>>>>>>> poll(long nanos) may return spuriously, as it doesn't bother with
>>>>>>> measuring time. Usually that's ok for single-consumer queues, as they
>>>>>>> go back to poll() if they have no other tasks to do.
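[A hypothetical sketch of the counter scheme described above (again not the pastebin code). This version keeps a consumed stub node as the head, so the size comes out as tail.counter - head.counter rather than the 1 + tail.counter - head.counter of a stubless layout:]

```java
import java.util.concurrent.atomic.AtomicReference;

final class CountedMpscQueue<E> {
    static final class Node<E> {
        final E value;
        final long counter;        // sequence number, fixed at offer time
        volatile Node<E> next;
        Node(E value, long counter) { this.value = value; this.counter = counter; }
    }

    private volatile Node<E> head;               // advanced only by the consumer
    private final AtomicReference<Node<E>> tail;

    CountedMpscQueue() {
        Node<E> stub = new Node<>(null, 0L);
        head = stub;
        tail = new AtomicReference<>(stub);
    }

    public void offer(E e) {
        Node<E> t, n;
        do {
            t = tail.get();
            n = new Node<>(e, t.counter + 1);    // next sequence number
        } while (!tail.compareAndSet(t, n));
        t.next = n;                              // publish the link
    }

    public E poll() {
        Node<E> next = head.next;
        if (next == null) return null;
        head = next;                             // no CAS: single consumer
        return next.value;
    }

    // Sequence numbers handed out minus elements consumed; approximate
    // while offers/polls are in flight.
    public long size() {
        return tail.get().counter - head.counter;
    }
}
```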
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Stanimir
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Cheers,
>>>>>> √
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Cheers,
>>>> √
>>>
>>>
>>
>>
>> --
>> Cheers,
>> √
>
>


-- 
Cheers,
√

