[concurrency-interest] AtomicReference get vs. getAcquire; get/set Opaque

Vitaly Davidovich vitalyd at gmail.com
Sat Sep 24 16:50:52 EDT 2016


On Saturday, September 24, 2016, Martin Buchholz <martinrb at google.com>
wrote:

>
>
> On Sat, Sep 24, 2016 at 6:19 AM, Dávid Karnok <akarnokd at gmail.com> wrote:
>
>> I have a one element single-producer single-consumer "queue" implemented
>> as this:
>>
>> boolean offer(AtomicReference<T> ref, T value) {
>>     Objects.requireNonNull(value);
>>     if (ref.get() == null) {
>>         ref.lazySet(value);
>>         return true;
>>     }
>>     return false;
>> }
>>
>> T poll(AtomicReference<T> ref) {
>>     T v = ref.get();
>>     if (v != null) {
>>        ref.lazySet(null);
>>     }
>>     return v;
>> }
>>
>> Is it okay to turn get() into getAcquire() and lazySet into setRelease()
>> (I can see lazySet delegates to setRelease)?
>>
>
> Yes, but ... the poll and offer operations will no longer be part of the
> global sequentially consistent order of synchronization actions.
>
How so?

A volatile load (get()) prevents subsequent loads and stores from moving
above it - and so does getAcquire.

lazySet never really had a formal definition and certainly wasn't part of
the JMM.  Most people assumed it only ensured that prior stores didn't move
past the lazySet - nothing more.  setRelease appears to prevent prior loads
and stores from moving past it.  Given that lazySet delegates to setRelease,
that would imply lazySet also doesn't allow prior loads to move past it,
not just stores.

So talking about the global synchronization order when the code was already
using lazySet, rather than a volatile store, seems iffy.
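
For concreteness, the acquire/release variant being asked about would look
roughly like this (just a sketch, assuming Java 9+ AtomicReference; the
class wrapper and names are mine, not from the original code):

import java.util.Objects;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the acquire/release variant of the one-element SPSC "queue".
// Single producer calls offer(), single consumer calls poll().
final class OneElementQueue<T> {
    private final AtomicReference<T> ref = new AtomicReference<>();

    boolean offer(T value) {
        Objects.requireNonNull(value);
        if (ref.getAcquire() == null) {   // acquire load instead of volatile get()
            ref.setRelease(value);        // release store instead of lazySet()
            return true;
        }
        return false;
    }

    T poll() {
        T v = ref.getAcquire();           // acquire load
        if (v != null) {
            ref.setRelease(null);         // release store frees the slot
        }
        return v;
    }
}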

>
> More generally, are there consequences turning CAS loops of
>> get()+compareAndSet into getAcquire() + weakCompareAndSetRelease?
>>
>
>
>
>> In addition, are there examples (or explanations) of when it is okay to
>> use getOpaque/setOpaque  and would they work with the concurrent "queue"
>> above at all?
>>
>
I've asked for more explanation of opaque before, not sure that's
happened.  In particular, it's unclear whether atomicity is guaranteed
(e.g. loading/storing a long on 32-bit archs).  Otherwise it seems to be
mirroring memory_order_relaxed in C++.

For the single-element queue in this thread, it's unclear whether opaque
would work.  An object published via setOpaque may have its preceding
stores become visible after the publication: no ordering beyond program
order is guaranteed, and the CPU can still reorder internally even then.

One way to think about the opaque operations is to pretend they happen
inside a method the JIT didn't inline.  The JIT therefore cannot optimize
across the call because it doesn't know whether the method would invalidate
its optimizations.  So it's kind of a full compiler fence in that sense.
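
As a concrete illustration of that mental model (a sketch with made-up
names, not something the spec spells out): a plain field read may legally
be hoisted out of a loop, whereas the opaque read has to be re-issued each
iteration.

import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: contrasting a plain read with an opaque read in a spin loop.
final class SpinExample {
    private final AtomicBoolean done = new AtomicBoolean();
    private boolean plainDone;  // plain field, for contrast only

    void spinOpaque() {
        // The opaque load cannot be hoisted out of the loop, so a write of
        // true from another thread will eventually be observed.
        while (!done.getOpaque()) {
            Thread.onSpinWait();
        }
    }

    void spinPlain() {
        // A plain read may be hoisted once before the loop, so this loop
        // is allowed to spin forever on a stale cached value.
        while (!plainDone) {
            Thread.onSpinWait();
        }
    }
}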

My personal impression is that opaque is useful only for ensuring atomicity
of a load/store, like memory_order_relaxed in C++, although the atomicity
part hasn't been clarified, AFAIK.
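
Under that relaxed-like reading, the sort of place opaque might be enough
is a single-writer statistics counter where the reader only needs a
non-torn value and no ordering with anything else (again just a sketch,
names are mine):

import java.util.concurrent.atomic.AtomicLong;

// Sketch: opaque used only for word-tearing-free access to a long,
// with no ordering implied relative to any other memory operations.
final class ProgressCounter {
    private final AtomicLong processed = new AtomicLong();

    // Called by the single worker thread; with only one writer, the
    // read-modify-write does not need to be atomic.
    void recordOne() {
        processed.setOpaque(processed.getOpaque() + 1);
    }

    // Called by a monitoring thread; the value is never torn, but nothing
    // can be inferred about other writes the worker has performed.
    long snapshot() {
        return processed.getOpaque();
    }
}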

>
>> (I guess there is not much difference for x86 but my code may end up on
>> weaker platforms eventually.)
>>
>
> I think there is a difference on x86.  Conceptually, set/volatile write
> "drains the write buffer", while setRelease does not.
>
Minor nit: nothing "drains the write buffer" - normal CPU machinery drains
it on its own.  What the volatile set would end up doing is preventing
subsequent instructions from executing until the store buffer is drained.
So it's a pipeline bubble/hazard.
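
For what it's worth, the write-side difference Martin mentions looks
something like this (a sketch; the comments describe what HotSpot typically
emits on x86, not something the spec promises):

import java.util.concurrent.atomic.AtomicReference;

// Sketch: the two write flavors being compared on x86.
final class WriteFlavors {
    private final AtomicReference<Object> ref = new AtomicReference<>();

    void fullVolatileWrite(Object v) {
        // Typically a mov plus a locked instruction (StoreLoad barrier):
        // later memory operations wait until the store buffer has drained.
        ref.set(v);
    }

    void releaseWrite(Object v) {
        // Typically just a plain mov: x86's TSO already provides release
        // ordering, so no barrier and no pipeline bubble.
        ref.setRelease(v);
    }
}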




-- 
Sent from my phone