[concurrency-interest] Unsynchronized lazy conditions

Jonas Konrad me at yawk.at
Thu May 31 07:21:36 EDT 2018


But it doesn't? A plain VH.get is just a normal, non-volatile read. Or does
that end up the same as a volatile load on x86?

- Jonas

On 05/31/2018 12:32 PM, Alex Otenko via Concurrency-interest wrote:
> 
>> On 31 May 2018, at 11:27, Aleksey Shipilev <shade at redhat.com> wrote:
>>
>> On 05/31/2018 12:19 PM, Alex Otenko wrote:
>>> I don’t get this advice. Do the simple thing, declare it volatile. Optimize further (learning
>>> curve + operational subtleties) when that is not fast enough.
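
A minimal sketch of that simple shape, with a hypothetical LazyCondition class
and a boolean flag standing in for whatever field is being guarded:

// Hypothetical sketch: a lazily-set condition published through a volatile flag.
class LazyCondition {
    private volatile boolean ready;   // volatile gives cross-thread visibility of the flag

    void markReady() {
        ready = true;                 // volatile write: seen by all subsequent readers
    }

    boolean isReady() {
        return ready;                 // volatile read: no further synchronization needed
    }
}
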
>> My original reply was about that: what the OP has does appear to work.
>>
>> That does not mean the OP should use it, though, instead of the idiomatic shape: use an AtomicX, gain
>> CAS capability, have a fast-path test, and on the slow path do a CAS to perform the action exactly once.
>> Optimize from that, if you prove that idiom is not working for you.
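
A minimal sketch of that idiom, assuming the guarded work is a Runnable that
must run at most once; the RunOnce class and its field name are made up for
illustration:

import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the AtomicX idiom: fast-path read, slow-path CAS picks a single winner.
class RunOnce {
    private final AtomicBoolean done = new AtomicBoolean();

    void runOnce(Runnable action) {
        if (done.get()) {                            // fast path: already performed
            return;
        }
        if (done.compareAndSet(false, true)) {       // slow path: CAS succeeds for exactly one caller
            action.run();                            // so the action runs exactly once
        }
    }
}
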
>>
>>
>>> (VH.get cannot be different from volatile load on x86, can it?..)
>>
>> Of course it can,
> 
> By what means? If VH.get guarantees observing a cache-coherent value, how can it make that observation any faster than a volatile load on x86?
> 
> 
> Alex
> 
>> it is the magic of VarHandles: use-site, not declaration-site, memory semantics. So
>> you can have a volatile field and do a non-volatile read over it, or you can have a non-volatile field
>> and do a volatile read or CAS over it.
>>
>>
>> -Aleksey
>>
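
A minimal sketch of that use-site flexibility; the Flag class and its plain
boolean field are hypothetical:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// The field itself is plain (non-volatile); the VarHandle chooses the memory
// semantics at each use site: plain read, volatile-strength read, or CAS.
class Flag {
    private boolean set;              // plain, non-volatile field

    private static final VarHandle SET;
    static {
        try {
            SET = MethodHandles.lookup().findVarHandle(Flag.class, "set", boolean.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    boolean plainRead()    { return set; }                                  // plain read
    boolean volatileRead() { return (boolean) SET.getVolatile(this); }      // volatile-strength read
    boolean trySet()       { return SET.compareAndSet(this, false, true); } // CAS over the plain field
}
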
> 

