[concurrency-interest] Unsynchronized lazy conditions
oleksandr.otenko at gmail.com
Thu May 31 06:32:23 EDT 2018
> On 31 May 2018, at 11:27, Aleksey Shipilev <shade at redhat.com> wrote:
> On 05/31/2018 12:19 PM, Alex Otenko wrote:
>> I don’t get this advice. Do the simple thing, declare it volatile. Optimize further (learning
>> curve + operational subtleties) when that is not fast enough.
> My original reply was about that: what OP has does appear to work.
> It does not mean OP should use it, though, instead of doing the idiomatic shape: do AtomicX, gain
> CAS capability, have fast-path test, on slow-path do CAS to perform the action exactly once.
> Optimize from that, if you prove that idiom is not working for you.
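The idiomatic shape described above (fast-path read, slow-path CAS so the result is published exactly once) can be sketched roughly like this; the `Lazy` class and its `get` method are hypothetical names chosen for illustration, not anything from the thread:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Sketch of the AtomicX idiom: fast-path read, slow-path CAS.
// The supplier may run in more than one racing thread, but CAS
// guarantees exactly one result is ever installed and observed.
final class Lazy<T> {
    private final AtomicReference<T> ref = new AtomicReference<>();

    T get(Supplier<T> supplier) {
        T v = ref.get();               // fast path: one volatile read
        if (v != null) {
            return v;
        }
        T candidate = supplier.get();  // slow path: compute a candidate
        if (ref.compareAndSet(null, candidate)) {
            return candidate;          // this thread's value won the race
        }
        return ref.get();              // another thread won; use its value
    }
}
```

Note that with this shape the computation itself may run more than once under a race; only the *publication* is exactly-once. If the side effect itself must run once, the CAS has to guard the action rather than the result.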
>> (VH.get cannot be different from volatile load on x86, can it?..)
> Of course it can,
By what means? If VH.get guarantees observing a cache-coherent value, how can it observe it faster than a volatile load on x86?
> it is the magic of VarHandles: use-site, not declaration-site memory semantics.
> You can have a volatile field and do a non-volatile read over it, or you can have a non-volatile field and
> do a volatile read or CAS over it.
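The use-site semantics being described can be illustrated with a small sketch; the `Counter` class and method names are hypothetical, chosen only to show a plain (non-volatile) field accessed with different memory orderings through a VarHandle:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Use-site memory semantics: the field itself is declared plain,
// but each VarHandle access picks its own ordering.
class Counter {
    int value; // non-volatile field

    static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup()
                    .findVarHandle(Counter.class, "value", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    int readPlain()    { return (int) VALUE.get(this); }         // plain read
    int readVolatile() { return (int) VALUE.getVolatile(this); } // volatile read
    boolean casFromTo(int expect, int update) {
        return VALUE.compareAndSet(this, expect, update);        // CAS over the plain field
    }
}
```

The same mechanism works in the other direction: a `volatile` field can be read with `getPlain` through a VarHandle when a particular use site does not need the ordering.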