[concurrency-interest] Unsynchronized lazy conditions
oleksandr.otenko at gmail.com
Thu May 31 09:00:48 EDT 2018
Actually, “totally ordered” is the wrong term for reads here: “totally ordered” would imply that other CPUs agree on the order of the reads, too. What x86 actually guarantees is just “not reordered by a single CPU”.
Still, the instructions for “normal” reads don’t differ from the instructions for “volatile”.
> On 31 May 2018, at 13:13, Alex Otenko <oleksandr.otenko at gmail.com> wrote:
> On x86 all reads are totally ordered with respect to each other, and stores never go ahead of reads. So all reads have the same semantics as volatile reads on x86 - once the Java code is translated to the actual instructions, that is. The JVM is free to not translate some reads appearing in Java code into the actual instructions, as is permitted by the JMM.
> Volatile reads also have implications on what the JVM can do to the rest of the code. Aleksey is being a bit mysterious about what exactly can be done, but what he is getting at, is something like:
> * Once you declare a read as volatile, you force the JVM to materialize all reads that come after it in program order (it can’t reuse values still sitting in registers), and you also force the JVM not to reorder stores ahead of the volatile read.
> * If instead you can tell the JVM that reordering is fine, and that all you need is for that one read not to be eliminated, then you allow all the other code movement and optimizations to occur.
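As a sketch of the distinction drawn in the two bullets above: a VarHandle opaque read is a read that is guaranteed not to be eliminated, while a volatile read additionally constrains surrounding code motion. The class and field names below are made up for illustration, not taken from the thread.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Flags {
    int ready;  // a plain (non-volatile) field

    static final VarHandle READY;
    static {
        try {
            READY = MethodHandles.lookup()
                    .findVarHandle(Flags.class, "ready", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    boolean isReadyVolatile() {
        // Volatile-strength read: ordered with respect to surrounding
        // accesses; forces subsequent reads to be re-materialized.
        return (int) READY.getVolatile(this) != 0;
    }

    boolean isReadyOpaque() {
        // Opaque read: guaranteed not to be eliminated by the JIT,
        // but imposes no ordering on the surrounding accesses.
        return (int) READY.getOpaque(this) != 0;
    }
}
```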
>> On 31 May 2018, at 12:21, Jonas Konrad via Concurrency-interest <concurrency-interest at cs.oswego.edu> wrote:
>> But it doesn't? It's just a normal, non-volatile read. Or is this the same on x86?
>> - Jonas
>> On 05/31/2018 12:32 PM, Alex Otenko via Concurrency-interest wrote:
>>>> On 31 May 2018, at 11:27, Aleksey Shipilev <shade at redhat.com> wrote:
>>>> On 05/31/2018 12:19 PM, Alex Otenko wrote:
>>>>> I don’t get this advice. Do the simple thing, declare it volatile. Optimize further (learning
>>>>> curve + operational subtleties) when that is not fast enough.
>>>> My original reply was about that: what OP has does appear to work.
>>>> It does not mean OP should use it, though, instead of doing the idiomatic shape: do AtomicX, gain
>>>> CAS capability, have fast-path test, on slow-path do CAS to perform the action exactly once.
>>>> Optimize from that, if you prove that idiom is not working for you.
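The idiomatic shape Aleksey describes might look like the following sketch (names are illustrative): a cheap fast-path test, and a CAS on the slow path so that exactly one thread performs the action. Note the caveat in the comment: the flag is claimed before the action completes, so callers must not treat a `true` flag as “action finished”.

```java
import java.util.concurrent.atomic.AtomicBoolean;

class OneShot {
    private final AtomicBoolean done = new AtomicBoolean();

    void runOnce(Runnable action) {
        if (done.get()) {
            return;                        // fast path: one volatile read
        }
        if (done.compareAndSet(false, true)) {
            // Exactly one thread wins the CAS and runs the action.
            // Caveat: the flag is set before action.run() completes,
            // so other threads may return before the action finishes.
            action.run();
        }
    }
}
```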
>>>>> (VH.get cannot be different from volatile load on x86, can it?..)
>>>> Of course it can,
>>> By what means? If VH.get guarantees observing a cache-coherent value, how can it perform that observation faster than a volatile load on x86?
>>>> it is the magic of VarHandles: use-site, not declaration-site memory semantics.
>>>> you can have a volatile field and do a non-volatile read over it, or you can have a non-volatile field and
>>>> do a volatile read or CAS over it.
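Both use-site combinations mentioned above can be sketched with VarHandles (class and field names here are hypothetical): a plain read of a volatile field, and a CAS over a plain field.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Counter {
    volatile int vol;   // declared volatile
    int plain;          // declared plain

    static final VarHandle VOL, PLAIN;
    static {
        try {
            MethodHandles.Lookup l = MethodHandles.lookup();
            VOL = l.findVarHandle(Counter.class, "vol", int.class);
            PLAIN = l.findVarHandle(Counter.class, "plain", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    int readVolatileFieldPlainly() {
        return (int) VOL.get(this);        // plain-mode read of a volatile field
    }

    boolean casPlainField(int expect, int update) {
        return PLAIN.compareAndSet(this, expect, update);  // CAS on a plain field
    }
}
```

The access mode is chosen at each use site, independently of how the field was declared, which is exactly the point being made above.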
>>> Concurrency-interest mailing list
>>> Concurrency-interest at cs.oswego.edu