[concurrency-interest] AtomicXXX.lazySet and happens-before reasoning

Ruslan Cheremin cheremin at gmail.com
Mon Oct 10 10:25:09 EDT 2011


I do not know much about that. But lazySet and weakCompareAndSet are already
here (and being used in production code!), and either way we need some way of
reasoning about such code. So even if the Fences API itself is still in
question, some kind of JMM enhancement that introduces such fence-like
constructs and spells out how to reason about them is definitely required. As
far as I understand, such efforts are currently centered around the Fences API.
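
For example, a typical single-writer use of lazySet looks roughly like this
(just a sketch -- the class and field names are made up):

    import java.util.concurrent.atomic.AtomicLong;

    // Single-writer progress counter: the writer publishes with lazySet
    // (an ordered store with no trailing StoreLoad); readers use the
    // normal volatile read.
    class ProgressCounter {
        private final AtomicLong published = new AtomicLong(0);

        // Called only from the single writer thread.
        void publish(long sequence) {
            published.lazySet(sequence);
        }

        // May be called from any reader thread.
        long lastPublished() {
            return published.get();
        }
    }

Exactly for code like this we need a clear way to state what the ordered store
does and does not guarantee.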

2011/10/10 Vitaly Davidovich <vitalyd at gmail.com>:
> Did I miss something in this thread (or elsewhere) that said the Fences API
> is coming? I thought Doug's attempt at this didn't go anywhere last time.
>
> Thanks
>
> On Oct 10, 2011 9:50 AM, "Ruslan Cheremin" <cheremin at gmail.com> wrote:
>>
>> > I'm aware that the type of instruction, and whether a fence is a no-op, is
>> > arch-specific, but I was curious how the compilers (C1/C2) make decisions
>> > around these things, such as whether they track pairs of reads/writes of
>> > the same volatile across method bounds (I believe they don't at the moment).
>>
>> As far as I understand, the JIT does not need to track pairs -- the StoreLoad
>> is required to keep volatile accesses visible in the same order to every
>> thread (the synchronization order), so you can't remove it anyway.
>>
>> I totally agree that it would be very helpful and interesting to see some
>> kind of "improved JSR-133 cookbook" with a slightly more detailed description
>> of the ordering fences issued to support the various JMM requirements, and of
>> their roles. With the forthcoming Fences API in mind, it could help answer
>> many of the questions that keep coming up, like "what fence should I add to
>> lazySet to get exactly a volatile store, and why".
>>
>>
>>
>> > On Oct 10, 2011 9:14 AM, "Doug Lea" <dl at cs.oswego.edu> wrote:
>> >>
>> >> On 10/10/11 08:56, Vitaly Davidovich wrote:
>> >>>
>> >>> I agree that the way StoreLoad is implemented ensures that volatile reads
>> >>> of a different memory location don't move before the store, but I think
>> >>> the JMM only talks about needing this for loads of the same memory
>> >>> location (see the description on Doug's cookbook web page).
>> >>
>> >> Additionally, the "synchronization order" (roughly, any trace of
>> >> all volatile accesses) is required to be a total order, which
>> >> is the main constraint that forces the Dekker example to work.
>> >> This is not made clear enough in the cookbook, which should be improved.
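>> >>
>> >> Roughly, the shape in question (a minimal sketch):
>> >>
>> >>     class Dekker {
>> >>         static volatile int x = 0, y = 0;
>> >>         static int r1, r2;
>> >>
>> >>         static void thread1() { x = 1; r1 = y; }
>> >>         static void thread2() { y = 1; r2 = x; }
>> >>
>> >>         // With each method run once in its own thread, the outcome
>> >>         // r1 == 0 && r2 == 0 is forbidden: the four volatile accesses
>> >>         // must fit into one total synchronization order, and in any
>> >>         // such order at least one load comes after the other thread's
>> >>         // store.
>> >>     }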
>> >>
>> >>> By the way, it would be great if someone from the HotSpot
>> >>> runtime/compiler team could shed some light on how HotSpot handles these,
>> >>> with the caveat that people shouldn't necessarily base their code on it
>> >>> if it makes stronger guarantees than the JMM :).
>> >>
>> >> The main visible cases are processor-based, not JVM-based, and most of
>> >> them only show up as giving you more consistency than is required for
>> >> non-volatile accesses. In general, x86 and SPARC are stronger than POWER
>> >> and ARM, with a few others (Azul, IA64) in the middle.
>> >>
>> >> -Doug
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >
>


