[concurrency-interest] AtomicXXX.lazySet and happens-before reasoning

Vitaly Davidovich vitalyd at gmail.com
Mon Oct 10 19:18:13 EDT 2011


Thanks David.  I've actually seen HotSpot combine two successive volatile
writes so that only one fence is emitted after them on x86.
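
A minimal sketch of the shape I mean (class and field names are invented
here, purely for illustration):

    class Coalesce {
        volatile int a;
        volatile int b;

        void publish(int x, int y) {
            // Two back-to-back volatile stores.  Between them the JMM only
            // needs StoreStore ordering, which is a no-op on x86, and the
            // expensive StoreLoad barrier is only required before a later
            // volatile load, so the JIT can emit both stores and then a
            // single locked instruction (or mfence) after the second one
            // instead of one fence per store.
            a = x;
            b = y;
        }
    }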
On Oct 10, 2011 5:25 PM, "David Holmes" <davidcholmes at aapt.net.au> wrote:

> The VM augments volatile loads and stores with pre- and post- actions that
> ensure the requirements for all pairings are met. This means that in some
> cases the "barriers" between memory accesses are stronger than needed. The
> VM does not track the pairings so can't issue exact barriers needed. However
> C2 will combine/elide redundant barriers to some extent.
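
Put in cookbook terms, a rough sketch of those pre- and post- actions
(class and field names invented; the comments show where the barriers
conceptually sit, not what C2 necessarily emits after it coalesces them):

    class Pairing {
        int plain;
        volatile int v;

        void write(int x) {
            plain = x;
            // StoreStore (pre-action before the volatile store)
            v = 1;
            // StoreLoad  (post-action after the volatile store; on x86 this
            //             is the only barrier that costs a real instruction)
        }

        int read() {
            int r = v;
            // LoadLoad + LoadStore (post-actions after the volatile load;
            //                       both are no-ops on x86)
            return r + plain;
        }
    }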
>
> If you need more detail feel free to read the source code :)
>
> David
>
> -----Original Message-----
> From: concurrency-interest-bounces at cs.oswego.edu
>   [mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of Vitaly Davidovich
> Sent: Monday, 10 October 2011 11:20 PM
> To: Doug Lea
> Cc: concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] AtomicXXX.lazySet and
> happens-before reasoning
>
> Thanks Doug.
>
> I'm aware that the type of instruction and whether a fence is a no-op is
> arch-specific, but I was curious how the compilers (C1/C2) make decisions
> around these things, such as whether they track pairs of reads/writes of
> the same volatile across method boundaries (I believe they don't at the
> moment).
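
For example (a made-up class, just to make the question concrete): when the
compiler JITs writer() on its own it doesn't know that reader() is the only
other access to the same volatile, so each access site gets the full
conservative barrier treatment independently:

    class CrossMethod {
        int data;
        volatile boolean flag;

        void writer() {
            data = 42;
            flag = true;   // compiled in isolation: conservative barriers here
        }

        boolean reader() {
            return flag && data == 42;   // and conservative barriers here too
        }
    }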
> On Oct 10, 2011 9:14 AM, "Doug Lea" <dl at cs.oswego.edu> wrote:
>
>> On 10/10/11 08:56, Vitaly Davidovich wrote:
>>
>>> I agree that the way StoreLoad is implemented ensures that volatile
>>> reads of different memory locations don't move before the store, but I
>>> think the JMM only talks about needing this for loads of the same
>>> memory location (see Doug's cookbook web page description).
>>>
>>
>> Additionally, the "synchronization order" (roughly, any trace of
>> all volatile accesses) is required to be a total order, which
>> is the main constraint that forces the Dekker example to work.
>> This is not made clear enough in the cookbook, which should be improved.
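
For reference, a minimal version of the Dekker-style test being referred
to (the names are the usual textbook ones):

    class Dekker {
        volatile int x, y;
        int r1, r2;

        void thread1() { x = 1; r1 = y; }   // run concurrently with thread2
        void thread2() { y = 1; r2 = x; }

        // With x and y volatile, the synchronization order is total, so the
        // outcome r1 == 0 && r2 == 0 is forbidden; with plain fields it is
        // allowed.  This is why a StoreLoad barrier is needed between each
        // thread's store and its subsequent load.
    }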
>>
>>> By the way, it would be great if someone from the Hotspot
>>> runtime/compiler team could shed some light on how Hotspot handles
>>> these, with the caveat that people shouldn't necessarily base their
>>> code on it if it makes stronger guarantees than the JMM :).
>>>
>>
>> The main visible cases are processor-based, not JVM-based, and most
>> are only visible in giving you more consistency than required
>> for non-volatile accesses. In general, x86 and SPARC are
>> stronger than POWER and ARM, with a few others
>> (Azul, IA64) in the middle.
>>
>> -Doug
>>
>>
>> _______________________________________________
>> Concurrency-interest mailing list
>> Concurrency-interest at cs.oswego.edu
>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>>
>

