[concurrency-interest] Semantics of compareAndSwapX

Oleksandr Otenko oleksandr.otenko at oracle.com
Wed Feb 26 18:16:59 EST 2014

Yes, it seems all of these conditions are met.


Chapter 5.2.8

A load-acquire is a load where it is guaranteed that all loads and
stores appearing in program order after the load-acquire will be
observed by each observer after that observer observes the
load-acquire, but says nothing about loads and stores appearing before
the load-acquire.

A store-release will be observed by each observer after that observer
observes any loads or stores that appear in program order before the
store-release, but says nothing about loads and stores appearing after
the store-release.

In addition, a store-release followed by a load-acquire will be
observed by each observer in program order.

Practically, this is the definition of volatile accesses in the JMM.
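
As a concrete illustration (a sketch of mine, not from the spec or the
thread): the guarantee above is what lets plain data be published
through a volatile flag, which is exactly what a store-release/
load-acquire mapping provides. Class and field names are invented.

```java
// Minimal sketch: publishing a plain field through a volatile flag.
// The volatile store to `ready` behaves like a store-release and the
// volatile load like a load-acquire, so a reader that observes
// ready == true must also observe data == 42.
class Publisher {
    int data;               // plain (non-volatile) field
    volatile boolean ready; // publication flag

    void writer() {
        data = 42;          // ordered before the volatile store
        ready = true;       // store-release
    }

    Integer reader() {
        if (ready) {        // load-acquire
            return data;    // guaranteed to see the write of 42
        }
        return null;        // publication not yet visible
    }
}
```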


On 26/02/2014 22:39, David Holmes wrote:
> Hans,
> > But all of these x86 fence placements are gross overkill, in that
> > they order ALL memory accesses, when they only need to order
> > VOLATILE accesses.
> A volatile store has to ensure ordering of all stores prior to the 
> volatile store, so that a read of a volatile flag ensures access to 
> non-volatile data.
> David
>     -----Original Message-----
>     *From:* concurrency-interest-bounces at cs.oswego.edu
>     [mailto:concurrency-interest-bounces at cs.oswego.edu] *On Behalf
>     Of* Hans Boehm
>     *Sent:* Thursday, 27 February 2014 4:11 AM
>     *To:* Andrew Haley
>     *Cc:* concurrency-interest at cs.oswego.edu; Stephan Diestelhorst
>     *Subject:* Re: [concurrency-interest] Semantics of compareAndSwapX
>     I think there's some confusion between the Java memory model
>     requirements and common implementation techniques based on fences.
>     The latter are sufficient to implement the former, but clearly
>     not required.
>
>     On x86, a volatile store is normally implemented by adding a
>     trailing fence to a store.  That fence is required only to prevent
>     reordering with a subsequent VOLATILE load; it can actually appear
>     anywhere between the volatile store and the next volatile load.
>     Putting it before volatile loads would also work, but is almost
>     always suboptimal.  In a better world, ABIs would specify one or
>     the other, and both Java and C should follow those ABIs to ensure
>     interoperability.
>
>     But all of these x86 fence placements are gross overkill, in that
>     they order ALL memory accesses, when they only need to order
>     VOLATILE accesses.
>
>     On ARMv8, I would expect a volatile store to be compiled to a
>     store release, and a volatile load to be compiled to a load
>     acquire.  Period.  Unlike on Itanium, a release store is ordered
>     with respect to a later acquire load, so the fence between them
>     should not be needed.  Thus there is no a priori reason to expect
>     that a CAS would require a fence either.
>
>     I would argue strongly that a CAS to a thread-private object
>     should not be usable as a fence.  One of the principles of the
>     Java memory model was that synchronization on thread-private
>     objects should be ignorable.
>
>     I'm hedging a bit here, because the original Java memory model
>     doesn't say anything about CAS, and I don't fully understand the
>     details of the ARMv8 model, particularly the interaction between
>     acquire/release loads and stores and traditional ARM fences.
>
>     Hans
>     On Wed, Feb 26, 2014 at 3:22 AM, Andrew Haley <aph at redhat.com
>     <mailto:aph at redhat.com>> wrote:
>
>         On 02/26/2014 03:18 AM, Hans Boehm wrote:
>         > I think that's completely uncontroversial.  ARMv8 load
>         > acquire and store release are believed to suffice for Java
>         > volatile loads and stores respectively.
>
>         No, that's not enough: we emit a StoreLoad barrier after each
>         volatile store or before each volatile load.
>
>         > Even the fence-less implementation used a release store
>         > exclusive.  Unless I'm missing something, examples like this
>         > should be handled correctly by all proposed implementations,
>         > whether or not fences are added.
>         >
>         > As far as I can tell, the only use cases that require the
>         > fences to be added are essentially abuses of CAS as a fence.
>
>         Well, yes, which is my question: is abusing CAS as a fence
>         supposed to work?
>
>         Andrew.
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
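
For readers following the thread: the StoreLoad ordering discussed
above is what makes Dekker-style code work under the JMM. A minimal
sketch (class and method names are mine, not from the thread):

```java
// Dekker-style test: each thread stores to one volatile field and
// then loads the other. The JMM forbids the outcome r1 == 0 && r2 == 0
// when the two methods run on separate threads; on x86, providing that
// guarantee is the job of the trailing fence after a volatile store
// (or, equivalently, a fence before each volatile load).
class Dekker {
    volatile int x, y;
    int r1, r2;

    void threadOne() { x = 1; r1 = y; } // store x, then load y
    void threadTwo() { y = 1; r2 = x; } // store y, then load x
}
```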
