[concurrency-interest] AtomicReference.updateAndGet() mandatory updating

Andrew Haley aph at redhat.com
Tue May 30 13:10:18 EDT 2017

On 30/05/17 17:31, Gregg Wonderly wrote:
>> On May 29, 2017, at 3:04 AM, Andrew Haley <aph at redhat.com> wrote:
>> On 28/05/17 02:27, Gregg Wonderly wrote:
>>>> On May 26, 2017, at 10:05 AM, Andrew Haley <aph at redhat.com> wrote:
>>>> On 26/05/17 14:56, Doug Lea wrote:
>>>>> "As well as possible" may be just to unconditionally issue fence,
>>>>> at least for plain CAS; maybe differently for the variants.
>>>> I doubt that: I've done some measurements, and it always pays to branch
>>>> conditionally around a fence if it's not needed.
>>> Since the fence is part of the happens before controls that
>>> developers encounter, how can a library routine know what the
>>> developer needs, to know how to “randomly” optimize with a branch
>>> around the fence?  Are you aware of no software that exists where
>>> developers are actively counting MM interactions trying to minimize
>>> them?  Here you are trying to do it yourself because you “See” an
>>> optimization that is so localized, away from any explicit code
>>> intent, that you can’t tell ahead of time (during development of
>>> your optimization) what other developers have actually done around
>>> the fact that this fence was unconditional before, right?
>>> Help me understand how you know that no software that works
>>> correctly now, will start working randomly, incorrectly, because
>>> sometimes the fence never happens.
>> It's in the specification.  If a fence is required by the
>> specification, we must execute one. If not, the question is whether
>> it's faster to execute a fence unconditionally or to branch around
>> it.
> But that’s not my point.  My point is that once there is a fence,
> and since now developers are having to program according to “fences”
> explicit or implicit in the API implementation, you are going to
> find developers counting and demanding specific fences to be in
> specific places, because they create happens before events which are
> precisely what developers must manage.  And, just like you are
> adamant that performance can be improved by not always providing
> this fence, developers and engineers are trying to do exactly the
> same thing by looking at the complete picture of their application
> (which you have no view into from the point of this optimization).

I will always meet the specification as well as I can.  That involves
generating whatever code is necessary, but no more.

> They are saying to themselves: hey, there’s a write fence in this
> API, so if we use that to, for example, assign a value via CAS as a
> work-item counter, then we don’t have to worry about all the other
> state before that; it will be visible.

There isn't a write fence in this API: there is only the rule that we
always have a volatile read and a volatile write.  (Although I'm not
entirely sure that there even is that, given the way the specification
has always been worded.)
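(The "volatile read and volatile write" rule is visible in the retry
loop that the updateAndGet javadoc documents.  The following is an
illustrative re-implementation over a plain AtomicReference, not the
JDK's actual code; the method and class names are mine:)

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public class UpdateAndGetSketch {
    // Sketch of the retry loop described in the javadoc of
    // AtomicReference.updateAndGet: read, apply the function, CAS,
    // and retry on failure.
    static <V> V updateAndGet(AtomicReference<V> ref, UnaryOperator<V> f) {
        V prev, next;
        do {
            prev = ref.get();                     // volatile read
            next = f.apply(prev);
        } while (!ref.compareAndSet(prev, next)); // volatile write iff the CAS succeeds
        return next;
    }

    public static void main(String[] args) {
        AtomicReference<Integer> counter = new AtomicReference<>(0);
        System.out.println(updateAndGet(counter, x -> x + 1)); // prints 1
        System.out.println(counter.get());                     // prints 1
    }
}
```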

> As soon as you take out the fence, they now have to put in
> synchronization themselves,

No they don't, because if the CAS succeeds we have a volatile write
anyway.  The fence is only proposed for the case when the CAS fails.

> Would you be happy to break their code with an optimization that is
> awesome for super racy code, but can break code that is not super
> racy and ends up with a 1 in 10000 event failure mode that no-one
> can figure out?

I'm doing no such thing: I'm saying that because there is a volatile
store on CAS success we don't need an unconditional fence as well.
That's all.
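(For concreteness, here is the publication pattern Gregg describes,
and the reason a successful CAS is enough for it.  The class and field
names are hypothetical.  The demo below is sequentialized by join()
so its result is deterministic; the point of the CAS's volatile-write
semantics is that a *concurrent* reader which observes the new
reference via a volatile read is also guaranteed to see the plain
writes made before the CAS:)

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasPublish {
    static final class WorkItem {
        int payload;                 // plain (non-volatile) field
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicReference<WorkItem> slot = new AtomicReference<>();

        Thread producer = new Thread(() -> {
            WorkItem w = new WorkItem();
            w.payload = 42;              // plain write before the CAS...
            slot.compareAndSet(null, w); // ...published by the volatile
                                         // write on CAS success
        });
        producer.start();
        producer.join();

        WorkItem seen = slot.get();      // volatile read
        System.out.println(seen.payload); // prints 42
    }
}
```

No extra fence is needed on the success path; the proposal under
discussion only concerns what happens when the CAS fails.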

Andrew Haley
Java Platform Lead Engineer
Red Hat UK Ltd. <https://www.redhat.com>
EAC8 43EB D3EF DB98 CC77 2FAD A5CD 6035 332F A671

More information about the Concurrency-interest mailing list