[concurrency-interest] "Volatile-like" guarantees

David Holmes davidcholmes at aapt.net.au
Thu Feb 3 01:42:13 EST 2011


Niko Matsakis writes:
> David Holmes wrote:
> > So does this mean you are happy that Brian's explanation is correct?
> >
> No, it does not really change the question that I had.

That's a shame as I had a response to that email nearly finished when your
clarification came in. :) But I'll try to respond to how you have expressed
things below. For reference let me rewrite the code:

        // f is an ordinary (non-volatile) int field; v is a volatile int
        void setF(int x) {
            f = x;    // Wf
            v = 0;    // Wv
        }

        int getF() {
           int y = v;    // Rv
           return f;     // Rf
        }
> Clearly, the
> volatile read in the getter (Rv) may precede the volatile write in the
> setter (Wv), in which case there would be no happens-before
> relationship.

Correct.

> However, unless I am mistaken, in that case it is still
> possible that the non-volatile read (Rf) sees the write from the
> non-volatile write (Wf).  Intuitively, in some execution, the reads in
> the getter may occur in between the two writes in the setter.  In that
> scenario, the write to f occurs first, then both reads, then the
> volatile write to v.  So there is no happens-before relationship, but
> the read may see the write to f (it also may not, as there is no
> happens-before relationship to force the issue).

Correct. Writes can become immediately visible and be totally ordered.
Barriers are only needed to establish visibility and ordering properties
when that is not the case.

> Now reads which follow
> the getter are not guaranteed to see writes which preceded the setter.

I see. Yes, the transitive happens-before relationship does not apply. But
you have arbitrary races between the two threads, so I don't see how you
define correctness here.

> The difference between this scenario and the normal volatile example is
> that the volatile write is not the "significant" write.  In other words,
> the intended usage of volatile as I understand is that one performs
> various writes W, then writes a value to the volatile field that serves
> as a signal to the reader.  The reader, seeing that value, performs
> various dependent reads that would not be safe had the writes W not been
> completed.
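The intended usage Niko describes is the standard volatile publication
idiom. A minimal sketch (class and field names are mine, not from the
thread): the volatile flag is the "significant" write, and because the
volatile write happens-before the volatile read that sees it, the ordinary
write to the payload is guaranteed visible afterwards.

```java
// Sketch of the canonical volatile publication pattern; names are
// illustrative assumptions, not taken from the thread.
class Publisher {
    int data;                 // ordinary (non-volatile) payload, the writes W
    volatile boolean ready;   // volatile flag: the "significant" write

    void publish(int value) {
        data = value;  // W: ordinary write
        ready = true;  // Wv: volatile write that signals the reader
    }

    // Returns the payload, or -1 if nothing has been published yet.
    int consume() {
        if (ready) {      // Rv: volatile read
            return data;  // Rf: if Rv saw Wv, then Wv happens-before Rv,
                          // so this read is guaranteed to see W
        }
        return -1;
    }
}
```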

Correct.

> In this scenario, though, it is not the write to the
> volatile that signals the reader, but rather the write to the
> non-volatile field f.  The question then is can a volatile read/write be
> used purely as a supplement and, if so, what is the right way to do it.

But the non-volatile write can't be used as a signal to the reader as it is
unordered - only the volatile accesses impose a limited ordering on the
accesses in the program.

> To motivate the question, I am working on a compiler for a parallel
> language.  The language generally guarantees data-race freedom, but in
> some cases it allows the user to signal that they permit data-races; in
> those cases I would still like to guarantee sequential consistency.

Given that the Java Memory Model does not guarantee sequential consistency
in the face of data races, I don't see how you can construct a
sequentially-consistent but racy implementation on top of it.

But this is something better discussed on the JMM mailing list.

Cheers,
David Holmes

> The problem is that a single class may be used both in a racy and non-racy
> context.  The conservative thing to do then is to mark all fields
> volatile, but that penalizes the non-racy context, which is by far the
> most common.  I could generate two versions of the class, and maybe
> eventually I will, but for the moment I'd like to use a simpler scheme.
> A final option, then, is to have a delegate class which optionally uses
> a volatile field in the way that I have shown here to add in memory
> barriers.  So, in that case, writes to a field "f" would be compiled as
>
>      this.f = ...;
>      this.delegate.synchronizeWrite();
>
> where synchronizeWrite() would either do nothing (non-racy case) or
> write a dummy value to a volatile field (racy case).  The question is,
> will a scheme like this work?  (Of course, it may also prove to be slow,
> if the cost of the method call is too high, but that could be optimized
> in various ways)
>
>
>
> Niko
>
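Niko's delegate scheme could be sketched as below. The class names and the
dummy volatile field are my assumptions about what he intends; note that, as
discussed above, the trailing volatile write does not by itself turn the
ordinary write to f into a reliable signal for a racing reader.

```java
// Sketch of the optional-barrier delegate scheme; all names are
// illustrative assumptions.
interface WriteBarrier {
    void synchronizeWrite();
}

// Non-racy case: the barrier is a no-op.
class NoBarrier implements WriteBarrier {
    public void synchronizeWrite() { }
}

// Racy case: a dummy volatile write after each ordinary write.
class VolatileBarrier implements WriteBarrier {
    private volatile int dummy;
    public void synchronizeWrite() { dummy = 0; }
}

// A field "f" whose writes are compiled as: write, then delegate call.
class Cell {
    int f;                        // ordinary field
    final WriteBarrier delegate;

    Cell(WriteBarrier d) { delegate = d; }

    void setF(int x) {
        this.f = x;                        // ordinary write
        this.delegate.synchronizeWrite();  // no-op or dummy volatile write
    }

    int getF() { return f; }
}
```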


