[concurrency-interest] Relativity of guarantees provided by volatile

David Holmes davidcholmes at aapt.net.au
Fri Aug 17 19:28:55 EDT 2012


Well, we just have to agree to disagree. If nothing in the program order or
synchronization order establishes that a read must happen between a pair of
those writes, then the read need not happen until after the last write. As I
said, externally you may be able to see that the optimization was performed,
but the program cannot tell - and that is what counts.
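
To make that concrete, here is a minimal sketch (the ack flag and the class
around it are invented for illustration) of the kind of coordination that
would pin a read between two of the writes; without something like it, no
read is required to fall between them:

    // Sketch only: 'w' and 'ack' are hypothetical fields, not part of the
    // example under discussion. The handshake is what "establishes that a
    // read must happen between a pair of those writes".
    class Handshake {
        volatile int w;
        volatile boolean ack;

        void writer() {
            w = 1;
            while (!ack) { /* spin until the reader acknowledges seeing 1 */ }
            w = 2;   // now the two writes cannot be collapsed: a read that
                     // returned 1 sits between them in the synchronization order
        }

        void reader() {
            while (w != 1) { /* spin until the value 1 is visible */ }
            ack = true;
        }
    }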

David
  -----Original Message-----
  From: Vitaly Davidovich [mailto:vitalyd at gmail.com]
  Sent: Saturday, 18 August 2012 9:22 AM
  To: dholmes at ieee.org
  Cc: Marko Topolnik; concurrency-interest at cs.oswego.edu
  Subject: RE: [concurrency-interest] Relativity of guarantees provided by
  volatile


  Two volatile writes emit a store-store barrier in between, which to me
  means they cannot be collapsed and must be made visible in that order (on
  non-TSO hardware this would require a h/w fence). In other words, I don't
  think the compiler can remove the redundant stores as it could if this were
  a non-volatile field, where that is a perfectly valid (and good)
  optimization.
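
  To sketch what I mean (the class is invented; the barrier name follows the
  JSR-133 cookbook's conservative recipe, and a real JIT may of course emit
  something different):

    // Sketch only: conservative barrier placement for back-to-back volatile
    // stores, shown as comments.
    class BackToBackStores {
        volatile int w;

        void writer() {
            w = 1;  // volatile store
                    // [StoreStore] -- cookbook barrier before the next volatile store
            w = 2;  // volatile store
        }
    }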

  Sent from my phone

  On Aug 17, 2012 7:18 PM, "David Holmes" <davidcholmes at aapt.net.au> wrote:

    How does it violate the JMM? There is nothing to establish that any read
    has to have occurred prior to w=3. An external observer may say "hey, if
    we'd actually written w=1 at this point then the read would see 1", but
    that is irrelevant. The program cannot tell that the other writes did not
    occur.

    David
      -----Original Message-----
      From: Vitaly Davidovich [mailto:vitalyd at gmail.com]
      Sent: Saturday, 18 August 2012 9:12 AM
      To: dholmes at ieee.org
      Cc: Marko Topolnik; concurrency-interest at cs.oswego.edu
      Subject: Re: [concurrency-interest] Relativity of guarantees provided
      by volatile


      I don't think the writes to w can be reduced to just the last one, as
      that would violate the JMM. R may see only the last one due to
      interleaving, though. Not sure if that's what you meant.

      Sent from my phone

      On Aug 17, 2012 7:03 PM, "David Holmes" <davidcholmes at aapt.net.au>
      wrote:

        Hi Marko,

        I think the "surprise" is only in the way you formulated this. Said
another
        way a write takes a finite amount of time from when the instruction
starts
        to execute to when the store is actually available for a read to
see.
        (Similarly a read takes a finite amount of time.) So depending on
those two
        times a read and write that happen "around the same time" may appear
to have
        occurred in either order. But when you program with threads you
never know
        the relative interleavings (or should never assume) so it makes no
        difference how the program perceives the order compared to how some
external
        observer might perceive it.

        As for your optimization to "chunk" volatile writes, I don't see a
        problem here if you are basically asking whether, given:

        w = 1;  // w is volatile
        w = 2;
        w = 3;

        this could be reduced to the last write alone. I see no reason why
        not. Without some additional coordination between a reader thread and
        the writer thread, reading w==3 is a legitimate outcome (see the
        sketch below). If you are thinking about how the hardware might chunk
        things, then that is a different matter. We have to use the hardware
        in a way that complies with the memory model - if the hardware can't
        comply then you can't run Java on it.
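
        A minimal sketch (class and method names are invented here) of why
        that is: with no coordination at all, the JMM does not require any
        read to fall between the three writes, so a reader that never
        observes 1 or 2 is a legal execution.

            // Sketch only: no reader/writer coordination, so collapsing the
            // writes is not observable by the program.
            class ChunkedWrites {
                volatile int w;       // starts at 0

                void writer() {
                    w = 1;
                    w = 2;
                    w = 3;
                }

                void reader() {
                    int r1 = w;       // may be 0, 1, 2 or 3
                    int r2 = w;       // sees the same write as r1 or a later
                                      // one, never an earlier one
                    System.out.println(r1 + ", " + r2);
                }
            }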

        David Holmes
        ------------

        > -----Original Message-----
        > From: concurrency-interest-bounces at cs.oswego.edu
        > [mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of
        > Marko Topolnik
        > Sent: Saturday, 18 August 2012 7:24 AM
        > To: concurrency-interest at cs.oswego.edu
        > Subject: [concurrency-interest] Relativity of guarantees provided by
        > volatile
        >
        >
        > Consider the following synchronization order of a program
        > execution involving a total of two threads, R and W:
        >
        > - thread R begins;
        >
        > - thread R reads a volatile int sharedVar several times. Each
        > time it reads the value 0;
        >
        > - thread R completes;
        >
        > - thread W begins;
        >
        > - thread W writes the sharedVar several times. Each time it
        > writes the value 1;
        >
        > - thread W completes.
        >
        > Now consider the wall-clock timing of the events:
        >
        > - thread R reads 0 at t = {1, 4, 7, 10};
        > - thread W writes 1 at t = {0, 3, 6, 9}.
        >
        > As far as the Java Memory Model is concerned, there is no
        > contradiction between the synchronization order and the
        > wall-clock times, as the JMM is wall-clock agnostic. However, I
        > have yet to meet a single Java professional who wouldn't at least
        > be very surprised to hear that the specification allows this.
        >
        > I understand that the SMP architecture that dominates the world
        > of computing today practically never takes these liberties and
        > makes the volatile writes visible almost instantaneously. This
        > may change at any time, however, especially with the advent of
        > massively parallel architectures that seem to be the future. For
        > example, an optimization technique may choose to chunk many
        > volatile writes together and make them visible in a single bulk
        > operation. This can be safely done as long as there are no
        > intervening read-y actions (targets of the synchronizes-with
        > edges as defined by JLS/JSE7 17.4.4).
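        >
        > To make the scenario concrete, here is a small self-contained
        > sketch (the harness around sharedVar is of course invented):
        >
        > // The JMM allows a legal execution in which every read in R
        > // returns 0, regardless of how the reads and writes actually
        > // interleave on the wall clock.
        > class RelativityDemo {
        >     static volatile int sharedVar;
        >
        >     public static void main(String[] args) throws InterruptedException {
        >         Thread r = new Thread(new Runnable() {
        >             public void run() {
        >                 for (int i = 0; i < 4; i++) {
        >                     int v = sharedVar;   // legal for all four reads to see 0
        >                 }
        >             }
        >         }, "R");
        >         Thread w = new Thread(new Runnable() {
        >             public void run() {
        >                 for (int i = 0; i < 4; i++) {
        >                     sharedVar = 1;       // the spec puts no wall-clock bound
        >                                          // on when R must observe this write
        >                 }
        >             }
        >         }, "W");
        >         r.start(); w.start();
        >         r.join(); w.join();
        >     }
        > }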
        >
        > Now, my questions are:
        >
        > 1. Is there a loophole in my reasoning?
        >
        > 2. If there is no loophole, is there anything to worry about,
        > given that practically 100% of developers out there consider
        > guaranteed something that isn't?
        >
        >
        > -Marko
        >
        >
        >
        >

        _______________________________________________
        Concurrency-interest mailing list
        Concurrency-interest at cs.oswego.edu
        http://cs.oswego.edu/mailman/listinfo/concurrency-interest