[concurrency-interest] Relativity of guarantees provided by volatile

David Holmes davidcholmes at aapt.net.au
Fri Aug 17 19:16:34 EDT 2012


I think this is looking at things the wrong way. IF t2 is subsequent to t1,
THEN the JMM says the read must see the write. The JMM doesn't, and can't,
define "subsequent" in an abstract sense.

David
  -----Original Message-----
  From: Yuval Shavit [mailto:yshavit at akiban.com]
  Sent: Saturday, 18 August 2012 9:10 AM
  To: dholmes at ieee.org
  Cc: Marko Topolnik; concurrency-interest at cs.oswego.edu
  Subject: Re: [concurrency-interest] Relativity of guarantees provided by volatile


  The more I think about Marko's problem, the more it bugs me. I don't think
  it's that the three writes can be reduced to the last one -- it's that
  there's no requirement for the write ever to be seen. That is, given threads
  t1 and t2 such that, by clock time:


     (A) t1: write to volatileField
     (B) t2: read from volatileField


  The JLS states that if A happens-before B, then B must see A. It derives
  hb(A, B) from A synchronizes-with B, which holds if t2's read is subsequent
  to t1's write. But "subsequent" isn't defined in terms of clock time. It's
  left undefined in JLS 17, except twice in 17.4.4, where it's defined as
  "according to the synchronization order" -- which seems like a tautology!


  In other words, I think it comes down to the definition of "subsequent,"
  which is undefined. There's nothing preventing a JVM from deciding that,
  even though A happened before B in clock time, A comes after B in the
  synchronization order -- i.e. that A is "subsequent" to B.
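
  To make (A) and (B) concrete, here is a minimal sketch (the class name and
  setup are illustrative; volatileField is assumed to be a volatile int):

     class ClockTime {
         static volatile int volatileField = 0;

         public static void main(String[] args) {
             Thread t1 = new Thread(() -> volatileField = 42);                 // (A)
             Thread t2 = new Thread(() -> System.out.println(volatileField));  // (B)
             t1.start();
             t2.start();
         }
     }

  Even if (A) finishes first by the clock, nothing here forces the
  synchronization order to put (B) after (A), so printing 0 is as legal as
  printing 42.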


  On Fri, Aug 17, 2012 at 6:58 PM, David Holmes <davidcholmes at aapt.net.au>
wrote:

    Hi Marko,

    I think the "surprise" is only in the way you formulated this. Said
another
    way a write takes a finite amount of time from when the instruction
starts
    to execute to when the store is actually available for a read to see.
    (Similarly a read takes a finite amount of time.) So depending on those
two
    times a read and write that happen "around the same time" may appear to
have
    occurred in either order. But when you program with threads you never
know
    the relative interleavings (or should never assume) so it makes no
    difference how the program perceives the order compared to how some
external
    observer might perceive it.

    As for your optimization to "chunk" volatile writes, I don't see a
    problem here if you are basically asking, given:

    w = 1;  // w is volatile
    w = 2;
    w = 3;

    whether this could be reduced to the last write alone? I see no reason
    why not. Without some additional coordination between a reader thread
    and the writer thread, reading w==3 is a legitimate outcome. If you are
    thinking about how the hardware might chunk things, then that is a
    different matter. We have to use the hardware in a way that complies
    with the memory model - if the hardware can't comply then you can't run
    Java on it.
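
    To make "additional coordination" concrete, here is one possible sketch
    (purely illustrative, the names are made up): the writer does not
    advance until the reader has acknowledged the current value, so none of
    the three writes can be collapsed away.

    class Handshake {
        static volatile int w = 0;    // the volatile variable from above
        static volatile int ack = 0;  // reader's acknowledgement

        public static void main(String[] args) {
            Thread reader = new Thread(() -> {
                for (int v = 1; v <= 3; v++) {
                    while (w != v) { }  // spin until value v is visible
                    ack = v;            // tell the writer we have seen it
                }
            });
            reader.start();
            for (int v = 1; v <= 3; v++) {
                w = v;                  // volatile write of 1, then 2, then 3
                while (ack != v) { }    // don't advance until the reader saw v
            }
        }
    }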

    David Holmes
    ------------


    > -----Original Message-----
    > From: concurrency-interest-bounces at cs.oswego.edu
    > [mailto:concurrency-interest-bounces at cs.oswego.edu]On Behalf Of Marko
    > Topolnik
    > Sent: Saturday, 18 August 2012 7:24 AM
    > To: concurrency-interest at cs.oswego.edu
    > Subject: [concurrency-interest] Relativity of guarantees provided by
    > volatile
    >
    >
    > Consider the following synchronization order of a program
    > execution involving a total of two threads, R and W:
    >
    > - thread R begins;
    >
    > - thread R reads a volatile int sharedVar several times. Each
    > time it reads the value 0;
    >
    > - thread R completes;
    >
    > - thread W begins;
    >
    > - thread W writes the sharedVar several times. Each time it
    > writes the value 1;
    >
    > - thread W completes.
    >
    > Now consider the wall-clock timing of the events:
    >
    > - thread R reads 0 at t = {1, 4, 7, 10};
    > - thread W writes 1 at t = {0, 3, 6, 9}.
    >
    > As far as the Java Memory Model is concerned, there is no
    > contradiction between the synchronization order and the
    > wall-clock times, as the JMM is wall-clock agnostic. However, I
    > have yet to meet a single Java professional who wouldn't at least
    > be very surprised to hear that the specification allows this.
    >
    > I understand that the SMP architecture that dominates the world
    > of computing today practically never takes these liberties and
    > makes the volatile writes visible almost instantaneously. This
    > may change at any time, however, especially with the advent of
    > massively parallel architectures that seem to be the future. For
    > example, an optimization technique may choose to chunk many
    > volatile writes together and make them visible in a single bulk
    > operation. This can be safely done as long as there are no
    > intervening read-y actions (targets of the synchronizes-with
    > edges as defined by JLS/JSE7 17.4.4).
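    >
    > To make that execution concrete, here is a sketch (the class wrapper
    > is illustrative):
    >
    >     class RWScenario {
    >         static volatile int sharedVar = 0;
    >
    >         public static void main(String[] args) {
    >             Thread r = new Thread(() -> {
    >                 for (int i = 0; i < 4; i++) {
    >                     int v = sharedVar;   // reads 0 at t = 1, 4, 7, 10
    >                 }
    >             });
    >             Thread w = new Thread(() -> {
    >                 for (int i = 0; i < 4; i++) {
    >                     sharedVar = 1;       // writes 1 at t = 0, 3, 6, 9
    >                 }
    >             });
    >             w.start();
    >             r.start();
    >         }
    >     }
    >
    > Nothing in the JMM ties the synchronization order to those wall-clock
    > times, so ordering all of R's reads before all of W's writes is a
    > legal execution.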
    >
    > Now, my questions are:
    >
    > 1. Is there a loophole in my reasoning?
    >
    > 2. If there is no loophole, is there anything to worry about,
    > given that practically 100% of developers out there treat as
    > guaranteed something that isn't?
    >
    >
    > -Marko




