[concurrency-interest] Relativity of guarantees provided by volatile

Wolfgang Baltes wolfgang.baltes at laposte.net
Sat Aug 18 11:48:17 EDT 2012

The memory model is just that: a model. It is not a hardware spec, nor a 
prediction of what will happen in which order in the future. It allows us 
to interpret observations and draw some conclusions.

The simple programming model that all programmers assume is program 
order. For example, in the following lines, everyone can expect x to 
hold the value 1 and y the value 2.

int a = 1;
int b = 2;
int y = b;
int x = a;

However, we also know that optimizations can happen. So, for example, 
there is no guarantee that a is set to 1 before b is set to 2. The only 
guarantee we have is that - once a write operation has appeared in 
program code - a subsequent (in program order) read operation is 
guaranteed to observe the expected value, independent of any 
optimization. In this example, we do not know when a is set to 1; we 
only know that whenever a is read after the assignment instruction in 
program order, a will have the value 1. For example, an optimization 
could consist of reordering the write to a until after y is assigned the 
value of b. This reordering would not be observable in this thread, and 
therefore does not change the reasoning about what the program 
accomplishes. The program order guarantee exists to let the programmer 
reason about a program in simple terms, such as "I put instructions in 
this order to achieve this result", and the program order rule 
guarantees that the result can be observed, no matter which 
optimizations are involved.
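To make the reordering argument concrete, here is a minimal sketch (the class and variable names are mine, not from the post): both halves below are legal compilations of the same four lines, and no read within this thread can tell them apart.

```java
public class ProgramOrderDemo {
    public static void main(String[] args) {
        // Program order, as written:
        int a = 1;
        int b = 2;
        int y = b; // must observe 2
        int x = a; // must observe 1

        // A legal reordering the compiler might emit instead:
        // the write to a is delayed until after the read of b.
        // Within this thread, the results are indistinguishable.
        int b2 = 2;
        int y2 = b2;
        int a2 = 1;
        int x2 = a2;

        System.out.println(x + " " + y + " " + x2 + " " + y2);
    }
}
```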

The memory model is nothing more than a similar guarantee regarding 
program order when multiple threads are involved. The memory model does 
not say what will happen at some time in the future. It only allows us 
to draw conclusions about what happened in the past >if< certain 
observations are made.

For example,

Shared fields (both initially 0):
int a = 0;
volatile int b = 0;

Thread A:
a = 1;
b = 2; // volatile write of b.

Thread B:
y = b; // volatile read of b.
x = a;

If we apply only the program order rule for single threads, we cannot be 
sure which value is assigned to x: has a already been set to 1, or does 
it still hold 0? Using the memory model, we extend the program order 
rules with the volatile rule: when thread B performs a volatile read, 
there are certain guarantees, as follows:
- When thread B reads the value of b and finds 0, then the conclusion 
is that thread A has not yet reached its volatile write to b. Nothing 
else can be concluded.
- If thread B reads the value 2 for b, then we are allowed to conclude 
that the instructions of thread A that appear before the volatile write 
in program order have "happened-before". This rule allows us to conclude 
that >if< we observe the value 2 for b, then we are sure to observe the 
value 1 for variable a. (There is nothing in the memory model about 
 >when exactly< the value 1 has to be written to a, just that we can 
count on the observation.)

We have the JVM's guarantee that these conclusions are permitted, 
despite any optimizations that are ongoing. This makes concurrent 
programming almost as easy to reason about as single-threaded programming.
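The two-thread example above can be packaged as a runnable sketch (class and field names are mine). The reader thread checks exactly the conclusion the volatile rule licenses: if it observes b == 2, it must also observe a == 1; on a conforming JVM the AssertionError can never fire.

```java
public class VolatileVisibility {
    static int a = 0;          // plain field
    static volatile int b = 0; // volatile field

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            a = 1; // plain write
            b = 2; // volatile write: everything before it happens-before
                   // any read that observes this write
        });
        Thread reader = new Thread(() -> {
            int y = b; // volatile read of b
            int x = a; // if y == 2, the memory model guarantees x == 1
            if (y == 2 && x != 1) {
                throw new AssertionError("JMM violated: b==2 but a==" + x);
            }
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
        System.out.println("done");
    }
}
```

Note that nothing here predicts which case the reader will hit: it may see b == 0 (and then nothing can be concluded about a), or b == 2 (and then a must be 1).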

However, in the example given, the memory model does not >predict< in 
any way whatsoever when, in time, thread B will reach the volatile read 
instruction relative to the volatile write in thread A.

Note also that the memory model does not deal with the extent in time 
that it takes to perform operations. For the examples above, it does not 
matter when assignment operations start; it only matters when they 
finish. And for reads, it only matters when they start. This is why the 
memory model uses the concept (and terminology) of synchronization: a 
trailing volatile write edge is considered synchronized with a leading 
read edge >if< the read operation observes the result of the write.


On 2012-08-18 06:02, Marko Topolnik wrote:
> Yuval, rereading the earlier posts I noticed this one from you:
>> That said, 17.4.3 does imply that the reads will be viewable in a wall-clock-sequential way, albeit informally
>>      Sequential consistency is a very strong guarantee that is made about visibility and ordering in an execution of a program. Within a sequentially consistent execution, there is a total order over all individual actions (such as reads and writes) which is consistent with the order of the program, and each individual action is atomic and is immediately visible to every thread.
>> (emphasis on "is immediately visible")
> The major point to note is that the JLS **does not** enforce sequential consistency! It even spells it out directly below your quote:
> "If we were to use sequential consistency as our memory model, many of the compiler and processor optimizations that we have discussed would be illegal."
> The whole model of happens-before revolves around making sequentially INCONSISTENT executions APPEAR to be consistent, as observed by all executing threads, thus allowing all the optimizations that are discussed on this mailing list.
> -Marko
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest