[concurrency-interest] question about the JMM

Endre Stølsvik online at stolsvik.com
Thu Dec 6 20:13:04 EST 2007


Larry Riedel wrote:
>> [...] your next paragraph illustrates exactly what's wrong
>> with this mental model, and that is, there is almost
>> always that hidden assumption of sequential consistency.
>> "Before" and "after" simply have no meaning under the JMM
>> except as defined by happens-before.
> 
> I am inclined to expect any intuitive model based on
> the JMM, with a local clock and "happens-before", will
> be homomorphic to a graph for a model based on a global
> clock and local caches (using a typical cache coherence
> protocol), with meaningful and well-defined "before" and
> "after".  But I would rather not try to prove it. (-:

I think the problem might be that one ends up with the idea that "*all* 
cached data is flushed to main memory upon *any* synch exit", and "*all* 
cached data is invalidated upon *any* synch entry", and that this means 
that as long as all threads in the VM are doing some synching or some 
access on volatiles, all shared variables will sooner or later be 
transferred between all threads.

This is not the case, however: two threads must establish a 
happens-before edge by synchronizing on a *shared* object, or by reading 
and writing a *shared* volatile variable, for the "transfer" to occur. 
But as long as this holds, then at least I agree that the cache logic 
mentioned above holds (this because happens-before is transitive: 
everything written anywhere BEFORE the edge in the one thread will be 
visible AFTER the edge in the other thread).

The crucial point that the happens-before rules convey is that the two 
threads in question must synch on some *common* object or volatile 
variable - the "transfer" won't happen on just any random synch.
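To sketch this point (the class and field names below are my own 
illustrative inventions, not anything from the thread): when two threads 
share a *common* volatile, the volatile write and the subsequent volatile 
read form the edge, and by transitivity the plain field written before 
the volatile write is guaranteed visible after the volatile read.

```java
// Sketch with illustrative names: a *common* volatile establishes the
// happens-before edge, so the plain field 'data' is guaranteed visible.
public class CommonVolatile {
    static int data;                 // plain, non-volatile field
    static volatile boolean ready;   // the common volatile both threads use

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;      // ordered before the volatile write below
            ready = true;   // volatile write: one end of the edge
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }        // volatile read: other end of the edge
            System.out.println(data); // transitivity guarantees 42 here
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Note that the guarantee comes from the write-to/read-from pairing on the 
*same* volatile; spinning on a plain boolean here would give no such edge.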

Let's say we have four threads, communicating properly in two groups of 
two, with lots of synchs and volatile variables within each group. 
However, the two groups also share a common (non-volatile) variable or 
object - but the two groups *never* synch on that, or any other, common 
object, nor read or write any shared volatile variable.

Then the two groups might still see entirely different values for the 
shared (non-volatile) variables and objects. The simple cache logic 
discussed above would, at least in my head, not allow this to happen, but 
the happens-before model allows it just fine: no happens-before edge is 
ever established between the two groups of threads, and hence they aren't 
required to communicate/"transfer" the values of those shared variables.
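The scenario can be sketched like this (again with my own illustrative 
names, and reduced to one thread per group for brevity): each group 
synchronizes only on its *own* lock, so no edge ever connects them, and 
the JMM permits the reader to observe either the old or the new value.

```java
// Sketch with illustrative names: each group synchs only on its own lock,
// so no happens-before edge connects the groups; the read of 'shared' may
// legally observe either 0 or 42.
public class TwoGroups {
    static int shared;                         // plain field both groups touch
    static final Object lockA = new Object();  // group A's private lock
    static final Object lockB = new Object();  // group B's private lock

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            synchronized (lockA) {             // plenty of synching, but only on lockA
                shared = 42;
            }
        });
        Thread b = new Thread(() -> {
            synchronized (lockB) {             // only on lockB: no common edge
                System.out.println("observed=" + shared); // 0 or 42, both legal
            }
        });
        a.start();
        b.start();
        a.join();
        b.join();
    }
}
```

In practice you will often see 42 anyway, but nothing in the model 
requires it - which is exactly the gap between the "flush everything" 
intuition and happens-before.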

Endre.
