[concurrency-interest] question about the JMM

Jeremy Manson jmanson at cs.umd.edu
Sat Dec 8 16:36:38 EST 2007


Doug,

My feeling about this has always been that your cookbook is an excellent 
way for implementors to understand the barriers that need to be 
injected, but that the only way for a programmer to understand this is 
to talk about happens-before.
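
(To make that concrete -- a minimal sketch, with the class and field
names invented for the example -- the usual safe-publication idiom can
be justified purely in happens-before terms, without mentioning fences
at all:

class Publisher {
    int data;                  // plain field
    volatile boolean ready;    // volatile flag

    void writer() {            // Thread 1
        data = 42;             // (1)
        ready = true;          // (2) volatile write; (1) happens-before (2)
    }

    void reader() {            // Thread 2
        if (ready) {           // (3) volatile read; if it sees (2), then (2) happens-before (3)
            int r = data;      // (4) guaranteed to see 42, since (1) hb (2) hb (3) hb (4)
        }
    }
}

The programmer only has to chain the happens-before edges; which fences
make that true on a given machine is the implementor's problem.)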

Otherwise, presumably, if you know enough to know why you need fences, 
you would also know enough to know what straight-line compiler 
optimization can do.  Most fences I've seen are associated with some 
form of compiler barrier -- in gcc, for example, an asm volatile 
statement prevents the compiler from reordering around it.  If the 
programmer is that clever, then the only thing you really need to 
describe is when fences can be removed.

You could presumably say something as simple as the idea that a fence 
can be removed if it can be determined that the results of the [final, 
volatile, synchronized] access associated with that fence can never be 
seen by another thread.  So, for example, if a write to a volatile is 
never read by another thread, then the fence can be removed.
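
(A minimal sketch of that case -- the names here are just made up for
illustration:

class Example {
    static class Counter {
        volatile int count;
    }

    static int countTo(int n) {
        Counter c = new Counter();   // c never escapes this method, so no
                                     // other thread can ever read c.count
        for (int i = 0; i < n; i++) {
            c.count = i;             // volatile write whose result is provably
        }                            // never seen by another thread; the fence
        return c.count;              // after it can be removed
    }
}

A compiler that can prove c doesn't escape would be free, as far as the
memory model is concerned, to drop the fences associated with those
volatile writes.)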

Another example would be an unlock of a lock that is not subsequently 
acquired by another thread (at least, not before being reacquired by 
the same thread).  If no other thread "sees" the unlock, then you don't 
have to perform the associated fence.

So, if you have

synchronized(new Object()) { ... }

the compiler can prove no other thread will see that unlock, and remove 
the associated fence.  Further, if you have:

synchronized (a) {
   synchronized (a) {
   }
}

Then the unlock at the end of the inner block doesn't actually release 
the monitor (the same thread still holds it from the outer block), so 
no other thread will see it, and the fence can be removed.

(It's worth pointing out that if the same thread writes to the volatile 
again, and only *then* is it read by another thread, the results of the 
initial write are never seen by another thread.  So, if you have a program 
that consists entirely of:

volatile int x, y;

Thread 1:
x = 1;
y = 1;
x = 2;
y = 2;

Thread 2:
r1 = y;
if (r1 == 2) {
   r2 = x;
}

Then you know that you don't have to perform the barrier after the first 
write to x, because no one will read the value written there.)

					Jeremy


Doug Lea wrote:
> A couple of thoughts on this discussion. Maybe they'd be
> better on JMM list, but I think membership overlaps a lot.
> 
> Even though the JMM stands alone, regardless of underlying
> caches, fences, and other memory system control, many people
> seem to want a way to reconcile what they know about memory
> systems with the JMM. Some of these people know less about
> memory systems than is required to understand how they do
> fit together. In which case, suggesting they ignore memory
> system control and focus on JMM rules is good advice.
> 
> But some do know enough, and know what they want in terms
> of memory control, but don't know how to achieve it in Java code.
> So there ought to be a good high-level account of the JMM
> targeted to such people. In addition to mapping reads
> and writes of final, volatile, plain fields to fences
> etc (as in my http://gee.cs.oswego.edu/dl/jmm/cookbook.html),
> this mainly revolves around how/why optimization via program
> analysis adds to the straight mapping story.
> For example, that if a compiler sees
> "local a = x.field; local b = x.field;" that it is normally
> allowed (but not required) to transform to: "local a = x.field;
> local b = a;". Or maybe kill "b" entirely and just use "a".
> Or maybe not even actually perform the x.field read at all
> if its value is known for all possible sequential executions
> (in which case it need not even actually write it).
> And so on. Plus the fun/weird causal cycle issues such
> analyses can encounter. Is there a simple way to characterize
> these to arrive at a good short answer to questions from
> this sort of audience?
> 
> -Doug
