[concurrency-interest] concurrency puzzle

David Holmes dcholmes at optusnet.com.au
Mon Sep 11 22:25:50 EDT 2006


Peter,

> But what about the behaviour of the synchronized statement? If a lock
> is obtained, local memory (for that thread) is invalidated and the
> next read is going to access main memory.

As Jeremy stated, you are looking at this from the wrong perspective. The new
JMM isn't about local vs. global memory, or about invalidation - it defines
happens-before relationships, and those relationships determine the legal set
of values that a read of a variable can return. In that sense the behaviour of
a synchronized statement is quite simple:
 - releasing a lock happens-before any subsequent (in time) acquisition of
that lock

The interesting thing is forming the chain of happens-before relationships
to see how a write in one thread is related to a read in another.
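
For illustration, here is a minimal sketch of such a chain - this is not
code from the puzzle, and the class and field names are invented:

class HappensBeforeChain {
    private final Object lock = new Object();
    private int x;              // shared, deliberately not volatile

    void writer() {
        synchronized (lock) {
            x = 42;             // (1) write; program order: (1) happens-before (2)
        }                       // (2) unlock of 'lock'
    }

    void reader() {
        synchronized (lock) {   // (3) later (in time) lock of the same monitor
            System.out.println(x);  // (4) read; (3) happens-before (4)
        }
    }
}

If the unlock at (2) occurs before the lock at (3) in time, then the chain
(1) -> (2) -> (3) -> (4) guarantees that the read at (4) sees 42. Without
such a chain the read could legally return either 0 or 42.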

> That is why the assignment of x=20 could get lost. If the x=20 isn't
> written to main memory (and in this case there is no need to) but only
> to cache, the value is 'dropped' when the cache is invalidated.

You could look at it this way, but it isn't necessary to do so. In hardware
terms the write x=20 could go straight to main memory, but it could then be
followed by the write of x=10 from the constructor, which has been sitting in
the write buffer of another processor. The first thread then acquires the
lock, reads the value of x, and sees 10.

In terms of the memory model there is no happens-before ordering between the
write x=10 and the write x=20, so when x is read, the read can return either
of those values.
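
To make that concrete, here is one way the situation could be reconstructed
in code - again, this is not the original puzzle, and all names are invented:

class UnsafePublication {
    static UnsafePublication shared;    // not volatile: publication is racy
    int x;

    UnsafePublication() {
        x = 10;                         // write performed by the constructing thread
    }

    static void publisher() {
        shared = new UnsafePublication();   // unsafe publication of the new object
    }

    static void mutator() {
        UnsafePublication s = shared;       // racy read of the reference
        if (s != null) {
            s.x = 20;                       // no happens-before edge to the x = 10 write
            synchronized (s) {
                // No thread released this lock after either write, so acquiring
                // it adds nothing to the happens-before order: this read may
                // legally return 10 or 20.
                System.out.println(s.x);
            }
        }
    }
}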

Thinking about the memory model in terms of a conceptual "cache" sometimes
serves as a useful descriptive device, but it falls apart if people then try
to apply real hardware caching behaviour to that conceptual model - it doesn't
work that way.

Cheers,
David Holmes


