[concurrency-interest] Re: synchronized vs ReentrantLock semantic

Gregg Wonderly gregg at cytetech.com
Mon Jun 13 16:23:55 EDT 2005



Michael Smith wrote:
> Since this seems like the fundamental use-case for this API, we can only 
> assume that we are blind and just aren't seeing something in the code.  
> Can someone provide more details about how and where thread memory and 
> main memory are synchronized when using ReentrantReadWriteLock's locks?

The purpose of this type of lock is to optimize read access.  Without 
these new locks, you'd have to use synchronized(), and then you'd be 
flushing cache/TLB on every read!

So, this lock provides guards that will prohibit read while a write is 
in progress, but otherwise let reads go through unfettered.
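The guard pattern described above can be sketched as follows.  This is a minimal illustration, not code from the original thread; the class name SharedValue and its int field are made up for the example.  Any number of threads may hold the read lock at once, while the write lock excludes both readers and other writers:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedValue {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int get() {
        rwLock.readLock().lock();    // shared: concurrent with other readers
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void set(int newValue) {
        rwLock.writeLock().lock();   // exclusive: blocks readers and writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```

The lock()/unlock() calls are placed in try/finally so the lock is released even if the guarded code throws.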

When you use this lock to protect reading and writing of other values, 
those values must be volatile if you want their changed state to 
propagate between threads/processors etc.

The JMM has not changed.  synchronized flushes the value of everything 
referenced inside the block, which is expensive.  volatile writes are 
always flushed.  However, volatile does not make operations atomic 
(synchronized does); you need this type of lock to get atomicity.
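The atomicity point can be shown with a small, made-up example: a compound operation like count++ is a read-modify-write, so marking the field volatile makes each read and write visible to other threads but does not stop two threads from interleaving and losing an update.  Some form of locking around the increment is still needed:

```java
public class VolatileCounter {
    // volatile gives visibility across threads, but...
    private volatile int count = 0;

    // ...count++ is still three steps (read, add, write), so two
    // threads calling this concurrently can lose an increment.
    public void increment() {
        count++;
    }

    public int get() {
        return count;
    }
}
```

A read/write lock (or synchronized) around increment() is what makes the read-modify-write a single atomic step.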

So, the programming method has changed slightly, and more of the 
responsibility is on you to make sure you are using volatile.  By using 
volatile together with this type of lock, you get atomic read/write 
access, but without a cache/TLB flush on every read.

If you are doing this type of thing and your mix is close to 10% reads 
to 90% writes, you will probably not see much of a gain, because the 
flushes will happen on the writes no matter what.  But if the mix is 
slanted the other way, then you'll win due to all the flushes that 
aren't happening on reads...

Gregg Wonderly
