[concurrency-interest] Volatile / Synchronized?

Jeremy Manson jmanson at cs.umd.edu
Tue Dec 19 22:15:31 EST 2006

First Last wrote:
> Question #1:


> What does this mean? Should I take this to mean that the following example 
> won't work?
> class BadUseOfVolatileExample {
>   volatile int x = 0;
>   volatile boolean v = false;
>   public void writer() {
>     x = 42;
>     v = true;
>   }
>   public void reader() {
>     if (v == true) {
>       //uses x - guaranteed to see 42.
>     }
>   }
> }
> If it is true the example above won't work, why does it work when x
> is not volatile? If it is false that the example won't work, can you please
> show me an example that demonstrates the pitfall the Important Note
> warns us of? Along with an explanation of why it is so?

No, this example works, because you perform a write to v, followed by a 
read of v.  Making x volatile has no real effect on this code in 
isolation.  The paragraph you quoted is meant to prevent people from 
assuming that a write to a volatile will act as a general memory 
barrier; writing to a volatile variable won't necessarily make values 
visible to another thread unless that thread subsequently reads the 
same volatile variable.
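To make that concrete, here is a sketch of my own (not from the original post) contrasting a reader that goes through the volatile with one that does not; only the path that observes the volatile write gets the visibility guarantee for x:

```java
class VisibilityExample {
    int x = 0;                  // plain, non-volatile field
    volatile boolean v = false;

    void writer() {
        x = 42;                 // 1. ordinary write
        v = true;               // 2. volatile write that "publishes" x
    }

    void reader() {
        if (v) {                // volatile read that observes the write above...
            int r = x;          // ...is guaranteed to see x == 42
        }
    }

    void badReader() {
        int r = x;              // no volatile read first: the JMM permits
                                // this thread to see either 0 or 42
    }
}
```

The point is that the happens-before edge runs from the volatile write to a later volatile read of the same variable; a thread that never performs that read gets no ordering guarantee.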

> Question #2:


> Is this necessary in this example? If not, what sort of situation is it 
> necessary
> in? I could rephrase this whole question as, "Is there ever a circumstance 
> where
> a volatile variable must be written to from within a synchronized block?" 
> (and if so,
> please demonstrate the pitfall for my understanding.)

The synchronization is unnecessary in this example.  You can fairly 
easily contrive examples where you would use locking together with 
volatiles, but you will generally be using them for two different 
purposes.  The canonical example is double-checked locking:

   class Foo {
       private volatile Helper helper = null;
       public Helper getHelper() {
           if (helper == null) {
               synchronized (this) {
                   if (helper == null)
                       helper = new Helper();
               }
           }
           return helper;
       }
   }

This allows the reader to skip synchronization once the object has 
already been constructed.  The synchronization is necessary for mutual 
exclusion: it ensures that only one thread can observe a null helper 
field and assign it, so the Helper is constructed exactly once.
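Concretely, here is a sketch (my example, with a hypothetical Helper holding a plain, non-final int field) of the broken variant without volatile; under the Java Memory Model, a racy reader may see a non-null reference before the constructor's writes become visible:

```java
class Helper {
    int answer;                   // deliberately non-final for the illustration
    Helper() { answer = 42; }
}

class BrokenFoo {
    private Helper helper;        // NOT volatile: broken double-checked locking
    Helper getHelper() {
        if (helper == null) {               // racy read, no happens-before edge
            synchronized (this) {
                if (helper == null)
                    helper = new Helper();
            }
        }
        return helper;            // another thread may legally see a non-null
                                  // Helper whose answer field still reads 0
    }
}
```

Making the field volatile restores the happens-before edge from the assignment to any later read that skips the synchronized block.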

> Question #3:
> I have seen it suggested that one scenario in which the synchronized is 
> required
> would be to ensure the completion of a constructor before the assignment. 


> Is this a valid concern, or does a constructor give you a happens-before 
> guarantee?

The end of the constructor happens-before the assignment to buffer in 
this example.  In fact, any statements that get executed in this thread 
before the assignment to buffer happen-before that assignment -- this 
includes not just the constructor call, but calls to other methods and 
so on.

Just looking at this code in isolation, I can't think of a reason you 
would need the synchronized block.
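The buffer code under discussion is not quoted in this message, so the class and method names below are my guesses; this sketch just shows the program-order point being made:

```java
class BufferHolder {
    private volatile byte[] buffer;  // "buffer" comes from the question;
                                     // the rest of this class is hypothetical

    void publish() {
        byte[] b = new byte[16];     // construction / initialization work...
        b[0] = 1;
        buffer = b;                  // ...all of which happens-before this
                                     // volatile write, with no synchronized needed
    }

    byte[] read() {
        byte[] b = buffer;           // a reader that sees the new array is
        return b;                    // guaranteed to see b[0] == 1
    }
}
```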

> Question #4:
> Is there any use for AtomicBoolean or AtomicReference if we don't care about 
> CAS semantics?
> In other words, if all I want is to avoid the word synchronized, is all I 
> need to do to add "volatile" and
> kick back and enjoy?

It shouldn't be a problem for you to do this right now, as long as you 
never need an atomic read-modify-write (the CAS semantics you mentioned).
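A sketch of the difference (my example, not from the original post): plain get/set on an AtomicBoolean and reads/writes of a volatile boolean give the same visibility guarantees; what the Atomic* classes add is atomic read-modify-write operations such as compareAndSet:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class Flags {
    private volatile boolean vFlag;
    private final AtomicBoolean aFlag = new AtomicBoolean();

    void volatileSet()        { vFlag = true; }   // same visibility as aFlag.set(true)
    boolean volatileGet()     { return vFlag; }   // same visibility as aFlag.get()

    // Only the Atomic* classes give you an atomic check-then-act;
    // with a bare volatile this would be a racy read followed by a write.
    boolean claimOnce()       { return aFlag.compareAndSet(false, true); }
}
```

So if you genuinely only ever read and write the flag, volatile is enough; the moment you need check-then-act, reach for AtomicBoolean (or a lock).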

> Question #5:
> How much am I saving by dropping synchronized keywords anyways? I've heard 
> things like JSE5
> and JSE6 are just a lot better at optimizing locks. That sounds great, but I 
> can't quantify this kind
> of assessment. Does this mean it's half as slow as it used to be? Does this 
> mean it's 2 times worse than
> using a synchronized keyword? Does it mean there could theoretically be no 
> locking at all because through some magic the JVM has optimized away the 
> lock?

In general, it means that the JVM is very good at reducing the cost of 
uncontended locks (such as the cost of synchronizing on a Vector or 
Hashtable that is never shared).  There are a lot of locks that will 
never be contended in a lot of Java programs.  OTOH, if you have a lot 
of lock contention, reducing it usually will give you a performance boost.

Other performance tips:

1) Always, always make sure you understand what you are writing, and 
that it is correct.  Lock-free and non-blocking algorithms are great, 
but it takes a lot of time and energy and a complete understanding of 
synchronization mechanisms to get them right.

2) The best bet is probably to start by using java.util.concurrent 
wherever possible.

3) There is a lot of unnecessary synchronization in Java code.  Don't 
use Vector or Hashtable, and use java.nio where possible.

4) Use immutable objects a lot.  If you aren't sharing mutable objects 
across threads, then you don't need to worry about locking.  Final 
fields are your friend.  Read the stuff on immutability in JCiP carefully.

5) Avoid excess lock contention by reducing lock scopes and lock durations.
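As an illustration of tip 5 (my example, not from the original post), per-call work can often be hoisted out of the critical section so the lock is held only for the shared update:

```java
class EventCounter {
    private final Object lock = new Object();
    private long count;

    void record(String event) {
        // Do the potentially expensive work outside the lock...
        String formatted = event.trim().toLowerCase();
        // ...and hold the lock only long enough to touch shared state.
        synchronized (lock) {
            count++;
        }
    }

    long count() {
        synchronized (lock) { return count; }
    }
}
```

Shorter critical sections mean other threads spend less time blocked, which directly reduces contention.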

> On a similar note, what is the cost of volatile field access? Compared to 
> regular field access? There
> must be something going on to ensure the happens-before relationships hold - 
> and it can't be free
> but again it's one of those things I'm more generally assured the JVM is 
> "better" at doing than it
> used to be.

They are more expensive, but what that means depends on your 
architecture.  On x86, volatile reads are very cheap, but writes cost 
somewhat more.

Phew!  Hope that helps.

