[concurrency-interest] Lock-free mania

Chris Purcell chris.purcell.39 at gmail.com
Wed Apr 18 08:11:53 EDT 2007

> Very well. Then, I give some support for my statements in the form of
> some working test in the TMC framework:

>             do {
>                 v = value.get();
>                 for (int i = 0; i<10000; ++i) i = i+2-2; // delay for the test
>             } while (!value.compareAndSet(v, v + 1));

>         public synchronized int increment() {
>             for (int i = 0; i<10000; ++i) i = i+2-2; // delay for the test
>             return ++value;
>         }

Wow. That's a really pointless benchmark. In the lock-free case,
you've put a large delay between reading the value and incrementing
it, pretty much guaranteeing that half the work you do is wasted.
Depending on the relative sizes of scheduling quanta, you could waste
up to two-thirds of your effort.

Analysing further: in the lock-based system, you've placed each lock
release and the subsequent lock acquisition right next to each other --
no delay. That means the processor will still have the lock in exclusive
mode in cache, allowing it to immediately relock it. That prevents
cachelines ping-ponging between the cores, since threads serialize to
a large extent. The lock-free system, on the other hand, is
deliberately *losing* exclusive mode after each update, and then
racing with the other threads to regain it.
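To make the contrast concrete, here is a minimal sketch of the undelayed CAS idiom (the class name `CasCounter` is mine, for illustration): the window between `get()` and `compareAndSet()` is kept as short as possible, so a failed CAS throws away almost no work before retrying -- the opposite of the benchmark above, which maximizes that window.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Standard CAS retry loop: read, compute, attempt to publish.
    // If the CAS fails, another thread incremented in the meantime;
    // we simply re-read and retry with negligible wasted effort.
    public int increment() {
        int v;
        do {
            v = value.get();
        } while (!value.compareAndSet(v, v + 1));
        return v + 1;
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Thread[] ts = new Thread[4];
        for (int t = 0; t < ts.length; t++) {
            ts[t] = new Thread(() -> {
                for (int i = 0; i < 10000; i++) c.increment();
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 40000: no increments lost
    }
}
```

(In real code you'd just call `AtomicInteger.incrementAndGet()`, which does the same thing internally.)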

> I have inserted some delay into both of the algorithms. If the delay
> is not inserted, there is a slight advantage for the lock-free
> algorithm...Is that some support for the suppositions?

So you deliberately crippled the algorithm, and then it went slower.
How does this lend support to any of your suppositions? Lock-free
systems are useful when they're designed with care to achieve good
performance and scalability, not thrown together in anger by someone
out to prove they don't work. The only position your benchmark
supports is that naively creating lock-free algorithms usually results
in worse performance than naively creating blocking ones, which I
doubt anyone would disagree with. Lock-freedom is a non-trivial
subject. Indeed, from the very publication you cite:

"Nonblocking algorithms can be extremely difficult to design and
implement, but they can offer better throughput and greater resistance
to liveness problems such as deadlock and priority inversion. In this
installment of Java theory and practice, concurrency guru Brian Goetz
illustrates how several of the simpler nonblocking algorithms work."

Notice the keywords: "extremely difficult", "can offer", and "guru".

> What about livelocks?

"Lock-free" by definition means livelocks cannot happen.

> 2: When there are frequent conflicts between threads, i.e. under heavy
> load conditions, it becomes a busy waiting loop and, consequently,
> it becomes very inefficient just at the critical moment.

I think you're getting "lock-free algorithms" and "spinlocks"
confused. Lock-free algorithms don't busy-wait: when an operation has
to retry, it's because another thread completed its operation in the
meantime, so the system as a whole is still getting work done.
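For contrast, here is a minimal spinlock sketch (my own illustration; the class name `SpinLock` is not from any quoted code). A thread stuck in `lock()` really does busy-wait, burning CPU while making no progress -- the behaviour the criticism above describes, and exactly what a lock-free algorithm avoids.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    // Busy-wait until the lock is free. While spinning, this thread
    // accomplishes nothing -- unlike a failed CAS in a lock-free
    // algorithm, which implies another thread just made progress.
    public void lock() {
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU spin hint (Java 9+)
        }
    }

    public void unlock() {
        held.set(false);
    }

    // Demo: a plain, non-atomic counter stays correct because the
    // spinlock serializes the increments.
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        Thread[] ts = new Thread[4];
        for (int t = 0; t < ts.length; t++) {
            ts[t] = new Thread(() -> {
                for (int i = 0; i < 10000; i++) {
                    lock.lock();
                    counter++;   // protected by the spinlock
                    lock.unlock();
                }
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(counter); // 40000
    }
}
```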


More information about the Concurrency-interest mailing list