[concurrency-interest] Software Transactional Memory

Thomas Hawtin tackline at tackline.plus.com
Mon Mar 26 08:16:14 EDT 2007


Doug Lea wrote:
> Joseph Seigh wrote:
>>
>> Question about ConcurrentLinkedQueue.  It uses AtomicReferenceFieldUpdater
>> rather than AtomicReference.  Why is that?
> 
> The two choices have different tradeoffs. Using fieldUpdaters does
> have more per-call overhead, but in principle most of it
> is optimizable away, and on some platforms and some contexts,
> it often is.

I got a figure of around 80 cycles per getAndSet (when warmed up) in a 
microbenchmark. That's quite significant on my platform (single core AMD 
64).
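
For reference, a minimal sketch of the kind of microbenchmark I mean is
below. The class and field names are made up, and the absolute numbers
depend heavily on the JIT and the hardware, so treat it as illustrative
only:

    import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

    // Rough getAndSet microbenchmark through a field updater.
    // Names and iteration counts are arbitrary; results vary by platform.
    public class UpdaterBench {
        static final class Holder {
            volatile Object value;
            static final AtomicReferenceFieldUpdater<Holder, Object> VALUE =
                AtomicReferenceFieldUpdater.newUpdater(
                    Holder.class, Object.class, "value");
        }

        public static void main(String[] args) {
            Holder h = new Holder();
            Object x = new Object();
            // Warm up so the updater path gets JIT-compiled.
            for (int i = 0; i < 1000000; ++i)
                Holder.VALUE.getAndSet(h, x);
            int iterations = 10000000;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; ++i)
                Holder.VALUE.getAndSet(h, x);
            long elapsed = System.nanoTime() - start;
            System.out.println(
                (double) elapsed / iterations + " ns per getAndSet");
        }
    }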

>              Using a separate AtomicReference in essence doubles
> the length of a list (every second indirection is just a
> pointer holding the real pointer). And for small nodes as used
> here, nearly doubles the footprint -- even a one-field object
> has object header etc overhead, and increases GC overhead.

ConcurrentLinkedQueue.Node has two volatile fields with updaters: next 
and item. Node has Object as its base class, so it might make sense to 
opportunistically extend AtomicReference for the next field. OTOH, that 
would presumably tend to make methods like contains(Object) slightly 
slower (not tested).
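
A rough sketch of the two layouts, to make the comparison concrete.
Neither is the actual ConcurrentLinkedQueue source and the class names
are invented:

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

    // Updater variant: one object per node, CAS goes through a static updater.
    class UpdaterNode {
        volatile Object item;
        volatile UpdaterNode next;
        static final AtomicReferenceFieldUpdater<UpdaterNode, UpdaterNode> NEXT =
            AtomicReferenceFieldUpdater.newUpdater(
                UpdaterNode.class, UpdaterNode.class, "next");

        boolean casNext(UpdaterNode expect, UpdaterNode update) {
            return NEXT.compareAndSet(this, expect, update);
        }
    }

    // Extending variant: the node itself is the AtomicReference holding the
    // next pointer, saving a separate AtomicReference object per link but
    // mixing the node and its CAS target in one class.
    class ExtendingNode extends AtomicReference<ExtendingNode> {
        volatile Object item;

        boolean casNext(ExtendingNode expect, ExtendingNode update) {
            return compareAndSet(expect, update); // inherited CAS on "next"
        }
    }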

ConcurrentLinkedQueue.head and tail also have updaters. Here only 
applications with many small queues would be significantly affected by 
the overhead of extra AtomicReferences. So it's still a trade-off 
between how often CAS is used and how important size is. (Similarly, 
the frequency of CAS versus the frequency of get led to JDK7 
java.nio.channels.SelectionKey using an updater for attach.)
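
Purely as an illustration of what head/tail updaters look like (again,
not the real ConcurrentLinkedQueue code; the class and method names are
made up), a per-class static updater lets each queue instance keep plain
volatile fields rather than two extra AtomicReference objects. Raw types
are needed because updaters can't be created over a generic field type:

    import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

    // Illustration of the head/tail updater layout, not the JDK source.
    class TinyQueue<E> {
        static final class Node<E> {
            volatile E item;
            volatile Node<E> next;
            Node(E item) { this.item = item; }
        }

        private volatile Node<E> head = new Node<E>(null);
        private volatile Node<E> tail = head;

        @SuppressWarnings("rawtypes")
        private static final AtomicReferenceFieldUpdater<TinyQueue, Node> TAIL =
            AtomicReferenceFieldUpdater.newUpdater(
                TinyQueue.class, Node.class, "tail");

        boolean casTail(Node<E> expect, Node<E> update) {
            return TAIL.compareAndSet(this, expect, update);
        }
    }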

Tom Hawtin

