[concurrency-interest] MRULinkedBlockingQueue

Mike Quilleash mike.quilleash at subexazure.com
Wed Apr 18 17:58:29 EDT 2007


Ah, I see what you mean when there are more threads contending than there
are buffer slots available.  You are correct, I believe: if three threads
contend for two slots then one slot will get updated twice, and not
necessarily in the order that add() was called.

I think that's just part and parcel of concurrency in situations like
this: if lots of threads arrive at a critical point at the same time, in
the absence of any fairness logic, then "lost updates" can sometimes
occur (e.g. ConcurrentMap.put()).  I guess we just have to make
judgements on whether this is acceptable or not for a particular
scenario.  In this instance it is OK for me, as the size will be 20+
and the concurrent thread count should not be more than 5.
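To make the "more threads than slots" point concrete, here is a minimal, hypothetical illustration (the names slotFor and capacity are my own, not from the actual MRULinkedBlockingQueue code): if claimed positions are mapped into the buffer modulo its size, then with a capacity of 2 the positions 0 and 2 both land in slot 0, so whichever write arrives second silently overwrites the first.

```java
// Hypothetical sketch (names assumed): maps a claimed position ticket
// into a fixed-size buffer. With capacity 2, tickets 0 and 2 collide
// on slot 0 -- the "lost update" described above.
class LostUpdateDemo {
    static int slotFor(int ticket, int capacity) {
        return ticket % capacity;
    }

    public static void main(String[] args) {
        int capacity = 2;
        // Three contending add() calls claim tickets 0, 1, 2:
        System.out.println(slotFor(0, capacity)); // slot 0
        System.out.println(slotFor(1, capacity)); // slot 1
        System.out.println(slotFor(2, capacity)); // slot 0 again -> overwrite
    }
}
```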

Cheers.

Mike.

-----Original Message-----
From: Szabolcs Ferenczi [mailto:szabolcs.ferenczi at gmail.com] 
Sent: 18 April 2007 21:50
To: Mike Quilleash 
Cc: Hanson Char; Concurrency-interest at cs.oswego.edu
Subject: Re: [concurrency-interest] MRULinkedBlockingQueue

On 18/04/07, Mike Quilleash <mike.quilleash at subexazure.com> wrote:

> @Szabolcs - I don't see any data race problems.  I think your scenario
> goes like this...
>
> T1 - Calls add() enters nextPos(), gets current = 0 and newValue = 1
> T2 - ditto
> T1 - compareAndSet( 0, 1 ), succeeds and returns 0.
> T3 - Calls add() enters nextPos(), gets current = 1 and newValue = 2
> T3 - compareAndSet( 1, 2 ), succeeds and returns 1.
> T2 - compareAndSet( 0, 1 ), fails and loops.
> T2 - current = 2 and newValue = 3
> T2 - compareAndSet( 2, 3 ), succeeds and return 2.
>
> Unless I'm missing or misunderstanding something.
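The interleaving traced above is the classic compare-and-set retry loop. As a hedged sketch (the class and field names here are my assumptions, not the actual MRULinkedBlockingQueue source), nextPos() would look something like:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Assumed sketch of the nextPos() CAS loop traced in the T1/T2/T3
// interleaving above: read the current position, attempt to advance it
// by one, and retry if another thread got in first.
class PosCounter {
    private final AtomicInteger pos = new AtomicInteger(0);

    int nextPos() {
        int current;
        int newValue;
        do {
            current = pos.get();      // e.g. T2 re-reads 2 after losing
            newValue = current + 1;   // the race on compareAndSet(0, 1)
        } while (!pos.compareAndSet(current, newValue));
        return current;               // the position this thread claimed
    }
}
```

Each thread eventually claims a distinct position, which is why the trace ends with T1, T3, and T2 returning 0, 1, and 2 respectively.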

I started the story with a limited-capacity array of size 2. I hope that
is a legitimate assumption. Then the story goes a bit differently. Also
note that I have assumed an indefinitely long delay after the return of
nextPos(). That is again a legitimate assumption. It is part of the game
in a concurrent situation, just like race conditions, isn't it?  It is
the beauty of lock-free algorithms that you can legally assume any
long delay between any two atomic operations and the algorithm should
keep the same functionality, if I might remark.

Szabolcs

