[concurrency-interest] fast semaphore

Joseph Seigh jseigh_cp00 at xemaps.com
Fri Apr 6 20:22:37 EDT 2007


Szabolcs Ferenczi wrote:

>
> On 05/04/07, Joseph Seigh <jseigh_cp00 at xemaps.com> wrote:
>
>> Here's the fast pathed semaphore I used with ConcurrentLinkedQueue.
>
>
> I have some comments about it.
>
> If this semaphore is faster, I think, it is faster for a
> release-acquire sequence. And it is faster because the compare-and-swap
> (CAS) operation on an atomic variable is faster than a corresponding
> semaphore operation. This also means that in these cases the semaphore
> operations are short-circuited by the CAS operations on an
> atomic variable.
>
> On the other hand, the `faster' semaphore is slower for the
> acquire-release sequence because of the overhead of the short-circuiting
> wrapper code. However, in this case the delay is negligible, since
> the thread is suspended anyway.
>
> So, it means that coding the blocking version of the
> ConcurrentLinkedQueue with help of this variant of the semaphore has
> some advantage only in the case when the consumer is not blocked.
>
> Do you agree with my comments?


Basically yes.  Fast pathed refers to the non-blocking case.  A thread
blocked on a fast pathed semaphore is just as fast as a thread blocked
on a regular semaphore.  Although some of the benefit is from a possibly
shorter code path, the main benefit is from being lock-free on the fast
path.  No thread will block and be suspended waiting for another thread
holding a lock.  Adding thread suspend/resume overhead for locking can
add considerable delay.
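Seigh's actual implementation isn't reproduced in this message, but the
fast-path idea he describes can be sketched as a "benaphore"-style
wrapper: a lock-free atomic counter sits in front of a backing semaphore,
and the backing semaphore is touched only when a thread actually has to
be suspended.  The class, field names, and the choice of
java.util.concurrent.Semaphore as the slow-path primitive below are my
illustration, not his code:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a fast-pathed semaphore: the permit count lives in an
// AtomicInteger; a negative count means |count| threads are waiting.
class FastSemaphore {
    private final AtomicInteger count;          // available permits (may go negative)
    private final Semaphore sem = new Semaphore(0);  // slow path: parks waiters

    FastSemaphore(int permits) {
        count = new AtomicInteger(permits);
    }

    void acquire() throws InterruptedException {
        // Fast path: one lock-free atomic decrement.  If the old count
        // was positive, we took a permit without ever touching a lock.
        if (count.getAndDecrement() <= 0) {
            // Slow path: no permit; park until a release hands one over.
            sem.acquire();
        }
    }

    void release() {
        // If the old count was negative, some thread is (or soon will be)
        // parked on the backing semaphore; wake exactly one.
        if (count.getAndIncrement() < 0) {
            sem.release();
        }
    }
}
```

With a permit available, acquire() is a single getAndDecrement and the
thread is never suspended; the backing Semaphore is consulted only when
the count goes negative, i.e. when the thread would have to block
anyway, which is why the slow path's extra wrapper overhead is
negligible, as noted above.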

--
Joe Seigh

