[concurrency-interest] lockfree programming and threads of different priorities

Nathan Reynolds nathan.reynolds at oracle.com
Thu Jul 3 14:30:27 EDT 2014


On a uniprocessor machine, once a priority 5 thread is unblocked, no 
priority 4 thread is allowed to run until that priority 5 thread blocks 
again.  So, if your lock simply has the priority 5 thread spin, the 
priority 4 thread that holds the lock will never get a chance to run 
and unlock.

One might think that calling Thread.yield() will let the priority 4 
threads run.  Unfortunately, yield() simply puts the priority 5 thread 
at the end of the run queue with respect to other priority 5 threads.  
The priority 4 threads still won't be allowed to run.

The only way to get priority 5 threads to give up the CPU is to have 
them block (e.g. park, wait for I/O, etc).
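
For example, here is a minimal, self-contained illustration of parking 
and unparking with LockSupport (the ParkDemo class, the sleep and the 
printed message are only there to make the example runnable):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            // The thread gives up the CPU here; the scheduler is now free
            // to run lower-priority threads until someone unparks it.
            LockSupport.park();
            System.out.println("unparked");
        });
        waiter.start();

        Thread.sleep(100);          // crude way to let the waiter reach park()
        LockSupport.unpark(waiter); // hand the waiter a permit so park() returns
        waiter.join();
    }
}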

Let's say a multi-core machine has N cores.  If N priority 5 threads are 
running, then no priority 4 thread will be allowed to run until one of 
the priority 5 threads blocks.

 > If this is all true, then how are lockfree queues implemented in the 
first place, as they seem to be the basis of a proper parking lock?

A simple approach is to use ConcurrentLinkedQueue.  It uses atomic 
compare-and-set operations to enqueue and dequeue elements without 
taking a lock.
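
For example (a small sketch; the QueueDemo class name is just for 
illustration):

import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueDemo {
    public static void main(String[] args) {
        // ConcurrentLinkedQueue is lock-free: offer() and poll() use
        // compare-and-set on the tail and head rather than taking a lock.
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        queue.offer("first");
        queue.offer("second");
        System.out.println(queue.poll()); // prints "first"; poll() returns null if empty
    }
}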

A simple lock has a thread use an atomic compare-and-set to acquire the 
lock.  If that fails, the thread pushes itself onto a lock-free queue of 
waiters and tries to acquire the lock again.  If that fails a second 
time, it calls LockSupport.park().
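
Here is a rough sketch of that recipe, assuming an AtomicBoolean as the 
lock word and a ConcurrentLinkedQueue of waiting threads.  It is 
illustrative only; a production lock (e.g. one built on 
AbstractQueuedSynchronizer) handles fairness, interruption and many 
more corner cases:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class ParkingLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // Fast path: grab the lock with a single compare-and-set.
        if (locked.compareAndSet(false, true)) {
            return;
        }
        // Slow path: enqueue ourselves, then re-check before parking so an
        // unlock that raced with the enqueue is not missed.
        Thread current = Thread.currentThread();
        waiters.add(current);
        while (!locked.compareAndSet(false, true)) {
            LockSupport.park(this);   // blocks; the loop also covers spurious wakeups
        }
        waiters.remove(current);      // we hold the lock now
    }

    public void unlock() {
        locked.set(false);
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next); // wake one waiter so it can retry the compare-and-set
        }
    }
}

Note that unlock() may unpark a waiter that hasn't parked yet; 
LockSupport remembers the permit, so the subsequent park() returns 
immediately and the retry loop picks up the lock.  That is what guards 
against the missed-wakeup problem mentioned below.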

-Nathan

On 7/3/2014 10:39 AM, Andy Nuss wrote:
> None of the code in any of the critical sections of my type of lock 
> (which does not use parking) tries to ever acquire other locks.  Still:
>
> what if a thread at priority 4, calling my remove() method on an object, 
> enters the lock() function for the lock owned by the instance 
> implementing remove()?  That critical section starts calling 
> HashMap.remove(...).  At some instant before unlock() is called (while 
> remove() is in the critical section), a thread of priority 5 wakes up 
> and begins calling the add() method on that same object.  It tries to 
> enter the critical section for the lock being held by the thread of 
> priority 4 (my guess is that the priority 4 thread could be suspended 
> now in some multi-CPU scenarios because of the priority 5 thread), and 
> because the priority 5 thread keeps trying to enter this non-parking 
> lock, thread 4 never gets to leave its critical section.
>
> If this is all true, then how are lockfree queues implemented in the 
> first place, as they seem to be the basis of a proper parking lock?
>
>
> On Thursday, July 3, 2014 9:26 AM, Nathan Reynolds 
> <nathan.reynolds at oracle.com> wrote:
>
>
> A true deadlock requires a circular wait, no preemption, resource 
> holding and mutual exclusion.  Your lock satisfies the last 3 
> requirements.  So, you simply need to find a circular wait.  This 
> would mean 2 locks and 2 threads, with each thread holding one lock 
> and waiting on the other.  This is easily revealed by taking a stack 
> trace of all of the threads once the deadlock happens.  You then have 
> to figure out the N locks and N threads that are in a circular wait.
>
> When implementing locks, threads will park and unpark.  This means 
> keeping a queue of blocked threads for the lock.  If there is a slight 
> mistake, threads could end up stuck in the queue because they missed 
> being woken up.  Depending upon the queue implementation, threads 
> could think they are in the queue but end up being overwritten.  I 
> find dealing with blocking to be the trickiest part of implementing a lock.
> -Nathan
> On 7/3/2014 7:03 AM, Andy Nuss wrote:
>> Hi,
>>
>> I am using a class that I wrote called SimpleLock.  It holds one 
>> AtomicBoolean and public lock() and unlock() methods, as you would 
>> expect.
>>
>> All my data structures that have non-thread-safe members that are 
>> modified and/or read together (often both) use this pattern:
>>
>> myprivatelock.lock();
>> try {
>>     ... do some work on non-thread-safe private members
>> } finally {
>>     myprivatelock.unlock();
>> }
>>
>> I am seeing some deadlocks in the SimpleLock.lock() function in my 
>> various pools of threads.  My model is to use threadpools, with all 
>> threads within each pool running at the same priority.  However, the 
>> various threadpools run at different priorities.
>>
>> Currently, in a class like this:
>>
>> class MyDeadLockingClass {
>>
>>     public void add() {
>>         // ...use SimpleLock's lock() and unlock()
>>     }
>>
>>     private void remove() {
>>         // ...use SimpleLock's lock() and unlock()
>>     }
>> }
>>
>> it is possible for the add() and remove() functions to be called from 
>> different threadpool threads, and thus at different thread priorities.  
>> Could this be a big problem, or is it still safe?  My guess is that it 
>> is safe, and that I should look for the cause of the deadlocking 
>> elsewhere.
>>
>> Andy
