[concurrency-interest] Enforcing ordered execution of critical sections?

Hanson Char hanson.char at gmail.com
Sat Dec 20 18:59:02 EST 2014


This one looks interesting and educational.  The use of Condition enables
us to suspend and resume specifically targeted threads without resorting to
LockSupport.{park, unpark}.  Instead of using a PriorityQueue, perhaps we
could use a simple Map<Integer, Condition>, so we can get away without the
extra Waiter inner class?
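
Something along these lines, perhaps -- just an untested sketch (the map is
only touched while holding the lock, so a plain HashMap should do; the
method name simply mirrors Joe's version):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class OrderedExecutor<T> {

  final Lock lock = new ReentrantLock();
  // one Condition per waiting order, so we can signal exactly one thread
  final Map<Integer, Condition> waiters = new HashMap<>();
  int nextOrder; // 0 is next

  public T execCallableInOrder(int order, Callable<T> callable) throws Exception {
    awaitTurn(order);
    try {
      return callable.call();
    } finally {
      signalNext(order + 1);
    }
  }

  void awaitTurn(int order) {
    lock.lock();
    try {
      if (nextOrder != order) {
        Condition condition = waiters.computeIfAbsent(order, o -> lock.newCondition());
        while (nextOrder != order) {
          condition.awaitUninterruptibly();
        }
      }
    } finally {
      lock.unlock();
    }
  }

  void signalNext(int nextOrder) {
    lock.lock();
    try {
      this.nextOrder = nextOrder;
      Condition condition = waiters.remove(nextOrder);
      if (condition != null) {
        condition.signal(); // wake only the thread whose turn it now is
      }
    } finally {
      lock.unlock();
    }
  }
}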

Regards,
Hanson

On Sat, Dec 20, 2014 at 3:12 PM, Joe Bowbeer <joe.bowbeer at gmail.com> wrote:

> To round out the selection, here's an implementation using a PriorityQueue
> of Conditions:
>
> class OrderedExecutor<T> {
>
>   static class Waiter implements Comparable<Waiter> {
>
>     final int order;
>     final Condition condition;
>
>     Waiter(int order, Condition condition) {
>       this.order = order;
>       this.condition = condition;
>     }
>
>     @Override
>     public int compareTo(Waiter waiter) {
>       return order - waiter.order;
>     }
>   }
>
>   final Lock lock = new ReentrantLock();
>   final Queue<Waiter> queue = new PriorityQueue<>();
>   int nextOrder; // 0 is next
>
>   public T execCallableInOrder(int order, Callable<T> callable)
>       throws Exception {
>     assert order >= 0;
>     awaitTurn(order);
>     try {
>       return callable.call();
>     } finally {
>       signalNext(order + 1);
>     }
>   }
>
>   void awaitTurn(int order) {
>     lock.lock();
>     try {
>       Condition condition = null;
>       while (nextOrder != order) {
>         if (condition == null) {
>           condition = lock.newCondition();
>           queue.add(new Waiter(order, condition));
>         }
>         condition.awaitUninterruptibly();
>       }
>     } finally {
>       lock.unlock();
>     }
>   }
>
>   void signalNext(int nextOrder) {
>     lock.lock();
>     try {
>       this.nextOrder = nextOrder;
>       Waiter waiter = queue.peek();
>       if (waiter != null && waiter.order == nextOrder) {
>         queue.remove();
>         waiter.condition.signal();
>       }
>     } finally {
>       lock.unlock();
>     }
>   }
> }
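>
> For what it's worth, here's a quick, hypothetical driver (untested sketch,
> assuming the class above plus the usual imports) showing the intended use
> -- three threads arrive out of order, but the critical sections still run
> 0, 1, 2:
>
>   // illustrative only: orders are submitted as 2, 0, 1
>   OrderedExecutor<Integer> executor = new OrderedExecutor<>();
>   for (int i : new int[] {2, 0, 1}) {
>     final int order = i;
>     new Thread(() -> {
>       try {
>         executor.execCallableInOrder(order, () -> {
>           System.out.println("critical section " + order);
>           return order;
>         });
>       } catch (Exception e) {
>         e.printStackTrace();
>       }
>     }).start();
>   }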
>
> On Sat, Dec 20, 2014 at 11:21 AM, Joe Bowbeer <joe.bowbeer at gmail.com>
> wrote:
>>
>> Suman,
>>
>> I would advise against using notify/wait.  It raises a red flag for a lot
>> of reviewers, including me.
>>
>> The problems I see in this implementation are:
>>
>> 1. Pre-allocation of locks is prohibited by the (revised) problem statement.
>>
>> Note that if pre-allocation were allowed, then an array would be more
>> efficient than a Map.
>>
>> 2. Access to currentAllowedOrder is not thread-safe but should be.
>>
>>
>> On Sat, Dec 20, 2014 at 6:04 AM, suman shil <suman_krec at yahoo.com> wrote:
>>>
>>> I have modified my solution to avoid notifyAll(). Let me know your
>>> feedback.
>>>
>>>
>>> public class OrderedExecutor
>>> {
>>>     private int maxOrder;
>>>     private int currentAllowedOrder;
>>>     private Map<Integer, Object> map = new HashMap<Integer, Object>();
>>>
>>>     public OrderedExecutor(int n)
>>>     {
>>>         this.maxOrder = n;
>>>         this.currentAllowedOrder = 0;
>>>         for (int i = 0; i < this.maxOrder; i++)
>>>         {
>>>             map.put(i, new Object());
>>>         }
>>>     }
>>>
>>>     public Object execCriticalSectionInOrder(int order,
>>>                                              Callable<Object> callable)
>>>                                              throws Exception
>>>     {
>>>         if (order >= this.maxOrder)
>>>         {
>>>             throw new Exception("Exceeds maximum order " + maxOrder);
>>>         }
>>>
>>>         while (order != currentAllowedOrder)
>>>         {
>>>             synchronized (this.map.get(order))
>>>             {
>>>                 this.map.get(order).wait();
>>>             }
>>>         }
>>>         try
>>>         {
>>>             return callable.call();
>>>         }
>>>         finally
>>>         {
>>>             currentAllowedOrder = currentAllowedOrder + 1;
>>>             synchronized (this.map.get(order + 1))
>>>             {
>>>                 this.map.get(order + 1).notify();
>>>             }
>>>         }
>>>     }
>>> }
>>>
>>>
>>>   ------------------------------
>>>  *From:* Joe Bowbeer <joe.bowbeer at gmail.com>
>>> *To:* concurrency-interest <concurrency-interest at cs.oswego.edu>
>>> *Sent:* Friday, December 19, 2014 3:33 AM
>>>
>>> *Subject:* Re: [concurrency-interest] Enforcing ordered execution of
>>> critical sections?
>>>
>>> That would be "Tom" Cargill; link to paper:
>>>
>>> http://www.profcon.com/profcon/cargill/jgf/9809/SpecificNotification.html
>>>
>>>
>>>
>>> On Thu, Dec 18, 2014 at 1:32 PM, Joe Bowbeer <joe.bowbeer at gmail.com>
>>> wrote:
>>>
>>> I frown on use of notify[All]/wait because they make the code hard to
>>> maintain.
>>>
>>> In this case, with potentially lots of waiting threads, I would check
>>> out the "Specific Notification" pattern if I were determined to go the
>>> wait/notify route:
>>>
>>> Tom Cargill's paper is dated but still worth reading.
>>>
>>> Also see chapter 3.7.3 Specific Notifications in Doug Lea's CPiJ and
>>> Peter Haggar's article:
>>>
>>> http://www.ibm.com/developerworks/java/library/j-spnotif.html
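>>>
>>> The gist of the pattern, as a rough (untested) sketch with made-up names:
>>> every order value gets its own gate object, each waiter waits on its own
>>> gate, and the finishing thread notifies only the gate of the next order,
>>> so nobody is woken needlessly.
>>>
>>> import java.util.Map;
>>> import java.util.concurrent.ConcurrentHashMap;
>>>
>>> class SpecificNotificationSequencer {
>>>
>>>     private static final class Gate {
>>>         boolean open;  // guarded by the Gate's own monitor
>>>     }
>>>
>>>     private final Map<Integer, Gate> gates = new ConcurrentHashMap<>();
>>>
>>>     private Gate gate(int order) {
>>>         return gates.computeIfAbsent(order, o -> new Gate());
>>>     }
>>>
>>>     // called before the critical section; order 0 never waits
>>>     void awaitTurn(int order) throws InterruptedException {
>>>         if (order == 0) return;
>>>         Gate g = gate(order);
>>>         synchronized (g) {
>>>             while (!g.open) {
>>>                 g.wait();
>>>             }
>>>         }
>>>     }
>>>
>>>     // called after the critical section of 'order', releasing order + 1
>>>     void releaseNext(int order) {
>>>         Gate g = gate(order + 1);
>>>         synchronized (g) {
>>>             g.open = true;
>>>             g.notify();  // targeted: only the thread parked on this gate wakes
>>>         }
>>>     }
>>> }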
>>>
>>> On Thu, Dec 18, 2014 at 1:02 PM, Oleksandr Otenko <
>>> oleksandr.otenko at oracle.com> wrote:
>>>
>>> Yes, no one said it is a good idea to always do that. When it is
>>> contended, most of the threads will wake up only to go back to sleep.
>>>
>>> The pattern you are after is usually called a sequencer. You can see it
>>> used in TCP. I am not sure why it wasn't implemented in j.u.c. - maybe it
>>> is not that popular.
>>>
>>> The best solution will be lock-like, but the waiter nodes will contain
>>> the value they are waiting for, so only the specific threads get woken up.
>>> The solution with a concurrent map is very similar, only with a larger
>>> overhead from storing the index each thread is waiting for.
>>>
>>>
>>> Alex
>>>
>>>
>>>
>>> On 18/12/2014 20:21, Hanson Char wrote:
>>>
>>> Less overhead and simplicity are nice properties, even if they come at
>>> the expense of waking up all waiting threads just to find the one whose
>>> turn it is to execute.  Still, this seems like a good tradeoff.
>>>
>>>  Thanks,
>>> Hanson
>>>
>>> On Wed, Dec 17, 2014 at 11:43 PM, Peter Levart <peter.levart at gmail.com>
>>> wrote:
>>>
>>> On 12/17/2014 08:15 PM, Oleksandr Otenko wrote:
>>>
>>> No, there is no difference. Peter didn't spot that your entire method is
>>> synchronized, so a spurious wakeup won't make progress until the owner of
>>> the lock exits the method.
>>>
>>> You could split the synchronization into two blocks - one encompassing
>>> the wait loop, the other in the finally block; but it may make no
>>> difference.
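>>>
>>> Roughly like this, as a sketch only (reusing the same fields, and keeping
>>> Peter's notifyAll(); the maxLength check is omitted for brevity):
>>>
>>> public Object execCriticalSectionInOrder(int order, Callable<Object> callable)
>>>         throws Exception {
>>>     synchronized (this) {
>>>         // first block: only the wait loop holds the monitor
>>>         while (order != currentAllowedOrder) {
>>>             wait();
>>>         }
>>>     }
>>>     try {
>>>         // the critical section itself runs outside the monitor
>>>         return callable.call();
>>>     } finally {
>>>         synchronized (this) {
>>>             // second block: advance the turn and wake the waiters
>>>             currentAllowedOrder = currentAllowedOrder + 1;
>>>             notifyAll();
>>>         }
>>>     }
>>> }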
>>>
>>> Alex
>>>
>>>
>>> You're right, Alex. I'm so infected with the park/unpark virus that I
>>> missed that ;-)
>>>
>>> Peter
>>>
>>>
>>> On 17/12/2014 18:36, suman shil wrote:
>>>
>>> Thanks, Peter, for your reply. You are right. I should have incremented
>>> currentAllowedOrder in the finally block.
>>>
>>> Suman
>>>  ------------------------------------------------------------------------
>>> *From:* Peter Levart <peter.levart at gmail.com>
>>> *To:* suman shil <suman_krec at yahoo.com>; Oleksandr Otenko <
>>> oleksandr.otenko at oracle.com>; Concurrency-interest <
>>> concurrency-interest at cs.oswego.edu>
>>> *Sent:* Wednesday, December 17, 2014 11:54 PM
>>> *Subject:* Re: [concurrency-interest] Enforcing ordered execution of
>>> critical sections?
>>>
>>> On 12/17/2014 06:46 PM, suman shil wrote:
>>>
>>> Thanks for your response. Will notifyAll() instead of notify() solve the
>>> problem?
>>>
>>>
>>> It will, but you should also account for "spurious" wake-ups. You should
>>> increment currentAllowedOrder only after callable.call() returns (in the
>>> finally block, just before notifyAll()).
>>>
>>> Otherwise a nice solution - with minimal state, provided that not many
>>> threads meet at the same time...
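>>>
>>> Something like this, roughly (just a sketch of your method with those two
>>> changes; the rest of the class stays as it is):
>>>
>>> public synchronized Object execCriticalSectionInOrder(
>>>         int order, Callable<Object> callable) throws Exception {
>>>     if (order >= maxLength) {
>>>         throw new Exception("Exceeds maximum order " + maxLength);
>>>     }
>>>     // the loop guards against spurious wake-ups
>>>     while (order != currentAllowedOrder) {
>>>         wait();
>>>     }
>>>     try {
>>>         return callable.call();
>>>     } finally {
>>>         // advance the turn only after the critical section has run
>>>         currentAllowedOrder = currentAllowedOrder + 1;
>>>         notifyAll();  // wake all waiters; only the next order proceeds
>>>     }
>>> }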
>>>
>>> Regards, Peter
>>>
>>> Regards
>>> Suman
>>>
>>> *From:* Oleksandr Otenko <oleksandr.otenko at oracle.com>
>>> *To:* suman shil <suman_krec at yahoo.com>; Concurrency-interest
>>> <concurrency-interest at cs.oswego.edu>
>>> *Sent:* Wednesday, December 17, 2014 9:55 PM
>>> *Subject:* Re: [concurrency-interest] Enforcing ordered execution of
>>> critical sections?
>>>
>>> There is no guarantee you'll ever hand over control to the right thread
>>> upon notify().
>>>
>>> Alex
>>>
>>> On 17/12/2014 14:07, suman shil wrote:
>>>
>>> Hi, following is my solution to this problem. Please let me know if I am
>>> missing something.
>>>
>>> public class OrderedExecutor {
>>>
>>>     private int currentAllowedOrder = 0;
>>>     private int maxLength = 0;
>>>
>>>     public OrderedExecutor(int n) {
>>>         this.maxLength = n;
>>>     }
>>>
>>>     public synchronized Object execCriticalSectionInOrder(
>>>             int order, Callable<Object> callable) throws Exception {
>>>         if (order >= maxLength) {
>>>             throw new Exception("Exceeds maximum order " + maxLength);
>>>         }
>>>         while (order != currentAllowedOrder) {
>>>             wait();
>>>         }
>>>         try {
>>>             currentAllowedOrder = currentAllowedOrder + 1;
>>>             return callable.call();
>>>         } finally {
>>>             notify();
>>>         }
>>>     }
>>> }
>>>
>>> Regards
>>> Suman
>>> *From:* Peter Levart <peter.levart at gmail.com>
>>> *To:* Hanson Char <hanson.char at gmail.com>
>>> *Cc:* concurrency-interest <concurrency-interest at cs.oswego.edu>
>>> *Sent:* Sunday, December 14, 2014 11:01 PM
>>> *Subject:* Re: [concurrency-interest] Enforcing ordered execution of
>>> critical sections?
>>> On 12/14/2014 06:11 PM, Hanson Char wrote:
>>>
>>> Hi Peter,
>>>
>>> Thanks for this proposed idea of using LockSupport. This raises the
>>> question: which one would you choose if you had all three (correct)
>>> implementations available: Semaphore, CountDownLatch, or LockSupport?
>>>
>>> Regards, Hanson
>>>
>>> The Semaphore/CountDownLatch variants are equivalent if you don't need
>>> re-use, so either would do. They lack invalid-use detection: what happens
>>> if they are not used as intended? The Semaphore variant acts differently
>>> than the CountDownLatch variant. The low-level variant I proposed detects
>>> invalid usage, so I would probably use that one. But the low-level variant
>>> is harder to reason about; I think it is correct, but you should show it
>>> to somebody else to confirm this.
>>>
>>> Another question is whether you actually need this kind of synchronizer.
>>> Maybe if you explained what you are trying to achieve, somebody could have
>>> an idea how to do that even more elegantly...
>>>
>>> Regards, Peter
>>> On Sun, Dec 14, 2014 at 9:01 AM, Peter Levart <peter.levart at gmail.com> wrote:
>>>
>>> Hi Hanson,
>>>
>>> This one is more low-level, but catches some invalid usages and is
>>> more resource-friendly:
>>> public class OrderedExecutor {
>>>
>>>     public <T> T execCriticalSectionInOrder(
>>>             final int order,
>>>             final Supplier<T> criticalSection
>>>     ) throws InterruptedException {
>>>         if (order < 0) {
>>>             throw new IllegalArgumentException("'order' should be >= 0");
>>>         }
>>>         if (order > 0) {
>>>             waitForDone(order - 1);
>>>         }
>>>         try {
>>>             return criticalSection.get();
>>>         } finally {
>>>             notifyDone(order);
>>>         }
>>>     }
>>>
>>>     private static final Object DONE = new Object();
>>>     private final ConcurrentMap<Integer, Object> signals = new ConcurrentHashMap<>();
>>>
>>>     private void waitForDone(int order) throws InterruptedException {
>>>         Object sig = signals.putIfAbsent(order, Thread.currentThread());
>>>         if (sig != null && sig != DONE) {
>>>             throw new IllegalStateException();
>>>         }
>>>         while (sig != DONE) {
>>>             LockSupport.park();
>>>             if (Thread.interrupted()) {
>>>                 throw new InterruptedException();
>>>             }
>>>             sig = signals.get(order);
>>>         }
>>>     }
>>>
>>>     private void notifyDone(int order) {
>>>         Object sig = signals.putIfAbsent(order, DONE);
>>>         if (sig instanceof Thread) {
>>>             if (!signals.replace(order, sig, DONE)) {
>>>                 throw new IllegalStateException();
>>>             }
>>>             LockSupport.unpark((Thread) sig);
>>>         } else if (sig != null) {
>>>             throw new IllegalStateException();
>>>         }
>>>     }
>>> }
>>>
>>> Regards, Peter
>>> On 12/14/2014 05:08 PM, Peter Levart wrote:
>>>
>>> On 12/14/2014 04:20 PM, Hanson Char wrote:
>>>
>>> Hi Peter,
>>>
>>> Thanks for the suggestion, and sorry about not being clear about one
>>> important detail: "n" is not known a priori when constructing an
>>> OrderedExecutor. Does this mean the use of CountDownLatch is ruled out?
>>>
>>> If you know at least an upper bound for 'n', it can be used with that 'n'.
>>> Otherwise something that dynamically re-sizes the array could be devised.
>>> Or you could simply use a ConcurrentHashMap instead of an array, where the
>>> keys are 'order' values:
>>> public class OrderedExecutor<T> {
>>>
>>>     private final ConcurrentMap<Integer, CountDownLatch> latches =
>>>             new ConcurrentHashMap<>();
>>>
>>>     public T execCriticalSectionInOrder(final int order,
>>>                                         final Supplier<T> criticalSection)
>>>             throws InterruptedException {
>>>         if (order > 0) {
>>>             latches.computeIfAbsent(order - 1, o -> new CountDownLatch(1))
>>>                    .await();
>>>         }
>>>         try {
>>>             return criticalSection.get();
>>>         } finally {
>>>             latches.computeIfAbsent(order, o -> new CountDownLatch(1))
>>>                    .countDown();
>>>         }
>>>     }
>>> }
>>>
>>> Regards, Peter
>>> You guessed right: it's a one-shot object for a particular OrderedExecutor
>>> instance, and the method will indeed be called at most once per "order"
>>> value.
>>>
>>> Regards, Hanson
>>>
>>> On Sun, Dec 14, 2014 at 2:21 AM, Peter Levart <peter.levart at gmail.com> wrote:
>>> Hi Hanson,
>>>
>>> I don't think anything like that readily exists in java.util.concurrent,
>>> but what you describe should be possible to achieve with a composition of
>>> existing primitives. You haven't given any additional hints about how your
>>> OrderedExecutor should behave. Should it be a one-shot object (like
>>> CountDownLatch) or a re-usable one (like CyclicBarrier)? Will
>>> execCriticalSectionInOrder() for a particular OrderedExecutor instance and
>>> 'order' value be called at most once? If yes (and I think that only a
>>> one-shot object makes sense here), an array of CountDownLatch(es) could be
>>> used:
>>> public class OrderedExecutor<T> {
>>>
>>>     private final CountDownLatch[] latches;
>>>
>>>     public OrderedExecutor(int n) {
>>>         if (n < 1) throw new IllegalArgumentException("'n' should be >= 1");
>>>         latches = new CountDownLatch[n - 1];
>>>         for (int i = 0; i < latches.length; i++) {
>>>             latches[i] = new CountDownLatch(1);
>>>         }
>>>     }
>>>
>>>     public T execCriticalSectionInOrder(final int order,
>>>                                         final Supplier<T> criticalSection)
>>>             throws InterruptedException {
>>>         if (order < 0 || order > latches.length)
>>>             throw new IllegalArgumentException(
>>>                 "'order' should be [0..." + latches.length + "]");
>>>         if (order > 0) {
>>>             latches[order - 1].await();
>>>         }
>>>         try {
>>>             return criticalSection.get();
>>>         } finally {
>>>             if (order < latches.length) {
>>>                 latches[order].countDown();
>>>             }
>>>         }
>>>     }
>>> }
>>>
>>> Regards, Peter
>>> On 12/14/2014 05:26 AM, Hanson Char wrote:
>>>
>>> Hi, I am looking for a construct that can be used to efficiently enforce
>>> ordered execution of multiple critical sections, each calling from a
>>> different thread. The calling threads may run in parallel and may call the
>>> execution method out of order. The perceived construct would therefore be
>>> responsible for re-ordering the execution of those threads, so that their
>>> critical sections (and only the critical section) will be executed in
>>> order. Would something like the following API already exist?
>>>
>>> /**
>>>  * Used to enforce ordered execution of critical sections calling from
>>>  * multiple threads, parking and unparking the threads as necessary.
>>>  */
>>> public class OrderedExecutor<T> {
>>>     /**
>>>      * Executes a critical section at most once with the given order,
>>>      * parking and unparking the current thread as necessary so that all
>>>      * critical sections executed by different threads using this executor
>>>      * take place in the order from 1 to n consecutively.
>>>      */
>>>     public T execCriticalSectionInOrder(
>>>         final int order, final Callable<T> criticalSection)
>>>         throws InterruptedException;
>>> }
>>>
>>> Regards, Hanson
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>
>

