[concurrency-interest] Enforcing ordered execution of critical sections?

DT dt at flyingtroika.com
Mon Dec 29 22:34:22 EST 2014


Every time ordering is enforced in a concurrent problem, special 
attention has to be paid to the delays it imposes on threads. I think 
that's the case here, but I might be wrong.
If multiple threads over the same shared memory need to execute 
concurrently but in order, and a sequence number is used to overcome 
ordering and delivery issues the way the TCP/IP stack does, then time 
delays are necessarily introduced. For some systems delays within 1 
second are fine; for others they are not acceptable, and the solution 
gets over-complicated (reordering, buffer size limits, 
reinitialization).
In my opinion, existing primitives such as Semaphore or CountDownLatch 
will not provide an optimal solution. All of them are built on the 
same synchronizer design (AbstractQueuedSynchronizer plus different 
policies: fair, unfair, etc.). I have also not seen 
AbstractQueuedSynchronizer used broadly outside of j.u.c. itself.
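
For comparison, here is a minimal sketch of this kind of ordering 
built only from standard primitives (a ReentrantLock, a Condition and 
a ticket counter; the names are hypothetical). Note that every 
completion wakes all waiters just so they can re-check their ticket, 
which is the cost being discussed in this thread:

    import java.util.concurrent.Callable;
    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    // Sketch only: critical sections run one at a time, in ticket order.
    class TicketOrderedExecutor {
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition turn = lock.newCondition();
        private int next = 0;                   // ticket allowed to run next

        <T> T runInOrder(int ticket, Callable<T> section) throws Exception {
            lock.lock();
            try {
                while (next != ticket) {
                    turn.await();               // woken on every completion; re-checks its ticket
                }
            } finally {
                lock.unlock();
            }
            try {
                return section.call();          // only the matching ticket gets this far
            } finally {
                lock.lock();
                try {
                    next++;                     // hand over to the next ticket
                    turn.signalAll();           // broadcast; non-matching tickets sleep again
                } finally {
                    lock.unlock();
                }
            }
        }
    }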

However, another option for avoiding those delays would be an 
event-based approach. I am not sure why event-style executors have 
not been introduced into the j.u.c. package as some sort of 
concurrency channels. I lean towards the event style of design simply 
because it feels more natural. To speculate a little: in nature all 
processes are continuous, there are no discrete processes; the 
ordered concurrent execution of continuous processes is limited only 
by energy, whereas discrete processes are bounded by design 
limitations.
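
To make the event-based idea concrete, here is a minimal sketch 
(names are hypothetical) in which callers never block: tasks carrying 
a sequence number are posted to a single worker thread, which holds 
early arrivals back and runs them strictly in order:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Sketch only: an event-style ordered executor. Submitters hand off
    // and return immediately; the single worker thread owns all state,
    // so no further locking is needed.
    class EventOrderedExecutor {
        private final ExecutorService worker = Executors.newSingleThreadExecutor();
        private final Map<Integer, Runnable> pending = new HashMap<>(); // worker-only
        private int next = 0;                                           // worker-only

        void submit(int order, Runnable task) {
            worker.execute(() -> {
                pending.put(order, task);
                Runnable ready;
                while ((ready = pending.remove(next)) != null) {        // drain contiguous prefix
                    ready.run();
                    next++;
                }
            });
        }

        void shutdown() {
            worker.shutdown();
        }
    }
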
I would use an OrderedExecutor implementation only for a system that 
requires deterministic behavior. But even in that case I would try to 
come up with a new synchronization primitive that does not depend on 
deterministic behavior.

Thanks,
Dmitry

On 12/29/2014 4:58 PM, Oleksandr Otenko wrote:
> ACK/NAK? I only meant that data packets arrive concurrently, but get 
> consumed in order.
>
> You could see NAK as false returned from tryAcquire by the producer, 
> but that is not the point. In the original thesis the sequencer is 
> used to control access to the receive buffer.
>
>
> Alex
>
>
> On 22/12/2014 22:21, DT wrote:
>> I think that if the sequencer is used, both ACK and NAK types of 
>> responses would be needed and the sequence would have to be 
>> restarted. Restarting is ok, but dealing with both ACK and NAK could 
>> bring extra complexity under contention.
>>
>> Thanks,
>> Dmitry
>>
>> Sent from my iPhone
>>
>> On Dec 18, 2014, at 1:02 PM, Oleksandr Otenko 
>> <oleksandr.otenko at oracle.com> wrote:
>>
>>> Yes, no one said it is a good idea to always do that. When it is 
>>> contended, most of the threads will wake up to only go back to sleep.
>>>
>>> The pattern you are after is usually called a sequencer. You can see 
>>> it used in TCP. I am not sure why it wasn't implemented in j.u.c. - 
>>> maybe it is not that popular.
>>>
>>> The best solution will be lock-like, but the waiter nodes will 
>>> contain the value they are waiting for - so only the specific 
>>> threads get woken up. The solution with concurrent map is very 
>>> similar, only with larger overhead from storing the index the thread 
>>> is waiting for.
>>>
>>>
>>> Alex
>>>
>>>
>>> On 18/12/2014 20:21, Hanson Char wrote:
>>>> Less overhead and simplicity are nice properties, even at the 
>>>> expense of having to wake up all waiting threads just to find 
>>>> the one with the right order to execute.  Still, this seems like a 
>>>> good tradeoff.
>>>>
>>>> Thanks,
>>>> Hanson
>>>>
>>>> On Wed, Dec 17, 2014 at 11:43 PM, Peter Levart 
>>>> <peter.levart at gmail.com> wrote:
>>>>
>>>>     On 12/17/2014 08:15 PM, Oleksandr Otenko wrote:
>>>>
>>>>         No, there is no difference. Peter didn't spot that your entire
>>>>         method is synchronized, so a spurious wakeup won't make
>>>>         progress until the owner of the lock exits the method.
>>>>
>>>>         You could split the synchronization into two blocks - one
>>>>         encompassing the wait loop, the other in the finally block;
>>>>         but it may make no difference.
>>>>
>>>>         Alex
>>>>
>>>>
>>>>     You're right, Alex. I'm so infected with park/unpark virus that
>>>>     I missed that ;-)
>>>>
>>>>     Peter
>>>>
>>>>
>>>>         On 17/12/2014 18:36, suman shil wrote:
>>>>
>>>>             Thanks peter for your reply. You are right. I should
>>>>             have incremented currentAllowedOrder in finally block.
>>>>
>>>>             Suman
>>>>             ------------------------------------------------------------------------
>>>>             *From:* Peter Levart <peter.levart at gmail.com>
>>>>             *To:* suman shil <suman_krec at yahoo.com>; Oleksandr Otenko
>>>>             <oleksandr.otenko at oracle.com>; Concurrency-interest
>>>>             <concurrency-interest at cs.oswego.edu>
>>>>             *Sent:* Wednesday, December 17, 2014 11:54 PM
>>>>             *Subject:* Re: [concurrency-interest] Enforcing ordered
>>>>             execution of critical sections?
>>>>
>>>>             On 12/17/2014 06:46 PM, suman shil wrote:
>>>>
>>>>                 Thanks for your response. Will notifyAll() instead
>>>>                 of notify() solve the problem?
>>>>
>>>>
>>>>             It will, but you should also account for "spurious"
>>>>             wake-ups. You should increment currentAllowedOrder only
>>>>             after return from callable.call (in finally block just
>>>>             before notifyAll()).
>>>>
>>>>             Otherwise a nice solution - with minimal state,
>>>>             providing that not many threads meet at the same time...
>>>>
>>>>             Regards, Peter
>>>>
>>>>                 Regards,
>>>>                 Suman
>>>>
>>>>                 From: Oleksandr Otenko <oleksandr.otenko at oracle.com>
>>>>                 To: suman shil <suman_krec at yahoo.com>;
>>>>                 Concurrency-interest <concurrency-interest at cs.oswego.edu>
>>>>                 Sent: Wednesday, December 17, 2014 9:55 PM
>>>>                 Subject: Re: [concurrency-interest] Enforcing ordered
>>>>                 execution of critical sections?
>>>>
>>>>                 There is no guarantee you'll ever hand over the control
>>>>                 to the right thread upon notify()
>>>>
>>>>                 Alex
>>>>
>>>>                 On 17/12/2014 14:07, suman shil wrote:
>>>>                       Hi, Following is my solution to solve this
>>>>                 problem. Please let me know if I am missing something.
>>>>                 public class OrderedExecutor {
>>>>                     private int currentAllowedOrder = 0;
>>>>                     private int maxLength = 0;
>>>>
>>>>                     public OrderedExecutor(int n) {
>>>>                         this.maxLength = n;
>>>>                     }
>>>>
>>>>                     public synchronized Object execCriticalSectionInOrder(
>>>>                             int order, Callable<Object> callable) throws Exception {
>>>>                         if (order >= maxLength) {
>>>>                             throw new Exception("Exceeds maximum order " + maxLength);
>>>>                         }
>>>>                         while (order != currentAllowedOrder) {
>>>>                             wait();
>>>>                         }
>>>>                         try {
>>>>                             currentAllowedOrder = currentAllowedOrder + 1;
>>>>                             return callable.call();
>>>>                         } finally {
>>>>                             notify();
>>>>                         }
>>>>                     }
>>>>                 }
>>>>                 Regards,
>>>>                 Suman
>>>>
>>>>                 From: Peter Levart <peter.levart at gmail.com>
>>>>                 To: Hanson Char <hanson.char at gmail.com>
>>>>                 Cc: concurrency-interest <concurrency-interest at cs.oswego.edu>
>>>>                 Sent: Sunday, December 14, 2014 11:01 PM
>>>>                 Subject: Re: [concurrency-interest] Enforcing ordered
>>>>                 execution of critical sections?
>>>>                           On 12/14/2014 06:11 PM, Hanson Char wrote:
>>>>                      Hi Peter,
>>>>                    Thanks for this proposed idea of using
>>>>                 LockSupport. This begs the question: which one
>>>>                 would you choose if you had all three (correct)
>>>>                 implementations available? (Semaphore,
>>>>                 CountDownLatch, or LockSupport)?
>>>>                    Regards, Hanson
>>>>                     The Semaphore/CountDownLatch variants are
>>>>                 equivalent if you don't need re-use. So any would
>>>>                 do. They lack invalid-use detection. What happens
>>>>                 if they are not used as intended? Semaphore variant
>>>>                 acts differently than CountDownLatch variant. The
>>>>                 low-level variant I  proposed detects invalid
>>>>                 usage. So I would probably use this one. But the
>>>>                 low-level variant is harder to reason about in terms of
>>>>                 correctness. I think it is correct, but you should
>>>>                 show it to somebody else to confirm this.
>>>>                     Another question is whether you actually need
>>>>                 this kind of synchronizer. Maybe if you explained
>>>>                 what you are trying to achieve, somebody could have
>>>>                 an idea how to do that even more elegantly...
>>>>                     Regards, Peter
>>>>                              On Sun, Dec 14, 2014 at 9:01 AM, Peter
>>>>                 Levart <peter.levart at gmail.com> wrote:
>>>>                    Hi Hanson,
>>>>                     This one is more low-level, but catches some
>>>>                 invalid usages and is more resource-friendly:
>>>>                 public class OrderedExecutor {
>>>>
>>>>                     public <T> T execCriticalSectionInOrder(
>>>>                         final int order,
>>>>                         final Supplier<T> criticalSection
>>>>                     ) throws InterruptedException {
>>>>                         if (order < 0) {
>>>>                             throw new IllegalArgumentException("'order' should be >= 0");
>>>>                         }
>>>>                         if (order > 0) {
>>>>                             waitForDone(order - 1);
>>>>                         }
>>>>                         try {
>>>>                             return criticalSection.get();
>>>>                         } finally {
>>>>                             notifyDone(order);
>>>>                         }
>>>>                     }
>>>>
>>>>                     private static final Object DONE = new Object();
>>>>                     private final ConcurrentMap<Integer, Object> signals = new ConcurrentHashMap<>();
>>>>
>>>>                     private void waitForDone(int order) throws InterruptedException {
>>>>                         Object sig = signals.putIfAbsent(order, Thread.currentThread());
>>>>                         if (sig != null && sig != DONE) {
>>>>                             throw new IllegalStateException();
>>>>                         }
>>>>                         while (sig != DONE) {
>>>>                             LockSupport.park();
>>>>                             if (Thread.interrupted()) {
>>>>                                 throw new InterruptedException();
>>>>                             }
>>>>                             sig = signals.get(order);
>>>>                         }
>>>>                     }
>>>>
>>>>                     private void notifyDone(int order) {
>>>>                         Object sig = signals.putIfAbsent(order, DONE);
>>>>                         if (sig instanceof Thread) {
>>>>                             if (!signals.replace(order, sig, DONE)) {
>>>>                                 throw new IllegalStateException();
>>>>                             }
>>>>                             LockSupport.unpark((Thread) sig);
>>>>                         } else if (sig != null) {
>>>>                             throw new IllegalStateException();
>>>>                         }
>>>>                     }
>>>>                 }
>>>>                       Regards, Peter
>>>>                     On 12/14/2014 05:08 PM, Peter Levart wrote:
>>>>                        On 12/14/2014 04:20 PM, Hanson Char wrote:
>>>>                      Hi Peter,
>>>>                    Thanks for the suggestion, and sorry about not
>>>>                 being clear about one important detail: "n" is not
>>>>                 known a priori when constructing an
>>>>                 OrderedExecutor.  Does this mean the use of
>>>>                 CountDownLatch is ruled out?
>>>>                     If you know at least the upper bound of 'n', it
>>>>                 can be used with such 'n'. Otherwise something that
>>>>                 dynamically re-sizes the array could be devised. Or
>>>>                 you could simply use a ConcurrentHashMap instead of
>>>>                 array where keys are 'order' values:
>>>>                 public class OrderedExecutor<T> {
>>>>
>>>>                     private final ConcurrentMap<Integer, CountDownLatch> latches =
>>>>                         new ConcurrentHashMap<>();
>>>>
>>>>                     public T execCriticalSectionInOrder(final int order,
>>>>                         final Supplier<T> criticalSection) throws InterruptedException {
>>>>                         if (order > 0) {
>>>>                             latches.computeIfAbsent(order - 1,
>>>>                                 o -> new CountDownLatch(1)).await();
>>>>                         }
>>>>                         try {
>>>>                             return criticalSection.get();
>>>>                         } finally {
>>>>                             latches.computeIfAbsent(order,
>>>>                                 o -> new CountDownLatch(1)).countDown();
>>>>                         }
>>>>                     }
>>>>                 }
>>>>                       Regards, Peter
>>>>                           You guessed right: it's a one-shot object
>>>>                 for a particular OrderedExecutor  instance, and
>>>>                 "order" must be called indeed at most once.
>>>>                    Regards, Hanson
>>>>                   On Sun, Dec 14, 2014 at 2:21 AM, Peter
>>>>                 Levart <peter.levart at gmail.com> wrote:
>>>>                    Hi Hanson,
>>>>                     I don't think anything like that readily
>>>>                 exists in java.util.concurrent, but what you
>>>>                 describe should be possible to  achieve with
>>>>                 composition of existing primitives. You haven't
>>>>                 given any additional hints to what your
>>>>                 OrderedExecutor  should behave like. Should it be a
>>>>                 one-shot object (like CountDownLatch) or a
>>>>                 re-usable one (like CyclicBarrier)? Will
>>>>                 execCriticalSectionInOrder() for a particular
>>>>                 OrderedExecutor instance and 'order' value be
>>>>                 called at most once? If yes (and I think that only
>>>>                 a one-shot object makes sense here), an array of
>>>>                 CountDownLatch(es) could be used:
>>>>                 public class OrderedExecutor<T> {
>>>>
>>>>                     private final CountDownLatch[] latches;
>>>>
>>>>                     public OrderedExecutor(int n) {
>>>>                         if (n < 1) throw new IllegalArgumentException("'n' should be >= 1");
>>>>                         latches = new CountDownLatch[n - 1];
>>>>                         for (int i = 0; i < latches.length; i++) {
>>>>                             latches[i] = new CountDownLatch(1);
>>>>                         }
>>>>                     }
>>>>
>>>>                     public T execCriticalSectionInOrder(final int order,
>>>>                         final Supplier<T> criticalSection) throws InterruptedException {
>>>>                         if (order < 0 || order > latches.length)
>>>>                             throw new IllegalArgumentException(
>>>>                                 "'order' should be [0..." + latches.length + "]");
>>>>                         if (order > 0) {
>>>>                             latches[order - 1].await();
>>>>                         }
>>>>                         try {
>>>>                             return criticalSection.get();
>>>>                         } finally {
>>>>                             if (order < latches.length) {
>>>>                                 latches[order].countDown();
>>>>                             }
>>>>                         }
>>>>                     }
>>>>                 }
>>>>                       Regards, Peter
>>>>                     On 12/14/2014 05:26 AM, Hanson Char wrote:
>>>>                 Hi,
>>>>
>>>>                 I am looking for a construct that can be used to
>>>>                 efficiently enforce ordered execution of multiple critical
>>>>                 sections, each calling from a different thread. The calling
>>>>                 threads may run in parallel and may call the execution
>>>>                 method out of order. The perceived construct would
>>>>                 therefore be responsible for re-ordering the execution of
>>>>                 those threads, so that their critical sections (and only
>>>>                 the critical section) will be executed in order. Would
>>>>                 something like the following API already exist?
>>>>
>>>>                 /**
>>>>                  * Used to enforce ordered execution of critical sections
>>>>                  * calling from multiple threads, parking and unparking the
>>>>                  * threads as necessary.
>>>>                  */
>>>>                 public class OrderedExecutor<T> {
>>>>                     /**
>>>>                      * Executes a critical section at most once with the
>>>>                      * given order, parking and unparking the current thread
>>>>                      * as necessary so that all critical sections executed
>>>>                      * by different threads using this executor take place
>>>>                      * in the order from 1 to n consecutively.
>>>>                      */
>>>>                     public T execCriticalSectionInOrder(
>>>>                         final int order,
>>>>                         final Callable<T> criticalSection) throws InterruptedException;
>>>>                 }
>>>>
>>>>                 Regards,
>>>>                 Hanson
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>
