[concurrency-interest] A new (?) concurrency primitive: WriterReaderPhaser

Pedro Ramalhete pramalhe at gmail.com
Wed Jan 14 07:17:27 EST 2015

Hi Alex,

Just to make sure there are no misunderstandings: the Left-Right paper
_has_ a proof of correctness for the Classic algorithm. This means that
developers and researchers can use the Classic variant as a building block
for other synchronization mechanisms, with a guarantee of its correctness.


On Tue, Jan 13, 2015 at 11:01 PM, Oleksandr Otenko <
oleksandr.otenko at oracle.com> wrote:

> I guess the important thing in "looking under the hood" is to find a
> higher-order construct that things correspond to.
> Wouldn't it be cool to have a higher-order concurrency primitive, which
> represents a particular synchronization protocol - a particular graph of
> synchronizes-with - which one would only need to populate with
> program-order bits.
> It doesn't mean it's what "the users" would get. But even for the
> developer of "lower-order" primitives it would be helpful not having to
> prove the correctness.
> Alex
> On 25/11/2014 21:38, Gil Tene wrote:
>> Peter, Pedro,
>> I could draw a similar equivalence between a ReadWriteLock and a counting
>> Semaphore. After all, you can implement the ReadWriteLock-equivalent
>> "pattern" directly using the APIs of a counting semaphore that supports a
>> P(n) operation (e.g. Java's Semaphore). I could draw a translation table
>> for that too. So maybe that means ReadWriteLock is not a separate
>> synchronization primitive? After all, if we can map the calls in some way,
>> it is simply equivalent to and a variant of a semaphore, isn't it?
>> The reason that's not the case is that ReadWriteLock describes expected
>> use assumptions and provides guarantees that are met when those use
>> assumptions are followed, and those are useful for users that do not want
>> to reason about generic semaphores when what they need is a ReadWriteLock.
>> It is also not the case because ReadWriteLock can be implemented using
>> something other than a semaphore, and still meet its required behaviors.
>> The same is true for WriterReaderPhaser and Left-Right. While
>> WriterReaderPhaser *could* be implemented using parts of Left-Right, it
>> does not present to its user as Left-Right any more than it presents as a
>> common phased epoch pair (which would be simpler). It describes expected
>> use assumptions and provides guarantees that are met when those use
>> assumptions are followed. And those are useful for users that do not want
>> to reason about generic phased epochs or inside-out Left-Right parts.
>> Furthermore, using Left-Right under the hood is not the preferred
>> implementation of WriterReaderPhaser - it doesn't need the extra complexity
>> and levels of internal abstraction. It certainly doesn't need the
>> internally tracked data structure or the required double-write rules that
>> come with the prescribed use of Left-Right.
>> The equivalence that you try to draw between Left-Right and
>> WriterReaderPhaser is based on looking under the hood of specific
>> implementations and trying to deduce (unstated) guarantees from each, and
>> new valid ways of using the construct that contradict its declared and
>> prescribed use (but may still be "ok" if you know what the implementation
>> actually does). A user that tries to do that may as well make direct use of
>> epoch counters and skip the higher level abstractions. Mapping the calls
>> with a translation table does not map the guarantees and use rules. It also
>> doesn't erase the "now we are doing something the instructions tell us to
>> never do" problem either. You'd have to list a new set of expected use
>> assumptions and guarantees provided by the calls (when only those new use
>> assumptions are followed) to make the mapping useful. And by the time
>> you've added that as an alternative valid use of Left-Right under certain
>> rules (which would contradict its currently stated use rules, like the
>> requirement to write the same data twice on both sides of a toggle), you'll
>> probably find that you've documented a new and orthogonal use case that is
>> basically WriterReaderPhaser.
>> For an example of how far you have to stretch to isolate Left-Right's
>> desired parts from its required use rules, take a look at how convoluted
>> the lambda expression (in your getCounters() example, the one that is
>> passed into LeftRight.modify) ends up being, just so that it can break the
>> rules "just right". The lambda-friendly Left-Right API clearly expects you
>> to perform the same modification operation twice (once on each of the left
>> and right data structures). That's what the lambda expression is for: to
>> avoid having you actually write the modification code twice. But your
>> lambda expression carefully checks to see which side it was asked to work
>> on, and performs a very different operation on each of the "sides". That's
>> quite clever, but is exactly what Left-Right tells you not to do. It works
>> because you've followed the code down to the bottom, and you know how it's
>> being used, and in what order. This extra logic also shows you why thinking
>> of this as "Left-Right protecting the active pointer" doesn't work in your
>> example. Had that been the use case, you would have been able to validly
>> update the "active pointer" to the same value in both calls into the lambda
>> expression (which would obviously break the required WriterReaderPhaser
>> behavior).
>> Rather than go for such a convoluted application of Left-Right, below is a
>> simpler way to implement the same double buffered counts thing. This one is
>> neither Left-Right nor WriterReaderPhaser. It's just direct use of phased
>> epochs with no other primitives involved (unless someone wants to claim
>> that any use of phased epochs is a "variant" of Left-Right, which would be
>> amusing). This direct use of epochs works. It is correct. I've used this or
>> variants of it many times myself. But it requires the author to reason
>> about why this stuff actually works each time, and to carefully examine and
>> protect their logic against concurrency effects on the epochs. It is
>> missing a basic primitive that would provide well stated guarantees and
>> would save the work (and bugs) involved in reasoning through this each
>> time. WriterReaderPhaser captures the API and associated behavior
>> expectations and guarantees. That's what the primitive is all about.
>> -------------------------------------------
>> Direct use of phased epochs:
>> public class DoubleBufferedCountsUsingEpochs {
>>      private AtomicLong startEpoch = new AtomicLong(0);
>>      private AtomicLong evenEndEpoch = new AtomicLong(0);
>>      private AtomicLong oddEndEpoch = new AtomicLong(Long.MIN_VALUE);
>>      private long oddCounts[];
>>      private long evenCounts[];
>>      private final long accumulatedCounts[];
>>
>>      public DoubleBufferedCountsUsingEpochs(int size) {
>>          oddCounts = new long[size];
>>          evenCounts = new long[size];
>>          accumulatedCounts = new long[size];
>>      }
>>
>>      public void incrementCount(int iTh) {
>>          boolean phaseIsOdd = (startEpoch.getAndIncrement() < 0);
>>          if (phaseIsOdd) {
>>              oddCounts[iTh]++;
>>              oddEndEpoch.getAndIncrement();
>>          } else {
>>              evenCounts[iTh]++;
>>              evenEndEpoch.getAndIncrement();
>>          }
>>      }
>>
>>      public synchronized long[] getCounts() {
>>          long sourceArray[];
>>          long startValueAtFlip;
>>          // Clear the currently unused [next] phase end epoch and set a new startEpoch value:
>>          boolean nextPhaseIsEven = (startEpoch.get() < 0); // Current phase is odd...
>>          if (nextPhaseIsEven) {
>>              evenEndEpoch.set(0);
>>              startValueAtFlip = startEpoch.getAndSet(0);
>>              sourceArray = oddCounts;
>>          } else {
>>              oddEndEpoch.set(Long.MIN_VALUE);
>>              startValueAtFlip = startEpoch.getAndSet(Long.MIN_VALUE);
>>              sourceArray = evenCounts;
>>          }
>>          // Spin until the previous phase's end epoch value catches up with the start value at flip:
>>          while ((nextPhaseIsEven && (oddEndEpoch.get() != startValueAtFlip)) ||
>>                  (!nextPhaseIsEven && (evenEndEpoch.get() != startValueAtFlip))) {
>>              Thread.yield();
>>          }
>>          // sourceArray is stable. Use it:
>>          for (int i = 0; i < sourceArray.length; i++) {
>>              accumulatedCounts[i] += sourceArray[i];
>>              sourceArray[i] = 0;
>>          }
>>          return accumulatedCounts.clone();
>>      }
>> }
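For anyone who wants to actually compile and poke at the epoch-based class above, here is a condensed rendition with a small single-threaded smoke test attached. (A single-threaded test obviously exercises none of the interesting concurrent behavior; it only shows the phase-flip mechanics working end to end.)

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicLong;

class DoubleBufferedCountsUsingEpochs {
    private final AtomicLong startEpoch = new AtomicLong(0);
    private final AtomicLong evenEndEpoch = new AtomicLong(0);
    private final AtomicLong oddEndEpoch = new AtomicLong(Long.MIN_VALUE);
    private final long[] oddCounts, evenCounts, accumulatedCounts;

    DoubleBufferedCountsUsingEpochs(int size) {
        oddCounts = new long[size];
        evenCounts = new long[size];
        accumulatedCounts = new long[size];
    }

    void incrementCount(int iTh) {
        // The sign of startEpoch encodes the current phase; getAndIncrement
        // simultaneously samples the phase and registers the writer:
        boolean phaseIsOdd = (startEpoch.getAndIncrement() < 0);
        if (phaseIsOdd) {
            oddCounts[iTh]++;
            oddEndEpoch.getAndIncrement();
        } else {
            evenCounts[iTh]++;
            evenEndEpoch.getAndIncrement();
        }
    }

    synchronized long[] getCounts() {
        boolean nextPhaseIsEven = (startEpoch.get() < 0); // current phase is odd
        long startValueAtFlip;
        long[] sourceArray;
        if (nextPhaseIsEven) {
            evenEndEpoch.set(0);
            startValueAtFlip = startEpoch.getAndSet(0);
            sourceArray = oddCounts;
        } else {
            oddEndEpoch.set(Long.MIN_VALUE);
            startValueAtFlip = startEpoch.getAndSet(Long.MIN_VALUE);
            sourceArray = evenCounts;
        }
        // Wait until all writers that entered the previous phase have exited:
        AtomicLong prevEndEpoch = nextPhaseIsEven ? oddEndEpoch : evenEndEpoch;
        while (prevEndEpoch.get() != startValueAtFlip) Thread.yield();
        // sourceArray is now stable:
        for (int i = 0; i < sourceArray.length; i++) {
            accumulatedCounts[i] += sourceArray[i];
            sourceArray[i] = 0;
        }
        return accumulatedCounts.clone();
    }

    public static void main(String[] args) {
        DoubleBufferedCountsUsingEpochs c = new DoubleBufferedCountsUsingEpochs(4);
        c.incrementCount(1); c.incrementCount(1); c.incrementCount(3);
        System.out.println(Arrays.toString(c.getCounts())); // [0, 2, 0, 1]
        c.incrementCount(0); // lands in the other (odd) buffer after the flip
        System.out.println(Arrays.toString(c.getCounts())); // [1, 2, 0, 1]
    }
}
```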
>> Yes. You can use Left-Right per your description below (for LRCounters)
>> with the strange rule-breaking lambda expression, and you can use this
>> simple phased epoch approach above (or many other variants). But *do you
>> want to*? Does using it make it easier or harder to reason about? To me
>> (and to most people, I think) the direct use of epochs is much more
>> readable and easier to reason about for the double buffered case than using
>> Left-Right while standing on your head (using it to protect internal
>> fields, where all roles are reversed from their documented meanings). But
>> neither one of them relieves me of the need to figure out and derive the
>> concurrency behavior for myself, leaving me with unneeded effort and lots
>> of potential bug tails. Much like a ReadWriteLock saves unneeded effort,
>> pain, and bugs when compared to direct use of counting semaphores for the
>> same purpose, WriterReaderPhaser saves this effort and pain for double
>> buffered uses looking for wait-free writers.
>> The code below is not only easier to read, it is much easier to *write*,
>> design, and reason about. The author simply follows the recipe spelled out
>> in the WriterReaderPhaser JavaDoc. By carefully following the rules (and
>> without needing to understand why and how) the author knows that the access
>> patterns are guaranteed to be safe.
>> ----------------------------------
>> Direct use of WriterReaderPhaser:
>> public class DoubleBufferedCountsUsingWRP {
>>      WriterReaderPhaser phaser = new WriterReaderPhaser();
>>      private long activeCounts[];
>>      private long inactiveCounts[];
>>      private final long accumulatedCounts[];
>>      public DoubleBufferedCountsUsingWRP(int size) {
>>          activeCounts = new long[size];
>>          inactiveCounts = new long[size];
>>          accumulatedCounts = new long[size];
>>      }
>>      public void incrementCount(int iTh) {
>>          long criticalValue = phaser.writerCriticalSectionEnter();
>>          try {
>>              activeCounts[iTh]++;
>>          } finally {
>>              phaser.writerCriticalSectionExit(criticalValue);
>>          }
>>      }
>>      public synchronized long[] getCounts() {
>>          phaser.readerLock();
>>          try {
>>              // switch active and inactive data structures:
>>              long tmp[] = activeCounts;
>>              activeCounts = inactiveCounts;
>>              inactiveCounts = tmp;
>>              // flip WriterReaderPhaser phase:
>>              phaser.flipPhase();
>>              // use stable (newly inactive) data:
>>              for (int i = 0; i < inactiveCounts.length; i++) {
>>                  accumulatedCounts[i] += inactiveCounts[i];
>>                  inactiveCounts[i] = 0;
>>              }
>>          } finally {
>>              phaser.readerUnlock();
>>          }
>>          return accumulatedCounts.clone();
>>      }
>> }
>>  On Nov 25, 2014, at 8:52 AM, Pedro Ramalhete <pramalhe at gmail.com> wrote:
>>> Yes Peter, it makes absolute sense!
>>> To make it completely clear just "how equivalent" these two methods are,
>>> let me add a "translation table" for the APIs, from WriterReaderPhaser to
>>> Left-Right:
>>>          • writerCriticalSectionEnter -> readIndicator.arrive()
>>>          • writerCriticalSectionExit  -> readIndicator.depart()
>>>          • readerLock                 -> writersMutex.lock()
>>>          • readerUnlock               -> writersMutex.unlock()
>>>          • flipPhase                  -> toggleVersionAndScan()
>>> Thanks,
>>> Pedro
>>> On Tue, Nov 25, 2014 at 5:21 PM, Peter Levart <peter.levart at gmail.com>
>>> wrote:
>>> Hi Gil,
>>> I think Pedro is right in claiming that WriterReaderPhaser is a kind of
>>> Left-Right, but he's wrong in explaining that it is a Left-Right used
>>> backwards. In Left-Right terminology, WriterReaderPhaser is not protecting
>>> the mutable structure (mutated from multiple threads), but just
>>> coordinating access to the "active pointer" to the structure. From the
>>> Left-Right perspective, multiple threads are just readers of the "active
>>> pointer" (what they do with the underlying structure is not relevant here -
>>> the underlying structure has its own synchronization). The single thread
>>> (at a time) that wants to get access to the snapshot is the sole writer
>>> (or "swapper") of the "active pointer". Of course, the sole writer or
>>> "swapper" of the active pointer also exploits the fact that the "inactive
>>> pointer" is not being accessed by any current readers of the "active
>>> pointer", and so the underlying structure is not touched by the "readers".
>>> I agree with all your statements below about WriterReaderPhaser and I can
>>> see how the WriterReaderPhaser API is more suited to the pointer-flipping
>>> scenarios, but WriterReaderPhaser and Left-Right can be used
>>> interchangeably. They are, in a sense, equivalent.
>>> To illustrate, I'll use the lambda-enabled Left-Right API:
>>> http://cr.openjdk.java.net/~plevart/misc/LeftRight/LeftRight.java
>>> ...and try to re-create your example from below:
>>> public class LRCounters {
>>>      static class ArrayRef {
>>>          long[] array;
>>>          ArrayRef(long[] array) {
>>>              this.array = array;
>>>          }
>>>      }
>>>      private final LeftRight<ArrayRef> counts;
>>>      private long intervalCounts[];
>>>      private final long accumulatedCounts[];
>>>      public LRCounters(int size) {
>>>          intervalCounts = new long[size];
>>>          accumulatedCounts = new long[size];
>>>          counts = new LeftRight<>(
>>>              // this is the initially active ArrayRef
>>>              new ArrayRef(new long[size]),
>>>              // and this is the initially inactive one
>>>              new ArrayRef(null)
>>>          );
>>>      }
>>>      public void incrementCount(int iTh) {
>>>          counts.read(iTh, (i, arrayRef) -> {
>>>              long a[] = arrayRef.array; // this is the read operation
>>>              return ++a[i]; // never mind the racy increment (should do it with atomics)
>>>          });
>>>      }
>>>      public long[] getCounts() {
>>>          long[][] result = new long[1][];
>>>          counts.modify((arrayRef) -> {
>>>              if (arrayRef.array == null) {
>>>                  // we've got the previously inactive ArrayRef
>>>                  arrayRef.array = intervalCounts; // this is the 1st write operation
>>>              } else {
>>>                  // we've got the previously active ArrayRef
>>>                  // that has just been deactivated
>>>                  intervalCounts = arrayRef.array;
>>>                  arrayRef.array = null; // this is the "mirror" write operation
>>>                  // add interval counts to accumulatedCounts
>>>                  for (int i = 0; i < intervalCounts.length; i++) {
>>>                      accumulatedCounts[i] += intervalCounts[i];
>>>                      intervalCounts[i] = 0;
>>>                  }
>>>                  // return result
>>>                  result[0] = accumulatedCounts.clone();
>>>              }
>>>          });
>>>          return result[0];
>>>      }
>>> }
>>> Likewise, let's take an example that is more suited to the LeftRight API:
>>> public class LRMap<K, V> {
>>>      private final LeftRight<Map<K, V>> lrMap = new LeftRight<>(new
>>> HashMap<>(), new HashMap<>());
>>>      public V get(K key) {
>>>          return lrMap.read(m -> m.get(key));
>>>      }
>>>      public void put(K key, V value) {
>>>          lrMap.modify(m -> m.put(key, value));
>>>      }
>>> }
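To make Peter's LRMap snippet above runnable without pulling in the real LeftRight class from the cr.openjdk.java.net URL earlier in the thread, here is a deliberately dumbed-down, fully-synchronized stand-in. It reproduces only the API shape and the write-to-inactive / toggle / mirror-write sequence discussed in the thread; it has none of the wait-free-reader machinery, so it says nothing about Left-Right's actual concurrency properties. The class names and method signatures below are assumptions made for illustration only.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Function;

// Hypothetical, fully-synchronized stand-in for the real LeftRight class.
// modify() writes to the inactive instance, toggles the roles, then mirrors
// the same write onto the newly inactive instance -- the double-write
// discipline Left-Right prescribes.
class LeftRight<T> {
    private final T left;
    private final T right;
    private boolean leftActive = true;

    LeftRight(T activeInstance, T inactiveInstance) {
        this.left = activeInstance;
        this.right = inactiveInstance;
    }

    public synchronized <R> R read(Function<T, R> op) {
        return op.apply(leftActive ? left : right); // operate on the active instance
    }

    public synchronized void modify(Consumer<T> op) {
        op.accept(leftActive ? right : left); // 1st write, to the inactive instance
        leftActive = !leftActive;             // toggle active <-> inactive
        op.accept(leftActive ? right : left); // mirror write, to the newly inactive one
    }
}

class LRMap<K, V> {
    private final LeftRight<Map<K, V>> lrMap =
            new LeftRight<>(new HashMap<>(), new HashMap<>());

    public V get(K key) {
        return lrMap.read(m -> m.get(key));
    }

    public void put(K key, V value) {
        lrMap.modify(m -> m.put(key, value));
    }

    public static void main(String[] args) {
        LRMap<String, Integer> m = new LRMap<>();
        m.put("answer", 42);
        System.out.println(m.get("answer")); // 42
    }
}
```

The point of the stand-in is only to make the double-write semantics concrete: after any `put`, both the left and right maps contain the update, so a `get` sees it no matter which instance is active at read time.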
>>> ...and try to implement it using WriterReaderPhaser:
>>> public class WRPMap<K, V> {
>>>      private final WriterReaderPhaser wrp = new WriterReaderPhaser();
>>>      private volatile Map<K, V> activeMap = new HashMap<>();
>>>      private volatile Map<K, V> inactiveMap = new HashMap<>();
>>>      public V get(K key) {
>>>          long stamp = wrp.writerCriticalSectionEnter();
>>>          try {
>>>              return activeMap.get(key);
>>>          } finally {
>>>              wrp.writerCriticalSectionExit(stamp);
>>>          }
>>>      }
>>>      public void put(K key, V value) {
>>>          wrp.readerLock();
>>>          try {
>>>              Map<K, V> m1 = inactiveMap;
>>>              Map<K, V> m2 = activeMap;
>>>              m1.put(key, value); // 1st write to inactive
>>>              // swap active <-> inactive
>>>              activeMap = m1;
>>>              inactiveMap = m2;
>>>              wrp.flipPhase();
>>>              m2.put(key, value); // mirror write to just deactivated
>>>          } finally {
>>>              wrp.readerUnlock();
>>>          }
>>>      }
>>> }
>>> Does this make any sense?
>>> Regards, Peter
>>> On 11/25/2014 08:39 AM, Gil Tene wrote:
>>> Pedro, I think you are confusing specific under-the-hood implementation
>>> choices (which are similar) with what the primitive is. I'm flattered at
>>> your naming of Left-Right GT, but Left-Right GT (and the LRTreeSetGT
>>> example) is a Left-Right variant with a different underlying (attributed to
>>> me) arrive-depart technique. It is not a WriterReaderPhaser.
>>> WriterReaderPhaser captures (in a clean synchronization primitive API
>>> form) a pattern I've had to use and re-invent myself several times, and I'm
>>> pretty sure many others that have faced the "I want to periodically
>>> report/analyze an actively updating data set" have too. The key here is
>>> capturing the guarantees and prescribed use such that end-users can use the
>>> primitive without needing to understand the underlying logic of an
>>> implementation. I do that in my blog entry (and include a definition and
>>> use example below).
>>> A WriterReaderPhaser is neither a ReadWriteLock nor a ReadWriteLock used
>>> backwards. It's also not Left-Right, or Left-Right used backwards. The
>>> qualities and guarantees a WriterReaderPhaser provides are not provided by
>>> reversing the meaning of "writer" and "reader" in those primitives. Even if
>>> you ignore the confusion that such upside-down use may cause the user,
>>> there are specific guarantees that the other primitives do not provide, and
>>> that a write-heavy double-buffered use case needs and gets from this
>>> primitive.
>>> And yes, there are several possible ways to implement a well behaving
>>> WriterReaderPhaser, one of which is listed in the links I provided. We can
>>> implement it with three atomic words and some clever combos of CAS and
>>> GetAndAdd ops, or in other ways. The implementation is not what makes the
>>> primitive what it is to it's users. It's the API and the behavior
>>> guarantees that do that. And I'm pretty sure these behavior guarantees are
>>> not spelled out or provided (even backwards) in Left-Right and variants.
>>> Some of them (like the data stability guarantee for readers even in the
>>> presence of wait-free write activity) would be un-natural to provide in
>>> reverse for writers (since readers are generally not expected to change
>>> stuff).
>>> Left-Right is cool (really cool), but it focuses purely on wait-free
>>> readers and blocking writers. While that use case may appear to be "the
>>> opposite" of wait-free writers with blocking readers, there are specific
>>> non-mirroring qualities that make that duality invalid. Here are specific
>>> differences between the two mechanisms that make "backwards" use
>>> inapplicable:
>>> - WriterReaderPhaser provides specific data stability guarantees to
>>> readers (after a flip while under a readLock), even in the presence of
>>> concurrent writer activity. Left-Right does not provide such a guarantee to
>>> writers "backwards". E.g. if Left-Right readers happened to write into the
>>> Left-Right protected data structure (as they would need to in a "backwards
>>> use" attempt like this), Left-Right says nothing about what writers can
>>> expect from that data structure in terms of data consistency or stability.
>>> Note that I'm not saying that no Left-Right implementation could
>>> accidentally provide this behavior without stating it. I'm saying that the
>>> Left-Right mechanism, as described and documented in the Left-Right paper and
>>> the various APIs for its existing variants, makes no such guarantee to the
>>> caller, and that a valid Left-Right implementation may or may not provide
>>> this behavior. As such, the user cannot rely on it. And this is the main
>>> guarantee a typical WriterReaderPhaser user will be looking for.
>>> - Left-Right specifically prohibits readers from writing. E.g. "...To
>>> access in read-only mode do something like this:..." is stated in the
>>> documentation for the LeftRightGuard variants. In contrast,
>>> WriterReaderPhaser allows its writers (which would be the readers in a
>>> backwards mapping attempt) to, um, write...
>>> - Writers that use Left-Right are required to write their updates
>>> twice (on the two sides of a "writerToggle" or equivalent "flip"). e.g.
>>> "...The exact same operations must be done on the instance before and after
>>> guard.writeToggle()." is stated in the documentation for the LeftRightGuard
>>> variants. In contrast, WriterReaderPhaser does not require reading twice or
>>> writing twice. The two APIs do not mirror each other in this critical
>>> aspect.
>>> - Left-Right (even when used to replace a Reader-Writer lock) manages
>>> the active and inactive data structures internally (leftInstance and
>>> rightInstance, or firstInstance and secondInstance), and users of
>>> Left-Right [must] operate on data structures returned from Left-Right
>>> operations. In contrast, WriterReaderPhaser does not manage the active and
>>> inactive data structures in any way, leaving that job to readers and
>>> writers that operate directly on the shared data structures.
>>> To be specific, let me detail what a WriterReaderPhaser is (taken from
>>> an updated blog entry that now includes a definition):
>>> -----------------------------------------------------------------
>>> Definition of WriterReaderPhaser:
>>> A WriterReaderPhaser provides a means for wait free writers to
>>> coordinate with blocking (but guaranteed forward progress) readers sharing
>>> a set of data structures.
>>> A WriterReaderPhaser instance provides the following 5 operations:
>>>          • writerCriticalSectionEnter
>>>          • writerCriticalSectionExit
>>>          • readerLock
>>>          • readerUnlock
>>>          • flipPhase
>>> When a WriterReaderPhaser instance is used to protect an active [set of
>>> or single] data structure involving [potentially multiple] writers and
>>> [potentially multiple] readers, the assumptions on how readers and writers
>>> act are:
>>>          • There are two sets of data structures (an "active" set and an
>>> "inactive" set)
>>>          • Writing is done to the perceived active version (as perceived
>>> by the writer), and only within critical sections delineated by
>>> writerCriticalSectionEnter and writerCriticalSectionExit operations.
>>>          • Only readers switch the perceived roles of the active and
>>> inactive data structures. They do so only while holding the readerLock, and
>>> only before executing a flipPhase.
>>>          • Readers do not hold onto readerLock indefinitely. Only
>>> readers perform readerLock and readerUnlock.
>>>          • Writers do not remain in their critical sections
>>> indefinitely. Only writers perform writerCriticalSectionEnter and
>>> writerCriticalSectionExit.
>>>          • Only readers perform flipPhase operations, and only while
>>> holding the readerLock.
>>> When the above assumptions are met, WriterReaderPhaser guarantees that
>>> the inactive data structures are not being modified by any writers while
>>> being read under readerLock protection after a flipPhase operation.
>>> The following progress guarantees are provided to writers and readers
>>> that adhere to the above stated assumptions:
>>>          • Writer operations (writerCriticalSectionEnter and
>>> writerCriticalSectionExit) are wait free (on architectures that support
>>> wait-free atomic increment operations).
>>>          • flipPhase operations are guaranteed to make forward progress,
>>> and will only be blocked by writers whose critical sections were entered
>>> prior to the start of the reader's flipPhase operation, and have not yet
>>> exited their critical sections.
>>>          • readerLock only blocks for other readers that are holding the
>>> readerLock.
>>> -----------------------------------------------------------------
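For concreteness, here is a sketch of the "three atomic words" style of implementation Gil alludes to above, following the same phased-epoch logic as the DoubleBufferedCountsUsingEpochs example earlier in the thread. This is an illustration only, not HdrHistogram's actual WriterReaderPhaser (the authoritative implementation is at the GitHub link in Gil's original post); the class name and internal choices here are assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: a WriterReaderPhaser-like construct built from three
// atomic epoch counters plus a plain mutex for the reader lock.
class WriterReaderPhaserSketch {
    private final AtomicLong startEpoch = new AtomicLong(0);
    private final AtomicLong evenEndEpoch = new AtomicLong(0);
    private final AtomicLong oddEndEpoch = new AtomicLong(Long.MIN_VALUE);
    private final ReentrantLock lock = new ReentrantLock();

    // Wait-free: a single getAndIncrement both registers the writer and
    // samples the current phase (encoded in the sign of startEpoch).
    public long writerCriticalSectionEnter() {
        return startEpoch.getAndIncrement();
    }

    // Increment the end epoch matching the phase observed on entry:
    public void writerCriticalSectionExit(long criticalValueAtEnter) {
        (criticalValueAtEnter < 0 ? oddEndEpoch : evenEndEpoch).getAndIncrement();
    }

    public void readerLock() { lock.lock(); }
    public void readerUnlock() { lock.unlock(); }

    // Must be called while holding the reader lock. Returns only once all
    // writers that entered before the flip have exited their critical sections.
    public void flipPhase() {
        boolean nextPhaseIsEven = (startEpoch.get() < 0); // current phase is odd
        long initialStartValue = nextPhaseIsEven ? 0 : Long.MIN_VALUE;
        (nextPhaseIsEven ? evenEndEpoch : oddEndEpoch).set(initialStartValue);
        long startValueAtFlip = startEpoch.getAndSet(initialStartValue);
        AtomicLong prevEndEpoch = nextPhaseIsEven ? oddEndEpoch : evenEndEpoch;
        while (prevEndEpoch.get() != startValueAtFlip) {
            Thread.yield(); // wait for straggling pre-flip writers
        }
    }
}
```

Single-threaded usage (writer enter/exit, then a reader flip) follows exactly the recipe in the definition above; flipPhase returns immediately once every pre-flip writer has exited.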
>>> Example use:
>>> Imagine a simple use case where a large set of rapidly updated counters
>>> is being modified by writers, and a reader needs to gain access to stable
>>> interval samples of those counters for reporting and other analysis
>>> purposes.
>>> The counters are represented in a volatile array of values (it is the
>>> array reference that is volatile, not the value cells within it):
>>> volatile long counts[];
>>> ...
>>> A writer updates a specific count (n) in the set of counters:
>>> writerCriticalSectionEnter
>>>      counts[n]++;
>>> writerCriticalSectionExit
>>> A reader gains access to a stable set of counts collected during an
>>> interval, reports on it, and accumulates it:
>>> long interval_counts[];
>>> long accumulated_counts[];
>>> ...
>>> readerLock
>>>      reset(interval_counts);
>>>      long tmp[] = counts;
>>>      counts = interval_counts;
>>>      interval_counts = tmp;
>>> flipPhase
>>>      report_interval_counts(interval_counts);
>>>      accumulated_counts.add(interval_counts);
>>> readerUnlock
>>> -----------------------------------------------------------------
>>> Bottom line: the above defines what a WriterReaderPhaser primitive
>>> is, and shows a simple example of using it. While I provide an example
>>> implementation, many are possible, and I'm sure another 17 will pop up. To
>>> avoid wrongly taking credit for this as a new primitive, I'm looking to see
>>> if there have been previously described primitives that explicitly provide
>>> these (or equivalent) qualities to their users. "Explicitly" being a key
>>> word (since no sane user would rely on an accidental implicit behavior of a
>>> specific implementation of a primitive that does not actually guarantee the
>>> given behavior).
>>> -- Gil.
>>> On Nov 24, 2014, at 3:28 PM, Pedro Ramalhete wrote:
>>> Hi Peter, If I were you I wouldn't bother with it. As I tried to explain
>>> to Gil, the WriterReaderPhaser uses the same concurrency control algorithm
>>> as the Left-Right, and as such it is a variant of the Left-Right (used
>>> "backwards") that uses a (new) ReadIndicator with a single ingress combined
>>> with versionIndex. This variant is not as good for scalability under high
>>> contention as the one you yourself have implemented some time ago, with the
>>> ReadIndicator of ingress/egress with LongAdders. You're better off using
>>> your own implementation, and just do the mutations in lrSet.read() and the
>>> read-only operation in lrSet.modify(), but of course, feel free to try both
>>> and let us know your results ;)
>>> http://cr.openjdk.java.net/~plevart/misc/LeftRight/EnterExitWait.java
>>> http://cr.openjdk.java.net/~plevart/misc/LeftRight/LongAdderEEW.java
>>> http://cr.openjdk.java.net/~plevart/misc/LeftRight/LRTest.java
>>> http://concurrencyfreaks.com/2014/11/left-right-gt-variant.html Cheers,
>>> Pedro
>>> On Nov 24, 2014, at 3:28 AM, Peter Levart wrote:
>>> Hi Gil,
>>> What a coincidence. I was thinking of writing something like that myself
>>> in past couple of days for a similar purpose. It comes as a gift that you
>>> posted this here, thanks!
>>> My application is an asynchronous cache store implementation. A
>>> distributed cache (Coherence in my case) emits synchronous events when
>>> cache is updated from multiple threads. I want to batch updates and do
>>> asynchronous persistence in a background thread. Coherence already supports
>>> this by itself, but is rather limited in features, so I have to re-create
>>> this functionality and add missing features.
>>> Regards, Peter
>>> On 11/24/2014 06:54 AM, Gil Tene wrote:
>>> Yeah, yeah, I know. A new concurrency primitive? Really? But I think
>>> this may actually be a new, generically useful primitive. Basically, if you
>>> ever needed to analyze or log rapidly mutating data without blocking or
>>> locking out writers, this thing is for you. It supports wait-free writers,
>>> and stable readable data sets for guaranteed-forward-progress readers. And
>>> it makes double buffered data management semi-trivial.
>>> See the blog entry explaining stuff: "WriterReaderPhaser: A story about a
>>> new (?) synchronization primitive"
>>> <http://stuff-gil-says.blogspot.com/2014/11/writerreaderphaser-story-about-new.html>
>>> (with some interesting discussion comparing it to Left-Right, which does
>>> the opposite thing: wait-free readers with blocking writers).
>>> See a simple (and very practical) example of using the primitive at:
>>> https://github.com/HdrHistogram/HdrHistogram/blob/master/src/main/java/org/HdrHistogram/IntervalHistogramRecorder.java
>>> And see the primitive qualities and use rules documented (in the JavaDoc)
>>> along with a working implementation at:
>>> https://github.com/HdrHistogram/HdrHistogram/blob/master/src/main/java/org/HdrHistogram/WriterReaderPhaser.java
>>> So please rip this thing apart… Or consider if it may be a useful addition
>>> to j.u.c. It needs a home. And if you've seen it before (i.e. it's not
>>> really new like I seem to think it is), I'd really like to know. — Gil.
>>> _______________________________________________ Concurrency-interest
>>> mailing list
>>> Concurrency-interest at cs.oswego.edu
>>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
