[concurrency-interest] synchronized vs Unsafe#monitorEnter/monitorExit

Ben Manes ben_manes at yahoo.com
Sun Dec 28 15:15:33 EST 2014


That's a nifty workaround but, as you said, not reliable for a general API. This case is a multi-get for a cache library, so the keys and use cases aren't known in advance.
When adding bulk loading to Guava, due to the internal complexity we allowed it to be racy by not blocking concurrent calls for shared entries. Some of that has to be allowed regardless, because a loadAll(keys) may return a Map<K, V> with more entries than originally requested, and those extra entries should be cached as well. These cases were trivial to handle when I had previously written a Map<K, Future<V>> to support multi-get, but that carried a lot of bloat per entry. I had hoped Unsafe#monitorEnter would be a nice alternative and, if the API were stable, it arguably still could be, given the low write rate of caches. For now I'll leave it in a backlog of performance optimization ideas, as we did with Guava, and maybe revisit it once the rewrite stabilizes.
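
A minimal sketch of the Map<K, Future<V>> approach mentioned above, assuming a hypothetical FutureValueCache and loadAll function (the names are not from Guava or the cache being discussed):

    import java.util.*;
    import java.util.concurrent.*;
    import java.util.function.Function;

    // Sketch only: a ConcurrentMap<K, Future<V>> where each absent key gets a
    // placeholder future, the newly claimed keys are loaded in one bulk call,
    // and readers block only on the futures for the entries they need.
    final class FutureValueCache<K, V> {
        private final ConcurrentMap<K, Future<V>> map = new ConcurrentHashMap<>();

        Map<K, V> getAll(Collection<K> keys, Function<Set<K>, Map<K, V>> loadAll)
                throws InterruptedException, ExecutionException {
            Map<K, CompletableFuture<V>> claimed = new LinkedHashMap<>();
            for (K key : keys) {
                CompletableFuture<V> placeholder = new CompletableFuture<>();
                if (map.putIfAbsent(key, placeholder) == null) {
                    claimed.put(key, placeholder);  // this thread loads the key
                }
            }
            if (!claimed.isEmpty()) {
                try {
                    Map<K, V> loaded = loadAll.apply(claimed.keySet());
                    // loadAll may return more entries than requested; cache the extras too
                    loaded.forEach((k, v) -> {
                        CompletableFuture<V> f = claimed.get(k);
                        if (f != null) {
                            f.complete(v);
                        } else {
                            map.putIfAbsent(k, CompletableFuture.completedFuture(v));
                        }
                    });
                } finally {
                    // Drop placeholders the loader never filled so they can't block readers forever
                    claimed.forEach((k, f) -> {
                        if (!f.isDone()) {
                            f.completeExceptionally(new NoSuchElementException(k.toString()));
                            map.remove(k, f);
                        }
                    });
                }
            }
            Map<K, V> result = new LinkedHashMap<>();
            for (K key : keys) {
                Future<V> future = map.get(key);
                if (future != null) {
                    result.put(key, future.get());  // may block on another thread's load
                }
            }
            return result;
        }
    }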
Thanks,
-Ben

     On Sunday, December 28, 2014 5:18 AM, Peter Levart <peter.levart at gmail.com> wrote:

 On 12/27/2014 09:31 PM, Ben Manes wrote:

  Can someone explain why using Unsafe's monitor methods is substantially worse than synchronized? I had expected them to be equivalent to the monitorenter/monitorexit instructions and to have similar performance.
  My use case is to support a bulk version of CHM#computeIfAbsent, where a single mapping function returns the results for multiple entries. I had hoped to bulk lock, insert the unfilled entries, compute, populate, and bulk unlock. An overlapping write would be blocked because mutating an entry requires its lock.
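
 A rough sketch of the bulk pattern described above, using a plain ReentrantLock per entry (i.e. accepting the memory overhead in question) rather than Unsafe; the class and method names are hypothetical:

    import java.util.*;
    import java.util.concurrent.*;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.Function;

    // Sketch only: bulk-lock the entries in a canonical order, compute the
    // absent ones with a single mapping function, publish them, then bulk-unlock.
    // Any overlapping write must take the same per-entry lock, so it blocks.
    final class BulkComputeSketch<K extends Comparable<K>, V> {
        private final ConcurrentMap<K, V> data = new ConcurrentHashMap<>();
        private final ConcurrentMap<K, ReentrantLock> locks = new ConcurrentHashMap<>();

        Map<K, V> computeAllIfAbsent(Collection<K> keys,
                                     Function<Set<K>, Map<K, V>> mappingFunction) {
            NavigableSet<K> sorted = new TreeSet<>(keys);       // canonical lock order
            Deque<ReentrantLock> held = new ArrayDeque<>(sorted.size());
            try {
                for (K key : sorted) {                          // bulk lock
                    ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
                    lock.lock();
                    held.push(lock);
                }
                Set<K> absent = new LinkedHashSet<>();
                for (K key : sorted) {
                    if (!data.containsKey(key)) {
                        absent.add(key);
                    }
                }
                if (!absent.isEmpty()) {
                    Map<K, V> computed = mappingFunction.apply(absent);
                    for (K key : absent) {                      // populate only locked keys
                        V value = computed.get(key);
                        if (value != null) {
                            data.put(key, value);
                        }
                    }
                }
                Map<K, V> result = new LinkedHashMap<>();
                for (K key : sorted) {
                    V value = data.get(key);
                    if (value != null) {
                        result.put(key, value);
                    }
                }
                return result;
            } finally {
                while (!held.isEmpty()) {                       // bulk unlock (reverse order)
                    held.pop().unlock();
                }
            }
        }
    }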
 
 Hi Ben,
 
 If "multiple entries" means less than say 50 or even a little more (using more locks than that for one bulk operation is not very optimal anyway), you could try constructing a recursive method to bulk-lock on the list of objects and execute a provided function:
 
 
    import java.util.Iterator;
    import java.util.function.Supplier;

    // Acquires the monitor of each lock object in iteration order, runs the
    // supplier while all of them are held, and releases them as the recursion
    // unwinds.
    public static <T> T synchronizedAll(Iterable<?> locks, Supplier<T> supplier) {
        return synchronizedAll(locks.iterator(), supplier);
    }

    private static <T> T synchronizedAll(Iterator<?> locksIterator, Supplier<T> supplier) {
        if (locksIterator.hasNext()) {
            synchronized (locksIterator.next()) {
                return synchronizedAll(locksIterator, supplier);
            }
        } else {
            return supplier.get();
        }
    }
 
 
 
 Note that to avoid deadlocks, the locks list has to be sorted using a Comparator that is consistent with your key's equals() method...
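
 For example, a hypothetical call site, assuming String keys whose natural ordering is consistent with equals():

    import java.util.*;

    // Hypothetical usage: sort the keys into a canonical order before
    // bulk-locking so that overlapping bulk operations always acquire their
    // shared monitors in the same order. In real code you would lock on
    // dedicated lock or entry objects rather than interned values such as
    // String literals.
    List<String> keys = new ArrayList<>(Arrays.asList("beta", "alpha", "gamma"));
    keys.sort(Comparator.naturalOrder());

    Map<String, Integer> computed = synchronizedAll(keys, () -> {
        // compute and publish the missing entries while all per-key monitors are held
        Map<String, Integer> values = new HashMap<>();
        for (String key : keys) {
            values.put(key, key.length());
        }
        return values;
    });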
 
 Regards, Peter
 
 
 
  I had thought that using Unsafe would allow achieving this without the memory overhead of a ReentrantLock/AQS per entry, since the synchronized keyword is not flexible enough to express this structure.
  Thanks, Ben 
  Benchmark                                                    Mode  Samples         Score         Error  Units
  c.g.b.c.SynchronizedBenchmark.monitor_contention            thrpt       10   3694951.630 ±   34340.707  ops/s
  c.g.b.c.SynchronizedBenchmark.monitor_noContention          thrpt       10   8274097.911 ±  164356.363  ops/s
  c.g.b.c.SynchronizedBenchmark.reentrantLock_contention      thrpt       10  31668532.247 ±  740850.955  ops/s
  c.g.b.c.SynchronizedBenchmark.reentrantLock_noContention    thrpt       10  41380163.703 ± 2270103.507  ops/s
  c.g.b.c.SynchronizedBenchmark.synchronized_contention       thrpt       10  22905995.761 ±  117868.968  ops/s
  c.g.b.c.SynchronizedBenchmark.synchronized_noContention     thrpt       10  44891601.915 ± 1458775.665  ops/s
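
  For context, a rough sketch (not the benchmark that produced the numbers above) of how the three variants might be compared under JMH on a pre-Java 9 JDK, where sun.misc.Unsafe still exposes monitorEnter/monitorExit; the contention and noContention cases would be driven by different thread counts:

    import java.lang.reflect.Field;
    import java.util.concurrent.locks.ReentrantLock;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;

    // Sketch only: three ways of guarding the same critical section.
    @State(Scope.Benchmark)
    public class MonitorSketchBenchmark {
        private static final sun.misc.Unsafe UNSAFE = loadUnsafe();
        private final Object monitor = new Object();
        private final ReentrantLock lock = new ReentrantLock();
        private int counter;

        @Benchmark
        public int synchronizedBlock() {
            synchronized (monitor) {
                return counter++;
            }
        }

        @Benchmark
        public int reentrantLock() {
            lock.lock();
            try {
                return counter++;
            } finally {
                lock.unlock();
            }
        }

        @Benchmark
        public int unsafeMonitor() {
            UNSAFE.monitorEnter(monitor);   // deprecated, removed in Java 9+
            try {
                return counter++;
            } finally {
                UNSAFE.monitorExit(monitor);
            }
        }

        private static sun.misc.Unsafe loadUnsafe() {
            try {
                Field theUnsafe = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
                theUnsafe.setAccessible(true);
                return (sun.misc.Unsafe) theUnsafe.get(null);
            } catch (ReflectiveOperationException e) {
                throw new AssertionError(e);
            }
        }
    }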
   
  