[concurrency-interest] ConcurrentHashMapV8 Livelock on computeIfAbsent() ?

Millies, Sebastian Sebastian.Millies at softwareag.com
Sat Apr 30 17:18:58 EDT 2016


Hi Viktor,

I’m sorry, I made a mistake: there was no deadlock at all. When running the JMH benchmark I get less stack space than when running stand-alone, so the recursion depth of 6000 was causing a StackOverflowError. However, I never saw that error, because JMH hid it: I just saw the benchmark hanging and all the JMH worker threads parked, and jumped to the wrong conclusion.
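
In case anyone wants to reproduce this: the forked benchmark JVM can be given bigger thread stacks through JMH’s @Fork annotation, e.g. along these lines (just a sketch; the -Xss value is an untested guess):

import java.math.BigInteger;
import java.util.concurrent.ConcurrentHashMap;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;

public class StackSizeSketch {

    // Sketch: give the forked JVM bigger thread stacks so that the
    // depth-6000 recursion no longer overflows. -Xss16m is a guess.
    @Fork(value = 1, jvmArgsAppend = "-Xss16m")
    @Benchmark
    public BigInteger cf6000() {
        return new FibCached.CF(new ConcurrentHashMap<>())
                .fib(6000).toCompletableFuture().join();
    }
}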

Anyway, computing the 3500th Fibonacci number, I consistently do not see any advantage of the async version over the synchronous one. In fact, it is the other way around:

2 Threads
Benchmark                                 Mode  Cnt  Score     Error  Units
FibCachedConcurrentBenchmark.cf3500         ss   20   4.132 ±  1.421  ms/op
FibCachedConcurrentBenchmark.cfAsync3500    ss   20   9.134 ±  0.862  ms/op
FibCachedConcurrentBenchmark.cf3500         ss   20   2.887 ±  0.571  ms/op
FibCachedConcurrentBenchmark.cfAsync3500    ss   20  10.345 ± 12.954  ms/op
FibCachedConcurrentBenchmark.cf3500         ss   20   3.500 ±  1.291  ms/op
FibCachedConcurrentBenchmark.cfAsync3500    ss   20   8.803 ±  1.679  ms/op

4 Threads
Benchmark                                 Mode  Cnt  Score   Error  Units
FibCachedConcurrentBenchmark.cf3500         ss   20  2.780 ± 0.430  ms/op
FibCachedConcurrentBenchmark.cfAsync3500    ss   20  8.850 ± 1.595  ms/op
FibCachedConcurrentBenchmark.cf3500         ss   20  3.034 ± 0.451  ms/op
FibCachedConcurrentBenchmark.cfAsync3500    ss   20  9.744 ± 1.669  ms/op
FibCachedConcurrentBenchmark.cf3500         ss   20  3.965 ± 1.380  ms/op
FibCachedConcurrentBenchmark.cfAsync3500    ss   20  8.430 ± 2.396  ms/op

Perhaps adding BigIntegers just isn’t expensive enough to warrant the overhead of going async.
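
To make concrete what I mean by overhead (a toy sketch, not taken from the benchmark code): every *Async stage pays for at least one extra task submission to the executor, which for a cheap addition easily dominates the actual work:

import java.math.BigInteger;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;

public class AsyncOverheadSketch {
    public static void main(String[] args) {
        CompletableFuture<BigInteger> one =
                CompletableFuture.completedFuture(BigInteger.ONE);

        // Synchronous stage: the function runs directly on the completing
        // (here: calling) thread, no scheduling involved.
        one.thenApply(x -> x.add(BigInteger.ONE));

        // Async stage: the function is wrapped in a task and submitted to
        // the pool, so it also pays for enqueueing and a thread hop.
        one.thenApplyAsync(x -> x.add(BigInteger.ONE), ForkJoinPool.commonPool());
    }
}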


-- Sebastian

PS: I think your code below doesn’t address the problem, namely, as you suggested, “to use the same Fib from multiple threads to show how it behaves under contention”. Your test(int) method creates a new CF instance on every call, so each thread works on its own cache and there is no contention. Or does it?
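
What I had in mind is more like the following (a sketch only; the class name and thread count are made up for illustration): one CF instance, and hence one cache, shared by all callers:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;

public class SharedFibSketch {

    // One shared instance, hence one shared cache, for all threads.
    static final FibCached.CF SHARED = new FibCached.CF(new ConcurrentHashMap<>());

    public static void main(String[] args) throws Exception {
        ExecutorService callers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            // All callers race on the same ConcurrentHashMap entries.
            callers.submit(() -> SHARED.fib(3500, ForkJoinPool.commonPool())
                                       .toCompletableFuture().join());
        }
        callers.shutdown();
    }
}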


From: Viktor Klang [mailto:viktor.klang at gmail.com]
Sent: Friday, April 29, 2016 5:59 PM
To: Millies, Sebastian
Subject: Re: [concurrency-interest] ConcurrentHashMapV8 Livelock on computeIfAbsent() ?

Hi Sebastian,

could it be a thread-pool issue?

This works just fine for me:


package yava.klang;

import java.math.BigInteger;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executor;
import java.util.concurrent.ForkJoinPool;
import java.util.function.Function;

/**
 * Demonstrates ways of caching recursive functions.
 *
 * @author Andrew Haley, Viktor Klang, Sebastian Millies
 * @see <a href=
 *      "http://concurrency.markmail.org/search/?q=#query:%20list%3Aedu.oswego.cs.concurrency-interest+page:3+mid:tf7xddfa6i6ow6d3+state:results">
 *      the discussion on concurrency-interest</a> that triggered this class
 */
public class FibCached {

    private static class Memoizer<T, R> {
        private final Map<T, R> memo;

        public Memoizer(Map<T, R> memo) {
            this.memo = memo;
        }

        /**
         * Wraps f with a cache lookup. The get/put pair is not atomic, so f
         * may run more than once for the same argument; that is harmless
         * here because f is pure and the results are identical.
         */
        public Function<T, R> memoize(Function<T, R> f) {
            return t -> {
                R r = memo.get(t);
                if (r == null) {
                    r = f.apply(t);
                    memo.put(t, r);
                }
                return r;
            };
        }
    }
    }

    public static class FibonacciSimple {
        private final Memoizer<Integer, BigInteger> m;

        public FibonacciSimple(Map<Integer, BigInteger> cache) {
            m = new Memoizer<>(cache);
        }

        public BigInteger fib(int n) {
            if (n <= 2) return BigInteger.ONE;
            return m.memoize(this::fib).apply(n - 1).add(
                   m.memoize(this::fib).apply(n - 2));
        }
    }

    public static class CF {
        private static final CompletionStage<BigInteger> csOne = CompletableFuture.completedFuture(BigInteger.ONE);
        private final Map<Integer, CompletionStage<BigInteger>> cache;

        public CF(Map<Integer, CompletionStage<BigInteger>> cache) {
            this.cache = cache;
        }

        // Reserve the cache slot with putIfAbsent so that only one thread
        // computes fib(n); any losing thread reuses the winner's future.
        public CompletionStage<BigInteger> fib(int n) {
            if (n <= 2) return csOne;

            CompletionStage<BigInteger> ret = cache.get(n);
            if (ret == null) {
                final CompletableFuture<BigInteger> compute = new CompletableFuture<>();
                ret = cache.putIfAbsent(n, compute);
                if (ret == null) {
                    ret = fib(n - 1).thenCompose(x ->
                          fib(n - 2).thenCompose(y -> {
                                compute.complete(x.add(y));
                                return compute;
                    }));
                }
            }
            return ret;
        }

        // Async version. It's quite possible, and recommended, not to make the
        // first thenCompose async, since only the addition of x and y might be
        // "expensive" (for large values of n).
        public CompletionStage<BigInteger> fib(int n, Executor e) {
            if (n <= 2) return csOne;

            CompletionStage<BigInteger> ret = cache.get(n);
            if (ret == null) {
                final CompletableFuture<BigInteger> compute = new CompletableFuture<>();
                ret = cache.putIfAbsent(n, compute);
                if (ret == null) {
                    ret = fib(n - 1, e).thenComposeAsync(x ->
                          fib(n - 2, e).thenComposeAsync(y -> {
                                compute.complete(x.add(y));
                                return compute;
                    }, e));
                }
            }
            return ret;
        }
    }

    public static CompletionStage<BigInteger> test(final int n) {
        final CF fib = new CF(new ConcurrentHashMap<>());
        return fib.fib(n, ForkJoinPool.commonPool());
    }

}
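
Invoked e.g. like this (sketch):

    test(3500).toCompletableFuture().join();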

On Thu, Apr 28, 2016 at 10:38 AM, Millies, Sebastian <Sebastian.Millies at softwareag.com> wrote:
Dropbox may not have been a good idea ☹ Here’s a thread dump from the deadlock in the cfAsync6000 benchmark:

2016-04-28 10:29:36
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.92-b14 mixed mode):

"java8.concurrent.FibCachedConcurrentBenchmark.cfAsync6000-jmh-worker-2" #14 daemon prio=5 os_prio=0 tid=0x000000001fc95800 nid=0x2418 waiting on condition [0x000000001f62e000]
   java.lang.Thread.State: WAITING (parking)
                at sun.misc.Unsafe.park(Native Method)
                - parking to wait for  <0x000000076bc429b8> (a java.util.concurrent.CompletableFuture$Signaller)
                at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
                at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
                at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
                at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
                at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1934)
                at java8.concurrent.FibCachedConcurrentBenchmark.cfAsync6000(FibCachedConcurrentBenchmark.java:76)
                at java8.concurrent.generated.FibCachedConcurrentBenchmark_cfAsync6000_jmhTest.cfAsync6000_ss_jmhStub(FibCachedConcurrentBenchmark_cfAsync6000_jmhTest.java:490)
                at java8.concurrent.generated.FibCachedConcurrentBenchmark_cfAsync6000_jmhTest.cfAsync6000_SingleShotTime(FibCachedConcurrentBenchmark_cfAsync6000_jmhTest.java:433)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:498)
                at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:430)
                at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:412)
                at java.util.concurrent.FutureTask.run(FutureTask.java:266)
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                at java.lang.Thread.run(Thread.java:745)

   Locked ownable synchronizers:
                - <0x000000076b9cda18> (a java.util.concurrent.ThreadPoolExecutor$Worker)

"java8.concurrent.FibCachedConcurrentBenchmark.cfAsync6000-jmh-worker-1" #13 daemon prio=5 os_prio=0 tid=0x000000001fc94800 nid=0x2414 waiting on condition [0x000000001f3df000]
   java.lang.Thread.State: WAITING (parking)
                at sun.misc.Unsafe.park(Native Method)
                - parking to wait for  <0x000000076b9517a0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
                at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
                at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
                at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
                at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                at java.lang.Thread.run(Thread.java:745)

   Locked ownable synchronizers:
                - None

"Service Thread" #10 daemon prio=9 os_prio=0 tid=0x000000001d97d000 nid=0x2404 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
                - None

"C1 CompilerThread3" #9 daemon prio=9 os_prio=2 tid=0x000000001c71f000 nid=0x1cb0 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
                - None

"C2 CompilerThread2" #8 daemon prio=9 os_prio=2 tid=0x000000001c71e000 nid=0x19cc waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
                - None

"C2 CompilerThread1" #7 daemon prio=9 os_prio=2 tid=0x000000001c71b000 nid=0x1d78 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
                - None

"C2 CompilerThread0" #6 daemon prio=9 os_prio=2 tid=0x000000001d8db000 nid=0xfec waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
                - None

"Attach Listener" #5 daemon prio=5 os_prio=2 tid=0x000000001d8d9800 nid=0x200c waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
                - None

"Signal Dispatcher" #4 daemon prio=9 os_prio=2 tid=0x000000001d8d8000 nid=0x620 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
                - None

"Finalizer" #3 daemon prio=8 os_prio=1 tid=0x000000001c711000 nid=0x10a4 in Object.wait() [0x000000001ec7f000]
   java.lang.Thread.State: WAITING (on object monitor)
                at java.lang.Object.wait(Native Method)
                - waiting on <0x000000076b208ee0> (a java.lang.ref.ReferenceQueue$Lock)
                at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
                - locked <0x000000076b208ee0> (a java.lang.ref.ReferenceQueue$Lock)
                at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164)
                at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209)

   Locked ownable synchronizers:
                - None

"Reference Handler" #2 daemon prio=10 os_prio=2 tid=0x000000001c70a000 nid=0x1b48 in Object.wait() [0x000000001e91f000]
   java.lang.Thread.State: WAITING (on object monitor)
                at java.lang.Object.wait(Native Method)
                - waiting on <0x000000076b206b50> (a java.lang.ref.Reference$Lock)
                at java.lang.Object.wait(Object.java:502)
                at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
                - locked <0x000000076b206b50> (a java.lang.ref.Reference$Lock)
                at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)

   Locked ownable synchronizers:
                - None

"main" #1 prio=5 os_prio=0 tid=0x000000000020f800 nid=0x1e14 waiting on condition [0x0000000002b1e000]
   java.lang.Thread.State: TIMED_WAITING (parking)
                at sun.misc.Unsafe.park(Native Method)
                - parking to wait for  <0x000000076b9cd9f8> (a java.util.concurrent.FutureTask)
                at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
                at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:426)
                at java.util.concurrent.FutureTask.get(FutureTask.java:204)
                at org.openjdk.jmh.runner.BenchmarkHandler.runIteration(BenchmarkHandler.java:376)
                at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:263)
                at org.openjdk.jmh.runner.BaseRunner.runBenchmark(BaseRunner.java:235)
                at org.openjdk.jmh.runner.BaseRunner.doSingle(BaseRunner.java:142)
                at org.openjdk.jmh.runner.BaseRunner.runBenchmarksForked(BaseRunner.java:76)
                at org.openjdk.jmh.runner.ForkedRunner.run(ForkedRunner.java:72)
                at org.openjdk.jmh.runner.ForkedMain.main(ForkedMain.java:84)

   Locked ownable synchronizers:
                - None

"VM Thread" os_prio=2 tid=0x000000001c701000 nid=0x2038 runnable

"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x000000000266e800 nid=0x1ba0 runnable

"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x0000000002670000 nid=0x4a0 runnable

"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x0000000002671800 nid=0xfb4 runnable

"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x0000000002673000 nid=0x1b8 runnable

"GC task thread#4 (ParallelGC)" os_prio=0 tid=0x0000000002676800 nid=0x1168 runnable

"GC task thread#5 (ParallelGC)" os_prio=0 tid=0x0000000002677800 nid=0xa84 runnable

"GC task thread#6 (ParallelGC)" os_prio=0 tid=0x000000000267b000 nid=0x17dc runnable

"GC task thread#7 (ParallelGC)" os_prio=0 tid=0x000000000267c000 nid=0x2090 runnable

"VM Periodic Task Thread" os_prio=2 tid=0x000000001d981800 nid=0x2408 waiting on condition

JNI global references: 334


From: Viktor Klang [mailto:viktor.klang at gmail.com]
Sent: Wednesday, April 27, 2016 3:21 PM
To: Millies, Sebastian
Cc: concurrency-interest
Subject: Re: [concurrency-interest] ConcurrentHashMapV8 Livelock on computeIfAbsent() ?

Do you have a thread dump? (Sorry, I don't have any spare cycles to take a stab at running it right now.)

On Wed, Apr 27, 2016 at 3:09 PM, Millies, Sebastian <Sebastian.Millies at softwareag.com> wrote:
I have added https://gist.github.com/smillies/0cceb17501f74c4f53bf4930eba61889#file-fibcachedconcurrentbenchmark-java
with concurrent benchmarks computing the 6000th Fibonacci number. Only the CF versions, of course.

The same Fib instance is used by two threads in each case; the pool for the async version gets another two threads.

The bad news is that they’re prone to deadlock. Now, I’m no expert in JMH; perhaps it’s just the way I’ve set up the tests. (I hope so.) A sketch of the setup follows below.
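
Roughly, the setup is as follows (a simplified sketch; the gist has the actual details, and names here may differ):

import java.math.BigInteger;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.SingleShotTime)
@Threads(2)
public class FibCachedConcurrentBenchmark {

    FibCached.CF fib;
    ExecutorService pool;

    @Setup(Level.Iteration)
    public void setUp() {
        fib = new FibCached.CF(new ConcurrentHashMap<>()); // shared by both benchmark threads
        pool = Executors.newFixedThreadPool(2);            // two extra threads for the async version
    }

    @TearDown(Level.Iteration)
    public void tearDown() {
        pool.shutdown();
    }

    @Benchmark
    public BigInteger cfAsync6000() {
        return fib.fib(6000, pool).toCompletableFuture().join();
    }
}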


-- Sebastian






--
Cheers,
√