[concurrency-interest] ConcurrentHashMap-Very Strange behavior of ReentrantLock under contention.

Peter Levart peter.levart at gmail.com
Thu Apr 18 03:23:17 EDT 2013


Hi Mudit,

Have you measured the distribution of put operation time at various 
think-time points? It might shed some light on what's happening.
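
For example, something along these lines (just a sketch; map, key and
OPS are placeholders for whatever your harness uses):

    // Record every put's latency so the distribution, not just the
    // mean, can be inspected afterwards.
    long[] samples = new long[OPS];
    for (int i = 0; i < OPS; i++) {
        long t0 = System.nanoTime();
        map.put(key, ThreadLocalRandom.current().nextInt());
        samples[i] = System.nanoTime() - t0;
    }
    java.util.Arrays.sort(samples);
    System.out.println("p50 = " + samples[OPS / 2] + " ns, p99 = "
            + samples[(int) (OPS * 0.99)] + " ns");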

Regards, Peter

On 04/18/2013 04:40 AM, Mudit Verma wrote:
> It's Linux on a 48-core NUMA machine. I am using 32 threads, each one 
> pinned to a separate core.
>
>
>     On Thu, Apr 18, 2013 at 4:28 AM, Vitaly Davidovich
>     <vitalyd at gmail.com> wrote:
>
>     Ah sorry, I misread the chart - yes, makes sense.
>
>     So what CPU and OS is this on?
>
>     Sent from my phone
>
>     On Apr 17, 2013 9:20 PM, "Mudit Verma"
>     <mudit.f2004912 at gmail.com> wrote:
>
>         @Vitaly,
>
>         I am sorry, I didn't understand you correctly.
>
>         What's strange in your graph is when think time is very high,
>         op time drops down even further.
>
>         This is expected behavior.  The more the think time, the less
>         the contention should be.
>
>         The drop at the tail end almost seems like your think time is
>         being reduced to almost nothing.
>
>         No.  Why?  The op time on the Y axis is just the average time
>         a put operation takes (the time spent in the HashMap.put
>         invocation and return).  It doesn't include think time.
>         Therefore, at around 100,000 cycles of ThinkTime, the threads
>         are doing enough work between operations, and are interleaved
>         enough, that they don't hit the bucket at the same time, and
>         hence spend very little (almost no) time in the put op.
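>
>         Concretely, the timed region covers only the put itself,
>         roughly like this (a sketch; opTimeSum is a made-up
>         accumulator):
>
>             long t0 = System.nanoTime();
>             hashMap.put(K, randomValue);
>             opTimeSum += System.nanoTime() - t0; // averaged at the end
>             ThinkTime();                         // deliberately untimed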
>
>
>         Thanks,
>         Mudit
>
>
>
>
>         On Thu, Apr 18, 2013 at 3:02 AM, Vitaly Davidovich
>         <vitalyd at gmail.com> wrote:
>
>             What's strange in your graph is that when think time is very
>             high, op time drops down even further.  Are you warming up
>             the C2 compiler properly? The drop at the tail end almost
>             seems like your think time is being reduced to almost
>             nothing, which I can see happening if your think time is
>             trivial and C2 aggressively optimizes it.
>
>             Are you also using tiered compilation? If so, turn it off.
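>
>             For example, on HotSpot:
>
>                 java -XX:-TieredCompilation -XX:+PrintCompilation YourBenchmark
>
>             -XX:+PrintCompilation will also show whether the think-time
>             loop is being recompiled in the middle of the run.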
>
>             Sent from my phone
>
>             On Apr 17, 2013 3:04 PM, "Mudit Verma"
>             <mudit.f2004912 at gmail.com> wrote:
>
>                 Well, as for the JIT, I am not sure, but I don't think
>                 that's the case, since with increasing ThinkTime the
>                 experiment completion time visibly increases.
>
>                 Also, as an alternative to counting iterations, we used
>                 System.nanoTime() calls to make the ThinkTime loop wait
>                 for X nanoseconds before it terminates.
>
>                 Given the clock frequency of our machine (about 2 GHz),
>                 1 nanosecond is roughly 2 cycles.
>
>                 We see the same graph in both cases.
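>
>                 The nanosecond-based variant looks roughly like this
>                 (a sketch, not the exact code):
>
>                     static void thinkTime(long nanos) {
>                         long deadline = System.nanoTime() + nanos;
>                         // Busy-wait; each nanoTime() call has a side
>                         // effect the JIT cannot eliminate.
>                         while (System.nanoTime() < deadline) { }
>                     }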
>
>
>                 On Wed, Apr 17, 2013 at 8:41 PM, Nathan Reynolds
>                 <nathan.reynolds at oracle.com> wrote:
>
>                     > lock is acquired per bucket
>
>                     I wasn't aware of the implementation.  I see your
>                     point.  There might not be any difference.
>
>
>                     > ThinkTime is nothing but a simple loop made
>                     to run X number of times
>
>                     As for ThinkTime, are you sure the JIT didn't get
>                     rid of your loop, so that ThinkTime now runs in
>                     very little time?
>
>
>                     Nathan Reynolds
>                     <http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds>
>                     | Architect | 602.333.9091
>                     Oracle PSR Engineering <http://psr.us.oracle.com/>
>                     | Server Technology
>                     On 4/17/2013 11:32 AM, Mudit Verma wrote:
>>                     @Nathan,
>>                     Yes, I am aware of the new implementation of
>>                     ConcurrentHashMap that is rolling out in Java 8.
>>                     I am using Java 7. However, I don't think it will
>>                     change the behavior, because all the threads are
>>                     hitting the same bucket. To the best of my
>>                     knowledge, the difference between Java 8 and 7 is
>>                     that the lock is acquired per bucket, while in the
>>                     latter it is acquired per segment. This, I believe,
>>                     should not change the strange behavior I am seeing.
>>                     Anyhow, even with the current implementation, it's
>>                     just weird that performance deteriorates under low
>>                     contention.
>>
>>
>>                     @Vitaly,
>>                     ThinkTime is nothing but a simple loop made to run
>>                     X number of times. We count one iteration as one
>>                     cycle. This is not strictly correct, since one
>>                     iteration should take more cycles (5-6), including
>>                     incrementing the counter and checking the
>>                     termination condition, but this should not change
>>                     the shape of the graph; it only shifts the graph
>>                     to the right a bit.
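>>
>>                     In sketch form (the volatile sink is a made-up
>>                     guard against the loop being optimized away, per
>>                     Nathan's concern):
>>
>>                         static volatile long sink; // defeats dead-code elimination
>>
>>                         static void thinkTime(long iterations) {
>>                             long x = 0;
>>                             for (long i = 0; i < iterations; i++) {
>>                                 x += i; // counter + exit test: ~5-6 cycles per iteration
>>                             }
>>                             sink = x; // publish the result so the JIT keeps the loop
>>                         }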
>>
>>                     @Kimo,
>>                     Thanks for the links.  I'll take a look. But the
>>                     problem is not with the CAS.  I suspect the issue
>>                     is with ReentrantLock. The current implementation
>>                     tries CASing 64 times, and after that it goes for
>>                     ReentrantLock. Under high contention most of the
>>                     time all 64 CAS attempts will fail anyway, and the
>>                     hashMap will have to resort to ReentrantLocking.
>>                     We are just trying to understand this strange
>>                     behavior.
>>
>>
>>                     Thanks,
>>                     Mudit
>>                     Intern, INRIA, Paris
>>
>>
>>                     On Wed, Apr 17, 2013 at 7:11 PM, Vitaly
>>                     Davidovich <vitalyd at gmail.com> wrote:
>>
>>                         What exactly are you doing in ThinkTime?
>>
>>                         On Apr 17, 2013 9:41 AM, "Mudit Verma"
>>                         <mudit.f2004912 at gmail.com> wrote:
>>
>>                             Hi All,
>>
>>                             I recently performed a scalability test
>>                             (very aggressive, and maybe not practical,
>>                             but anyhow) on the put operation of
>>                             ConcurrentHashMap.
>>
>>                             Test: each thread tries to put (same key,
>>                             random value) into the HashMap in a tight
>>                             loop. Therefore, all the threads hit the
>>                             same location in the hashMap and cause
>>                             contention.
>>
>>                             What is more surprising is that when each
>>                             thread does one put after another, the
>>                             average time taken by one put operation is
>>                             lower than when a thread does some other
>>                             work between two put operations.
>>
>>                             We continue to see the per-operation time
>>                             increase as we increase the work done in
>>                             between.  This is very counter-intuitive.
>>                             Only after about 10,000-20,000 cycles of
>>                             work in between does the per-op time come
>>                             down.
>>
>>                             When I read the code, I found that the put
>>                             op first tries to use CAS to acquire the
>>                             lock (64 times on multicore); only if it
>>                             cannot acquire the lock on the segment
>>                             through CASing does it go for
>>                             ReentrantLocking (which suspends threads
>>                             ...).
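>>
>>                             In simplified form the pattern is (a
>>                             sketch, not the actual OpenJDK source; in
>>                             Java 7 each Segment extends ReentrantLock):
>>
>>                             int retries = 0;
>>                             while (!segment.tryLock()) { // one CAS attempt per call
>>                                 if (++retries > MAX_SCAN_RETRIES) { // 64 on multicore
>>                                     segment.lock(); // may park the thread
>>                                     break;
>>                                 }
>>                             }
>>                             try {
>>                                 // ... insert or update the entry ...
>>                             } finally {
>>                                 segment.unlock();
>>                             }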
>>
>>                             We also tweaked the number of CAS attempts
>>                             (from 0 to 10,000) made before actually
>>                             going for ReentrantLocking. The graph is
>>                             attached.
>>
>>                             One interesting thing to note: as we
>>                             increase the work between two ops, locking
>>                             with 0 CAS attempts (pure ReentrantLocking)
>>                             seems to be worst affected by the spike.
>>                             Therefore I assume that the spike comes
>>                             from ReentrantLocking, even when there is
>>                             a mixture of the two (CAS+Lock).
>>
>>                             Code skeleton, for each thread:
>>
>>                             while (true) {
>>                                 hashMap.put(K, randomValue); // K is the same for each thread
>>                                 ThinkTime(); // ranging from 0 cycles to 1 million cycles
>>                             }
>>
>>                             Machine: 48-core NUMA
>>                             Threads used: 32 (each one pinned to a
>>                             core)
>>                             #ops: 51200000 in total (each thread with
>>                             160000 ops)
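>>
>>                             A self-contained version of the driver
>>                             might look like this (a sketch; the class
>>                             and constant names are made up, and core
>>                             pinning happens outside Java, e.g. via
>>                             taskset, since the JDK has no pinning API):
>>
>>                             import java.util.concurrent.ConcurrentHashMap;
>>                             import java.util.concurrent.ThreadLocalRandom;
>>
>>                             public class PutBench {
>>                                 static final ConcurrentHashMap<Integer, Integer> MAP =
>>                                     new ConcurrentHashMap<Integer, Integer>();
>>                                 static final Integer K = 42; // same key for every thread
>>                                 static final int THREADS = 32, OPS = 160000;
>>
>>                                 public static void main(String[] args) throws Exception {
>>                                     Thread[] ts = new Thread[THREADS];
>>                                     for (int i = 0; i < THREADS; i++) {
>>                                         ts[i] = new Thread(new Runnable() {
>>                                             public void run() {
>>                                                 for (int op = 0; op < OPS; op++) {
>>                                                     MAP.put(K, ThreadLocalRandom.current().nextInt());
>>                                                     // ThinkTime() would go here
>>                                                 }
>>                                             }
>>                                         });
>>                                         ts[i].start();
>>                                     }
>>                                     for (Thread t : ts) t.join();
>>                                 }
>>                             }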
>>
>>                             That's like saying: if I do something else
>>                             between my operations (up to a limit),
>>                             contention will increase. Very strange.
>>
>>                             Does any of you know why this is happening?
>>
>>
>>
>>
>>
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
