@Vitaly,

I am sorry, I didn't understand you correctly.

> What's strange in your graph is when think time is very high, op time
> drops down even further.

No, this is expected behavior: the more think time there is, the less contention there should be. Op time on the Y axis is just the average time a put operation takes (the time spent inside the HashMap.put invocation until it returns); it doesn't include think time. Therefore, at around 100,000 cycles of ThinkTime, the threads are doing enough work between operations, and are interleaved enough, that they don't hit the bucket at the same time, hence the very low (almost zero) time per put op.
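
For clarity, the timed window presumably looks something like the helper below; this is my illustration, not the actual harness code:

    import java.util.Map;

    // Only the put call itself is measured; ThinkTime runs outside this window.
    static long timedPutNanos(Map<String, Integer> map, String key, int value) {
        long start = System.nanoTime();
        map.put(key, value);
        return System.nanoTime() - start;
    }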

Thanks,
Mudit

On Thu, Apr 18, 2013 at 3:02 AM, Vitaly Davidovich <vitalyd@gmail.com> wrote:

What's strange in your graph is that when think time is very high, op time drops down even further. Are you warming up the C2 compiler properly? The drop at the tail end almost looks as if your think time is being reduced to almost nothing, which I can see happening if your think time is trivial and C2 aggressively optimizes it.



Are you also using tiered compilation? If so, turn it off.

Sent from my phone

On Apr 17, 2013 3:04 PM, "Mudit Verma" <mudit.f2004912@gmail.com> wrote:


Well, as for the JIT, I am not sure, but I don't think that's the case: the experiment's completion time visibly increases as ThinkTime increases.

Also, as an alternative to counting iterations, we used System.nanoTime() to make the ThinkTime loop wait for X nanoseconds before it terminates.

At the clock frequency of our machine, 1 nanosecond is roughly 2 cycles.

We see the same graph in both cases.
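
For concreteness, here is a minimal sketch of the two ThinkTime variants we tried; the method names and loop bodies are my reconstruction, not the exact benchmark code:

    // Variant 1: spin for a fixed number of loop iterations
    // (one iteration counted as roughly one cycle).
    static long thinkTimeIterations(long iterations) {
        long x = 0;
        for (long i = 0; i < iterations; i++) {
            x += i;  // cheap body; returning x discourages the JIT from deleting the loop
        }
        return x;
    }

    // Variant 2: busy-wait until System.nanoTime() reports that the
    // requested number of nanoseconds has elapsed.
    static void thinkTimeNanos(long nanos) {
        long deadline = System.nanoTime() + nanos;
        while (System.nanoTime() < deadline) {
            // spin until the deadline passes
        }
    }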



On Wed, Apr 17, 2013 at 8:41 PM, Nathan Reynolds <nathan.reynolds@oracle.com> wrote:

> lock is acquired per bucket

I wasn't aware of the implementation. I see your point. There might not be any difference.

> ThinkTime is nothing but a simple loop made to run X number of times

As for ThinkTime, are you sure the JIT didn't get rid of your loop, so that ThinkTime now runs in very little time?

Nathan Reynolds | Architect | 602.333.9091
Oracle PSR Engineering | Server Technology
http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds

On 4/17/2013 11:32 AM, Mudit Verma wrote:

@Nathan,
Yes, I am aware of the new implementation of ConcurrentHashMap that is rolling out in Java 8; I am using Java 7. However, I don't think it will change the behavior, because all the threads are hitting the same bucket. To the best of my knowledge, the difference is that in Java 8 the lock is acquired per bucket, while in Java 7 it is acquired per segment. This, I believe, should not change the strange behavior I am seeing. Anyhow, even with the current implementation, it's just weird that performance deteriorates under low contention.

@Vitaly,
ThinkTime is nothing but a simple loop made to run X number of times. We count one iteration as one cycle. This is not strictly correct, since one iteration should take a few more cycles (5-6), including incrementing the counter and checking the termination condition, but that should not change the shape of the graph; it only shifts it to the right a bit.

@Kimo,
Thanks for the links; I'll take a look. But the problem is not with the CAS. I suspect the issue is with ReentrantLock. The current implementation tries CASing 64 times, and only after that does it go for ReentrantLock. Under high contention, most of the time all 64 CAS attempts will fail anyway, and the hashMap will have to resort to ReentrantLocking. We are just trying to understand this strange behavior.

Thanks,
Mudit
Intern, INRIA, Paris

On Wed, Apr 17, 2013 at 7:11 PM, Vitaly Davidovich <vitalyd@gmail.com> wrote:

What exactly are you doing in ThinkTime?

On Apr 17, 2013 9:41 AM, "Mudit Verma" <mudit.f2004912@gmail.com> wrote:

Hi All,

I recently performed a scalability test (very aggressive, and maybe not practical, but anyhow) on the put operation of ConcurrentHashMap.

Test: each thread tries to put (same key, random value) into the HashMap in a tight loop. Therefore, all the threads hit the same location in the hashMap and cause contention.

What is most surprising is that when each thread does one put after another back to back, the average time taken by one put operation is lower than when a thread does some other work between two put operations.

We continue to see per-operation time increase as we increase the work done in between. This is very counter-intuitive. Only after about 10,000 - 20,000 cycles of work in between does the per-op time come down.

When I read the code, I found that the put op first tries to use CAS to acquire the lock (64 times on a multicore machine); only if it cannot acquire the segment lock through CASing does it fall back to ReentrantLocking (which suspends threads ...).
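
In outline, the acquisition path behaves like the following sketch; this is a paraphrase of the pattern, not the actual OpenJDK source, and the class and constant names are made up:

    import java.util.concurrent.locks.ReentrantLock;

    // Sketch: try a bounded number of CAS-based tryLock() attempts,
    // then fall back to a blocking lock() that may suspend the thread.
    final class SpinThenBlockLock extends ReentrantLock {
        static final int MAX_TRYLOCK_SPINS = 64;  // the "64 CAS attempts" on multicore

        void acquire() {
            int spins = 0;
            while (!tryLock()) {                  // each tryLock() is one CAS on the lock state
                if (++spins >= MAX_TRYLOCK_SPINS) {
                    lock();                       // give up spinning; block (park) instead
                    return;
                }
            }
            // acquired within the spin budget; the caller must still unlock()
        }
    }

Callers would pair acquire() with the inherited unlock(), as with any ReentrantLock.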

We also tweaked the number of CAS attempts (from 0 to 10,000) made before actually going for ReentrantLocking. Attached is the graph.

One interesting thing to note: as we increase the work between two ops, locking with 0 CAS attempts (pure ReentrantLocking) seems to be worst affected by the spike. Therefore, I assume the spike comes from ReentrantLocking even when there is a mixture of the two (CAS + Lock).

Code skeleton, for each thread:

    for (int i = 0; i < OPS_PER_THREAD; i++) {
        hashMap.put(K, randomValue);   // K is the same for every thread
        ThinkTime();                   // ranging from 0 cycles to 1 million cycles
    }

Machine: 48-core NUMA
Threads used: 32 (each one is pinned to a core)
#ops: 51200000 in total (each thread with 160000 ops)
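
A self-contained version of that skeleton might look like the sketch below. Thread-to-core pinning is omitted (it needs OS-level tools and isn't shown here), and the class name, the sample think-time value, and the timing bookkeeping are mine rather than the actual harness:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ThreadLocalRandom;

    public class SameKeyPutBench {
        static final int THREADS = 32;
        static final int OPS_PER_THREAD = 160000;
        static final String K = "sharedKey";  // same key for every thread

        static long sink;  // written from thinkTime() so the JIT keeps the loop

        static void thinkTime(long iterations) {
            long x = 0;
            for (long i = 0; i < iterations; i++) x += i;
            sink = x;
        }

        public static void main(String[] args) throws InterruptedException {
            final ConcurrentHashMap<String, Integer> map =
                new ConcurrentHashMap<String, Integer>();
            final CountDownLatch start = new CountDownLatch(1);
            final long[] nanos = new long[THREADS];
            Thread[] workers = new Thread[THREADS];

            for (int t = 0; t < THREADS; t++) {
                final int id = t;
                workers[t] = new Thread(new Runnable() {
                    public void run() {
                        try { start.await(); } catch (InterruptedException e) { return; }
                        long elapsed = 0;
                        for (int i = 0; i < OPS_PER_THREAD; i++) {
                            int v = ThreadLocalRandom.current().nextInt();
                            long t0 = System.nanoTime();
                            map.put(K, v);                 // only the put is timed
                            elapsed += System.nanoTime() - t0;
                            thinkTime(10000);              // think time between ops
                        }
                        nanos[id] = elapsed;
                    }
                });
                workers[t].start();
            }
            start.countDown();  // release all threads at once
            long total = 0;
            for (int t = 0; t < THREADS; t++) { workers[t].join(); total += nanos[t]; }
            System.out.println("avg ns per put = "
                + (double) total / ((long) THREADS * OPS_PER_THREAD));
        }
    }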

That's like saying: if I do something else between my operations (up to a limit), contention will increase. Very strange.

Does anyone know why this is happening?

_______________________________________________
Concurrency-interest mailing list
Concurrency-interest@cs.oswego.edu
http://cs.oswego.edu/mailman/listinfo/concurrency-interest