[concurrency-interest] general performance question

Nathan Reynolds nathan.reynolds at oracle.com
Thu Dec 22 11:51:16 EST 2011


Thanks for the information.  Now, what about in theory?  Does the
serial portion include only the protected region of the bottlenecked
lock, or does it include the protected regions of all acquired locks?

Nathan Reynolds 
<http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> | 
Consulting Member of Technical Staff | 602.333.9091
Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology

On 12/21/2011 10:35 PM, Dr Heinz M. Kabutz wrote:
> It contains a += on a long of a random int pulled from a pre-filled
> array.
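>
> A minimal sketch of such a benchmark class (the names and array size
> here are illustrative, not the actual benchmark code):
>
> import java.util.Random;
>
> public class PlusEqualsBenchmark {
>     private static final int SIZE = 1 << 20;
>     private static final int[] RANDOM_INTS = new int[SIZE];
>     static { // pre-fill the array with random ints
>         Random rnd = new Random(42);
>         for (int i = 0; i < SIZE; i++) RANDOM_INTS[i] = rnd.nextInt();
>     }
>
>     private long total; // the long accumulated into inside the lock
>
>     public synchronized void work(int i) {
>         total += RANDOM_INTS[i & (SIZE - 1)]; // the protected region
>     }
> }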
>
> On 22/12/2011, Nathan Reynolds<nathan.reynolds at oracle.com>  wrote:
>> Thanks for the information.  What does the serial portion include?  Is
>> it only the bottlenecked lock?  Or does it include the other locks
>> involved in processing the input?
>>
>> I would guess it only includes the bottlenecked lock.  In my experience
>> of fixing locks, we have to fix the most contended lock before we can
>> see which lock is the next bottleneck.  If we fix the next bottlenecked
>> lock first, throughput increases only minimally.
>>
>> Nathan Reynolds
>> <http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds>  |
>> Consulting Member of Technical Staff | 602.333.9091
>> Oracle PSR Engineering<http://psr.us.oracle.com/>  | Server Technology
>>
>> On 12/21/2011 12:18 PM, Dr Heinz M. Kabutz wrote:
>>> Hi Navin,
>>>
>>> Little's Law (L = lambda * W) tells us that, for a fixed number of
>>> requests in the system, throughput is inversely proportional to wait
>>> time.  Thus the shorter your wait time, the better your throughput
>>> will be.  Amdahl's Law also tells us that the serial portion of a
>>> piece of parallel code will tend to dominate and restrict our
>>> scalability.
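>>>
>>> As a sketch of the arithmetic behind both laws (the method names are
>>> mine, not any standard API):
>>>
>>> class Laws {
>>>     // Little's Law: L = lambda * W, so lambda = L / W for a fixed
>>>     // number of requests L in the system and wait time W
>>>     static double littleThroughput(double requestsInSystem,
>>>                                    double waitSeconds) {
>>>         return requestsInSystem / waitSeconds;
>>>     }
>>>
>>>     // Amdahl's Law: speedup on n cores with serial fraction f,
>>>     // capped at 1/f no matter how large n becomes
>>>     static double amdahlSpeedup(double f, int n) {
>>>         return 1.0 / (f + (1.0 - f) / n);
>>>     }
>>> }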
>>>
>>> Thus the first form will probably have a slightly shorter serial
>>> path, and so its performance from a scalability perspective will most
>>> probably be better.  I will verify this with a little benchmark for
>>> you, but in the meantime here is a graph from my new concurrency
>>> course that shows how even a small serial portion (0.25%) limits the
>>> ability to scale beyond 400 cores.
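>>>
>>> (Plugging the numbers into amdahlSpeedup above: with f = 0.25% =
>>> 0.0025, the speedup caps at 1/f = 400 however many cores you add,
>>> and at n = 400 cores it is already down to about
>>> 1 / (0.0025 + 0.9975/400), roughly 200.)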
>>>
>>>
>>> Regards
>>>
>>> Heinz
>>> --
>>> Dr Heinz M. Kabutz (PhD CompSci)
>>> Author of "The Java(tm) Specialists' Newsletter"
>>> Sun Java Champion
>>> IEEE Certified Software Development Professional
>>> http://www.javaspecialists.eu
>>> Tel: +30 69 72 850 460
>>> Skype: kabutz
>>>
>>>
>>> On 12/21/11 5:45 PM, Jha, Navin wrote:
>>>> Is there an advantage to doing:
>>>>
>>>> someMethod(...) {
>>>>           synchronized(this) {
>>>>                   ................
>>>>                   ................
>>>>                   ................
>>>>           }
>>>> }
>>>>
>>>> instead of:
>>>> synchronized someMethod(...) {
>>>>           ................
>>>>           ................
>>>>           ................
>>>> }
>>>>
>>>> Even when the entire method needs to be synchronized?  I understand
>>>> that in general it is good practice to use synchronized blocks, since
>>>> more often than not only certain lines need to be synchronized.
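>>>>
>>>> For example, a hypothetical sketch (the class and names are made up)
>>>> of the case where narrowing the block pays off:
>>>>
>>>> class EventLog {
>>>>     private final StringBuilder log = new StringBuilder();
>>>>
>>>>     void record(String event) {
>>>>         // thread-confined formatting work stays outside the lock
>>>>         String line = System.nanoTime() + " " + event + "\n";
>>>>         synchronized (this) {
>>>>             log.append(line); // only the shared mutation is locked
>>>>         }
>>>>     }
>>>> }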
>>>>
>>>> -Navin
>>>>
>