[concurrency-interest] Concurrency-interest Digest, Vol 52, Issue 3

Brian Goetz brian at briangoetz.com
Wed May 6 22:53:09 EDT 2009


You're comparing two different measures here.  Allocating less is a 
performance boost -- you are doing less work in the straight-line case.  Short 
sync blocks, on the other hand, are a scalability boost -- they don't entail 
less work (for the most part), but they enable greater concurrency by holding 
the lock for less time, which can translate into higher throughput (google for 
"Little's Law") once there is enough contention that lock hold time is the 
impediment.
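
To make the Little's Law connection concrete -- back-of-the-envelope only, 
with made-up numbers: Little's Law says L = lambda * W, where lambda is the 
rate at which threads arrive at the lock and W is the time each one holds it, 
so L is the average number of threads queued at or inside the critical 
section.  Turned around, a lock held for W seconds per operation caps 
throughput through that lock at roughly 1/W; cutting the hold time from 1 ms 
to 0.25 ms raises that ceiling from about 1,000 to about 4,000 acquisitions 
per second.  Of course the higher ceiling only buys you anything once 
contention has made the old one the bottleneck.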



Cleber Muramoto wrote:
> I'd be interested in knowing how such "cheap" operations are profiled. 
> Does he mention it in the book?
> 
> There's also an interesting detail in the Exchanger implementation, where
> a method-local variable is created outside the synch block and then
> (possibly) assigned to the instance variable:
> 
> private void createSlot(int index) {
>     // Create slot outside of lock to narrow sync region
>     Slot newSlot = new Slot();
>     Slot[] a = arena;
>     synchronized (a) {
>         if (a[index] == null)
>             a[index] = newSlot;
>     }
> }
> 
> How can one estimate the trade-off between possibly unnecessary
> instantiations and narrower synch blocks?
> 
> 
> 
> 
>     Message: 1
>     Date: Sun, 3 May 2009 08:24:48 -0400
>     From: Tim Peierls <tim at peierls.net>
>     Subject: Re: [concurrency-interest] Concurrency-interest Digest, Vol 52, Issue 1
>     To: Bharath Ravi Kumar <reachbach at gmail.com>
>     Cc: concurrency-interest at cs.oswego.edu
> 
>     Reads and writes of volatiles have memory effects under the JMM.  The idiom
>     described in EJ2e, Item 71, minimizes the number of volatile reads and writes
>     through the use of a temporary variable.  Josh Bloch says (in Item 71) that
>     on his machine this code was 25 percent faster than the obvious version with
>     no temporary variable.  Item 71 also has good advice about when the use of
>     this idiom is appropriate.
> 
>     --tim
> 
> 
> ------------------------------------------------------------------------
> 
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
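
For anyone without EJ2e at hand, the Item 71 idiom Tim refers to above looks 
roughly like this (sketched from memory, so treat the names and details as 
illustrative rather than a verbatim quote from the book):

    // Double-check idiom for lazy initialization of an instance field.
    // The local variable 'result' ensures the volatile field is read only
    // once on the common path where the field is already initialized.
    private volatile FieldType field;

    private FieldType getField() {
        FieldType result = field;          // single volatile read if initialized
        if (result == null) {              // first check (no locking)
            synchronized (this) {
                result = field;
                if (result == null)        // second check (with locking)
                    field = result = computeFieldValue();
            }
        }
        return result;
    }

Without the local variable, the already-initialized fast path would read the 
volatile field twice (once for the null check and once for the return), which 
is presumably where the roughly 25 percent difference Tim mentions comes from.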

