[concurrency-interest] RFR optimization of CopyOnWriteArrayList.addIfAbsent()
vitalyd at gmail.com
Tue Apr 2 19:43:15 EDT 2013
I actually like Ivan's version as well.
The difference in performance may also be due to uneliminated range checks in the current
code: the loop bound is stored in a local, but it's unclear whether the JIT can still
trace through and determine that the index is always within bounds. The store
into the new array may also take a range-check hit if, again, the JIT doesn't figure it out; one would
have to look at the disassembly. The Arrays.copyOf() intrinsic obviously will
avoid range checks.
It's also unclear whether the branch in the loop body kills certain
optimizations, so Ivan's code, despite (possibly) walking the array twice,
is easier to optimize aggressively, to the point where it may become a
memory-bandwidth-limited operation (for very large sizes).
Sent from my phone
On Apr 2, 2013 6:48 PM, "Martin Buchholz" <martinrb at google.com> wrote:
> On Tue, Apr 2, 2013 at 3:45 PM, Ivan Gerasimov <ivan.gerasimov at oracle.com> wrote:
> > Thank you, Ulf!
> >> maybe the old code wins for looong arrays, so there could be a threshold
> >> to decide between old and new code:
> > I've modified the benchmark code to test arrays with 90'000 to 100'000
> > elements. (Previously was testing 1 to 100 elements.)
> > The performance gain turns out to be even more significant.
> > On my machine tests show that with that many elements the new code runs
> > 40% faster.
> > Honestly, I didn't expect that. I thought my code might be a bit slower,
> > and I hoped it would not be much slower.
> Yeah, that's a bit surprising. Perhaps because you're avoiding the branch
> of testing object for null on each iteration?