[concurrency-interest] Does factoring out VarHandle-based manipulations cause performance penalties?

Dávid Karnok akarnokd at gmail.com
Wed Aug 9 07:20:31 EDT 2017


Thanks Aleksey!

I did a benchmark and VarHandles seem to work fine compared to
AtomicReferences (assuming that I measured the right setup):

Benchmark                     Mode  Cnt          Score         Error  Units
VarHandleCostPerf.baseline1  thrpt    5  159276715,278 ± 3571493,763  ops/s
VarHandleCostPerf.bench1     thrpt    5  162339382,232 ±  763533,693  ops/s
VarHandleCostPerf.baseline2  thrpt    5   72675096,238 ± 1262485,061  ops/s
VarHandleCostPerf.bench2     thrpt    5   84963708,660 ±  596244,817  ops/s
VarHandleCostPerf.baseline3  thrpt    5   38747819,413 ± 1513177,407  ops/s
VarHandleCostPerf.bench3     thrpt    5   47328852,938 ±  157493,140  ops/s
VarHandleCostPerf.baseline4  thrpt    5   38047055,938 ±  316562,325  ops/s
VarHandleCostPerf.bench4     thrpt    5   38053864,102 ±  180163,924  ops/s
VarHandleCostPerf.baseline5  thrpt    5   30075092,319 ±  151191,006  ops/s
VarHandleCostPerf.bench5     thrpt    5   29819608,499 ± 1088452,474  ops/s
VarHandleCostPerf.baseline6  thrpt    5   24924283,770 ±  214311,889  ops/s
VarHandleCostPerf.bench6     thrpt    5   24872577,980 ±  390354,651  ops/s
VarHandleCostPerf.baseline7  thrpt    5   21210169,977 ±  282669,696  ops/s
VarHandleCostPerf.bench7     thrpt    5   21083601,549 ±  424591,111  ops/s

Code:
https://gist.github.com/akarnokd/64430b072e7f042a8be8b1e476efb383

Run:
i7 4790, Windows 7 x64, Java 9b181, JMH 1.19
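
For reference, here is a minimal sketch of the kind of baseline/bench pairing measured
above. This is not the exact gist code; the class name, the field and the measured
getAndSet operation are assumptions for illustration only:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class VarHandleCostSketch {

    Object upstream;

    // static final so the handle can be treated as a constant at its use-sites
    static final VarHandle UPSTREAM;

    static {
        try {
            UPSTREAM = MethodHandles.lookup()
                    .findVarHandle(VarHandleCostSketch.class, "upstream", Object.class);
        } catch (ReflectiveOperationException ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    static final Object TERMINATED = new Object();

    // Helper that receives the target and the VarHandle as parameters,
    // mirroring the SubscriptionHelper refactoring quoted below.
    static Object getAndSetHelper(Object target, VarHandle handle, Object value) {
        return handle.getAndSet(target, value);
    }

    // Baseline: manipulate the field directly through the static final VarHandle.
    @Benchmark
    public Object baseline() {
        return UPSTREAM.getAndSet(this, TERMINATED);
    }

    // Bench: the same operation factored out into a helper taking (Object, VarHandle).
    @Benchmark
    public Object bench() {
        return getAndSetHelper(this, UPSTREAM, TERMINATED);
    }
}

Returning the previous value from the benchmark methods keeps the operation from being
dead-code eliminated.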

2017-08-09 10:30 GMT+02:00 Aleksey Shipilev <shade at redhat.com>:

> On 08/09/2017 10:25 AM, Aleksey Shipilev wrote:
> > On 08/09/2017 10:17 AM, Dávid Karnok wrote:
> >> In my codebase, targeting Java 9, I often have to perform the same set of atomic operations on
> >> fields of various classes, for example, a deferred cancellation of Flow.Subscriptions:
> >>
> >> Flow.Subscription upstream;
> >> static final VarHandle UPSTREAM;
> >>
> >> @Override
> >> public void cancel() {
> >>     Flow.Subscription a = (Flow.Subscription)UPSTREAM.getAcquire(this);
> >>     if (a != CancelledSubscription.INSTANCE) {
> >>         a = (Flow.Subscription)UPSTREAM.getAndSet(this, CancelledSubscription.INSTANCE);
> >>         if (a != null && a != CancelledSubscription.INSTANCE) {
> >>             a.cancel();
> >>         }
> >>     }
> >> }
> >>
> >> Refactored into:
> >>
> >> final class SubscriptionHelper {
> >>
> >>     public static void cancel(Object target, VarHandle handle) {
> >>         Flow.Subscription a = (Flow.Subscription)handle.getAcquire(target);
> >>         if (a != CancelledSubscription.INSTANCE) {
> >>             a = (Flow.Subscription)handle.getAndSet(target, CancelledSubscription.INSTANCE);
> >>             if (a != null && a != CancelledSubscription.INSTANCE) {
> >>                 a.cancel();
> >>             }
> >>         }
> >>     }
> >> }
> >>
> >> @Override
> >> public void cancel() {
> >>     SubscriptionHelper.cancel(this, UPSTREAM);
> >> }
> >>
> >>
> >> I'd think the JIT can and will inline SubscriptionHelper.cancel into all its use sites, but
> >> since the cancel method no longer operates on "this" but on an arbitrary target Object, my
> >> concern is that the optimizations may not happen.
> >>
> >> I haven't noticed any performance penalties so far, but I remember Aleksey Shipilev mentioning
> >> somewhere, some time ago, a warning about such out-of-context VarHandle uses.
> >
> > Like with Unsafe, like with Atomic*FieldUpdaters, like with *Handles in general, the compiler's
> > ability to optimize depends on constant propagation. Putting the VarHandle into a static final
> > field helps that a lot, via the same mechanism by which a static final OFFSET helps the
> > performance of Unsafe accesses.
> >
> > In your case above, making the VarHandle a method parameter is a performance-risky move, but it
> > is mitigated by the use-site, which loads it from the static final field anyway. Thus, if the
> > method is inlined, you get the same benefits. The concern about "Object" vs. "this" is not valid
> > there, I think, because inlining propagates type information too.
>
> I should have mentioned that, at least in HotSpot, there is a real problem with type *profile*
> pollution, because the type profile is context-agnostic and bound to the concrete bytecode index.
> So if SubscriptionHelper.cancel gets called with different "targets", *and* the optimization
> depends on the profile, inlining would not help to untangle that knot. Pretty sure the static
> type propagation works fine there, but do test.
>
> Thanks,
> -Aleksey
>
>
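
For completeness, a sketch of the static final VarHandle setup that the constant-folding
point above relies on; the enclosing class name here is hypothetical, and the quoted code
omits this lookup:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;
import java.util.concurrent.Flow;

final class DeferredCancellation {

    volatile Flow.Subscription upstream;

    // static final so the JIT can constant-fold the handle at inlined use-sites
    static final VarHandle UPSTREAM;

    static {
        try {
            UPSTREAM = MethodHandles.lookup().findVarHandle(
                    DeferredCancellation.class, "upstream", Flow.Subscription.class);
        } catch (ReflectiveOperationException ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }
}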


-- 
Best regards,
David Karnok