[concurrency-interest] VarHandle.setVolatile vs classical volatile write
akarnokd at gmail.com
Tue Aug 29 05:01:11 EDT 2017
My target setting is usually the development of libraries to execute and
support reactive dataflows, where the typical question is: how many
items/objects/messages can a particular flow transmit over time. Therefore,
if a throughput measurement of 100 Mops/s jumps to 120 Mops/s after some
optimization, that is more telling to me than seeing the time go from 6 ns
to 5 ns per op.
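The two reporting styles above are just reciprocals of each other. A minimal sketch of the conversion (class and method names are mine, for illustration only): 100 Mops/s is 10 ns/op, and the 6 ns → 5 ns improvement is the same fact as 167 Mops/s → 200 Mops/s.

```java
// Hypothetical helper showing the Mops/s <-> ns/op relationship
// discussed above: 1 s = 1e9 ns, 1 Mops = 1e6 ops, so ns/op = 1000 / Mops.
public class Units {
    static double nsPerOp(double mopsPerSec) {
        return 1000.0 / mopsPerSec;   // e.g. 100 Mops/s -> 10 ns/op
    }

    static double mopsPerSec(double nsPerOp) {
        return 1000.0 / nsPerOp;      // e.g. 5 ns/op -> 200 Mops/s
    }

    public static void main(String[] args) {
        System.out.println(nsPerOp(100.0));   // 10.0
        System.out.println(mopsPerSec(5.0));  // 200.0
    }
}
```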
2017-08-29 10:19 GMT+02:00 Aleksey Shipilev <shade at redhat.com>:
> On 08/29/2017 10:04 AM, Andrew Haley wrote:
> > On 29/08/17 01:38, Paul Sandoz wrote:
> >> Yes, you are misreading them. The units are the number of operations per
> > This default is always very confusing. I always use jcstress and
> > set units to nanoseconds. That would make a much more sensible
> > default.
> Living in the nanobenchmarks world, I generally agree. Throughput per
> second was the widely agreed default in the JMH 1.0 era, which focused on
> larger benchmarks. However, I frequently see large numbers in JMH output
> as a litmus test for a novice user: if the submitter cannot (or did not
> bother to) choose the right units for the experiments, maybe the submitter
> does not know how to operate JMH, and thus the chance that the benchmarks
> need more attention is much higher. :) The correlation is strong with this
> one...
> As for benchmark mode, there are always people who would expect "larger is
> better", and for them
> "average time" would be confusing. Been there, tried that.
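For reference, the two write forms the subject line compares can be sketched with the standard library alone (class and field names below are mine, not from the thread): a classical `volatile` field store versus `VarHandle.setVolatile` on a plain field, which by the `java.lang.invoke` spec has the same volatile store semantics.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Sketch of the thread's subject: classical volatile write vs
// VarHandle.setVolatile. Both perform a store with volatile semantics.
public class VolatileWrites {
    volatile long v;          // classical volatile field
    long w;                   // plain field, written through the VarHandle below

    static final VarHandle W;
    static {
        try {
            W = MethodHandles.lookup()
                    .findVarHandle(VolatileWrites.class, "w", long.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void classicalWrite(long x) {
        v = x;                    // volatile store via the field modifier
    }

    void handleWrite(long x) {
        W.setVolatile(this, x);   // volatile store via the VarHandle
    }

    public static void main(String[] args) {
        VolatileWrites o = new VolatileWrites();
        o.classicalWrite(1);
        o.handleWrite(2);
        System.out.println(o.v + " " + (long) W.getVolatile(o)); // 1 2
    }
}
```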