[concurrency-interest] Why not "weakNanoTime" for jdk9?
davidcholmes at aapt.net.au
Fri Mar 6 20:44:24 EST 2015
The VM uses whatever high-resolution "clock" the OS provides it. Where
experience has shown that a clock is not monotonic when it is meant to be, we
have added internal guards to ensure time doesn't go backwards (which means
it sometimes stands still, which has its own issues). As time goes by and
things like a frequency-stable, cross-core-synchronized TSC become available,
the need for the extra monotonicity guard is reduced, and for those
situations where you know what your platform provides we could have a VM flag
to say "trust the OS timer". I know we have an open bug for that on Solaris.
A new API that simply returns whatever the OS returns may be useful to some
people. However, I would not support trying to implement something directly
against the hardware in the VM.
Just be aware that the implementation of these timers varies greatly
depending on the hardware and the OS. And once you throw in virtualization
many more problems arise.
Also, we'll start using CLOCK_MONOTONIC_RAW on Linux at some point. (There's
a bug open for that too.)
> -----Original Message-----
> From: concurrency-interest-bounces at cs.oswego.edu
> [mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of Justin
> Sent: Saturday, 7 March 2015 10:42 AM
> To: Aleksey Shipilev; Andrew Haley; Martin Buchholz;
> concurrency-interest; core-libs-dev
> Subject: Re: [concurrency-interest] Why not "weakNanoTime" for jdk9?
> Aleksey Shipilev wrote:
> > It would really help if you list what problems weakNanoTime is
> > supposed to solve.
> I was talking to Martin about this idea recently so I'll take a shot
> at describing why it's appealing to me (with the usual disclaimer
> that I know I'm much less of an expert than most other folks here).
> The main case I'm interested in is handling timeout calculations in
> concurrent operations. The usual case should be that the operation
> succeeds without timing out, and if it _does_ time out it's often
> after waiting several seconds or minutes, in which case being off
> by, say, a few microseconds is not a big deal.
> Given those assumptions, we really want the usual case (success) to
> be as fast as possible, and especially not to impose any additional
> synchronization or volatile accesses. Since strict monotonicity
> requires checking some kind of centrally synchronized clock state,
> it fails that use case.
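A minimal sketch of what that centrally synchronized state looks like may make the cost concrete. This is an assumed illustration, not JDK code: a strictly monotonic wrapper over a possibly non-monotonic source must funnel every caller through one shared atomic, and that contention is exactly what the success path would like to avoid.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch: a strictly monotonic clock built over a possibly
 * non-monotonic source. The point is the shared state it needs --
 * every caller, on every thread, touches the same AtomicLong.
 */
final class MonotonicClock {
    // Centrally synchronized clock state: all callers contend here.
    private static final AtomicLong lastSeen = new AtomicLong(Long.MIN_VALUE);

    static long nanoTime() {
        long raw = System.nanoTime(); // possibly non-monotonic source
        while (true) {
            long prev = lastSeen.get();
            if (raw <= prev) {
                return prev;          // clamp: time "stands still"
            }
            if (lastSeen.compareAndSet(prev, raw)) {
                return raw;
            }
            // Lost the race to a newer value; retry against it.
        }
    }
}
```

Even in the common uncontended case, the CAS imposes ordering and cache traffic on every read of the clock.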
> Furthermore, in this particular use case, it's trivial to have the
> appearance of monotonicity _within_ a particular operation: Just
> keep a local variable with the last time seen, and only update it if
> the next time seen is greater than the last time seen. No extra
> synchronization is required.
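The local-variable technique described above might look like the following sketch. `weakNanoTime()` here is a hypothetical stand-in (backed by `System.nanoTime()` for illustration), since no such JDK API exists:

```java
import java.util.function.BooleanSupplier;

/** Sketch: per-operation monotonicity via a local variable only. */
final class TimeoutLoop {
    // Hypothetical stand-in for the proposed weakNanoTime(); not a real JDK API.
    static long weakNanoTime() { return System.nanoTime(); }

    /** Waits for {@code done}, giving up after roughly timeoutNanos. */
    static boolean awaitWithTimeout(BooleanSupplier done, long timeoutNanos) {
        long last = weakNanoTime();   // last time seen, local to this operation
        long remaining = timeoutNanos;
        while (!done.getAsBoolean()) {
            if (remaining <= 0) {
                return false;         // timed out
            }
            long now = weakNanoTime();
            if (now > last) {         // only advance if time moved forward
                remaining -= now - last;
                last = now;
            }
            Thread.yield();
        }
        return true;
    }
}
```

A backwards blip in the underlying timer simply fails the `now > last` check, so the deadline never moves the wrong way, and no shared or volatile state is involved.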
> The semantics I'm imagining would be for a very fast timer that is
> _usually_ monotonic, as long as the current thread stays on one
> processor, with occasional blips when switching between processors.
> We would still want those blips to be as small as practically
> achievable, so I guess there would still have to be some occasional
> synchronization to keep the fast timer within some tolerance of the
> central system clock.
> The way I see it, good concurrency semantics are about acknowledging
> the reality of the hardware, and a strictly monotonic clock is
> simply not the reality of the hardware when there's more than one
> processor involved.
> Actually, come to think of it, given an underlying non-monotonic
> timer, the weakNanoTime method could easily provide monotonicity on
> a per-thread basis without any synchronization overhead. That would
> mean most concurrent code wouldn't even have to change to become
> tolerant of non-monotonicity. It would just have to be made really
> clear to users that different threads might see timer values out of
> order relative to each other, though still within some best-effort
> tolerance.
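The per-thread idea could be sketched roughly as follows. Again, `weakNanoTime` is an assumed API simulated with `System.nanoTime()`; each thread clamps only against its own last-seen value, so no cross-thread synchronization is needed, and different threads may still disagree about ordering:

```java
/**
 * Sketch: per-thread monotonicity over a hypothetical weak timer.
 * Within one thread, values never go backwards; across threads,
 * no ordering is guaranteed.
 */
final class PerThreadNanoTime {
    // Each thread tracks only its own last-seen value; no shared state.
    private static final ThreadLocal<long[]> LAST =
        ThreadLocal.withInitial(() -> new long[] { Long.MIN_VALUE });

    // Hypothetical stand-in for the proposed weakNanoTime(); not a real JDK API.
    private static long weakSource() { return System.nanoTime(); }

    static long nanoTime() {
        long[] last = LAST.get();
        long now = weakSource();
        if (now > last[0]) {
            last[0] = now;   // advance only if time moved forward
        }
        return last[0];      // never moves backwards within this thread
    }
}
```

The `ThreadLocal` lookup is cheap and uncontended, which is the whole point: the monotonicity guarantee is scoped to exactly the place where most timeout code needs it.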
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
More information about the Concurrency-interest mailing list