[concurrency-interest] Java Memory Model and ParallelStream

Luke Hutchison luke.hutch at gmail.com
Fri Mar 6 07:18:59 EST 2020

OK, that answers all my questions. Thanks for taking the time to respond
(and for the pointer to MethodHandles.arrayElementVarHandle).

On Fri, Mar 6, 2020 at 4:45 AM Aleksey Shipilev <shade at redhat.com> wrote:

> On 3/6/20 12:11 PM, Luke Hutchison wrote:
> > ...which is why I specifically excluded inlining in my original
> > question (or said consider the state after all inlining has taken
> > place). I realize that inlining doesn't just happen at compile time,
> > and the JIT could decide at any point to inline a function, but I want
> > to ignore that (very real) possibility to understand whether
> > reordering can take place across method boundaries _if inlining never
> > happens_.
> That is an odd exclusion. Inlining is the mother of all optimizations: it
> expands the optimization
> scope. But even "if" formal inlining does not happen, you can devise
> closed-world/speculative
> optimizations that peek into method implementations and use that knowledge
> to optimize. Coming up
> with the concrete example is counter-productive, IMO, because it plays
> into caring about
> implementation specifics, rather than the high-level guarantees.
> >     > There's no "element-wise volatile" array unless you resort to
> >     > using an AtomicReferenceArray, which creates a wrapper object
> >     > per array element, which is wasteful on computation and space.
> >
> >     Not really related to this question, but: VarHandles provide
> >     "use-site" volatility without "def-site" volatility. In other
> >     words, you can access any non-volatile element as if it is
> >     volatile.
> >
> > Thanks for the pointer, although if you need to create one VarHandle
> > per array element to guarantee this behavior,
> No, you don't need a VH per array element; you can have one that accepts
> the array and the index:
> https://docs.oracle.com/javase/9/docs/api/java/lang/invoke/MethodHandles.html#arrayElementVarHandle-java.lang.Class-
> > then that's logically no different than wrapping each array element
> > in a wrapper object with AtomicReferenceArray.
> No, it is not the same. AtomicReferenceArray gives you one additional
> indirection to its own array.
> VarHandle can do the operation _on the array you give it_.
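[Editor's note: a minimal sketch of the use-site volatility described above. The class and field names are illustrative, not from the thread. One handle obtained from MethodHandles.arrayElementVarHandle serves every int[] instance, taking the array and the index as coordinates, so no per-element handles or wrapper objects are needed:]

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class VolatileArrayAccess {
    // One handle covers all int[] arrays -- not one per element.
    static final VarHandle INT_ARRAY =
            MethodHandles.arrayElementVarHandle(int[].class);

    public static void main(String[] args) {
        int[] data = new int[8]; // plain, non-volatile array

        // Volatile-strength write directly into the plain array.
        INT_ARRAY.setVolatile(data, 3, 42);

        // Volatile-strength read of the same slot.
        int v = (int) INT_ARRAY.getVolatile(data, 3);
        System.out.println(v); // prints 42
    }
}
```

[Unlike AtomicReferenceArray, there is no extra backing array: the handle operates on the array you pass it, exactly as Aleksey says.]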
> > I guess fundamentally I was asking if any memory reordering (or cache
> > staleness) can happen across synchronization barriers. It sounds like
> > that is not the case, due to synchronization barriers implementing a
> > computational "happens-before" guarantee, which enforces the same
> > "happens-before" total ordering on memory operations across the
> > barrier.
> Tell me what you mean by "synchronization barrier" (and how it relates
> to what you are asking, to avoid the XY problem), and then we can talk
> about what properties it has. Loosely defined things can have whatever
> properties :)
> Otherwise, look, high-level guarantees are king. They do not force you
> to know the low-level
> details. In just about every parallel implementation everything that
> worker threads do
> happens-before their thread/task termination/publication, and
> thread/result termination/publication
> happens-before the detection/acquisition of the result.
> It is not really relevant how that detection/acquisition happens:
>  - successful Thread.join() for a terminating worker; (guaranteed by JLS)
>  - successful Future.get() from executor; (guaranteed by package spec in
> java.util.concurrent.*)
>  - successful forEach for a parallel stream; (provided by extension?)
> --
> Thanks,
> -Aleksey
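[Editor's note: a sketch of Aleksey's last bullet. This assumes the happens-before edge he describes -- everything workers do happens-before the terminal operation's return -- which he himself notes is "provided by extension" for parallel streams rather than spelled out per-method. Under that assumption, plain (non-volatile, unlocked) writes made by stream workers are visible to the caller once forEach returns:]

```java
import java.util.stream.IntStream;

public class ParallelStreamVisibility {
    public static void main(String[] args) {
        int[] results = new int[1_000]; // plain array: no volatile, no locks

        // Worker threads perform plain writes to distinct elements.
        IntStream.range(0, results.length)
                 .parallel()
                 .forEach(i -> results[i] = i * i);

        // forEach has returned: under the assumed happens-before edge,
        // all worker writes are visible to this thread here.
        long sum = 0;
        for (int r : results) sum += r;
        System.out.println(sum); // prints 332833500
    }
}
```

[The same shape holds for the other two bullets: replace the terminal operation with a successful Thread.join() or Future.get(), both of which carry an explicitly specified happens-before edge.]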