[concurrency-interest] Java Memory Model and ParallelStream

Aleksey Shipilev shade at redhat.com
Fri Mar 6 05:51:26 EST 2020


Hi,

On 3/6/20 11:40 AM, Luke Hutchison via Concurrency-interest wrote:
> Thanks. That's pretty interesting, but I can't think of an optimization that would have that effect.
> Can you give an example?

Method gets inlined, and boom: optimizer does not even see the method boundary.

> There's no "element-wise volatile" array unless you resort to using an AtomicReferenceArray,
> which creates a wrapper object per array element, which is wasteful on computation and space.

Not really related to this question, but: VarHandles provide "use-site" volatility without
"def-site" volatility. In other words, you can access any non-volatile element as if it is volatile.
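
A minimal sketch of what "use-site" volatility looks like with an array VarHandle (class and variable names are mine, for illustration):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class UseSiteVolatile {
    // VarHandle over plain int[] elements; the array itself is declared
    // without any volatile modifier ("def-site"), yet individual accesses
    // through the handle can be volatile ("use-site").
    static final VarHandle AH = MethodHandles.arrayElementVarHandle(int[].class);

    public static void main(String[] args) {
        int[] a = new int[4];           // ordinary array, no per-element wrappers
        AH.setVolatile(a, 0, 42);       // volatile write to element 0
        int v = (int) AH.getVolatile(a, 0); // volatile read of element 0
        System.out.println(v);          // prints 42
    }
}
```

No AtomicReferenceArray, no boxing: the array stays a flat int[], and only the accesses that need volatile semantics pay for them.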

> I have to assume this is not the case, because the worker threads should all go quiescent at the end
> of the stream, so should have flushed their values out to at least L1 cache, and the CPU should
> ensure cache coherency between all cores beyond that point. But I want to make sure that can be
> guaranteed.

Stop thinking in low-level terms? Caches, flushes, and coherency would only confuse you here; the Java Memory Model is specified in terms of happens-before, not hardware.

Before trying to wrap your head around Streams, consider the plain thread pool:

    ExecutorService e = Executors.newFixedThreadPool(1);
    int[] a = new int[1];
    Future<?> f = e.submit(() -> a[0]++);
    f.get();
    System.out.println(a[0]); // guaranteed to print "1".

This is guaranteed to print "1" because all actions in the worker thread (so all writes in the
lambda body) happen-before all actions after the result acquisition (so all reads after
Future.get). Parallel streams carry a similar property: everything the workers did
happens-before the return of the terminal operation.
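
The same reasoning, sketched for a parallel stream (class name is mine; the visibility guarantee follows from the terminal operation completing before the reads):

```java
import java.util.stream.IntStream;

public class ParallelStreamVisibility {
    public static void main(String[] args) {
        int[] a = new int[1024];
        // Writes performed by worker threads inside the parallel pipeline...
        IntStream.range(0, a.length).parallel().forEach(i -> a[i] = i * 2);
        // ...happen-before the reads after the terminal op (forEach) returns,
        // so every element is guaranteed to be visible here.
        System.out.println(a[1023]); // prints 2046
    }
}
```

No volatile, no synchronization in user code: the framework's own joins establish the edges.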

-- 
Thanks,
-Aleksey


