[concurrency-interest] Java Memory Model and ParallelStream
shade at redhat.com
Fri Mar 6 05:51:26 EST 2020
On 3/6/20 11:40 AM, Luke Hutchison via Concurrency-interest wrote:
> Thanks. That's pretty interesting, but I can't think of an optimization that would have that effect.
> Can you give an example?
Method gets inlined, and boom: optimizer does not even see the method boundary.
> There's no "element-wise volatile" array unless you resort to using an AtomicReferenceArray,
> which creates a wrapper object per array element, which is wasteful on computation and space.
Not really related to this question, but: VarHandles provide "use-site" volatility without
"def-site" volatility. In other words, you can access any non-volatile element as if it is volatile.
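A minimal sketch of that use-site volatility (the class name is mine, for illustration): a plain int[] is declared without any volatile modifier, yet individual elements are read and written with volatile semantics through an array-element VarHandle.

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class UseSiteVolatile {
    public static void main(String[] args) {
        // VarHandle over elements of a plain (non-volatile) int[]:
        VarHandle AH = MethodHandles.arrayElementVarHandle(int[].class);
        int[] a = new int[4];

        AH.setVolatile(a, 0, 42);           // volatile write to element 0
        int v = (int) AH.getVolatile(a, 0); // volatile read of element 0

        System.out.println(v); // prints 42
    }
}
```

No wrapper object per element, unlike AtomicReferenceArray; the volatility lives at the access site, not at the declaration site.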
> I have to assume this is not the case, because the worker threads should all go quiescent at the end
> of the stream, so should have flushed their values out to at least L1 cache, and the CPU should
> ensure cache coherency between all cores beyond that point. But I want to make sure that can be
Stop thinking at the low level? That would only confuse you.
Before trying to wrap your head around Streams, consider the plain thread pool:
ExecutorService e = Executors.newFixedThreadPool(1);
int[] a = new int[1];                 // plain, non-volatile array
Future<?> f = e.submit(() -> a[0]++);
f.get();                              // result acquisition
System.out.println(a[0]); // guaranteed to print "1".
This happens because all actions in the worker thread (so all writes in lambda body) happen-before
all actions after result acquisition (so all reads after Future.get()). Parallel streams carry the
same guarantee: what the worker tasks do happens-before the actions that follow the terminal
operation.
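A minimal sketch of that guarantee for parallel streams (the class name is mine, for illustration): worker threads fill a plain array concurrently, and once the terminal operation returns, those writes are visible to the calling thread without any volatile or atomic machinery.

```java
import java.util.stream.IntStream;

public class StreamVisibility {
    public static void main(String[] args) {
        int[] data = new int[1000]; // plain array: no volatile, no atomics

        // Worker threads write elements concurrently...
        IntStream.range(0, data.length).parallel().forEach(i -> data[i] = i + 1);

        // ...and completion of the terminal op happens-before these reads:
        int sum = IntStream.of(data).sum();
        System.out.println(sum); // prints 500500
    }
}
```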