[concurrency-interest] RFR: 8065804: JEP 171: Clarifications/corrections for fence intrinsics
stephan.diestelhorst at gmail.com
Thu Dec 11 17:01:34 EST 2014
On Thursday, 11 December 2014 at 15:36:27, Andrew Haley wrote:
> On 12/11/2014 02:54 PM, Stephan Diestelhorst wrote:
> > You pretty much swapped DMBs and MFENCEs ;) So MFENCEs are local, in
> > that they need to drain the write buffer before allowing a later load
> > to access the data. DMBs, on the other hand, at least conceptually need
> > to make sure that stores from other cores have become visible everywhere
> > once the local core has seen them before the DMB.
> Excuse me? Now I'm even more confused.
> If this core has seen a store from another core, then a DMB on this
> core makes that store visible to all other cores, even if the store
> had no memory fence?
Yep. Conceptually it has to. Imagine that in the IRIW example there is
not even a fence after the stores:
Memory: foo = bar = 0

T1          T2          T3          T4
foo := 1    bar := 1    r1 = foo    r3 = bar
                        r2 = bar    r4 = foo

Is r1 == 1 && r2 == 0 && r3 == 1 && r4 == 0 possible?
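The IRIW (independent reads of independent writes) shape above can be sketched as a small Java harness; the class and method names here are mine, not from the thread. With plain accesses, the question is whether T3 and T4 can disagree about the order of the two independent stores. Java's volatile accesses are sequentially consistent, so with volatile the outcome above is forbidden:

```java
// Minimal IRIW litmus sketch (names are illustrative, not from the thread).
// With volatile variables, Java forbids the outcome where T3 and T4
// observe the two independent stores in opposite orders.
public class Iriw {
    static volatile int foo, bar;

    // Runs the litmus test and returns how often the forbidden outcome
    // r1 == 1 && r2 == 0 && r3 == 1 && r4 == 0 was observed.
    static int run(int iterations) throws InterruptedException {
        int forbidden = 0;
        for (int i = 0; i < iterations; i++) {
            foo = 0;
            bar = 0;
            int[] r = new int[4];
            Thread t1 = new Thread(() -> foo = 1);
            Thread t2 = new Thread(() -> bar = 1);
            Thread t3 = new Thread(() -> { r[0] = foo; r[1] = bar; });
            Thread t4 = new Thread(() -> { r[2] = bar; r[3] = foo; });
            t1.start(); t2.start(); t3.start(); t4.start();
            t1.join(); t2.join(); t3.join(); t4.join();
            if (r[0] == 1 && r[1] == 0 && r[2] == 1 && r[3] == 0) {
                forbidden++;
            }
        }
        return forbidden;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("forbidden outcomes: " + run(2000));
    }
}
```

Dropping volatile turns the writes and reads into plain racy accesses, and on sufficiently weak hardware the forbidden outcome becomes observable; that is the scenario the discussion below is about.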
> So the simple act of reading a memory location and then doing a DMB
> causes a previous store to that memory location to become visible
> to all cores.
Yes, if a load before the DMB reads the updated value of that store. If
you look through the ARM documentation, this is precisely the reason for
the somewhat complex description with the recursive definition of what
really is before and after the barrier. The description tells you that
the barrier not only orders accesses issued on the same core before /
after it, but also stores that were observed by loads before the
barrier, and likewise for accesses happening after the barrier. This is
necessary because otherwise there would be no way to enforce a
consistent global store order in a weak memory model.
The reason fences are so simple on TSO(-like) architectures is
precisely that stores are already globally ordered, so the fence does
not have to order them; it only has to drain the local store buffer
before later loads.
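On TSO the only reordering a program can observe is a store being delayed past a later load, so a single full fence between a store and a subsequent load (the classic Dekker / store-buffering shape) is all that is needed. A sketch of that pattern, assuming Java 9+'s `VarHandle.fullFence()` (the JEP 171 thread predates it and used the `Unsafe` fence intrinsics instead); class and method names are mine:

```java
import java.lang.invoke.VarHandle;

// Store-buffering (Dekker) litmus sketch. x and y are deliberately plain,
// not volatile; the full fence alone prevents the store->load reordering,
// which on x86 means draining the store buffer (MFENCE or a locked op).
public class StoreLoad {
    static int x, y;

    // Returns how often both threads read 0, i.e. how often the
    // store->load reordering was observed.
    static int run(int iterations) throws InterruptedException {
        int bothZero = 0;
        for (int i = 0; i < iterations; i++) {
            x = 0;
            y = 0;
            int[] r = new int[2];
            Thread t1 = new Thread(() -> {
                x = 1;
                VarHandle.fullFence();  // store x must be visible before loading y
                r[0] = y;
            });
            Thread t2 = new Thread(() -> {
                y = 1;
                VarHandle.fullFence();  // store y must be visible before loading x
                r[1] = x;
            });
            t1.start(); t2.start();
            t1.join(); t2.join();
            if (r[0] == 0 && r[1] == 0) {
                bothZero++;
            }
        }
        return bothZero;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("r1 == r2 == 0 observed: " + run(2000));
    }
}
```

Remove the two fences and `r[0] == 0 && r[1] == 0` becomes possible even on x86, because each store can still sit in its core's store buffer while the other core's load executes.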