[concurrency-interest] CPU cache-coherency's effects on visibility

Vitaly Davidovich vitalyd at gmail.com
Thu Feb 14 12:27:00 EST 2013


It's not guaranteed, because the compiler can hoist the read out of the
loop (if the field is non-volatile).  If it doesn't do that, then for all
practical purposes (i.e. exotic hypotheticals aside) yes, you'll see it once
the store buffer drains to L1 (assuming the store even went there in the
first place, and again assuming nothing exotic keeps the buffer from ever
draining).
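
To make the hoisting concrete, here is a rough sketch (the loop shape and
the log call are taken from the example quoted below; the local variable is
a placeholder of mine) of the transformation the compiler is permitted to
perform on Thread 1's loop when the field is not volatile, and the one-word
change that rules it out:

    // As written: this.shared is re-read on every iteration in the source.
    // (InterruptedException handling elided, as in the original example.)
    while (true) {
        log(this.shared);
        Thread.sleep(1000L);
    }

    // What the compiler may legally turn it into when 'shared' is NOT
    // volatile: the read is hoisted out of the loop, so the same (possibly
    // stale) value can be logged forever.
    Foo snapshot = this.shared;
    while (true) {
        log(snapshot);
        Thread.sleep(1000L);
    }

    // Declaring the field volatile forbids that hoisting and guarantees the
    // write eventually becomes visible to the reader:
    //     volatile Foo shared;

In practice a call inside the loop body often defeats this particular
optimization, but the JMM does not forbid it, which is why the answer above
is "not guaranteed" rather than "no".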

Sent from my phone
On Feb 14, 2013 11:56 AM, <thurston at nomagicsoftware.com> wrote:

> Given that all (?) modern CPUs provide cache-coherency, in the following
> (admittedly superficial example):
>
> Thread 1:
> while (true)
> {
>     log(this.shared);
>     Thread.sleep(1000L);
> }
>
> Thread 2:
>    this.shared = new Foo();
>
> with Thread 2's code invoked only once, and some time (in a wall-clock
> sense) significantly after Thread 1 has started; and with no operations
> performed by either thread that form a happens-before relationship (in
> the JMM sense).
>
> Is Thread 1 *guaranteed* to eventually see the write by Thread 2?
> And, if so, is that guarantee provided not by the JMM, but by the
> cache-coherency of the CPUs?
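
For completeness, a self-contained sketch of the scenario described above
(class name, exception handling, and timings are placeholders of mine, and
Foo is replaced by Object) that can be run with and without the volatile
keyword:

    import java.util.concurrent.TimeUnit;

    public class SharedVisibility {
        // Not volatile: the JMM gives no guarantee that the reader ever
        // observes the write below.  Adding 'volatile' restores that
        // guarantee.
        static Object shared;

        public static void main(String[] args) throws InterruptedException {
            // Thread 1: polls the field once per second, as in the question.
            Thread reader = new Thread(() -> {
                while (true) {
                    System.out.println("shared = " + shared);
                    try {
                        TimeUnit.SECONDS.sleep(1);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            });
            reader.setDaemon(true);
            reader.start();

            // Thread 2: the single write, well after the reader has started.
            TimeUnit.SECONDS.sleep(5);
            shared = new Object();

            // Keep the process alive long enough to watch the reader's output.
            TimeUnit.SECONDS.sleep(10);
        }
    }

On most JVMs this prints the new value shortly after the write (the calls
inside the loop tend to defeat hoisting, and the store buffer drains
quickly), but without volatile that is an observation about current
hardware and JITs, not something the JMM promises.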

