[concurrency-interest] Relativity of guarantees provided by volatile

√iktor Ҡlang viktor.klang at gmail.com
Wed Aug 22 16:08:02 EDT 2012


On Wed, Aug 22, 2012 at 9:57 PM, Vitaly Davidovich <vitalyd at gmail.com> wrote:

> A single-threaded program may still migrate across processors, so the OS
> will do the flushing anyway.
>

Yes, of course. Core migration will have to include packing up the thread's
belongings and shipping them over to the new core.
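
To make the point concrete: visibility within a single thread, or across an explicit synchronization point, is guaranteed by the JMM regardless of which core the thread runs on. A minimal sketch (class and field names are my own, not from the thread) showing that even a plain, non-volatile write is visible across `Thread.start()` and `Thread.join()`, because each establishes a happens-before edge; the "flush" is only required at such synchronization points, not after every write:

```java
// Sketch with hypothetical names: a plain (non-volatile) field is
// guaranteed visible to another thread whenever a happens-before edge
// connects the write and the read. Thread.start() and Thread.join()
// each create such an edge, so no volatile is needed here.
public class VisibilityViaJoin {
    static int plainField; // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        plainField = 1;                 // write before start()
        Thread t = new Thread(() -> {
            // start() happens-before the first action of t: must see 1
            if (plainField != 1) throw new AssertionError("lost write");
            plainField = 2;             // plain write inside t
        });
        t.start();
        t.join();                       // join() happens-before this read
        if (plainField != 2) throw new AssertionError("lost write");
        System.out.println("plainField = " + plainField);
    }
}
```

Running this prints `plainField = 2`; the JMM forbids either assertion from firing, no matter how the threads migrate between cores.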


> Also, draining the store buffer is not always an expensive operation.  Really
> the scalability issue is sharing memory at all between lots of cores; it
> doesn't have to be volatile to cause problems.
>
> Sent from my phone
> On Aug 22, 2012 3:43 PM, "√iktor Ҡlang" <viktor.klang at gmail.com> wrote:
>
>>
>>
>> On Wed, Aug 22, 2012 at 7:00 PM, Zhong Yu <zhong.j.yu at gmail.com> wrote:
>>
>>> On Wed, Aug 22, 2012 at 8:24 AM, Marko Topolnik <mtopolnik at inge-mark.hr>
>>> wrote:
>>> >>> Let's say that they could in theory be reflecting the exact time. We
>>> >>> don't have to descend into the gory details of specific CPUs. Imagine a
>>> >>> CPU specifically built to allow precise wall-clock time observation.
>>> >>
>>> >> Then I think it depends on how the CPU makes that time available to
>>> >> clients. If it does it in a way that's equivalent to regularly updating
>>> >> a volatile variable that can be read by all threads, then timing should
>>> >> work as expected. I think the current specification for the timing
>>> >> functions is effectively much weaker, because it's silent on the issues.
>>> >>
>>> >> I think that if we agree this is a problem, then it could be fixed by
>>> >> updating the specifications for the timing functions, without touching
>>> >> the memory model per se.  I'm not sure I understand the implementation
>>> >> implications of that, so I'm neither advocating it, nor lobbying against
>>> >> it. The C++ specification is somewhere in the middle, and I remain a bit
>>> >> nervous about implementability there.
>>> >
>>> >
>>> > I have thought about this approach more carefully and the funny thing
>>> > is, it still guarantees nothing. Please consider the following
>>> > happens-before graph:
>>> >
>>> >
>>> >     Thread R                                  /--> Rr0 --> Rr1
>>> >                                   -----------+--------/
>>> >                                 /            |
>>> >     Thread W        --> Rw0 -> Ww0 ---> Rw1 -+--> Ww1
>>> >                   /                /  ------/
>>> >                  |                | /
>>> >     Thread T    Wt0 -----------> Wt1
>>> >
>>> >
>>> > We have the Timer thread T, which writes the volatile variable
>>> > "currentTime". Both threads R and W read that var. The rest is the same
>>> > as in the opening post: W writes sharedVar, R reads sharedVar.
>>> >
>>> > Analysis:
>>> >
>>> > - T writes to currentTime with actions Wt0 and Wt1;
>>> >
>>> > - W observes action Wt0 with the action Rw0;
>>> >
>>> > - W writes to sharedVar with the action Ww0;
>>> >
>>> > - W observes Wt1 with the action Rw1;
>>> >
>>> > - R observes Wt1 with the action Rr0;
>>> >
>>> > - R observes Ww0 with the action Rr1.
>>> >
>>> > Conclusions:
>>> >
>>> > - R first observes Wt1, then Ww0;
>>> >
>>> > - W first commits Ww0, then observes Wt1.
>>> >
>>> >
>>> > Since there are no cycles in the graph, there is no contradiction.
>>>
>>> Suppose W writes v=1, then observes t=1; R observes t=2, then reads v.
>>> The last read cannot see v=0.
>>>
>>> Therefore, if the R/W actions are sandwiched between reads of the timing
>>> variable, we will not detect any apparent timing paradoxes.
>>>
>>> Since we are no longer talking about physical time (beyond the JMM), but
>>> just a variable (within the JMM), the JMM guarantees that any execution
>>> of the program appears to be sequentially consistent.
>>
>>
>> So a flush of any write to an address only has to be done just prior to a
>> read of that address by some thread other than the writer.
>> So given a program that has only one thread, flushes could effectively be
>> elided.
>>
>> Cheers,
>>
>>> _______________________________________________
>>> Concurrency-interest mailing list
>>> Concurrency-interest at cs.oswego.edu
>>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>>>
>>
>>
>>
>> --
>> Viktor Klang
>>
>> Akka Tech Lead
>> Typesafe <http://www.typesafe.com/> - The software stack for
>> applications that scale
>>
>> Twitter: @viktorklang
>>
>>
>> _______________________________________________
>> Concurrency-interest mailing list
>> Concurrency-interest at cs.oswego.edu
>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>>
>>
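
Zhong Yu's "sandwich" argument builds on the standard volatile publication guarantee. A minimal sketch (class and field names are my own, modeled on the thread's sharedVar/currentTime): if the reader observes the volatile value written after the plain write, it must also observe the plain write, because write v -> write t -> read t -> read v forms a happens-before chain.

```java
// Sketch with hypothetical names of the publication guarantee underlying
// the "sandwich" argument: W writes the plain field v, then the volatile
// field t. If R's read of t sees the new value, the JMM guarantees R's
// subsequent read of v sees W's write.
public class VolatileSandwich {
    static int v;            // plain field, playing the role of sharedVar
    static volatile int t;   // volatile field, playing the role of currentTime

    public static void main(String[] args) throws InterruptedException {
        Thread w = new Thread(() -> {
            v = 1;           // plain write
            t = 1;           // volatile write: publishes v
        });
        Thread r = new Thread(() -> {
            while (t != 1) { }  // spin until the volatile write is observed
            if (v != 1) throw new AssertionError("saw t=1 but stale v");
            System.out.println("reader saw v = " + v);
        });
        r.start();
        w.start();
        w.join();
        r.join();
    }
}
```

The reader is guaranteed to print `reader saw v = 1`; a read of v=0 after observing t=1 would be exactly the timing paradox the sandwich rules out.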


-- 
Viktor Klang

Akka Tech Lead
Typesafe <http://www.typesafe.com/> - The software stack for applications
that scale

Twitter: @viktorklang