[concurrency-interest] On A Formal Definition of 'Data-Race'

thurstonn thurston at nomagicsoftware.com
Tue Apr 16 18:31:19 EDT 2013


Nathan Reynolds wrote
> All things being equal, reading a volatile field and reading a 
> non-volatile field from L1/L2/L3/L4 cache or memory perform the same.  
> The instructions are exactly the same (on x86).
> 
> Writing a volatile field, however, does cost more than writing a 
> non-volatile one.  Writing to a volatile field requires a memory fence 
> on x86 and many other processors, and this fence takes cycles.
> 
> Nathan Reynolds 
> <http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> | 
> Architect | 602.333.9091
> Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology

Sure, that's my understanding as well.  I wasn't asking about the 'cost' of
reading #stopped when it is declared volatile; as you mentioned, there isn't
one.  My question was about the *timing* of the visibility of #stopped in the
*non-volatile* case, given cache coherency.
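
For concreteness, here is a minimal, purely hypothetical sketch of the kind of
stop-flag pattern I have in mind (the class and method names are made up for
illustration; only the #stopped field comes from the earlier discussion):

    class Worker implements Runnable {
        // With 'volatile', the read in run() compiles to the same plain load
        // on x86 as a non-volatile read; the difference is on the write side.
        private volatile boolean stopped = false;

        @Override
        public void run() {
            while (!stopped) {   // plain load on x86, volatile or not
                // ... do work ...
            }
        }

        void stop() {
            stopped = true;      // volatile store: followed by a fence on x86
        }
    }

Restated against that sketch, the question is: if #stopped were *not*
volatile, how soon after stop() runs would the loop in run() observe the new
value, given that the caches are coherent?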



