[concurrency-interest] On A Formal Definition of 'Data-Race'

oleksandr otenko oleksandr.otenko at oracle.com
Wed Apr 17 06:18:56 EDT 2013


You need to prove System.out.println isn't using shared.

Alex

On 17/04/2013 07:38, Nathan Reynolds wrote:
> Couldn't the JIT hoist the non-volatile write out of the loop?  For
> example, the following code...
>
> for (int i = 0; i < 1_000_000_000; i++)
> {
>     System.out.println(i);
>     shared = 2 * i;
> }
>
> ... could be transformed into ...
>
> for (int i = 0; i < 1_000_000_000; i++)
> {
>     System.out.println(i);
> }
>
> shared = 2 * 999_999_999;  // the value written in the final iteration
>
> ... If so, then the non-volatile write may not happen for a very long 
> time.
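>
> The mirror-image hazard on the reader side is perhaps easier to see.  A
> minimal sketch (hypothetical HoistingDemo class, not from any real code):
> the JIT is equally free to hoist the non-volatile *read* out of a reader's
> loop, so the write may never be observed at all.
>
> class HoistingDemo
> {
>     static int shared = 0;           // plain (non-volatile) field
>
>     public static void main(String[] args) throws InterruptedException
>     {
>         Thread reader = new Thread(new Runnable()
>         {
>             public void run()
>             {
>                 // The JIT may load `shared` once, hoist the read out of
>                 // the loop, and spin forever on the cached value.
>                 while (shared == 0) { }
>                 System.out.println("saw shared = " + shared);
>             }
>         });
>         reader.start();
>         Thread.sleep(100);
>         shared = 42;                 // plain write; nothing obliges the reader to see it
>         reader.join();
>     }
> }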
>
> Nathan Reynolds 
> <http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> | 
> Architect | 602.333.9091
> Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology
> On 4/16/2013 10:27 PM, Zhong Yu wrote:
>> On Tue, Apr 16, 2013 at 8:51 PM, thurstonn <thurston at nomagicsoftware.com> wrote:
>>> Vitaly Davidovich wrote
>>>> The code works as-is.
>>> Absolutely.  volatile is not needed for correctness.
>>>
>>> Vitaly Davidovich wrote
>>>> Why?
>>> Well, for performance reasons, given the 'undefined/indefinite' visibility of
>>> #hash to other threads.
>>> At least according to the JMM (which has nothing to say about CPU cache
>>> coherency), it is *possible* that each distinct thread that invokes
>>> #hashCode() *could* end up recalculating the hash.
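>>>
>>> For concreteness, the caching idiom in question looks roughly like this (a
>>> simplified sketch using a hypothetical Key class, not the actual
>>> java.lang.String source):
>>>
>>> class Key
>>> {
>>>     private final char[] value;
>>>     private int hash;            // plain field; 0 means "not computed yet"
>>>
>>>     Key(char[] value) { this.value = value; }
>>>
>>>     public int hashCode()
>>>     {
>>>         int h = hash;            // another thread's cached value may not be visible,
>>>         if (h == 0)
>>>         {
>>>             for (char c : value)
>>>                 h = 31 * h + c;  // so this thread may recompute the hash...
>>>             hash = h;            // ...and publish its own copy (a benign race)
>>>         }
>>>         return h;
>>>     }
>>> }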
>> In practice though, application threads perform very frequent
>> synchronization actions, or other operations that force the VM to
>> flush/reload. So it won't take very long for any non-volatile write in
>> one thread to become visible to other threads.
>>> Imagine a long-lived Map<String, ?>, and many threads accessing the map's
>>> keyset and, for some unknown reason, invoking #hashCode() on each key.
>>> If #hash were declared volatile, although there is no guarantee that #hash
>>> would only be calculated once, it is guaranteed that once a write to main
>>> memory was completed, every *subsequent* (here meaning after the write to
>> In the JMM, though, we cannot even express this guarantee. Say we have
>> threads T1...Tn, and each thread Ti burns `i` seconds of CPU time first,
>> then volatile-reads #hash, and if it's 0, calculates and volatile-writes
>> #hash, which takes 100 ns. We can find no guarantee in the JMM that
>> there is only one write; it is legal for every thread to see 0 from the
>> volatile read.
>>
>> Zhong Yu
>>
>>> main memory) read, no matter from which thread, would see #hash != 0 and
>>> therefore skip the calculation.
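>>>
>>> In code, the proposed change is simply the same sketch with the field made
>>> volatile (again hypothetical, not the real implementation); it guarantees
>>> that a completed write is visible to subsequent reads, but not that the
>>> hash is computed only once:
>>>
>>> class Key
>>> {
>>>     private final char[] value;
>>>     private volatile int hash;   // a completed write is visible to later reads
>>>
>>>     Key(char[] value) { this.value = value; }
>>>
>>>     public int hashCode()
>>>     {
>>>         int h = hash;            // volatile read
>>>         if (h == 0)              // several threads may still get here concurrently
>>>         {
>>>             for (char c : value)
>>>                 h = 31 * h + c;
>>>             hash = h;            // volatile write; subsequent readers skip the work
>>>         }
>>>         return h;
>>>     }
>>> }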
>>>
>>>
>>>
>>> Vitaly Davidovich wrote
>>>> String is too high profile (especially
>>>> hashing it) to do the "naive" thing.
>>> Nothing wrong with being naive; naive can be charming.
>>>
>>>
>>> Vitaly Davidovich wrote
>>>> Also, some architectures pay a
>>>> penalty for volatile loads and you'd incur that each time.
>>> Fair point; the JDK authors only get one shot, and they can't assume that
>>> volatile reads are cheap.
