[concurrency-interest] Relativity of guarantees provided by volatile

Boehm, Hans hans.boehm at hp.com
Wed Aug 22 23:05:52 EDT 2012

Progress guarantees were discussed during both the original JSR133 effort, and again in connection with the C++ memory model.  It is really hard to specify anything that is universally acceptable and reasonably easy to write down.  What about a special-purpose embedded JVM that runs stored procedures in a database?  Do we want to require it to have a preemptive scheduler?  What does it mean to get a "chance to run" in the presence of standard library functions that acquire locks under the covers?  Can another thread calling those same functions repeatedly prevent it from making progress?  If so, do we now have to specify which locks are acquired by each library function?  What does it mean for a JVM to provide a progress guarantee when the hardware specifications don't (e.g. because they don't promise that store buffers drain in a bounded amount of time)?

In the case of C++11, we ended up giving a vague guideline along the lines mentioned below, so that implementers knew what to strive for, but we did not make it binding on implementations, in part because we couldn't make it precise.

As far as the JMM itself is concerned, it's important to remember that hardware memory models vary a lot, and at least the 2005 variants were often less well defined, and even less comprehensible, than the language-level models.  I think that defining a memory model in terms of hardware translations, for example, is a completely lost cause.  We need to keep things at an abstract level.


From: Vitaly Davidovich [mailto:vitalyd at gmail.com]
Sent: Wednesday, August 22, 2012 6:01 AM
To: Marko Topolnik
Cc: concurrency-interest at cs.oswego.edu; Boehm, Hans
Subject: Re: [concurrency-interest] Relativity of guarantees provided by volatile

If the JMM ever goes through a revision, it would be useful to state some basic assumptions, such as that threads eventually get a chance to run on a processor, so that "perverse" scenarios don't muddy the waters.

Also, as I mentioned earlier, I have an issue with the wording of a happens-before edge.  For example, thread A writes a volatile field and thread B reads it; but if there is no read, the JMM says nothing about what should happen to the write.  I also don't understand how a normal (i.e. non-experimental/academic) JVM could even do this type of analysis consistently (e.g. when the volatile field is public/protected) without sacrificing performance.
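The happens-before edge in question can be sketched as runnable Java (a minimal illustration; the class, method, and field names here are my own, not from the thread):

```java
// Sketch of the volatile happens-before edge discussed above:
// a plain write published by a volatile write, observed via a volatile read.
public class HappensBeforeDemo {
    static int data;                 // plain (non-volatile) field
    static volatile boolean ready;   // volatile flag

    static int runOnce() throws InterruptedException {
        data = 0;
        ready = false;
        Thread writer = new Thread(() -> {
            data = 42;      // plain write...
            ready = true;   // ...published by this volatile write
        });
        final int[] seen = new int[1];
        Thread reader = new Thread(() -> {
            while (!ready) {            // volatile read in a loop
                Thread.onSpinWait();    // hint: busy-wait (Java 9+)
            }
            // Reading ready == true synchronizes-with the volatile write,
            // so this read of data is guaranteed by the JMM to see 42.
            seen[0] = data;
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader saw data = " + runOnce());  // prints 42
    }
}
```

If no thread ever performs the volatile read, the JMM imposes no requirement on when (or whether) the write becomes visible; the edge exists only between the write and a read that actually observes it.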

I understand it's an abstract model and shouldn't be clouded with implementation details of the JVM/OS/hardware, but I don't know if that's a good or bad thing.  It's bad enough that folks on this list find it ambiguous, but what's a compiler writer to do, for example? Maybe the JMM could be lowered into something a bit more concrete?

Sent from my phone
On Aug 22, 2012 2:42 AM, "Marko Topolnik" <mtopolnik at inge-mark.hr> wrote:

On 22. kol. 2012., at 01:40, Boehm, Hans wrote:
>> From: Marko Topolnik [mailto:mtopolnik at inge-mark.hr]
>>> Similar to your concern that consecutive volatile writes can be
>>> compressed into the last write, it also seems true that consecutive
>>> volatile reads can be compressed into the first read - exactly the
>>> kind of optimization to be disabled by C's volatile. It's
>>> inconceivable that any JVM will do such optimization on volatile
>>> reads, it'll break lots of programs, e.g. busy waits.
>> Actually there is already a provision in the JMM (in the original
>> paper, at least) that prevents a busy-waiting loop whose condition
>> involves a volatile read from forever reading a stale value. There can
>> be only a finite number of such read actions. But this is a small
>> consolation, really. Most code DOES distinguish between "eternity
>> minus one" and "right now".
> My recollection is that this is only sort of/mostly true.  If your entire program consists of
> Thread 1:
> while (!flag) {}
> Thread 2:
> flag = true;
> There is intentionally no requirement that thread 2 ever be scheduled.  If it's not, that looks a lot like flag having been read once.  If, on the other hand, thread 2 sets flag and then prints "Hello", and you see the "Hello", then I believe you are correct that thread 1 must terminate.

If thread 2 never gets scheduled then the value of the flag is not stale (that's what I said---"read a stale value"). This JMM provision precludes bunching together an infinite number of busy-wait flag checks, all reading the stale false value.
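The two-thread program Hans quotes above can be written out as a runnable sketch (class and variable names are my own; flag is volatile, as the busy-wait discussion assumes):

```java
// Runnable sketch of the quoted two-thread example: thread 1 busy-waits
// on a volatile flag, thread 2 sets the flag and prints "Hello".
public class FlagDemo {
    static volatile boolean flag;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            // The JMM's provision forbids collapsing this loop into a
            // single stale read of flag: only finitely many reads may
            // observe the old value once the write is performed.
            while (!flag) { }
        });
        Thread t2 = new Thread(() -> {
            flag = true;
            System.out.println("Hello");
        });
        t1.start();
        t2.start();
        t2.join();  // once thread 2 has run (and printed "Hello")...
        t1.join();  // ...thread 1 is guaranteed to terminate
    }
}
```

If thread 2 is never scheduled, the loop in thread 1 may legitimately spin forever; the termination guarantee applies only after the volatile write has actually happened.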


Concurrency-interest mailing list
Concurrency-interest at cs.oswego.edu
