[concurrency-interest] Fences API and visibility/coherency

Dmitriy V'jukov dvyukov at gmail.com
Tue Nov 10 05:57:42 EST 2009


On Sat, Oct 17, 2009 at 3:46 AM, Boehm, Hans <hans.boehm at hp.com> wrote:

>
>
> > -----Original Message-----
> > From: Dmitriy V'jukov [mailto:dvyukov at gmail.com]
> > Sent: Friday, October 16, 2009 5:28 AM
> > To: concurrency-interest at cs.oswego.edu;
> > javamemorymodel-discussion at cs.umd.edu
> > Cc: Doug Lea; Boehm, Hans; davidcholmes at aapt.net.au; Adam Morrison
> > Subject: Fences API and visibility/coherency
> >
> > > Date: Thu, 15 Oct 2009 18:18:48 +0000
> > > From: "Boehm, Hans" <hans.boehm at hp.com>
> > > Subject: Re: [concurrency-interest] [Javamemorymodel-discussion]
> > >        Fences API and visibility/coherency
> >
> > > I'm not sure we have a real disagreement here about how
> > implementations should behave.  But I think we're running
> > into a limitation of language specifications in general:
> > They can only specify the allowable behavior of a program,
> > not the details of the implementation.  Once we agree that
> > our example program is allowed not to terminate (because we
> > want to allow dumb schedulers), the implementation is allowed
> > to produce that nonterminating behavior in any way it wants.
> >
> > Hans,
> >
> > I understand what you are talking about, and this makes
> > sense. However, here are some thoughts on this.
> >
> > Is that why "synchronization order must be a countable set" was
> > removed in favor of "Java does not require a fair scheduler"
> > (providing both statements would be contradictory)?
> As I said in my last message, I don't think the intent was to remove it,
> just to restate it.  See the section on observable behavior.
> >
> > As far as I understand, what you are saying effectively
> > implies that volatile and plain vars are basically the same
> > (ordering aside). I.e.
> > assume I am on your team and write "signaling on a flag" with
> > the flag declared as a plain var. You say to me: Hey, Dmitry, you
> > have to declare the flag as volatile. Me: But why, Hans? They
> > both may work, and neither is guaranteed to work. So what's the
> > distinction? What would you answer me?
> The volatile flag will guarantee partial correctness: if you see the flag
> set, you will see prior state updates by the thread setting the flag. The
> ordinary-variable one won't guarantee that.
>
> As a practical matter, the volatile flag will work as far as termination
> properties are concerned; the other one may well not. I believe the
> volatile flag is guaranteed to do the right thing for applications that work
> no matter what the scheduler does.
>

Hi Hans, concurrency-interest, javamemorymodel-discussion,

Sorry for the delay; it took some time to form consistent reasoning on the
topic. Here is how I understand the problem now.

The JLS does not provide strict guarantees regarding the propagation of changes
between threads, so most Java programs have to rely on the quality and sanity
of the language implementation. But that's OK, because they most likely do so
anyway. There are a thousand and one ways in which a "bad" language
implementation can break reasonable programs while staying 100% conforming.
For example, an implementation can allow allocation of at most 4 bytes of
dynamic memory, or give zero stack space, or allow creation of at most 1
thread, or make every volatile assignment (or any other operation) take 10^10
years of wall-clock time (i.e. a visible hang of all programs, even
single-threaded ones), etc, etc. A language standard can't specify exactly how
much dynamic memory an implementation must guarantee; however, we all
understand that an implementation will do its best to provide as much dynamic
memory as possible. A "good" language implementation handles everything in a
best-effort way; in particular, it tries to make volatile stores visible to
other threads as soon as possible.
So the only formal guarantee that makes sense in such an environment is: the
program must either work correctly or not work at all. And that is what is
guaranteed for volatiles.
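To make the distinction concrete, here is a minimal sketch of the "signaling
on a flag" pattern discussed above (class and field names are mine, purely
illustrative):

    // Minimal sketch of "signaling on a flag"; names are illustrative only.
    class FlagSignal {
        int data;                 // plain data written before the flag
        volatile boolean flag;    // volatile: gives the partial-correctness guarantee

        void producer() {
            data = 42;            // (1) plain write
            flag = true;          // (2) volatile write
        }

        void consumer() {
            while (!flag) { }     // a "good" implementation makes the store visible soon
            int d = data;         // if we saw flag == true, we must see data == 42
        }
    }

If flag were a plain boolean, neither the visibility of data nor the
termination of the consumer loop would be guaranteed.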
Hope I understand it correctly now.

p.s. I still see a possible practical problem with Fences.orderWrites(),
because it relies on a plain store, so I do not quite see what prevents even a
"good" language implementation from sinking that store below an infinite loop
or something like that. Is it intended that the compiler must specially handle
an assignment that follows Fences.orderWrites()?
I would expect the ordering of writes and the store to be combined into a
single operation, so that the compiler is able to ensure correct code
generation. I.e. orderReads(O) actually means getAndOrderReads(O), i.e. the
load is part of the operation. This allows an implementation to emit an actual
load from main memory and prevent all reorderings at the compiler level, which
is enough to ensure useful guarantees. To be consistent, orderWrites(O) should
have the form orderWritesAndSet(X, O), so that the store is part of the
operation. This would allow an implementation to emit an actual write to main
memory and prevent all reorderings at the compiler level. As a side effect,
such a form would be consistent with C++0x's
std::atomic::store(memory_order_release) and C#'s Thread.VolatileWrite().
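For concreteness, here is roughly the shape I have in mind.
orderWritesAndSet() and getAndOrderReads() are the hypothetical combined forms
I am suggesting, not methods of the proposed Fences API; below they are merely
emulated with AtomicReference to make the intended release/acquire semantics
concrete:

    import java.util.concurrent.atomic.AtomicReference;

    // Hypothetical combined-form helpers. Emulated with AtomicReference only
    // to illustrate the shape; this is NOT the proposed Fences API.
    final class CombinedFences {
        // The store is part of the operation, so an implementation can emit an
        // actual store right here instead of letting a plain store sink below
        // later code (e.g. an infinite loop).
        static <T> void orderWritesAndSet(AtomicReference<T> ref, T value) {
            ref.set(value);       // volatile store
        }

        // The load is part of the operation, so an implementation can emit an
        // actual load from memory and prevent compiler-level reordering.
        static <T> T getAndOrderReads(AtomicReference<T> ref) {
            return ref.get();     // volatile load
        }
    }

The point is simply that the memory access and the ordering constraint travel
together, the same way they do in std::atomic::store(memory_order_release).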

--
Dmitriy V'jukov