[concurrency-interest] jdk9 VarHandle and Fence methods

Alex Otenko oleksandr.otenko at gmail.com
Wed Oct 14 03:27:04 EDT 2015


Well, that’s true for a “normal” store. But I am considering a volatile store to x, so that it doesn’t get reordered with y.lock().

Alex

> On 14 Oct 2015, at 05:35, David Holmes <davidcholmes at aapt.net.au> wrote:
> 
> Late to the party here, but I’ve been on a nice long vacation … :-)
>  
> The argument below is, to me, conflating a temporal notion of “happens before” with the memory-model notion. Just because I can reason that thread A must have already performed a certain action (i.e. x = 1) because thread B can’t get the lock, it does not mean that action is visible to thread B unless a suitable happens-before edge exists. This should not be at all surprising, because the store and lock in Thread A can already be freely reordered if no HB edge exists.
>  
> David
>  
> From: concurrency-interest-bounces at cs.oswego.edu [mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of Hans Boehm
> Sent: Thursday, September 17, 2015 3:28 AM
> To: Oleksandr Otenko
> Cc: Doug Lea; concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] jdk9 VarHandle and Fence methods
>  
> This all depends on your perspective.  In my view, if we assumed that trylock() always provides a correct response based on the current state of the lock (something you shouldn't be assuming), then I think we would definitely want the x = 1 to be visible after a failed trylock().  If you write (and yes you shouldn't):
>  
> Thread 1:
> x = 1;
> y.lock();
>  
> Thread 2:
> while (y.trylock()) {y.unlock();}
> load x;
>  
> That program does not intuitively have a data race.  The load and store to x cannot occur simultaneously, so it should have interleaving-based semantics, which means the load of x must return 1.
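> 
> As a compilable sketch of that example (just illustrative; it assumes y is a java.util.concurrent.locks.ReentrantLock, and the class and method names are made up for the sketch; in Alex's variant x would additionally be declared volatile):
> 
> import java.util.concurrent.locks.ReentrantLock;
> 
> class TryLockVisibility {
>     static int x = 0;                            // plain field; Alex's variant makes it volatile
>     static final ReentrantLock y = new ReentrantLock();
> 
>     static void thread1() {
>         x = 1;
>         y.lock();                                // held from here on, to keep the example minimal
>     }
> 
>     static void thread2() {
>         while (y.tryLock()) { y.unlock(); }      // spin until tryLock() fails, i.e. thread1 holds y
>         int r = x;                               // the question: is r guaranteed to be 1 here?
>     }
> }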
>  
> If there is no sw edge from the lock() to the failed trylock(), then it does have a data race.  But that's a contrivance of the model; data races no longer correspond to actual possible concurrent execution in a simple model.  In my view that's bad, since I can no longer rely on intuitions based on simultaneous executions for "data races".  I actually have to teach people about all the happens-before rules.  With the perspective of our PLDI 2008 paper, the two possible definitions of data race no longer agree.
>  
> People do very occasionally write code like this, vaguely along the lines of the close example in this thread.  ("Somebody's already working on it, so I don't have to.")  Weaker ordering properties that guarantee correctness are really hard to explain and at best brittle.  (But that thread had the lock, so of course it finished running that earlier initialization code.)  I think it's far easier to just say "trylock() may spuriously fail.  Your code doesn't work.  Use e.g. an atomic getAndSet() instead."
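> 
> For the "somebody's already working on it" pattern, the getAndSet() alternative might look roughly like this (just a sketch, with a hypothetical AtomicBoolean flag standing in for the lock):
> 
> import java.util.concurrent.atomic.AtomicBoolean;
> 
> class OneTimeWork {
>     private static final AtomicBoolean claimed = new AtomicBoolean(false);
> 
>     static void maybeDoWork() {
>         // getAndSet() is an atomic read-modify-write with volatile semantics:
>         // a false result really does mean nobody had claimed the work before.
>         if (!claimed.getAndSet(true)) {
>             doTheWork();           // we are the single thread that claimed it
>         }
>         // else: someone else is (or was) doing it; safe to skip
>     }
> 
>     private static void doTheWork() { /* ... */ }
> }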
>  
>  
> On Wed, Sep 16, 2015 at 8:28 AM, Oleksandr Otenko <oleksandr.otenko at oracle.com> wrote:
> Wow, that's some very flaky relaxation of the meaning of a lock!
> 
> I would expect the failing lock acquire to establish no sw edges, but I would certainly expect the order of acquires and releases (successful or not) to be total, and that total order of all acquires and releases to be consistent with the total order of operations on volatiles. That way x=1 should indeed be visible, but only if it is a volatile store - no guarantees for normal stores.
> 
> Also, I would not expect the debugging thread to acquire the lock, if that breaks the protocol. You wouldn't encourage a debugging thread to write to arbitrary volatiles - so you wouldn't encourage the debugging thread to acquire arbitrary locks.
> 
> Alex
>  
> 
> On 15/09/2015 06:16, Hans Boehm wrote:
>> > How does it slow down lock()?
>>  
>> It depends on the precise guarantee you provide, and I suspect this thread didn't quite agree on that.  The most natural one is that the succeeding lock acquisition happens before the failed trylock().  That implies that if we have
>>  
>> x = 1;
>> lock();
>>  
>> those can't be reordered by the hardware, since a failing trylock() would have to see the assignment to x.  That requires a fence between them on ARM or Power.
>>  
>> I think the right way to think of trylock(), at least informally, is as allowing spurious failures, i.e. trylock() is allowed to behave as though the lock were held when it isn't.  You thus can't conclude anything about other threads from the fact that it failed.  In this view you don't have to think about memory ordering issues when reasoning about correctness; you just reason about spurious failures instead.
>>  
>> If your code is robust against unknown, e.g. debugger, threads acquiring the lock now and then, then it must be robust against this sort of spurious failure.  If the lock is really used only to provide mutual exclusion, this should not affect correctness.
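>> 
>> In code terms, "robust against spurious failure" just means the failure path assumes nothing about other threads, e.g. (a sketch, assuming a ReentrantLock guarding optional, idempotent cleanup; the names are made up):
>> 
>> import java.util.concurrent.locks.ReentrantLock;
>> 
>> class OpportunisticCleanup {
>>     private final ReentrantLock lock = new ReentrantLock();
>> 
>>     void maybeCleanUp() {
>>         if (lock.tryLock()) {        // opportunistic: only clean up if nobody else is busy
>>             try {
>>                 cleanUp();
>>             } finally {
>>                 lock.unlock();
>>             }
>>         }
>>         // On failure we conclude nothing about other threads' progress or memory;
>>         // we just skip the cleanup this time.  A spurious failure, or a debugger
>>         // thread briefly holding the lock, costs nothing here.
>>     }
>> 
>>     private void cleanUp() { /* optional, idempotent work */ }
>> }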
>>  
>> On Mon, Sep 14, 2015 at 6:41 PM, Vitaly Davidovich <vitalyd at gmail.com> wrote:
>> How does it slow down lock()?
>> 
>> I don't necessarily disagree, but I can certainly see people considering tryLock to have the same ordering effect as a (failed) CAS.  It's certainly true that a CAS is a lower-level primitive than a lock, but I don't know if that resonates immediately when thinking about this.  It's also the case that on very popular platforms such as x86 a failing tryLock will have the same ordering as a successful one, and no difference is observed (and the JIT doesn't do anything different).
>> 
>> I don't understand the debugger thread example - what's the issue there?
>> 
>> sent from my phone
>> 
>> On Sep 14, 2015 9:07 PM, "Hans Boehm" <boehm at acm.org> wrote:
>> FWIW, this general issue is discussed in section 3 of http://dl.acm.org/citation.cfm?id=1375581.1375591 .
>>  
>> Yet another argument against providing the stronger guarantees is that, on many architectures, it doesn't just slow down trylock(); more importantly, it slows down lock().  In general, if your code cares about ordering for an unsuccessful trylock(), then it's not robust against, say, a debugging thread unexpectedly acquiring the lock for a short period.  In my view, in such a case you're no longer using it as a lock, and you should be using something else, e.g. an atomic object, with stronger guarantees.
>>  
>> On Fri, Sep 4, 2015 at 4:18 AM, Doug Lea <dl at cs.oswego.edu> wrote:
>> On 09/03/2015 02:19 PM, Oleksandr Otenko wrote:
>> Has anyone come up with the answer about ordering for tryLock, or have I missed it?
>> 
>> You missed the dog not barking :-)
>> 
>> The Lock specs don't require any specific HB effects here on failed
>> tryLock. Even if we wanted to, we could not retroactively impose any,
>> considering that anyone can implement the Lock interface (not just j.u.c),
>> and some of those implementations might then be in violation.
>> 
>> As you and Vitaly pointed out, there are a few fringe cases where
>> users might want to impose ordering on failure. In jdk9, you'll
>> be able to do this with moded VarHandle accesses and/or fences. The
>> resulting extra fencing might be redundant here and there, but if you
>> cared enough, you could create and rely on custom locks with stronger
>> guarantees.
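>> 
>> As a rough sketch of the kind of arrangement that becomes expressible (this is not something the Lock spec promises, and whether this particular fence placement gives the guarantee you want depends on the lock implementation you pair it with):
>> 
>> import java.lang.invoke.VarHandle;
>> import java.util.concurrent.locks.ReentrantLock;
>> 
>> class StrongerTryLockSketch {
>>     static int x = 0;
>>     static final ReentrantLock y = new ReentrantLock();
>> 
>>     static void writer() {
>>         x = 1;
>>         VarHandle.fullFence();           // keep the store to x from moving past the acquire
>>         y.lock();
>>     }
>> 
>>     static void reader() {
>>         if (y.tryLock()) {
>>             y.unlock();                  // got it, so the writer hasn't locked yet
>>         } else {
>>             VarHandle.loadLoadFence();   // order the reads inside tryLock() before the read of x
>>             int r = x;                   // intent: observe x == 1 on a genuine failure
>>         }
>>     }
>> }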
>> 
>> -Doug
>> 
>> 