[concurrency-interest] Single writer multiple readers no barriers -- safe ?

Nitsan Wakart nitsanw at yahoo.com
Fri Nov 29 03:42:15 EST 2013


What could go wrong is:
   while (getMap().get(key) == null) {
      // wait for the key to appear
   }
This can in theory (and sometimes in practice) spin forever. Because the read is not volatile, the JIT is free to assume the value cannot change, hoist the read out of the loop, and never perform it again. At that point the 'at some point' becomes 'never'.
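A minimal sketch of the hazard and the fix (class and names below are mine, not from the thread): with a plain field the spin loop below is allowed to hang forever once the JIT hoists the read; declaring the field volatile forces a fresh read on every iteration, so the loop is guaranteed to observe the writer's publication.

```java
import java.util.HashMap;
import java.util.Map;

public class SpinExample {
    // volatile forces the JIT to re-read the reference on every loop iteration;
    // drop the keyword and the read may legally be hoisted out of the loop,
    // turning "wait until the key appears" into an infinite spin.
    static volatile Map<String, String> map = new HashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            Map<String, String> next = new HashMap<>();
            next.put("key", "value");
            map = next;  // publish a new snapshot (volatile write)
        });
        writer.start();

        while (map.get("key") == null) {
            // spin until the writer publishes; safe only because 'map' is volatile
        }
        System.out.println("saw value: " + map.get("key"));
        writer.join();
    }
}
```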
Reading through your original post again, I would take a step back and weigh the performance win against correctness. Are you 100% sure this is a bottleneck for your program? Have you tried and measured with ConcurrentHashMap, and does your alternative make a measurable difference?
If the answer to the above is "yes we did, and yes it does" then it would be good to see some concrete code to demonstrate how you are using this map.



On Friday, November 29, 2013 10:01 AM, Thomas Kountis <tkountis at gmail.com> wrote:
 
Thanks for the responses guys.
I do understand all the above, but what I don't understand is what could go wrong with the no-barrier approach on x86. Wouldn't that write eventually get flushed to main memory, and wouldn't the other processors have to invalidate their caches at some point as well? I know this is a lot of "at some point", so I guess it's too vague to be trusted, but is there anything besides timing that could go wrong and keep that write from becoming visible?

t.



On Fri, Nov 29, 2013 at 6:51 AM, Nitsan Wakart <nitsanw at yahoo.com> wrote:

From my experience, lazySet is indeed your best choice (but only a valid choice for a single writer). You need a volatile read on the reader side to establish the happens-before relationship; otherwise the compiler is free to cache the value it read, so someone polling your map in a loop may end up stuck if you don't do it.
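A sketch of the single-writer pattern being discussed (class and method names are mine): the lone writer publishes a fresh immutable map with lazySet, which on x86 is a plain store plus a StoreStore compiler barrier, while readers go through get(), the volatile read that supplies the happens-before edge and keeps a polling loop from caching a stale reference.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class SnapshotCache {
    // Single writer swaps in a new immutable map; many readers call snapshot().
    private final AtomicReference<Map<String, String>> ref =
            new AtomicReference<>(Collections.emptyMap());

    // Writer thread only: lazySet is an ordered (StoreStore) store, cheaper
    // than a full volatile write and sufficient when there is one writer.
    public void publish(Map<String, String> fresh) {
        ref.lazySet(Collections.unmodifiableMap(new HashMap<>(fresh)));
    }

    // Reader side: get() is a volatile read, giving the happens-before edge
    // that prevents the reference from being cached in a loop forever.
    public Map<String, String> snapshot() {
        return ref.get();
    }

    public static void main(String[] args) {
        SnapshotCache cache = new SnapshotCache();
        Map<String, String> update = new HashMap<>();
        update.put("key", "value");
        cache.publish(update);
        System.out.println(cache.snapshot().get("key"));
    }
}
```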
>On Friday, November 29, 2013 5:35 AM, Vitaly Davidovich <vitalyd at gmail.com> wrote:
> 
>AtomicReference.lazySet is the way to go here - on x86 this is just a normal mov instruction with a compiler barrier only (StoreStore); no fence instruction is emitted.  If you don't want the overhead of the AtomicReference wrapper (it doesn't sound like that would be an issue), you can get the same effect with Unsafe.putOrderedObject.
>I wouldn't worry about AtomicReference.get() performance on x86 - it is a read from memory, but if you read frequently you'll hit L1 cache anyway.
>HTH
>Sent from my phone
>On Nov 28, 2013 5:34 PM, "Thomas Kountis" <tkountis at gmail.com> wrote:
>
>>Hi all,
>>
>>
>>This is my first time posting on this list, been follower for quite some time now and really enjoying all the knowledge sharing :) .
>>
>>
>>I was looking on optimizing a solution today at work, and I came across the following.
>>We have a scenario where we keep a simple cache (HashMap) that is accessed by multiple
>>readers on an application server, millions of times per day and under heavy contention. The cache is immutable and is updated only by a single writer, which replaces the reference the variable points to every 5 mins. The field is currently volatile. I was looking for a way to eliminate the memory barriers entirely and rely on the field becoming eventually visible to all other threads (reading stale data for a few seconds is not a problem).
>>
>>
>>Would that be possible with the current JMM? I tried to test that scenario with some code, and it seems to work most of the time, but some threads read stale data for longer than I would expect (many seconds). Is there any platform dependency in such an implementation? It's going to run on x86 environments. Is there any assumption we can make about how long that 'eventually' part can be? (Could it be more than 5 mins, i.e. past the next write?) My understanding is that the write, even if reordered, has to happen eventually. I came across an article about using an AtomicReference with lazySet (StoreStore) for the write, and then Unsafe to do a plain getObject instead of the default get, which is a volatile access. Would that be a better solution?
>>
>>
>>Any ideas, alternatives?
>>
>>
>>PS. Sorry for the question-bombing :/
>>
>>
>>Regards,
>>Thomas
>>_______________________________________________
>>Concurrency-interest mailing list
>>Concurrency-interest at cs.oswego.edu
>>http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>>
>>
>