[concurrency-interest] On A Formal Definition of 'Data-Race'

Bharath Ravi Kumar reachbach at gmail.com
Fri Apr 19 05:28:22 EDT 2013

On Thu, Apr 18, 2013 at 12:22 AM, Boehm, Hans <hans.boehm at hp.com> wrote:

> Absolute performance, probably.  Scalability, I doubt it.  Fences on x86
> seem to involve just local work which doesn't involve the memory system.
> Programs whose performance is limited by fences tend to scale well.  See
> my RACES 12 workshop paper
> http://www.hpl.hp.com/techreports/2012/HPL-2012-218.pdf.

The section on races & scalability reads: "In a sense, the synchronized
program scales better than the racy version. I conjecture that this is due
to memory bandwidth limitations. There are many locks, but due to the
mapping scheme, locks are reused many times before being evicted from the
cache." What mapping scheme is being referred to? I did read the footnote
on the mapping scheme, but it doesn't explain lock reuse before cache
eviction. Is the mere presence of the locks in the cache line helping the
correctly synchronized version? What would prevent the racy code from
benefiting similarly without the use of an actual lock? Could you please
explain further?  Thanks.
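For concreteness, here is the kind of lock-striping scheme I understand the
paper to be describing (a hedged sketch with hypothetical names, not the
paper's actual code): many items are hashed onto a small, fixed pool of
locks, so the same few lock objects, and the cache lines holding them, are
touched over and over rather than being evicted between uses.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal lock-striping sketch (hypothetical class/field names).
// A small fixed pool of locks is shared by many keys; because the pool
// is small, the same lock objects are reacquired many times, which is
// the "reused before eviction" effect the quoted passage conjectures.
final class StripedCounter {
    private static final int STRIPES = 16;  // small, fixed lock pool
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final long[] counts = new long[STRIPES];

    StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    // Map a key to a stripe; many keys share each lock.
    private int stripe(int key) {
        return (key & 0x7fffffff) % STRIPES;
    }

    void increment(int key) {
        int s = stripe(key);
        locks[s].lock();
        try {
            counts[s]++;
        } finally {
            locks[s].unlock();
        }
    }

    long total() {
        long sum = 0;
        for (int s = 0; s < STRIPES; s++) {
            locks[s].lock();
            try {
                sum += counts[s];
            } finally {
                locks[s].unlock();
            }
        }
        return sum;
    }
}
```

If that reading is right, my question is essentially: wouldn't the racy
version's *data* cache lines get the same reuse, lock objects or not?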

