[concurrency-interest] CPU cache-coherency's effects on visibility

Nathan Reynolds nathan.reynolds at oracle.com
Thu Feb 14 12:59:49 EST 2013


 > let me further stipulate that the JIT doesn't inline the log method.

Today it doesn't.  In the future, the JIT may do all sorts of tricks to 
optimize the log method.  So you are essentially sitting on a ticking 
time bomb.  When it will go off is a very difficult question.  If it does 
go off, the failure will be very hard to reproduce, and even if it were 
reproducible, it would be very hard to diagnose.  I highly recommend not 
relying on the JIT's current behavior.
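For what it's worth, the robust fix doesn't depend on what the JIT does to 
log() at all: make the field volatile so the write establishes a 
happens-before edge with every later read.  Here is a minimal sketch, 
assuming the Foo/shared names used in the thread below (the FooHolder class 
and the log helper are just placeholders):

class Foo { }

class FooHolder {
    volatile Foo shared;                // volatile: every read sees the latest write

    void reader() throws InterruptedException {
        while (true) {
            log(this.shared);           // re-read on each iteration; a volatile read
                                        // cannot be hoisted out of the loop
            Thread.sleep(1000L);
        }
    }

    void writer() {
        this.shared = new Foo();        // happens-before any later volatile read in reader()
    }

    private void log(Foo f) {
        System.out.println(f);
    }
}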

 > Would cache-coherency somehow reach down to another CPU's registers 
and 'invalidate' the registers?

Cache coherency only affects the L1, L2 and L3 caches.  It doesn't have any 
impact on the registers, the load/store buffers or any intermediate 
values moving about inside the core.
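To see why the registers matter more than the caches here, consider this 
hypothetical demo (not from the original thread): on a typical server JIT 
the plain read of running can be hoisted into a register, so the loop may 
spin forever even though the caches are perfectly coherent.  Declaring the 
field volatile makes it terminate.

public class RegisterVisibilityDemo {
    static boolean running = true;      // try: static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread spinner = new Thread(new Runnable() {
            public void run() {
                while (running) {
                    // empty body: nothing forces a fresh read of running,
                    // so its value may sit in a register indefinitely
                }
                System.out.println("spinner saw running == false");
            }
        });
        spinner.start();

        Thread.sleep(1000L);
        running = false;                // plain write; may never become visible to spinner
        spinner.join(5000L);
        System.out.println("spinner still alive: " + spinner.isAlive());
    }
}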

Nathan Reynolds 
<http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> | 
Architect | 602.333.9091
Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology
On 2/14/2013 10:48 AM, thurston at nomagicsoftware.com wrote:
> Ahh, the problem with superficial examples is, well, they're superficial.
> Let me rewrite the example a bit:
> Thread 1:
> void run()
> {
>     log();
>     Thread.sleep(1000L);
> }
>
> void log()
> {
>     log(this.shared);
> }
>
> and let me further stipulate that the JIT doesn't inline the log method.
>
> I guess what I'm really asking is: does CPU cache-coherency extend to 
> registers?  Say shared were a primitive (int, boolean); then 
> theoretically shared could be cached in a register.  Would 
> cache-coherency somehow reach down to another CPU's registers and 
> 'invalidate' the registers?
>
>
> On 2013-02-14 09:38, Stanimir Simeonoff wrote:
>> The read of this.shared can be hoisted by the JVM unless the field is
>> volatile, so Thread 1 may forever see the initial value, e.g. null.
>>
>> The code can be transformed into:
>> Thread 1:
>> Foo shared = this.shared;
>> while (true) {
>>     log(shared);
>>     Thread.sleep(1000L);
>> }
>>
>> On Thu, Feb 14, 2013 at 6:48 PM, <thurston at nomagicsoftware.com> wrote:
>>
>>> Given that all (?) modern CPUs provide cache-coherency, in the 
>>> following (admittedly superficial) example:
>>>
>>> Thread 1:
>>> while (true)
>>> {
>>>     log(this.shared);
>>>     Thread.sleep(1000L);
>>> }
>>>
>>> Thread 2:
>>>    this.shared = new Foo();
>>>
>>> with Thread 2's code invoked only once, and significantly after (in a 
>>> wall-clock sense) Thread 1 has started; and with no operations performed 
>>> by either thread forming a happens-before relationship (in the JMM sense).
>>>
>>> Is Thread 1 *guaranteed* to eventually see the write by Thread 2?
>>> And is that guarantee provided not by the JMM, but by the 
>>> cache-coherency of the CPUs?
>
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest


