[concurrency-interest] when is safe publication safe?

Taylor Gautier tgautier at terracottatech.com
Mon Apr 26 11:34:53 EDT 2010


This is very interesting. What you have here is a read-often,
write-rarely pattern: writes must become coherent for all readers, but
once 'refreshed', readers shouldn't keep incurring barrier penalties.

Terracotta solves this problem at the network level, across many JVMs,
by allowing local readers and writers to acquire a lock with a lease
that can be revoked asynchronously. This lets a local JVM proceed under
a read lock without incurring a lock-acquisition penalty on every read.
I mention Terracotta because, for this problem, JVMs in a cluster are
analogous to threads within a single JVM.

Is it possible to construct the same thing for threads using Java
primitives?

Does a ReentrantReadWriteLock help? I don't think so. What you would
need is for all threads to acquire and hold a read lock, yet be able to
relinquish it on demand to a writer, without incurring memory-barrier
hits on every read.
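
To make the cost concrete, here is a minimal sketch of the
straightforward read-write-locked registry (MetaClass and
createMetaClass are placeholder names, not the real Groovy types).
Correct, but every single lookup pays a read-lock acquire and release:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: "MetaClass" is a placeholder for the runtime meta class
// type discussed in this thread, not the real API.
interface MetaClass { }

class RwLockedRegistry {
    private final Map<Class<?>, MetaClass> cache = new HashMap<Class<?>, MetaClass>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    MetaClass getMetaClassOf(Object receiver) {
        Class<?> type = receiver.getClass();
        lock.readLock().lock();             // acquired on every lookup: the per-read cost
        try {
            MetaClass mc = cache.get(type);
            if (mc != null) {
                return mc;
            }
        } finally {
            lock.readLock().unlock();
        }
        lock.writeLock().lock();            // slow path: initialize under the write lock
        try {
            MetaClass mc = cache.get(type); // re-check; another thread may have won the race
            if (mc == null) {
                mc = createMetaClass(type);
                cache.put(type, mc);
            }
            return mc;
        } finally {
            lock.writeLock().unlock();
        }
    }

    private MetaClass createMetaClass(Class<?> type) {
        return new MetaClass() { };         // placeholder
    }
}

So the hot path is never free of the lock, even though the cached value
almost never changes.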

I can't think of any way to 'cache' a read lock in a ThreadLocal such
that it doesn't incur barriers on every read and yet can also yield to
a writer asynchronously.
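
The closest hypothetical sketch I can come up with is a lease that a
writer revokes through a flag, and it shows the problem rather than
solving it: every reader still has to check that flag, a volatile read,
on each access, and a reader that never comes back through enterRead()
stalls the writer forever.

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of a "leased" read lock, not a recommendation. A
// reader takes the real read lock once and caches the lease in a
// ThreadLocal; a writer revokes leases by setting a flag and then
// blocking on the write lock until every leasing reader has come back
// through enterRead() and released.
class LeasedReadLock {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private volatile boolean revoked = false;
    private final ThreadLocal<Boolean> holdsLease = new ThreadLocal<Boolean>() {
        @Override protected Boolean initialValue() { return Boolean.FALSE; }
    };

    /** Readers call this before every read of the shared structure. */
    void enterRead() {
        if (!revoked && holdsLease.get()) {   // volatile read on every entry: the barrier we hoped to avoid
            return;                           // fast path: lease still valid, read lock still held
        }
        if (holdsLease.get()) {               // lease revoked: hand the lock back
            rwl.readLock().unlock();
            holdsLease.set(Boolean.FALSE);
        }
        rwl.readLock().lock();                // blocks while a writer holds the lock
        holdsLease.set(Boolean.TRUE);
    }

    /** Writer: revoke outstanding leases, then mutate under the write lock. */
    void write(Runnable mutation) {
        revoked = true;                       // ask readers to drop their leases
        rwl.writeLock().lock();               // waits until every lease has been released
        try {
            mutation.run();
        } finally {
            revoked = false;
            rwl.writeLock().unlock();
        }
    }
}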

On Apr 26, 2010, at 2:35 AM, Rémi Forax <forax at univ-mlv.fr> wrote:

> On 26/04/2010 07:17, Joe Bowbeer wrote:
>>
>> While I'm accumulating questions...
>>
>> Why is ThreadLocal not the preferred cache in this case?
>>
>> Joe
>
> You can also mutate the meta class, for example by adding a new
> method dynamically;
> in that case, all threads must see the modification.
> So a ThreadLocal doesn't solve the problem here.
>
> Rémi
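
Rémi's point as a tiny sketch (all names hypothetical): a per-thread
cache hands back whatever meta class the thread saw first, so a change
made later by another thread is simply never observed.

// Placeholders for the runtime types discussed in this thread.
interface MetaClass { }
interface MetaClassRegistry { MetaClass lookup(Class<?> type); }

class ThreadLocalMetaClassCache {
    private final ThreadLocal<MetaClass> cached = new ThreadLocal<MetaClass>();

    MetaClass metaClassOf(Object receiver, MetaClassRegistry registry) {
        MetaClass mc = cached.get();
        if (mc == null) {
            mc = registry.lookup(receiver.getClass()); // first use on this thread
            cached.set(mc);
        }
        return mc; // never revalidated: a later change by another thread is never seen
    }
}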
>
>>
>> On Sun, Apr 25, 2010 at 4:26 PM, Joe Bowbeer wrote:
>> Jochen,
>>
>> What you are describing seems like a caching problem as much as it  
>> is about safe publication and/or dynamic languages.  The language  
>> runtime creates the (immutable) instances and publishes them to the  
>> cache, right?  The performance of the cache is the hot spot?
>>
>> So are you using something like MapMaker to implement the cache?
>>
>> http://code.google.com/p/google-collections/
>>
>> What are you using to hold off a t2 when t1 is in the process of  
>> publishing to the cache?  Some scheme involving a Future?
>>
>> Joe
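
On the Future question: the usual shape is a Future-per-key memoizer,
roughly like the sketch below (MetaClass and createMetaClass are
placeholders here; I don't know what the Groovy runtime actually does).
t1 installs and runs the FutureTask; t2 finds the same Future and
blocks in get() until t1 has published.

import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Sketch of the Future-per-key idiom (essentially the "Memoizer" from
// Java Concurrency in Practice). MetaClass and createMetaClass are
// hypothetical placeholders, not the actual Groovy runtime types.
interface MetaClass { }

class FutureMemoizedRegistry {
    private final ConcurrentMap<Class<?>, Future<MetaClass>> cache =
            new ConcurrentHashMap<Class<?>, Future<MetaClass>>();

    MetaClass getMetaClassOf(final Class<?> type) throws InterruptedException {
        Future<MetaClass> f = cache.get(type);
        if (f == null) {
            FutureTask<MetaClass> task = new FutureTask<MetaClass>(new Callable<MetaClass>() {
                public MetaClass call() { return createMetaClass(type); }
            });
            f = cache.putIfAbsent(type, task);   // t1 wins the race to install the Future...
            if (f == null) {
                f = task;
                task.run();                      // ...and performs the initialization itself
            }
        }
        try {
            return f.get();                      // t2 blocks here until t1 has published
        } catch (ExecutionException e) {
            cache.remove(type, f);               // allow a retry if initialization failed
            throw new RuntimeException(e.getCause());
        }
    }

    private MetaClass createMetaClass(Class<?> type) {
        return new MetaClass() { };              // placeholder
    }
}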
>>
>>
>> On Sun, Apr 25, 2010 at 8:48 AM, Jochen Theodorou wrote:
>> Doug Lea wrote:
>>> On 04/25/10 05:31, Jochen Theodorou wrote:
>>> As a first step, consider exactly what effects/semantics you want
>>> here, and the ways you intend people to be able to write
>>> conditionally correct Groovy code.
>>
>> People wouldn't have to write conditionally correct Groovy code. They
>> would write normal code, as they would in Java (Groovy and Java are
>> very close).
>>
>>> It seems implausible that you could do enough
>>> analysis at load/run time to determine whether you need
>>> full locking in the presence of multithreaded racy initialization
>>> vs much cheaper release fences. This would require at least some
>>> high-quality escape analysis. And the code generated
>>> would differ both for the writing and reading callers.
>>
>> Maybe I didn't explain it well. Let us assume I have the Groovy
>> code:
>>
>> 1+1
>>
>> Then this is really something along the lines of:
>>
>> SBA.getMetaClassOf(1).invoke("plus",1)
>>
>> and SBA.getMetaClassOf(1) would return the meta class of Integer.  
>> Since this is purely a runtime construct, it does not exist until  
>> the first time this meta class is requested. So getMetaClassOf  
>> would be the place to initialize the meta class; it would
>> register it in a global structure and on subsequent invocations use
>> that cached meta class. If two threads execute the code above, then  
>> one would do the initialization, while the other has to wait. The  
>> waiting thread would then read the initialized global meta class.  
>> On subsequent invocations both threads would just read. Since  
>> changes of the meta class are rare, we would in 99% of all cases  
>> simply read the existing value. Since we have to be memory aware,
>> these meta classes can be unloaded at runtime too. They are
>> SoftReferenced, so that is done only if really needed. But rather than
>> a normal change, a reinitialization might be needed much more often.
>>
>> As you can see, the user code "1+1" contains zero synchronization
>> code. The memory barriers are all in the runtime. It is not that
>> this cannot be solved using what Java already has; it is that doing
>> so is too expensive.
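
If I read that right, the shape is roughly the sketch below (names
hypothetical, and certainly simpler than the real registry): the 99%
path is one map read plus a SoftReference dereference; initialization,
and re-initialization after the GC has cleared a reference, are the
rare slow paths.

import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Rough sketch of the registry as I understand the description above
// (all names hypothetical; the real Groovy runtime is more involved).
// Meta classes are created lazily, held via SoftReference so they can
// be unloaded under memory pressure, and re-initialized on demand.
interface MetaClass { Object invoke(String name, Object arg); }

class SBA {
    private static final ConcurrentMap<Class<?>, SoftReference<MetaClass>> REGISTRY =
            new ConcurrentHashMap<Class<?>, SoftReference<MetaClass>>();

    static MetaClass getMetaClassOf(Object receiver) {
        Class<?> type = receiver.getClass();
        SoftReference<MetaClass> ref = REGISTRY.get(type);  // the 99% path: just a read
        MetaClass mc = (ref == null) ? null : ref.get();    // null if cleared by the GC
        if (mc == null) {
            mc = initializeMetaClass(type);                 // rare: first use, or after unload
        }
        return mc;
    }

    private static synchronized MetaClass initializeMetaClass(Class<?> type) {
        // re-check under the lock: another thread may have initialized it while we waited
        SoftReference<MetaClass> ref = REGISTRY.get(type);
        MetaClass mc = (ref == null) ? null : ref.get();
        if (mc == null) {
            mc = createMetaClass(type);
            REGISTRY.put(type, new SoftReference<MetaClass>(mc));
        }
        return mc;
    }

    private static MetaClass createMetaClass(Class<?> type) {
        return new MetaClass() {                            // placeholder
            public Object invoke(String name, Object arg) { return null; }
        };
    }
}

// "1+1" then lowers to roughly: SBA.getMetaClassOf(1).invoke("plus", 1)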
>>
>>
>>> As I mentioned, an alternative is to lay down some rules.
>>> If people stick to the rules they get consistent (in the sense
>>> of data-race-free) executions, else they might not. And of
>>> such rules, I think the ones that can apply here amount
>>> to saying that other threads performing initializations cannot
>>> trust any of their reads of the partially initialized object.
>>> And further, they cannot leak refs to that object outside of the
>>> group of initializer threads.
>>>
>>> This is not hugely different than the Swing threading rules
>>> (http://java.sun.com/products/jfc/tsc/articles/threads/threads1.html)
>>> but applies only during initialization.
>>
>> But unlike what the above may suggest, there is no single
>> initialization phase. The meta classes are created on demand. We
>> cannot know beforehand which meta classes are needed, and creating
>> them all up front would increase the startup time considerably.
>>
>> Of course, if there were a way to recognize a partially initialized
>> object, I could maybe think of something... but is there a reliable
>> one?
>>
>>
>> bye blackdrag
>>
>> -- 
>> Jochen "blackdrag" Theodorou
>> The Groovy Project Tech Lead (http://groovy.codehaus.org)
>> http://blackdragsview.blogspot.com/
>>
