[concurrency-interest] problem with ReentrantReadWriteLock to implement atomic function

Gregg Wonderly gergg at cox.net
Tue Jun 20 07:55:13 EDT 2006


Joe Bowbeer wrote:
> On 6/20/06, Kolbjørn Aambø <kaa at pax.priv.no> wrote:
> 
>>I suspect that the maximum number of locks (65536 recursive write locks and
>>65536 read locks) happens to be the problem here. I'm trying to find out how
>>to make an atomic put function using ReentrantReadWriteLock without getting
>>problems with this limitation.
>
> (If you were to retain this code I'd suggest adding a "locked" flag
> and/or more try-finally nesting in order to disambiguate this
> condition. This would also protect against "put" failures, as David
> points out.)
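The hold-count limit quoted above only bites if a thread accumulates recursive holds without releasing them. A put that acquires the write lock exactly once and releases it in a finally block keeps every thread's hold count at zero between calls. A minimal sketch (the AtomicCache class and its backing HashMap are illustrative, not from the original code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative cache showing balanced lock()/unlock() pairs, so the
// per-thread hold count never grows toward the implementation limit.
class AtomicCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    public V get(K key) {
        rwl.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rwl.readLock().unlock();   // released even if map.get throws
        }
    }

    // Atomic put: the write lock is held exactly once per call, and the
    // finally block guarantees release whether or not put() fails.
    public V put(K key, V value) {
        rwl.writeLock().lock();
        try {
            return map.put(key, value);
        } finally {
            rwl.writeLock().unlock();
        }
    }
}
```

The key point is that lock() and unlock() are paired in the same method, so no code path (including an exception out of the map operation) can leak a hold.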

Any time I have code structured like this, I use the following pattern to
make sure I see the exceptions that can be thrown:

try {
	// do work which throws exceptions
} catch (SomeExceptionMethodThrows ex) {
	log.log(Level.APPROPRIATE, ex.toString(), ex);
	throw ex;
} catch (RuntimeException ex) {
	log.log(Level.APPROPRIATE, ex.toString(), ex);
	throw ex;
} finally {
	// cleanup, which can also throw method-declared exceptions
}

Making sure that the 'throws'-declared exceptions are caught, logged,
and rethrown is helpful in tracking down these kinds of issues.  You can use
some of the other log methods, or a different level, to reduce duplicates in a
block/method that can routinely abort.
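Filled in with concrete (illustrative, not original) names, the pattern looks like this; the point is that every exception the method can propagate is logged before it escapes, and the finally block runs on every exit path:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical worker demonstrating the catch/log/rethrow pattern above.
class Worker {
    private static final Logger log = Logger.getLogger(Worker.class.getName());
    boolean cleaned = false;   // lets us observe that finally ran

    void doWork() throws IOException {
        try {
            mightFail();                       // work that throws a checked exception
        } catch (IOException ex) {             // the method's declared exception
            log.log(Level.WARNING, ex.toString(), ex);
            throw ex;                          // rethrow so callers still see it
        } catch (RuntimeException ex) {        // unchecked failures
            log.log(Level.WARNING, ex.toString(), ex);
            throw ex;
        } finally {
            cleaned = true;                    // cleanup runs on every exit path
        }
    }

    void mightFail() throws IOException {
        throw new IOException("simulated failure");
    }
}
```

Because the exception is rethrown rather than swallowed, the log entry pinpoints where the failure surfaced even if a caller further up discards it.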

Gregg Wonderly
