[concurrency-interest] Lock memory/resource Cost

Thierry Hanot thanot at infovista.com
Mon Oct 31 06:52:02 EST 2005


-----Original Message-----
From: David Holmes [mailto:dholmes at dltech.com.au] 
Sent: lundi 31 octobre 2005 11:13
To: Thierry Hanot; concurrency-interest at altair.cs.oswego.edu
Subject: RE: [concurrency-interest] Lock memory/resource Cost 


> I'm just wondering what the memory/system resource cost of a
> ReentrantLock or a read/write lock is.

Each explicit lock is at least two distinct objects: the lock itself and the
AbstractQueuedSynchronizer instance it uses. Then there's around another
16-20 bytes of state. So compared to a monitor lock that doesn't come out of
the heap, an explicit lock uses a lot of heap memory.

That is exactly what I was afraid of, and why I'm currently using monitors
instead of explicit locks.

> I have millions of objects in memory and I need to synchronize
> access to each one.

That seems impractical, even if you had the memory resources. This level of
locking sounds too fine-grained, and the degree of sharing seems excessive.

But between one global lock and one lock per object there are many other
ways of partitioning things. If you can somehow group your objects in some
way then you could use a lock per group.
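The lock-per-group idea is essentially lock striping: a fixed pool of locks shared by all objects, with each object hashed to one stripe. A minimal sketch (class and method names here are illustrative, not from the original mail):

```java
import java.util.concurrent.locks.ReentrantLock;

// Lock striping sketch: N objects share a fixed pool of locks, so memory
// cost is bounded by the stripe count rather than the object count.
final class StripedLocks {
    private final ReentrantLock[] stripes;

    StripedLocks(int nStripes) {
        stripes = new ReentrantLock[nStripes];
        for (int i = 0; i < nStripes; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    // Map an object to its stripe; mix the hash so low bits vary.
    ReentrantLock lockFor(Object o) {
        int h = o.hashCode();
        h ^= (h >>> 16);
        return stripes[(h & 0x7fffffff) % stripes.length];
    }
}
```

The same object always maps to the same stripe, so mutual exclusion per object is preserved; unrelated objects that collide on a stripe merely contend a little more.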

But rather than figuring out how to synchronize these millions of objects,
I'd spend some time trying to figure out if you can avoid needing to
synchronize in the first place.

Can you elaborate more on the system you are working on?

We will investigate the partitioning possibility, but I'm afraid it will
cause more problems than it solves. The input is delivered by an external
process which does not respect any locality rules, so the treatment cannot
be clustered effectively. (This is why we chose to lock per object.)

It also seems difficult to remove any part of the locking. What we really
want is to isolate the treatment of one object, so that its data cannot be
read and updated at the same time (and monitors cannot give us read/write
lock semantics either ...). One other constraint is that the treatment of
one object may require data from other objects. To avoid any deadlock we
keep the objects in an acyclic graph.

> 	- If I have something to do on many objects
> 		If I use classic Java synchronization:
> 		List<MyObject> toDo = ...;
> 		for (MyObject theObject : toDo) {
> 			synchronized (theObject) {
> 				// treatment on theObject
> 			}
> 		}

Make sure your iterators have consistent ordering, otherwise you could
easily deadlock.
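One standard way to get that consistent ordering is to give every object a unique, stable id and always acquire locks in ascending id order, so no cycle of waiters can form. A sketch under that assumption (the `Node` class and ids are illustrative, not the poster's real code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Each object gets a unique id at construction time.
final class Node {
    private static final AtomicLong NEXT_ID = new AtomicLong();
    final long id = NEXT_ID.getAndIncrement();
    final ReentrantLock lock = new ReentrantLock();
}

final class OrderedLocking {
    // Lock all nodes in ascending id order, run the treatment, unlock.
    static void withLocks(List<Node> nodes, Runnable treatment) {
        List<Node> sorted = new ArrayList<>(nodes);
        sorted.sort(Comparator.comparingLong(n -> n.id));
        for (Node n : sorted) {
            n.lock.lock();
        }
        try {
            treatment.run();
        } finally {
            // Release in reverse acquisition order (conventional, not required).
            for (int i = sorted.size() - 1; i >= 0; i--) {
                sorted.get(i).lock.unlock();
            }
        }
    }
}
```

Because every thread sorts by the same global order before locking, two threads can never each hold a lock the other is waiting for.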

  I agree :) , but this is just an example , not the real code ;)

> 		If I use concurrent objects I could do something like:
> 		int cursor_ = 0;
> 		while (toDo.size() > 0) {
> 			MyObject theObject = toDo.get(cursor_);
> 			if (theObject.getLock().tryLock()) {
> 				try {
> 					toDo.remove(cursor_);
> 					// treatment on theObject
> 				} finally {
> 					theObject.getLock().unlock();
> 				}
> 			}
> 			if (toDo.size() > 0)
> 				cursor_ = (cursor_ + 1) % toDo.size();
> 		}

If you can ask each object for its lock (which need not be a distinct lock
per object) then I don't see why the above form is needed instead of:

         for (MyObject theObject : toDo) {
             Lock l = theObject.getLock();
             l.lock();
             try {
                 // treatment on theObject
             } finally {
                 l.unlock();
             }
         }

	This is a small optimization: avoid waiting on one object when you
could be doing the treatment on another in the meantime. (Which is a lot
more complicated to do with monitors ...)
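That skip-ahead idea can be written more simply with a work queue: poll each object, and if its lock is busy, push it to the back and try it again later. A self-contained sketch (`MyObject` here is a stand-in for the poster's class, with an assumed `getLock()` accessor):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// Stand-in for the poster's per-object lock holder.
final class MyObject {
    private final ReentrantLock lock = new ReentrantLock();
    ReentrantLock getLock() { return lock; }
    void treat() { /* treatment on this object */ }
}

final class SkipAhead {
    // Process every object, skipping (and later revisiting) any whose
    // lock is currently held by another thread.
    static void processAll(Deque<MyObject> toDo) {
        while (!toDo.isEmpty()) {
            MyObject o = toDo.pollFirst();
            if (o.getLock().tryLock()) {
                try {
                    o.treat();
                } finally {
                    o.getLock().unlock();
                }
            } else {
                toDo.addLast(o); // busy right now: revisit later
            }
        }
    }
}
```

Note that if some other thread holds a lock indefinitely, this loop spins on the remaining objects, so in practice a backoff or a bounded retry count would be worth adding.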

> 	How much system resource will be used? (Are system resources
> acquired only when something is done with the lock, or already when
> constructing the object?)

The main resource used will be memory, both in constructing the locks and
anytime a thread needs to block on the lock. Actually blocking of a thread
may need additional low-level resources but these are likely to be needed by
a thread during its lifetime anyway.

David Holmes

Thank you for your quick answer.
