[concurrency-interest] Problem with using an unbounded map of Mutex lock objects

Joe Bowbeer joe.bowbeer at gmail.com
Tue Jun 6 06:36:21 EDT 2006


I'm disappointed you're not using the Future-based approach.  I was
hoping you'd benchmark it and report back.  I'm interested to know how
it performs :-)
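For anyone following along, the Future-based approach being referred to is, I believe, along the lines of the Memoizer idiom: the first thread to request a key computes it, and concurrent requesters block on the same Future rather than holding a lock during the computation. A minimal sketch (class and method names are mine, not from ehcache):

```java
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Per-key Future-based blocking cache sketch: no lock array at all; the map of
// Futures provides both the "compute once" and the "block other requesters"
// behavior.
class FutureCache<K extends Serializable, V> {
    private final ConcurrentMap<K, Future<V>> map =
            new ConcurrentHashMap<K, Future<V>>();

    V get(K key, Callable<V> loader) {
        Future<V> f = map.get(key);
        if (f == null) {
            FutureTask<V> task = new FutureTask<V>(loader);
            f = map.putIfAbsent(key, task);
            if (f == null) {
                // We won the race: compute on the caller's thread.
                f = task;
                task.run();
            }
        }
        try {
            return f.get(); // blocks until the value is ready
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        } catch (ExecutionException e) {
            throw new RuntimeException(e.getCause());
        }
    }
}
```

One caveat with this sketch: a Future whose computation failed stays in the map, so a real implementation would want to remove it on failure.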

However, I can suggest a couple of optimizations for your striping implementation.

1. Spread the bits in the hash code.

ConcurrentHashMap spreads the hash with the following:

    /**
     * Returns a hash code for non-null Object x.
     * Uses the same hash code spreader as most other java.util hash tables.
     * @param x the object serving as a key
     * @return the hash code
     */
    static int hash(Object x) {
        int h = x.hashCode();
        h += ~(h << 9);
        h ^=  (h >>> 14);
        h +=  (h << 4);
        h ^=  (h >>> 10);
        return h;
    }
Rationale: Hash codes are generally distinct, but they are not randomly
distributed. The method above disperses the bits, producing a more even
distribution of lock indexes.
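To make the effect concrete, here is a small demonstration of my own (not from the post): hash codes whose low-order bits are all identical, such as multiples of 1024, collapse onto a single stripe when masked directly, but land on many different stripes once the bits are mixed first.

```java
import java.util.HashSet;
import java.util.Set;

// Demonstrates why the bit spreader matters for stripe selection.
class SpreadDemo {
    static final int STRIPES = 128;

    // Same spreader as the ConcurrentHashMap snippet above.
    static int hash(int h) {
        h += ~(h << 9);
        h ^=  (h >>> 14);
        h +=  (h << 4);
        h ^=  (h >>> 10);
        return h;
    }

    // Counts distinct stripes hit by 100 hash codes that are multiples of 1024
    // (so their low 7 bits are always zero).
    static int distinctStripes(boolean mixFirst) {
        Set<Integer> stripes = new HashSet<Integer>();
        for (int i = 0; i < 100; i++) {
            int h = i * 1024;
            if (mixFirst) {
                h = hash(h);
            }
            stripes.add(h & (STRIPES - 1));
        }
        return stripes.size();
    }

    public static void main(String[] args) {
        System.out.println("raw: " + distinctStripes(false)
                + ", mixed: " + distinctStripes(true));
    }
}
```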

2. Require LOCK_NUMBER to be a power of two, such as 128.

In this way, a bitwise AND (&) can replace the modulo operation (%) when
computing the lock index.
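A quick check of my own on the equivalence: for a power-of-two table size, masking matches modulo for non-negative values, and (unlike Java's %, which keeps the sign of the dividend) it never produces a negative index for a negative hash code.

```java
// Masking vs. modulo for a power-of-two stripe count.
class MaskDemo {
    static final int N = 128; // power of two

    public static void main(String[] args) {
        // For non-negative h, h & (N - 1) and h % N agree.
        for (int h = 0; h < 10000; h++) {
            if ((h & (N - 1)) != h % N) {
                throw new AssertionError("mismatch at " + h);
            }
        }
        // For negative h they differ: % would give an out-of-bounds index.
        System.out.println((-5 % N) + " vs " + (-5 & (N - 1))); // -5 vs 123
    }
}
```

This is a second, quieter benefit of the mask: since hash(key) can be negative, % would need an extra Math.abs or sign fix-up, while & needs nothing.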

Applying both of these to getLockForKey:

    /**
     * The default number of locks to use. Must be a power of 2.
     */
    protected static final int LOCK_NUMBER = 128;

    private Mutex getLockForKey(final Serializable key) {
        if (key == null)
            return locks[0];
        assert (LOCK_NUMBER & (LOCK_NUMBER - 1)) == 0; // power of 2
        int h = hash(key) & (LOCK_NUMBER - 1);
        return locks[h];
    }
Joe ~ joebowbeer.thruhere.net

On 6/6/06, Greg Luck <gluck at gregluck.com> wrote:
> Brian et al
> Thanks so much for everyone's contributions.
> I tested out Brian's lock striping with my suite of concurrency tests
> (BlockingCacheTest in ehcache) which hammer BlockingCache with 100 threads.
> It seems to work very well.
> Mutex objects use 24 bytes so I opted to go for 100. That is 2400 bytes in
> total, way down from the tens of MBs that is common now in ehcache for large
> systems.
> I also looked into the allocation of locks for different keys. In the tests
> I did for lock selection, locks were evenly selected +- 20%.
> In all this looks to be an excellent solution with minimal change to the
> BlockingCache. The newly revised blocking cache is reproduced below. I would
> appreciate anyone letting me know if there are any shortcomings in it.
> One point to note. In Brian's implementation, getLockForKey(Serializable
> key) was synchronized. I think this is unnecessary and potentially harmful
> to scalability. Mine is not synchronized.
> Finally thanks to Tim Peierls and others for alternate solutions. I don't
> want to bring in another lib at this time. One of the attractions of ehcache,
> which I would like to preserve, is the minimal dependencies.

More information about the Concurrency-interest mailing list