[concurrency-interest] Problem with using an unbounded map of Mutex lock objects

Dhanji R. Prasanna dhanji at gmail.com
Sat Jun 3 00:09:10 EDT 2006


a minor correction:

> However with lock striping and a sufficiently high entropy of cache
> keys (I assume this depends entirely on the target business domain)

should read "sufficiently *low* entropy"

On 6/3/06, Dhanji R. Prasanna <dhanji at gmail.com> wrote:
> The lock striping idea is very interesting. I wonder what your cache
> key distribution would look like and whether it would always have a
> high enough entropy to make lock striping effective (versus pooling).
>
> As I see it, the benefits of pooling are:
> - no bottleneck until all pool elements are simultaneously in use,
> - no overhead of instantiation/gc of new locks for cache keys (this is
> also true of striping)
>
> However with lock striping and a sufficiently high entropy of cache
> keys (I assume this depends entirely on the target business domain)
> you could end up with the same bottleneck on one heavily used stripe,
> while the other locks sit idle. If the entropy of the cache keys cannot
> be guaranteed beforehand, I would definitely suggest that lock pooling
> is the better solution (even with the same number of locks).
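>
> (For concreteness, a rough sketch of what I mean by per-key lock
> pooling -- the names and structure here are purely illustrative, using
> java.util.concurrent.locks rather than any particular library. A fixed
> set of locks is handed out to keys on demand and recycled on release,
> so a thread only blocks on the pool itself once all N locks are
> simultaneously in use:)
>
> import java.util.HashMap;
> import java.util.LinkedList;
> import java.util.Map;
> import java.util.Queue;
> import java.util.concurrent.locks.Lock;
> import java.util.concurrent.locks.ReentrantLock;
>
> class LockPool {
>     private static final class Entry {
>         final Lock lock;
>         int refs;              // threads holding or waiting on this key
>         Entry(Lock lock) { this.lock = lock; }
>     }
>
>     private final Queue<Lock> free = new LinkedList<Lock>();
>     private final Map<Object, Entry> inUse = new HashMap<Object, Entry>();
>
>     LockPool(int size) {
>         for (int i = 0; i < size; i++)
>             free.add(new ReentrantLock());
>     }
>
>     /** Blocks until a pooled lock is assigned to this key, then acquires it. */
>     Lock acquire(Object key) throws InterruptedException {
>         Entry e;
>         synchronized (this) {
>             e = inUse.get(key);
>             if (e == null) {
>                 while (free.isEmpty())
>                     wait();                 // every pool element is in use
>                 e = new Entry(free.remove());
>                 inUse.put(key, e);
>             }
>             e.refs++;
>         }
>         e.lock.lock();                      // contend outside the monitor
>         return e.lock;
>     }
>
>     /** Releases the key's lock, recycling it once no thread needs it. */
>     void release(Object key) {
>         Lock lock;
>         synchronized (this) {
>             Entry e = inUse.get(key);
>             if (e == null)
>                 throw new IllegalStateException("not acquired: " + key);
>             lock = e.lock;
>             if (--e.refs == 0) {
>                 inUse.remove(key);
>                 free.add(e.lock);
>                 notifyAll();                // wake threads waiting for a lock
>             }
>         }
>         lock.unlock();
>     }
> }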
>
> On 6/3/06, Brian Goetz <brian at quiotix.com> wrote:
> > > That sounds good. I don't have a copy of JCiP so I bought one :) from
> > > Amazon.
> >
> > Guess all that "Jedi mind trick" practice is paying off :)
> >
> > > It will arrive between June 21 and July 3. I would appreciate
> > > any online or email stuff on lock striping in the meantime.
> >
> > Hashtable / synchronizedMap uses the "one big fat lock" approach to
> > guard the mutable state of the map.  That works, but is a big
> > concurrency bottleneck, as you've observed.  You went to the opposite
> > extreme, one lock per key.  That works (as long as you've got sufficient
> > synchronization in the cache itself to protect its own data structures).
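> >
> > (As an aside, a minimal sketch of what that one-lock-per-key approach
> > might look like -- purely illustrative, not the original code, and
> > using java.util.concurrent.locks in place of the old Mutex class. Note
> > that the lock map grows without bound as new keys arrive, which is the
> > problem in the subject line:)
> >
> > import java.io.Serializable;
> > import java.util.concurrent.ConcurrentHashMap;
> > import java.util.concurrent.ConcurrentMap;
> > import java.util.concurrent.locks.Lock;
> > import java.util.concurrent.locks.ReentrantLock;
> >
> > class PerKeyLocks {
> >     // one lock per key, created on demand and never discarded
> >     private final ConcurrentMap<Serializable, Lock> locks =
> >         new ConcurrentHashMap<Serializable, Lock>();
> >
> >     Lock getLockForKey(Serializable key) {
> >         Lock lock = locks.get(key);
> >         if (lock == null) {
> >             Lock fresh = new ReentrantLock();
> >             // atomic: the first thread to install a lock for this key wins
> >             Lock prior = locks.putIfAbsent(key, fresh);
> >             lock = (prior != null) ? prior : fresh;
> >         }
> >         return lock;
> >     }
> > }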
> >
> > Lock striping is a middle ground, partitioning keys into a fixed number
> > of subsets, like the trick used at large theaters for will-call ticket
> > pickup -- there are separate lines for "A-F, G-M, N-R, and S-Z".  This
> > way, there are a fixed number of locks, each guarding (hopefully) 1/Nth
> > of the keys.
> >
> > You could use striping in your example:
> >
> >    // uses java.util.concurrent.locks.Lock / ReentrantLock
> >    Lock[] locks = new Lock[N];
> >    /* initializer block */ {
> >      for (int i = 0; i < N; i++)
> >          locks[i] = new ReentrantLock();
> >    }
> >
> > and
> >
> > private Lock getLockForKey(final Serializable key) {
> >      // the locks array is immutable once built, so no synchronization
> >      // is needed here; Java's % can be negative, so normalize to [0, N)
> >      int h = key.hashCode() % N;
> >      if (h < 0)
> >          h += N;
> >      return locks[h];
> > }
> >
> > This way, 1/Nth of the keys are guarded by locks[0], 1/Nth by locks[1],
> > etc.  Concurrency can be tuned by choosing N; we found 16 works well on
> > systems with fewer than a few dozen processors.
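> >
> > (To illustrate how the striped lock might be used -- a hypothetical
> > get-or-load path, not from the book; a ConcurrentHashMap protects the
> > cache's own structure, while the stripe lock keeps two threads from
> > loading the same key at once:)
> >
> >     private final ConcurrentMap<Serializable, Object> cache =
> >         new ConcurrentHashMap<Serializable, Object>();
> >
> >     public Object getOrLoad(Serializable key) {
> >         Object value = cache.get(key);        // fast path: no lock
> >         if (value != null)
> >             return value;
> >         Lock lock = getLockForKey(key);
> >         lock.lock();
> >         try {
> >             value = cache.get(key);           // re-check under the stripe lock
> >             if (value == null) {
> >                 value = load(key);            // hypothetical expensive loader
> >                 cache.put(key, value);
> >             }
> >             return value;
> >         } finally {
> >             lock.unlock();
> >         }
> >     }
> >
> > (Two distinct keys that hash to the same stripe will serialize their
> > loads; that contention is exactly what choosing N tunes.)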
> > _______________________________________________
> > Concurrency-interest mailing list
> > Concurrency-interest at altair.cs.oswego.edu
> > http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest
> >
>

