[concurrency-interest] Concurrency-interest Digest, Vol 17, Issue 4

steve at jofti.com steve at jofti.com
Mon Jun 5 15:49:24 EDT 2006


Hi Greg,
Have you thought about a compromise (sort of like ConcurrentHashMap):
keep a fixed array of locks (an open hashing table, in effect) and pick
a lock by taking the hashcode of the key mod the number of elements in
the array?

This would give you reasonably fine-grained locking (how fine depends on
hash collisions and the number of slots) - a large enough prime number
of slots would probably give you a tradeoff you could live with.

Assuming the locks are held only briefly you should get reasonable
performance - if not, you might see bottlenecks on heavily shared slots.
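
Roughly the sort of thing I mean (only a sketch - the class name and
stripe count are invented, and I'm using JDK5's ReentrantLock here,
though Doug Lea's Mutex would do just as well):

import java.util.concurrent.locks.ReentrantLock;

public class StripedLocks {

    // Fixed, prime-sized table of locks; nothing is ever added or removed.
    private final ReentrantLock[] stripes = new ReentrantLock[127];

    public StripedLocks() {
        for (int i = 0; i < stripes.length; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    // All keys that hash to the same slot share one lock.
    public ReentrantLock lockFor(Object key) {
        int hash = key.hashCode();
        int index = (hash ^ (hash >>> 16)) & Integer.MAX_VALUE;
        return stripes[index % stripes.length];
    }
}

The cache would call lockFor(key).lock() where it currently acquires the
per-key Mutex and unlock() where it releases it. The number of lock
objects stays constant, so there is nothing to clean up, at the cost of
unrelated keys occasionally sharing a lock.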

Any thoughts?

steve


On 2 Jun 2006, at 22:03, concurrency-interest-request at cs.oswego.edu  
wrote:

>
> Today's Topics:
>
>    1. Problem with using an unbounded map of Mutex	lock objects
>       (Greg Luck)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 3 Jun 2006 07:02:59 +1000
> From: Greg Luck <gluck at gregluck.com>
> Subject: [concurrency-interest] Problem with using an unbounded map of
> 	Mutex	lock objects
> To: concurrency-interest at cs.oswego.edu
> Message-ID: <A397B776-BE4D-4F79-B053-F16654D354A5 at gregluck.com>
> Content-Type: text/plain; charset="us-ascii"
>
> Hi
>
> (I mailed this initially to Doug Lea. He is travelling at present and
> suggested I post it here for a quicker response).
>
> I am the maintainer of ehcache, a leading open source project, and
> probably the most used Java cache. I have also just joined JSR-107
> and am working towards making ehcache an implementation of this spec.
> I also have a copy of Concurrent Programming in Java, Second Edition.
>
> A few years ago we had a serious problem with scalability caused by
> extreme contention on synchronized methods on BlockingCache. We
> redesigned it to use fine-grained Mutex objects, from your
> concurrency package, in a Map of locks keyed by possible cache keys.
> The class is listed below. The scalability went up by an order of
> magnitude. Problem solved.
>
> However, some very careful users have discovered a problem. Once a
> Mutex object is placed in the Map, it stays there forever. It is
> technically a slow memory leak unless the key set is bounded so that
> at some point it stops growing.
> Users are also complaining of slow garbage collection times because
> of the millions of Mutex objects they end up with.
>
> I am not sure what the solution here is. I don't think I can simply
> remove them, because I don't know whether they are in use or, more
> subtly, whether a caller is holding a reference while waiting to
> acquire a lock and put the Mutex into use. If I remove one from the
> lock Map, a new caller will put a new one in the Map with the same
> key, and I could end up with more than one thread thinking it holds
> the lock for the same key.
>
> I am thinking perhaps a pool of locks, but I need to work out how to
> reuse them safely. Something close to the Pool discussed in
> Section 3.4.1.2 is probably what I need.
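>
> The rough shape I keep coming back to (only a sketch - untested, and
> the names are invented) is to count the threads that hold or are
> waiting on each Mutex, with the locks Map holding a small wrapper
> instead of a bare Mutex, so an entry can be removed once the count
> drops to zero:
>
>      private static final class CountedMutex {
>          final Mutex mutex = new Mutex();
>          int users;  // guarded by the BlockingCache monitor
>      }
>
>      private synchronized CountedMutex checkoutLockForKey(final Serializable key) {
>          CountedMutex lock = (CountedMutex) locks.get(key);
>          if (lock == null) {
>              lock = new CountedMutex();
>              locks.put(key, lock);
>          }
>          lock.users++;  // counted before the caller acquires or waits
>          return lock;
>      }
>
>      private synchronized void checkinLockForKey(final Serializable key,
>                                                  final CountedMutex lock) {
>          if (--lock.users == 0) {
>              locks.remove(key);  // no holder or waiter remains
>          }
>      }
>
> get() would check a lock out before acquiring it, and both get() (when
> it finds an element) and put() would check it back in after releasing,
> so a Mutex only ever leaves the Map when no thread can still be
> referring to it.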
>
> Any thoughts you have on this would be most appreciated by myself and
> my user community.
>
> /**
> * A blocking cache, backed by {@link Ehcache}.
> * <p/>
> * It allows concurrent read access to elements already in the cache.
> * If the element is null, other reads will block until an element with
> * the same key is put into the cache.
> * <p/>
> * This is useful for constructing read-through or self-populating caches.
> * <p/>
> * This implementation uses the {@link Mutex} class from Doug Lea's
> * concurrency package. If you wish to use this class, you will need the
> * concurrent package in your class path.
> * <p/>
> * It features:
> * <ul>
> * <li>Excellent liveness.
> * <li>Fine-grained locking on each element, rather than the cache as a whole.
> * <li>Scalability to a large number of threads.
> * </ul>
> * <p/>
> * A version of this class is planned which will dynamically use JDK5's
> * concurrency package, which is based on Doug Lea's, so as to avoid a
> * dependency on his package for JDK5 systems. This will not be
> * implemented until JDK5 is released on MacOSX and Linux, as JDK5 will
> * be required to compile it, though any version from JDK1.2 up will be
> * able to run the code, falling back to Doug Lea's concurrency package
> * if the JDK5 package is not found in the classpath.
> * <p/>
> * The <code>Mutex</code> class does not appear in the JDK5 concurrency
> * package. Doug Lea has generously offered the following advice:
> * <p/>
> * <pre>
> * You should just be able to use ReentrantLock here.  We supply
> * ReentrantLock, but not Mutex because the number of cases where a
> * non-reentrant mutex is preferable is small, and most people are more
> * familiar with reentrant semantics. If you really need a non-reentrant
> * one, the javadocs for class AbstractQueuedSynchronizer include sample
> * code for them.
> * <p/>
> * -Doug
> * </pre>
> * <p/>
> *
> * @author Greg Luck
> * @version $Id: BlockingCache.java 94 2006-05-25 09:06:30Z gregluck $
> */
> public class BlockingCache {
>
>      private static final Log LOG = LogFactory.getLog(BlockingCache.class.getName());
>
>      /**
>       * The backing Cache
>       */
>      private final Ehcache cache;
>
>
>      private final int timeoutMillis;
>
>      /**
>       * A map of cache entry locks, one per key, if present
>       */
>      private final Map locks = new HashMap();
>
>      /**
>       * Creates a BlockingCache with the given name.
>       *
>       * @param name the name to give the cache
>       * @throws CacheException
>       */
>      public BlockingCache(final String name) throws CacheException {
>          this(name, 0);
>      }
>
>      /**
>       * Creates a BlockingCache with the given name.
>       *
>       * @param name          the name to give the cache
>       * @param timeoutMillis the amount of time, in milliseconds, to block for
>       * @throws CacheException
>       * @since 1.2
>       */
>      public BlockingCache(final String name, int timeoutMillis)
>              throws CacheException {
>          CacheManager manager = null;
>          try {
>              manager = CacheManager.create();
>          } catch (net.sf.ehcache.CacheException e) {
>              LOG.fatal("CacheManager cannot be created. Cause was: "
>                      + e.getMessage() + e);
>              throw new CacheException("CacheManager cannot be created", e);
>          }
>          cache = manager.getCache(name);
>          if (cache == null || !cache.getName().equals(name)) {
>              throw new CacheException("Cache " + name
>                      + " cannot be retrieved. Please check ehcache.xml");
>          }
>          this.timeoutMillis = timeoutMillis;
>      }
>
>      /**
>       * Creates a BlockingCache with the given name and
>       * uses the given cache manager to create the cache
>       *
>       * @param name    the name to give the cache
>       * @param manager the EHCache CacheManager used to create the backing cache
>       * @throws CacheException
>       */
>      public BlockingCache(final String name, final CacheManager manager)
>              throws CacheException {
>          this(name, manager, 0);
>      }
>
>      /**
>       * Creates a BlockingCache with the given name and
>       * uses the given cache manager to create the cache
>       *
>       * @param name          the name to give the cache
>       * @param manager       the EHCache CacheManager used to create the backing cache
>       * @param timeoutMillis the amount of time, in milliseconds, to block for
>       * @throws CacheException
>       * @since 1.2
>       */
>      public BlockingCache(final String name, final CacheManager manager,
>              int timeoutMillis) throws CacheException {
>          if (manager == null) {
>              throw new CacheException("CacheManager cannot be null");
>          }
>          cache = manager.getCache(name);
>          if (cache == null || !cache.getName().equals(name)) {
>              throw new CacheException("Cache " + name
>                      + " cannot be retrieved. Please check ehcache.xml");
>          }
>          this.timeoutMillis = timeoutMillis;
>      }
>
>      /**
>       * Retrieve the EHCache backing cache
>       */
>      protected net.sf.ehcache.Ehcache getCache() {
>          return cache;
>      }
>
>      /**
>       * Returns this cache's name
>       */
>      public String getName() {
>          return cache.getName();
>      }
>
>      /**
>       * Looks up an entry.  Blocks if the entry is null.
>       * Relies on the first thread putting an entry in, which releases
>       * the lock. If a put is not done, the lock is never released.
>       */
>      public Serializable get(final Serializable key) throws BlockingCacheException {
>          Mutex lock = checkLockExistsForKey(key);
>          try {
>              if (timeoutMillis == 0) {
>                  lock.acquire();
>              } else {
>                  boolean acquired = lock.attempt(timeoutMillis);
>                  if (!acquired) {
>                      StringBuffer message = new StringBuffer(
>                              "lock timeout attempting to acquire lock for key ")
>                              .append(key).append(" on cache ")
>                              .append(cache.getName());
>                      throw new BlockingCacheException(message.toString());
>                  }
>              }
>              final Element element = cache.get(key);
>              if (element != null) {
>                  //ok let the other threads in
>                  lock.release();
>                  return element.getValue();
>              } else {
>                  //don't release the read lock until we write
>                  return null;
>              }
>          } catch (InterruptedException e) {
>              throw new CacheException("Interrupted. Message was: " +
> e.getMessage());
>          }
>      }
>
>      private synchronized Mutex checkLockExistsForKey(final Serializable key) {
>          Mutex lock;
>          lock = (Mutex) locks.get(key);
>          if (lock == null) {
>              lock = new Mutex();
>              locks.put(key, lock);
>          }
>          return lock;
>      }
>
>      /**
>       * Adds an entry and unlocks it
>       */
>      public void put(final Serializable key, final Serializable value) {
>          Mutex lock = checkLockExistsForKey(key);
>          try {
>              if (value != null) {
>                  final Element element = new Element(key, value);
>                  cache.put(element);
>              } else {
>                  cache.remove(key);
>              }
>          } finally {
>              //Release the readlock here. This will have been acquired in the get, where the element was null
>              lock.release();
>          }
>      }
>
>      /**
>       * Returns the keys for this cache.
>       *
>       * @return The keys of this cache.  This is not a live set, so it
>       *         will not track changes to the key set.
>       */
>      public Collection getKeys() throws CacheException {
>          return cache.getKeys();
>      }
>
>      /**
>       * Drops the contents of this cache.
>       */
>      public void clear() throws CacheException {
>          if (LOG.isDebugEnabled()) {
>              LOG.debug("Cache " + cache.getName() + ": removing all entries");
>          }
>          cache.removeAll();
>      }
>
>      /**
>       * Synchronized version of getName to test liveness of the object lock.
>       * <p/>
>       * The time taken for this method to return is a useful measure of
>       * runtime contention on the cache.
>       */
>      public synchronized String liveness() {
>          return getName();
>      }
>
>      /**
>       * Gets all entries from a blocking cache. Cache is not serializable.
>       * This method provides a way of accessing the keys and values of a
>       * Cache in a Serializable way, e.g. to return from a Remote call.
>       * <p/>
>       * This method may take a long time to return. It does not lock the
>       * cache. The list of entries is based on a copy. The actual cache may
>       * have changed in the time between getting the list and gathering the
>       * KeyValuePairs.
>       * <p/>
>       * This method can potentially return an extremely large object,
>       * roughly matching the memory taken by the cache itself. Care should
>       * be taken before using this method.
>       * <p/>
>       * By getting all of the entries at once this method can transfer a
>       * whole cache with a single method call, which is important for
>       * Remote calls across a network.
>       *
>       * @return a Serializable {@link java.util.List} of {@link KeyValuePair}s,
>       *         which implement the Map.Entry interface
>       * @throws CacheException where there is an error in the underlying cache
>       */
>      public List getEntries() throws CacheException {
>          if (LOG.isDebugEnabled()) {
>              LOG.debug("Getting entries for the " + cache.getName() + " cache");
>          }
>          Collection keys = cache.getKeys();
>          List keyValuePairs = new ArrayList(keys.size());
>          for (Iterator iterator = keys.iterator(); iterator.hasNext();) {
>              Serializable key = (Serializable) iterator.next();
>              Element element = cache.get(key);
>              keyValuePairs.add(new KeyValuePair(key, element.getValue()));
>          }
>          return keyValuePairs;
>      }
> }
>
>
>
> Regards
>
> Greg Luck
>
> web: http://gregluck.com
> skype: gregrluck
> yahoo: gregrluck
> mobile: +61 408 061 622
>
>
>
> ------------------------------
>
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at altair.cs.oswego.edu
> http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest
>
>
> End of Concurrency-interest Digest, Vol 17, Issue 4
> ***************************************************


