[concurrency-interest] Problem with using an unbounded map of Mutex lock objects

Tim Peierls tim at peierls.net
Sat Jun 3 02:39:18 EDT 2006


On second thought, this looks like another job for FutureTask. It's similar
to the Memoizer example in chapter 5 of JCiP.

I was expecting Joe Bowbeer to have chimed in by now, but I'll have to do it
for him. :-)

The approach is to cache a Future for each value rather than the value
itself. The get method finds the Future for the given key, adding it
atomically to the cache if it doesn't exist, and then waits for its result
to become available. The timed get method waits for up to a given time limit
before returning null. The put method sets the value of the Future
associated with the key (creating and adding one if necessary); if the value
has already been set, it removes the completed Future from the cache and
tries again with a fresh one.

Here's an illustration using ConcurrentHashMap as the underlying cache. It
uses generics to make the roles of variables clear, but that's not a
necessary part of the approach.

This code compiles and runs on a small test (not shown here).

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BlockingCache<K, V> {
    private final ConcurrentMap<K, Result<V>> cache =
        new ConcurrentHashMap<K, Result<V>>();

    // Blocks until a value has been put for this key.
    public V get(K key) throws InterruptedException {
        try {
            return futureFor(key).get();
        } catch (ExecutionException e) {
            throw new IllegalStateException(e.getCause()); // shouldn't happen
        }
    }

    // Waits up to the given timeout; returns null if no value arrives in time.
    public V get(K key, long timeout, TimeUnit unit)
            throws InterruptedException {
        try {
            return futureFor(key).get(timeout, unit);
        } catch (TimeoutException e) {
            return null;
        } catch (ExecutionException e) {
            throw new IllegalStateException(e.getCause()); // shouldn't happen
        }
    }

    // Sets the value for key; if the Future was already completed, removes
    // it (only if still the mapped value) and retries with a fresh one.
    public void put(K key, V value) {
        while (true) {
            Result<V> f = futureFor(key);
            if (!f.isDone()) {
                f.set(value);
                break;
            }
            cache.remove(key, f);
        }
    }

    // Finds the Future for key, atomically installing a new one if absent.
    private Result<V> futureFor(K key) {
        Result<V> f = cache.get(key);
        if (f == null) {
            Result<V> ft = new Result<V>();
            f = cache.putIfAbsent(key, ft);
            if (f == null) return ft;
        }
        return f;
    }

    // A FutureTask whose value is supplied via set() rather than by running
    // a computation; the no-op Runnable is never executed.
    static class Result<V> extends FutureTask<V> {
        Result() { super(new Runnable() { public void run() {} }, null); }
        public synchronized void set(V value) { super.set(value); }
    }
}
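
For illustration, here's a small driver showing the blocking and timed
behavior. It repeats the BlockingCache class above as a nested class (the
class and method names below are just my choices for the sketch) so the
file compiles on its own:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BlockingCacheDemo {

    // Same class as in the message above, nested so this file is standalone.
    static class BlockingCache<K, V> {
        private final ConcurrentMap<K, Result<V>> cache =
            new ConcurrentHashMap<K, Result<V>>();

        public V get(K key) throws InterruptedException {
            try {
                return futureFor(key).get();
            } catch (ExecutionException e) {
                throw new IllegalStateException(e.getCause());
            }
        }

        public V get(K key, long timeout, TimeUnit unit)
                throws InterruptedException {
            try {
                return futureFor(key).get(timeout, unit);
            } catch (TimeoutException e) {
                return null;
            } catch (ExecutionException e) {
                throw new IllegalStateException(e.getCause());
            }
        }

        public void put(K key, V value) {
            while (true) {
                Result<V> f = futureFor(key);
                if (!f.isDone()) {
                    f.set(value);
                    break;
                }
                cache.remove(key, f);
            }
        }

        private Result<V> futureFor(K key) {
            Result<V> f = cache.get(key);
            if (f == null) {
                Result<V> ft = new Result<V>();
                f = cache.putIfAbsent(key, ft);
                if (f == null) return ft;
            }
            return f;
        }

        static class Result<V> extends FutureTask<V> {
            Result() { super(new Runnable() { public void run() {} }, null); }
            public synchronized void set(V value) { super.set(value); }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final BlockingCache<String, Integer> cache =
            new BlockingCache<String, Integer>();

        // A consumer thread blocks in get() until a producer puts the value.
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("got: " + cache.get("answer"));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        Thread.sleep(50);         // give the consumer time to block
        cache.put("answer", 42);  // unblocks the waiting consumer
        consumer.join();

        // A timed get on an absent key gives up and returns null.
        System.out.println(cache.get("missing", 50, TimeUnit.MILLISECONDS));
    }
}
```

The point to notice is that get() never busy-waits: the waiting happens
inside FutureTask, and put() wakes all waiters at once.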

You'd definitely want to use the concurrency utilities backport if you go
with this approach.

--tim

On 6/3/06, Greg Luck <gluck at gregluck.com> wrote:
>
> Yes,
>
> It is a requirement that all operations for key K use the same lock.
>
>
> On 03/06/2006, at 2:12 PM, Brian Goetz wrote:
>
> However with lock striping and a sufficiently high entropy of cache
> keys (I assume this depends entirely on the target business domain)
> you could end up with the same bottleneck on one heavily used stripe,
>
>
> This is true of any hash-based collection.  If you have a poor hash
> function, performance suffers.  In the worst case:
>
>   public int hashCode() { return 0; }
>
> is a valid hash function, but it turns a hashmap into a linked list.
>
> I'm not quite sure what you have in mind for pooling locks, but I think it
> was a requirement that all operations for key K use the same lock, which
> eliminates pooling as an option.
>
>
> Regards
>
>
> Greg Luck
>
> web: http://gregluck.com
> skype: gregrluck
> yahoo: gregrluck
> mobile: +61 408 061 622
>
>
>
>
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at altair.cs.oswego.edu
> http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest
>
>
>