[concurrency-interest] Concurrent approximate LRU?

Bryan Thompson bryan at systap.com
Sat Nov 7 14:13:33 EST 2009


Sanne & Tim,

I will look into this.  Thanks for pointing it out.  I've been meaning to check out Infinispan anyway.  Given the resounding silence on the list, it occurs to me that this can probably be handled with a combination of ConcurrentLinkedQueue and ConcurrentHashMap, but I have not yet explored that possibility.  The LIRS policy looks quite interesting.
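
Concretely, I am imagining something along the lines of a second-chance (CLOCK) scheme, which approximates LRU while keeping the read path lock-free.  This is only a sketch under that assumption; the names are made up and I have not benchmarked it:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicBoolean;

    /**
     * Approximates LRU with a second-chance (CLOCK) sweep: a touch just
     * sets a per-entry flag, so reads never take a lock or edit a list.
     */
    final class ClockCache<K, V> {

        private static final class Entry<V> {
            final V value;
            final AtomicBoolean referenced = new AtomicBoolean(true);
            Entry(V value) { this.value = value; }
        }

        private final ConcurrentHashMap<K, Entry<V>> map =
                new ConcurrentHashMap<K, Entry<V>>();
        private final ConcurrentLinkedQueue<K> ring =
                new ConcurrentLinkedQueue<K>();
        private final int capacity;

        ClockCache(int capacity) { this.capacity = capacity; }

        V get(K key) {
            Entry<V> e = map.get(key);
            if (e == null) return null;
            e.referenced.set(true); // record the touch; no list surgery
            return e.value;
        }

        void put(K key, V value) {
            // (Updating an existing key is elided in this sketch.)
            if (map.putIfAbsent(key, new Entry<V>(value)) == null) {
                ring.add(key); // each key appears exactly once in the ring
                while (map.size() > capacity) evictOne();
            }
        }

        private void evictOne() {
            K k = ring.poll();
            if (k == null) return;
            Entry<V> e = map.get(k);
            if (e == null) return; // raced with a concurrent removal
            if (e.referenced.compareAndSet(true, false)) {
                ring.add(k);   // touched since the last sweep: second chance
            } else {
                map.remove(k); // cold entry: evict
            }
        }
    }

The sweep runs on whichever thread notices the cache is over capacity, so no thread ever blocks just to record a touch; the price is that the eviction order is only approximately LRU.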

You can take a look at the project I am working on at http://www.bigdata.com/blog.  It is a horizontally scaled MVCC database architecture with a focus on semantic web applications.

Thanks,

-bryan


> -----Original Message-----
> From: Sanne Grinovero [mailto:sanne.grinovero at gmail.com] 
> Sent: Saturday, November 07, 2009 6:28 AM
> To: Bryan Thompson
> Cc: concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] Concurrent approximate LRU?
> 
> Hi Bryan,
> I don't know the details of the implementation but Infinispan 
> is doing something similar; it's LGPL, so you might want to 
> have a look in the sources or ask to in the forums: 
> www.jboss.org/infinispan It implements "ConcurrentHashMap" so 
> it should be straight-forward to drop in.
> Especially related to your use case: have a look at the 
> Hibernate second level cache implementation based on Infinispan.
> 
> Sanne
> 
> 2009/10/21 Bryan Thompson <bryan at systap.com>:
> > Hello,
> >
> > We have a database with many different backing stores, all 
> > of which compete to buffer records in RAM under a shared LRU 
> > cache policy.  We have implemented this using an outer 
> > concurrent hash map providing indirection to a collection of 
> > inner hash maps (we use LinkedHashMap here for its faster 
> > iterator, but we do not use its ability to impose an LRU 
> > ordering, because the ordering must be shared across all 
> > inner cache instances).  The keys are Longs.  The values are 
> > elements in a doubly-linked list.  On an LRU eviction, the 
> > record is evicted from the inner hash map corresponding to 
> > its store.  This spreads the memory constraint across the 
> > backing stores based on access.
> >
> > The problem with this approach is that the doubly-linked 
> > list is not thread-safe, so we have to obtain a lock when 
> > testing the cache and when adding to it, and hold that lock 
> > across the whole operation.  This imposes roughly a 10% 
> > performance penalty.
> >
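> > In sketch form, the current layout looks something like the 
> > following (names illustrative, details elided); note that 
> > every touch and every put funnels through the single lock, 
> > which is where that serialization cost comes from:
> >
> >     import java.util.LinkedHashMap;
> >     import java.util.Map;
> >     import java.util.concurrent.ConcurrentHashMap;
> >
> >     final class SharedLruCache<V> {
> >
> >         private static final class Node<V> {
> >             final String store; // identifies the owning backing store
> >             final Long addr;
> >             V value;
> >             Node<V> prev, next; // links in the shared LRU list
> >             Node(String store, Long addr, V value) {
> >                 this.store = store; this.addr = addr; this.value = value;
> >             }
> >         }
> >
> >         // Outer concurrent map: store identifier -> inner per-store map.
> >         private final ConcurrentHashMap<String, Map<Long, Node<V>>> stores =
> >                 new ConcurrentHashMap<String, Map<Long, Node<V>>>();
> >         private final Object lruLock = new Object(); // guards the shared list
> >         private Node<V> head, tail; // head is the least recently used
> >         private int size;
> >         private final int capacity;
> >
> >         SharedLruCache(int capacity) { this.capacity = capacity; }
> >
> >         private Map<Long, Node<V>> inner(String store) {
> >             Map<Long, Node<V>> m = stores.get(store);
> >             if (m == null) {
> >                 Map<Long, Node<V>> fresh = new LinkedHashMap<Long, Node<V>>();
> >                 m = stores.putIfAbsent(store, fresh);
> >                 if (m == null) m = fresh;
> >             }
> >             return m;
> >         }
> >
> >         V get(String store, Long addr) {
> >             synchronized (lruLock) { // a touch reorders the shared list
> >                 Node<V> n = inner(store).get(addr);
> >                 if (n == null) return null;
> >                 moveToTail(n);
> >                 return n.value;
> >             }
> >         }
> >
> >         void put(String store, Long addr, V value) {
> >             synchronized (lruLock) { // test-then-add under one lock
> >                 Map<Long, Node<V>> m = inner(store);
> >                 Node<V> n = m.get(addr);
> >                 if (n != null) { n.value = value; moveToTail(n); return; }
> >                 n = new Node<V>(store, addr, value);
> >                 m.put(addr, n);
> >                 linkAtTail(n);
> >                 if (++size > capacity) evictHead();
> >             }
> >         }
> >
> >         private void linkAtTail(Node<V> n) {
> >             n.prev = tail; n.next = null;
> >             if (tail != null) tail.next = n; else head = n;
> >             tail = n;
> >         }
> >
> >         private void unlink(Node<V> n) {
> >             if (n.prev != null) n.prev.next = n.next; else head = n.next;
> >             if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
> >         }
> >
> >         private void moveToTail(Node<V> n) {
> >             if (n != tail) { unlink(n); linkAtTail(n); }
> >         }
> >
> >         private void evictHead() {
> >             Node<V> victim = head;
> >             if (victim == null) return;
> >             unlink(victim);
> >             inner(victim.store).remove(victim.addr); // evict from its store
> >             size--;
> >         }
> >     }
> >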
> > Unfortunately, ConcurrentHashMap does not allow us to impose 
> > an access policy; otherwise we could use a single 
> > ConcurrentHashMap instance with the store's identifier and 
> > the record's Long address as a composite key, and all would 
> > be well.
> >
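> > The composite key itself would be trivial, something like:
> >
> >     // Hypothetical composite key: store identifier plus record
> >     // address, so one ConcurrentHashMap could serve every store.
> >     final class CacheKey {
> >         private final long storeId;
> >         private final long addr;
> >         CacheKey(long storeId, long addr) {
> >             this.storeId = storeId; this.addr = addr;
> >         }
> >         @Override public boolean equals(Object o) {
> >             if (!(o instanceof CacheKey)) return false;
> >             CacheKey k = (CacheKey) o;
> >             return storeId == k.storeId && addr == k.addr;
> >         }
> >         @Override public int hashCode() {
> >             int h = (int) (storeId ^ (storeId >>> 32));
> >             return 31 * h + (int) (addr ^ (addr >>> 32));
> >         }
> >     }
> >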
> > I am wondering if there is a way to maintain an approximate 
> > LRU ordering across the elements in these different caches?
> >
> > Thanks in advance,
> >
> > -bryan
> > _______________________________________________
> > Concurrency-interest mailing list
> > Concurrency-interest at cs.oswego.edu
> > http://cs.oswego.edu/mailman/listinfo/concurrency-interest
> >
> 

