[concurrency-interest] A concurrent, linked HashMap impl
manik at jboss.org
Wed May 27 06:46:46 EDT 2009
On 27 May 2009, at 00:45, Doug Lea wrote:
> I gather that the main motivation for your approach is to support
> LRU replacement. Considering that many concurrent maps are used
> as caches, this seems to be a common desire. But we've been resisting
> supporting pure LRU because list-based approaches
> force a sequential bottleneck at the ends of the lists. So even
> a lock-free approach to maintaining an access-ordered list may
> encounter enough contention to negate the advantages of cheap
> get()'s and usually-uncontended modifications. Additionally,
> if you trigger LRU-based removal when hitting a size threshold,
> then you end up calling size() a lot, which is also not cheap.
> All in all, it is not clear when or by how much a concurrent hash
> map approach will have better performance for strict LRU caching
> over a simple synchronized LinkedHashMap.
The main reason is to support eviction based on either FIFO or LRU.
I also weaken the guarantees of size() so that an approximate size
is considered "good enough" (summing segment counts without locking
the segments), but I understand that this may not fit a more
general-purpose use of such a Map.
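A minimal sketch of that approximate-size idea (hypothetical names; the real map keeps a counter per segment and size() sums them without acquiring any segment locks, so concurrent writers can make the result slightly stale):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: each segment maintains its own counter, and
// approximateSize() sums them without taking segment locks. Under
// concurrent updates the sum may be stale, which is acceptable when
// it is only used to trigger eviction thresholds.
class SegmentedSizeSketch {
    private final AtomicInteger[] segmentCounts;

    SegmentedSizeSketch(int segments) {
        segmentCounts = new AtomicInteger[segments];
        for (int i = 0; i < segments; i++) {
            segmentCounts[i] = new AtomicInteger();
        }
    }

    void recordPut(int segment)    { segmentCounts[segment].incrementAndGet(); }
    void recordRemove(int segment) { segmentCounts[segment].decrementAndGet(); }

    // Approximate size: sum per-segment counts without locking.
    int approximateSize() {
        int sum = 0;
        for (AtomicInteger c : segmentCounts) {
            sum += c.get();
        }
        return sum;
    }
}
```

The trade-off is exactly the one described above: the sum is not a linearizable snapshot, but it avoids locking every segment the way an exact size() would.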
> There are a few ways to get LRU-like replacement that
> don't impose bottlenecks. Hardware L1 etc caches, facing
> the same issues, tend to use various "pseudo-LRU" (google it)
> techniques that don't have very good software analogs.
> Among the leading candidates is a random sampling approach
> (similar to one in the thesis by Konstantinos Psounis
> http://www-rcf.usc.edu/~kpsounis/thesis.html) triggered
> by local per-segment threshold overflows. Hopefully
> I'll get something using it ready for preliminary release soon.
Random sampling sounds interesting - something I will look into.
Efficiency is more important than precision in this case.
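A sampling-based eviction along those lines might look roughly like the following (a single-threaded illustrative sketch, not the approach Doug describes verbatim: instead of a global access-ordered list, pick a few random entries and evict the least recently used among the sample; all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical pseudo-LRU cache: on overflow, sample SAMPLE_SIZE random
// keys and evict the one with the oldest access stamp. Trades eviction
// precision for avoiding a contended access-ordered list.
class SampledLruCache<K, V> {
    private static final int SAMPLE_SIZE = 5;
    private final int capacity;
    private final Map<K, V> data = new HashMap<>();
    private final Map<K, Long> lastAccess = new HashMap<>();
    private final List<K> keys = new ArrayList<>(); // enables O(1) random pick
    private final Random random = new Random();
    private long clock = 0; // logical access clock

    SampledLruCache(int capacity) { this.capacity = capacity; }

    V get(K key) {
        V v = data.get(key);
        if (v != null) {
            lastAccess.put(key, ++clock);
        }
        return v;
    }

    void put(K key, V value) {
        if (!data.containsKey(key) && data.size() >= capacity) {
            evictOne();
        }
        if (data.put(key, value) == null) {
            keys.add(key);
        }
        lastAccess.put(key, ++clock);
    }

    // Sample up to SAMPLE_SIZE random keys; evict the least recently used.
    private void evictOne() {
        K victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < SAMPLE_SIZE && !keys.isEmpty(); i++) {
            K candidate = keys.get(random.nextInt(keys.size()));
            long stamp = lastAccess.getOrDefault(candidate, 0L);
            if (stamp < oldest) {
                oldest = stamp;
                victim = candidate;
            }
        }
        if (victim != null) {
            data.remove(victim);
            lastAccess.remove(victim);
            keys.remove(victim); // O(n) removal; a real impl would swap-remove
        }
    }

    int size() { return data.size(); }
}
```

Sampling a handful of entries approximates LRU surprisingly well in practice, and because there is no shared ordered list, it fits a per-segment trigger without creating a global bottleneck.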