[concurrency-interest] A concurrent, linked HashMap impl

Roman Elizarov elizarov at devexperts.com
Wed May 27 01:59:50 EDT 2009


Hello Doug!

DL> There are a few ways to get LRU-like replacement that
DL> don't impose bottlenecks. Hardware L1 etc caches, facing
DL> the same issues, tend to use various "pseudo-LRU" (google it)
DL> techniques that don't have very good software analogs.
DL> Among the leading candidates is a random sampling approach
DL> (similar to one in the thesis by Konstantinos Psounis
DL> http://www-rcf.usc.edu/~kpsounis/thesis.html) triggered
DL> by local per-segment threshold overflows. Hopefully
DL> I'll get something using it ready for preliminary release soon.

We usually employ a very simple scheme for heavily read caches: we put
a "last access timestamp" in each cached object. Updating this
timestamp does not have to call currentTimeMillis; it can instead read
the current time from a variable that a separate thread updates
periodically. When the size of the cache grows beyond a certain
threshold, we scan the cache and remove the half of its elements that
were least recently accessed. This operation combines nicely with
rehashing. Our caches are typically read-mostly, so we employ
ConcurrentHashMap-like data structures (albeit without any lock
striping, due to the low update frequency) that are wait-free for
reads and require synchronization for writes.
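
For concreteness, here is a minimal sketch of that scheme in Java. All
names are illustrative (this is not our actual code); the eviction
below uses a simple median cut rather than piggybacking on a rehash,
and with a coarse clock many entries may share a timestamp, so the
split can be uneven:

    import java.util.concurrent.ConcurrentHashMap;

    final class TimestampedCache<K, V> {

        // Coarse clock: a background daemon thread refreshes this
        // periodically, so readers never call currentTimeMillis.
        static volatile long now = System.currentTimeMillis();
        static {
            Thread ticker = new Thread(() -> {
                while (true) {
                    now = System.currentTimeMillis();
                    try { Thread.sleep(10); }
                    catch (InterruptedException e) { return; }
                }
            }, "cache-ticker");
            ticker.setDaemon(true);
            ticker.start();
        }

        static final class Entry<V> {
            final V value;
            volatile long lastAccess = now;
            Entry(V value) { this.value = value; }
        }

        private final ConcurrentHashMap<K, Entry<V>> map =
            new ConcurrentHashMap<>();
        private final int threshold;

        TimestampedCache(int threshold) { this.threshold = threshold; }

        V get(K key) {
            Entry<V> e = map.get(key);
            if (e == null) return null;
            e.lastAccess = now;   // cheap volatile write, no clock call
            return e.value;
        }

        void put(K key, V value) {
            map.put(key, new Entry<>(value));
            if (map.size() > threshold) evictOldHalf();
        }

        // Scan the cache and drop the least recently accessed half,
        // using the median access time as the cutoff.
        private synchronized void evictOldHalf() {
            if (map.size() <= threshold) return; // already evicted
            long[] times = map.values().stream()
                              .mapToLong(e -> e.lastAccess)
                              .sorted().toArray();
            long cutoff = times[times.length / 2];
            map.entrySet().removeIf(
                e -> e.getValue().lastAccess < cutoff);
        }
    }
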

Sincerely,
Roman Elizarov


