[concurrency-interest] Consistency guarantees for ConcurrentHashMap

Doug Lea dl at cs.oswego.edu
Thu Jun 11 14:25:49 EDT 2009


Jorge Ortiz wrote:
> Jim, Erkin, James,
> 
> I haven't written any code to test these behaviors. This is all purely a 
> thought experiment at this point. Perhaps I should provide more details.
> 
> For the first case, given the implementation of clear():
> 
>     public void clear() {
>         for (int i = 0; i < segments.length; ++i)
>             segments[i].clear();
>     }
> 
> It's evident that this behavior is at best atomic per segment. There 
> seem to be no atomicity guarantees between segments. (I understand this 
> is by design.) 

Yes, by design. If you require a global total write ordering
then you give up concurrency on writes, in which case
you might as well use Collections.synchronizedMap(new HashMap()).
You could alternatively use ReadWriteLocks, but they carry enough
overhead to make their use with a HashMap questionable.
And perhaps someday there may be an efficient optimistic
linearizable hash algorithm for bulk operations based on
transactional memory.
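
To make the tradeoff concrete, here is a rough sketch (the class
name is illustrative, not from the JDK): the synchronized wrapper
makes clear() atomic with respect to every other operation, because
all access funnels through a single lock, while ConcurrentHashMap's
clear() locks only one segment at a time:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ClearTradeoff {
        public static void main(String[] args) {
            // All operations share the single wrapper lock, so
            // clear() is atomic -- but writers never run concurrently.
            Map<String, Integer> sync =
                Collections.synchronizedMap(new HashMap<String, Integer>());

            // Writers in different segments run concurrently, so
            // clear() is atomic only per segment.
            Map<String, Integer> conc =
                new ConcurrentHashMap<String, Integer>();

            sync.put("a", 1);
            conc.put("a", 1);

            sync.clear(); // no thread can observe a partially cleared map
            conc.clear(); // a concurrent reader may see some segments
                          // cleared and others still populated
        }
    }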

> 
> I could write map, filter, etc based on weakly consistent iterators, but 
> weak consistency seems to be not much of a guarantee at all. 

We leave the tradeoff of consistency strength versus scalability
as a user decision, so we offer both synchronized and concurrent versions
of most collections, as discussed in the j.u.c package docs:
http://java.sun.com/javase/6/docs/api/java/util/concurrent/package-summary.html
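
For example, a weakly consistent iterator tolerates concurrent
updates without ever throwing ConcurrentModificationException;
whether a given update becomes visible to the iterator is
unspecified. A rough sketch (illustrative only; the output may
differ from run to run, which is exactly the point):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class WeaklyConsistentDemo {
        public static void main(String[] args) {
            Map<Integer, String> map =
                new ConcurrentHashMap<Integer, String>();
            for (int i = 0; i < 4; i++)
                map.put(i, "v" + i);

            // Modifying the map mid-iteration is safe: each element
            // is returned at most once, the iterator is guaranteed to
            // traverse elements as they existed when it was created,
            // and it may (but need not) reflect later modifications.
            // The entry added below may or may not appear in the output.
            for (Map.Entry<Integer, String> e : map.entrySet()) {
                map.put(100, "added-during-iteration");
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }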

> 
> In contrast, the Javadoc explanation for the entire ConcurrentHashMap
> class is inaccurate, in my opinion:
> 
> "Similarly, Iterators and Enumerations return elements reflecting the 
> state of the hash table at some point at or since the creation of the 
> iterator/enumeration."
> 

Thanks! That sentence should be changed to match the other
descriptions of weakly consistent iterators.

-Doug

