[concurrency-interest] ConcurrentHashMap footprint and contention improvements

Ben Manes ben_manes at yahoo.com
Wed Apr 13 14:24:30 EDT 2011


Another approach to consider for improving write throughput might be to guard the lock with a SynchronousQueue and attempt to transfer the operation to the thread currently holding the lock. If the transfer succeeds, the waiting thread would spin until the response materializes (e.g. in a per-operation volatile result field). This step would be inserted after the scanning phase has exhausted its retries, since at that point we can safely assume contention. Under high write loads this increases the penalty on the thread performing the operations in exchange for reduced contention on the lock, which should be a net benefit.
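
To make that concrete, here is a rough, self-contained sketch of the hand-off pattern (my own illustration, not tied to the CHM internals; the CombiningMap/Op names and the 1ms offer timeout are arbitrary): a writer that cannot acquire the lock offers its operation to the holder through a SynchronousQueue, the holder drains any transferred operations before unlocking, and the waiter spins on the per-operation volatile result field.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class CombiningMap<K, V> {
    /** A pending write whose result is published through a volatile field. */
    static final class Op<K, V> {
        static final Object PENDING = new Object();
        final K key;
        final V value;
        volatile Object result = PENDING;   // per-operation result slot
        Op(K key, V value) { this.key = key; this.value = value; }
    }

    private final Map<K, V> map = new HashMap<K, V>();
    private final ReentrantLock lock = new ReentrantLock();
    private final SynchronousQueue<Op<K, V>> handoff = new SynchronousQueue<Op<K, V>>();

    @SuppressWarnings("unchecked")
    public V put(K key, V value) {
        Op<K, V> op = new Op<K, V>(key, value);
        for (;;) {
            if (lock.tryLock()) {
                try {
                    V prior = map.put(key, value);
                    // Before unlocking, absorb operations transferred to us.
                    Op<K, V> other;
                    while ((other = handoff.poll()) != null) {
                        other.result = map.put(other.key, other.value);
                    }
                    return prior;
                } finally {
                    lock.unlock();
                }
            }
            // Contended: try to transfer the operation to the lock holder.
            try {
                if (handoff.offer(op, 1, TimeUnit.MILLISECONDS)) {
                    Object r;
                    while ((r = op.result) == Op.PENDING) {
                        // spin until the holder publishes the result
                    }
                    return (V) r;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // Transfer did not happen (holder may have released); retry the lock.
        }
    }
}

The holder pays for whatever puts it absorbs before releasing, which is exactly the penalty-for-reduced-contention trade described above.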



________________________________
From: Doug Lea <dl at cs.oswego.edu>
To: "Concurrency-interest at cs.oswego.edu" <Concurrency-interest at cs.oswego.edu>
Sent: Tuesday, April 12, 2011 5:07 PM
Subject: [concurrency-interest] ConcurrentHashMap footprint and contention improvements


For years, we've known that ConcurrentHashMaps have initial
footprints (over 1000 bytes using default constructor) that
are too big for casual use. And that the best way to address
this would be to use the Fences API to emulate "final field"
memory model guarantees in the presence of lazy initialization.
But we aren't releasing the Fences API. So I committed a version
that instead uses Unsafe calls to essentially the same effect
(reducing initial footprint to around 100 bytes, plus a few
percent savings for large populated tables). Also, this
version includes throughput improvements under contention
(mainly by interleaving locking with probes, to absorb cache misses),
which can double performance on big tables with many threads.
While conceptually straightforward, these changes touch many
lines of code.
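
To illustrate the "interleaving locking with probes" idea, here
is a simplified sketch of my own (assuming a per-segment lock and
a single linked bin; Segment, Node, and MAX_SCAN_RETRIES are
made-up names, not the committed code): while spinning for the
lock, the thread walks the bin so that the nodes it will need are
already warm in cache by the time the lock is acquired.

import java.util.concurrent.locks.ReentrantLock;

class Segment<K, V> extends ReentrantLock {
    private static final long serialVersionUID = 1L;

    static final class Node<K, V> {
        final K key;
        volatile V value;
        volatile Node<K, V> next;
        Node(K key, V value, Node<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    static final int MAX_SCAN_RETRIES = 64;

    volatile Node<K, V> head;   // a single bin, for illustration

    /**
     * Spin for the segment lock, probing the bin between attempts
     * so the traversal's cache misses are taken before the critical
     * section. Returns the node found for the key (or null); the
     * caller must re-check under the lock, since the list may have
     * changed.
     */
    Node<K, V> scanAndLock(Object key) {
        Node<K, V> e = head;
        int retries = -1;           // negative while still probing
        while (!tryLock()) {
            if (retries < 0) {
                if (e == null || key.equals(e.key)) {
                    retries = 0;    // probe finished; start counting spins
                } else {
                    e = e.next;     // advance one node per failed attempt
                }
            } else if (++retries > MAX_SCAN_RETRIES) {
                lock();             // stop spinning and block
                break;
            }
        }
        return e;
    }
}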

The main price paid for these improvements is a greater reliance
on "volatile" rather than "final" reads, which are essentially
equivalent in cost on most machines, but can be more costly on
ARM and POWER. Even on those, though, the net effect should be
positive.
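
For readers unfamiliar with the substitution, a minimal sketch
(again my own illustration, not the committed sources) of the
pattern: a lazily created table is published with an ordered
write through Unsafe and re-read with a volatile read, giving
final-field-like safe publication without eager construction; it
is that volatile read which replaces the final read.

import java.lang.reflect.Field;
import sun.misc.Unsafe;

class LazilyPublishedTable {
    private static final Unsafe U;
    private static final long TABLE_OFFSET;
    static {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            U = (Unsafe) f.get(null);
            TABLE_OFFSET = U.objectFieldOffset(
                LazilyPublishedTable.class.getDeclaredField("table"));
        } catch (Exception e) {
            throw new Error(e);
        }
    }

    // Lazily created; deliberately not 'final' and not eagerly allocated.
    private Object[] table;

    /** Returns the table, creating it on first use. */
    Object[] table() {
        // Volatile read: sees either null or a fully constructed array.
        Object[] t = (Object[]) U.getObjectVolatile(this, TABLE_OFFSET);
        if (t == null) {
            synchronized (this) {
                t = (Object[]) U.getObjectVolatile(this, TABLE_OFFSET);
                if (t == null) {
                    t = new Object[16];
                    // Ordered (store-store fenced) write publishes the
                    // array safely, emulating a final-field freeze.
                    U.putOrderedObject(this, TABLE_OFFSET, t);
                }
            }
        }
        return t;
    }
}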

It would be helpful if members of this list could help check
that this is so. The committed version is now
in java.util.concurrent sources (at
http://gee.cs.oswego.edu/dl/concurrency-interest/index.html)
and can be run by getting jsr166.jar and using
"java -Xbootclasspath/p:jsr166.jar" with any java7 build
or binary (http://dlc.sun.com.edgesuite.net/jdk7/binaries/index.html).
Also, as an alternative, I temporarily placed an unpackaged
source version (with the class renamed "CHM") at
http://gee.cs.oswego.edu/dl/wwwtmp/CHM.java
which you can compile and arrange to run on any java6/7 JVM.

While working on these changes, I also contemplated other,
more extensive redesigns, including Cliff Click's non-blocking
version (http://sourceforge.net/projects/high-scale-lib/),
which usually scales better when large numbers of threads
use only get and put, but is not otherwise uniformly a better
choice.

-Doug

_______________________________________________
Concurrency-interest mailing list
Concurrency-interest at cs.oswego.edu
http://cs.oswego.edu/mailman/listinfo/concurrency-interest