[concurrency-interest] Scaling ConcurrentNavigableMap
nathan.reynolds at oracle.com
Fri Jul 13 19:58:37 EDT 2012
Unless I missed something in ConcurrentSkipListMap's code, there is no
synchronized block or LockSupport.park() call, so there is no way for a
thread to block inside the map. If so, the bottleneck isn't in the data
structure but in the code outside of it.
* Check the GC logs
* Try profiling with elapsed-time measurement
* Try taking stack traces (the poor man's profiler) and see where the
threads are spending their time
* Consider NUMA binding (e.g., taskset or numactl) to keep threads from
migrating across cores
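A minimal sketch of the stack-trace sampling idea ("poor man's profiler"),
using only Thread.getAllStackTraces(); the sample count and interval are
arbitrary, and the class name is illustrative:

```java
import java.util.Map;

public class PoorMansProfiler {
    // Periodically dump every live thread's stack. Frames that show up
    // in most samples are where the wall-clock time is going.
    public static void main(String[] args) throws InterruptedException {
        for (int sample = 0; sample < 5; sample++) {
            Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
            for (Map.Entry<Thread, StackTraceElement[]> e : stacks.entrySet()) {
                Thread t = e.getKey();
                System.out.println(t.getName() + " (" + t.getState() + ")");
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
            Thread.sleep(100); // sampling interval
        }
    }
}
```

Running this against the benchmark (or just taking a few jstack dumps)
should show whether threads are stuck in the map, in allocation, or
elsewhere.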
Consulting Member of Technical Staff | 602.333.9091
Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology
On 7/13/2012 4:16 PM, Sergi Vladykin wrote:
> I'm working on a component based on ConcurrentNavigableMap, and it has
> a strong requirement to scale well on updates in a multicore
> environment. But I'm running into problems: neither
> ConcurrentSkipListMap nor SnapTreeMap seems to scale well enough. In a
> simple test where I put pregenerated random UUIDs from multiple
> threads, I can't reach full CPU utilization even on 8 cores. As far as
> I understand, there are two main scalability problems here: the first
> is thread contention and the second is memory management/GC (although
> a full GC never happens in my test).
> My first attempt to improve throughput was to reduce contention by
> splitting the map into multiple segments and choosing which one to
> update by the key's hashCode, assuming that hash codes are uniformly
> distributed, but it didn't help much. ConcurrentSkipListMap is
> lock-free, so I expected full CPU utilization when the number of
> threads is greater than or equal to the number of cores, but that is
> not the case either. So it seems to me that memory management is the
> major bottleneck here. Am I right? Maybe someone can point me in other
> directions toward a solution?
> I should point out right away that adding the data to an unsorted map
> and sorting on demand would not be a good idea, since there can be
> millions of entries and the sorting would take too long. I'm also
> thinking about some special array-based data structure that would
> generate no additional garbage, but it would probably have problems
> with insertion into the middle.
> Any thoughts and suggestions are very appreciated.
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
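For reference, the segmenting scheme described in the quoted post (hash
the key to pick one of several ConcurrentSkipListMap stripes) looks
roughly like the sketch below; the class name, stripe count, and hash
spreading are all illustrative, not taken from the original code:

```java
import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

// Illustrative sketch of the striping approach from the original post:
// spread updates over several skip lists to cut contention. The
// trade-off is that any operation needing global order (e.g. a full
// sorted iteration) must merge the segments.
public class StripedSkipListMap<K extends Comparable<K>, V> {
    private final ConcurrentNavigableMap<K, V>[] segments;

    @SuppressWarnings("unchecked")
    public StripedSkipListMap(int concurrency) {
        segments = new ConcurrentNavigableMap[concurrency];
        for (int i = 0; i < concurrency; i++) {
            segments[i] = new ConcurrentSkipListMap<K, V>();
        }
    }

    private ConcurrentNavigableMap<K, V> segmentFor(K key) {
        // Spread the hash bits, then index; assumes reasonably uniform
        // hashCode, as the original post does.
        int h = key.hashCode();
        h ^= (h >>> 16);
        return segments[(h & 0x7fffffff) % segments.length];
    }

    public V put(K key, V value) { return segmentFor(key).put(key, value); }

    public V get(K key) { return segmentFor(key).get(key); }
}
```

As the poster observed, striping like this only helps if the contention
is on the map's CAS retries; if the real bottleneck is allocation and GC
pressure from the inserted nodes, more stripes won't change much.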