[concurrency-interest] Reference Collections

Peter Firmstone peter.firmstone at zeus.net.au
Wed Jan 4 15:35:33 EST 2012


Interesting... So it sounds like each collection needs its own private
GC thread waiting on its ReferenceQueue.

The ReferenceQueue then has a single consumer thread.  Other threads
accessing the collection no longer need to worry about cleaning up dead
references, although they will of course receive the occasional null
value when they access a reference that has been garbage collected but
not yet cleaned.  This is good in the case of putIfAbsent, where a null
return indicates absence.
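
Something along these lines, perhaps (a rough sketch only, of a
weak-value variant; WeakValueMap and ValueRef are names of my own
choosing, not the actual implementation):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class WeakValueMap<K, V> {

    // Weak reference to the value that remembers its key, so the cleaner
    // thread knows which entry to remove once the value is collected.
    private static final class ValueRef<K, V> extends WeakReference<V> {
        final K key;
        ValueRef(K key, V value, ReferenceQueue<V> q) {
            super(value, q);
            this.key = key;
        }
    }

    private final ConcurrentMap<K, ValueRef<K, V>> map = new ConcurrentHashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();

    WeakValueMap() {
        // The single consumer: blocks on the queue and removes dead entries,
        // so mutator threads never have to do the cleanup themselves.
        Thread cleaner = new Thread(() -> {
            try {
                while (true) {
                    @SuppressWarnings("unchecked")
                    ValueRef<K, V> dead = (ValueRef<K, V>) queue.remove(); // blocks
                    map.remove(dead.key, dead); // only if still mapped to this ref
                }
            } catch (InterruptedException e) {
                // cleaner shut down
            }
        }, "reference-map-cleaner");
        cleaner.setDaemon(true);
        cleaner.start();
    }

    public V put(K key, V value) {
        ValueRef<K, V> old = map.put(key, new ValueRef<>(key, value, queue));
        return old == null ? null : old.get();
    }

    public V get(K key) {
        ValueRef<K, V> ref = map.get(key);
        return ref == null ? null : ref.get(); // may be null: collected but not yet cleaned
    }
}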

Garbage collection then becomes just one more thread accessing the
underlying collection, in this case to perform removals, so we should
see performance similar to that of the native underlying collection,
enabling it to take advantage of your future work.

In the implementation, actual References are created only for
insertions; for reads, a Referrer is created and discarded (never
shared) which has the same identity as the Reference.  References and
Referrers are invisible to the client caller, which just sees the
collection.
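
Roughly like this (again only a sketch; RefKey and TempKey are my
illustrative names for the stored Reference and the throw-away Referrer,
assuming "same identity" means equals/hashCode delegating to the
referent):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

// Common view over the stored Reference and the temporary Referrer.
interface Keyed {
    Object referent();
}

// Stored in the map; registered with the ReferenceQueue on insertion.
final class RefKey extends WeakReference<Object> implements Keyed {
    private final int hash; // cached, since the referent may be collected
    RefKey(Object key, ReferenceQueue<Object> q) {
        super(key, q);
        this.hash = key.hashCode();
    }
    public Object referent() { return get(); }
    @Override public int hashCode() { return hash; }
    @Override public boolean equals(Object o) {
        if (o == this) return true;
        if (!(o instanceof Keyed)) return false;
        Object mine = get(), theirs = ((Keyed) o).referent();
        return mine != null && mine.equals(theirs);
    }
}

// Created and discarded per read; never stored, never registered.
final class TempKey implements Keyed {
    private final Object key;
    TempKey(Object key) { this.key = key; }
    public Object referent() { return key; }
    @Override public int hashCode() { return key.hashCode(); }
    @Override public boolean equals(Object o) {
        return o == this
            || (o instanceof Keyed && key.equals(((Keyed) o).referent()));
    }
}

A map keyed by RefKey can then be queried with map.get(new TempKey(k))
without creating or registering any Reference on the read path.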

But threads themselves consume memory, and it's hard to foresee all the
use cases, so it sounds like this needs to be a construction parameter,
leaving the choice up to the user-developer: if you want to scale, the
Reference Collection creates a garbage-cleaning thread; if not, you save
the memory and live with CAS.
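
The constructor option might look something like this (hypothetical
naming, just to show the shape of the choice):

import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

enum CleanupPolicy { BACKGROUND_THREAD, INLINE }

abstract class AbstractRefCollection<T> {
    protected final ReferenceQueue<T> queue = new ReferenceQueue<>();

    protected AbstractRefCollection(CleanupPolicy policy) {
        if (policy == CleanupPolicy.BACKGROUND_THREAD) {
            // Scales better, but costs one thread per collection.
            // (Real code would want more care than starting a thread here.)
            Thread t = new Thread(this::drainForever, "ref-collection-cleaner");
            t.setDaemon(true);
            t.start();
        }
        // With INLINE, callers invoke expungeStaleEntries() on access instead.
    }

    private void drainForever() {
        try {
            while (true) {
                removeEntryFor(queue.remove()); // blocks until something is enqueued
            }
        } catch (InterruptedException ignored) {
        }
    }

    protected void expungeStaleEntries() {
        Reference<? extends T> r;
        while ((r = queue.poll()) != null) { // non-blocking, amortised over accesses
            removeEntryFor(r);
        }
    }

    protected abstract void removeEntryFor(Reference<? extends T> ref);
}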

Thank you all very much for the comments.

Regards,

Peter.



> 
> Message: 2
> Date: Wed, 04 Jan 2012 14:24:06 -0500
> From: Doug Lea <dl at cs.oswego.edu>
> To: concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] Reference Collections
> Message-ID: <4F04A756.8050806 at cs.oswego.edu>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> 
> On 01/04/12 13:32, Vitaly Davidovich wrote:
> > Volatile read will be cheaper but if other cores are constantly writing to the
> > shared memory that's being read then throughput will degrade - basically even
> > normal stores/loads of shared locations won't scale under heavy writing.  My
> > only point was that CAS doesn't have to be 10s of cycles as you said (and it's
> > getting cheaper with each new generation, except sandy bridge seems to have
> > gotten worse than nehalem)
> 
> This is my experience as well. A CAS that almost never fails is sometimes
> even cheaper than a volatile write on i7-Nehalem. It is more
> expensive on SandyBridge/NehalemEX and recent AMDs but still not worth
> spending more than a couple of cycles trying to avoid. Nathan's advice to read
> rather than CAS when possible is a good example of when it is worth
> avoiding, but beyond that there are diminishing returns. Of course,
> avoiding unnecessary writes of any form is always a good idea.
> 
> These days, memory contention (mainly false-sharing-style cache
> contention, plus NUMA effects) is a far more serious performance
> issue than CAS contention per se, especially on multisocketed machines.
> 
> I've been working on a set of improvements
> to a bunch of j.u.c classes that address this. Stay tuned.
> Currently, the only one committed to our CVS is a preliminary
> version of overhauled Exchanger. (Exchangers are not commonly
> used, but they provide an ideal setting for evaluating new
> performance enhancement algorithms, since they are subject to
> extreme contention, lock-free data-transfer, and blocking;
> all of which are found in more commonly used classes).
> 
> -Doug
> 
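
(As an aside, a minimal illustration of the "read rather than CAS when
possible" advice quoted above, using a high-water-mark counter of my own
invention rather than anything from this thread: the common case does a
plain volatile read and never writes to the shared cache line.)

import java.util.concurrent.atomic.AtomicLong;

final class HighWaterMark {
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    void record(long sample) {
        long current = max.get();      // cheap volatile read first
        while (sample > current) {     // CAS only when an update is actually needed
            if (max.compareAndSet(current, sample)) {
                return;                // we won the race
            }
            current = max.get();       // lost the race; re-read and retry
        }
    }

    long current() {
        return max.get();
    }
}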



