[concurrency-interest] Bounded TransferQueue
gustav.trede at gmail.com
Tue Aug 24 04:35:02 EDT 2010
On 24 August 2010 09:54, Viktor Klang <viktor.klang at gmail.com> wrote:
> On Tue, Aug 24, 2010 at 9:48 AM, David Holmes <davidcholmes at aapt.net.au>wrote:
>> Not sure I understand your "problem" here. You write a
>> BoundedTransferQueue that has-a LinkedTransferQueue and a Semaphore.
> Yes, that's my fall-back option, but if I don't need to reinvent the wheel
> I'd rather not.
>> David Holmes
>> -----Original Message-----
>> *From:* concurrency-interest-bounces at cs.oswego.edu [mailto:
>> concurrency-interest-bounces at cs.oswego.edu]*On Behalf Of *Viktor Klang
>> *Sent:* Tuesday, 24 August 2010 5:39 PM
>> *To:* Joe Bowbeer
>> *Cc:* concurrency-interest at cs.oswego.edu
>> *Subject:* Re: [concurrency-interest] Bounded TransferQueue
>> Thanks Joe,
>> The problem with that would be that I'd have to wrap the queue, make
>> sure I have the Semaphore in place everywhere it matters, and then
>> implement remainingCapacity etc. I'd rather not do that if it's possible
>> to avoid...
>> If someone has an impl under a permissive license (ApacheV2, MIT, BSD or
>> such) I'd be very thankful for it.
>> On Mon, Aug 23, 2010 at 10:56 PM, Joe Bowbeer <joe.bowbeer at gmail.com>wrote:
>>> In a previous discussion, Doug Lea wrote:
>>> "For the bounded case, it's hard to do any better than use a Semaphore in
>>> front of a LinkedTransferQueue."
>>> I don't know if the thinking has changed any since then.
>>> On Mon, Aug 23, 2010 at 1:19 PM, Viktor Klang wrote:
>>> Hi folks,
>>>> It's late here and I've rummaged through the Internet in its entirety,
>>>> so I'll get straight to the point:
>>>> I have a dire need for a bounded TransferQueue, but the
>>>> LinkedTransferQueue documentation states that it's unbounded.
>>>> does anyone have any suggestions where I can find a (highly performant)
>>>> bounded TransferQueue?
>>>> Best regards,
There is a QueueLimitedThreadPool.java in the Sun I/O framework Grizzly
(used by the GlassFish app server) that uses an LTQ with an Atomic counter to
limit the queue. Unless it has changed since I implemented it, that path is
used when you don't set -1 as the queue limit while using equal min/max pool
sizes in GlassFish. If you set pool min and max to the same value and -1 as
the queue limit, a pure LTQ is used.
I can't find a working link to it right now.
At high contention the Atomic counter ends up looping in its internal CAS
methods, so throughput can degrade to something similar to a synchronized design.
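A minimal sketch of that Atomic-counter style (hypothetical names, not the actual Grizzly code) shows where the CAS retry loop lives:

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: an AtomicInteger guards the capacity of an
// otherwise unbounded LinkedTransferQueue.
class AtomicBoundedQueue<E> {
    private final LinkedTransferQueue<E> queue = new LinkedTransferQueue<>();
    private final AtomicInteger size = new AtomicInteger();
    private final int capacity;

    AtomicBoundedQueue(int capacity) { this.capacity = capacity; }

    // Non-blocking offer: reserve a slot with a CAS loop before enqueueing.
    // Under high contention this loop is where throughput degrades.
    boolean offer(E e) {
        for (;;) {
            int current = size.get();
            if (current >= capacity) return false;        // full
            if (size.compareAndSet(current, current + 1)) {
                queue.offer(e);
                return true;
            }
            // CAS lost: another thread changed the count; retry.
        }
    }

    E poll() {
        E e = queue.poll();
        if (e != null) size.decrementAndGet();            // release the slot
        return e;
    }
}
```

Every producer thread hits the same counter, which is why the contention concentrates there rather than in the queue itself.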
It's a lot better if you can enforce the limit elsewhere. Say you have many
connections that can produce work: if you limit the outstanding work per
connection, you indirectly control the overall limit and don't need to kill
concurrent scalability in the global queue.
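One way to sketch that per-connection idea (names are mine, purely illustrative): give each connection its own Semaphore so the shared LinkedTransferQueue never needs a global capacity counter.

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.Semaphore;

// Hypothetical sketch: bound outstanding work per connection instead of
// bounding the shared queue. The global queue stays uncontended.
class Connection {
    private final Semaphore outstanding;
    private final LinkedTransferQueue<Runnable> globalQueue;

    Connection(int maxOutstanding, LinkedTransferQueue<Runnable> globalQueue) {
        this.outstanding = new Semaphore(maxOutstanding);
        this.globalQueue = globalQueue;
    }

    // Submit work only if this connection is under its own limit.
    boolean trySubmit(Runnable work) {
        if (!outstanding.tryAcquire()) return false;   // connection at its limit
        globalQueue.offer(() -> {
            try {
                work.run();
            } finally {
                outstanding.release();                 // free a slot when done
            }
        });
        return true;
    }
}
```

If every connection is capped at N outstanding tasks, the global queue can never hold more than N times the number of connections.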
Avoiding reinventing the wheel is a good principle, but it should never be an
excuse to avoid learning new things out of fear.
You can always ask for feedback here to help QA the implementation if you
decide to write it.
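For reference, Doug Lea's suggestion quoted earlier in the thread, a Semaphore in front of a LinkedTransferQueue, could be sketched roughly like this (only the capacity-relevant methods; a full implementation would need the whole BlockingQueue surface):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.Semaphore;

// Sketch: a Semaphore bounds an otherwise unbounded LinkedTransferQueue.
class SemaphoreBoundedQueue<E> {
    private final LinkedTransferQueue<E> queue = new LinkedTransferQueue<>();
    private final Semaphore permits;

    SemaphoreBoundedQueue(int capacity) {
        this.permits = new Semaphore(capacity);
    }

    // Blocks while the queue is at capacity.
    void put(E e) throws InterruptedException {
        permits.acquire();                             // wait for a free slot
        queue.put(e);
    }

    boolean offer(E e) {
        if (!permits.tryAcquire()) return false;       // full
        queue.offer(e);
        return true;
    }

    E poll() {
        E e = queue.poll();
        if (e != null) permits.release();              // slot freed
        return e;
    }

    int remainingCapacity() {
        return permits.availablePermits();
    }
}
```

This gives you remainingCapacity almost for free via availablePermits, which addresses one of the wrapping concerns raised above.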