[concurrency-interest] Bounded TransferQueue

Viktor Klang viktor.klang at gmail.com
Fri Aug 27 08:48:56 EDT 2010


In case someone needs a bounded transfer queue (scala), here's one:

https://gist.github.com/2ee0b136fe17ff4414b7
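
It's the Semaphore-over-LinkedTransferQueue approach that David and Doug
suggest in the thread below; roughly, the shape of it is something like the
sketch here (the names are made up and only a handful of the BlockingQueue
methods are shown, the gist has the full thing):

import java.util.concurrent.{LinkedTransferQueue, Semaphore, TimeUnit}

// Sketch only: bound an unbounded LinkedTransferQueue with a Semaphore.
// A real implementation would forward the rest of the BlockingQueue /
// TransferQueue methods as well.
class BoundedTransferQueueSketch[T](capacity: Int) {
  private val guard = new Semaphore(capacity)
  private val queue = new LinkedTransferQueue[T]

  // Blocks until a slot (permit) is free, then enqueues.
  def put(t: T): Unit = {
    guard.acquire()
    queue.put(t)
  }

  // Fails fast if the queue is "full" (no permit available).
  def offer(t: T): Boolean =
    if (guard.tryAcquire()) { queue.offer(t); true } else false

  // Blocks until an element is available, then frees a slot.
  def take(): T = {
    val t = queue.take()
    guard.release()
    t
  }

  // Timed poll; only release a permit if something was actually removed.
  def poll(timeout: Long, unit: TimeUnit): Option[T] = {
    val t = queue.poll(timeout, unit)
    if (t != null) guard.release()
    Option(t)
  }

  def remainingCapacity: Int = guard.availablePermits
}

The important invariant is that a permit is acquired before an element can
go in and released only after an element has actually been taken out, so the
semaphore's permit count always tracks the remaining capacity.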

On Tue, Aug 24, 2010 at 11:33 AM, gustav <gustav.trede at gmail.com> wrote:

>
>
> On 24 August 2010 11:09, Viktor Klang <viktor.klang at gmail.com> wrote:
>
>>
>>
>> On Tue, Aug 24, 2010 at 10:57 AM, gustav <gustav.trede at gmail.com> wrote:
>>
>>>
>>>
>>> On 24 August 2010 10:43, Viktor Klang <viktor.klang at gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Aug 24, 2010 at 10:35 AM, gustav <gustav.trede at gmail.com>wrote:
>>>>
>>>>>
>>>>>
>>>>> On 24 August 2010 09:54, Viktor Klang <viktor.klang at gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 24, 2010 at 9:48 AM, David Holmes <
>>>>>> davidcholmes at aapt.net.au> wrote:
>>>>>>
>>>>>>>  Not sure I understand your "problem" here. You write a
>>>>>>> BoundedTransferQueue that has-a LinkedTransferQueue and a Semaphore.
>>>>>>>
>>>>>>
>>>>>> Yes, that's my fall-back option, but if I don't need to reinvent the
>>>>>> wheel I'd rather not.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> David Holmes
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> *From:* concurrency-interest-bounces at cs.oswego.edu [mailto:
>>>>>>> concurrency-interest-bounces at cs.oswego.edu]*On Behalf Of *Viktor
>>>>>>> Klang
>>>>>>> *Sent:* Tuesday, 24 August 2010 5:39 PM
>>>>>>> *To:* Joe Bowbeer
>>>>>>> *Cc:* concurrency-interest at cs.oswego.edu
>>>>>>> *Subject:* Re: [concurrency-interest] Bounded TransferQueue
>>>>>>>
>>>>>>> Thanks Joe,
>>>>>>>
>>>>>>> The problem with that is that I'd have to wrap the queue, make sure
>>>>>>> the Semaphore is in place everywhere it matters, and then implement
>>>>>>> remainingCapacity etc. I'd rather not do that if it's possible to
>>>>>>> avoid...
>>>>>>>
>>>>>>> If someone has an impl under a permissive license (ApacheV2, MIT, BSD
>>>>>>> or such) I'd be very thankful for it.
>>>>>>>
>>>>>>> Cheers,
>>>>>>>
>>>>>>> On Mon, Aug 23, 2010 at 10:56 PM, Joe Bowbeer <joe.bowbeer at gmail.com
>>>>>>> > wrote:
>>>>>>>
>>>>>>>> In a previous discussion, Doug Lea wrote:
>>>>>>>>
>>>>>>>> "For the bounded case, it's hard to do any better than use a
>>>>>>>> Semaphore in front of a LinkedTransferQueue."
>>>>>>>>
>>>>>>>>
>>>>>>>> http://cs.oswego.edu/pipermail/concurrency-interest/2007-May/004108.html
>>>>>>>>
>>>>>>>> I don't know if the thinking has changed any since then.
>>>>>>>>
>>>>>>>> Joe
>>>>>>>>
>>>>>>>> On Mon, Aug 23, 2010 at 1:19 PM, Viktor Klang wrote:
>>>>>>>>
>>>>>>>> Hi folks,
>>>>>>>>>
>>>>>>>>> It's late here and I've rummaged through the Internet in its
>>>>>>>>> entirety, so I'll get straight to the point:
>>>>>>>>>
>>>>>>>>> I have a dire need for a bounded TransferQueue, and the
>>>>>>>>> LinkedTransferQueue states that it's unbounded. Does anyone have
>>>>>>>>> any suggestions on where I can find a (highly performant) bounded
>>>>>>>>> TransferQueue?
>>>>>>>>>
>>>>>>>>> Best regards,
>>>>>>>>> --
>>>>>>>>>
>>>>>>>>
>>>>> There is a QueueLimitedThreadPool.java in Sun's I/O framework Grizzly
>>>>> (in the GlassFish app server) that uses an LTQ with an Atomic counter
>>>>> to limit the queue.
>>>>> Unless it's changed since I implemented it, it's used when you don't
>>>>> pass -1 as the queue limit while using equal min and max pool sizes in
>>>>> GlassFish. If you set pool min and max to the same value and -1 as the
>>>>> queue limit, a pure LTQ is used.
>>>>>
>>>>> I can't find a working link to it now.
>>>>>
>>>>> Under high contention the Atomic counter ends up looping in its
>>>>> internal methods, and throughput can become similar to a synchronized
>>>>> design.
>>>>> It's a lot better if it's possible to handle the limit elsewhere. Say
>>>>> you have many connections that can produce work: if you limit the
>>>>> outstanding work per connection, you indirectly control the overall
>>>>> limit and don't need to kill concurrent scalability in the global
>>>>> queue.
>>>>>
>>>>> Avoiding reinventing the wheel is a good concept, but it should never
>>>>> be an excuse not to learn new things out of fear.
>>>>> You can always ask for feedback here, to help you QA the
>>>>> implementation if you decide to do it.
>>>>>
>>>>
>>>> I agree, but I don't need it for some hobby project; I need it to fit
>>>> into a high-performance concurrency framework, so I'd rather not waste
>>>> a lot of time tuning a makeshift implementation.
>>>>
>>>>
>>> I implemented it myself and it's used in real projects, no problems,
>>> since it's indeed a very easy 10-minute job to do =).
>>> What's the QA on some random code from the net anyway, that you're
>>> asking for?
>>> You can't use some random external code and blame problems on it...
>>> It's your choice to use it, and hence you need the same understanding it
>>> takes to write it yourself; how else can you possibly know it will work?
>>> But I guess it feels better to be able to put the blame on somebody else
>>> regardless, and hope management doesn't see through it? =)
>>>
>>
>>
>> Haha, you've got a very good point, I'll write it myself.
>>
>> That's the spirit!
>
> For a simple wrapper it's enough to ensure the increments and decrements
> on the atomics are balanced around the offer and poll methods.
> Test harnesses for concurrent stuff are only good up to a limit; it's
> probably impossible to truly ensure you trigger all theoretical special
> cases in a concurrent environment on all platforms.
>
>
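
For comparison, the atomic-counter style gustav describes above (in the
spirit of Grizzly's QueueLimitedThreadPool) is roughly this shape; a sketch
of my own, not the Grizzly code, with the contention caveat he mentions:

import java.util.concurrent.LinkedTransferQueue
import java.util.concurrent.atomic.AtomicInteger

// Sketch only: bound an LTQ with an atomic counter. The increments and
// decrements must stay balanced around offer/poll. Under high contention
// the CAS loop on the counter can make throughput approach that of a
// synchronized design, which is the caveat mentioned above.
class CounterBoundedQueueSketch[T](limit: Int) {
  private val count = new AtomicInteger(0)
  private val queue = new LinkedTransferQueue[T]

  def offer(t: T): Boolean =
    if (count.incrementAndGet() > limit) { // optimistic increment
      count.decrementAndGet()              // roll back, we were over the limit
      false
    } else {
      queue.offer(t)                       // LTQ offer always succeeds
      true
    }

  def poll(): Option[T] = {
    val t = queue.poll()
    if (t != null) count.decrementAndGet() // only decrement on a real removal
    Option(t)
  }
}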


-- 
Viktor Klang,
Code Connoisseur
Work:   www.akkasource.com
Code:   github.com/viktorklang
Follow: twitter.com/viktorklang
Read:   klangism.tumblr.com

