[concurrency-interest] "Intelligent" queue for thread pool application.

Nathan Reynolds nathan.reynolds at oracle.com
Wed Jul 25 13:25:33 EDT 2012


I don't mean to push a product, but I offer this as an idea: application 
servers may already have the functionality you are looking for.

Oracle's WebLogic Application Server has a pool of threads that work on 
jobs.  The size of the pool is automatically adjusted to achieve maximum 
throughput.  In the server, there are several WorkManagers.  Each 
WorkManager is a queue with minimum and maximum thread constraints.  
The maximum constraint ensures that at most that number of threads 
will process jobs concurrently.  The minimum constraint ensures that at 
least that number of threads will process jobs concurrently (if there 
are jobs in the queue).  The minimum constraint can force the size of 
the thread pool to be increased just to satisfy it.
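
WorkManager itself is WebLogic-specific, but if you only want to play with 
the maximum-constraint half of the idea, it is easy to approximate on plain 
java.util.concurrent.  Here is a minimal sketch (class and method names are 
mine, not WebLogic's): each "work manager" keeps its own queue plus a 
Semaphore, and at most that many of its jobs are ever handed to the shared 
pool at once.

import java.util.Queue;
import java.util.concurrent.*;

// Sketch only, not the WebLogic API: a queue whose jobs run on a shared
// pool, with at most 'slots' of them executing concurrently.
class ConstrainedWorkManager {
    private final ExecutorService sharedPool;
    private final Semaphore slots;                    // the "maximum constraint"
    private final Queue<Runnable> pending = new ConcurrentLinkedQueue<Runnable>();

    ConstrainedWorkManager(ExecutorService sharedPool, Semaphore slots) {
        this.sharedPool = sharedPool;
        this.slots = slots;
    }

    void schedule(Runnable job) {
        pending.add(job);
        drain();
    }

    private void drain() {
        while (true) {
            if (!slots.tryAcquire()) {
                return;                               // constraint reached; job stays queued
            }
            final Runnable next = pending.poll();
            if (next == null) {
                slots.release();
                if (pending.isEmpty()) {
                    return;                           // re-check avoids losing a racing submit
                }
                continue;
            }
            sharedPool.execute(new Runnable() {
                public void run() {
                    try {
                        next.run();
                    } finally {
                        slots.release();
                        drain();                      // pull the next waiting job, if any
                    }
                }
            });
        }
    }
}

The minimum constraint is the harder half: WebLogic can grow the pool to 
honor it, which you cannot easily bolt onto a fixed ExecutorService from 
the outside.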

The minimum and maximum constraints can be shared across a subset of the 
WorkManagers.  For example, you could have 3 WorkManagers to handle 3 
types of jobs: pull emails from POP3, convert emails, and push emails to 
the same POP3 server.  The WorkManagers which deal with POP3 could share 
a maximum constraint so that only 1 thread at a time will pull or push an 
email.  When an email is pulled, a job is created and put into the 
conversion WorkManager.  When the conversion is completed, a job is 
created and put into the push WorkManager.  Now, you will only ever have 
1 thread working on POP3 but several threads doing the conversion.
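
Continuing the sketch from above (Mail, fetchNextMail, convertMail and 
pushMail are hypothetical placeholders for whatever the real transfer code 
does): sharing one Semaphore between the pull and push managers gives you 
exactly that "only 1 thread touches POP3" behavior, while conversion gets 
its own, larger limit.

ExecutorService pool = Executors.newCachedThreadPool();

// One shared maximum constraint for everything that touches the POP3 server.
Semaphore pop3Constraint = new Semaphore(1);
final ConstrainedWorkManager pop3Pull = new ConstrainedWorkManager(pool, pop3Constraint);
final ConstrainedWorkManager pop3Push = new ConstrainedWorkManager(pool, pop3Constraint);
final ConstrainedWorkManager convert  = new ConstrainedWorkManager(pool, new Semaphore(8));

pop3Pull.schedule(new Runnable() {
    public void run() {
        final Mail mail = fetchNextMail();                // hypothetical helper
        convert.schedule(new Runnable() {
            public void run() {
                final Mail converted = convertMail(mail); // hypothetical helper
                pop3Push.schedule(new Runnable() {
                    public void run() {
                        pushMail(converted);              // hypothetical helper
                    }
                });
            }
        });
    }
});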

You can then extend this out to all of the different types of locations 
(FTP, SMS, fax, local folder).  FTP might have a maximum constraint of 
10 because the server crashes with more connections.  SMS might not have 
a maximum constraint since SMS throughput flat-lines once maximum 
throughput is reached.  Fax might have a maximum constraint of 1 since 
it can only send or receive 1 fax at a time.  A local folder might have a 
maximum constraint of 1 to optimize for HDD throughput, or 16 to optimize 
for SSD throughput.
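
In terms of the same sketch, each location type simply gets its own 
constraint (the numbers below just mirror the examples above and are 
illustrative):

ConstrainedWorkManager ftp    = new ConstrainedWorkManager(pool, new Semaphore(10)); // server crashes with more connections
ConstrainedWorkManager fax    = new ConstrainedWorkManager(pool, new Semaphore(1));  // one fax at a time
ConstrainedWorkManager folder = new ConstrainedWorkManager(pool, new Semaphore(1));  // or 16 on an SSD
ConstrainedWorkManager sms    = new ConstrainedWorkManager(pool, new Semaphore(Integer.MAX_VALUE)); // effectively unconstrained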

Nathan Reynolds 
<http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> | 
Consulting Member of Technical Staff | 602.333.9091
Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology
On 7/25/2012 8:38 AM, Gregg Wonderly wrote:
> I think that what you need is a Queue provider SPI that you provide 
> implementations for, one for each pool type/instance.  That SPI would 
> return the Queue which is appropriate for submitting work to that 
> destination pool.
>
> Each pool service provider would then manage how many threads are 
> running concurrently at any given time.
>
> The end result is that you have a tree-like structure of queueing. 
>  The work is submitted at the top level, in FIFO order as it enters 
> your application.  The application then says "oh, this is an XYZ 
> task, let's ask that service provider for the queue to submit the work 
> to".  Then you submit the work there, and it is thereby enqueued 
> into the correct number of threads.
>
> From the other direction, the service providers could ask a global 
> ThreadPool for a new thread, so that you could manage the total amount 
> of work going on.  That ThreadPool might be a ForkJoin pool that would 
> allow work stealing and other activities to keep all the threads busy.
>
> For network I/O, if you use the NIO infrastructure, you can greatly 
> parallelize activities there, of course.
>
> Gregg Wonderly
>
> On Jul 25, 2012, at 10:25 AM, Johannes Frank wrote:
>
>> Hello!
>>
>> This is my first post to this list after reading it for some time 
>> (though most of the topics discussed here are some way from the use 
>> cases I have to handle at the moment).
>>
>> I have the following scenario:
>>
>> We have an application that transfers abstract data from some sort of 
>> source location (which can be anything from FTP, mail, SMS, fax, a 
>> local folder and so on) to some other sort of target location (from 
>> the same pool), with an additional conversion in between.
>> Our thread model right now is rather crude: for each periodic transfer 
>> job we configure, we have one thread that does it all.
>>
>> I would like to alter that approach so that we have a dispatching 
>> thread that monitors all the jobs and automatically creates tasks, 
>> each containing the transfer command for ONE item from source to 
>> target (possibly via an optional converter).  This would allow the 
>> transfers to run in parallel (where applicable) and enable a 
>> much bigger throughput, especially when the system is managing a lot 
>> of small items and the management logic (which is parallelizable) 
>> takes up a considerable share of the runtime for transmitting a 
>> single item compared to the pure data-pumping work.
>>
>>
>> So far so good.
>> My problem now is that I need to be able to limit the number of 
>> concurrent transmissions for every location so that, for example, POP3 
>> mailboxes only have one concurrent connection (because during one 
>> connection the mailbox is locked and all other attempts will fail 
>> until the first connection is shut down), leaving all other 
>> outstanding tasks for this specific source on hold until the one task 
>> already running has completed.
>>
>> So basically what I need is this:
>>
>> A thread pool which has an intelligent concept of which runnable to 
>> dispatch to the pool next, and for which runnables some prerequisites 
>> need to be fulfilled before they can be dispatched.
>> I want to be able to define that "Runnables that fulfill this and 
>> that requirement may only have n running instances in the thread pool 
>> at any given time" without having to implement the actual queueing 
>> myself.
>>
>> Is there already an implementation of such behavior, or are there at 
>> least existing concepts I could read up on that would help me complete 
>> this task?
>>
>> Kind Regards,
>> Johannes Frank
>>
>> ------------------------------------------------------------------------
>> Johannes Frank - KISTERS AG - Charlottenburger Allee 5 - 52068 Aachen 
>> - Germany
>> Handelsregister Aachen, HRB-Nr. 7838 | Vorstand: Klaus Kisters, Hanns 
>> Kisters | Aufsichtsratsvorsitzender: Dr. Thomas Klevers
>> Phone: +49 241 9671 -0 | Fax: +49 241 9671 -555 | E-Mail: 
>> Johannes.Frank at kisters.de | WWW:
>> ------------------------------------------------------------------------
