[concurrency-interest] "Intelligent" queue for thread pool application.

Gregg Wonderly gergg at cox.net
Wed Jul 25 11:38:10 EDT 2012

I think what you need is a Queue provider SPI, with an implementation for each pool type/instance.  That SPI would return the Queue appropriate for submitting work to that destination pool.

Each pool service provider would then manage how many threads are running concurrently at any given time.

The end result is a tree-like structure of queueing.  Work is submitted at the top level, in FIFO order, as it enters your application.  The application then says: ohh, this is an XYZ task, let's ask that service provider for the queue to submit the work to.  You submit the work there, and it is therefore enqueued for the correct number of threads.
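A minimal sketch of how such a provider SPI might look (the interface and names here are assumptions for illustration, not an existing API):

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Hypothetical SPI: each pool type/instance supplies the queue work goes to. */
interface QueueProvider {
    Queue<Runnable> queueFor(String taskType);
}

/** Minimal in-memory implementation: one FIFO queue per task type. */
class SimpleQueueProvider implements QueueProvider {
    private final Map<String, Queue<Runnable>> queues = new ConcurrentHashMap<>();

    @Override
    public Queue<Runnable> queueFor(String taskType) {
        // Lazily create the queue for a task type the first time it is seen.
        return queues.computeIfAbsent(taskType, t -> new ConcurrentLinkedQueue<>());
    }
}
```

The top-level dispatcher would call `queueFor(...)` to route each incoming task, and each provider would drain its own queue at its configured concurrency.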

From the other direction, the service providers could ask a global ThreadPool for a new thread, so that you can manage the total amount of work going on.  That ThreadPool might be a fork/join (FJ) pool that allows work stealing and other activities to keep all the threads busy.
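A rough sketch of how a provider could borrow threads from the shared pool while capping its own concurrency (class and method names are assumptions):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

/** Hypothetical provider gate: runs tasks on a shared global pool,
 *  but never more than maxConcurrent of its own at once. */
class ProviderGate {
    private final ExecutorService sharedPool; // the global pool
    private final Semaphore permits;          // this provider's concurrency cap

    ProviderGate(ExecutorService sharedPool, int maxConcurrent) {
        this.sharedPool = sharedPool;
        this.permits = new Semaphore(maxConcurrent);
    }

    Future<?> submit(Runnable task) {
        return sharedPool.submit(() -> {
            permits.acquireUninterruptibly(); // wait for a free slot for this provider
            try {
                task.run();
            } finally {
                permits.release();
            }
        });
    }
}
```

Note that acquiring the permit inside the pool task parks a shared-pool thread while it waits; a fuller implementation would hold the task back in the provider's own queue until a permit frees up.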

For network I/O, if you use the NIO infrastructure, you can greatly parallelize activities there, of course.
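For illustration, a minimal selector sketch: one thread can poll many non-blocking channels at once, instead of blocking a thread per socket (class and method names here are assumed):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

/** Minimal NIO sketch: register a non-blocking channel with a Selector
 *  and poll it once without blocking. */
class NioSketch {
    static int pollOnce() throws IOException {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(0)); // ephemeral port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);
            return selector.selectNow();           // non-blocking poll of all keys
        }
    }
}
```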

Gregg Wonderly

On Jul 25, 2012, at 10:25 AM, Johannes Frank wrote:

> Hello!
> This is my first post to this list after reading it for some time (though most things are some way from the use cases I have to handle at the moment). 
> I have the following scenario: 
> We have an application that transfers abstract data from some sort of source location (can be anything from ftp, mail, sms, fax, local folder and so on) to some other sort of target location (from the same pool), with additional conversion in between. 
> Our thread model right now is rather crude: for each periodic transfer job we configure, we have one thread that does it all. 
> I would like to alter that approach so that we have a dispatching thread that monitors all the jobs and automatically creates a task containing the transfer command for ONE item from source to target (possibly via an optional converter). This would allow the transfers to run in parallel (where applicable) and enable much bigger throughput, especially when the system is managing a lot of small items, where the parallelizable management logic takes up a considerable share of the runtime for transmitting a single item compared to the pure data pumping. 
> So far so good. 
> My problem now is that I need to be able to limit the number of concurrent transmissions for each location, so that for example POP3 mailboxes only have one concurrent connection (during one connection the mailbox is locked and all other attempts will fail until the first connection is shut down), leaving all other outstanding tasks for this specific source on hold until the one task already running has completed. 
> So basically what I need is this: 
> A thread pool which has an intelligent concept of which runnable to dispatch next to the pool, and for which runnables some prerequisites need to be fulfilled before they can be dispatched. 
> I want to be able to define that "Runnables that fulfill this and that requirement may only have n running instances in the thread pool at any given time" without having to implement the actual queueing myself. 
> Is there already an implementation of such behavior, or are there at least existing concepts I could read up on to help me complete this task? 
> Kind Regards, 
> Johannes Frank 
> Johannes Frank - KISTERS AG - Charlottenburger Allee 5 - 52068 Aachen - Germany
> Handelsregister Aachen, HRB-Nr. 7838 | Vorstand: Klaus Kisters, Hanns Kisters | Aufsichtsratsvorsitzender: Dr. Thomas Klevers
> Phone: +49 241 9671 -0 | Fax: +49 241 9671 -555 | E-Mail: Johannes.Frank at kisters.de | WWW:
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
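The per-location cap Johannes asks about can be sketched as a map of semaphores keyed by location, where a dispatcher only hands a task to the pool when `tryStart` succeeds (class and method names are assumptions):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

/** Hypothetical per-location limiter: e.g. a POP3 mailbox gets 1 permit. */
class LocationLimits {
    private final Map<String, Semaphore> limits = new ConcurrentHashMap<>();

    void setLimit(String location, int maxConcurrent) {
        limits.put(location, new Semaphore(maxConcurrent));
    }

    /** Claims a slot for the location if one is free; never blocks. */
    boolean tryStart(String location) {
        Semaphore s = limits.get(location);
        return s == null || s.tryAcquire(); // no configured limit means unbounded
    }

    /** Releases the slot when the transfer completes. */
    void finished(String location) {
        Semaphore s = limits.get(location);
        if (s != null) s.release();
    }
}
```

Tasks whose `tryStart` fails simply stay in the source's queue; the dispatcher retries them when `finished` releases a slot.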
