[concurrency-interest] A thread pool for servers?

Henri Tremblay henri.tremblay at gmail.com
Fri Sep 11 23:01:24 EDT 2015


I have indeed encountered this problem frequently when trying to build a
basic producer -> consumers system without having to think too much about it.

I was really surprised that nothing exists out of the box that lets the
producer stop producing when the pool is saturated. So I usually end up
doing something like this:

// Any bounded BlockingQueue works here; ArrayBlockingQueue pre-allocates its capacity.
BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10_000);
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        parallelism, parallelism,      // fixed-size pool
        0L, TimeUnit.MILLISECONDS,     // core threads never time out
        queue);
// When the queue is full, the submitting thread runs the task itself,
// which throttles the producer instead of throwing RejectedExecutionException.
pool.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());

It does the job but I would love to have something readily available in
Executors.
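
For illustration, here is a minimal sketch of what such a convenience
method might look like (the class and method names are made up; nothing
like this exists in Executors today):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

final class BoundedExecutors {
    // Hypothetical helper: a fixed-size pool with a bounded queue, where a
    // full queue makes the submitting thread run the task itself
    // (CallerRunsPolicy), throttling the producer instead of throwing
    // RejectedExecutionException.
    static ExecutorService newBlockingFixedThreadPool(int nThreads, int queueCapacity) {
        return new ThreadPoolExecutor(
                nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}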

On 11 September 2015 at 21:33, Luke Sandberg <lukeisandberg at gmail.com>
wrote:

> I have found the "be prepared to handle task rejection sensibly" advice
> to be significantly easier said than done.  For example, few of Guava's
> ListenableFuture utilities are 'hardened' against
> RejectedExecutionException; as recently as about a year ago,
> Futures.transform would give you a 'dead' future if the executor threw an
> REE.  I have also seen a number of production deadlocks/stuck threads due
> to improper REE handling.  At least in the applications I work on, the
> number of places that submit tasks to thread pools is large, so ensuring
> consistent, reasonable handling is very hard.
>
> The strategy my teams have taken in some servers is to use fixed-size
> thread pools (ThreadPoolExecutor, though we are experimenting with FJP)
> with unbounded queues and higher-level throttling support.  In theory
> errant code could still cause OOMEs by producing tasks too fast, but in
> practice I have never observed this, while I have been involved in
> debugging production issues due to REE.
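>
> A minimal sketch of this kind of higher-level throttling, using a
> Semaphore to cap the number of in-flight tasks on top of a pool with an
> unbounded queue (the class name, pool size, and limit are illustrative,
> not the actual code):
>
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Semaphore;
>
> final class ThrottledSubmitter {
>     private final ExecutorService pool = Executors.newFixedThreadPool(16); // unbounded queue
>     private final Semaphore inFlight = new Semaphore(10_000); // cap on queued + running tasks
>
>     void submit(Runnable task) throws InterruptedException {
>         inFlight.acquire();                // blocks the producer when saturated
>         try {
>             pool.execute(() -> {
>                 try {
>                     task.run();
>                 } finally {
>                     inFlight.release();    // free a slot when the task completes
>                 }
>             });
>         } catch (RuntimeException e) {
>             inFlight.release();            // don't leak a permit if execute() fails
>             throw e;
>         }
>     }
> }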
>
> On Fri, Sep 11, 2015 at 5:15 PM Mike Axiak <mike at axiak.net> wrote:
>
>> At HubSpot, we've found the standard Executors helper methods to be a
>> somewhat onerous API. We provide a bounded thread pool as part of a fluent
>> builder that makes it clearer what you're asking for.
>>
>> Rather than adding a new "newFixedThreadPool" or having people create a
>> "new ThreadPoolExecutor" and possibly subsequently calling
>> "allowCoreThreadTimeOut", would a JEP for a builder be reasonable?
>>
>> Best,
>> Mike
>>
>>
>> On Fri, Sep 11, 2015 at 7:46 PM, David Holmes <davidcholmes at aapt.net.au>
>> wrote:
>>
>>> Hi Martin,
>>>
>>>
>>>
>>> All those requirements suggest direct use of a thread pool constructor,
>>> not a “convenience” method. You get to set the size of the pool, the type
>>> and size of the queue, and the rejection policy.
>>>
>>>
>>>
>>> David
>>>
>>>
>>>
>>> From: concurrency-interest-bounces at cs.oswego.edu
>>> [mailto:concurrency-interest-bounces at cs.oswego.edu] On Behalf Of
>>> Martin Buchholz
>>> Sent: Saturday, September 12, 2015 7:51 AM
>>> To: concurrency-interest; Douglas Dickinson
>>> Subject: [concurrency-interest] A thread pool for servers?
>>>
>>>
>>>
>>>
>>>
>>> Colleague Douglas Dickinson at Google pointed out that none of the
>>> convenience executors in Executors is truly resource bounded.
>>>
>>>
>>>
>>> I've never run a "production Java server", but trying to bound resource
>>> usage so that your server doesn't die with OOME or gc thrashing seems
>>> important.  So your thread pool needs to be bounded in the number of
>>> threads *and* in the size of the queue, and be prepared to handle task
>>> rejection sensibly.
>>>
>>>
>>>
>>> So it makes sense to me to add another convenience method to Executors
>>> that will act like newFixedThreadPool but will additionally have a bounded
>>> queue.  It's not completely obvious whether ArrayBlockingQueue or
>>> LinkedBlockingQueue is the better choice for the bounded queue, but I
>>> suspect for "serious servers" it's the former, because:
>>>
>>> - modern server environments tend to like pre-allocating all their
>>> resources at startup
>>>
>>> - there's little GC overhead to the "dead weight" of the
>>> ArrayBlockingQueue's backing array (a single object!)
>>>
>>> - LinkedBlockingQueue will allocate many more small objects, and servers
>>> don't like that
>>>
>>> - with today's economics, "memory is cheap"
>>>
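>>> A minimal sketch of what such a convenience method might look like (the
>>> name and signature are hypothetical; this is not an existing API):
>>>
>>> // Like newFixedThreadPool, but with a bounded work queue.  With the
>>> // default AbortPolicy, a submission that finds the queue full fails with
>>> // RejectedExecutionException, which callers must handle sensibly.
>>> static ExecutorService newBoundedFixedThreadPool(int nThreads, int queueCapacity) {
>>>     return new ThreadPoolExecutor(
>>>             nThreads, nThreads,
>>>             0L, TimeUnit.MILLISECONDS,
>>>             new ArrayBlockingQueue<>(queueCapacity));
>>> }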
>>>
>>>
>>> (and of course, we can try to write a genuinely better bounded thread pool
>>> not based on ThreadPoolExecutor, but that's actually hard)
>>>