[concurrency-interest] Target CPU utilization

Peter Kovacs peter.kovacs.1.0rc at gmail.com
Fri Feb 16 09:15:23 EST 2007


Matthias,

On 2/16/07, Ernst, Matthias <matthias.ernst at coremedia.com> wrote:
>
>
>
> On a related note: how do you size your pools? Do you really go ahead and
> size them for each machine they are deployed on? Do you use
> Runtime.getRuntime().availableProcessors() * ASSUMED_BLOCKING_RATIO?

I am still in the planning phase. My immediate plan is to use three
thread pools for three classes of jobs, grouped by the order of
magnitude of their expected blocking ratio. And yes, right now I
expect to size them with the formula you mentioned.
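
To make that concrete, here is roughly what I have in mind. It is only
a sketch: the three job classes and the blocking-ratio values are
assumptions for the example, and the sizing rule is just the usual
threads = cores * (1 + wait/compute) heuristic:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolSizing {
        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();

            // CPU-bound jobs: essentially no blocking, one thread per core.
            ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

            // Mixed jobs: assume threads wait about as long as they compute.
            double mixedBlockingRatio = 1.0;   // assumed, to be measured later
            ExecutorService mixedPool = Executors.newFixedThreadPool(
                    (int) (cores * (1 + mixedBlockingRatio)));

            // I/O-heavy jobs: assume waiting dominates by an order of magnitude.
            double ioBlockingRatio = 10.0;     // assumed, to be measured later
            ExecutorService ioPool = Executors.newFixedThreadPool(
                    (int) (cores * (1 + ioBlockingRatio)));
        }
    }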

I am still pondering how GUI background jobs will fit into this
picture. Joe suggested lowering the priority of GUI background jobs,
but I am not sure how to expose the "tuning knob" for this behaviour
to the enclosing application. I am especially unsure in the "pervasive
multithreading" scenario, where all kinds of operations execute
multi-threaded and one instance of an "elementary kind" of operation
may be running as a background job while another instance of the same
kind is executing as part of a user interaction at the same time. In
that scenario there seems to be no straightforward way to tell a
background work unit apart from an interactive one based on its kind
alone.

One transparent approach that comes to mind is to inspect the call
stack: if the GUI event handler is among the callers, we are part of
an interaction. But that is not a good idea. Interactive operations
might start their non-background work in a thread pool, and on the
other hand basically every payload operation in a typical GUI
application is started from the GUI event handler anyway. It seems
there is no way around it: the enclosing application must explicitly
set some kind of "background" flag on a top-level operation, and that
flag must then be propagated across the thread pools involved in
performing the work units that compose the top-level operation.
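
To sketch the propagation idea (all names here, BackgroundContext and
PropagatingExecutor, are made up for the example): the flag would live
in a ThreadLocal, and a wrapping Executor would capture it when a work
unit is submitted and restore it in whichever worker thread runs the
unit:

    import java.util.concurrent.Executor;

    // Hypothetical classes, only to illustrate propagating the flag.
    final class BackgroundContext {
        private static final ThreadLocal<Boolean> BACKGROUND =
                new ThreadLocal<Boolean>() {
                    protected Boolean initialValue() { return Boolean.FALSE; }
                };
        static boolean isBackground()        { return BACKGROUND.get().booleanValue(); }
        static void setBackground(boolean b) { BACKGROUND.set(Boolean.valueOf(b)); }
    }

    // Wraps any Executor so that the submitter's "background" flag is
    // carried over to the worker thread that eventually runs the task.
    final class PropagatingExecutor implements Executor {
        private final Executor delegate;

        PropagatingExecutor(Executor delegate) { this.delegate = delegate; }

        public void execute(final Runnable task) {
            final boolean background = BackgroundContext.isBackground(); // at submit time
            delegate.execute(new Runnable() {
                public void run() {
                    boolean previous = BackgroundContext.isBackground();
                    BackgroundContext.setBackground(background);
                    try {
                        task.run();
                    } finally {
                        BackgroundContext.setBackground(previous);
                    }
                }
            });
        }
    }

A worker could then, for example, lower its own priority or yield more
often while isBackground() is true; the application's "tuning knob"
would be setting the flag on the top-level operation.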

>
>  I'm asking because the .NET runtime provides a VM-wide thread pool that
> reacts to load conditions and adds more threads if the system is
> underutilized - albeit slowly (roughly every 500 milliseconds). I've been
> wondering whether that is a good idea or an MSFT
> looks-easy-but-ignores-the-realities feature.

From a program designer's perspective, this "service" provides the
ability to share a common thread pool without having to create the
machinery (interfaces, custom adapters) required for passing a
reference to it around between components that are agnostic of each
other. On the other hand, having just one such thread pool smells of
inflexibility (and appears to seriously limit its usefulness).
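
The alternative "machinery" I mean would look roughly like this (the
class and method names are invented for the example): each library
component exposes a hook to accept an Executor from the enclosing
application, say an adapter over the platform's own pool, and
otherwise falls back to a private default:

    import java.util.concurrent.Executor;
    import java.util.concurrent.Executors;

    // Invented example of a per-component hook.
    public class LibraryService {
        // Default pool, used only if the application never wires in its own.
        private volatile Executor executor = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        /** Hook for the enclosing application to supply its own thread pool. */
        public void setExecutor(Executor executor) {
            this.executor = executor;
        }

        public void submitWork(Runnable work) {
            executor.execute(work);
        }
    }

A VM-wide pool would make this hook unnecessary, but at the price of
every library sharing the same pool and sizing policy.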

I am even less sure about the implications from a system engineer's
perspective. Still, it is interesting to know that you have something
like this in .NET.

More knowledgeable people on this list will surely offer their
insights on this.

Peter

>
>  Matthias
>
>
>  -----Original Message-----
>  From: concurrency-interest-bounces at cs.oswego.edu on behalf
> of Joe Bowbeer
>  Sent: Fri 2/16/2007 01:27
>  To: concurrency-interest
>  Subject: Re: [concurrency-interest] Target CPU utilization
>
>  The specifics depend on how your service is expected to be used.  For
>  example, can it be invoked as-needed by a SwingWorker-style task?  Or
>  does it churn away in the background?
>
>  Staying out of the way of the GUI is especially important in the latter
>  case.  Lowering the priority of your worker threads may be enough to
>  keep the GUI from dragging or sputtering.
>
>  If your service runs on-demand (and the user is essentially waiting
>  for you to finish), then providing prompt cancellation and periodic
>  progress reports is usually sufficient.
>
>  It looks like NetBeans and Eclipse have modes where a background task
>  can be foregrounded, and vice versa.
>
>  For the best integration (both in terms of UI and managing load), yes,
>  it can be important to leverage the platform's existing executors.
>
>  On 2/15/07, Peter Kovacs <peter.kovacs.1.0rc at gmail.com> wrote:
>  >
>  > How do I best integrate a library into an application context which is
>  > unknown beforehand (in terms of thread pools)? Do I have to share
>  > pools with the enclosing applications?
>  >
>  > For example assume our library is used in a GUI application which
>  > itself is built on the NetBeans platform. NetBeans has its own kind of
>  > thread pools:
> http://www.netbeans.org/download/dev/javadoc/org-openide-util/org/openide/util/RequestProcessor.Task.html.
>  > Does my library have to provide hooks where the application can have
>  > my library use the thread management of NetBeans?
>  >
>  > Or is the best approach "laisser faire, laisser aller"? And leave it to
>  > "the invisible hand" of the OS scheduler to arbitrate between the
>  > threads of coexisting thread pools?
>  >

