[concurrency-interest] ParallelArray updates du jour
dcholmes at optusnet.com.au
Tue Jan 22 04:30:49 EST 2008
These things will impact performance, but they do not constitute blocking as
proscribed by the framework. A cache miss is not considered a blocking
operation, even though a system could "swap out" one thread/process for
another while it is handled. There is very little you can do about this, as
an application programmer, without being unduly familiar with the VM, its
native compilation scheme (object layouts etc.) and even the OS. I don't
believe there is any intention for the framework to aid you at this level of
abstraction.
Regarding the console, I was thinking about the I part of IO :)
From: peter.kovacs.benomuki at gmail.com
[mailto:peter.kovacs.benomuki at gmail.com]On Behalf Of Peter Kovacs
Sent: Tuesday, 22 January 2008 7:21 PM
To: dholmes at ieee.org
Cc: Doug Lea; concurrency-interest at cs.oswego.edu
Subject: Re: [concurrency-interest] ParallelArray updates du jour
Thank you, David, for the clarification!
Obviously, disk and network have blocking potential by orders of magnitude
greater than main memory. Still, looking at the discussions at large about
hardware architectures (from the optimal location of the memory controller
to attempts to memory access optimizations such as the fully-buffered DIMMs)
as well as the efforts put by OS designers into "memory placement
optimization" frameworks -- I'd candidly think that the overhead of
accessing main memory will count at the end of the day. (Especially since
we're talking about hundreds of CPUs here with a proportionately large
aggregate working set to manage.)
Or is the assumption here that by the time the FJ framework goes into
full-scale production, the problem of main memory access will have been
dealt with efficiently enough by the lower level stuff? Or (not necessarily
alternatively) does the FJ framework's implementation contain logic that
deals with this kind of blocking?)
(I am not arguing for or against something, I am just trying to collect
the bits of information I need to understand the system.)
(I know almost nothing of video subsystems, but is not writing to the
console roughly equivalent to writing to main memory? Is it not that the CPU
writes to the video memory and forgets about it? The video subsystem then
processes the input asynchronously on its own. Is it not that simple?)
On Jan 22, 2008 1:39 AM, David Holmes <dcholmes at optusnet.com.au> wrote:
But to clarify Doug's answer, "blocked IO" is as Tim referred to: disk /
network / console, i.e. traditional IO devices. It does not concern memory
access or CPU scheduling issues.
> -----Original Message-----
> From: concurrency-interest-bounces at cs.oswego.edu
> [mailto:concurrency-interest-bounces at cs.oswego.edu]On Behalf Of Doug Lea
> Sent: Monday, 21 January 2008 11:32 PM
> To: Peter Kovacs
> Cc: concurrency-interest at cs.oswego.edu
> Subject: Re: [concurrency-interest] ParallelArray updates du jour
> > "The computation defined in the compute method should not in general
> > perform
> > any other form of blocking synchronization, should not perform IO,
> > should be independent of other tasks."
> > The kind of blocking which I feel could be more closely defined is:
> > Naively, I would define IO as operations involving reading from, or
> > writing
> > to a physical device. Is main system memory (RAM) to be considered a
> > physical device in this context? If so, I assume FJTask
> instances must be
> > fine-grained enough for the working set of any individual
> FJTask instance
> > to fit into the on-CPU cache of the processor the FJTask is
> > scheduled on.
> > A corollary requirement is then that FJTask instances should complete
> > quickly enough so as to reduce the probability of
> > (a) the OS "migrating the instance" to another CPU due to
> > (b) the instance being rescheduled in general - in case the on-CPU cache
> > is not large enough to hold the working sets of multiple instances.
> > Is this correct?
> More or less. Like all lightweight-task frameworks, FJ does
> not explicitly cope with blocked IO: If a worker thread
> blocks on IO, then (1) it is not available to help process
> other tasks (2) Other worker threads waiting for the task
> to complete (i.e., to join it) may run out of work and waste
> CPU cycles. Neither of these issues completely eliminates
> potential use, but they do require a lot of explicit care.
> For example, you could place tasks that may experience
> sustained blockages in their own small ForkJoinPools.
> (The fortress folks do something along these lines
> mapping fortress "fair" threads onto forkjoin.) You
> can also dynamically increase worker pool sizes (we
> have methods to do this) when blockages may occur.
> All in all though, the reason for the restrictions and
> advice is that we do not have good automated support
> for these kinds of cases, and don't yet know of the
> best practices, or whether it is a good idea at all.
> Concurrency-interest mailing list
> Concurrency-interest at altair.cs.oswego.edu
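The pool-separation advice above can be sketched in a few lines. This is a
minimal illustration, written against the ForkJoinPool API as it later
shipped in java.util.concurrent (the jsr166y names circulating in 2008
differ slightly); the class name, pool names, and pool sizes here are
illustrative assumptions, not part of the framework.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SplitPools {
    // Compute-only tasks go to a pool sized to the hardware.
    static final ForkJoinPool computePool =
            new ForkJoinPool(Runtime.getRuntime().availableProcessors());

    // Tasks that may experience sustained blockages (e.g. on IO) get
    // their own small pool, so they cannot starve the compute workers.
    static final ForkJoinPool blockingPool = new ForkJoinPool(2);

    public static void main(String[] args) {
        int sum = computePool.invoke(new RecursiveTask<Integer>() {
            @Override protected Integer compute() {
                return 1 + 2; // pure computation: no blocking, no IO
            }
        });
        System.out.println(sum); // prints 3
    }
}
```

The "dynamically increase worker pool sizes" option Doug mentions
corresponds to what later surfaced as ForkJoinPool.ManagedBlocker, which
lets the pool compensate with an extra worker while one is blocked.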