[concurrency-interest] jsr166y.forkjoin API comments
jdmarshall at gmail.com
Fri Jan 25 20:53:11 EST 2008
On Jan 25, 2008 5:21 PM, Doug Lea <dl at cs.oswego.edu> wrote:
> jason marshall wrote:
> > Now that I've stirred the hornet's nest, let me ask you all a higher
> > level question:
> > Do you think the serious design wins for ForkJoin are going to come
> > from parallelizing tasks that only take a few instruction cycles, or do
> > you think ForkJoin's biggest win will be with tasks that take thousands
> > or millions of cycles?
> That one is easy; the former. We already have j.u.c frameworks in place
> (Executors, Futures, etc) for the latter, and they are widely used.
> > Most of the posts in this thread are dealing with things that happen
> > near the noise floor.
> How often do you iterate over collections or arrays?
> That's basically what ParallelArray et al do for you, for various
> kinds of structured traversals.
I would like to do it more often, actually. And this is where ParallelArray
offers some things that don't quite exist in j.u.c, which is why I'm
interested. But if jsr166 is predominantly about number crunching in the
small, then maybe I'm not the target audience.
I think my original point stands regardless: It would be nice if the
high-level concepts were accessible from the Javadoc, and currently they are
not (and now the Ops interfaces are actually worse by far in that respect
than Parallel*Array was when I first looked at the spec).
> > My understanding is that, if you really want to make a function over
> > primitives go very, very fast, you're going to convert the code to SIMD
> > instructions on the processor, or in the GPU.
> We've discussed this with people looking into SIMD loop optimization.
> Surprisingly enough, they require the same sorts of structured
> traversals that we arrange. So the leaf computations in a forkjoin
> operation may well eventually be SIMD.
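The reason the leaves are SIMD candidates is that a fork/join traversal bottoms out in a branch-free, unit-stride counted loop over a contiguous slice, which is the shape a vectorizing JIT looks for. A hypothetical leaf body (the method name and scaling operation are illustrative, not from any API):

```java
// Hypothetical leaf computation of a fork/join traversal: a branch-free,
// unit-stride counted loop over a contiguous slice of an int array.
// Loops of this shape are candidates for SIMD code generation by the JIT.
public class LeafDemo {
    static void scaleLeaf(int[] a, int lo, int hi, int k) {
        for (int i = lo; i < hi; i++) a[i] *= k;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        scaleLeaf(a, 0, a.length, 10);
        System.out.println(java.util.Arrays.toString(a)); // prints [10, 20, 30, 40]
    }
}
```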
> > Therefore, the Big Win
> > here is not in writing an API that tries to make the Java code run as
> > fast as possible, but instead figuring out how to get Hotspot to turn
> > your code into SIMD calls for you.
> Or both :-)
> > Can you do that with Integer or Double? Maybe, maybe not. But
> > until Hotspot does that with ints and floats, then it hardly
> > matters, does it?
> The SIMD int connection is why ParallelIntArray may possibly be
> revived. Floats, not so much.
For floats, the prevailing winds suggest you're going to use GPGPU for
SIMD, IEEE incompatibilities notwithstanding.