[concurrency-interest] jsr166y Parallel*Array withMapping/withFilter chaining

Tim Peierls tim at peierls.net
Wed Jan 23 16:47:39 EST 2008


It is by design. The return types of the methods restrict you to chains that
can be performed efficiently in one recursive fork-join "pass". The calls to
specify bounds, filters, or mappings just set things up; the real work
starts when you call an "action" method like all, reduce, replace, etc.

But in your example, since you aren't keeping the original array, you could
avoid the intermediate allocation by using replaceWithMapping. In other
words, convert this code:

  pa.withBounds(b).withMapping(m1).all()  // allocates a new PA
    .withFilter(f).withMapping(m2).all(); // allocates another new PA

into

  pa.withBounds(b).replaceWithMapping(m1) // no new allocation
    .withFilter(f).withMapping(m2).all(); // allocates a new PA

The latter allocates only one array besides pa, whereas the former allocates
two.
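
Spelled out as a complete method, the rewrite of your chain would look
something like this (a sketch only: the operators below are stand-ins for
your myGenerator, myMapping, myFilter, and myOtherMapping, and the pool is
a placeholder):

  import jsr166y.forkjoin.ForkJoinPool;
  import jsr166y.forkjoin.Ops;
  import jsr166y.forkjoin.ParallelDoubleArray;

  public class ChainExample {
      // Stand-in operators; substitute the real ones.
      static final Ops.DoubleGenerator myGenerator = new Ops.DoubleGenerator() {
          public double op() { return Math.random(); }
      };
      static final Ops.DoubleOp myMapping = new Ops.DoubleOp() {
          public double op(double x) { return x * 2.0; }
      };
      static final Ops.DoublePredicate myFilter = new Ops.DoublePredicate() {
          public boolean op(double x) { return x > 0.5; }
      };
      static final Ops.DoubleOp myOtherMapping = new Ops.DoubleOp() {
          public double op(double x) { return x + 1.0; }
      };

      static final ForkJoinPool myForkJoinExecutor = new ForkJoinPool();

      public static ParallelDoubleArray chain(int size) {
          return ParallelDoubleArray.create(size, myForkJoinExecutor)
              .replaceWithGeneratedValue(myGenerator) // in place
              .replaceWithMapping(myMapping)          // in place, no new array
              .withFilter(myFilter)                   // prefix: just sets up
              .withMapping(myOtherMapping)            // prefix: just sets up
              .all();                                 // one pass, one allocation
      }
  }

The filter still has to be followed by the final all(), since filtering
changes the element count, but everything after the two replace steps runs
in a single fork-join pass.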

--tim


On Jan 23, 2008 4:07 PM, David J. Biesack <David.Biesack at sas.com> wrote:

> I find that I cannot do the following:
>
>    public ParallelDoubleArray chain(int size) {
>        return ParallelDoubleArray.create(size, myForkJoinExecutor)
>            .replaceWithGeneratedValue(myGenerator)
>            .withMapping(myMapping)
>            .withFilter(myFilter)
>            .withMapping(myOtherMapping).all();
>    }
>
> Instead, I am forced to construct an intermediate ParallelArray because
> withMapping returns a ParallelDoubleArrayWithMapping and that class does not
> provide a withFilter(Ops.DoublePredicate) method.
>
>    public ParallelDoubleArray chain(int size) {
>        ParallelDoubleArray intermediate;
>        intermediate = ParallelDoubleArray.create(size, myForkJoinExecutor)
>            .replaceWithGeneratedValue(myGenerator)
>            .withMapping(myMapping).all();
>        intermediate = intermediate.withFilter(myFilter)
>            .withMapping(myOtherMapping).all();
>        return intermediate;
>    }
>
> It appears to force premature allocation of the intermediate buffer and
> consequently higher peak memory consumption, a concern for larger
> arrays.
>
> Is this by design (or necessity), or an accidental omission? I suppose I
> can see how a filter normally changes the number of elements and thus must
> be run completely before applying the next step (mapping), but even so, it
> would be nice for the framework to internalize this so that client code is
> not forced to express possibly inefficient paths that prevent future
> optimization.