[concurrency-interest] forkjoin matrix multiply
David J. Biesack
David.Biesack at sas.com
Wed Jan 30 13:35:24 EST 2008
> Date: Tue, 29 Jan 2008 16:43:36 -0500
> From: "Tim Peierls" <tim at peierls.net>
> Subject: Re: [concurrency-interest] forkjoin matrix multiply
> On Jan 29, 2008 4:28 PM, Mark Thornton <mthornton at optrak.co.uk> wrote:
> > Joe Bowbeer wrote:
> > > If so, would it be beneficial for us to look inside jama.Matrix, with
> > > the goal of enabling its implementors to parallelize its operations
> > > (using forkjoin tools) in a straightforward manner?
> > >
> > I think high performance matrix multiply is usually done with the matrix
> > divided into tiles or blocks, such that a couple of tiles fit neatly
> > into the processor cache. This technique is applied for each cache level
> > (blocks within blocks ...). While the forkJoin tools can help, I'm not
> > sure how the parallel array mechanisms help here.
> Agreed. I think David's example is mainly about comparing a
> ThreadPoolExecutor design to a ForkJoin design, not just in performance but
> in succinctness. I don't think it was intended as a serious example of doing
> high-performance matrix multiplication.
Most definitely. The examples merely show how to take an existing sequential algorithm and apply ForkJoin to it.
Developing more realistic examples can help refine the API as well; Doug is already considering some additions/enhancements spawned by writing this example (IndexedProcedure, ParallelIntRange). Framework design requires lots of use cases; matrix multiply is simply a well-understood one.
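To make the "take a sequential algorithm and apply ForkJoin" step concrete, here is a minimal sketch using the fork/join API (written against the java.util.concurrent classes that later shipped with the JDK; class and constant names such as MultiplyTask and THRESHOLD are my own, and the threshold value is an untuned assumption). It recursively splits the row range of the result matrix and falls back to the plain triple loop at the leaves:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Sketch: parallelize a sequential matrix multiply by recursively
// splitting the row range of the result across fork/join tasks.
public class ForkJoinMultiply {
    static final int THRESHOLD = 64; // rows per leaf task (tuning assumption)

    static class MultiplyTask extends RecursiveAction {
        final double[][] a, b, c;
        final int rowLo, rowHi;

        MultiplyTask(double[][] a, double[][] b, double[][] c,
                     int rowLo, int rowHi) {
            this.a = a; this.b = b; this.c = c;
            this.rowLo = rowLo; this.rowHi = rowHi;
        }

        @Override
        protected void compute() {
            if (rowHi - rowLo <= THRESHOLD) {
                // Sequential base case: ordinary i-k-j loop over our rows.
                int n = b.length, p = b[0].length;
                for (int i = rowLo; i < rowHi; i++)
                    for (int k = 0; k < n; k++)
                        for (int j = 0; j < p; j++)
                            c[i][j] += a[i][k] * b[k][j];
            } else {
                // Split the row range in half and run both halves in parallel.
                int mid = (rowLo + rowHi) >>> 1;
                invokeAll(new MultiplyTask(a, b, c, rowLo, mid),
                          new MultiplyTask(a, b, c, mid, rowHi));
            }
        }
    }

    /** Multiply a (m x n) by b (n x p), returning a new m x p result. */
    public static double[][] multiply(double[][] a, double[][] b) {
        double[][] c = new double[a.length][b[0].length];
        ForkJoinPool.commonPool().invoke(
            new MultiplyTask(a, b, c, 0, a.length));
        return c;
    }
}
```

The structure mirrors the sequential version closely, which is the point of the comparison with a ThreadPoolExecutor design: only the recursive split and invokeAll are new.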
Please try more examples and add them to the wiki at http://artisans-serverintellect-com.si-eioswww6.com/default.asp?W32
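For comparison, the tiling technique Mark describes can be sketched sequentially as a single level of blocking (the class name and BLOCK size are illustrative assumptions; a serious implementation would tile per cache level and tune BLOCK to the cache size):

```java
// Sketch: one-level cache-blocked multiply. Each innermost triple loop
// works on BLOCK x BLOCK tiles so a pair of tiles fits in cache.
public class BlockedMultiply {
    static final int BLOCK = 32; // tile edge length (tuning assumption)

    /** Multiply a (m x n) by b (n x p), returning a new m x p result. */
    public static double[][] multiply(double[][] a, double[][] b) {
        int m = a.length, n = b.length, p = b[0].length;
        double[][] c = new double[m][p];
        for (int ii = 0; ii < m; ii += BLOCK)
            for (int kk = 0; kk < n; kk += BLOCK)
                for (int jj = 0; jj < p; jj += BLOCK)
                    // Accumulate the (ii,kk) tile of a times the
                    // (kk,jj) tile of b into the (ii,jj) tile of c.
                    for (int i = ii; i < Math.min(ii + BLOCK, m); i++)
                        for (int k = kk; k < Math.min(kk + BLOCK, n); k++)
                            for (int j = jj; j < Math.min(jj + BLOCK, p); j++)
                                c[i][j] += a[i][k] * b[k][j];
        return c;
    }
}
```

The outer three loops walk tiles rather than elements; fork/join tasks could be layered over the tile loops in the same recursive style as the row-splitting example.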
David J. Biesack
SAS Institute Inc.
SAS Campus Drive
Cary, NC 27513
(919) 531-7771
http://www.sas.com