[concurrency-interest] forkjoin matrix multiply

Tim Peierls tim at peierls.net
Tue Jan 29 16:43:36 EST 2008

On Jan 29, 2008 4:28 PM, Mark Thornton <mthornton at optrak.co.uk> wrote:

> Joe Bowbeer wrote:
> > If so, would it be beneficial for us to look inside jama.Matrix, with
> > the goal of enabling its implementors to parallelize its operations
> > (using forkjoin tools) in a straightforward manner?
> >
> I think high-performance matrix multiply is usually done with the matrix
> divided into tiles or blocks, such that a couple of tiles fit neatly
> into the processor cache. This technique is applied at each cache level
> (blocks within blocks ...). While the fork/join tools can help, I'm not
> sure how the parallel array mechanisms help here.

Agreed. I think David's example is mainly about comparing a
ThreadPoolExecutor design to a ForkJoin design, not just in performance but
in succinctness. I don't think it was intended as a serious example of doing
high-performance matrix multiplication.
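For what it's worth, the divide-and-conquer structure Mark describes maps naturally onto RecursiveAction. Here's a minimal sketch (written against the final java.util.concurrent ForkJoin API rather than the jsr166y preview; the class names, the row-band splitting strategy, and the THRESHOLD tuning knob are my own illustration, not taken from JAMA or from David's example):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ParallelMultiply {
    // Assumed tuning parameter: rows per leaf task. In a real blocked
    // implementation this would be sized so a tile fits in cache.
    static final int THRESHOLD = 64;

    // Computes rows [lo, hi) of C = A * B. Each task owns a disjoint
    // band of rows of C, so tasks never write the same element.
    static class MultiplyTask extends RecursiveAction {
        final double[][] a, b, c;
        final int lo, hi;

        MultiplyTask(double[][] a, double[][] b, double[][] c, int lo, int hi) {
            this.a = a; this.b = b; this.c = c; this.lo = lo; this.hi = hi;
        }

        @Override protected void compute() {
            if (hi - lo <= THRESHOLD) {
                int n = b.length, p = b[0].length;
                for (int i = lo; i < hi; i++)
                    // k-before-j loop order walks b[k] sequentially,
                    // which is friendlier to the cache than the naive order
                    for (int k = 0; k < n; k++) {
                        double aik = a[i][k];
                        for (int j = 0; j < p; j++)
                            c[i][j] += aik * b[k][j];
                    }
            } else {
                int mid = (lo + hi) >>> 1;
                invokeAll(new MultiplyTask(a, b, c, lo, mid),
                          new MultiplyTask(a, b, c, mid, hi));
            }
        }
    }

    public static double[][] multiply(double[][] a, double[][] b) {
        double[][] c = new double[a.length][b[0].length];
        ForkJoinPool.commonPool().invoke(
            new MultiplyTask(a, b, c, 0, a.length));
        return c;
    }
}
```

This only parallelizes over row bands; Mark's point still stands that a serious implementation would additionally block the k and j loops per cache level, which ForkJoin by itself doesn't give you.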

