[concurrency-interest] forkjoin matrix multiply

Joe Bowbeer joe.bowbeer at gmail.com
Tue Jan 29 15:49:27 EST 2008

On Jan 29, 2008 10:51 AM, David J. Biesack wrote:
> > I just posted a more detailed example to the wiki, comparing
> > a sequential matrix multiplication algorithm, then the same
> > algorithm rewritten to run concurrently with an ExecutorService
> > and then rewritten using ForkJoin
> >
> >    http://artisans-serverintellect-com.si-eioswww6.com/default.asp?W40
> Tim Peierls found a concurrency bug in my example that my unit tests missed.
> I've corrected that (and my UTs!) and updated the page. Thanks, Tim.

I see what the PA example is doing now!  It took a moment to sink in...

While using PA this way involves less code, it puts me off -- because I
think of a ParallelArray as an object that owns its data and operates
on that data.

I think that providing a tool class with static methods would be a
better match for this kind of problem.  I'm referring to the
ParallelLoop and ParallelRange ideas.  Though in each case, I don't
understand why these would be objects rather than simply tool classes.
The FJ executor is the only state, right?  But what's the advantage in
creating an object simply to hold an FJ executor?  Why not provide
static methods instead?
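To make the suggestion concrete, here is a minimal sketch of the
static-method style I have in mind: a tool class that hides a shared
fork/join pool behind a static forEach.  The class name, threshold,
and IntConsumer-based signature are my own illustration, not anything
from the wiki example, and it uses the later java.util.concurrent
ForkJoinPool API rather than the jsr166y preview:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.function.IntConsumer;

// Hypothetical "tool class" style: static methods over one shared
// pool, instead of a ParallelLoop/ParallelRange object whose only
// state is the executor it holds.
public final class ParallelOps {
    private static final ForkJoinPool POOL = new ForkJoinPool();

    private ParallelOps() {}

    // Apply body to every index in [lo, hi), splitting recursively.
    public static void forEach(int lo, int hi, IntConsumer body) {
        POOL.invoke(new LoopTask(lo, hi, body));
    }

    private static final class LoopTask extends RecursiveAction {
        static final int THRESHOLD = 1024;  // illustrative cutoff
        final int lo, hi;
        final IntConsumer body;

        LoopTask(int lo, int hi, IntConsumer body) {
            this.lo = lo; this.hi = hi; this.body = body;
        }

        @Override
        protected void compute() {
            if (hi - lo <= THRESHOLD) {
                for (int i = lo; i < hi; i++) body.accept(i);
            } else {  // split the range in half and run both halves
                int mid = (lo + hi) >>> 1;
                invokeAll(new LoopTask(lo, mid, body),
                          new LoopTask(mid, hi, body));
            }
        }
    }
}
```

A caller would then write something like
ParallelOps.forEach(0, a.length, i -> a[i] *= 2) without ever
constructing a loop object or touching the pool.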

I'm intrigued by the matrix multiply example.  Should I assume this is
a typical domain for forkjoin and friends?

If so, would it be beneficial for us to look inside jama.Matrix, with
the goal of enabling its implementors to parallelize its operations
(using forkjoin tools) in a straightforward manner?
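For instance, the straightforward parallelization I'd hope a library
like jama.Matrix could adopt is something like the following sketch,
which splits the multiply over rows of the result.  The class name,
threshold, and row-splitting strategy are my own assumptions for
illustration, not the wiki example's code:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

// Sketch of a row-wise parallel matrix multiply: C = A * B,
// where A is m x n and B is n x p.
public class MatMul {
    static final int THRESHOLD = 64;  // rows handled per leaf task

    public static double[][] multiply(double[][] a, double[][] b) {
        int m = a.length, n = b.length, p = b[0].length;
        double[][] c = new double[m][p];
        new ForkJoinPool().invoke(new RowTask(a, b, c, 0, m, n, p));
        return c;
    }

    static class RowTask extends RecursiveAction {
        final double[][] a, b, c;
        final int loRow, hiRow, n, p;

        RowTask(double[][] a, double[][] b, double[][] c,
                int loRow, int hiRow, int n, int p) {
            this.a = a; this.b = b; this.c = c;
            this.loRow = loRow; this.hiRow = hiRow;
            this.n = n; this.p = p;
        }

        @Override
        protected void compute() {
            if (hiRow - loRow <= THRESHOLD) {
                // Sequential i-k-j multiply over this band of rows;
                // each task writes a disjoint slice of C, so no locking.
                for (int i = loRow; i < hiRow; i++)
                    for (int k = 0; k < n; k++) {
                        double aik = a[i][k];
                        for (int j = 0; j < p; j++)
                            c[i][j] += aik * b[k][j];
                    }
            } else {  // fork the row range in half
                int mid = (loRow + hiRow) >>> 1;
                invokeAll(new RowTask(a, b, c, loRow, mid, n, p),
                          new RowTask(a, b, c, mid, hiRow, n, p));
            }
        }
    }
}
```

Because each task owns a disjoint band of rows of C, the tasks never
contend on shared state, which is what makes this domain such a clean
fit for fork/join.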


A quick search found another test subject at jmathtools.sourceforge.net:

These are tool classes with methods that operate on double arrays.
(Variable argument lists... Wow.)

Should we strive to enable their implementors to parallelize using forkjoin tools?
