[concurrency-interest] Some interesting (confusing?) benchmark results

Kirk Pepperdine kirk at kodewerk.com
Tue Feb 12 14:30:46 EST 2013


oooo, thanks!!!

On 2013-02-12, at 8:20 PM, thurston at nomagicsoftware.com wrote:

> Here is all of the relevant code in a single blob.  I think it's self-explanatory; the sample test is at the end.
> 
> http://pastebin.com/WrfBHYSG
> 
> 
> 
> On 2013-02-12 01:55, thurston at nomagicsoftware.com wrote:
>> Yes, I will push the code and my measurements some time this week.
>> I'm really interested in seeing what others' results would be.  What I
>> can say is that the timings are surprisingly stable (range 30-40ms),
>> and the input data is randomly generated for each test run, which can
>> certainly affect merge(int[], int[])'s performance.
>> 
>> On 2013-02-12 01:50, Kirk Pepperdine wrote:
>>> Hi,
>>> On 2013-02-11, at 9:52 PM, thurston at nomagicsoftware.com wrote:
>>> Hi T,
>>> 
>>> Can you publish the raw data from your runs? Averages tend to hide
>>> effects, and it's difficult to say anything with only that measure.
>>> 
>>> Regards,
>>> Kirk
>>> 
>>> 
>>>> Hello,
>>>> 
>>>> I made some initial attempts at using the ForkJoin framework for some rather obvious recursively parallel use-cases, namely summing the elements of an int[] and also sorting an int[] (randomly filled).
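>>>> 
>>>> For the summing case the idea is roughly this (a minimal sketch; the threshold
>>>> and names here are illustrative, not my exact code):
>>>> 
>>>> import java.util.concurrent.RecursiveTask;
>>>> 
>>>> class SumTask extends RecursiveTask<Long>
>>>> {
>>>>     private static final int THRESHOLD = 10_000;   // illustrative cutoff
>>>>     private final int[] data;
>>>>     private final int lo, hi;                       // half-open range [lo, hi)
>>>> 
>>>>     SumTask(int[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }
>>>> 
>>>>     @Override protected Long compute()
>>>>     {
>>>>         if (hi - lo < THRESHOLD) {                  // small range: sum sequentially
>>>>             long sum = 0;
>>>>             for (int i = lo; i < hi; i++) sum += data[i];
>>>>             return sum;
>>>>         }
>>>>         int mid = (lo + hi) >>> 1;
>>>>         SumTask left  = new SumTask(data, lo, mid);
>>>>         SumTask right = new SumTask(data, mid, hi);
>>>>         left.fork();                                // run the left half asynchronously
>>>>         return right.compute() + left.join();       // compute the right half here, then join the left
>>>>     }
>>>> }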
>>>> 
>>>> The problem-sizes I used ranged from 1M - 10M.
>>>> 
>>>> I really enjoy using the framework and find it relatively easy to reason about my programs (if not necessarily the internals of the framework).
>>>> But I was disappointed with the results: in both cases they were slower than the corresponding Java serial implementation.
>>>> 
>>>> I completely understand the principle of YMMV, and I wasn't expecting a 2x speedup, especially in the case of the sort, but I was expecting that ForkJoin would do at least a little better than the single-threaded version.  I guess what I'm asking is: are these results surprising?  Does it mean I'm doing something wrong?
>>>> 
>>>> I'll give a brief description of my implementation:
>>>> single-threaded:  Arrays.sort(int[])
>>>> 
>>>> ForkJoin: I'm using a merge-sort, with THRESHOLD = 250 (arrived at by trial-and-error)
>>>> 
>>>> protected int[] compute()
>>>> {
>>>>     if (array.length < THRESHOLD)
>>>>         return insertionSort(array);
>>>>     MergeSort left  = new MergeSort(/* first half of array */);
>>>>     MergeSort right = new MergeSort(/* second half of array */);
>>>>     left.fork();                     // schedule the left half asynchronously
>>>>     return merge(right.compute(),    // sort the right half in this thread
>>>>                  left.join());       // then wait for the forked left half
>>>> }
>>>> 
>>>> The insertionSort() and merge() methods are just standard implementations; there is a bit of an apples-to-oranges comparison since Arrays.sort() uses an optimized quicksort, but we're still talking O(n log n).
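>>>> 
>>>> By "standard" I mean nothing fancier than the textbook merge (a rough sketch;
>>>> the real code differs in the details):
>>>> 
>>>> static int[] merge(int[] a, int[] b)
>>>> {
>>>>     int[] out = new int[a.length + b.length];
>>>>     int i = 0, j = 0, k = 0;
>>>>     while (i < a.length && j < b.length)             // take the smaller head element
>>>>         out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
>>>>     while (i < a.length) out[k++] = a[i++];          // copy whatever remains on either side
>>>>     while (j < b.length) out[k++] = b[j++];
>>>>     return out;
>>>> }
>>>> 
>>>> The whole thing is kicked off with something like
>>>> new ForkJoinPool().invoke(new MergeSort(data)).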
>>>> 
>>>> Just ran it on my laptop:
>>>> Windows 7 64-bit
>>>> 1.7.0_03; Java HotSpot(TM) 64-Bit Server VM 22.1-b02
>>>> Core 2 Duo ==> 2 cores @ 2 GHz
>>>> 
>>>> 2M int[]:
>>>> single-threaded: ~330ms
>>>> ForkJoin (2 workers): ~390ms
>>>> 
>>>> Would appreciate any feedback, and hopefully the #s are somewhat helpful (and please, no scoffing at my antiquated machine).
>>>> 
>>>> -T
>>>> 
>>>> 
>>>> _______________________________________________
>>>> Concurrency-interest mailing list
>>>> Concurrency-interest at cs.oswego.edu
>>>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
> 
