[concurrency-interest] Some interesting (confusing?) benchmark results
viktor.klang at gmail.com
Wed Feb 13 03:33:59 EST 2013
On Tue, Feb 12, 2013 at 10:47 PM, Nathan Reynolds <
nathan.reynolds at oracle.com> wrote:
> > best performance when only loading 60-70% of the cores
> What do you mean by performance? Do you mean you achieve the highest
> throughput? Do you mean you achieve the lowest response times? Do you mean
> something else?
> The early implementations of hyper-threading on Intel processors sometimes
> ran into trouble depending upon the workload. Enabling hyper-threading
> actually hurt performance and throughput. A lot of people quickly learned to
> disable hyper-threading. They are so entrenched in that decision that it
> is hard to help them see that hyper-threading is actually beneficial now.
This resonates very well with my experience.
> The Linux thread scheduler is smart enough to put 1 thread on each
> physical core first and then double up on physical cores. So, I am not
> surprised that loading 60-70% cores yields best performance on the above
> mentioned processors. This creates a few more threads than physical cores
> which in a way disables hyper-threading.
Also, in my experience the workload type is essentially everything when it
comes to what you can get out of hyperthreading.
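To make that concrete, here is a minimal sketch of a common sizing heuristic (not something prescribed in this thread): scale the pool from the core count by the workload's wait-to-compute ratio, so CPU-bound pools stay near the core count while IO-bound pools grow. The ratios used below are illustrative assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    // Hypothetical heuristic: threads = cores * (1 + wait/compute).
    // CPU-bound work has a ratio near 0, so the pool stays near the
    // core count; IO-bound work waits a lot, so the pool grows.
    static int poolSize(int cores, double waitToCompute) {
        return Math.max(1, (int) (cores * (1 + waitToCompute)));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        // CPU-bound: roughly one thread per logical core.
        ExecutorService cpuPool = Executors.newFixedThreadPool(poolSize(cores, 0.0));
        // IO-bound: assume threads wait ~4x as long as they compute.
        ExecutorService ioPool = Executors.newFixedThreadPool(poolSize(cores, 4.0));
        cpuPool.shutdown();
        ioPool.shutdown();
        System.out.println(poolSize(8, 0.0)); // 8
        System.out.println(poolSize(8, 4.0)); // 40
    }
}
```

Whether the "right" ratio is 0, 4, or something else is exactly the workload-dependence being discussed here; it has to be measured, not assumed.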
> Later implementations of hyper-threading improved considerably. I am not
> aware of any workloads which perform worse with hyper-threading enabled.
Me neither. But a hyper-thread is not equivalent to a "real" core.
> With a modern processor (i.e. Westmere or newer), it would be interesting
> if you ran your workload with hyper-threading enabled and disabled. Then
> find the optimal thread count for each configuration. If hyper-threading
> disabled performs better, then that definitely would be an interesting
> workload and result.
Is there a way to know how many physical processors are available on the
machine?
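As far as I know the JVM itself only reports logical processors (hyper-threads included) via `Runtime.availableProcessors()`. A Linux-only sketch for getting at physical cores is to count distinct (package, core) pairs from sysfs; the paths below are the standard Linux topology files, and the code returns -1 where they are absent:

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class CoreCount {
    // Logical processors visible to the JVM (includes hyper-threads).
    static int logicalProcessors() {
        return Runtime.getRuntime().availableProcessors();
    }

    // Linux-only sketch: count distinct (physical_package_id, core_id)
    // pairs under /sys/devices/system/cpu. Returns -1 if the topology
    // files are unavailable (non-Linux OS, restricted container, etc.).
    static int physicalCores() {
        Path cpuDir = Paths.get("/sys/devices/system/cpu");
        if (!Files.isDirectory(cpuDir)) return -1;
        Set<String> cores = new HashSet<>();
        try (DirectoryStream<Path> cpus = Files.newDirectoryStream(cpuDir, "cpu[0-9]*")) {
            for (Path cpu : cpus) {
                Path pkg = cpu.resolve("topology/physical_package_id");
                Path core = cpu.resolve("topology/core_id");
                if (Files.exists(pkg) && Files.exists(core)) {
                    cores.add(new String(Files.readAllBytes(pkg)).trim()
                            + ":" + new String(Files.readAllBytes(core)).trim());
                }
            }
        } catch (Exception e) {
            return -1;
        }
        return cores.isEmpty() ? -1 : cores.size();
    }

    public static void main(String[] args) {
        System.out.println("logical  = " + logicalProcessors());
        System.out.println("physical = " + physicalCores());
    }
}
```

With hyper-threading enabled you would expect `logical` to be roughly twice `physical`; with it disabled, the two match.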
> Nathan Reynolds <http://psr.us.oracle.com/wiki/index.php/User:Nathan_Reynolds> | Architect |
> Oracle PSR Engineering <http://psr.us.oracle.com/> | Server Technology
> On 2/12/2013 2:18 PM, √iktor Ҡlang wrote:
> On Tue, Feb 12, 2013 at 8:28 PM, Kirk Pepperdine <kirk at kodewerk.com>wrote:
>> > Do you agree that thread pool sizing depends on type of work? (IO bound
>> vs CPU bound, bursty vs steady etc etc)
>> > Do you agree that a JVM Thread is not a unit of parallelism?
>> > Do you agree that having more JVM Threads than hardware threads is bad
>> for CPU-bound workloads?
>> No, even with CPU bound workloads I have found that the hardware/OS is
>> much better at managing many workloads across many threads than I am. So a
>> few more threads is OK; many more threads goes bad fast.
> That's an interesting observation. Have any more data on that? (really
> As I said earlier, for CPU-bound workloads we've seen the best performance
> when only loading 60-70% of the cores (other threads exist on the machine
> of course).
> *Viktor Klang*
> *Director of Engineering*
> Typesafe <http://www.typesafe.com/> - The software stack for applications
> that scale
> Twitter: @viktorklang
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
*Viktor Klang*
*Director of Engineering*
Typesafe <http://www.typesafe.com/> - The software stack for applications
that scale
Twitter: @viktorklang