[concurrency-interest] Benchmark to demonstrate improvement in thread management over the years.

Viktor Klang viktor.klang at gmail.com
Tue Aug 13 08:43:50 EDT 2013


And, of course, it is interesting to see how a loop over a short sleep
contrasts with longer periods of sleep (jitter etc.).


On Tue, Aug 13, 2013 at 1:45 PM, Kirk Pepperdine <kirk at kodewerk.com> wrote:

> I have a bench that uses sleep as its unit of work, and that bench is very
> heavily affected by the number of threads running along with the OS it's
> running on. The JVM version seems to make very little difference, nor does
> the choice of hardware (until you hit extreme conditions for the core count).
> Add virtualization and the numbers get much, much worse *even for single
> threaded runs*. It's a fun bench to play with, but you need a number of
> different hardware platforms to run it on to start making sense of it. Run
> it on one piece of hardware with one OS and it's a rather boring exercise.
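A minimal sketch of a bench in this shape, varying thread count while using sleep as the unit of work (class name, thread counts, and timings are illustrative assumptions, not the actual bench from the thread):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a sleep-as-unit-of-work bench: N threads each sleep repeatedly,
// and we record the total oversleep ("drift"). The drift tends to vary with
// thread count, OS, and virtualization far more than with JVM version.
public class SleepBench {
    static long totalDriftNanos(int threads, long sleepMillis, int iterations)
            throws InterruptedException {
        AtomicLong drift = new AtomicLong();
        List<Thread> workers = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            Thread w = new Thread(() -> {
                for (int i = 0; i < iterations; i++) {
                    long start = System.nanoTime();
                    try { Thread.sleep(sleepMillis); } catch (InterruptedException e) { return; }
                    // Oversleep: how much longer than requested we actually slept.
                    drift.addAndGet(System.nanoTime() - start - sleepMillis * 1_000_000L);
                }
            });
            workers.add(w);
            w.start();
        }
        for (Thread w : workers) w.join();
        return drift.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Compare total drift as the thread count grows.
        for (int n : new int[] {1, 16, 256}) {
            System.out.printf("%d threads: total drift %d ms%n",
                n, totalDriftNanos(n, 10, 20) / 1_000_000);
        }
    }
}
```

Running the same sketch on different hardware, OSes, and virtualized hosts is where, per Kirk's point, the numbers start to diverge.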
>
> Regards,
> Kirk
>
> On 2013-08-13, at 1:20 PM, Unmesh Joshi <unmeshjoshi at gmail.com> wrote:
>
> >>Really, you just need to do your own load testing specific to your own
> application,
> Agreed. But some decisions need to be taken upfront, before you can load
> test the application.
> Based on my experience, I have not seen the number of threads causing issues
> at any point, even with thousands of threads. When load grows beyond that, it's
> better to add more servers, because memory and CPU requirements increase as
> well (if only because of CPU-bound tasks, something as simple as parsing
> response XML or preparing an HTML response).
> So I was trying to understand whether there are any well-known benchmarks that
> show how many threads a JVM can manage well for typical tasks like simple
> DB access, preparing an XML or HTML response, etc., on a typical system (say,
> a quad-core, 16GB system).
>
>
>
> On Tue, Aug 13, 2013 at 12:08 PM, James Roper <james.roper at typesafe.com> wrote:
>
>> On Tue, Aug 13, 2013 at 2:59 PM, Unmesh Joshi <unmeshjoshi at gmail.com> wrote:
>>
>>> Hi James,
>>>
>>> At what number of threads does JVM or OS performance start degrading? Or
>>> when does the number of threads start becoming the main bottleneck in the
>>> system?
>>>
>>
>> Really, you just need to do your own load testing specific to your own
>> application, hardware and OS requirements.  The current Linux scheduler
>> runs in O(log N), so technically, that means performance starts degrading
>> at 2 threads, since every thread added increases the amount of time the
>> scheduler takes.  But of course, that degradation is negligible compared to
>> the amount of time your app spends waiting for IO.  So it all depends on
>> your app, what it's doing, and what its requirements are.
>>
>> It's not just scheduling that gets impacted; another obvious factor, which I
>> already pointed out, is memory consumption. Once the thread stacks
>> have consumed all available RAM, they won't just be the bottleneck:
>> your application will slow to a crawl or even crash.
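The stack-space point can be sketched with back-of-the-envelope arithmetic (the stack sizes below are illustrative assumptions; actual defaults vary by JVM, OS, and `-Xss` setting):

```java
// Rough illustration of thread-stack memory: each Java thread reserves stack
// space up front, so with enough threads the stacks alone dominate RAM.
public class StackCost {
    // Total bytes reserved for stacks, given a per-thread stack size.
    static long stackBytes(int threads, long stackBytesPerThread) {
        return threads * stackBytesPerThread;
    }

    public static void main(String[] args) {
        // e.g. 10,000 threads at a (deliberately small) 100kB stack each:
        long total = stackBytes(10_000, 100 * 1024L);
        System.out.printf("10k threads x 100kB stacks = %d MB%n", total / (1024 * 1024));

        // Threads can be created with an explicit stack size, though the JVM
        // is free to round or ignore the hint:
        Thread t = new Thread(null, () -> {}, "small-stack", 64 * 1024);
        t.start();
    }
}
```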
>>
>>
>>> Thanks,
>>> Unmesh
>>>
>>>
>>> On Mon, Aug 12, 2013 at 7:57 PM, James Roper <james.roper at typesafe.com> wrote:
>>>
>>>> It's also worth pointing out that the thread-per-request model is
>>>> becoming less feasible even for simple web apps. Modern service-oriented
>>>> architectures often require that a single web request make many
>>>> requests to other backend services. At the extreme, we see users writing
>>>> Play apps that make hundreds of backend API calls per request. In order to
>>>> provide acceptable response times, these requests must be made in parallel.
>>>> With blocking IO, that would mean a single request might take 100 threads.
>>>> If you had just 100 concurrent requests, that's 10,000 threads, and if each
>>>> thread stack takes 100kB of real memory, that's 1GB of memory just for
>>>> thread stacks. That's not cheap.
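The parallel fan-out can be sketched without holding a blocked thread per call, for example with `CompletableFuture` from JDK 8's `java.util.concurrent` (not yet final when this thread was written; `backendCall` here is a hypothetical stand-in for a non-blocking client):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch: fan out many "backend calls" in parallel for one web request,
// composing the results instead of blocking one thread per call.
public class FanOut {
    static CompletableFuture<String> backendCall(int i) {
        // A real non-blocking HTTP client would complete this future later,
        // from an IO thread; here we complete it immediately for illustration.
        return CompletableFuture.completedFuture("result-" + i);
    }

    static CompletableFuture<List<String>> callAll(int n) {
        List<CompletableFuture<String>> calls = IntStream.range(0, n)
            .mapToObj(FanOut::backendCall)
            .collect(Collectors.toList());
        // Complete when all calls have completed, then gather the results.
        return CompletableFuture.allOf(calls.toArray(new CompletableFuture[0]))
            .thenApply(v -> calls.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        // 100 parallel backend calls for one request, no thread blocked per call.
        System.out.println(callAll(100).join().size());
    }
}
```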
>>>>
>>>> Regards,
>>>>
>>>> James
>>>> On Aug 13, 2013 12:08 AM, "Vitaly Davidovich" <vitalyd at gmail.com>
>>>> wrote:
>>>>
>>>>> I don't have any benchmarks to give, but I don't think the touted
>>>>> benefits of an evented model include CPU performance.  Rather, using an
>>>>> evented model allows you to scale.  Specific to a web server, you want to
>>>>> be able to handle lots of concurrent connections (most of them are probably
>>>>> idle at any given time) while minimizing resource usage to accomplish that.
>>>>>
>>>>> With a thread-per-request (threaded) model, you may end up using lots
>>>>> of threads but most of them are blocked on i/o at any given time.  A slow
>>>>> client/consumer can tie up a thread for a very long time.  This also makes
>>>>> the server susceptible to a DDoS attack whereby new connections are
>>>>> established, but the clients are purposely slow to tie up the server
>>>>> threads.  Resource usage is also much higher in the threaded model when you
>>>>> have tens of thousands of connections since you're going to pay for stack
>>>>> space for each thread (granted it's VM space, but still).
>>>>>
>>>>> With an evented model, you don't have the inefficiency of having
>>>>> thousands of threads alive but that are blocked/waiting on i/o.  A single
>>>>> thread dedicated to multiplexing i/o across all the connections will
>>>>> probably be sufficient.  The rest is worker threads (most likely = # of
>>>>> CPUs for a dedicated machine) that actually handle the request processing,
>>>>> but don't do any (significant) i/o.  This design also means that you can
>>>>> handle slow clients in a more robust manner.
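The shape Vitaly describes, one thread multiplexing IO over all connections plus a small worker pool for processing, can be sketched with Java NIO (port, buffer size, and the echo "processing" are illustrative assumptions):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of an evented server: a single selector thread multiplexes IO across
// all connections; a pool sized to the CPU count does the request processing.
public class EventedEcho {
    // CPU-bound "request processing" done on the worker pool; here it just
    // prepares the received bytes to be echoed back.
    static ByteBuffer process(ByteBuffer request) {
        request.flip();
        return request;
    }

    public static void main(String[] args) throws IOException {
        ExecutorService workers =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {                      // the single IO multiplexing loop
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(4096);
                    int n = client.read(buf);
                    if (n < 0) { client.close(); continue; }
                    // Hand processing to the worker pool; the selector thread
                    // goes straight back to multiplexing, so a slow client
                    // never ties up a thread.
                    workers.submit(() -> {
                        try { client.write(process(buf)); } catch (IOException ignored) {}
                    });
                }
            }
        }
    }
}
```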
>>>>>
>>>>> So, the cost of threads can be "heavy" in the case of very busy web
>>>>> servers.  The Linux kernel should handle a few thousand threads (most
>>>>> blocked on io) quite well, but I don't think that will be the case for tens
>>>>> or hundreds of thousands.  Even if there's sufficient RAM to handle that
>>>>> many, there may be performance issues coming from the kernel itself, e.g.
>>>>> scheduler.  At the very least, you'll be using resources of the machine
>>>>> inefficiently under that setup.
>>>>>
>>>>> Vitaly
>>>>>
>>>>> Sent from my phone
>>>>> On Aug 12, 2013 9:13 AM, "Unmesh Joshi" <unmeshjoshi at gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Most of the books on node.js, Akka, Play or any other evented-IO-based
>>>>>> system frequently talk about 'threads' being heavy and the cost we
>>>>>> have to pay for all the bookkeeping the OS or the JVM has to do for all
>>>>>> the threads.
>>>>>> While I agree that there must be some cost, and that for CPU-intensive
>>>>>> tasks like matrix multiplication a fork-join kind of framework will be
>>>>>> more performant, I am not sure that is the case for IO-intensive,
>>>>>> web-server-like applications.
>>>>>>
>>>>>> On the contrary, I am seeing web servers running on Tomcat with 1000+
>>>>>> threads without issues.  For web servers, I think that Linux-level thread
>>>>>> management has improved a lot in the last 10 years. The same goes for
>>>>>> the JVM.
>>>>>>
>>>>>> Do we have any benchmark which shows how much Linux thread management
>>>>>> and JVM thread management have improved over the years?
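One way to get at that question is a micro-benchmark of raw thread management cost, whose numbers are only meaningful when compared across kernels, JVMs, and machines (the class and thread counts below are illustrative, not a published benchmark):

```java
// Sketch: time to create, start, and join N trivial threads. Run the same
// code on different kernel and JVM versions to see how thread management
// overhead has changed.
public class ThreadChurn {
    static long createStartJoinMillis(int n) throws InterruptedException {
        long start = System.nanoTime();
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = new Thread(() -> { /* no-op unit of work */ });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        createStartJoinMillis(100);   // warm-up, discard the first measurement
        for (int n : new int[] {1_000, 10_000}) {
            System.out.printf("%d threads: %d ms%n", n, createStartJoinMillis(n));
        }
    }
}
```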
>>>>>>
>>>>>> Thanks,
>>>>>> Unmesh
>>>>>>
>>>>>> _______________________________________________
>>>>>> Concurrency-interest mailing list
>>>>>> Concurrency-interest at cs.oswego.edu
>>>>>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>>>>>>
>>>>>>
>>>>>
>>>
>>
>>
>> --
>> *James Roper*
>> *Software Engineer*
>> Typesafe <http://typesafe.com/> – Build reactive apps!
>> Twitter: @jroper <https://twitter.com/jroper>
>>
>
>


-- 
*Viktor Klang*
*Director of Engineering*
Typesafe <http://www.typesafe.com/>

Twitter: @viktorklang

