[concurrency-interest] Benchmark to demonstrate improvement in thread management over the years.

Vitaly Davidovich vitalyd at gmail.com
Mon Aug 12 09:50:18 EDT 2013


I don't have any benchmarks to give, but I don't think the touted benefits
of an evented model include CPU performance.  Rather, an evented model
allows you to scale.  Specific to a web server, you want to be able to
handle lots of concurrent connections (most of them probably idle at any
given time) while minimizing the resources used to accomplish that.

With a thread-per-request (threaded) model, you may end up using lots of
threads, but most of them are blocked on I/O at any given time.  A slow
client/consumer can tie up a thread for a very long time.  This also makes
the server susceptible to a DDoS attack whereby new connections are
established, but the clients are deliberately slow, in order to tie up the
server threads.  Resource usage is also much higher in the threaded model
when you have tens of thousands of connections, since you pay for stack
space for each thread (granted, it's virtual memory, but still).
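
In code, the threaded shape is roughly the following (a minimal sketch with
hypothetical names, using Java 8 lambdas for brevity; not how Tomcat or any
real server is actually implemented):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Thread-per-request sketch: one thread per accepted connection.
    public class ThreadPerRequestServer {
        public static void main(String[] args) throws IOException {
            ServerSocket server = new ServerSocket(8080);
            while (true) {
                Socket client = server.accept();  // blocks until a connection arrives
                // each thread reserves its own stack (commonly ~512 KB-1 MB of virtual memory)
                new Thread(() -> handle(client)).start();
            }
        }

        private static void handle(Socket client) {
            try (Socket c = client) {
                // read the request, write a response; a slow client keeps this
                // thread (and its stack) tied up for the whole exchange
                c.getOutputStream().write(
                        "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n".getBytes());
            } catch (IOException ignored) {
                // connection dropped
            }
        }
    }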

With an evented model, you don't have the inefficiency of keeping thousands
of threads alive that are blocked/waiting on I/O.  A single thread dedicated
to multiplexing I/O across all the connections will probably be sufficient.
The rest are worker threads (most likely = # of CPUs on a dedicated machine)
that actually handle the request processing but don't do any (significant)
I/O.  This design also means that you can handle slow clients in a more
robust manner.
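
Roughly, with java.nio that looks like the sketch below (illustrative only,
with Java 8 lambdas for brevity; a real server layers much more on top, and
writes would also go back through the selector via OP_WRITE interest):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Evented sketch: one thread multiplexes all connections via a Selector;
    // a small pool (~number of CPUs) does the actual request processing.
    public class EventedServer {
        public static void main(String[] args) throws IOException {
            ExecutorService workers =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();  // block until some channel is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(4096);
                        if (client.read(buf) == -1) {  // non-blocking read
                            client.close();
                            continue;
                        }
                        // hand processing to the pool; the selector thread never blocks on it
                        workers.submit(() -> process(client, buf));
                    }
                }
            }
        }

        private static void process(SocketChannel client, ByteBuffer request) {
            // parse the request and compute the response here (CPU-bound work only)
        }
    }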

So, the cost of threads can be "heavy" in the case of very busy web
servers.  The Linux kernel should handle a few thousand threads (most
blocked on I/O) quite well, but I don't think that will be the case for tens
or hundreds of thousands.  Even if there's sufficient RAM to handle that
many, there may be performance issues coming from the kernel itself, e.g.
the scheduler.  At the very least, you'll be using the machine's resources
inefficiently under that setup.

Vitaly

Sent from my phone
On Aug 12, 2013 9:13 AM, "Unmesh Joshi" <unmeshjoshi at gmail.com> wrote:

> Hi,
>
> Most of the books on node.js, Akka, Play, or any other evented-I/O-based
> system frequently talk about threads being heavy and about the cost we
> have to pay for all the bookkeeping the OS or the JVM has to do for all
> the threads.
> While I agree that there must be some cost, and that for CPU-intensive
> tasks like matrix multiplication a fork-join kind of framework will be
> more performant, I am not sure that's the case for a web-server kind of
> I/O-intensive application.
>
> On the contrary, I am seeing web servers running on Tomcat with 1000+
> threads without issues.  For web servers, I think that Linux-level thread
> management has improved a lot in the last 10 years.  The same is true of
> the JVM.
>
> Do we have any benchmark which shows how much Linux thread management and
> JVM thread management have improved over the years?
>
> Thanks,
> Unmesh
>