[concurrency-interest] Blocking vs. non-blocking

Vitaly Davidovich vitalyd at gmail.com
Wed Aug 6 13:14:35 EDT 2014


I don't think the point of NIO (or event-based I/O in general) is to have
better absolute latency/throughput than blocking I/O in all cases.
Instead, it's really intended to let a single server scale to tens (and
hundreds) of thousands of concurrent (and, at any point in time, mostly
idle) connections. This makes intuitive sense: roughly one core dedicated
to I/O can saturate a NIC (given sufficient read/write workload), leaving
the rest of the compute resources for the CPU-bound workload. Creating a
thread per connection in those circumstances either doesn't make sense or
simply won't work at all.
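To make that concrete, here is a minimal sketch of the selector model
(illustrative only - the port number and the echo-back behavior are
placeholders, not code from any server discussed in this thread). A single
thread owns a Selector and multiplexes every connection, so idle sockets
cost no threads:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread multiplexes all connections; mostly-idle sockets cost no threads.
public class SelectorLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));   // placeholder port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(4096);
        while (true) {
            selector.select();                      // block until some channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);
                    if (n == -1) {                  // peer closed the connection
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf);          // echo back; a real server would register
                                                    // OP_WRITE if the write doesn't complete
                    }
                }
            }
        }
    }
}

The actual CPU-bound work would then be handed off to a separate pool sized
roughly to the number of cores, which is exactly the split described above.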


On Wed, Aug 6, 2014 at 12:16 PM, DT <dt at flyingtroika.com> wrote:

> We have done multiple experiments with the Java NIO and IO APIs, and we
> have not seen much improvement in throughput or latency with NIO. We got
> almost the same stats for UDP-, TCP-, and HTTP-based packets (running on
> Windows and Linux platforms), though we noticed that the more traffic we
> handled, the better the results we got with the NIO implementation in
> terms of latency and overall application throughput (there seems to be
> some threshold past which the system starts reacting better). The idea
> was to move to the Java NIO APIs, but given these results we decided to
> do some more research. It's difficult to make a benchmark, because even a
> small change in the Linux kernel/NIC can lead to different results. When
> we converted the Java logic into C/C++ code and used the Linux
> non-blocking/event-based calls, we got much better performance. A good
> example is to compare the nginx socket event module and the Java NIO
> APIs. Probably we should not compare Java non-blocking calls to C/C++
> calls and implementations, though I thought it was a good idea to get a
> benchmark this way.
>
> Thanks,
> DT
>
> On 8/5/2014 4:51 PM, Zhong Yu wrote:
>
>> On Tue, Aug 5, 2014 at 6:41 PM, Stanimir Simeonoff <stanimir at riflexo.com>
>> wrote:
>>
>>>
>>>> There's a dilemma though - if the application code is writing bytes to
>>>> the response body with blocking write(), isn't it tying up a thread if
>>>> the client is slow? And if the write() is non-blocking, aren't we
>>>> buffering up too much data? I think this problem can often be solved
>>>> by a non-blocking write(obj) that buffers `obj` with lazy
>>>> serialization, see "optimization technique" in
>>>> http://bayou.io/draft/response.style.html#Response_body
>>>>
>>>> Zhong Yu
>>>>
>>>
>>> Lazy serialization unfortunately requires the object to be fully
>>> fetched (not depending on any context or an active database
>>> connection), which is not that different from "buffering too much" -
>>> it's just not a plain ByteBuffer
>>>
>> There's a difference if the objects are shared among responses, which
>> is a reasonable assumption for a lot of web applications.
>>
>>  (or byte[]).
>>> Personally I don't like lazy serialization as that leaves objects in the
>>> queues and the latter may have implications of module (classes) redeploys
>>> with slow clients. Also it makes a lot hard quantifying the expected
>>> queue
>>> length per connection and shutting down slow connection.
>>>
>>> Stanimir
>>>
>>>
>>>
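
As a rough sketch of the non-blocking write(obj) + lazy serialization idea
discussed just above - the class name, serialize(), and the toString()-based
encoding are invented for illustration here, not bayou.io's actual API - the
per-connection queue can hold objects and only produce bytes once the socket
is actually writable:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a non-blocking write(obj) with lazy serialization: the queue holds
// application objects; bytes are produced only when the socket can take them.
class LazyResponseWriter {
    private final SocketChannel channel;   // assumed already in non-blocking mode
    private final Queue<Object> pending = new ArrayDeque<>();
    private ByteBuffer current;            // bytes of the object currently being drained

    LazyResponseWriter(SocketChannel channel) {
        this.channel = channel;
    }

    // Non-blocking: just enqueue the object; nothing is serialized yet.
    void write(Object obj) {
        pending.add(obj);
    }

    // Called when the selector reports OP_WRITE for this channel.
    void onWritable() throws IOException {
        while (true) {
            if (current == null || !current.hasRemaining()) {
                Object next = pending.poll();
                if (next == null) return;       // queue drained
                current = serialize(next);      // lazy: serialize only now
            }
            channel.write(current);
            if (current.hasRemaining()) return; // socket buffer full; wait for next OP_WRITE
        }
    }

    // The per-connection queue length is easy to inspect when deciding to
    // shut down a slow connection.
    int queuedObjects() {
        return pending.size();
    }

    // Placeholder serialization; a real server would render JSON/HTML/etc.
    private ByteBuffer serialize(Object obj) {
        return ByteBuffer.wrap(obj.toString().getBytes(StandardCharsets.UTF_8));
    }
}

A slow client then ties up queued objects (which may be shared across
responses) rather than serialized buffers, though, as noted above, those
objects must not depend on transient context such as an open database
connection.
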
>>>>> Alex
>>>>>
>>>>> On 03/08/2014 20:06, Zhong Yu wrote:
>>>>>
>>>>>>> Also, apparently, in heavy I/O scenarios you may get much better
>>>>>>> system throughput waiting for things to happen in I/O (blocking
>>>>>>> I/O) vs being notified of I/O events (Selector-based I/O):
>>>>>>> http://www.mailinator.com/tymaPaulMultithreaded.pdf. The paper is
>>>>>>> six years old and kernel/Java realities might have changed, YMMV,
>>>>>>> but the difference is (was?) impressive. Also, Apache HttpClient
>>>>>>> still swears by blocking I/O over non-blocking I/O in terms of
>>>>>>> efficiency:
>>>>>>> http://wiki.apache.org/HttpComponents/HttpClient3vsHttpClient4vsHttpCore
>>>>>>>
>>>>>> To add a small data point to this discussion, Tomcat with NIO is
>>>>>> apparently slower than Tomcat with Blocking-IO by 1,700ns for a simple
>>>>>> request-response, according to a benchmark I did recently [1]. But!
>>>>>> The difference is very small, and I would argue that it is negligible.
>>>>>>
>>>>>> Paul Tyma's claim (that the throughput of Blocking-IO is 30% more than
>>>>>> NIO) is not very meaningful for real applications. I did once
>>>>>> replicate his claim with a test that does nothing with the bytes being
>>>>>> transferred; but as soon as you at least read each byte once, the
>>>>>> throughput difference becomes very unimpressive (and frankly I suspect
>>>>>> it's largely due to Java's implementation of NIO).
>>>>>>
>>>>>> [1] http://bayou.io/draft/Comparing_Java_HTTP_Servers_Latencies.html
>>>>>>
>>>>>> Zhong Yu
>>>>>> bayou.io