[concurrency-interest] Blocking vs. non-blocking

Zhong Yu zhong.j.yu at gmail.com
Tue Aug 5 13:59:49 EDT 2014


On Tue, Aug 5, 2014 at 9:12 AM, Oleksandr Otenko
<oleksandr.otenko at oracle.com> wrote:
> That still leaves everyone wondering why one would prefer to code in
> continuation-passing style compared to straightforward blocking IO. The two
> styles of IO being on par is not reason enough to switch.

The async-everywhere fad was brought on by Node.js, which really has
no other choice, because JavaScript is single-threaded.

On other platforms like Java, I believe that, in most use cases, the
*application code* that reads a request and generates the response
should be synchronous/blocking/threaded. That code usually involves
blocking IO with internal components like the DB server and the file
system, which are fast and predictable. Page views per second per
machine are often in the single digits, according to reports from
companies like Stack Overflow and Facebook. Even if the application
needs to handle thousands of requests/second/machine, we are only
talking about a few hundred application threads at any time.
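The "few hundred threads" estimate falls out of Little's law
(concurrency = arrival rate x average service time). A quick sketch,
with the load and latency numbers being illustrative assumptions, not
measurements:

```java
// Back-of-the-envelope thread-count estimate via Little's law.
// The request rate and service time below are assumed example values.
public class LittlesLaw {
    public static void main(String[] args) {
        double requestsPerSecond = 2000.0; // assumed: "thousands of requests/second/machine"
        double avgServiceTimeSec = 0.050;  // assumed: 50 ms of fast, predictable internal IO
        // Average number of requests in flight = threads busy at steady state.
        int concurrentThreads = (int) (requestsPerSecond * avgServiceTimeSec);
        System.out.println(concurrentThreads);
    }
}
```

So even at 2,000 req/s, 50 ms per request keeps only about 100 threads
busy at once, which a JVM handles comfortably.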

However, the HTTP stack itself is better off async, because awaiting
requests, reading requests, and writing responses can all be stalled
by slow clients. Async-ness in the HTTP stack reduces that overhead
without imposing complexity on the application code.
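The division of labor I have in mind can be sketched with a plain
ExecutorService: an async IO layer parses requests on its own threads,
then hands each complete request to a worker pool where the
application handler is free to block. The names here (onRequestParsed,
the sleep standing in for a DB call) are hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Handoff {
    // Worker pool for application code; the pool size caps blocked threads.
    static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Hypothetical callback: the async HTTP stack invokes this once a full
    // request has been read, so slow clients never occupy a worker thread.
    static Future<String> onRequestParsed(String request) {
        return workers.submit(() -> {
            Thread.sleep(10); // stands in for a blocking DB or file-system call
            return "response for " + request;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<String> f = onRequestParsed("GET /");
        System.out.println(f.get()); // blocking here is fine: it's not an IO thread
        workers.shutdown();
    }
}
```

The application handler stays straightforward blocking code; only the
boundary with the network is event-driven.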

There's a dilemma though - if the application code writes bytes to
the response body with a blocking write(), isn't it tying up a thread
whenever the client is slow? And if the write() is non-blocking,
aren't we buffering up too much data? I think this problem can often
be solved by a non-blocking write(obj) that buffers `obj` with lazy
serialization; see "optimization technique" in
http://bayou.io/draft/response.style.html#Response_body
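To make the idea concrete, here is a minimal hypothetical sketch (not
the bayou.io implementation): write(obj) only enqueues the object, and
bytes are produced one object at a time when the IO layer finds the
socket writable, so a slow client holds cheap object references rather
than fully serialized buffers:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;
import java.util.Queue;

public class LazyResponseBody {
    private final Queue<String> pending = new ArrayDeque<>();

    // Non-blocking write(obj): never serializes, never blocks the
    // application thread; it just remembers the object.
    public void write(String obj) {
        pending.add(obj);
    }

    // Called by the IO layer when the socket is writable (simulated here).
    // Serialization happens lazily, one object per writable event.
    public ByteBuffer nextChunk() {
        String obj = pending.poll();
        return obj == null ? null : ByteBuffer.wrap(obj.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        LazyResponseBody body = new LazyResponseBody();
        body.write("chunk-1");       // cheap: just an enqueue
        body.write("chunk-2");       // still no bytes produced
        ByteBuffer b = body.nextChunk(); // bytes exist only from this point
        System.out.println(StandardCharsets.UTF_8.decode(b));
    }
}
```

If the objects are small or share underlying data, the memory held per
slow client is bounded by the object graph, not by the serialized size.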

Zhong Yu
bayou.io


>
> Alex
>
>
> On 03/08/2014 20:06, Zhong Yu wrote:
>>>
>>> Also, apparently, in heavy I/O scenarios, you may have a much better
>>> system
>>> throughput waiting for things to happen in I/O (blocking I/O) vs being
>>> notified of I/O events (Selector-based I/O):
>>> http://www.mailinator.com/tymaPaulMultithreaded.pdf. Paper is 6 years old
>>> and kernel/Java realities might have changed, YMMV, but the difference
>>> is(was?) impressive. Also, Apache HTTP Client still swears by blocking
>>> I/O
>>> vs non-blocking one in terms of efficiency:
>>> http://wiki.apache.org/HttpComponents/HttpClient3vsHttpClient4vsHttpCore
>>
>> To add a small data point to this discussion, Tomcat with NIO is
>> apparently slower than Tomcat with Blocking-IO by 1,700ns for a simple
>> request-response, according to a benchmark I did recently [1]. But!
>> The difference is very small, and I would argue that it is negligible.
>>
>> Paul Tyma's claim (that the throughput of Blocking-IO is 30% more than
>> NIO) is not very meaningful for real applications. I did once
>> replicate his claim with a test that does nothing with the bytes being
>> transferred; but as soon as you at least read each byte once, the
>> throughput difference becomes very unimpressive (and frankly I suspect
>> it's largely due to Java's implementation of NIO).
>>
>> [1] http://bayou.io/draft/Comparing_Java_HTTP_Servers_Latencies.html
>>
>> Zhong Yu
>> bayou.io
>> _______________________________________________
>> Concurrency-interest mailing list
>> Concurrency-interest at cs.oswego.edu
>> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
>
>

