[concurrency-interest] CompletableFuture improvements

Chris Purcell chris.purcell.39 at gmail.com
Mon Aug 24 07:36:23 EDT 2015


Thanks for the context, Doug. It seems like cross-thread helping *is* the
only thing my design loses (it's still non-blocking and atomic, and it
avoids stack overflow), which is nice to confirm in case I do decide my
library is worth releasing properly.

I'm pretty confident the thread-local back-channel approach can be adapted
to "pull up" work to the thread's current CompletableFuture handler without
throwing away the trampoline/assistance feature. I look forward to hearing
if it works out for you!
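For anyone curious what a thread-local back-channel looks like in general, here is a minimal sketch of the idea (not the actual library code; the class and method names are made up for illustration). A thread-local queue turns nested completion calls into an iterative drain loop, so stack depth stays bounded no matter how long the chain is:

```java
import java.util.ArrayDeque;

// Hypothetical sketch of a thread-local trampoline (illustrative names,
// not the real library): nested calls to run() enqueue work instead of
// recursing, and the outermost call drains the queue iteratively.
final class Trampoline {
    private static final ThreadLocal<ArrayDeque<Runnable>> QUEUE =
            ThreadLocal.withInitial(ArrayDeque::new);
    private static final ThreadLocal<Boolean> DRAINING =
            ThreadLocal.withInitial(() -> false);

    static void run(Runnable task) {
        ArrayDeque<Runnable> queue = QUEUE.get();
        queue.add(task);
        if (DRAINING.get()) {
            return; // a drain loop higher up the stack will run this task
        }
        DRAINING.set(true);
        try {
            Runnable next;
            while ((next = queue.poll()) != null) {
                next.run(); // tasks enqueued here are drained here, not nested
            }
        } finally {
            DRAINING.set(false);
        }
    }
}
```

With this in place, a completion that triggers another completion goes through `Trampoline.run` instead of calling it directly, so a chain of 100,000 dependents runs at constant stack depth.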

Cheers,
Chris


P.S. java.util.concurrent has been a great example of cutting-edge
concurrent algorithms in the real world since my PhD, a horrifyingly long
time ago. Thanks for all your inspiring work!

On Tue, 18 Aug 2015 at 14:14 Doug Lea <dl at cs.oswego.edu> wrote:

> On 08/17/2015 02:27 PM, Chris Purcell wrote:
>
> > I may be way off-base on /why/ CompletableFuture has a work queue of
> > callbacks. I assumed it was because, if you have a long chain (a couple
> > of thousand) of dependent futures, and you call dependent callbacks
> > recursively within the upstream future's completion handler, then the
> > call stack will overflow. You can easily demonstrate this with
> > CompletableFutures:
>
> CompletableFuture uses a Treiber-stack-like completion list for
> several related reasons: it is non-blocking, atomically resolves
> races (like when adding to a CF that is in the midst of completing),
> allows helping across threads, and helps avoid stack overflow
> (especially with the 8u update to do so even across threads).
>
> As discussed last year, we'd like to catch as many of these
> cases as possible, because StackOverflowError is otherwise
> so problematic on JVMs -- some are never caught or handled.
> This is not specific to completion designs, but is more common with them.
> (Aside: it is an odd state of affairs that you need to emulate
> stacks on heaps, with greater peak total memory overhead,
> to avoid stack overflow.) Without some JVM-level tail-recursion
> support (and maybe even with it), the options are limited in cases
> where the recursion occurs in the bodies of lambdas where we
> cannot see it. That's the main difference in your two examples:
>
>                CompletableFuture<Integer> newTail = new CompletableFuture<>();
>                tail.thenAccept(v -> newTail.complete(v + 1));
>                tail = newTail;
> vs
>
>                tail = tail.thenApply(v -> v + 1);
>
> We can "trampoline" thenAccept and thenApply, but we don't
> even see the newTail.complete(v + 1) because it is inside a
> lambda. On the other hand, if the first usage were async, the
> cross-thread mechanisms kick in to prevent SOE:
>
>      public void async_recursive_callbacks() {
>          CompletableFuture<Integer> head = new CompletableFuture<>();
>          CompletableFuture<Integer> tail = head;
>          for (int i = 0; i < 2_000; i++) {
>              CompletableFuture<Integer> newTail = new CompletableFuture<>();
>              tail.thenAcceptAsync(v -> newTail.complete(v + 1));
>              tail = newTail;
>          }
>          head.complete(7);
>          assertEquals(2007, (int) tail.join());
>      }
>
> It is possible that we could do better here. I'll investigate.
> Your thread-local back-channel approach handles some cases,
> but I don't think it can coexist with other support.
>
> -Doug
>
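For readers following along, the contrast Doug draws above can be run as a self-contained program. This is a sketch assembled from the snippets in the thread (the class name `ChainDemo` and the depth constant are illustrative choices, not from the original mails):

```java
import java.util.concurrent.CompletableFuture;

public class ChainDemo {
    // Style 1: each completion is triggered inside the previous stage's
    // lambda, so CompletableFuture cannot see the recursion and the
    // completions nest on the call stack. With a deep enough chain,
    // head.complete(...) throws StackOverflowError.
    static int chainViaComplete(int depth) {
        CompletableFuture<Integer> head = new CompletableFuture<>();
        CompletableFuture<Integer> tail = head;
        for (int i = 0; i < depth; i++) {
            CompletableFuture<Integer> newTail = new CompletableFuture<>();
            tail.thenAccept(v -> newTail.complete(v + 1));
            tail = newTail;
        }
        head.complete(0);
        return tail.join();
    }

    // Style 2: the dependency is expressed with thenApply, so the whole
    // chain is visible to CompletableFuture and can be trampolined
    // iteratively instead of recursing.
    static int chainViaThenApply(int depth) {
        CompletableFuture<Integer> head = new CompletableFuture<>();
        CompletableFuture<Integer> tail = head;
        for (int i = 0; i < depth; i++) {
            tail = tail.thenApply(v -> v + 1);
        }
        head.complete(0);
        return tail.join();
    }

    public static void main(String[] args) {
        System.out.println(chainViaThenApply(100_000)); // prints 100000
        try {
            chainViaComplete(100_000);
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError in nested completions");
        }
    }
}
```

The third style in the thread, `thenAcceptAsync`, avoids the overflow a different way: each completion hops to a pool thread, so the nesting never builds up on one stack.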