[concurrency-interest] Conja-- an accidental jsr166y alternative

David Holmes davidcholmes at aapt.net.au
Thu Nov 19 01:04:10 EST 2009


Looks very F# like :)

One thing (and I was just browsing): your DepthFirstThreadPoolExecutor's
lifecycle management is not thread-safe. You can leak instances: there is
no synchronization or volatile to ensure visibility of a newly created
instance, or of updated CPU counts, and there is no synchronization
between shutting down and recreating. Is the expectation that this will
not be used directly, but only from other library internals that ensure a
single thread manages the executor? If not, you need synchronization in
various places.
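One common way to close races like these is double-checked locking on a
volatile field. A minimal sketch under that approach follows; the class and
method names here are illustrative, not Conja's actual API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative holder for a lazily created executor. The volatile field
// guarantees visibility of the newly constructed instance; the class lock
// serializes creation against shutdown/recreation.
final class ExecutorHolder {
    private static volatile ExecutorService instance;

    static ExecutorService get(int threads) {
        ExecutorService local = instance;          // one volatile read
        if (local == null) {
            synchronized (ExecutorHolder.class) {
                local = instance;                  // re-check under the lock
                if (local == null) {
                    local = Executors.newFixedThreadPool(threads);
                    instance = local;              // safe publication
                }
            }
        }
        return local;
    }

    // Shutdown and reset under the same lock, so a concurrent get()
    // cannot observe a half-shut-down executor.
    static synchronized void shutdownAndReset() {
        ExecutorService local = instance;
        if (local != null) {
            local.shutdown();
            instance = null;
        }
    }
}
```

With this shape, callers always see either the fully constructed executor or
none at all, and shutdown/recreate cycles cannot leak instances.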

David Holmes

> -----Original Message-----
> From: concurrency-interest-bounces at cs.oswego.edu
> [mailto:concurrency-interest-bounces at cs.oswego.edu]On Behalf Of David
> Soergel
> Sent: Thursday, 19 November 2009 3:40 PM
> To: concurrency-interest at cs.oswego.edu
> Subject: [concurrency-interest] Conja-- an accidental jsr166y
> alternative
>
> Hi all,
>
> I wrote a Java concurrency library a while back, before I knew about
> jsr166y, and finally got around to writing up how it works.  I realize
> I'm very late to the party here, but I still hope some of the ideas I
> implemented may interest you.
>
> The main advantage of my library, as I see it at the moment, is that
> it hides all the hard stuff behind syntactic sugar, and so should be
> very easy for a novice to adopt.  I suspect it would be
> straightforward to provide a similarly easy-to-use wrapper around the
> jsr166y internals.  I haven't yet done a detailed comparison, though,
> so it may well be that jsr166y provides functionality that Conja
> lacks.
>
> The project home page is at
> http://dev.davidsoergel.com/trac/conja/, and the most important
> design issues are described briefly at
> http://dev.davidsoergel.com/trac/conja/wiki/PrinciplesOfOperation.
>
> A few of those issues are:
>
> 1) I schedule nested tasks in depth-first order, with advantages much
> like work stealing;
>
> 2) I employ various strategies to conserve memory (primarily by not
> leaving tasks waiting around in queues); and
>
> 3) I construct Runnables lazily and concurrently from an iterator of
> task inputs.
>
> One consequence of (3) is that "pipelines" consisting of nested
> mapping iterators (i.e., iterators that apply a function to elements
> from an underlying iterator) can be used to provide the inputs, in
> which case the mappings are computed lazily and concurrently.
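Depth-first scheduling of nested tasks, as described in point (1), can be
approximated in plain java.util.concurrent by giving a ThreadPoolExecutor a
LIFO work queue, so the most recently submitted (deepest-nested) task runs
first. This is a rough sketch under that assumption, not Conja's actual
implementation:

```java
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A BlockingQueue that hands out the most recently added element first.
// Overriding offer() to push to the head turns the deque into a LIFO
// stack from the executor's point of view.
class LifoQueue<E> extends LinkedBlockingDeque<E> {
    @Override
    public boolean offer(E e) {
        return offerFirst(e); // newest task goes to the head of the queue
    }
}

final class DepthFirstPools {
    // Worker threads take() from the head, so they always pick up the
    // newest (deepest-nested) pending task first.
    static ThreadPoolExecutor newPool(int threads) {
        return new ThreadPoolExecutor(threads, threads, 0L,
                TimeUnit.MILLISECONDS, new LifoQueue<Runnable>());
    }
}
```

Like work stealing, this keeps a worker focused on the subtree it just
spawned, which bounds the number of in-flight parent tasks.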
>
> I've been using this for some time with excellent performance, so I
> think it works, at least :)
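The mapping-iterator pipelines described above can be sketched as follows
(illustrative classes, not Conja's; Java 8's java.util.function.Function is
used here for brevity):

```java
import java.util.Iterator;
import java.util.function.Function;

// An iterator that applies a function to each element of an underlying
// iterator on demand. Nesting these builds a lazy pipeline: no mapping
// is computed until next() is called for that element.
final class MappingIterator<T, R> implements Iterator<R> {
    private final Iterator<T> source;
    private final Function<T, R> fn;

    MappingIterator(Iterator<T> source, Function<T, R> fn) {
        this.source = source;
        this.fn = fn;
    }

    @Override
    public boolean hasNext() {
        return source.hasNext();
    }

    @Override
    public R next() {
        return fn.apply(source.next()); // computed lazily, per element
    }
}
```

Feeding such a pipeline to a pool of workers that each pull from the
iterator (with the pulls synchronized) yields lazy, concurrent mapping.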
> Looking forward to any feedback you may have,
> -ds
> _______________________________________________________
> David Soergel                            (650) 303-5324
> dev at davidsoergel.com        http://www.davidsoergel.com
> _______________________________________________________
> _______________________________________________
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu
> http://cs.oswego.edu/mailman/listinfo/concurrency-interest
