[concurrency-interest] ReadWriteLocks and Conditions

Joe Bowbeer joe.bowbeer at gmail.com
Wed Feb 7 14:21:03 EST 2007


Peter,

I think your implementation looks reasonable, and nothing pops out as
incorrect (though that's no seal of approval, of course).

It implements some interesting behavior (waiting for a value to
appear) that's sort of Future-like (or queue-like).  But then you
allow users to change the value...  And then you allow unlimited
checkouts...  The checkouts, once the ref exists, might use an
unbounded semaphore.  Delegating more of the implementation to objects
such as these (future/semaphore) may be an improvement -- but your
implementation is already fairly concise.
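For example (untested, and all the names below are mine, not yours):
the "wait for a value to appear" part can be delegated to a
CountDownLatch, and the checkouts to a Semaphore -- pass
Integer.MAX_VALUE permits if you really want them unbounded:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

// Sketch only: a one-shot version that ignores value replacement.
class SketchLendableReference<T> {
    private final CountDownLatch available = new CountDownLatch(1);
    private final Semaphore checkouts;   // bounds simultaneous lends
    private volatile T ref;

    SketchLendableReference(int maxCheckouts) {
        this.checkouts = new Semaphore(maxCheckouts);
    }

    // Publish the reference and release every waiting taker.
    void set(T value) {
        ref = value;
        available.countDown();
    }

    // Block until a reference exists, then check it out.
    T take() throws InterruptedException {
        available.await();
        checkouts.acquire();
        return ref;
    }

    // Return the reference, freeing a checkout permit.
    void takeBack() {
        checkouts.release();
    }
}
```

Note this only covers the initial "appear" case; letting users replace
the value afterwards is exactly where it gets harder, which is why your
lock-based version earns its keep.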


By the way, have you looked at JavaSpaces?  What you are describing
sounds like a JavaSpaces kind of thing.  Its transactions add a nice
bit of robustness to the checkouts.

http://www.dancres.org/cottage/javaspaces.html

(Though adding JavaSpaces to your project may add a whole 'nother
level of pain -- or does Blitz make it all better now??)


On 2/7/07, Peter Veentjer <alarmnummer at gmail.com> wrote:
> Hi Gregg,
>
> I see what you mean. You are using a future as a latch (a point
> threads can fall through when a condition has been met). That would
> also be a good alternative.
>
> ps:
> I'm trying to give each synchronization stone a specific feature, so
> integrating the runnable with the LendableReference would not be my
> first solution. The LendableReference is (like the name says) a
> reference that can be lent to multiple threads (if one is
> available); if no reference is available, they block until one is
> available. It could be used to pass runnable instances through, but it
> could also be used for other types of references.
>
>
>
> On 2/6/07, Gregg Wonderly <gregg at cytetech.com> wrote:
> >
> >
> > Peter Veentjer wrote:
> > > I don't see how a Future would fit in, maybe you could elaborate on this?
> >
> > >>>I'm working on a structure called the LendableReference. This
> > >>>structure makes it possible to lend a reference to multiple threads
> > >>>(values are taken from the LendableReference) and after a thread is
> > >>>done with the reference, it takes it back so that a new reference can
> > >>>be set. If no reference is available, taking threads block until a
> > >>>reference is available.
> >
> > I don't know that I understand the constraints that you want to maintain, but
> > based on your comments, it seems to me that the lent reference should access a
> > Future.  The user of that object would 'get' the value, and thus block when
> > there is no reference available (yet).  The algorithm that would apply in that
> > case is that the 'readers' would always ask a factory for the appropriate
> > Future and thus use a relevant new Future when needed.
> >
> > Here's something that you can pass around, and the users can "get" the value at
> > any time.  You can expand this to do more things about deferring object creation
> > beyond the simple setValue() implementation, but this is what I was thinking about.
> >
> > Maybe you could elaborate on the specifics of what else you need if this is not
> > appropriate.
> >
> > import java.util.concurrent.*;
> >
> > public class LendableReference<T> implements Runnable {
> >         volatile FutureTask<T> fut;
> >         volatile T val;
> >         public LendableReference( T value ) {
> >                 setValue( value );
> >         }
> >
> >         public LendableReference( Callable<T> call ) {
> >                 setValue(call);
> >         }
> >
> >         public void setValue( T value ) {
> >                 val = value;
> >                 fut = new FutureTask<T>( this, value );
> >         }
> >
> >         public void setValue( Callable<T> call ) {
> >                 fut = new FutureTask<T>( call );
> >         }
> >
> >         public T get() throws InterruptedException, ExecutionException {
> >                 // FutureTask.run() is a no-op once the task has completed,
> >                 // so the first caller executes it and later callers just
> >                 // pick up the result; without this the task never runs and
> >                 // fut.get() would block forever.
> >                 fut.run();
> >                 return val = fut.get();
> >         }
> >
> >         /**
> >          *  Do nothing to create value.  If you need to do something, override
> >          *  run to do the work.
> >          */
> >         public void run() {}
> > }
> >
> >
>
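One more thought, since the subject line mentions Conditions: Peter's
take/takeBack cycle maps pretty directly onto a single Lock plus
Condition.  An untested sketch, names all mine, and I've ignored the
rule that a new reference should only be set after every lend has been
returned:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a blocking take() with value replacement via put().
class ConditionLendableReference<T> {
    private final Lock lock = new ReentrantLock();
    private final Condition refAvailable = lock.newCondition();
    private T ref;      // guarded by lock
    private int lends;  // outstanding checkouts, guarded by lock

    // Block until a reference has been set, then lend it out.
    T take() throws InterruptedException {
        lock.lock();
        try {
            while (ref == null)
                refAvailable.await();   // loop guards against spurious wakeup
            lends++;
            return ref;
        } finally {
            lock.unlock();
        }
    }

    // Return a lent reference.
    void takeBack() {
        lock.lock();
        try {
            lends--;
        } finally {
            lock.unlock();
        }
    }

    // Set (or replace) the reference and wake all blocked takers.
    void put(T value) {
        lock.lock();
        try {
            ref = value;
            refAvailable.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
```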


More information about the Concurrency-interest mailing list