[concurrency-interest] Attempt at yet another mostly-concurrent result cache

Joe Bowbeer joe.bowbeer at gmail.com
Tue Jan 9 18:00:53 EST 2007


Note that it's a tiny timing window that would have to be hit in
order to mess things up.  A compareAndSet loop is at the foundation of
java.util.concurrent.locks (and hence of ConcurrentHashMap, etc.), so
tiny timing windows are not taboo.  I don't know how to estimate the
risk in this case.  Perhaps someone else on this list can provide more
insight.
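
For what it's worth, that foundation is just the familiar
compareAndSet retry loop -- a minimal sketch of the pattern (not code
from this thread):

  import java.util.concurrent.atomic.AtomicInteger;

  class CasCounter {
    private final AtomicInteger count = new AtomicInteger();

    int increment() {
      for (;;) {
        int current = count.get();
        int next = current + 1;
        // The window between get() and compareAndSet() is tiny; if
        // another thread sneaks in, the compareAndSet fails and we
        // simply retry.
        if (count.compareAndSet(current, next))
          return next;
      }
    }
  }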

One way to avoid some of the complexity (and uncertainty) would be to
stripe or segment the locking by using multiple caches.  Preload the
ConcurrentHashMap with the cache segments -- or initialize the
segments on demand using putIfAbsent.  You can use a plain HashMap for
each cache segment.  Once you've acquired the segment associated with
a particular batch, synchronize on it for the duration of the access.
This gives you n-way concurrency, whereas before you only had 1-way.
(ConcurrentHashMap uses segments internally...)
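
A rough sketch of what I mean (the class name, segment count and
hashing are made up for illustration; Batch, Message and BatchResult
are the types from your code):

  import java.util.HashMap;
  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  class SegmentedBatchCache {
    private static final int SEGMENTS = 16;

    // The outer map holds the segments; here they are created on
    // demand with putIfAbsent, but preloading them works just as well.
    private final ConcurrentHashMap<Integer, Map<String, Batch>> segments =
      new ConcurrentHashMap<Integer, Map<String, Batch>>();

    private Map<String, Batch> segmentFor(String batchKey) {
      Integer index = (batchKey.hashCode() & 0x7fffffff) % SEGMENTS;
      Map<String, Batch> seg = segments.get(index);
      if (seg == null) {
        Map<String, Batch> fresh = new HashMap<String, Batch>();
        Map<String, Batch> prev = segments.putIfAbsent(index, fresh);
        seg = (prev != null) ? prev : fresh;
      }
      return seg;
    }

    BatchResult process(Message msg) {
      String batchKey = msg.getBatchId();
      Map<String, Batch> seg = segmentFor(batchKey);
      // Only threads that map to the same segment contend; the rest
      // proceed in parallel.
      synchronized (seg) {
        Batch batch = seg.get(batchKey);
        if (batch == null) {
          batch = new Batch();
          seg.put(batchKey, batch);
        }
        batch.add(msg);
        if (!batch.isReady())
          return null;
        seg.remove(batchKey);
        return batch.toResult();
      }
    }
  }

Because the segments themselves are never removed from the outer map,
there is no equivalent of the lost-batch race at the segment level.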

--Joe

On 1/9/07, Holger Hoffstätte <holger at wizards.de> wrote:
> One last time..
>
> Joe Bowbeer wrote:
> > I think a batch can always be lost between the time it is cached and
> > the time it is synchronized:
> >
> >   Batch prev = cache.putIfAbsent(batchKey, batch);
> >   if (prev != null) batch = prev; // flip refs
> >   // LOST HERE
> >   synchronized (batch) { ... }
>
> OK, I now understand why both of my versions can be affected by this. The
> batch holder does not really solve the underlying problem; thanks. Seems
> the loop is really the best way to go.
>
> > You can plug the hole by retesting inside a loop:
> >
> > BatchResult process(Message msg) {
> >   String batchKey = msg.getBatchId();
> >   while (true) {
> >     Batch batch = cache.get(batchKey);
> >     if (batch == null) {
> >       batch = new Batch();
> >       Batch prev = cache.putIfAbsent(batchKey, batch);
> >       if (prev != null) batch = prev; // flip refs
> >     }
> >     // we have a batch that was cached at one time
> >     synchronized (batch) {
> >       if (batch != cache.get(batchKey))
> >         continue;
> >       batch.add(msg);
> >       if (!batch.isReady())
> >         return null;
> >       cache.remove(batchKey);
> >       return batch.toResult();
> >     }
> >   }
> > }
> >
> > If you added a Lock to Batch then you might be able to improve
> > efficiency by locking the newly constructed Batch before adding it to
> > the cache.  Then you'd be sure no other thread could steal your new
> > batch before you got a chance to use it.
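> >
> > Roughly, as a sketch (batch.lock() here is an assumed accessor
> > returning the Batch's ReentrantLock -- it doesn't exist in the
> > code above):
> >
> > BatchResult process(Message msg) {
> >   String batchKey = msg.getBatchId();
> >   while (true) {
> >     Batch batch = cache.get(batchKey);
> >     if (batch == null) {
> >       batch = new Batch();
> >       batch.lock().lock();        // lock before publishing
> >       Batch prev = cache.putIfAbsent(batchKey, batch);
> >       if (prev != null) {
> >         batch.lock().unlock();    // lost the race; use the winner
> >         batch = prev;
> >         batch.lock().lock();
> >       }
> >     } else {
> >       batch.lock().lock();
> >     }
> >     try {
> >       if (batch != cache.get(batchKey))
> >         continue;                 // stale batch; retry
> >       batch.add(msg);
> >       if (!batch.isReady())
> >         return null;
> >       cache.remove(batchKey);
> >       return batch.toResult();
> >     } finally {
> >       batch.lock().unlock();
> >     }
> >   }
> > }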
> >
> > Note that adding a loop raises the possibility of starvation (live-lock).
> >
> > (What's the best way to plug that hole?)
>
> Assuming you mean the case where one thread hogs the Map by continuously
> hitting the same Map segment:
>
>   if (batch != cache.get(batchKey)) {
>     Thread.sleep(1); // since yield can be a nop
>     continue;
>   }
>
> but that seems counterproductive since the monitor won't be released for
> the duration of the sleep, so more like this:
>
>   boolean cachemiss = false;
>   while (true) {
>     if (cachemiss) {
>       Thread.sleep(1);
>     }
>     ...
>     if (batch != cache.get(batchKey)) {
>       cachemiss = true;
>       continue;
>     }
>   }
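>
> Spelled out in full, this is roughly what I have in mind (only a
> sketch; Thread.sleep also means dealing with InterruptedException):
>
> BatchResult process(Message msg) throws InterruptedException {
>   String batchKey = msg.getBatchId();
>   boolean cachemiss = false;
>   while (true) {
>     if (cachemiss) {
>       Thread.sleep(1);  // back off outside the monitor
>       cachemiss = false;
>     }
>     Batch batch = cache.get(batchKey);
>     if (batch == null) {
>       batch = new Batch();
>       Batch prev = cache.putIfAbsent(batchKey, batch);
>       if (prev != null) batch = prev; // flip refs
>     }
>     synchronized (batch) {
>       if (batch != cache.get(batchKey)) {
>         cachemiss = true;  // stale; sleep before the next attempt
>         continue;
>       }
>       batch.add(msg);
>       if (!batch.isReady())
>         return null;
>       cache.remove(batchKey);
>       return batch.toResult();
>     }
>   }
> }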
>
> If that was not what you meant re: starvation, then this young Jedi
> could use a hint. :)
>
> Thanks much
> Holger
>


