[concurrency-interest] Using concurrent to write a load balancer

David Holmes dcholmes at optusnet.com.au
Mon Sep 18 07:53:24 EDT 2006


Hello again Bob :)

Welcome to concurrency-interest.

> The only possible problem I see with CyclicBarrier is, what happens when,
> say, a request comes in, I launch my 5 (for example) threads, one responds
> but before the other 4 can respond I get another request? Then
> I'll have to wait until they all respond to re-launch. Ideally I'd like to
> re-launch immediately with the 1 thread I have available, but I don't know
> if this is possible ...

This seems to be the crux of your problem: you can't interrupt the PING, so
if requests come in faster than your ping timeout your threads are going to
be delayed. In that sense a CountDownLatch is more suitable than a barrier -
despite having to keep recreating it - because you want to be able to "open"
it immediately. But even then, having your threads tied up on a previous
ping isn't going to help you with the next one.
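To illustrate the latch idea, here is a minimal sketch (class name, host
count and simulated timings are mine, not from your code): a fresh
CountDownLatch(1) per request "opens" as soon as the first ping reply
arrives, whereas a CyclicBarrier would hold you until all parties arrive.

```java
import java.util.concurrent.CountDownLatch;

public class FirstReplyLatch {
    public static void main(String[] args) throws InterruptedException {
        int hosts = 5;
        // One latch per request; recreated for the next request.
        CountDownLatch firstReply = new CountDownLatch(1);

        for (int i = 0; i < hosts; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    // Simulate pings with different response times.
                    Thread.sleep(100L * (id + 1));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                firstReply.countDown(); // first reply "opens" the latch
            }).start();
        }

        firstReply.await(); // returns as soon as any one host replies
        System.out.println("first host replied; dispatch the request now");
    }
}
```

The cost, as noted, is recreating the latch each time - a CountDownLatch
can't be reset once it reaches zero.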

Presuming your machines reply to this "ping" based on actual load, you
might be able to use the order of responses to maintain a list of "next
host to use", and you only need to issue an actual ping when the list is
empty again. So the basic operation would be like this:
    // "ping" task
    while (!stopped) {
        wait-for-request           // blocks until a refill is requested
        do_ping();
        queue.add(host id);        // hosts queue up in order of reply
    }

    // request processing thread
    while (!stopped) {
        Request req = getNextRequest();
        if (queue.isEmpty())
            signal-request-waiting;    // ask the ping task for a refill
        HostID host = queue.take();    // blocks until a host is ready
        process(req, host);
    }
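Fleshed out in Java, that model might look something like this sketch
(class, host names and the simulated ping are all mine; in reality each
host would be queued as its ping reply arrives, not in a loop):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class LoadBalancerSketch {
    static final BlockingQueue<String> readyHosts =
        new ArrayBlockingQueue<>(16);
    static final String[] HOSTS = { "hostA", "hostB", "hostC" };

    // "ping" task: refill the queue in reply order (simulated here by
    // simply offering the hosts in a fixed order).
    static void pingAll() {
        for (String h : HOSTS) {
            readyHosts.offer(h);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int req = 0; req < 5; req++) {
            if (readyHosts.isEmpty()) {
                pingAll(); // the "signal-request-waiting" step
            }
            String host = readyHosts.take(); // blocks until a host is ready
            System.out.println("request " + req + " -> " + host);
        }
    }
}
```

The BlockingQueue does the waiting for you: take() parks the request
thread until the ping task has queued at least one responsive host.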

Hmmm, looking at it now I think perhaps CyclicBarrier will work - assuming
the model I outlined suits what you want to do.

Cheers,
David Holmes
