[concurrency-interest] j.u.c/backport performance on Windows

Szabolcs Ferenczi szabolcs.ferenczi at gmail.com
Mon Apr 2 16:49:09 EDT 2007


On 02/04/07, Peter Kovacs <peter.kovacs.1.0rc at gmail.com> wrote:

> getNextInputImmediate uses LinkedBlockingQueue.put while

We are talking about this fragment:

        synchronized (inputProducerLock) {
            if (inputProducer.hasNext()) {
                input = inputProducer.getNext();
            }
            scheduledWorkUnitData = new ScheduledWorkUnitData(input);
            outputQueue.put(scheduledWorkUnitData);
        }

Basically it is fine to use the put method on the LBQ, which gives you
the long-term scheduling of the threads. However, you wrap it in a
higher-level critical section guarded by the extra lock
inputProducerLock. Because of the inner put, a worker thread may stay
inside that critical section for an indefinitely long time whenever the
queue is full. Consequently, the other threads hang on
inputProducerLock, waiting to enter the outer critical section.
Critical sections are meant for short-term scheduling, yet here threads
can wait a long time for entry. (You mention this situation in your
19 March message.)
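
One way around it, just a sketch on my part (I am assuming the producer
only needs protection for hasNext/getNext, and that the order in which
items reach the output queue need not match the order they were taken
from the producer; if it must, a different restructuring is needed), is
to keep only the producer access in the critical section and do the
blocking put afterwards:

    Object input = null;
    synchronized (inputProducerLock) {
        // only the short, non-blocking producer access is protected here
        if (inputProducer.hasNext()) {
            input = inputProducer.getNext();
        }
    }
    // the queue's own lock provides the long-term scheduling; a thread
    // blocked here no longer holds inputProducerLock
    outputQueue.put(new ScheduledWorkUnitData(input));

That way a full queue blocks only the thread doing the put, not
everyone queuing up behind inputProducerLock.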

Threads waiting a long time to enter the critical section may also
consume processing power unnecessarily, depending on how the waiting is
implemented. Often it involves some form of spin lock, i.e. the threads
are scheduled on the assumption that they will gain access to the
resource shortly. Long waits at the entry of a critical section could
therefore be the cause of the performance loss, and there can be
significant differences between platforms in this respect.
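
Whether that is actually what happens is something you can check on the
running system. A plain thread dump (Ctrl-Break on Windows) already
shows threads BLOCKED on inputProducerLock; if you can reproduce the
problem on a 5.0 or later VM you can also look at it programmatically.
The helper below is only an illustration of mine, not part of your
code:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Illustrative diagnostic: list the threads that are currently
    // BLOCKED, the monitor they are blocked on, and how many times
    // they have blocked so far.
    public class ContentionDump {
        public static void dump() {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            ThreadInfo[] infos = mx.getThreadInfo(mx.getAllThreadIds());
            for (int i = 0; i < infos.length; i++) {
                ThreadInfo ti = infos[i];
                if (ti != null && ti.getThreadState() == Thread.State.BLOCKED) {
                    System.out.println(ti.getThreadName()
                            + " blocked on " + ti.getLockName()
                            + " (blocked " + ti.getBlockedCount() + " times so far)");
                }
            }
        }
    }

If most worker threads show up as BLOCKED on the same monitor with
large blocked counts, contention on inputProducerLock is a good
candidate for the slowdown.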

On top of all that, you mention (in your 19 March message) that the
consumer cannot keep up with the producers. That means that as soon as
the buffer fills up, the work is effectively serialized and the
consumer dictates the speed of the processing. The producer-consumer
pattern is a solution for the case where the rate of producing the
pieces of data and the rate of processing them vary over time.
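
Just to illustrate the point with a toy example of mine (the numbers
are made up, and it uses the 5.0 java.util.concurrent classes rather
than the backport): with a small bound and a consumer slower than the
producers, the queue fills up, every producer ends up waiting in put(),
and the total throughput is simply the consumer's throughput, no matter
how many producers there are.

    import java.util.concurrent.LinkedBlockingQueue;

    public class SlowConsumerDemo {
        public static void main(String[] args) throws InterruptedException {
            final LinkedBlockingQueue queue = new LinkedBlockingQueue(10);
            for (int p = 0; p < 4; p++) {            // several fast producers
                Thread producer = new Thread(new Runnable() {
                    public void run() {
                        try {
                            for (int i = 0; ; i++) {
                                queue.put(new Integer(i)); // blocks once the queue is full
                            }
                        } catch (InterruptedException e) {
                            // demo only: stop producing
                        }
                    }
                });
                producer.setDaemon(true);             // let the VM exit after the demo
                producer.start();
            }
            long start = System.currentTimeMillis();
            for (int consumed = 0; consumed < 100; consumed++) {
                queue.take();
                Thread.sleep(10);                     // the consumer is the slow party
            }
            System.out.println("100 items in "
                    + (System.currentTimeMillis() - start)
                    + " ms -- roughly 10 ms each, i.e. the consumer's pace");
        }
    }

Adding more producer threads does not change the measured time; only a
faster consumer (or several consumers) would.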

Best Regards,
Szabolcs

