[concurrency-interest] Spacing out a workload burst?
gregg at cytetech.com
Wed Jan 19 22:30:11 EST 2011
On 1/16/2011 4:22 PM, Bryan Thompson wrote:
> I am using a fixed size thread pool backed by a queue. What I am looking
> to do is space out the onset of the task processing on the service in order
> to avoid having them all spike the resource demand at nearly the same moment.
> Essentially, even though the original set of requests appears in a burst
> and each set of satisfied requests brings on another burst, I would like
> the service to stagger out the requests so they begin to appear at times
> predicted by a uniform distribution rather than centered around a periodic
> task arrival spike.
The basic issue is that in any system, the slowest point through the system
will see the highest contention and accumulate the largest number of entities.
The simple math is that you can add up the times of each phase of processing,
and then use that sum as the denominator of a fraction with the time of each
phase as the numerator. That tells you the fraction of the total participants
that will be grouped at each phase. You can then use these figures as
measurement values to check that the system is maintaining your expected
latency/throughput.
You can do something with "delays" to try to space things out more, but the
faster phases will still let requests come back around to the slower phases
fast enough that you can't measurably "change" the system behavior. You will
just cause the appropriate number of tasks to group at the delay point.
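As a concrete illustration of the "delays" idea, the uniform staggering the original post asks for could be sketched as below. This is a minimal sketch, not anyone's actual code: the class name, pool sizes, and the spreading window are all assumed for the example. Each submission is scheduled at a uniformly random offset inside the window instead of immediately.

```java
import java.util.Random;
import java.util.concurrent.*;

public class JitterDemo {
    // Spread n task submissions uniformly over windowMs milliseconds,
    // then run them on a small fixed pool. Returns the number completed.
    static int runBurst(int n, int windowMs) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        ExecutorService workers = Executors.newFixedThreadPool(4);
        Random rnd = new Random();
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            long delay = rnd.nextInt(windowMs); // uniform in [0, windowMs)
            scheduler.schedule(
                () -> workers.submit(done::countDown), // task body would go here
                delay, TimeUnit.MILLISECONDS);
        }
        done.await();
        scheduler.shutdown();
        workers.shutdown();
        return n - (int) done.getCount();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("completed: " + runBurst(10, 200));
    }
}
```

Note this only spaces out the onset; as argued above, the tasks will still bunch up again at the slowest phase.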
If I have phases with execution time unit values such as the following:
2 5 2 8 20 2 5
then the total time through is 44 time units. On average, in a continuous
steady-state flow, 20/44 of the total participants will be at the phase that
takes 20 time units.
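The phase-fraction arithmetic above can be sketched as a tiny helper (class and method names are made up for the example):

```java
public class PhaseFractions {
    // For per-phase service times, return each phase's share of the total,
    // i.e. the steady-state fraction of in-flight participants at that phase.
    static double[] fractions(int[] phaseTimes) {
        int total = 0;
        for (int t : phaseTimes) total += t;
        double[] f = new double[phaseTimes.length];
        for (int i = 0; i < phaseTimes.length; i++) {
            f[i] = (double) phaseTimes[i] / total;
        }
        return f;
    }

    public static void main(String[] args) {
        // the example from the post: 2 5 2 8 20 2 5, totalling 44
        double[] f = fractions(new int[] {2, 5, 2, 8, 20, 2, 5});
        // the 20-unit phase holds 20/44, about 45% of in-flight tasks
        System.out.printf("slowest phase fraction: %.3f%n", f[4]);
    }
}
```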
If you have 20 tasks, and only 10 will fit through the system at any time, then
you can use throttling, such as the thread pool you say you are using, to
control how many are running at any time. But then you've changed the picture to
2 5 2 8 20*(n/10) 20 2 5
where n is the number of participants, because as soon as there are more than 10
participants, the extra tasks will have to wait (20 * (n/10)) time units for
earlier ones to get through the phase that takes 20 time units, and then they
will take another 20 time units to get through that point themselves.
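The extra-wait figure can be put into a rough batch model. This is a coarse sketch of the reasoning above, not a queueing-theory result; with capacity c, tasks clear the bottleneck in batches of c, so the last batch waits one full service time per batch ahead of it:

```java
public class BottleneckWait {
    // Extra wait at a capacity-limited phase: tasks pass in batches of
    // `capacity`, so the final batch of n tasks waits
    // (ceil(n/capacity) - 1) * serviceTime units before its own pass.
    static long extraWait(int n, int capacity, int serviceTime) {
        int batches = (n + capacity - 1) / capacity; // ceil(n / capacity)
        return (long) (batches - 1) * serviceTime;
    }

    public static void main(String[] args) {
        // 20 tasks, capacity 10, 20-unit phase: the second batch waits 20 units
        System.out.println(extraWait(20, 10, 20));
    }
}
```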
What becomes necessary is horizontal scaling at the slowest phase, or a
reduction of the latency through that point by algorithmic or other related changes.
Wherever you put the delay, you will paint a picture like this. Doing so can
help the overall throughput because it reduces contention on non-scalable
resources. But you just have to figure out what the right choice is before you
really go to horizontal scaling of the workload.
> From: concurrency-interest-bounces at cs.oswego.edu [concurrency-interest-bounces at cs.oswego.edu] On Behalf Of Joe Bowbeer [joe.bowbeer at gmail.com]
> Sent: Sunday, January 16, 2011 2:37 PM
> To: concurrency-interest
> Subject: Re: [concurrency-interest] Spacing out a workload burst?
> Can I assume you've read Chapter 8, Applying Thread Pools in Java Concurrency in Practice?
> Here's an earlier take on the same material:
> Off hand, I'd recommend using a thread pool backed by a queue. The queue's job is to space-out the bursts.
> If you need more throttling, you can use a bounded queue and a saturation policy such as CallerRunsPolicy.
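Joe's suggestion of a bounded queue with CallerRunsPolicy could be wired up as below. The sizes are assumptions for the sketch: when both the pool and the queue are full, the submitting thread runs the task itself, which naturally slows the producer and spaces out the burst.

```java
import java.util.concurrent.*;

public class ThrottledPool {
    public static ThreadPoolExecutor create() {
        // 4 workers, bounded queue of 16 (sizes are illustrative); when both
        // fill up, CallerRunsPolicy makes the submitter run the task itself,
        // throttling further submissions.
        return new ThreadPoolExecutor(
                4, 4,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(16),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = create();
        CountDownLatch done = new CountDownLatch(100);
        for (int i = 0; i < 100; i++) {
            pool.execute(done::countDown); // burst of 100 submissions
        }
        done.await();
        pool.shutdown();
    }
}
```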
> On Sun, Jan 16, 2011 at 11:14 AM, Bryan Thompson wrote:
> I was hoping that someone could point me to either some literature, s/w or patterns which we could use to space out a sudden workload burst. This shows up in a benchmark where a number of client threads all start pounding on the service at once. Watching the clients returning, it is pretty clear that the requests tend to take around the same amount of time and that requests complete and new requests are issued more or less in batch bursts as a result. It seems to me that we might have better overall throughput if we could space out a burst of requests so the resource utilization has an opportunity to become more uniform.
> Thanks in advance,
> Concurrency-interest mailing list
> Concurrency-interest at cs.oswego.edu