A counterexample

Consider the queueing system in Fig. 9.2. Customers arrive according to a random process. If there is a free server, service starts immediately; otherwise, the customer joins the queue and waits. Service time is random as well, and whenever a server completes a service, the first customer in the queue (if any) starts her service. To keep it simple, let us assume that the system is open 24/7, so that there is no issue with closing times. Now, what is the first and foremost measure of the quality of service experienced by customers? If you have some experience with long queues, I guess that waiting time is what comes to your mind. In fact, a common management problem is to determine the number of servers so as to strike a compromise between cost and service quality. Hence, we would like to assess the average waiting time, given the number of servers. Of course, an average does not fully capture our problem, as we would also like to make sure that a prescribed maximum waiting time is rarely, if ever, exceeded, but let us stick with the simplest performance measure we may think of.


Fig. 9.2 A queueing system with multiple servers.

A first step in the analysis is modeling uncertainty. A common assumption is that customers arrive according to a Poisson process, i.e., that the time between two consecutive arrivals is exponentially distributed with rate λ. If service times were exponentially distributed as well, it would be good news, as there are easy formulas giving the average waiting time for queueing systems involving exponential distributions only. Unfortunately, we know that the exponential distribution is memoryless,8 and this makes it a poor candidate for modeling service times. Furthermore, customer arrivals may have a time-varying rate; indeed, at retail stores we observe hours at which there is a burst of arrivals, whereas other hours are pretty quiet. Luckily, we may rely on computer simulation programs, which can provide a faithful model of reality, to analyze complex queueing systems. We will get a clue about how such models work in Section 9.7. For now, let us just say that we have a way to collect a sequence of observed waiting times. Let Wk be the waiting time of the kth customer, and say that we simulate the system long enough, collecting waiting times for customers k = 1,…, n. To make “long enough” more precise, we could apply formula (9.10) to calculate a confidence interval. If the interval is too wide, we can simulate more customers to get a satisfactory estimate of the average waiting time.
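To make this concrete, here is a minimal simulation sketch, not the simulation program discussed in Section 9.7: to keep the code short it deals with a single-server FIFO queue rather than the multiserver system of Fig. 9.2, it relies on Lindley's recursion Wk+1 = max(0, Wk + Sk − Ak+1), where Sk is the service time of customer k and Ak+1 the interarrival time between customers k and k + 1, and it assumes Poisson arrivals together with purely illustrative lognormal service times; the function and parameter names are made up for the example.

```python
import numpy as np

def simulate_waiting_times(n, lam=1.0, seed=42):
    """Collect the waiting times W_1, ..., W_n of the first n customers in a
    single-server FIFO queue that starts empty (a simplification of Fig. 9.2).

    Arrivals form a Poisson process with rate lam, so interarrival times are
    exponential; service times are illustrative lognormal draws (hence NOT
    memoryless), with mean roughly 0.9 so that the queue is stable.
    """
    rng = np.random.default_rng(seed)
    interarrival = rng.exponential(1.0 / lam, size=n)   # time between customers k and k+1
    service = rng.lognormal(mean=np.log(0.8), sigma=0.5, size=n)
    W = np.empty(n)
    W[0] = 0.0                                           # customer 1 finds an empty system
    for k in range(1, n):
        # Lindley's recursion: the wait of customer k+1 builds on that of customer k
        W[k] = max(0.0, W[k - 1] + service[k - 1] - interarrival[k])
    return W

waits = simulate_waiting_times(50_000)
print("estimated average waiting time:", waits.mean())
```

The array waits plays the role of the sample W1,…, Wn to which we would be tempted to apply formula (9.10).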

Now I have two questions for you. The first one is pretty easy; the second one is a bit more challenging. Question 1 is: Can we apply the procedure above to come up with the confidence interval we need?

Please! Sit down, think a while, and give your answer before reading further. This is the easy question, after all…

When I ask this question in class, almost all students answer yes. Then comes the tougher question 2: How many mistakes have you made, if your answer was yes?

  1. To begin with, if we start our simulation with an empty system, what is the waiting time of the first customer? Of course, it is zero; the first few customers will find an empty system, and their waiting times do not tell us much. In fact, this initial transient phase may bias the statistics we collect. We should let the system warm up, discard the observations collected during the transient phase, and start collecting waiting times only when the system is (approximately) in steady state. From a more general perspective, confidence intervals assume a sample of identically distributed random variables, but the initial waiting times have a different probability distribution.
  2. A more general issue is that the waiting times are unlikely to be normally distributed, and the resulting confidence interval will only be an approximation; as we have said, however, this is a fairly good one for a large sample.
  3. Actually, the really serious mistake is that the waiting times of successive customers are not independent. If customer k undergoes a long waiting time, this means that this unlucky customer arrived when the system was congested and there was a long queue. Hence, we should expect the waiting time of customer k + 1 to be large as well. Formally, the waiting times of successive customers are positively correlated, as the sketch after this list illustrates.
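As a rough check of point 3, we can estimate the sample autocorrelation of the simulated waiting times at a few lags, reusing the waits array from the sketch above; we would expect the lag-1 value to be clearly positive, fading out as the lag grows.

```python
import numpy as np

def sample_autocorrelation(x, lag):
    """Crude estimate of the autocorrelation of the series x at a given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# waits is the array of simulated waiting times from the sketch above
for lag in (1, 10, 100, 1000):
    print(f"lag {lag:>4}: autocorrelation ~ {sample_autocorrelation(waits, lag):.3f}")
```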

The important message is that you should never take independence for granted. In this specific case, a deeper analysis shows that the blind application of standard statistical procedures underestimates the variability of the estimate of the average waiting time. Hence, the width of the confidence interval is underestimated as well, and the net result is that we are overconfident in our conclusions.
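To see the effect numerically, we can compute the naive confidence interval, in the spirit of formula (9.10), directly on the correlated waiting times; its half-width is misleadingly small, as a comparison with the batch-means interval below will suggest.

```python
import numpy as np
from scipy import stats

# Naive 95% confidence interval, treating the correlated waiting times
# as if they were an i.i.d. sample (this is exactly the mistake discussed above).
n = len(waits)
half_width = stats.t.ppf(0.975, n - 1) * waits.std(ddof=1) / np.sqrt(n)
print(f"naive CI: {waits.mean():.3f} +/- {half_width:.3f}")
```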

In practice, the way out requires batching observations. If the system is stable and the queue does not explode to infinity, we should expect that if there is a moment of congestion, it will be resolved after a while and the queue length will revert to a normal level. Hence, we should also expect that the waiting times of two faraway customers, say, Wk and Wk+1000, are practically independent. In other words, intuition suggests that the random variables Wk and Wk+s should have some positive correlation, but this correlation tends to fade out for increasing values of s. Then, we group observations into m batches, each one consisting of n customers, amounting to a total sample of nm customers. Now consider the m batch means

$$
\bar{W}_j = \frac{1}{n} \sum_{k=(j-1)n+1}^{jn} W_k, \qquad j = 1, \ldots, m,
$$

where batch j = 1 consists of customers 1, 2,…, n, batch j = 2 consists of customers n + 1, n + 2,…, 2n, etc. Each batch mean $\bar{W}_j$ is, at least approximately, independent of the others, and we may apply the standard procedure to them. The further good news is that, courtesy of the central limit theorem, the batch means should be more or less normal,9 providing further justification for the approach.
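A minimal sketch of the batch-means procedure under the same illustrative setup follows; the warm-up length, the batch size (the book's n, called n_batch in the code), and hence the number of batches m are arbitrary choices for the example.

```python
import numpy as np
from scipy import stats

def batch_means_ci(w, n_batch, warmup=1000, confidence=0.95):
    """Confidence interval for the average waiting time via batch means.

    The first `warmup` observations are discarded (point 1 above); the rest are
    grouped into batches of size n_batch, and the usual t-based interval is
    applied to the (approximately independent, approximately normal) batch means.
    """
    w = np.asarray(w, dtype=float)[warmup:]
    m = len(w) // n_batch                                  # number of complete batches
    means = w[: m * n_batch].reshape(m, n_batch).mean(axis=1)
    center = means.mean()
    half = stats.t.ppf(0.5 + confidence / 2, m - 1) * means.std(ddof=1) / np.sqrt(m)
    return center, half

center, half = batch_means_ci(waits, n_batch=2000)
print(f"batch-means CI: {center:.3f} +/- {half:.3f}")
```

In runs of this kind, the batch-means half-width tends to be noticeably larger than the naive one computed earlier, which is exactly the underestimation of variability we were warned about.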

