Diffusion Approximations for Complex Repair Systems [report]

Donald L. Iglehart, Atam Lalchandani
1991 unpublished
A wide variety of complex repair systems can be modeled as continuous-time Markov chains. These systems are closed networks of queues with a total of n jobs circulating in the network. The process of interest is the number of jobs, X_n(t), at the various repair centers at time t. After appropriate translation and scaling, we show that the processes {X_n(t) : t > 0} converge weakly to a limiting multivariate Ornstein-Uhlenbeck process. This limit process is then used to obtain computable
approximations for X_n(t). Numerical results are presented for three specific repairman models, and the approximations are compared with exact results obtained through product-form formulae. In most cases the approximation is quite accurate.

Introduction. For many stochastic models in applied probability, complicated Markov chains arise which are impossible to analyze directly. A classical approach to this problem, dating back to BACHELIER (1900), is to show that a sequence of Markov chains with appropriate time and state scales converges at a given time point (or weakly) to a limiting diffusion process. In these instances the limiting diffusion process may hold out the only hope for providing useful approximations to practical problems. When the Markov chains are one-dimensional birth-death processes in either discrete or continuous time, STONE (1961), (1963) has developed a complete theory for the weak convergence of these Markov chains to a limiting diffusion. Roughly speaking, Stone's results require convergence of the infinitesimal mean and variance to those of the limiting diffusion, plus convergence of boundary conditions when appropriate. In this paper we shall apply a comparable development in higher dimensions for a restricted class of limiting diffusions: multivariate Ornstein-Uhlenbeck (m.O.U.) processes. These results will then be applied to three generalized repairman models. A special case of a m.O.U. process was introduced in IGLEHART (1968) and the general case in SCHACH (1971). Problems involving the convergence of Markov chains to a m.O.U. process arise frequently in practice; see, for example, KARLIN and McGREGOR (1964), (1965), IGLEHART (1968), SCHACH (1971), and McNEIL and SCHACH (1973). We shall treat sequences of Markov chains in continuous time whose state spaces are subsets of Z^d, the integer lattice points of d-dimensional Euclidean space, R^d. We view elements x ∈ R^d as column vectors.
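Stone's conditions — convergence of the scaled infinitesimal mean and variance — can be checked numerically on the simplest repairman model (a hypothetical illustration, not an example taken from the paper): n machines, each failing at rate lam and each repaired at rate mu. With X_n(t) the number of failed machines, birth rate (n-k)·lam and death rate k·mu, and c = lam/(lam+mu), the centered, scaled chain Y_n(t) = (X_n(t) - nc)/sqrt(n) has infinitesimal coefficients approaching those of a one-dimensional O-U process:

```python
import math

def scaled_infinitesimals(n, lam, mu, y):
    """Infinitesimal mean and variance of the centered, scaled repairman
    chain Y_n(t) = (X_n(t) - n*c)/sqrt(n), evaluated near Y_n = y.

    X_n counts failed machines: birth rate (n-k)*lam, death rate k*mu,
    and c = lam/(lam + mu) is the equilibrium fraction failed.
    """
    c = lam / (lam + mu)
    k = round(n * c + math.sqrt(n) * y)      # nearest lattice state to n*c + sqrt(n)*y
    birth, death = (n - k) * lam, k * mu
    drift = (birth - death) / math.sqrt(n)   # infinitesimal mean of Y_n
    var = (birth + death) / n                # infinitesimal variance of Y_n
    return drift, var

lam, mu, y = 1.0, 2.0, 0.5
for n in (10, 100, 10_000):
    print(n, scaled_infinitesimals(n, lam, mu, y))
# As n grows the pair approaches the O-U coefficients
# (-(lam + mu)*y, 2*lam*mu/(lam + mu)) = (-1.5, 4/3).
```

The drift is linear in y and the variance is asymptotically constant, which is exactly the signature of a limiting Ornstein-Uhlenbeck process.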
From a given point in the state space we shall only allow jumps in one step to a finite number of states. Thus multivariate birth-death processes, in which transitions are only allowed to neighboring states, are special cases. The typical situation for a sequence of continuous-time chains, say {X_n(t) : t > 0}, n = 1, 2, ..., is to form a sequence of processes

Y_n(t) = (X_n(t) - nc)/n^{1/2}, t > 0,

where c ∈ R^d is a fixed vector. With this setup we would like to conclude, under appropriate conditions, that Y_n(t) ⇒ Y(t), where ⇒ denotes weak convergence and Y is a m.O.U. process. Also of interest is the convergence of the m.O.U. process as t → ∞: Y(t) ⇒ Y(+∞), when this is appropriate. In applications we would approximate the random vector X_n(t) by n^{1/2} Y(t) + nc for large n.

A m.O.U. process is a d-dimensional (d ≥ 2) diffusion, that is, a strong Markov process with continuous paths. Furthermore, if the initial state is either a constant or Gaussian, then the process is Gaussian. It is characterized by two real d × d matrices A and B, where A is symmetric and positive definite. The stationary transition probability density of a m.O.U. process is given by

(1.1)  p(t, x, y) = (2π)^{-d/2} |Σ(t)|^{-1/2} exp{-(1/2) f(t, x, y)},

where x, y ∈ R^d, t > 0,

f(t, x, y) = (y - μ(t))' Σ^{-1}(t) (y - μ(t)),
μ(t) = e^{-Bt} x, and
Σ(t) = ∫_0^t e^{-Br} A e^{-B'r} dr.

Here B' is the transpose of B. As A is symmetric, positive definite and e^{-Br} nonsingular, it is easy to show that Σ(t) is symmetric, positive definite.

The convergence of a sequence of Markov processes has a long history. We mention next some of the relevant literature. KHINCHINE (1933), Chapter 3, approaches the problem through the Kolmogorov backward partial differential equation. Semi-group treatments of these problems have been given by SKOROHOD (1958),

Table 2. Parameters for Limiting Process - Two Item Model

Next we make a number of qualitative remarks about the behavior of this system when n is large.
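The m.O.U. moments μ(t) = e^{-Bt} x and Σ(t) = ∫_0^t e^{-Br} A e^{-B'r} dr can be computed without matrix exponentials by integrating the equivalent differential equations dμ/dt = -Bμ and dΣ/dt = A - BΣ - ΣB' (a Lyapunov equation). The sketch below is a minimal numerical illustration under assumed example matrices A and B; neither the matrices nor the function name comes from the paper:

```python
import numpy as np

def mou_moments(B, A, x0, t, steps=20_000):
    """Euler integration of the m.O.U. moment equations:
        dmu/dt    = -B mu,                   mu(0)    = x0,
        dSigma/dt = A - B Sigma - Sigma B',  Sigma(0) = 0,
    whose solutions are mu(t) = exp(-Bt) x0 and
    Sigma(t) = int_0^t exp(-Br) A exp(-B'r) dr.
    """
    dt = t / steps
    mu = np.array(x0, dtype=float)
    Sigma = np.zeros((len(x0), len(x0)))
    for _ in range(steps):
        mu = mu + dt * (-B @ mu)
        Sigma = Sigma + dt * (A - B @ Sigma - Sigma @ B.T)
    return mu, Sigma

# Illustrative 2-d example with a nonsymmetric B (assumed values).
B = np.array([[1.5, 0.3], [0.0, 2.0]])
A = np.array([[1.0, 0.2], [0.2, 0.5]])  # symmetric, positive definite
mu, Sigma = mou_moments(B, A, x0=[1.0, -1.0], t=1.0)
# Sigma(t) should come out symmetric positive definite, as claimed in the text.
print(np.allclose(Sigma, Sigma.T), np.linalg.eigvalsh(Sigma).min() > 0)
```

In the scalar case B = b, A = a this reduces to the familiar closed forms μ(t) = e^{-bt} x and Σ(t) = a(1 - e^{-2bt})/(2b), which provides a convenient correctness check.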
doi:10.21236/ada238100 fatcat:5ogsk37xgfgn7nnwldyum334wa