
Showing papers in "Annals of Operations Research in 1987"


Journal ArticleDOI
TL;DR: The main focus is a state-of-the-art summary of analytical and numerical methods used to solve computer system availability models; both transient and steady-state availability measures are considered and, for transient measures, both expected values and distributions.
Abstract: System availability is becoming an increasingly important factor in evaluating the behavior of commercial computer systems. This is due to the increased dependence of enterprises on continuously operating computer systems and to the emphasis on fault-tolerant designs. Thus, we expect availability modeling to be of increasing interest to computer system analysts and for performance models and availability models to be used to evaluate combined performance/availability (performability) measures. Since commercial computer systems are repairable, availability measures are of greater interest than reliability measures. Reliability measures are typically used to evaluate nonrepairable systems such as occur in military and aerospace applications. We will discuss system aspects which should be represented in an availability model; however, our main focus is a state-of-the-art summary of analytical and numerical methods used to solve computer system availability models. We will consider both transient and steady-state availability measures and, for transient measures, both expected values and distributions. We are developing a program package for system availability modeling and intend to incorporate the best solution methods.

142 citations
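
The availability measures discussed above have a simple closed form for the most basic repairable-system model. The sketch below is illustrative and is not the program package described in the paper; it computes steady-state and transient (point) availability for a single unit with assumed exponential failure rate lam and repair rate mu.

```python
import math

def availability(lam, mu, t=None):
    """Availability of a two-state (up/down) Markov model with
    exponential failure rate lam and repair rate mu.
    Returns the steady-state availability if t is None, otherwise
    the point availability A(t) starting from the 'up' state."""
    a_inf = mu / (lam + mu)                     # steady-state availability
    if t is None:
        return a_inf
    # transient (point) availability, standard closed form
    return a_inf + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)

if __name__ == "__main__":
    lam, mu = 1 / 1000.0, 1 / 10.0              # MTTF = 1000 h, MTTR = 10 h (illustrative)
    print("steady-state availability:", availability(lam, mu))
    print("availability at t = 24 h :", availability(lam, mu, t=24.0))
```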


Journal ArticleDOI
TL;DR: An algorithm for the approximate analysis of open exponential queueing networks with blocking is presented; the approximate results were compared against exact numerical data and show an acceptable error level.
Abstract: An algorithm for the approximate analysis of open exponential queueing networks with blocking is presented. The algorithm decomposes a queueing network with blocking into individual queues with revised capacity, and revised arrival and service processes. These individual queues are then analyzed in isolation. Numerical experience with this algorithm is reported for three-node and four-node queueing networks. The approximate results obtained were compared against exact numerical data, and they appear to have an acceptable error level.

80 citations
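
The decomposition itself depends on revised capacities and revised arrival and service processes that are not reproduced here; the building block, analyzing one finite-capacity exponential queue in isolation, can be sketched as follows (parameter values are illustrative).

```python
def mm1k_stationary(lam, mu, K):
    """Stationary distribution of an isolated M/M/1/K queue
    (arrival rate lam, service rate mu, capacity K including the
    customer in service). Returns (probs, blocking_probability)."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    norm = sum(weights)
    probs = [w / norm for w in weights]
    return probs, probs[K]          # P(blocking) = P(system full)

probs, p_block = mm1k_stationary(lam=0.8, mu=1.0, K=5)
print("blocking probability:", p_block)
```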


Journal ArticleDOI
TL;DR: In this paper, a method is provided to extend a decomposition method to large systems in which machines are allowed to take different lengths of time performing operations on parts.
Abstract: A transfer line is a tandem production system, i.e. a series of machines separated by buffers. Material flows from outside the system to the first machine, then to the first buffer, then to the second machine, the second buffer, and so forth. In some earlier models, buffers are finite, machines are unreliable, and the times that parts spend being processed at machines are equal at all machines. In this paper, a method is provided to extend a decomposition method to large systems in which machines are allowed to take different lengths of time performing operations on parts. Numerical and simulation results are provided.

65 citations


Journal ArticleDOI
TL;DR: Empirical test results over a wide range of systems indicate the SDA method is quite accurate, and time-dependent approximations of the mean and variance of the number of entities in the system and the number of busy servers are obtained.
Abstract: Nonstationary phase processes are defined and a surrogate distribution approximation (SDA) method for analyzing transient and nonstationary queueing systems with nonstationary phase arrival processes is presented. Regardless of system capacity c, the SDA method requires the numerical solution of only 6K differential equations, where K is the number of phases in the arrival process, compared to the K(c+1) Kolmogorov forward equations required for the classical method of solution. Time-dependent approximations of the mean and variance of the number of entities in the system and the number of busy servers are obtained. Empirical test results over a wide range of systems indicate the SDA is quite accurate.

52 citations
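
For comparison, the classical method mentioned above amounts to integrating the Kolmogorov forward equations directly. Below is a minimal sketch for the simplest single-phase case (a nonstationary Poisson arrival stream, so K = 1 and there are c + 1 equations); the rates and horizon are illustrative, and this is the baseline rather than the SDA method itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

def forward_equations(t, p, lam_t, mu, s, c):
    """Kolmogorov forward equations for an M(t)/M/s/c queue
    (single-phase arrivals, so there are c+1 equations).
    p[n] = P(n customers in system at time t)."""
    lam = lam_t(t)
    dp = np.zeros_like(p)
    for n in range(c + 1):
        rate_out = (lam if n < c else 0.0) + mu * min(n, s)
        dp[n] -= rate_out * p[n]
        if n > 0:
            dp[n] += lam * p[n - 1]                   # arrival into state n
        if n < c:
            dp[n] += mu * min(n + 1, s) * p[n + 1]    # departure into state n
    return dp

# illustrative parameters (not from the paper)
s, c, mu = 3, 20, 1.0
lam_t = lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t / 10.0)   # nonstationary arrival rate
p0 = np.zeros(c + 1)
p0[0] = 1.0                                                  # start empty
sol = solve_ivp(forward_equations, (0.0, 20.0), p0,
                args=(lam_t, mu, s, c), max_step=0.05)
p_T = sol.y[:, -1]
print("E[N(T)] ≈", sum(n * p for n, p in enumerate(p_T)))
```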


Journal ArticleDOI
TL;DR: In this article, a bivariate central limit theorem is proved involving a point estimator for r = g(μ) and the asymptotic variance of this point estimate, for a sequence of independent, identically distributed random vectors in ℝ^d with mean vector μ.
Abstract: Let {V(k) : k ⩾ 1} be a sequence of independent, identically distributed random vectors in ℝ^d with mean vector μ. The mapping g is a twice differentiable mapping from ℝ^d to ℝ^1. Set r = g(μ). A bivariate central limit theorem is proved involving a point estimator for r and the asymptotic variance of this point estimate. This result can be applied immediately to the ratio estimation problem that arises in regenerative simulation. Numerical examples show that the variance of the regenerative variance estimator is not necessarily minimized by using the “return state” with the smallest expected cycle length.

32 citations
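
The ratio estimation problem mentioned above has a standard form: r = E[Y]/E[τ] is estimated from i.i.d. regenerative cycles, and an asymptotic variance follows from the bivariate central limit theorem via the delta method. A hedged sketch of that calculation, on synthetic cycle data not tied to any particular regenerative process:

```python
import numpy as np

def regenerative_ratio_estimate(Y, tau):
    """Point estimate and delta-method variance estimate for
    r = E[Y] / E[tau] from n i.i.d. regenerative cycles.
    Y[i]   = accumulated reward in cycle i,
    tau[i] = length of cycle i."""
    Y, tau = np.asarray(Y, float), np.asarray(tau, float)
    n = len(Y)
    r_hat = Y.mean() / tau.mean()
    cov = np.cov(Y, tau, ddof=1)               # 2x2 sample covariance
    s2 = cov[0, 0] - 2 * r_hat * cov[0, 1] + r_hat ** 2 * cov[1, 1]
    var_r_hat = s2 / (n * tau.mean() ** 2)     # asymptotic variance of r_hat
    return r_hat, var_r_hat

# illustrative use with synthetic cycle data
rng = np.random.default_rng(0)
tau = rng.exponential(2.0, size=10_000)
Y = 0.5 * tau + rng.normal(0.0, 0.1, size=10_000)
print(regenerative_ratio_estimate(Y, tau))
```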


Journal ArticleDOI
TL;DR: In this article, the authors derived a simple method to find this variance for the case of a Markov process and applied it to obtain the variance of a time average for a birth-death process.
Abstract: The variance of a time average is important for planning, running and interpreting experiments. This paper derives a simple method to find this variance for the case of a Markov process. This method is then applied to obtain the variance of a time average for the case of a birth-death process.

19 citations
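
The paper's own derivation is not reproduced here; one standard route to the same quantity for a finite birth-death process is to solve the Poisson equation for the generator and read off the time-average variance constant, as in the following sketch (the M/M/1/c example at the bottom is illustrative).

```python
import numpy as np

def tavc_birth_death(birth, death, f):
    """Time-average variance constant sigma^2 for the reward function f
    on a finite birth-death CTMC with birth rates birth[n] (n -> n+1)
    and death rates death[n] (n -> n-1), so that
    sqrt(T) * (time average of f - pi f) => N(0, sigma^2).
    Computed via the Poisson equation Q g = -(f - pi f)."""
    N = len(f)                                  # states 0, ..., N-1
    Q = np.zeros((N, N))
    for n in range(N):
        if n < N - 1:
            Q[n, n + 1] = birth[n]
        if n > 0:
            Q[n, n - 1] = death[n]
        Q[n, n] = -Q[n].sum()
    # stationary distribution via detailed balance
    w = np.ones(N)
    for n in range(1, N):
        w[n] = w[n - 1] * birth[n - 1] / death[n]
    pi = w / w.sum()
    f = np.asarray(f, float)
    fbar = f - pi @ f
    # Poisson equation with the normalization pi @ g = 0
    A = np.vstack([Q, pi])
    b = np.concatenate([-fbar, [0.0]])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return 2.0 * np.sum(pi * fbar * g)

# illustrative check: time-average queue length in an M/M/1/c queue
c, lam, mu = 10, 0.7, 1.0
birth = [lam] * c + [0.0]
death = [0.0] + [mu] * c
print("sigma^2 for the time-average queue length:",
      tavc_birth_death(birth, death, f=list(range(c + 1))))
```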


Journal ArticleDOI
TL;DR: This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas; it is based upon conditional probability theory and gives a closed-form solution for all means and standard deviations, along with the probabilities of occurrence.
Abstract: An analytic probabilistic methodology for resource appraisal of undiscovered oil and gas resources in play analysis is presented. This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas. An objective was to replace an existing Monte Carlo simulation method in order to increase the efficiency of the appraisal process. Underlying the two methods is a single geologic model which considers both the uncertainty of the presence of the assessed hydrocarbon and its amount if present. The results of the model are resource estimates of crude oil, nonassociated gas, dissolved gas, and gas for a geologic play in terms of probability distributions. The analytic method is based upon conditional probability theory and a closed form solution of all means and standard deviations, along with the probabilities of occurrence.

18 citations


Journal ArticleDOI
TL;DR: A sequential parameter control technique previously introduced by the author is modified in this paper to make it simple in practice for a queueing system with an embedded Markov chain.
Abstract: A sequential parameter control technique previously introduced by the author is modified in this paper so as to make it simple in practice. The detailed procedure, involving two phases (a warning phase with control limits and a testing phase using an appropriate test), is illustrated for a queueing system with an embedded Markov chain. Operating characteristics of the procedure are also examined.

16 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the estimation of the parameters of the three-parameter Weibull distribution, with particular emphasis on the unknown endpoint of the distribution, and conclude that there are practical advantages to the Bayesian approach, but the study also suggests ways in which the maximum likelihood analysis may be improved.
Abstract: We consider the estimation of the parameters of the three-parameter Weibull distribution, with particular emphasis on the unknown endpoint of the distribution. We summarize recent results on the asymptotic behaviour of maximum likelihood estimators. We continue with an example in which maximum likelihood and Bayesian estimators are compared. We conclude that there are practical advantages to the Bayesian approach, but the study also suggests ways in which the maximum likelihood analysis may be improved.

15 citations
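
Only the maximum likelihood side is easy to sketch generically: scipy's built-in fitter estimates all three parameters, with the location parameter playing the role of the unknown endpoint. The data and parameter values below are synthetic and illustrative, and the Bayesian analysis of the paper is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic sample from a three-parameter Weibull:
# shape 1.5, endpoint (threshold) 10.0, scale 2.0 -- illustrative values
data = stats.weibull_min.rvs(1.5, loc=10.0, scale=2.0, size=500, random_state=rng)

# maximum likelihood fit of all three parameters; the 'loc' estimate is
# the unknown endpoint discussed in the abstract
shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(data)
print("shape, endpoint, scale:", shape_hat, loc_hat, scale_hat)
```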


Journal ArticleDOI
David D. Yao
TL;DR: In this article, the authors considered an open Jackson network of queues and studied majorization and arrangement orderings to order, respectively, various loading and server-assignment policies, and established stochastic and likelihood ratio orderings for the maximum and minimum queue lengths and for the total number of jobs in the network.
Abstract: Consider an open Jackson network of queues. Majorization and arrangement orderings are studied to order, respectively, various loading and server-assignment policies. It is shown that under these order relations, stochastic and likelihood ratio orderings can be established for the maximum and the minimum queue lengths and for the total number of jobs in the network. Stochastic majorization and stochastic orderings are also established, respectively, for the queue-length vector and the associated order-statistic vector. Implications of the results on loading and assignment decisions are discussed.

14 citations


Journal ArticleDOI
TL;DR: The issues of the validity and utility of queueing models of service systems in which adaptive behavior by the (human) customers or servers is likely are examined.
Abstract: Based on observations made during an extensive study of police patrol operations in New York City, we examine the issues of the validity and utility of queueing models of service systems in which adaptive behavior by the (human) customers or servers is likely. We find that in addition to depending on the technical accuracy of its assumptions, the accuracy of such a model will also depend upon the level of managerial control of the system and adequacy of resources. We recommend that queueing models of human service systems be used in a normative fashion and incorporated in the management feedback loop.

Journal ArticleDOI
TL;DR: In this article, the authors slightly modify Powell's method and extend it to cover a number of bulk-service queues discussed by Chaudhry et al. and a number of bulk-arrival queues discussed in the present paper.
Abstract: Queueing theorists have presented, as solutions to many queueing models, probability generating functions in which state probabilities are expressed as functions of the roots of characteristic equations, evaluation of the roots in particular cases being left to the reader. Many users have complained that such solutions are inadequate. Some queueing theorists, in particular Neuts [6], rather than use Rouché's theorem to count roots and an equation-solver to find them, have developed new algorithms to solve queueing problems numerically, without explicit calculation of roots. Powell [7] has shown that in many bulk service queues arising in transportation models, characteristic equations can be solved and state probabilities can be found without serious difficulty, even when the number of roots to be found is large. We have slightly modified Powell's method, and have extended his work to cover a number of bulk-service queues discussed by Chaudhry et al. [1] and a number of bulk-arrival queues discussed in the present paper.
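
A small illustration of the root-finding step (not Powell's modified method itself): numpy can locate the roots of a polynomial characteristic equation inside the unit disk directly. The coefficients below are placeholders, not a characteristic equation from the paper.

```python
import numpy as np

def roots_inside_unit_disk(coeffs, tol=1e-9):
    """Roots of the polynomial characteristic equation
    c[0]*z^n + c[1]*z^(n-1) + ... + c[n] = 0 that lie strictly
    inside the unit disk (the roots typically needed for the
    state probabilities of bulk queues)."""
    r = np.roots(coeffs)
    return r[np.abs(r) < 1.0 - tol]

# placeholder coefficients, for illustration only
coeffs = [1.0, -2.2, 1.4, -0.3, 0.05]
print(roots_inside_unit_disk(coeffs))
```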

Journal ArticleDOI
TL;DR: In this paper, a queue with Poisson arrivals and a bulk service rule with two thresholds N and m on the group sizes is studied, and the joint stationary distribution of the waiting time of an arriving customer and of the size of the group in which he is eventually served is analyzed.
Abstract: We study a queue with Poisson arrivals and a bulk service rule with two thresholds N and m on the group sizes. No group of fewer than N or more than m customers is served. When the number of available customers is between N and m, all customers are served together. The principal results deal with the joint stationary distribution of the waiting time of an arriving customer and of the size of the group in which he is eventually served. After prior computation of the stationary queue length density, the evaluation of the waiting time distribution is reduced to the solution of a system of linear differential equations and a single integral equation. The process describing the waiting time is in general non-Markovian.

Journal ArticleDOI
TL;DR: This article presents some theorems to support the approximation that the probability that the required servers are all free is approximately a product, where each term is the probability that a required node has a free server.
Abstract: Network models in which each node is a loss system frequently arise in telephony. Models with several hundred nodes are common. Suppose a customer requires a server from each of several nodes. It would be convenient if the probability that the required servers are all free were approximately a product, where each term is the probability a required node has a free server. We present some theorems to support this approximation. Most of the theorems are restricted to nodes with one server. Some of the difficulties in analyzing nodes with multiple servers are described.
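
The product approximation can be made concrete with the classical Erlang B formula for each node. A minimal sketch, assuming each node behaves as an independent Erlang loss system with a given number of servers and offered load; the paper's theorems mostly concern single-server nodes, and this code is illustrative rather than taken from it.

```python
def erlang_b(servers, load):
    """Erlang B blocking probability, computed with the standard
    stable recursion B(0) = 1, B(s) = a*B(s-1) / (s + a*B(s-1))."""
    b = 1.0
    for s in range(1, servers + 1):
        b = load * b / (s + load * b)
    return b

def product_free_probability(nodes):
    """Approximate probability that every required node has a free
    server, taken as the product of per-node non-blocking probabilities.
    nodes = list of (servers, offered_load) pairs."""
    p = 1.0
    for servers, load in nodes:
        p *= 1.0 - erlang_b(servers, load)
    return p

# illustrative: a customer needs one free server at each of three nodes
print(product_free_probability([(5, 3.0), (10, 7.5), (1, 0.4)]))
```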

Journal ArticleDOI
TL;DR: Numerical results to illustrate the difference between deterministic and stochastic models are presented and some areas for further work are pointed out.
Abstract: This paper reviews recent developments in the field of stochastic combat models. A simple heterogeneous model with attrition rates dependent on the number of surviving forces is considered as a Markov process. Various characteristics of system dynamics are evaluated and expressed in explicit form. Numerical results to illustrate the difference between deterministic and stochastic models are presented. Some areas for further work are pointed out.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the validity and power of micro likelihood ratio tests with their macro counterparts, previously developed by the authors to complement standard least-squares point estimates, considering five specific null hypotheses, including parameter stationarity, entity homogeneity, a zero-order process, a specified probability value, and equal diagonal probabilities.
Abstract: We estimate the parameters of a Markov chain model using two types of simulated data: micro, or actual interstate transition counts, and macro, or aggregate frequency counts. We compare, by means of Monte Carlo experiments, the validity and power of micro likelihood ratio tests with their macro counterparts, previously developed by the authors to complement standard least-squares point estimates. We consider five specific null hypotheses, including parameter stationarity, entity homogeneity, a zero-order process, a specified probability value, and equal diagonal probabilities. The results from these micro-macro comparisons should help to indicate whether micro panel data collection is justified over the use of simpler state frequency counts.
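
For the micro data, the maximum likelihood estimates and a likelihood ratio test against a fully specified transition matrix take the following textbook form; this is a generic sketch of one of the five hypotheses (a specified probability value), not the authors' experimental setup.

```python
import numpy as np
from scipy import stats

def lrt_specified_matrix(counts, P0):
    """Likelihood ratio test of H0: P = P0 for a Markov chain,
    from the micro data of interstate transition counts.
    counts[i, j] = number of observed i -> j transitions."""
    counts = np.asarray(counts, float)
    P0 = np.asarray(P0, float)
    row_totals = counts.sum(axis=1, keepdims=True)
    P_hat = counts / row_totals                      # unrestricted MLE
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(counts > 0, counts * np.log(P_hat / P0), 0.0)
    lr = 2.0 * terms.sum()
    k = counts.shape[0]
    df = k * (k - 1)                                 # free parameters under H1
    return lr, stats.chi2.sf(lr, df)

# illustrative transition counts and hypothesized matrix
counts = [[50, 10, 5],
          [8, 60, 12],
          [4, 9, 42]]
P0 = [[0.8, 0.1, 0.1],
      [0.1, 0.8, 0.1],
      [0.1, 0.1, 0.8]]
print(lrt_specified_matrix(counts, P0))
```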

Journal ArticleDOI
TL;DR: In this article, the authors describe a method for dealing with the volume of experimental and/or simulation data required for large-scale design studies using response surface methodology, which is illustrated by its application to a problem in vehicle collision research.
Abstract: Managing the volume of experimental and/or simulation data required for large-scale design studies can be a significant problem. This paper describes a method for dealing with this problem, using response surface methodology. The method involves (1) determining a summary parameterization of the response of the underlying process mechanism generating the data, in order to characterize this response in terms of a manageable set of performance measures, and (2) deriving a model of the data, in order to summarize the dependence of the performance measures on selected predictor or design variables. The method is illustrated by its application to a problem in vehicle collision research.
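
A minimal response-surface sketch: fit a second-order model of one performance measure in two design variables by ordinary least squares. The summary parameterization step of the actual method is problem-specific and not shown; the data here are synthetic.

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of the second-order response surface
    y ≈ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# illustrative synthetic "experiment"
rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 3.0 + 1.5*x1 - 2.0*x2 + 0.8*x1**2 + 0.3*x1*x2 + rng.normal(0, 0.1, 200)
print(fit_quadratic_surface(x1, x2, y))
```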

Journal ArticleDOI
TL;DR: In this article, a control variate approach based on a linear approximation of the nonlinear model is introduced to reduce the Monte Carlo sampling necessary to achieve a given accuracy; the particular linear approximation chosen has several advantages: its moments and other properties are known, it is easy to implement, and there is a correspondence to asymptotic results that permits assessment of control variate effectiveness prior to sampling via measures of nonlinearity.
Abstract: The sampling distribution of parameter estimators can be summarized by moments, fractiles or quantiles. For nonlinear models, these quantities are often approximated by power series, approximated by transformed systems, or estimated by Monte Carlo sampling. A control variate approach based on a linear approximation of the nonlinear model is introduced here to reduce the Monte Carlo sampling necessary to achieve a given accuracy. The particular linear approximation chosen has several advantages: its moments and other properties are known, it is easy to implement, and there is a correspondence to asymptotic results that permits assessment of control variate effectiveness prior to sampling via measures of nonlinearity. Empirical results for several nonlinear problems are presented.
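
A generic control variate estimator has the following form; here the control C stands in for the output of a linear approximation with known mean mu_C, but the specific linearization of the paper is not reproduced, and the example at the bottom uses an unrelated toy integrand.

```python
import numpy as np

def control_variate_estimate(Y, C, mu_C):
    """Control variate estimator of E[Y] using a control C with known
    mean mu_C:  Y_cv = Y - beta*(C - mu_C), with beta chosen to
    minimize variance (beta = Cov(Y, C) / Var(C))."""
    Y, C = np.asarray(Y, float), np.asarray(C, float)
    beta = np.cov(Y, C, ddof=1)[0, 1] / np.var(C, ddof=1)
    Y_cv = Y - beta * (C - mu_C)
    return Y_cv.mean(), Y_cv.var(ddof=1) / len(Y)   # estimate and its variance

# illustrative: estimate E[exp(Z)] for Z ~ N(0, 1) using Z itself as the control
rng = np.random.default_rng(3)
Z = rng.normal(size=50_000)
print("crude:", np.exp(Z).mean(),
      " with control variate:", control_variate_estimate(np.exp(Z), Z, mu_C=0.0))
```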

Journal ArticleDOI
TL;DR: In this paper, the authors discuss conditions under which recursive techniques are numerically stable and efficient, giving applications to descriptive and control models for queues.
Abstract: Recursive methods have been proposed for the numerical solution of equations arising in the analysis and control of Markov processes. Two examples are (i) solving for the equilibrium probabilities, and (ii) solving for the minimal expected cost or time to reach state zero, in a Markov process with left-skip-free transition structure. We discuss conditions under which recursive techniques are numerically stable and efficient, giving applications to descriptive and control models for queues.
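
For the equilibrium-probability example, the recursion is simplest for a birth-death (hence skip-free) process. A sketch under that assumption; the stability and efficiency analysis for more general left-skip-free chains is not reproduced.

```python
def birth_death_equilibrium(birth, death):
    """Recursive computation of the equilibrium probabilities of a finite
    birth-death Markov process: pi[n+1] = pi[n] * birth[n] / death[n+1],
    followed by normalization."""
    pi = [1.0]
    for n in range(len(birth) - 1):
        pi.append(pi[-1] * birth[n] / death[n + 1])
    total = sum(pi)
    return [p / total for p in pi]

# illustrative M/M/2/10 queue
lam, mu, s, c = 1.5, 1.0, 2, 10
birth = [lam] * (c + 1)
death = [min(n, s) * mu for n in range(c + 1)]
print(birth_death_equilibrium(birth, death))
```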

Journal ArticleDOI
TL;DR: A random translation of a marked point process is considered and the main concern is to make inferences about the function h(·) for different types of data.
Abstract: A random translation of a marked point process is considered. The distribution of random translation is assumed to be dependent upon the mark through a certain function h(·). The main concern is to make inferences about the function h(·) for different types of data. Complete identification and estimation are not possible in general, but some interesting particular solutions are presented.

Journal ArticleDOI
TL;DR: In this article, the authors review the Kermack-McKendrick and Whittle threshold theorems for the general epidemic and extend these results to the case of the general epidemic with bunching, where the βxy homogeneous mixing term is replaced by βxy/(x+y)^α, 0 ≤ α ≤ 1.
Abstract: This note begins by reviewing the Kermack-McKendrick and Whittle Threshold Theorems for the general epidemic. It then extends these results to the case of the general epidemic with bunching, where the βxy homogeneous mixing term is replaced by βxy/(x+y)^α, 0 ≤ α ≤ 1.
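
The bunching modification can be written down directly for the deterministic analogue of the general epidemic. The sketch below integrates those equations with the βxy/(x+y)^α interaction term; parameter values are illustrative, and the stochastic threshold analysis of the note is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

def general_epidemic_with_bunching(t, state, beta, gamma, alpha):
    """Deterministic general epidemic in which the homogeneous mixing
    term beta*x*y is replaced by beta*x*y / (x + y)**alpha,
    with x = susceptibles and y = infectives."""
    x, y = state
    infection = beta * x * y / (x + y) ** alpha
    return [-infection, infection - gamma * y]

# illustrative parameters: beta, gamma, alpha and initial (x, y)
sol = solve_ivp(general_epidemic_with_bunching, (0.0, 50.0), [990.0, 10.0],
                args=(0.02, 0.5, 0.5), max_step=0.1)
x_end, y_end = sol.y[:, -1]
print("final susceptibles:", x_end, "infectives:", y_end)
```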

Journal ArticleDOI
TL;DR: In this paper, the stationary probability vector of queueing models whose infinitesimal generator is of block Hessenberg form is computed and shown to be equal to the first column of the inverse of the coefficient matrix.
Abstract: We introduce a numerical method to compute the stationary probability vector of queueing models whose infinitesimal generator is of block Hessenberg form. It is shown that the stationary probability vector is equal to the first column of the inverse of the coefficient matrix. Furthermore, it is shown that the first column of the inverse of an upper (or lower) Hessenberg matrix may be obtained in a relatively small number of operations. Together, these results allow us to define a powerful algorithm for solving certain queueing models. The efficiency of this algorithm is discussed and a comparison with the method of Neuts is undertaken. A relationship with the method of Gaussian elimination is established and used to develop some stability results.

Journal ArticleDOI
TL;DR: In this article, Sumita and Kijima developed a bivariate Laguerre transform for evaluating repeated combinations of bivariate continuum operations such as bivariate convolutions, marginal convolutions and double tail integration, partial differentiation and multiplication by bivariate polynomials.
Abstract: In a recent paper by Sumita and Kijima [7], the bivariate Laguerre transform was developed, which provides a systematic numerical tool for evaluating repeated combinations of bivariate continuum operations such as bivariate convolutions, marginal convolutions, double tail integration, partial differentiation and multiplication by bivariate polynomials. The formalism is an extension of the univariate Laguerre transform developed by Keilson, Nunn and Sumita [1,2,6], using the product orthonormal basis generated from Laguerre functions. In this paper, the power of the procedure is demonstrated by studying numerically a bivariate Lindley process arising from certain queueing systems. Various descriptive distributions reflecting transient behavior of such queueing systems are explicitly evaluated via the bivariate Laguerre transform.

Journal ArticleDOI
TL;DR: In this article, the risk profiles of low-probability high-consequence events where the final consequence results from a number of intermediate events are obtained, and algorithms are developed to compute the risk profile equations.
Abstract: Low-probability high-consequence events play an important role in assessing the risk of catastrophic loss. Their risk profiles, however, can be difficult to obtain. This paper obtains the risk profiles of low-probability high-consequence events where the final consequence results from a number of intermediate events. Called composite events, these events occur, for example, in accidents releasing hazardous material. The structure of composite events is described and risk profile equations are developed, on both a per-event and a per-annum basis. The many extremely low-valued terms, especially in the tail of the risk profile, make calculation nontrivial. Accordingly, algorithms are developed to compute these equations. In addition, formulae for means and variances are obtained, and an illustrative example is provided.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the concept of hierarchical or random parameter stochastic process models, which arise when members of a population each generate a stochastically process governed by certain parameters and the values of the parameters may be viewed as single realizations of random variables.
Abstract: This paper introduces and illustrates the concept of hierarchical or random parameter stochastic process models. These models arise when members of a population each generate a stochastic process governed by certain parameters and the values of the parameters may be viewed as single realizations of random variables. The paper treats the estimation of the individual parameter values and the parameters of the superpopulation distribution. Examples from system reliability, pharmacokinetic compartment models, and criminal careers are introduced; a reliability (Poisson process-exponential interval) process is examined in greater detail. An explicit, approximate, robust estimator of individual (log) failure rates is presented for the case of a long-tailed (Student t) superpopulation. This estimator exhibits desirable limited shrinkage properties, refusing to borrow unjustified strength. Numerical properties of such estimators are described more fully elsewhere.

Journal ArticleDOI
I. V. Basawa
TL;DR: In this paper, the efficiency properties of least-squares predictors when the parameters are estimated are reviewed. And the criterion of asymptotic best unbiased predictors for general stochastic models is a natural analogue of the minimum mean-square error criterion used traditionally in linear prediction for linear models.
Abstract: The main purpose of this paper is to review the efficiency properties of least-squares predictors when the parameters are estimated. It is shown that the criterion of asymptotic best unbiased predictors for general stochastic models is a natural analogue of the minimum mean-square error criterion used traditionally in linear prediction for linear models. The results are applied to log-linear models and autoregressive processes. Both stationary and non-stationary processes are considered.
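
For the autoregressive case, the least-squares predictor with estimated parameters is easy to sketch; the AR(1) example below (zero-mean, synthetic data) is illustrative and does not reproduce the paper's efficiency results.

```python
import numpy as np

def ar1_ls_predictor(x):
    """Least-squares estimate of the AR(1) coefficient and the resulting
    one-step-ahead predictor x_hat_{n+1} = phi_hat * x_n
    (a zero-mean process is assumed)."""
    x = np.asarray(x, float)
    phi_hat = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    return phi_hat, phi_hat * x[-1]

# illustrative zero-mean AR(1) series with true coefficient 0.6
rng = np.random.default_rng(4)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.6 * x[t - 1] + rng.normal()
print(ar1_ls_predictor(x))
```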

Journal ArticleDOI
TL;DR: This paper is based on an invited lecture given by the author at the ORSA/TIMS Special Interest Group on Applied Probability Conference onStatistical and Computational Problems in Probability Modeling, held at Williamsburg, Virginia, January 7–9, 1985 and published in this volume of the Annals of Operations Research.
Abstract: This paper is based on an invited lecture given by the author at the ORSA/TIMS Special Interest Group on Applied Probability Conference onStatistical and Computational Problems in Probability Modeling, held at Williamsburg, Virginia, January 7–9, 1985.

Journal ArticleDOI
TL;DR: This paper deals with the selection and evaluation of statistical techniques for use in the modeling and forecasting of water quality time series, and provides a summary of experience with analyzing archival data on the Niagara River, including the use of a fractionally differenced model.
Abstract: This paper deals with the selection and evaluation of statistical techniques for use in the modeling and forecasting of water quality time series. The focus is on statistical concepts relevant to the analysis of flows and concentrations. A selection of time series procedures has been used for auditing water quality archival data, including the screening of data sets, correlation and spectrum calculations, and iterative model fitting. A summary is provided of experience with analyzing archival data on the Niagara River and the use of a fractionally differenced model.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the model of zero-mean random variables, having a strictly positive density, and obtained necessary and sufficient conditions for ergodicity for this process to be transient.
Abstract: We consider the model
$$Z_t = \sum_{i = 1}^{k} \phi(i, j) Z_{t - i} + a_t(j) \quad \text{when } \left[ Z_{t - 1}, Z_{t - 2}, \ldots, Z_{t - k} \right]' \in R(j),$$
where {R(j); 1 ⩽ j ⩽ l} is a partition of ℝ^k, and for each 1 ⩽ j ⩽ l, {a_t(j); t ⩾ 0} are i.i.d. zero-mean random variables having a strictly positive density. Sufficient conditions are obtained for this process to be transient. In addition, for a particular class of such models, necessary and sufficient conditions for ergodicity are obtained. Least-squares estimators of the parameters are obtained and are, under mild regularity conditions, shown to be strongly consistent and asymptotically normal.
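
A concrete special case of this model is a two-regime threshold AR(1), with R(1) = (−∞, 0) and R(2) = [0, ∞). The sketch below simulates such a process and computes the least-squares estimates of the two regime coefficients, assuming the partition is known; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# simulate a two-regime threshold AR(1):
#   Z_t = 0.5*Z_{t-1} + a_t   if Z_{t-1} < 0    (regime 1)
#   Z_t = -0.4*Z_{t-1} + a_t  if Z_{t-1} >= 0   (regime 2)
n = 5000
Z = np.zeros(n)
for t in range(1, n):
    phi = 0.5 if Z[t - 1] < 0.0 else -0.4
    Z[t] = phi * Z[t - 1] + rng.normal()

# least-squares estimation of the two regime coefficients, with the
# partition (threshold at 0) taken as known
prev, curr = Z[:-1], Z[1:]
for j, mask in enumerate([prev < 0.0, prev >= 0.0], start=1):
    phi_hat = np.sum(curr[mask] * prev[mask]) / np.sum(prev[mask] ** 2)
    print(f"regime {j}: phi_hat = {phi_hat:.3f}")
```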

Journal ArticleDOI
TL;DR: In this article, an omni-transform calculus is developed for the renewal model and for the waiting time in M/G/1; in forthcoming publications the authors will apply it to the number in queue and the busy period in M/G/1, and they argue that the omni-method lifts the "Laplace veil" from much of the physical reality underlying the models considered.
Abstract: Some basic results of the renewal model are effectively summarized by
$$E\psi'(r) = E[\psi(x) - \psi(0)]/Ex, \qquad (1)$$
where x is the random variable service time, r is its associated residual time, ψ(·) is an arbitrary "well-behaved" function, and E is the expectation operator. The process "waiting time for service of a new arrival", denoted by w, is effectively summarized in the model M/G/1 by
$$E\psi(w) = (1 - \rho)\psi(0) + \rho E\psi(w + r). \qquad (2)$$
We refer to Eψ(Z) as the omni-transform of the random variable or process Z, and to equations typified by (2) as omni-equations, i.e. equations valid for an arbitrary well-behaved function ψ(·). The omni-transform owes its flexibility to the arbitrariness of ψ(·) and its ease of handling to its simplicity when applied to mixtures and sums of random variables. From (2) we obtain the moments of w by putting ψ(w) = w^k, the Laplace transform of w by putting ψ(w) = e^{-sw}, and the convolution equation (2a) for the distribution of w by putting ψ(w) = 1 if w ⩾ t and ψ(w) = 0 otherwise:
$$\Pr(w \leqslant t) = (1 - \rho) + \rho \Pr(w + r \leqslant t), \qquad (2a)$$
a result equivalent to the Takacs integro-differential equation. Using repeatedly the so-called shift property of omni-equations, (2a) can be solved by representing the distribution of w as an infinite series of convolutions:
$$\Pr(w \leqslant t) = (1 - \rho) + (1 - \rho)\rho \Pr(r_1 \leqslant t) + (1 - \rho)\rho^2 \Pr(r_1 + r_2 \leqslant t) + \cdots, \qquad (3)$$
where the r_i are a set of independent random variables, each distributed like r. Equation (3) is equivalent to a theorem by Benes. An analogy between the process "waiting time in M/G/1" and the process "toss a coin till heads shows up", where the tossing time is a random variable, is also pointed out. The omni-calculus also sheds some light on the model G/G/1. In forthcoming publications, we will apply the omni-calculus to the process "number in queue" in M/G/1, to the analysis of the busy period in M/G/1, and to some modified M/G/1 models, e.g. a vacationing server. In these publications too, the omni-method lifts the "Laplace veil" from much of the physical reality underlying the models considered.
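
Equation (3) can be checked numerically in the M/M/1 special case, where the residual service time is again exponential by memorylessness, so w is a geometric number of independent exponential terms. A small Monte Carlo sketch with illustrative parameters (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, mu = 0.8, 1.0            # M/M/1 with utilization rho = 0.8
rho = lam / mu
t = 5.0

# Monte Carlo evaluation of equation (3): w is a geometric number of
# independent residual service times r_i (for exponential service, the
# residual time is again Exp(mu) by memorylessness)
n_rep = 100_000
N = rng.geometric(1.0 - rho, size=n_rep) - 1          # P(N = n) = (1 - rho) * rho**n
w = np.array([rng.exponential(1.0 / mu, size=k).sum() for k in N])
print("series (3), Monte Carlo:", np.mean(w <= t))
print("exact M/M/1 value      :", 1.0 - rho * np.exp(-(mu - lam) * t))
```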