
Showing papers in "Probability in the Engineering and Informational Sciences in 1991"


Journal ArticleDOI
TL;DR: In this article, it is shown that the internal path length I_n of a recursive tree of order n converges, after normalization, to a limiting distribution: there exists a random variable I such that (I_n − n ln n)/n → I almost surely and in quadratic mean as n → ∞.
Abstract: The depth of insertion and the internal path length of recursive trees are studied. Luc Devroye has recently shown that the depth of insertion in recursive trees is asymptotically normal. We give a direct alternative elementary proof of this fact. Furthermore, via the theory of martingales, we show that I_n, the internal path length of a recursive tree of order n, converges to a limiting distribution. In fact, we show that there exists a random variable I such that (I_n − n ln n)/n → I almost surely and in quadratic mean, as n → ∞. The method admits, in passing, the calculation of the first two moments of I_n.

72 citations
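The limit law in this abstract is easy to probe numerically. The sketch below (plain Python, illustrative only, not from the paper) grows a random recursive tree, in which node k attaches to a uniformly chosen earlier node, and reports the normalized internal path length (I_n − n ln n)/n, the quantity the paper shows converges to a random variable I:

```python
import math
import random

def internal_path_length(n, seed=0):
    """Grow a random recursive tree on n nodes (node k attaches to a
    uniformly random earlier node) and return its internal path length,
    the sum of the depths of all nodes."""
    rng = random.Random(seed)
    depth = [0] * n                      # depth[0] = 0 is the root
    total = 0
    for k in range(1, n):
        parent = rng.randrange(k)        # uniform over nodes 0..k-1
        depth[k] = depth[parent] + 1
        total += depth[k]
    return total

n = 100_000
I_n = internal_path_length(n, seed=42)
v = (I_n - n * math.log(n)) / n          # normalized as in the limit theorem
print(v)
```

For moderate n the printed value should already be of constant order rather than growing with n, consistent with the almost-sure convergence result.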


Journal ArticleDOI
TL;DR: The existence of an optimal randomized simple policy is proved; this is a policy that randomizes between two stationary policies that differ in at most one state.
Abstract: A Markov decision chain with countable state space incurs two types of costs: an operating cost and a holding cost. The objective is to minimize the expected discounted operating cost, subject to a constraint on the expected discounted holding cost. The existence of an optimal randomized simple policy is proved. This is a policy that randomizes between two stationary policies that differ in at most one state. Several examples from the control of discrete-time queueing systems are discussed.

46 citations


Journal ArticleDOI
TL;DR: This paper considers scheduling n jobs on one machine to minimize the expected weighted flowtime and the number of late jobs and finds the optimal policies for the preemptive repeat model.
Abstract: This paper considers scheduling n jobs on one machine to minimize the expected weighted flowtime and the number of late jobs. The processing times of the jobs are independent random variables. The machine is subject to failure and repair where the uptimes are exponentially distributed. We find the optimal policies for the preemptive repeat model.

36 citations


Journal ArticleDOI
TL;DR: In this article, the authors established strong consistency (i.e., almost sure convergence) of infinitesimal perturbation analysis (IPA) estimators of derivatives of steady-state means for a broad class of systems.
Abstract: We establish strong consistency (i.e., almost sure convergence) of infinitesimal perturbation analysis (IPA) estimators of derivatives of steady-state means for a broad class of systems. Our results substantially extend previously available results on steady-state derivative estimation via IPA. Our basic assumption is that the process under study is regenerative, but our analysis uses regenerative structure in an indirect way: IPA estimators are typically biased over regenerative cycles, so straightforward differentiation of the regenerative ratio formula does not necessarily yield a valid estimator of the derivative of a steady-state mean. Instead, we use regeneration to pass from unbiasedness over fixed, finite time horizons to convergence as the time horizon grows. This provides a systematic way of extending results on unbiasedness to strong consistency. Given that the underlying process regenerates, we provide conditions under which a certain augmented process is also regenerative. The augmented process includes additional information needed to evaluate derivatives; derivatives of time averages of the original process are time averages of the augmented process. Thus, through this augmentation we are able to apply standard renewal theory results to the convergence of derivatives.

35 citations
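As a toy illustration of the pathwise differentiation idea behind IPA (not the authors' regenerative framework), the sketch below differentiates the Lindley waiting-time recursion of a single-server queue with respect to a service-scale parameter; the queue, the parameterization S_k = theta·X_k, and all constants are assumptions for illustration. For an M/M/1 queue with arrival rate 1 and mean service time theta = 0.5, the steady-state mean wait is theta²/(1 − theta) = 0.5 and its derivative in theta is 3, so the two printed estimates should land near those values:

```python
import random

def ipa_mean_wait(theta, lam, n_customers=200_000, seed=0):
    """Pathwise (IPA) estimation on the Lindley recursion
    W_{k+1} = max(W_k + S_k - A_k, 0), with service times S_k = theta * X_k,
    X_k ~ Exp(1), and interarrival times A_k ~ Exp(lam). Returns estimates
    of the mean wait and of its derivative with respect to theta."""
    rng = random.Random(seed)
    w = dw = 0.0
    total_w = total_dw = 0.0
    for _ in range(n_customers):
        x = rng.expovariate(1.0)
        a = rng.expovariate(lam)
        if w + theta * x - a > 0.0:
            w += theta * x - a
            dw += x              # derivative propagates through the max
        else:
            w = dw = 0.0         # idle period resets wait and derivative
        total_w += w
        total_dw += dw
    return total_w / n_customers, total_dw / n_customers

mean_w, d_mean_w = ipa_mean_wait(theta=0.5, lam=1.0)
print(mean_w, d_mean_w)
```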


Journal ArticleDOI
TL;DR: The connection between separation and strong stationary times was drawn by Aldous and Diaconis (1987) (Advances in Applied Mathematics 8: 69−97) for discrete time chains.
Abstract: Separation is one measure of distance from stationarity for Markov chains. Strong stationary times provide bounds on separation and so aid in the analysis of mixing rates. The precise connection between separation and strong stationary times was drawn by Aldous and Diaconis (1987) (Advances in Applied Mathematics 8: 69−97) for discrete time chains. We develop the corresponding foundational theory for continuous time chains; several new and interesting mathematical issues arise.

32 citations


Journal ArticleDOI
Viên Nguyen
TL;DR: The optimal policy is shown to be a “generalized trunk reservation policy”; in other words, the optimal policy accepts higher-paying customers whenever possible and accepts lower-paying customers only if fewer than c_i servers are busy, where i is the number of busy servers in the overflow queue.
Abstract: This paper discusses an optimal dynamic policy for a queueing system with M servers, no waiting room, and two types of customers. Customer types differ with respect to the reward that is paid on commencement of service, but service times are exponentially distributed with the same mean for both types of customers. The arrival stream of one customer type is generated by a Poisson process, and the other customer type arrives according to the overflow process of an M/M/m/m queue. The objective is to determine a policy for admitting customers to maximize the expected long-run average reward. By posing the problem in the framework of Markov decision processes and exploiting properties of submodular functions, the optimal policy is shown to be a “generalized trunk reservation policy”; in other words, the optimal policy accepts higher-paying customers whenever possible and accepts lower-paying customers only if fewer than c_i servers are busy, where i is the number of busy servers in the overflow queue. Computational issues are also discussed. More specifically, approximations of the overflow process by an interrupted Poisson process and a Poisson process are investigated.

24 citations
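The structure of such a policy can be stated in a few lines. The function below is a hypothetical sketch (the state encoding, argument names, and threshold values are illustrative, not from the paper) of a generalized trunk reservation rule with thresholds c_i indexed by the overflow-queue occupancy:

```python
def admit(customer_type, busy, overflow_busy, M, c):
    """Generalized trunk reservation rule (hypothetical sketch):
    `busy` of the M servers are occupied, `overflow_busy` is the number of
    busy servers in the overflow queue, and c[i] is the admission
    threshold for lower-paying customers when overflow_busy == i."""
    if busy >= M:
        return False                 # all servers busy: block everyone
    if customer_type == "high":
        return True                  # higher payers admitted whenever possible
    return busy < c[overflow_busy]   # lower payers face a state-dependent threshold

# Illustrative thresholds c[i], nonincreasing in the overflow occupancy i.
c = [4, 3, 3, 2, 1]
print(admit("low", busy=2, overflow_busy=3, M=5, c=c))   # 2 < c[3] = 2 is False
print(admit("high", busy=2, overflow_busy=3, M=5, c=c))  # admitted: a server is free
```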



Journal ArticleDOI
TL;DR: In this article, it was shown that stochastic Petri nets and generalized semi-Markov processes have the same modeling power; combining this with known results yields conditions for time-average convergence and convergence in distribution for the marking process.
Abstract: Generalized semi-Markov processes and stochastic Petri nets provide building blocks for specification of discrete event system simulations on a finite or countable state space. The two formal systems differ, however, in the event scheduling (clock-setting) mechanism, the state transition mechanism, and the form of the state space. We have shown previously that stochastic Petri nets have at least the modeling power of generalized semi-Markov processes. In this paper we show that stochastic Petri nets and generalized semi-Markov processes, in fact, have the same modeling power. Combining this result with known results for generalized semi-Markov processes, we also obtain conditions for time-average convergence and convergence in distribution along with a central limit theorem for the marking process of a stochastic Petri net.

22 citations



Journal ArticleDOI
TL;DR: In this article, the loss probabilities of customers in the M^X/GI/1/k and GI/M^X/1/k queues and related queues such as server vacation models are compared with respect to the convex order of several characteristics, for example, batch size, of the arrival or service process.
Abstract: The loss probabilities of customers in the M^X/GI/1/k, GI/M^X/1/k, and related queues such as server vacation models are compared with respect to the convex order of several characteristics, for example, batch size, of the arrival or service process. In the proof, we give a characterization of a truncation expression for a stationary distribution of a finite Markov chain, which is interesting in itself.

17 citations


Journal ArticleDOI
TL;DR: This work finds that, considering the last two stations, the departure process is stochastically faster if the slower station is last, consistent with the “bowl shape” phenomenon that has been observed in serial queueing systems with zero buffer capacity.
Abstract: We consider tandem queueing systems with a general arrival process and exponential service distributions. The queueing system consists of several stations with finite intermediate buffer capacity between the stations. We address the problem of determining the optimal arrangement of the stations. We find that, considering the last two stations, the departure process is stochastically faster if the slower station is last. Our results are consistent with the “bowl shape” phenomenon that has been observed in serial queueing systems with zero buffer capacity.
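A quick simulation can illustrate the arrangement question. The sketch below makes illustrative simplifying assumptions (a saturated first station instead of a general arrival process, blocking-after-service, a one-job buffer) and pushes jobs through a two-station tandem of exponential servers, reporting the average departure epoch under both orderings:

```python
import random

def tandem_mean_departure(rates, n_jobs, buffer=1, seed=0):
    """Push n_jobs through a two-station tandem of exponential servers
    (service rates in `rates`) with `buffer` spaces between the stations
    and blocking when the buffer is full. Station 1 is assumed saturated
    (a job is always waiting), a simplification of the abstract's general
    arrival process. Returns the average departure epoch from station 2."""
    rng = random.Random(seed)
    r1, r2 = rates
    d1 = [0.0] * (n_jobs + 1)    # departure epochs from station 1
    d2 = [0.0] * (n_jobs + 1)    # departure epochs from station 2
    for k in range(1, n_jobs + 1):
        s1 = rng.expovariate(r1)
        s2 = rng.expovariate(r2)
        finish1 = d1[k - 1] + s1
        # job k cannot leave station 1 until job k - buffer - 1 frees a slot
        blocked_until = d2[k - buffer - 1] if k - buffer - 1 >= 1 else 0.0
        d1[k] = max(finish1, blocked_until)
        d2[k] = max(d1[k], d2[k - 1]) + s2
    return sum(d2[1:]) / n_jobs

m_slow_last = tandem_mean_departure((2.0, 1.0), 5000, seed=1)  # slow station last
m_fast_last = tandem_mean_departure((1.0, 2.0), 5000, seed=1)  # fast station last
print(m_slow_last, m_fast_last)
```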


Journal ArticleDOI
TL;DR: A set of stochastic jobs is to be processed on a single machine that is subject to breakdown, and it is demonstrated the existence of a nonpreemptive policy that is optimal in the class of all preemptive ones.
Abstract: A set of stochastic jobs is to be processed on a single machine that is subject to breakdown. All jobs make progress as they are processed in the absence of machine breakdowns. However, breakdowns cause setbacks to (possibly) all jobs in the system, except those that have already been completed. With machines subject only to fairly mild restrictions on this “damage” process, we demonstrate the existence of a nonpreemptive policy that is optimal in the class of all preemptive ones.

Journal ArticleDOI
TL;DR: It is shown that it is better for the faster server to be first when there is no queue capacity, and the optimal order for some systems where both servers have the same average service time and different service distributions is found.
Abstract: In this paper we consider the problem of finding the optimal order for two servers in series when there is no queue capacity. We show that it is better for the faster server to be first. The strength of this conclusion will depend on the strength of the assumption made about the service distribution. We also find the optimal order for some systems where both servers have the same average service time and different service distributions.


Journal ArticleDOI
TL;DR: This paper considers the reliability model where each link fails independently with probability p, the nodes always work, and the network fails if it is not strongly connected, and shows how to use it to obtain the most reliable double-loop networks.
Abstract: A double-loop network G(h1, h2) has n nodes, represented by the n residues modulo n, and 2n links given by i ↦ i + h1, i ↦ i + h2 (mod n), i = 0, 1, …, n − 1. We consider the reliability model where each link fails independently with probability p, the nodes always work, and the network fails if it is not strongly connected. There exists no known polynomial-time algorithm to compute the reliabilities of general double-loop networks. When p is small, the reliability is dominated by the link connectivity. As all strongly connected double-loop networks have link connectivity exactly 2, a finer measure of reliability is needed. In this paper we give such a measure and show how to use it to obtain the most reliable double-loop networks.
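For small networks, the reliability in this model can be estimated by brute-force Monte Carlo, a useful sanity check even though no polynomial-time algorithm is known for exact computation. The parameters n = 8, h1 = 1, h2 = 3 below are illustrative, not from the paper:

```python
import random

def strongly_connected(n, h1, h2, alive):
    """Check strong connectivity of the surviving subgraph of G(h1, h2),
    where alive[(i, h)] says whether the link i -> i + h (mod n) is up."""
    def reach(forward):
        seen = {0}
        stack = [0]
        while stack:
            u = stack.pop()
            for h in (h1, h2):
                v = (u + h) % n if forward else (u - h) % n
                link = (u, h) if forward else (v, h)
                if alive[link] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n
    return reach(True) and reach(False)

def mc_reliability(n, h1, h2, p_fail, trials=5000, seed=0):
    """Monte Carlo estimate of the probability that the double-loop
    network G(h1, h2) stays strongly connected when each of its 2n
    links fails independently with probability p_fail."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        alive = {(i, h): rng.random() > p_fail
                 for i in range(n) for h in (h1, h2)}
        ok += strongly_connected(n, h1, h2, alive)
    return ok / trials

r = mc_reliability(8, 1, 3, p_fail=0.05)
print(r)
```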

Journal ArticleDOI
TL;DR: In this paper, the ergodicity and convergence of the Laplace-Stieltjes transforms in a neighborhood of 0 are verified for two-dimensional versions of the ALOHA and coupled processors models.
Abstract: μ-Geometric ergodicity of two-dimensional versions of the ALOHA and coupled processors models is verified by checking μ-geometric recurrence. Ergodicity and convergence of the Laplace-Stieltjes transforms in a neighborhood of 0 are necessary and sufficient conditions for the first model. The second model is exponential, for which ergodicity suffices to establish the required results.




Journal ArticleDOI
TL;DR: In this paper, the authors estimate the expected reward accumulated up to hitting an absorbing set in a Markov chain, starting from state j. Their method has broader scope than LRM: they can estimate sensitivity to opening arcs.
Abstract: Let x(j) be the expected reward accumulated up to hitting an absorbing set in a Markov chain, starting from state j. Suppose the transition probabilities and the one-step reward function depend on a parameter, and denote by y(j) the derivative of x(j) with respect to that parameter. We estimate y(0) starting from the respective Poisson equations that x = [x(0), x(1),…] and y = [y(0), y(1),…] satisfy. Relative to a likelihood-ratio-method (LRM) estimator, our estimator generally has (much) smaller variance; in a certain sense, it is a conditional expectation of that estimator given x. Unlike LRM, however, we have to estimate certain components of x. Our method has broader scope than LRM: we can estimate sensitivity to opening arcs.

Journal ArticleDOI
TL;DR: In this paper, the authors explore and discuss some graph-theoretic and conditional independence properties of decomposable probabilistic influence diagrams, which are helpful in providing an efficient algorithm for obtaining a posterior decomposable probabilistic influence diagram given the state of observed nodes.
Abstract: Probabilistic influence diagrams are a useful stochastic modeling tool. To calculate probabilities of interest relative to a probabilistic influence diagram efficiently, it is helpful to use an associated decomposable directed graph. We first explore and discuss some graph-theoretic and conditional independence properties of decomposable probabilistic influence diagrams. These properties are helpful in providing an efficient algorithm for obtaining a posterior decomposable probabilistic influence diagram given the state of one or more observed nodes. The connection between Shachter's “sequential creation of conditionally barren nodes” concept and Lauritzen and Spiegelhalter's “moralization and triangulation” algorithm for calculating probabilities relative to a probabilistic influence diagram is made explicit. We also discuss how to use wisely the concepts of “sequential creation of conditionally barren nodes” and “merging nodes” together with the graph-theoretic properties of decomposable directed graphs to compute probabilities relative to probabilistic influence diagrams.

Journal ArticleDOI
TL;DR: This paper analyzes a scheduling system where a fixed number of nonpreemptive jobs is to be processed on multiple parallel processors with different processing speeds and shows that there exists a simple threshold strategy that stochastically minimizes the total delay of all jobs.
Abstract: This paper analyzes a scheduling system where a fixed number of nonpreemptive jobs is to be processed on multiple parallel processors with different processing speeds. Each processor has an exponential processing time distribution, and the processors are ordered in ascending order of their mean processing times. Each job has its own deadline, exponentially distributed with a common rate, independent of the deadlines of other jobs and also independent of job processing times. A job departs the system as soon as either its processing completes or its deadline occurs. We show that there exists a simple threshold strategy that stochastically minimizes the total delay of all jobs. The policy depends on the distributions of processing times and deadlines but is independent of the rate of deadlines. When the rate of the deadline distribution is 0 (no deadlines), the total delay reduces to the flowtime (the sum of completion times of all jobs). If each job has its own probability of being correctly processed, then an extension of this policy stochastically maximizes the total number of correctly processed, nontardy jobs. We discuss possible generalizations and limitations of this result.

Journal ArticleDOI
TL;DR: In this article, an approximation scheme for evaluating Wiener integrals by simulating Brownian motion is studied, and the rate of convergence and numerical results are given, including an application to the heat equation by using the Feynman-Kac formula.
Abstract: An approximation scheme for evaluating Wiener integrals by simulating Brownian motion is studied. The rate of convergence and numerical results are given, including an application to the heat equation by using the Feynman-Kac formula.
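A minimal version of such a scheme is easy to sketch: by the Feynman-Kac representation, the solution of the heat equation u_t = (1/2) u_xx with initial data u(0, ·) = f is u(t, x) = E[f(x + W_t)], which can be estimated by simulating Brownian paths. The step count, path count, and test function below are illustrative choices, not the paper's scheme:

```python
import math
import random

def heat_solution_mc(f, x, t, n_paths=5000, n_steps=50, seed=0):
    """Estimate u(t, x) = E[f(x + W_t)], the Feynman-Kac representation
    of the solution of u_t = (1/2) u_xx with u(0, .) = f, by simulating
    Brownian motion in n_steps increments per path."""
    rng = random.Random(seed)
    dt = t / n_steps
    total = 0.0
    for _ in range(n_paths):
        w = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        total += f(x + w)
    return total / n_paths

# For f(y) = y^2 the exact solution is u(t, x) = x^2 + t, so with
# x = 1 and t = 0.5 the estimate should be close to 1.5.
est = heat_solution_mc(lambda y: y * y, x=1.0, t=0.5)
print(est)
```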

Journal ArticleDOI
TL;DR: This paper considers one-machine scheduling problems, with or without a perfect machine, and random processing times, and derives, among other results, elimination criteria for different classes of cost functions.
Abstract: In this paper we consider one-machine scheduling problems, with or without a perfect machine, and random processing times, and derive, among other results, elimination criteria for different classes of cost functions.

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo sampling plan is proposed for estimating variation in system reliability in response to variation in component reliabilities, which is based on the concept of confidence intervals.
Abstract: Sensitivity analysis, an integral part of virtually every study of system reliability, measures variation in this quantity in response to changes in component reliabilities or in system design. Replacing old components with new ones of higher reliability affects system reliability; as time elapses, system reliability deteriorates under a nonreplacement policy for components. Sampling variation in component reliability estimates induces sampling variation in the corresponding system reliability estimate. Having access to a model that accurately predicts these changes in system behavior allows one to make considerably better informed decisions for maintaining or enhancing performance. This paper presents a method for estimating variation in system reliability in response to variation in component reliabilities. It describes a Monte Carlo sampling plan in which each replication provides sample data that contribute to the estimation of system reliability for each of w sets of distinct component reliabilities. The sets may represent alternative component replacement plans, deteriorating component reliabilities at a succession of time points, or extremal points of simultaneous component reliability interval estimates (Fishman 1987). For purposes of exposition, we focus on s-t reliability but emphasize that the concepts discussed here also apply to other definitions of system reliability. Keywords: coefficients; polynomials; confidence intervals.

Journal ArticleDOI
TL;DR: It is shown that for various assumptions on the distribution of job-processing times of a flow shop, certain scheduling policies following the stochastic analogy of the Earliest Due Date (EDD) rule yield optimal results.
Abstract: In this paper we consider stochastic flow-shop scheduling with reference to certain lateness-related performance measures. We show that for various assumptions on the distribution of job-processing times of a flow shop, certain scheduling policies following the stochastic analogy of the Earliest Due Date (EDD) rule yield optimal results.

Journal ArticleDOI
TL;DR: A single link is considered and a weak convergence result is proved to justify a piecewise-deterministic Markov process approximation to the system, which is a generalization of the Erlang fixed-point approximation for loss networks and is justified via a diverse routing limit theorem.
Abstract: We consider a communication network that can support both wideband video calls and narrowband data traffic. First we consider a single link and prove a weak convergence result to justify a piecewise-deterministic Markov process approximation to the system. We then generalize this approximation to allow priorities and more than one link. This second approximation is a generalization of the Erlang fixed-point approximation for loss networks and is justified via a diverse routing limit theorem.

Journal ArticleDOI
TL;DR: The importance of a component is defined to be the probability that its failure causes the failure of the system; the importance of a component or a module that is part of a system can be derived directly from the role of the component or the module in the failure of the system.