
Showing papers on "Markov chain published in 1975"



Journal ArticleDOI
TL;DR: For an m-state homogeneous Markov chain whose one-step transition matrix is T, the group inverse, $A^#$, of the matrix $A = I - T$ is shown to play a central role as discussed by the authors.
Abstract: For an m-state homogeneous Markov chain whose one-step transition matrix is T, the group inverse, $A^#$, of the matrix $A = I - T$ is shown to play a central role. For an ergodic chain, it is demon...

354 citations
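
A minimal numerical sketch of the central object, assuming the standard identity $A^\# = (A + W)^{-1} - W$ with $W = e\pi'$ for an ergodic chain (the transition matrix below is hypothetical, not from the paper):

```python
# A sketch, not the paper's code: compute the group inverse A# of A = I - T
# for an ergodic chain via A# = (A + W)^{-1} - W, where W = e pi' is the
# limiting matrix built from the stationary distribution pi.
import numpy as np

T = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])          # hypothetical ergodic chain

eigvals, eigvecs = np.linalg.eig(T.T)       # left eigenvectors of T
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()                              # stationary distribution

W = np.outer(np.ones(3), pi)                # limiting matrix e pi'
A = np.eye(3) - T
A_sharp = np.linalg.inv(A + W) - W          # group inverse of I - T

# The defining identities of the group inverse all hold:
assert np.allclose(A @ A_sharp @ A, A)
assert np.allclose(A_sharp @ A @ A_sharp, A_sharp)
assert np.allclose(A @ A_sharp, A_sharp @ A)
```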


Journal ArticleDOI
M. Reiser1, Hisashi Kobayashi1
TL;DR: This paper shows how open and closed subchains interact with each other in such systems with N servers and L chains, and derives efficient algorithms from the generating function representation.
Abstract: In this paper a recent result of Baskett, Chandy, Muntz, and Palacios is generalized to the case in which customer transitions are characterized by more than one closed Markov chain. Generating functions are used to derive closed-form expressions for the stability condition, the normalization constant, and the marginal distributions. For such a system with N servers and L chains the solutions are considerably more complicated than those for systems with one subchain only. It is shown how open and closed subchains interact with each other in such systems. Efficient algorithms are then derived from our generating function representation.

292 citations
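
As a hedged illustration of what such normalization-constant algorithms compute, here is a minimal sketch of the single-chain special case (Buzen's convolution recursion), which the paper's multi-chain algorithms generalize; the utilizations are hypothetical:

```python
# Sketch of the single-chain convolution algorithm for the normalization
# constants G(0..N) of a closed product-form network (not the paper's
# multi-chain procedure).
def normalization_constants(rho, n_customers):
    """rho[i] is the relative utilization of single-server queue i (assumed)."""
    g = [1.0] + [0.0] * n_customers          # G over the queues folded in so far
    for r in rho:                            # g(n, m) = g(n, m-1) + rho_m g(n-1, m)
        for n in range(1, n_customers + 1):
            g[n] += r * g[n - 1]
    return g

g = normalization_constants(rho=[0.5, 0.8, 0.4], n_customers=5)  # hypothetical
print("throughput at N=5:", g[4] / g[5])     # X(N) = G(N-1)/G(N) per unit demand
```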


Journal ArticleDOI
TL;DR: This paper shows that a previously developed technique for analyzing simulations of GI/G/s queues and Markov chains applies to discrete-event simulations that can be modeled as regenerative processes.
Abstract: This paper shows that a previously developed technique for analyzing simulations of GI/G/s queues and Markov chains applies to discrete-event simulations that can be modeled as regenerative processes. It is possible to address questions of simulation run duration and of starting and stopping simulations because of the existence of a random grouping of observations that produces independent identically distributed blocks in the course of the simulation. This grouping allows one to obtain confidence intervals for a general function of the steady-state distribution of the process being simulated and for the asymptotic cost per unit time. The technique is illustrated with a simulation of a retail inventory distribution system.

271 citations
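
A minimal sketch of the regenerative idea, assuming a toy three-state chain whose returns to state 0 mark the regeneration points (the chain, the function f, and the cycle count are all hypothetical):

```python
# Regenerative method sketch: returns to state 0 cut the run into i.i.d.
# cycles (Y_k, tau_k); the ratio estimator gives the steady-state mean of f
# and the CLT for Z_k = Y_k - r*tau_k gives a confidence interval.
import random, math

P = [[0.5, 0.5, 0.0], [0.25, 0.5, 0.25], [0.2, 0.3, 0.5]]  # hypothetical chain
f = [1.0, 4.0, 9.0]            # function whose steady-state mean we estimate

def step(i):
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[i]):
        acc += p
        if u < acc:
            return j
    return len(P[i]) - 1

Y, tau, state = [], [], 0
for _ in range(10_000):        # collect 10,000 regeneration cycles
    y, t = 0.0, 0
    while True:
        y += f[state]; t += 1
        state = step(state)
        if state == 0:         # back at the regeneration state: cycle ends
            break
    Y.append(y); tau.append(t)

n = len(Y)
r = sum(Y) / sum(tau)                          # ratio estimator
z = [y - r * t for y, t in zip(Y, tau)]        # centered cycle variables
s2 = sum(v * v for v in z) / (n - 1)
half = 1.96 * math.sqrt(s2 / n) / (sum(tau) / n)
print(f"steady-state mean of f: {r:.3f} +/- {half:.3f} (95% CI)")
```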


Journal ArticleDOI
TL;DR: This paper presents Markov chain models for analyzing the extent of memory interference in multiprocessor systems with a crosspoint switch for processor-memory communication; the results predicted by the models are compared with some simulation results and some actual measurements on C.mmp, a multiprocessor system being built at Carnegie-Mellon University.
Abstract: This paper presents Markov chain models for analyzing the extent of memory interference in multiprocessor systems with a crosspoint switch for processor-memory communication. Processor behavior is simplified to an ordered sequence of a memory request followed by a certain amount of processing time. The results predicted by the model are compared with some simulation results and some actual measurements on C.mmp, a multiprocessor system being built at Carnegie-Mellon University.

227 citations
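
As a hedged illustration of the kind of interference model involved (a simplification, not the paper's Markov chain model): in each memory cycle every processor requests a uniformly random module, each module serves one request, and bandwidth is the mean number of distinct modules addressed:

```python
# Simplified memory-interference simulation (assumed parameters; the paper's
# model additionally tracks queued requests and processing time).
import random

def mean_bandwidth(n_proc, n_mem, cycles=100_000):
    busy = 0
    for _ in range(cycles):
        requests = [random.randrange(n_mem) for _ in range(n_proc)]
        busy += len(set(requests))   # distinct modules each serve one request
    return busy / cycles

print(mean_bandwidth(n_proc=4, n_mem=8))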


Journal ArticleDOI
TL;DR: In this paper, the distribution functions of the total amount of precipitation and the largest daily precipitation occurring in an n-day period were derived for the Markov chain-exponential model.
Abstract: General expressions are derived for the distribution functions of the total amount of precipitation and the largest daily precipitation occurring in an n-day period. Two special cases are considered: (i) the probability of occurrence of precipitation on any day in an n-day period is a constant (binomial counting process) and (ii) the probability of occurrence of precipitation on any day depends on whether the previous day was wet or dry (Markov chain counting process). The distribution function for daily precipitation was assumed to be exponential. Analytic expressions are derived for the distribution functions for total precipitation or precipitation greater than a threshold. For the numerical example chosen, the Markov chain-exponential model is slightly superior to the binomial-exponential model. This stochastic model seems to have several advantages over present approaches.

205 citations
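
A minimal Monte Carlo sketch of the Markov chain-exponential model; all parameter values below are hypothetical:

```python
# Wet/dry occurrence follows a two-state Markov chain; wet-day amounts are
# exponential. We estimate the distribution of the n-day total by simulation.
import random

p_wd, p_ww = 0.25, 0.65   # P(wet | yesterday dry), P(wet | yesterday wet) -- assumed
mean_amount = 6.0         # mean daily precipitation on a wet day (mm) -- assumed
n_days, reps = 30, 50_000

def total_precip(n):
    wet, total = False, 0.0
    for _ in range(n):
        wet = random.random() < (p_ww if wet else p_wd)
        if wet:
            total += random.expovariate(1.0 / mean_amount)
    return total

totals = [total_precip(n_days) for _ in range(reps)]
# Empirical distribution function of the 30-day total, e.g. P(total <= 50 mm):
print(sum(t <= 50.0 for t in totals) / reps)
```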


Journal ArticleDOI
TL;DR: In this article, the phase transition on the infinite tree $T_N$ in which every point has exactly $N + 1$ neighbors is studied and the main results ascertain for which ones of these chains there are other Markov random fields with the same conditional probabilities.
Abstract: Phase transition is studied on the infinite tree $T_N$ in which every point has exactly $N + 1$ neighbors. For every assignment of conditional probabilities which are invariant under graph isomorphism there is a Markov chain with these conditional probabilities and the main results ascertain for which ones of these chains there are other Markov random fields with the same conditional probabilities.

201 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied optimum control of random differential equations of the form $\dot x = f^{r(t)}(t,x,u)$, in which $r(t)$ is a jump Markov process, and gave optimality conditions of dynamic programming type and stochastic minimum principles.
Abstract: This paper studies optimum control of random differential equations of the form \[\dot x = f^{r(t)} (t,x,u)\] in which $r(t)$ is a jump Markov process. Optimality conditions of dynamic programming type and stochastic minimum principles are given. The problems posed involve terminal conditions, and transversality conditions are shown to hold.

179 citations



Journal ArticleDOI
TL;DR: In this paper, an objective procedure for the determination of the order of an ergodic Markov chain with a finite number of states is presented, using Akaike's information criterion.
Abstract: Using Akaike's information criterion, we have presented an objective procedure for the determination of the order of an ergodic Markov chain with a finite number of states. The procedure exploits the asymptotic properties of the maximum likelihood ratio statistics and Kullback and Leibler's mean information for the discrimination between two distributions. Numerical illustrations are given, using data from Bartlett (1966), Good and Gover (1967) and some weather records.

160 citations
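
A hedged sketch of the procedure's core step (toy data, not the authors' code): fit chains of order $k = 0, 1, 2$ by maximum likelihood and choose the order minimizing AIC $= -2\log L + 2\,(\text{number of free parameters})$:

```python
# Order selection by AIC for a two-symbol sequence. For an order-k chain on
# m symbols the number of free parameters is m^k (m - 1).
from collections import Counter
from math import log

def aic_markov(seq, order, n_symbols=2):
    windows = [tuple(seq[i:i + order + 1]) for i in range(len(seq) - order)]
    counts = Counter(windows)                 # (context, next-symbol) counts
    ctx = Counter(w[:-1] for w in windows)    # context counts
    loglik = sum(c * log(c / ctx[w[:-1]]) for w, c in counts.items())
    n_params = n_symbols ** order * (n_symbols - 1)
    return -2.0 * loglik + 2.0 * n_params

seq = [0, 1, 1, 0, 1, 1, 1, 0, 0, 1] * 50     # hypothetical two-state record
best = min(range(3), key=lambda k: aic_markov(seq, k))
print("order selected by AIC:", best)
```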


Journal ArticleDOI
TL;DR: The notion of the $(d)$-Markov property was introduced for discrete random fields by R.L. Dobrushin and formulated in the continuous case by E. Nelson, as mentioned in this paper, and it plays a significant role in the theory of Euclidean Bose fields.
Abstract: The notion of the $(d)$-Markov property was introduced for discrete random fields by R.L. Dobrushin [1]. E. Nelson [2] formulated the Markov property in the continuous case and showed that this notion plays a significant role in the theory of Euclidean Bose fields. The attempt to extend Nelson's method to the case of Fermi fields naturally leads to the problem of defining a noncommutative Markov property. (...)

Journal ArticleDOI
TL;DR: A survey of the major results and applications of Markov renewal equations in an informal setting is given in this article, where some real problems are modelled and the lines of attack are indicated; an extensive bibliography is provided to get a further glimpse of the variety of applications.
Abstract: The objective is to survey the major results and applications in an informal setting. The exposition is restricted to finite state spaces; some real problems are modelled and the lines of attack are indicated; and an extensive bibliography is provided to get a further glimpse of the variety of applications. Throughout, the parallels with renewal theory are brought out, and the unity of thought afforded by the formalism of Markov renewal equations is stressed.


Journal ArticleDOI
TL;DR: It is shown that this dependence upon elapsed time can be ignored, if attention is restricted to the distribution of the final sizes of epidemics.
Abstract: For many diseases, the infectiousness of an individual depends upon the time elapsed since his own infection. This feature greatly complicates the analysis of the spread of infection within a population. Here it is shown that this dependence upon elapsed time can be ignored, if attention is restricted to the distribution of the final sizes of epidemics. For every epidemic, there is a corresponding Markov chain which has the same final size distribution. Such Markov chains are constructed (a) for models where there are several types of infectives, but only one type of susceptible individual, and (b) if there are several types of susceptibles, but only one infective type. Some qualitative differences between these models are illustrated.

Journal ArticleDOI
TL;DR: The overall failure process is described exactly and asymptotically for highly reliable sub-systems and an application to process-control computer software is suggested.
Abstract: A system is considered in which switching takes place between sub-systems according to a continuous parameter Markov chain. Failures may occur in Poisson processes in the sub-systems, and in the transitions between subsystems. All failure processes are independent. The overall failure process is described exactly and asymptotically for highly reliable sub-systems. An application to process-control computer software is suggested.
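
A hedged simulation sketch of such a system, with two sub-systems, assumed switching and failure rates, and transition failures omitted for brevity:

```python
# A continuous-time Markov chain switches between sub-systems; each sub-system
# generates failures in its own Poisson process. All rates below are assumed.
import random

Q = [[-1.0, 1.0], [0.5, -0.5]]       # hypothetical switching generator
lam = [0.01, 0.05]                   # hypothetical sub-system failure rates

def failure_count(horizon=1000.0):
    t, state, count = 0.0, 0, 0
    while t < horizon:
        dwell = min(random.expovariate(-Q[state][state]), horizon - t)
        s = random.expovariate(lam[state])   # failure epochs in this sojourn
        while s < dwell:
            count += 1
            s += random.expovariate(lam[state])
        t += dwell
        state = 1 - state                    # two sub-systems: alternate
    return count

print(failure_count())                       # one realization of the count
```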

Journal ArticleDOI
01 Jul 1975
TL;DR: In this paper, two sets of conditions on the Q-matrix of an irreducible Markov process on a countably infinite state space were found, which ensure that Q is regular; the first set also implies that the process is ergodic, the second that it is recurrent.
Abstract: Two sets of conditions are found on the Q-matrix of an irreducible Markov process on a countably infinite state space which ensure that Q is regular; the first set also implies that the Markov process is ergodic, the second that it is recurrent. If the process is not irreducible, the conditions still imply regularity, and then either non-dissipativity or ‘ultimate recurrence’ (reducible analogues of ergodicity and recurrence) of the process. Conditions sufficient for ergodicity or recurrence of the process in the non-regular and regular case are also given. The results parallel (and use) results for discrete time Markov chains, and the known discrete time recurrence condition is extended to the reducible case. The conditions are illustrated by a competition process example.

Journal ArticleDOI
TL;DR: In this paper, a stochastic minimum principle whose adjoints satisfy deterministic integral equations is defined and shown to be necessary and sufficient for optimality, along with dynamic programming optimality conditions.
Abstract: Control of stochastic differential equations of the form $\dot{x}=f^{r(t)}(t,x,u)$, in which $r(t)$ is a finite-state Markov process, is discussed. Dynamic programming optimality conditions are shown to be necessary and sufficient for optimality. A stochastic minimum principle whose adjoints satisfy deterministic integral equations is defined and shown to be necessary and sufficient for optimality.

Journal ArticleDOI
TL;DR: In this article, a non-stationary Bayesian dynamic decision model with general state, action and parameter spaces is considered, and it is shown that this model can be reduced to a non-Markovian (resp. Markovian) decision model with completely known transition probabilities.
Abstract: We consider a non-stationary Bayesian dynamic decision model with general state, action and parameter spaces. It is shown that this model can be reduced to a non-Markovian (resp. Markovian) decision model with completely known transition probabilities. Under rather weak convergence assumptions on the expected total rewards some general results are presented concerning the restriction to deterministic generalized Markov policies, the criteria of optimality and the existence of Bayes policies. These facts are based on the above transformations and on results of Hinderer and Schäl.

Journal ArticleDOI
TL;DR: English is modeled as a Markov source and the Viterbi algorithm is used to do maximum a posteriori sequence estimation on the output of an optical character reader (OCR).
Abstract: The results of an experiment are described in which contextual information is used to improve the performance of an optical character reader when reading English text. Specifically, English is modeled as a Markov source and the Viterbi algorithm is used to do maximum a posteriori sequence estimation on the output of an optical character reader (OCR).
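
A minimal sketch of the decoding step, with a toy three-letter alphabet and assumed transition and confusion probabilities (not the experiment's tables):

```python
# Viterbi decoding of OCR output: a Markov source model for the text plus
# per-character confusion likelihoods; returns the MAP character sequence.
from math import log

states = ["a", "i", "o"]
trans = {"a": {"a": 0.2, "i": 0.5, "o": 0.3},   # Markov source P(next | current)
         "i": {"a": 0.4, "i": 0.2, "o": 0.4},
         "o": {"a": 0.5, "i": 0.3, "o": 0.2}}
emit = {"a": {"a": 0.8, "i": 0.1, "o": 0.1},    # OCR confusion P(read | true)
        "i": {"a": 0.1, "i": 0.7, "o": 0.2},
        "o": {"a": 0.2, "i": 0.2, "o": 0.6}}

def viterbi(observed):
    # delta[s]: log-probability of the best path ending in true character s.
    delta = {s: log(1.0 / len(states)) + log(emit[s][observed[0]]) for s in states}
    backptrs = []
    for o in observed[1:]:
        new_delta, back = {}, {}
        for s in states:
            prev = max(states, key=lambda r: delta[r] + log(trans[r][s]))
            back[s] = prev
            new_delta[s] = delta[prev] + log(trans[prev][s]) + log(emit[s][o])
        delta = new_delta
        backptrs.append(back)
    last = max(states, key=delta.get)           # best final state, then trace back
    path = [last]
    for back in reversed(backptrs):
        path.append(back[path[-1]])
    return "".join(reversed(path))

print(viterbi("aio"))                           # MAP estimate of the true text
```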


Journal ArticleDOI
TL;DR: In this paper, a Markov approximation to the propagation of waves in an extended, irregular medium is discussed in an astrophysical context, and a new derivation is presented which is simple and which shows that the assumption of Gaussian statistics used by previous authors is irrelevant.
Abstract: The Markov approximation to the propagation of waves in an extended, irregular medium is discussed in an astrophysical context. A new derivation is presented which is simple and which shows that the assumption of Gaussian statistics used by previous authors is irrelevant. We discuss the relevance of the approximation and show that it may apply in many situations of interest, including interstellar scintillations of pulsar signals. The approximation does not require the assumption of weak scattering or Gaussian correlation functions. The Markov equation for the angular spectrum is particularly simple, and solutions are discussed for typical turbulence spectra. It is found that the equation for the angular spectrum is very nearly that used by previous authors, and the present discussion shows that these results are much more general than previously thought. A possible observational test for distinguishing between Gaussian and power-law interstellar density spectra is discussed.


Journal ArticleDOI
TL;DR: New improved bounds on the optimal return function in finite state and action, infinite horizon, discounted stationary Markov decision chains, and improved tests for suboptimal decisions are developed by solving a single-constraint, bounded-variable linear program.
Abstract: This paper develops new improved bounds on the optimal return function in finite state and action, infinite horizon, discounted stationary Markov decision chains. The bounds are obtained by solving a single-constraint, bounded-variable linear program. They can be used for algorithmic termination criteria and improved tests for suboptimal decisions. We show how to implement these tests so that little additional computational effort is required. We consider several transformations that can be used to convert a process into an equivalent one that may be easier to solve. We examine whether the transformations reduce the spectral radius and/or the norm (maximum row sum) of the process. Gauss-Seidel iteration and Jacobi iteration are shown to be special cases of the general transformations. Gauss-Seidel iteration is given additional consideration. Another special case not only preserves equality of the row sums and sparsity but, when applicable, can dramatically reduce the norm: it reduces each element in a column by that column's smallest element. Several possible computational approaches are applied to a small numerical example.
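
For orientation, a sketch of the classical MacQueen-style value-iteration bounds that bounds of this kind refine (the toy MDP data are assumed; the paper's own bounds come from a linear program and are tighter):

```python
# After each value-iteration sweep, the change d = v_new - v brackets the
# optimal return: v_new + beta/(1-beta) min(d) <= v* <= v_new + beta/(1-beta) max(d).
import numpy as np

beta = 0.9                                       # discount factor (assumed)
r = np.array([[1.0, 0.5],                        # r[s, a], hypothetical rewards
              [0.0, 2.0]])
P = np.array([[[0.8, 0.2], [0.3, 0.7]],          # P[s, a, s'], hypothetical
              [[0.5, 0.5], [0.1, 0.9]]])

v = np.zeros(2)
for sweep in range(1000):
    v_new = (r + beta * (P @ v)).max(axis=1)     # one value-iteration sweep
    d = v_new - v
    lower = v_new + beta / (1 - beta) * d.min()  # elementwise bounds on v*
    upper = v_new + beta / (1 - beta) * d.max()
    v = v_new
    if (upper - lower).max() < 1e-8:             # bound gap as a stopping test
        break
print(sweep, lower, upper)
```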


Journal ArticleDOI
TL;DR: A lower bound on the minimal mean-square error in estimating nonlinear Markov processes is presented, based on the Van Trees version of the Cramér-Rao inequality.
Abstract: A lower bound on the minimal mean-square error in estimating nonlinear Markov processes is presented. The bound holds for causal and noncausal filtering. The derivation is based on the Van Trees version of the Cramér-Rao inequality.
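
For reference, a scalar form of the Van Trees (Bayesian Cramér-Rao) inequality on which such derivations rest:

```latex
\mathbb{E}\big[(\hat\theta - \theta)^2\big] \;\ge\;
\frac{1}{\mathbb{E}\!\left[J_D(\theta)\right] + J_P}, \qquad
J_D(\theta) = \mathbb{E}\!\left[\Big(\tfrac{\partial}{\partial\theta}\log p(x \mid \theta)\Big)^{2} \,\Big|\, \theta\right], \quad
J_P = \mathbb{E}\!\left[\Big(\tfrac{\partial}{\partial\theta}\log p(\theta)\Big)^{2}\right]
```

where the outer expectations run over both the observation $x$ and the random parameter $\theta$.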

Journal ArticleDOI
TL;DR: In this article, the authors describe the nature of conflict and communication by answering the following question: How does conflict function in the process of achieving group consensus? Transcriptions of classroom groups were subjected to a Markov statistical analysis.
Abstract: Previous research on conflict emanates from a variety of theoretical perspectives and yields inconsistent conclusions. The purpose of this study was to describe the nature of conflict and communication by answering the following question: How does conflict function in the process of achieving group consensus? Transcriptions of classroom groups were subjected to a Markov statistical analysis. The results of the study indicated that phases of conflict are present throughout group interaction. Three phases were discovered and designated as interpersonal conflict, confrontation, and substantive conflict. During the interpersonal conflict phase, conflict is indirect and probably stems from individual differences. While conflict is direct and most abundant during the confrontation phase, it functions positively (i.e., generates interpretation) and facilitates decision making during the substantive conflict phase. Results from this analysis suggest numerous implications concerning the nature of conflict and communication.

Journal ArticleDOI
TL;DR: In this article, Markov mixture models combine a Markov model for transitions between low and normal streamflow states with a mixture model blending two normal subpopulations for generating synthetic streamflow records with long and severe droughts.
Abstract: Markov mixture models combine a Markov model for transitions between low and normal streamflow states with a mixture model blending two normal subpopulations. The models are particularly effective for generating synthetic streamflow records with long and severe droughts. Their use in a hypothetical planning problem illustrates the application of a set of modeling precepts.
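
A hedged generator sketch in the spirit of the model; the persistence probabilities and subpopulation statistics below are hypothetical:

```python
# A two-state chain switches between "low" and "normal" regimes; each regime
# draws flows from its own normal subpopulation.
import random

p_stay = {"low": 0.8, "normal": 0.9}                  # assumed persistence
pops = {"low": (40.0, 8.0), "normal": (100.0, 20.0)}  # assumed (mean, sd) flows

def synthetic_flows(n, state="normal"):
    flows = []
    for _ in range(n):
        if random.random() > p_stay[state]:           # Markov transition
            state = "low" if state == "normal" else "normal"
        mu, sigma = pops[state]
        flows.append(max(0.0, random.gauss(mu, sigma)))  # truncate at zero
    return flows

print(synthetic_flows(12))     # one synthetic year of monthly flows
```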

Journal ArticleDOI
TL;DR: Performance in terms of mean-square reconstruction error versus bit rate can be shown to parallel the theoretical rate distortion function for the first-order Markov process, exceeding it by about 0.6 bits/sample at low bit rates.
Abstract: Predictive coders have been suggested for use as analog data compression devices. Exact expressions for reconstructed signal error have been rare in the literature; in fact, most results reported in the literature are based on the assumption of Gaussian statistics for prediction error. Predictive coding of first-order Gaussian Markov sequences is considered in this paper. A numerical iteration technique is used to solve for the prediction error statistics, expressed as an infinite series in terms of Hermite polynomials. Several interesting properties of predictive coding are thereby demonstrated. First, prediction error is in fact close to Gaussian, even for the binary quantizer. Second, quantizer levels may be optimized at each iteration according to the calculated density. Finally, the existence of correlation between successive quantizer outputs is shown. Using the series solutions described above, performance in terms of mean-square reconstruction error versus bit rate can be shown to parallel the theoretical rate distortion function for the first-order Markov process, exceeding it by about 0.6 bits/sample at low bit rates.
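
A minimal simulation sketch of predictive coding of a first-order Gauss-Markov sequence with a binary quantizer (parameters assumed; the paper instead derives the error statistics by numerical iteration):

```python
# DPCM loop for an AR(1) Gauss-Markov source with a one-bit quantizer.
import random

a, step, n = 0.9, 0.8, 100_000      # AR coefficient, quantizer level (assumed)

x_prev, xhat_prev, sq_err = 0.0, 0.0, 0.0
for _ in range(n):
    x = a * x_prev + random.gauss(0.0, 1.0)   # source sample
    e = x - a * xhat_prev                     # prediction error at the coder
    q = step if e >= 0.0 else -step           # binary quantizer output
    xhat = a * xhat_prev + q                  # decoder's reconstruction
    sq_err += (x - xhat) ** 2
    x_prev, xhat_prev = x, xhat

print("mean-square reconstruction error:", sq_err / n)
```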

Journal ArticleDOI
TL;DR: In this article, eight algorithms are considered for the computation of the stationary distribution of a finite Markov chain with associated probability transition matrix P. The recommended algorithm is based on solving l´(I-P+eu)=u, where e is the column vector of ones and u´ is a row vector satisfying u´e ≠ 0.
Abstract: Eight algorithms are considered for the computation of the stationary distribution l´ of a finite Markov chain with associated probability transition matrix P. The recommended algorithm is based on solving l´(I—P+eu)=u, where e is the column vector of ones and u´ is a row vector satisfying u´e ≠0.An error analysis is presented for any such u including the choices u= ejP and u=e´j where ej is the jth row of the identity matrix. Computationalcomparisons between five of the algorithms are made based on twenty 8 x 8, twenty 20 x 20, and twenty 40 x 40 transition matrices. The matrix (I—P+eu)−1 is shown to be a non-singular generalized inverse of I—P when the unit root of P is simple and ue ≠ 0. A simple closed form expression is obtained for the Moore-Penrose inverse of I—P whenI—P has nullity one
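
A minimal sketch of the recommended algorithm with the choice $u' = e_1'$ (the example matrix is hypothetical):

```python
# Solve pi'(I - P + e u') = u'. Since (I - P)e = 0, the solution automatically
# satisfies pi'e = 1, so no separate normalization step is needed.
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])            # hypothetical transition matrix

m = P.shape[0]
u = np.zeros(m); u[0] = 1.0                   # u' = e_1', so u'e = 1 != 0
M = np.eye(m) - P + np.outer(np.ones(m), u)   # I - P + e u'
pi = np.linalg.solve(M.T, u)                  # pi' M = u'  <=>  M' pi = u
print(pi, pi.sum())                           # stationary distribution; sums to 1
```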

Journal ArticleDOI
TL;DR: This study along with other open system studies suggests that Markov chain models are more generally adequate than has been concluded from closed cohort analyses.
Abstract: The principal purposes of this paper are to introduce a more general area of application for a refreshing approach to occupational mobility (open systems), to illustrate the general open system approach with two currently existing alternative models, including an explication of the two models' distinct representations of the process, and to test each model's predictive accuracy for a given job system. The approach focuses on intragenerational occupational mobility in a continuously operative job system. One model views mobility in terms of manflows, the other as interrelated job vacancy moves. Application is to an internal labor market: a state police system. The data are continuous for 43 years (1927-1970). Both models are found to be quite accurate in predictions once stationarity was approximated (1949-1970). This study, along with other open system studies, suggests that Markov chain models are more generally adequate than has been concluded from closed cohort analyses. The models have general applicability to job systems, whether or not formally organized.