scispace - formally typeset
Topic

Markov process

About: Markov process is a research topic. Over its lifetime, 29,777 publications have been published within this topic, receiving 738,279 citations.


Papers

Open access · Book
15 Apr 1994
Abstract: From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision process models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete state spaces, while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a "theorem-proof" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk-sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality.
In addition, a Bibliographic Remarks section in each chapter comments on relevant historical references.
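The optimality (Bellman) equation at the center of the book's framework can be illustrated with a small value-iteration sketch. This is not Puterman's code; the two-state, two-action MDP below uses invented transition probabilities and rewards purely for illustration:

```python
import numpy as np

# Hypothetical two-state, two-action MDP (all numbers invented).
# P[a][s][s'] : transition probabilities, R[a][s] : expected rewards.
P = np.array([
    [[0.9, 0.1],   # action 0
     [0.2, 0.8]],
    [[0.5, 0.5],   # action 1
     [0.6, 0.4]],
])
R = np.array([
    [1.0, 0.0],    # action 0
    [0.5, 2.0],    # action 1
])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator
#   V(s) <- max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)        # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)          # greedy policy w.r.t. the converged values
print("Optimal values:", V)
print("Greedy policy:", policy)
```

The fixed point of this iteration satisfies the Bellman equation, which is the common thread the book uses across optimality criteria.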


11,593 Citations


Journal Article · DOI: 10.2307/1912559
01 Mar 1989 · Econometrica
Abstract: This paper proposes a very tractable approach to modeling changes in regime. The parameters of an autoregression are viewed as the outcome of a discrete-state Markov process. For example, the mean growth rate of a nonstationary series may be subject to occasional, discrete shifts. The econometrician is presumed not to observe these shifts directly, but instead must draw probabilistic inference about whether and when they may have occurred based on the observed behavior of the series. The paper presents an algorithm for drawing such probabilistic inference in the form of a nonlinear iterative filter.
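The core idea — an unobserved two-state Markov chain shifting the mean of a series, recovered by a recursive nonlinear filter — can be sketched as follows. This is a simplified illustration, not Hamilton's exact specification; the transition matrix, regime means, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime model: y_t = mu[s_t] + eps_t, eps_t ~ N(0, sigma^2),
# with s_t an unobserved two-state Markov chain (parameters invented).
Ptrans = np.array([[0.95, 0.05],
                   [0.10, 0.90]])   # Ptrans[i, j] = P(s_{t+1}=j | s_t=i)
mu = np.array([0.5, -1.0])          # regime-dependent mean growth
sigma = 1.0

# Simulate the hidden regime chain and the observed series.
T = 300
s = np.zeros(T, dtype=int)
for t in range(1, T):
    s[t] = rng.choice(2, p=Ptrans[s[t - 1]])
y = mu[s] + sigma * rng.standard_normal(T)

def gauss_pdf(x, m, sd):
    return np.exp(-0.5 * ((x - m) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Nonlinear iterative filter: recursively update P(s_t | y_1..y_t).
filt = np.full(2, 0.5)              # prior over the initial regime
filtered = np.empty((T, 2))
for t in range(T):
    pred = filt @ Ptrans if t > 0 else filt   # one-step regime prediction
    lik = gauss_pdf(y[t], mu, sigma)          # likelihood under each regime
    post = pred * lik
    filt = post / post.sum()                  # normalization is the nonlinear step
    filtered[t] = filt
```

Each pass combines the predicted regime probabilities with the observation likelihood and renormalizes, which is what makes the filter nonlinear despite the linear Markov dynamics.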


Topics: Markov process (53%)

8,684 Citations


Open access · Book
Sean P. Meyn, Richard L. Tweedie
01 Jan 1993
Abstract: Meyn & Tweedie is back! The bible on Markov chains in general state spaces has been brought up to date to reflect developments in the field since 1996 - many of them sparked by publication of the first edition. The pursuit of more efficient simulation algorithms for complex Markovian models, or algorithms for computation of optimal policies for controlled Markov models, has opened new directions for research on Markov chains. As a result, new applications have emerged across a wide range of topics including optimisation, statistics, and economics. New commentary and an epilogue by Sean Meyn summarise recent developments and references have been fully updated. This second edition reflects the same discipline and style that marked out the original and helped it to become a classic: proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background.


Topics: Markov chain (57%), Markov process (55%), Markov model (54%)

5,655 Citations



Open access · Journal Article · DOI: 10.1109/MASSP.1986.1165342
01 Jan 1986 · IEEE ASSP Magazine
Abstract: The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition.


  • Figure 6. Illustration of the computation required for the calculation of the joint event that the system is in state q_i at time t and state q_j at time t+1. This event occurs with probability α_t(i) (which accounts for the path terminating in state q_i at time t), times a_ij · b_j(O_{t+1}) (which accounts for the local transition from state q_i to state q_j), times β_{t+1}(j) (which accounts for the path being in state j at time t+1 and then being unconstrained until the end of the observation sequence).
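The joint-event probability described in the Figure 6 caption — α_t(i) · a_ij · b_j(O_{t+1}) · β_{t+1}(j) — can be computed directly from the forward and backward passes. The sketch below uses an invented 2-state, 2-symbol HMM; only the recursions and the joint-event formula follow the tutorial:

```python
import numpy as np

# Hypothetical 2-state, 2-symbol HMM (parameters invented for illustration).
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])        # A[i, j] = P(q_{t+1}=j | q_t=i)
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # B[j, k] = P(observe symbol k | state j)
pi = np.array([0.6, 0.4])        # initial state distribution
obs = [0, 1, 0, 0]               # observation sequence O
T, N = len(obs), len(pi)

# Forward pass: alpha[t, i] = P(O_1..O_t, q_t = i)
alpha = np.zeros((T, N))
alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

# Backward pass: beta[t, j] = P(O_{t+1}..O_T | q_t = j)
beta = np.ones((T, N))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

# Joint event of Figure 6, normalized by P(O):
#   xi[t, i, j] = alpha_t(i) * a_ij * b_j(O_{t+1}) * beta_{t+1}(j) / P(O)
prob_O = alpha[-1].sum()
xi = (alpha[:-1, :, None] * A[None] *
      (B[:, obs[1:]].T * beta[1:])[:, None, :]) / prob_O
```

Summing xi[t] over all state pairs recovers 1 at every t, since the terms partition P(O) — the same bookkeeping that drives Baum-Welch re-estimation of the model parameters.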
Topics: Markov model (71%), Variable-order Markov model (69%), Markov chain (68%)

4,293 Citations


Performance Metrics
No. of papers in the topic in previous years
Year  Papers
2022  8
2021  918
2020  918
2019  943
2018  1,023
2017  1,160

Top Attributes


Topic's top 5 most impactful authors

Vikram Krishnamurthy

85 papers, 3.1K citations

François Dufour

77 papers, 1K citations

Kishor S. Trivedi

62 papers, 4.1K citations

Peng Shi

60 papers, 3.7K citations

Robert J. Elliott

50 papers, 1K citations

Network Information
Related Topics (5)
Markov chain

51.9K papers, 1.3M citations

95% related
Stochastic process

31.2K papers, 898.7K citations

95% related
Random variable

29.1K papers, 674.6K citations

91% related
Markov model

19.2K papers, 618.1K citations

90% related
Probability distribution

40.9K papers, 1.1M citations

90% related