Topic

Markov chain

About: Markov chain is a research topic. Over its lifetime, 51,900 publications have been published within this topic, receiving 1,375,044 citations. The topic is also known as: Markov process & Markov chains.
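A Markov chain is a stochastic process whose next state depends only on the current state, via a fixed transition matrix. As a minimal sketch (the two-state "weather" chain below is a hypothetical example, not drawn from any of the papers on this page), repeatedly applying the transition matrix to an initial distribution converges to the chain's stationary distribution:

```python
# A hypothetical two-state "weather" chain: state 0 = sunny, state 1 = rainy.
# P[i][j] is the probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """One step of the chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]          # start deterministically in state 0
for _ in range(100):
    dist = step(dist, P)

# For this chain the stationary distribution solves pi = pi P,
# giving pi = (5/6, 1/6); the iteration converges to it.
print(dist)
```

Solving pi = pi P by hand here gives pi_1 = 0.2 pi_0, so pi = (5/6, 1/6); any starting distribution converges to it.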


Papers
Journal ArticleDOI
TL;DR: This paper presents a mathematical text suitable for students of engineering and science at the third-year undergraduate level or beyond. It is a book of applicable mathematics that avoids merely listing techniques followed by a few examples, instead explaining why the techniques work.
Abstract: This is a mathematical text suitable for students of engineering and science who are at the third year undergraduate level or beyond. It is a book of applicable mathematics. It avoids the approach of listing only the techniques, followed by a few examples, without explaining why the techniques work. Thus, it provides not only the know-how but also the know-why. Equally, the text has not been written as a book of pure mathematics with a list of theorems followed by their proofs. The authors' aim is to help students develop an understanding of mathematics and its applications. They have refrained from using clichés like "it is obvious" and "it can be shown", which may be true only to a mature mathematician. On the whole, the authors have been generous in writing down all the steps in solving the example problems.

The book comprises ten chapters. Each chapter contains several solved problems clarifying the introduced concepts. Some of the examples are taken from the recent literature and serve to illustrate the applications in various fields of engineering and science. At the end of each chapter, there are assignment problems with two levels of difficulty. A list of references is provided at the end of the book.

This book is the product of a close collaboration between two mathematicians and an engineer. The engineer has been helpful in pinpointing the problems which engineering students encounter in books written by mathematicians.

2,846 citations

Book
01 Jan 1987
TL;DR: This book develops simple Markovian models (Markov chains, Markov jump processes, and Markovian queueing theory), the basic mathematical tools of renewal theory, regenerative processes, and random walks, and special models and methods for single-server queues, many-server queues, and insurance risk and storage models.
Abstract (table of contents): Preface. Simple Markovian Models: Markov Chains; Markov Jump Processes; Queueing Theory at the Markovian Level. Basic Mathematical Tools: Basic Renewal Theory; Regenerative Processes; Further Topics in Renewal Theory and Regenerative Processes; Random Walks. Special Models and Methods: Steady-state Properties of GI/G/1; Explicit Examples in the Theory of Random Walks and Single Server Queues; Multi-Dimensional Methods; Many-server Queues; Conjugate Processes; Insurance Risk, Dam and Storage Models. Selected Background and Notation.
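As a sketch of queueing theory at the Markovian level, consider the M/M/1 queue (chosen here purely as an illustration; the book covers far more general models such as GI/G/1). Arrivals are Poisson with rate lam, service times are exponential with rate mu, and the queue length is a birth-death Markov chain whose stationary distribution is geometric:

```python
# M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu.
# The number in system is a birth-death Markov chain; for rho = lam/mu < 1
# the stationary distribution is pi_n = (1 - rho) * rho**n.
lam, mu = 2.0, 5.0
rho = lam / mu                      # utilization, must be < 1 for stability

def pi(n):
    """Stationary probability of n customers in the system."""
    return (1 - rho) * rho ** n

# Mean number in system, computed two ways: the closed form rho/(1 - rho)
# and a truncation of the series sum_n n * pi_n.
L_closed = rho / (1 - rho)
L_series = sum(n * pi(n) for n in range(1000))
print(L_closed, L_series)
```

With rho = 0.4 the mean number in system is 0.4 / 0.6 = 2/3, and the truncated series agrees to within the (geometrically small) tail.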

2,757 citations

Posted Content
TL;DR: In this article, a framework for estimating generative models via an adversarial process is proposed, in which two models are trained simultaneously: a generator G that captures the data distribution and a discriminator D that estimates the probability that a sample came from the training data rather than from G.
Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.
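The claim that D equals 1/2 everywhere at the solution follows from the form of the optimal discriminator for a fixed generator, D*(x) = p_data(x) / (p_data(x) + p_g(x)). A small numeric sketch (the 1-D Gaussian densities below are illustrative stand-ins, not a trained model):

```python
import math

# For a fixed generator, the optimal discriminator is
#   D*(x) = p_data(x) / (p_data(x) + p_g(x)),
# which equals 1/2 everywhere exactly when p_g matches p_data.
def gaussian(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def d_star(x, p_data, p_g):
    return p_data(x) / (p_data(x) + p_g(x))

p_data = lambda x: gaussian(x, 0.0, 1.0)
p_far  = lambda x: gaussian(x, 3.0, 1.0)   # generator far from the data
p_same = lambda x: gaussian(x, 0.0, 1.0)   # generator matches the data

print(d_star(0.0, p_data, p_far))   # close to 1: a real sample looks real
print(d_star(0.0, p_data, p_same))  # exactly 1/2: D can no longer tell
```

When the generator's density matches the data density, D* is identically 1/2, which is the unique-solution statement in the abstract.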

2,657 citations

Book ChapterDOI
10 Jul 1994
TL;DR: A Q-learning-like algorithm for finding optimal policies in Markov games is described, and its application is demonstrated on a simple two-player game in which the optimal policy is probabilistic.
Abstract: In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.
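The key point, that the optimal policy in a zero-sum game can be probabilistic, can be sketched without the full algorithm (which solves a linear program per state). Below, fictitious play on matching pennies, a hypothetical stand-in for the paper's game, converges in empirical frequencies to the mixed policy (0.5, 0.5); no deterministic policy does as well:

```python
# Matching pennies: the row player wins (+1) if the actions match,
# loses (-1) otherwise. The unique minimax policy is mixed: (0.5, 0.5).
A = [[1, -1],
     [-1, 1]]   # row player's payoff matrix

counts_row = [1, 1]   # empirical action counts (start uniform)
counts_col = [1, 1]

for _ in range(20000):
    # Row best-responds to the column player's empirical mixture.
    total_c = sum(counts_col)
    row_payoffs = [sum(A[i][j] * counts_col[j] / total_c for j in range(2))
                   for i in range(2)]
    counts_row[max(range(2), key=lambda i: row_payoffs[i])] += 1

    # Column best-responds (minimizing the row player's payoff).
    total_r = sum(counts_row)
    col_payoffs = [sum(A[i][j] * counts_row[i] / total_r for i in range(2))
                   for j in range(2)]
    counts_col[min(range(2), key=lambda j: col_payoffs[j])] += 1

policy_row = [c / sum(counts_row) for c in counts_row]
print(policy_row)  # converges toward the mixed policy (0.5, 0.5)
```

Fictitious play is used here only as a simple solver for this one matrix game; the paper's algorithm instead learns Q-values over states and solves the minimax step at each state.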

2,643 citations

Book
01 Dec 2008
TL;DR: Markov Chains and Mixing Times is an introduction to the modern approach to the theory of Markov chains, whose main goal is to determine the rate of convergence of a Markov chain to its stationary distribution as a function of the size and geometry of the state space; it assumes only undergraduate-level probability theory and linear algebra.
Abstract: This book is an introduction to the modern approach to the theory of Markov chains. The main goal of this approach is to determine the rate of convergence of a Markov chain to the stationary distribution as a function of the size and geometry of the state space. The authors develop the key tools for estimating convergence times, including coupling, strong stationary times, and spectral methods. Whenever possible, probabilistic methods are emphasized. The book includes many examples and provides brief introductions to some central models of statistical mechanics. Also provided are accounts of random walks on networks, including hitting and cover times, and analyses of several methods of shuffling cards. As a prerequisite, the authors assume a modest understanding of probability theory and linear algebra at an undergraduate level. ""Markov Chains and Mixing Times"" is meant to bring the excitement of this active area of research to a wide audience.
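The quantity at the center of this theory can be illustrated on a toy chain (a hypothetical 3-state example, not taken from the book): the total variation distance between the chain's distribution at time t and the stationary distribution decays geometrically, at a rate set by the chain's second-largest eigenvalue:

```python
# A symmetric 3-state chain; by symmetry its stationary distribution
# is uniform. Total variation distance to stationarity from a
# worst-case (point-mass) start decays geometrically.
P = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]

pi = [1/3, 1/3, 1/3]

def step(dist):
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def tv(p, q):
    """Total variation distance: half the L1 distance."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

dist = [1.0, 0.0, 0.0]     # worst-case start: a point mass
dists = []
for t in range(10):
    dist = step(dist)
    dists.append(tv(dist, pi))

print(dists)  # strictly decreasing toward 0
```

For this chain the non-stationary component shrinks by a factor of exactly 1/4 per step (the second eigenvalue of P), a small instance of the spectral methods the book develops.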

2,573 citations


Network Information
Related Topics (5)
Estimator
97.3K papers, 2.6M citations
88% related
Probabilistic logic
56K papers, 1.3M citations
87% related
Bounded function
77.2K papers, 1.3M citations
87% related
Optimization problem
96.4K papers, 2.1M citations
86% related
Robustness (computer science)
94.7K papers, 1.6M citations
85% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024         3
2023     1,336
2022     3,183
2021     2,007
2020     2,222
2019     2,294