
Showing papers on "Markov chain published in 2018"


Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, a self-attention based sequential model (SASRec) is proposed, which uses an attention mechanism to identify which items are 'relevant' from a user's action history and uses them to predict the next item.
Abstract: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the 'context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are 'relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences.
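As a rough illustration of the core idea (a sketch, not the paper's actual SASRec architecture — the toy embeddings, dimensions, and the `attend_and_predict` helper are invented here), scaled dot-product attention over a user's action history can score candidate next items:

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, d = 10, 8
item_emb = rng.normal(size=(n_items, d))       # toy item embeddings

def attend_and_predict(history):
    """Score candidate next items by attending over the action history."""
    H = item_emb[history]                      # (t, d) embeddings of past actions
    q = H[-1]                                  # query: the most recent action
    logits = H @ q / np.sqrt(d)                # scaled dot-product attention
    w = np.exp(logits - logits.max())
    w /= w.sum()                               # attention weights over the history
    context = w @ H                            # weighted summary of the history
    return item_emb @ context                  # similarity score for every item

history = [3, 7, 2]
scores = attend_and_predict(history)
next_item = int(np.argmax(scores))
```

In the full model these attention weights are learned end to end, which is what lets SASRec behave like an MC on sparse data (peaked attention on recent items) and like an RNN on dense data (spread-out attention).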

1,202 citations


Journal ArticleDOI
TL;DR: An overview of the MSM field to date is presented for a general audience as a timeline of key developments in the field, and the current frontiers of methods development are highlighted, as well as exciting applications in experimental design and drug discovery.
Abstract: Markov state models (MSMs) are a powerful framework for analyzing dynamical systems, such as molecular dynamics (MD) simulations, that have gained widespread use over the past several decades. This perspective offers an overview of the MSM field to date, presented for a general audience as a timeline of key developments in the field. We sequentially address early studies that motivated the method, canonical papers that established the use of MSMs for MD analysis, and subsequent advances in software and analysis protocols. The derivation of a variational principle for MSMs in 2013 signified a turning point from expertise-driven MSM building to a systematic, objective protocol. The variational approach, combined with best practices for model selection and open-source software, enabled a wide range of MSM analyses for applications such as protein folding and allostery, ligand binding, and protein–protein association. To conclude, the current frontiers of methods development are highlighted, as well as exciting applications in experimental design and drug discovery.

555 citations


Journal ArticleDOI
TL;DR: A deep learning framework that automates construction of Markov state models from MD simulation data is introduced that performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.
Abstract: There is an increasing demand for computing the relevant structures, equilibria, and long-timescale kinetics of biomolecular processes, such as protein-drug binding, from high-throughput molecular dynamics simulations. Current methods employ transformation of simulated coordinates into structural features, dimension reduction, clustering the dimension-reduced data, and estimation of a Markov state model or related model of the interconversion rates between molecular structures. This handcrafted approach demands a substantial amount of modeling expertise, as poor decisions at any step will lead to large modeling errors. Here we employ the variational approach for Markov processes (VAMP) to develop a deep learning framework for molecular kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire mapping from molecular coordinates to Markov states, thus combining the whole data processing pipeline in a single end-to-end framework. Our method performs equally or better than state-of-the-art Markov modeling methods and provides easily interpretable few-state kinetic models.

474 citations


Journal ArticleDOI
TL;DR: This article provides a very basic introduction to MCMC sampling, describing what MCMC is and what it can be used for, with simple illustrative examples.
Abstract: Markov chain Monte Carlo (MCMC) is an increasingly popular method for obtaining information about distributions, especially for estimating posterior distributions in Bayesian inference. This article provides a very basic introduction to MCMC sampling. It describes what MCMC is, and what it can be used for, with simple illustrative examples. Highlighted are some of the benefits and limitations of MCMC sampling, as well as different approaches to circumventing the limitations most likely to trouble cognitive scientists.
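A minimal illustration of what such a sampler looks like — a random-walk Metropolis–Hastings chain targeting a standard normal; all names and settings here are chosen for the example, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    # unnormalized log-density of the target: a standard normal (toy posterior)
    return -0.5 * x * x

def metropolis_hastings(n_steps=50_000, step=1.0):
    x = 0.0
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.normal()          # symmetric random-walk proposal
        # accept with probability min(1, target(prop) / target(x))
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return np.array(samples)

s = metropolis_hastings()
# the sample mean and standard deviation should be close to 0 and 1
```

Because only the ratio of target densities is needed, the normalizing constant of the posterior never has to be computed — the property that makes MCMC so useful for Bayesian inference.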

360 citations


Journal ArticleDOI
TL;DR: This work considers how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries.
Abstract: This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme-a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets-all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment.

262 citations


Book
12 Jul 2018
TL;DR: Thompson sampling as discussed by the authors is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance between exploiting what is known to maximize immediate performance and investing to accumulate new information that may improve future performance.
Abstract: Thompson sampling is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance between exploiting what is known to maximize immediate performance and investing to accumulate new information that may improve future performance. The algorithm addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use. This tutorial covers the algorithm and its application, illustrating concepts through a range of examples, including Bernoulli bandit problems, shortest path problems, product recommendation, assortment, active learning with neural networks, and reinforcement learning in Markov decision processes. Most of these problems involve complex information structures, where information revealed by taking an action informs beliefs about other actions. We will also discuss when and why Thompson sampling is or is not effective and relations to alternative algorithms.
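The Bernoulli bandit case mentioned above is small enough to sketch directly (toy arm probabilities and Beta(1, 1) priors, not from the tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = [0.3, 0.5, 0.7]                 # hidden success probability of each arm
alpha = np.ones(3)                       # Beta posterior parameters per arm,
beta = np.ones(3)                        # starting from a uniform Beta(1, 1) prior

pulls = np.zeros(3, dtype=int)
for _ in range(5000):
    theta = rng.beta(alpha, beta)        # sample one plausible value per arm
    arm = int(np.argmax(theta))          # play the arm that looks best this round
    reward = rng.uniform() < true_p[arm] # Bernoulli reward
    alpha[arm] += reward                 # conjugate posterior update
    beta[arm] += 1 - reward
    pulls[arm] += 1
```

Sampling from the posterior, rather than acting on its mean, is what produces the exploration: arms with uncertain estimates occasionally draw a high `theta` and get tried, while clearly inferior arms are abandoned. After a few thousand rounds the best arm (index 2 here) should dominate the pull counts.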

257 citations


Journal ArticleDOI
TL;DR: A new supervised classification algorithm for remotely sensed hyperspectral image (HSI) which integrates spectral and spatial information in a unified Bayesian framework and achieves better performance on one synthetic data set and two benchmark HSI data sets in a number of experimental settings.
Abstract: This paper presents a new supervised classification algorithm for remotely sensed hyperspectral image (HSI) which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions using a patch-wise training strategy to better use the spatial information. Next, spatial information is further considered by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent and update the class labels of all pixel vectors using an $\alpha$-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed method achieves better performance on one synthetic data set and two benchmark HSI data sets in a number of experimental settings.

257 citations


Journal ArticleDOI
TL;DR: The method of multiple Lyapunov functions and the structure of the semi-Markov process provide sufficient conditions for stochastic asymptotic stability in the large for semi-Markov switched stochastic systems without the constraint of bounded transition rates.

203 citations


Journal ArticleDOI
TL;DR: Some novel sufficient conditions are obtained to guarantee that the closed-loop system reaches a specified cost value under the designed jumping state feedback control law in terms of linear matrix inequalities.
Abstract: This paper is concerned with the guaranteed cost control problem for a class of Markov jump discrete-time neural networks (NNs) with event-triggered mechanism, asynchronous jumping, and fading channels. The Markov jump NNs are introduced to be close to reality, where the modes of the NNs and guaranteed cost controller are determined by two mutually independent Markov chains. The asynchronous phenomenon is considered, which increases the difficulty of designing the required mode-dependent controller. The event-triggered mechanism is designed by comparing the relative measurement error with the last triggered state during the process of data transmission, which is used to eliminate dispensable transmission and reduce the networked energy consumption. In addition, signal fading is considered to account for the effect of signal reflection and shadowing in wireless networks, which is modeled by the novel Rice fading model. Some novel sufficient conditions are obtained to guarantee that the closed-loop system reaches a specified cost value under the designed jumping state feedback control law in terms of linear matrix inequalities. Finally, some simulation results are provided to illustrate the effectiveness of the proposed method.

199 citations


Book ChapterDOI
TL;DR: This chapter collects several probabilistic tools that have proven to be useful in the analysis of randomized search heuristics, including classic material such as the Markov, Chebyshev, and Chernoff inequalities, but also lesser-known topics such as stochastic domination and coupling.
Abstract: This chapter collects several probabilistic tools that have proven to be useful in the analysis of randomized search heuristics. This includes classic material such as the Markov, Chebyshev, and Chernoff inequalities, but also lesser-known topics such as stochastic domination and coupling, and Chernoff bounds for geometrically distributed random variables and for negatively correlated random variables. Most of the results presented here have appeared previously, but some only in recent conference publications. While the focus is on presenting tools for the analysis of randomized search heuristics, many of these may be useful as well for the analysis of classic randomized algorithms or discrete random structures.
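The first of these classic tools, Markov's inequality — P(X ≥ a) ≤ E[X]/a for nonnegative X — is easy to check numerically (a toy illustration, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)   # nonnegative X with E[X] = 2

for a in (1.0, 2.0, 5.0, 10.0):
    empirical = (x >= a).mean()                # estimate of P(X >= a)
    bound = x.mean() / a                       # Markov bound E[X]/a
    assert empirical <= bound + 1e-12          # the bound always holds
```

The bound is loose here (e.g. the true tail of an Exponential(2) at a = 10 is about 0.007, while the bound gives 0.2), which is exactly why the chapter moves on to the sharper Chebyshev and Chernoff inequalities.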

177 citations


Journal ArticleDOI
TL;DR: In this paper, a Markov chain Monte Carlo (MCMC) method is proposed for high-dimensional models that are log-concave and nonsmooth, a class of models that is central in imaging sciences.
Abstract: Modern imaging methods rely strongly on Bayesian inference techniques to solve challenging imaging problems. Currently, the predominant Bayesian computation approach is convex optimization, which scales very efficiently to high-dimensional image models and delivers accurate point estimation results. However, in order to perform more complex analyses, for example, image uncertainty quantification or model selection, it is necessary to use more computationally intensive Bayesian computation techniques such as Markov chain Monte Carlo methods. This paper presents a new and highly efficient Markov chain Monte Carlo methodology to perform Bayesian computation for high-dimensional models that are log-concave and nonsmooth, a class of models that is central in imaging sciences. The methodology is based on a regularized unadjusted Langevin algorithm that exploits tools from convex analysis, namely, Moreau--Yoshida envelopes and proximal operators, to construct Markov chains with favorable convergence properties.

Journal ArticleDOI
TL;DR: The aim is to design an optimized slow state feedback controller such that the stability of MJSPSs is guaranteed even in faulty case, and the upper bound of singular perturbation parameter (SPP) ϵ is improved simultaneously.

Journal ArticleDOI
TL;DR: Some novel sufficient conditions are obtained for ensuring the exponential stability in mean square and the switching topology-dependent filters are derived such that an optimal disturbance rejection attenuation level can be guaranteed for the estimation disagreement of the filtering network.
Abstract: In this paper, the distributed ${H_{\infty }}$ state estimation problem is investigated for a class of filtering networks with time-varying switching topologies and packet losses. In the filter design, the time-varying switching topologies, partial information exchange between filters, the packet losses in transmission from the neighbor filters and the channel noises are simultaneously considered. The considered topology evolves not only over time, but also by event switches which are assumed to be subject to a nonhomogeneous Markov chain, and its probability transition matrix is time-varying. Some novel sufficient conditions are obtained for ensuring the exponential stability in mean square and the switching topology-dependent filters are derived such that an optimal ${H_{\infty }}$ disturbance rejection attenuation level can be guaranteed for the estimation disagreement of the filtering network. Finally, simulation examples are provided to demonstrate the effectiveness of the theoretical results.

Journal ArticleDOI
TL;DR: The purpose of this paper is to develop a cloud broker architecture for cloud service selection by finding a pattern of the changing priorities of User Preferences (UPs), and it is shown that the method outperforms the Analytic Hierarchy Process (AHP).
Abstract: Due to the increasing number of cloud services, service selection has become a challenging decision for many organisations. It is even more complicated when cloud users change their preferences based on the requirements and the level of satisfaction of the experienced service. The purpose of this paper is to overcome this drawback and develop a cloud broker architecture for cloud service selection by finding a pattern of the changing priorities of User Preferences (UPs). To do that, a Markov chain is employed to find the pattern. The pattern is then connected to the Quality of Service (QoS) for the available services. A recently proposed Multi Criteria Decision Making (MCDM) method, Best Worst Method (BWM), is used to rank the services. We show that the method outperforms the Analytic Hierarchy Process (AHP). The proposed methodology provides a prioritized list of the services based on the pattern of changing UPs. The methodology is validated through a case study using real QoS performance data of Amazon Elastic Compute (Amazon EC2) cloud services.
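A minimal sketch of the Markov-chain step described above — estimating a transition matrix and its stationary distribution from an observed sequence of preferred criteria; the sequence and the state labels are invented for illustration:

```python
import numpy as np

# toy sequence of a user's preferred criterion over time
# (0 = price, 1 = speed, 2 = reliability) -- hypothetical labels
seq = [0, 0, 1, 1, 1, 2, 0, 1, 1, 2, 2, 1]

n = 3
counts = np.zeros((n, n))
for a, b in zip(seq, seq[1:]):
    counts[a, b] += 1                            # count observed transitions
P = counts / counts.sum(axis=1, keepdims=True)   # row-normalize -> transition matrix

# stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
```

The row of `P` for the current preference predicts the next one, and `pi` summarizes the long-run weight the user puts on each criterion — weights that can then feed a ranking method such as BWM.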

Proceedings Article
03 Jul 2018
TL;DR: Temporal difference learning (TD) is a simple iterative algorithm widely used for policy evaluation in Markov reward processes as discussed by the authors, and Bhandari et al. prove finite time convergence rates for TD learning w...
Abstract: Temporal difference learning (TD) is a simple iterative algorithm widely used for policy evaluation in Markov reward processes. Bhandari et al. prove finite time convergence rates for TD learning w...
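The setting is easy to reproduce in miniature: plain TD(0) on a small Markov reward process converges toward the closed-form value function (the transition matrix, rewards, and step-size schedule below are toy choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# a 3-state Markov reward process: transitions P[s, s'], expected reward r[s]
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.0, 0.8]])
r = np.array([1.0, 0.0, 2.0])
gamma = 0.9

# closed-form value function: V = (I - gamma * P)^(-1) r
V_true = np.linalg.solve(np.eye(3) - gamma * P, r)

V = np.zeros(3)
s = 0
for t in range(200_000):
    s_next = rng.choice(3, p=P[s])
    td_error = r[s] + gamma * V[s_next] - V[s]   # temporal-difference error
    V[s] += 0.1 / (1 + t / 1000) * td_error      # decaying step size
    s = s_next
```

After enough steps `V` tracks `V_true` closely; finite-time analyses of the kind proved by Bhandari et al. quantify how fast that gap shrinks as a function of the step size and the mixing of the chain.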

Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this article, an attributed spatial And-Or graph (S-AOG) is proposed to represent indoor scenes, in which the terminal nodes are object entities including room, furniture, and supported objects.
Abstract: We present a human-centric method to sample and synthesize 3D room layouts and 2D images thereof, to obtain large-scale 2D/3D image data with the perfect per-pixel ground truth. An attributed spatial And-Or graph (S-AOG) is proposed to represent indoor scenes. The S-AOG is a probabilistic grammar model, in which the terminal nodes are object entities including room, furniture, and supported objects. Human contexts as contextual relations are encoded by Markov Random Fields (MRF) on the terminal nodes. We learn the distributions from an indoor scene dataset and sample new layouts using Markov chain Monte Carlo. Experiments demonstrate that the proposed method can robustly sample a large variety of realistic room layouts based on three criteria: (i) visual realism compared to a state-of-the-art room arrangement method, (ii) accuracy of the affordance maps with respect to ground truth, and (iii) the functionality and naturalness of synthesized rooms evaluated by human subjects.

Journal ArticleDOI
TL;DR: Combining the utilization of a novel fuzzy singular-perturbation-parameter-dependent Markovian Lyapunov function with the introduction of the slack matrix variable, sufficient conditions on the existence of the reliable fuzzy controller are presented, which are dependent on the upper bounds on the time derivatives of membership functions.
Abstract: In this paper, the reliable control problem of nonlinear singularly perturbed systems subject to random actuator faults is studied. A Takagi-Sugeno fuzzy model is utilized to describe the nonlinear plant, and a Markov chain with partly unknown transition probabilities is adopted to characterize the random behaviors of the actuator faults, in contrast with the existing fault modes in which all the transition probabilities are required to be known. Combining the utilization of a novel fuzzy singular-perturbation-parameter-dependent Markovian Lyapunov function with the introduction of the slack matrix variable, sufficient conditions on the existence of the reliable fuzzy controller are presented, which are dependent on the upper bounds on the time derivatives of membership functions. A search algorithm is provided to obtain the maximum stabilization bound. Moreover, conditions based on single Lyapunov function are also established. The effectiveness and the applicability of the proposed new design technique are verified by an example of an electronic circuit system.

Journal ArticleDOI
TL;DR: In this paper, integrated Markov chain and cellular automata (CA-Markov) modelling and multicriteria evaluation techniques are applied to produce transition probabilities, and an unsupervised method is employed.
Abstract: An integrated Markov Chain and Cellular Automata modelling (CA MARKOV), multicriteria evaluation techniques have been applied to produce transition probability. The unsupervised method was employed...

Journal ArticleDOI
TL;DR: Two forecasting methods including a Markov chain model and an artificial back propagation neural network based on real driving cycles are compared, showcasing significant superiority of the Markov Chain especially in computational efficiency.
Abstract: In order to develop a practicality oriented low-cost energy management controller for a plug-in hybrid electric bus, besides minimizing energy consumption, algorithmic time efficiency should be put great attention so as to substantially lower the requirement of the controller hardware. This paper first compares two forecasting methods including a Markov chain model and an artificial back propagation neural network based on real driving cycles, showcasing significant superiority of the Markov chain especially in computational efficiency. Moreover, an adaptive reference state-of-charge (SOC) advisement, which is tuned iteratively by taking advantage of speed forecasts in each prediction horizon, is provided with the aim of guiding the battery to discharge reasonably. Then, the Markov chain-based model predictive control is conducted and compared with a linear SOC reference model. Moreover, numerous influencing factors of the computational efficiency, including the prediction horizon length, the sampling width of the optimal power sequence, and the discretization size of state/control variables for solving the dynamic programming problem, are systematically investigated. The results show that the proposed reference SOC advisory is superior to the linear model. We further introduce several ways of accelerating the operational efficiency for the model predictive controller. Comparisons with common dynamic programming and charge-depleting and charge-sustaining solutions are also carried out to show the improved performance of the proposed control approach.

Journal ArticleDOI
TL;DR: A robust state estimation algorithm against FDI attack is presented and it is shown that the proposed method is able to detect malicious attack, which is undetectable by traditional bad data detection (BDD) methods.
Abstract: The evolution of traditional energy networks toward smart grids increases security vulnerabilities in the power system infrastructure. State estimation plays an essential role in the efficient and reliable operation of power systems, so its security is a major concern. Coordinated cyber-attacks, including false data injection (FDI) attack, can manipulate smart meters to present serious threats to grid operations. In this paper, a robust state estimation algorithm against FDI attack is presented. As a solution to mitigate such an attack, a new analytical technique is proposed based on the Markov chain theory and Euclidean distance metric. Using historical data of a set of trusted buses, a Markov chain model of the system normal operation is formulated. The estimated states are analyzed by calculating the Euclidean distance from the Markov model. States, which match the lower probability, are considered as attacked states. It is shown that the proposed method is able to detect malicious attack, which is undetectable by traditional bad data detection (BDD) methods. The proposed robust dynamic state estimation algorithm is built on a Kalman filter, and implemented on the massively parallel architecture of graphic processing unit using fine-grained parallel programming techniques. Numerical simulations demonstrate the efficiency and accuracy of the proposed mechanism.
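A highly simplified sketch of the detection idea — build a Markov model of normal operation from trusted historical data, then flag estimates whose Euclidean distance from the model's prediction is large. All data, bin choices, and helper names here are hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# historical (normal-operation) snapshots of trusted buses, e.g. voltage magnitudes
normal = rng.normal(1.0, 0.02, size=(5000, 3))
bins = np.linspace(0.9, 1.1, 11)
codes = np.digitize(normal.mean(axis=1), bins)   # quantized state per snapshot

n = bins.size + 1
T = np.full((n, n), 1e-6)                        # smoothed transition counts
for a, b in zip(codes, codes[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)                # Markov model of normal operation

def anomaly_score(prev_code, estimate):
    """Euclidean distance between the estimate and the Markov-predicted state."""
    predicted = bins[np.clip(np.argmax(T[prev_code]) - 1, 0, bins.size - 1)]
    return abs(estimate.mean() - predicted)

good = anomaly_score(codes[-1], np.array([1.00, 1.01, 0.99]))
attacked = anomaly_score(codes[-1], np.array([1.00, 1.20, 1.30]))  # injected bias
```

A well-crafted FDI attack can satisfy the measurement residual test used by classical BDD, but it still drags the estimated state away from trajectories the normal-operation Markov model considers likely — which is what the distance score picks up.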

Journal ArticleDOI
01 Oct 2018-Energy
TL;DR: Results indicate that the proposed energy management strategy can greatly improve the fuel economy and be employed in real-time when compared with the stochastic dynamic programming and conventional RL approaches.

Journal ArticleDOI
TL;DR: In this article, the distance of the nth step distributions of two Markov chains when one of them satisfies a Wasserstein ergodicity condition is shown to be bounded.
Abstract: Perturbation theory for Markov chains addresses the question of how small differences in the transition probabilities of Markov chains are reflected in differences between their distributions. We prove powerful and flexible bounds on the distance of the nth step distributions of two Markov chains when one of them satisfies a Wasserstein ergodicity condition. Our work is motivated by the recent interest in approximate Markov chain Monte Carlo (MCMC) methods in the analysis of big data sets. By using an approach based on Lyapunov functions, we provide estimates for geometrically ergodic Markov chains under weak assumptions. In an autoregressive model, our bounds cannot be improved in general. We illustrate our theory by showing quantitative estimates for approximate versions of two prominent MCMC algorithms, the Metropolis-Hastings and stochastic Langevin algorithms.
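The question the paper studies can be illustrated numerically: perturb a transition matrix slightly and track the total variation distance between the two chains' n-step distributions (a toy two-state example, not one of the paper's bounds):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
eps = 0.01
Q = P + eps * np.array([[-1.0,  1.0],    # a small perturbation of P
                        [ 1.0, -1.0]])

mu = np.array([1.0, 0.0])                # common initial distribution
p, q = mu.copy(), mu.copy()
dists = []
for n in range(50):
    p, q = p @ P, q @ Q                  # n-step distributions
    dists.append(0.5 * np.abs(p - q).sum())   # total variation distance
```

Because both chains here are geometrically ergodic, the distance stays small and stabilizes near the gap between the two stationary distributions, rather than accumulating with n — the qualitative behavior that perturbation bounds make precise, and that justifies approximate MCMC on big data sets.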

Journal ArticleDOI
TL;DR: The authors analytically derive the epidemic threshold for disease propagation, which is correlated with the multiplex network topology and the coupling between the two transmission dynamics; the analytical results concur with Monte Carlo (MC) simulations.

Journal ArticleDOI
TL;DR: This paper proposes a continuous-time Markov chain model describing the architecture of the ER-based system, and chooses electricity trading to propose a Markov decision process model based on an ER subsystem to describe the trading behavior.
Abstract: Energy router (ER) based system is a crucial part of the energy transmission and management under the circumstance of energy Internet for green cities. During its design process, a sound formal verification and a performance monitoring scheme are needed to check its reliability and meaningful quantitative properties. In this paper, we provide formal verification solutions for an ER-based system by proposing a continuous-time Markov chain model describing the architecture of the ER-based system. To verify real-world function of the ER-based system, we choose electricity trading to propose a Markov decision process model based on an ER subsystem to describe the trading behavior. To monitor the system performance, we project the energy scheduling process in the ER-based system, and then implement this scheduling process on top of a cloud computing experiment tool. Finally, we perform extensive experiment evaluations to investigate the system reliability properties, quantitative properties, and scheduling behaviors. The experiment verifies the effectiveness of the proposed models and the monitoring scheme.

Journal ArticleDOI
TL;DR: As the penetration of electric vehicles (EVs) increases, their patterns of use need to be well understood for future system planning and operating purposes, and an uncertainty analysis on the network impact due to EV charging is undertaken.

Journal ArticleDOI
TL;DR: A data-driven condition-based policy for the inspection and maintenance of track geometry is developed and results in an approximately 10% savings in the total maintenance costs for every 1 mile of track.
Abstract: Railway big data technologies are transforming the existing track inspection and maintenance policy deployed for railroads in North America. This paper develops a data-driven condition-based policy for the inspection and maintenance of track geometry. Both preventive maintenance and spot corrective maintenance are taken into account in the investigation of a 33-month inspection dataset that contains a variety of geometry measurements for every foot of track. First, this study separates the data based on the time interval of the inspection run, calculates the aggregate track quality index (TQI) for each track section, and predicts the track spot geo-defect occurrence probability using random forests. Then, a Markov chain is built to model aggregated track deterioration, and the spot geo-defects are modeled by a Bernoulli process. Finally, a Markov decision process (MDP) is developed for track maintenance decision making, and it is optimized by using a value iteration algorithm. Compared with the existing maintenance policy using Markov chain Monte Carlo (MCMC) simulation, the maintenance policy developed in this paper results in an approximately 10% savings in the total maintenance costs for every 1 mile of track.
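The MDP optimization step can be sketched with value iteration on a toy maintain-vs-wait model; all transition probabilities and costs below are illustrative, not the paper's:

```python
import numpy as np

# a toy 3-state, 2-action MDP (states: good, degraded, defective;
# action 0 = do nothing, action 1 = maintain) -- numbers are made up
P = np.array([
    [[0.7, 0.3, 0.0],     # do nothing: the track degrades
     [0.0, 0.6, 0.4],
     [0.0, 0.0, 1.0]],
    [[1.0, 0.0, 0.0],     # maintain: mostly restored to the good state
     [0.9, 0.1, 0.0],
     [0.8, 0.2, 0.0]],
])
cost = np.array([
    [0.0, 2.0, 10.0],     # operating cost per state when doing nothing
    [1.0, 3.0,  8.0],     # cost per state when maintaining
])
gamma = 0.95

V = np.zeros(3)
for _ in range(1000):                    # value iteration
    Q = cost + gamma * (P @ V)           # Q[a, s]: cost-to-go of action a in s
    V_new = Q.min(axis=0)                # minimize expected discounted cost
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmin(axis=0)                # greedy policy per state
```

With these numbers the optimal policy waits in the good state and maintains in the degraded and defective states; in the paper, the transition matrix is estimated from the TQI Markov chain and the Bernoulli geo-defect model rather than assumed.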

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors constructed a tensor based on multi-view transition probability matrices of the Markov chain and employed the tensor rotation operator to better investigate the relationship among views as well as reduce the computation complexity.
Abstract: Multi-view clustering has attracted much attention recently; it aims to take advantage of multi-view information to improve the performance of clustering. However, most recent work mainly focuses on self-representation based subspace clustering, which is of high computation complexity. In this paper, we focus on the Markov chain based spectral clustering method and propose a novel essential tensor learning method to explore the high order correlations for multi-view representation. We first construct a tensor based on multi-view transition probability matrices of the Markov chain. By incorporating the idea from robust principal component analysis, tensor singular value decomposition (t-SVD) based tensor nuclear norm is imposed to preserve the low-rank property of the essential tensor, which can well capture the principal information from multiple views. We also employ the tensor rotation operator for this task to better investigate the relationship among views as well as reduce the computation complexity. The proposed method can be efficiently optimized by the alternating direction method of multipliers (ADMM). Extensive experiments on six real world datasets corresponding to five different applications show that our method achieves superior performance over other state-of-the-art methods.

Journal ArticleDOI
TL;DR: The distributed estimation framework is applied to a chemical process to illustrate the effectiveness of the proposed methodology and the superiority of the MCCs framework featured by channel switching.
Abstract: This paper addresses a distributed estimator design problem for linear systems deployed over sensor networks within a multiple communication channels (MCCs) framework. A practical scenario is taken into account such that the channel used for communication can be switched and the switching is governed by a Markov chain. With the existence of communicational imperfections and external disturbances, an estimation algorithm is proposed such that the developed distributed estimators are able to give accurate state estimates against the channel switching phenomenon. The distributed estimation framework is applied to a chemical process to illustrate the effectiveness of the proposed methodology and the superiority of the MCCs framework featured by channel switching.

Journal ArticleDOI
TL;DR: The existence and uniqueness theorem for the adjoint equations, which are represented by an anticipated backward stochastic differential equation with jumps and regimes, is proved, and the results are illustrated by an optimal consumption problem from a cash flow with delay and regimes.

Abstract: We study a stochastic optimal control problem for a delayed Markov regime-switching jump-diffusion model. We establish necessary and sufficient maximum principles under full and partial information for such a system. We prove the existence and uniqueness theorem for the adjoint equations, which are represented by an anticipated backward stochastic differential equation with jumps and regimes. We illustrate our results with an optimal consumption problem from a cash flow with delay and regimes.

Proceedings ArticleDOI
15 Apr 2018
TL;DR: It is proved that the average age of information (AoI) is lower for the zero-wait sampling, but the probability of error for the sample-at-change strategy is lower than or equal to that of the zero-wait strategy.
Abstract: It has recently been shown that for the remote estimation problem, the sampling strategy that minimizes the age of information (AoI) does not minimize the estimation error. We seek to obtain an alternative metric, called effective age, for which a lower effective age necessarily yields a lower estimation error. The problem we consider as our basis for developing an effective age is the remote estimation of a Markov source, in which samples of the source signal are transmitted over a delay channel to be estimated at a destination. We compare two sampling strategies: zero-wait and sample-at-change. We prove that the average age of information (AoI) is lower for the zero-wait sampling, but the probability of error for the sample-at-change strategy is lower or equal to that of the zero-wait strategy. The sample-at-change strategy uses the knowledge of the source signal to choose the sample at the exact moment the state changes, providing the freshest information with respect to each change of the signal. With this insight, we propose two effective age metrics, sampling age and cumulative marginal error. The sampling age tracks the age of the samples relative to the ideal sampling time, while the cumulative marginal error tracks the total error during a particular sampling period. Some intuitive justification is provided for each metric for the given Markov source system.