
Showing papers on "Markov chain" published in 2019


Journal ArticleDOI
TL;DR: An age of information (AoI) timeliness metric is formulated, and a general AoI result applicable to a wide variety of multiple-source service systems is derived; a stochastic hybrid systems method makes AoI evaluation comparable in complexity to finding the stationary distribution of a finite-state Markov chain.
Abstract: We examine multiple independent sources providing status updates to a monitor through simple queues. We formulate an age of information (AoI) timeliness metric and derive a general result for the AoI that is applicable to a wide variety of multiple source service systems. For first-come first-served and two types of last-come first-served systems with Poisson arrivals and exponential service times, we find the region of feasible average status ages for multiple updating sources. We then use these results to characterize how a service facility can be shared among multiple updating sources. A new simplified technique for evaluating the AoI in finite-state continuous-time queuing systems is also derived. Based on stochastic hybrid systems, this method makes AoI evaluation comparable in complexity to finding the stationary distribution of a finite-state Markov chain.
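
The stationary-distribution computation that the abstract compares against can be sketched with generic linear algebra (the solver and the two-state chain below are illustrative, not taken from the paper):

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi @ P = pi with sum(pi) == 1 for an irreducible,
    finite-state chain, by stacking the balance equations with the
    normalization constraint and solving in the least-squares sense."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Illustrative two-state chain; its stationary distribution is (0.6, 0.4).
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = stationary_distribution(P)
```

For chains of moderate size this is a single linear solve, which is the sense in which the paper's AoI evaluation is "comparable in complexity".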

552 citations


Journal ArticleDOI
TL;DR: Visualization is helpful in each of these stages of the Bayesian workflow and it is indispensable when drawing inferences from the types of modern, high dimensional models that are used by applied researchers.
Abstract: Bayesian data analysis is about more than just computing a posterior distribution, and Bayesian visualization is about more than trace plots of Markov chains. Practical Bayesian data analysis, like all data analysis, is an iterative process of model building, inference, model checking and evaluation, and model expansion. Visualization is helpful in each of these stages of the Bayesian workflow and it is indispensable when drawing inferences from the types of modern, high dimensional models that are used by applied researchers.

440 citations


Journal ArticleDOI
TL;DR: AFLFast is compared to the symbolic executor Klee; in terms of vulnerability detection, AFLFast is significantly more effective than Klee on the same subject programs that were discussed in the original Klee paper.
Abstract: Coverage-based Greybox Fuzzing (CGF) is a random testing approach that requires no program analysis. A new test is generated by slightly mutating a seed input. If the test exercises a new and interesting path, it is added to the set of seeds; otherwise, it is discarded. We observe that most tests exercise the same few “high-frequency” paths and develop strategies to explore significantly more paths with the same number of tests by gravitating towards low-frequency paths. We explain the challenges and opportunities of CGF using a Markov chain model which specifies the probability that fuzzing the seed that exercises path $i$ generates an input that exercises path $j$ . Each state (i.e., seed) has an energy that specifies the number of inputs to be generated from that seed. We show that CGF is considerably more efficient if energy is inversely proportional to the density of the stationary distribution and increases monotonically every time that seed is chosen. Energy is controlled with a power schedule. We implemented several schedules by extending AFL. In 24 hours, AFLFast exposes 3 previously unreported CVEs that are not exposed by AFL and exposes 6 previously unreported CVEs 7x faster than AFL. AFLFast produces at least an order of magnitude more unique crashes than AFL. We compared AFLFast to the symbolic executor Klee. In terms of vulnerability detection, AFLFast is significantly more effective than Klee on the same subject programs that were discussed in the original Klee paper. In terms of code coverage, AFLFast only slightly outperforms Klee while a combination of both tools achieves the best results by mitigating the individual weaknesses.
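
The power-schedule idea — energy inversely proportional to a path's frequency and monotonically increasing each time the seed is chosen — can be sketched as follows (the function name, doubling rule, and cap are hypothetical illustrations, not AFL's actual schedule):

```python
def energy(path_frequency, times_chosen, base=1, cap=1024):
    """Hypothetical sketch of an AFLFast-style power schedule: energy is
    inversely proportional to how often the seed's path has been
    exercised, and grows monotonically each time the seed is chosen.
    All names and constants are illustrative."""
    e = base * (2 ** times_chosen) / path_frequency
    return min(int(e), cap)

# A seed on a rare path receives far more energy than one on a hot path.
rare = energy(path_frequency=2, times_chosen=4)
hot = energy(path_frequency=1000, times_chosen=4)
```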

239 citations


Journal ArticleDOI
TL;DR: In this paper, a multivariate framework for terminating simulation in MCMC is presented, which requires strongly consistent estimators of the covariance matrix in the Markov chain central limit theorem (CLT), and a lower bound on the number of minimum effective samples required for a desired level of precision.
Abstract: Markov chain Monte Carlo (MCMC) produces a correlated sample for estimating expectations with respect to a target distribution. A fundamental question is when should sampling stop so that we have good estimates of the desired quantities? The key to answering this question lies in assessing the Monte Carlo error through a multivariate Markov chain central limit theorem (CLT). The multivariate nature of this Monte Carlo error largely has been ignored in the MCMC literature. We present a multivariate framework for terminating simulation in MCMC. We define a multivariate effective sample size, estimating which requires strongly consistent estimators of the covariance matrix in the Markov chain CLT; a property we show for the multivariate batch means estimator. We then provide a lower bound on the number of minimum effective samples required for a desired level of precision. This lower bound depends on the problem only in the dimension of the expectation being estimated, and not on the underlying stochastic process. This result is obtained by drawing a connection between terminating simulation via effective sample size and terminating simulation using a relative standard deviation fixed-volume sequential stopping rule; which we demonstrate is an asymptotically valid procedure. The finite sample properties of the proposed method are demonstrated in a variety of examples.
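
The multivariate effective sample size built from the batch-means covariance estimator can be sketched roughly as follows (a simplified illustration of the construction; the batch size rule and the AR(1) demo chain are illustrative):

```python
import numpy as np

def multivariate_ess(chain, batch_size=None):
    """Multivariate effective sample size from the batch-means covariance
    estimator (a simplified sketch, not the paper's implementation).
    chain: (n, p) array of correlated draws."""
    n, p = chain.shape
    if batch_size is None:
        batch_size = int(np.floor(np.sqrt(n)))
    n_batches = n // batch_size
    # Sample covariance Lambda vs. batch-means estimate Sigma of the
    # asymptotic covariance in the Markov chain CLT.
    lam = np.cov(chain, rowvar=False)
    batches = chain[:n_batches * batch_size].reshape(n_batches, batch_size, p)
    sigma = batch_size * np.cov(batches.mean(axis=1), rowvar=False)
    # ESS = n * (det Lambda / det Sigma)^(1/p)
    return n * (np.linalg.det(lam) / np.linalg.det(sigma)) ** (1.0 / p)

# Demo: a positively correlated AR(1) chain carries fewer effective
# samples than its nominal length.
rng = np.random.default_rng(0)
n, p = 10000, 2
x = np.zeros((n, p))
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal(size=p)
ess = multivariate_ess(x)
```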

219 citations


Journal ArticleDOI
TL;DR: A novel SOH estimation method for a single LIB uses a prior knowledge-based neural network (PKNN) and the Markov chain; the maximum SOH estimation error is reduced to less than 1.7% by adopting the proposed method.
Abstract: The state of health (SOH) of lithium-ion batteries (LIBs) is a critical parameter of the battery management system. Because of the complex internal electrochemical properties of LIBs and uncertain external working environment, it is difficult to achieve an accurate SOH determination. In this paper, we have proposed a novel SOH estimation method by using a prior knowledge-based neural network (PKNN) and the Markov chain for a single LIB. First, we extract multiple features to capture the battery aging process. Due to its effective fitting ability for complex nonlinear problems, the neural network with a prior knowledge-based optimization strategy is adopted for the battery SOH prediction. The Markov chain, with the advantageous prediction performance for the long-term system, is established to modify the PKNN estimation results based on the prediction error. Experimental results show that the maximum estimation error of the SOH is reduced to less than 1.7% by adopting the proposed method. By comparing with the group method of data handling and the back-propagation neural network in conjunction with the Levenberg–Marquardt algorithm, the proposed estimation method obtains the highest SOH accuracy.

189 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of generalized state estimation for an array of Markovian coupled networks under the Round-Robin protocol and redundant channels is investigated by using an extended dissipative property.
Abstract: In this paper, the problem of generalized state estimation for an array of Markovian coupled networks under the Round-Robin protocol (RRP) and redundant channels is investigated by using an extended dissipative property. The randomly varying coupling of the networks under consideration is governed by a Markov chain. With the aid of the RRP, the transmission order of nodes is effectively orchestrated, so the probability of data collisions occurring over a shared, constrained network may be reduced. Redundant channels are also used in signal transmission to deal with the fragility of networks that rely on a single channel. Network-induced phenomena, namely randomly occurring packet dropouts and randomly occurring quantization, are fully considered. The main purpose of the research is to find an estimator design approach such that the extended $({\Omega _{1},\Omega _{2},\Omega _{3}) - \gamma }$ -stochastic dissipativity property of the estimation error system is guaranteed. By means of the Lyapunov–Krasovskii methodology, the Kronecker product, and an improved matrix decoupling approach, sufficient conditions for the addressed problem are established via convex optimization. Finally, the usefulness of the proposed method is demonstrated with an illustrative example.

180 citations


Journal ArticleDOI
TL;DR: A general variational approach to determine the steady state of open quantum lattice systems via a neural-network approach is presented and applied to the dissipative quantum transverse Ising model.
Abstract: We present a general variational approach to determine the steady state of open quantum lattice systems via a neural-network approach. The steady-state density matrix of the lattice system is constructed via a purified neural-network Ansatz in an extended Hilbert space with ancillary degrees of freedom. The variational minimization of cost functions associated with the master equation can be performed using a Markov chain Monte Carlo sampling. As a first application and proof of principle, we apply the method to the dissipative quantum transverse Ising model.

160 citations


Journal ArticleDOI
TL;DR: In this paper, a Markov chain update scheme using a machine-learned flow-based generative model is proposed for Monte Carlo sampling in lattice field theories, which can be optimized (trained) to produce samples from a distribution approximating the desired Boltzmann distribution determined by the lattice action of the theory being studied.
Abstract: A Markov chain update scheme using a machine-learned flow-based generative model is proposed for Monte Carlo sampling in lattice field theories. The generative model may be optimized (trained) to produce samples from a distribution approximating the desired Boltzmann distribution determined by the lattice action of the theory being studied. Training the model systematically improves autocorrelation times in the Markov chain, even in regions of parameter space where standard Markov chain Monte Carlo algorithms exhibit critical slowing down in producing decorrelated updates. Moreover, the model may be trained without existing samples from the desired distribution. The algorithm is compared with HMC and local Metropolis sampling for ${\ensuremath{\phi}}^{4}$ theory in two dimensions.
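
The step that keeps such a chain exact even when the generative model only approximates the target distribution is a standard independence-Metropolis update, sketched here with a plain wide Gaussian standing in for the trained flow (all names and numbers are illustrative):

```python
import math
import random

def metropolis_independence_step(x, log_p, sample_q, log_q):
    """One independence-Metropolis update: propose from an approximate
    model q, accept with the usual ratio so the chain still targets p.
    Here q stands in for a trained generative model."""
    y = sample_q()
    log_alpha = (log_p(y) - log_p(x)) + (log_q(x) - log_q(y))
    if math.log(random.random()) < log_alpha:
        return y      # accepted: a decorrelated jump
    return x          # rejected: the chain repeats its current state

# Demo: target a standard normal using an N(0, 1.5^2) "model" proposal.
random.seed(0)
def log_p(z): return -0.5 * z * z
def log_q(z): return -0.5 * z * z / 1.5 ** 2
def sample_q(): return random.gauss(0.0, 1.5)

x, samples = 0.0, []
for _ in range(20000):
    x = metropolis_independence_step(x, log_p, sample_q, log_q)
    samples.append(x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The better the proposal q matches p, the higher the acceptance rate and the shorter the autocorrelation time — the effect the paper's training objective exploits.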

157 citations


Posted Content
TL;DR: In this paper, a graph Markov neural network (GMNN) was proposed to combine the advantages of both statistical relational learning and graph neural networks for semi-supervised object classification.
Abstract: This paper studies semi-supervised object classification in relational data, which is a fundamental problem in relational data modeling. The problem has been extensively studied in the literature of both statistical relational learning (e.g. relational Markov networks) and graph neural networks (e.g. graph convolutional networks). Statistical relational learning methods can effectively model the dependency of object labels through conditional random fields for collective classification, whereas graph neural networks learn effective object representations for classification through end-to-end training. In this paper, we propose the Graph Markov Neural Network (GMNN) that combines the advantages of both worlds. A GMNN models the joint distribution of object labels with a conditional random field, which can be effectively trained with the variational EM algorithm. In the E-step, one graph neural network learns effective object representations for approximating the posterior distributions of object labels. In the M-step, another graph neural network is used to model the local label dependency. Experiments on object classification, link classification, and unsupervised node representation learning show that GMNN achieves state-of-the-art results.

157 citations


Journal ArticleDOI
TL;DR: This paper is concerned with reliable fuzzy tracking control for a near-space hypersonic vehicle (NSHV) subject to aperiodic measurement information and stochastic actuator failures.
Abstract: This paper is concerned with reliable fuzzy tracking control for a near-space hypersonic vehicle (NSHV) subject to aperiodic measurement information and stochastic actuator failures. The NSHV dynamics is approximated by the Takagi–Sugeno fuzzy models and the stochastic failures are characterized by a Markov chain. Different from existing tracking results on NSHV, only the aperiodic sampling measurements are available during system operation. To realize the tracking objective, a reliable fuzzy sampled-data tracking control strategy is presented. An appropriate time-dependent Lyapunov function is constructed to fully capture the real sampling pattern. The sampling-interval-dependent mean square exponential stability criterion with disturbance attenuation is then established. The solution of the tracking controller gains can be obtained by solving an optimization problem. Finally, the simulation studies on NSHV dynamics in the entry phase are performed to verify the validity of the developed fuzzy tracking control strategy.

157 citations


Proceedings ArticleDOI
10 Jul 2019
TL;DR: By proving a stability result for the Ho-Kalman algorithm and combining it with the sample complexity results for Markov parameters, it is shown how much data is needed to learn a balanced realization of the system up to a desired accuracy with high probability.
Abstract: We consider the problem of learning a realization for a linear time-invariant (LTI) dynamical system from input/output data. Given a single input/output trajectory, we provide finite time analysis for learning the system's Markov parameters, from which a balanced realization is obtained using the classical Ho-Kalman algorithm. By proving a stability result for the Ho-Kalman algorithm and combining it with the sample complexity results for Markov parameters, we show how much data is needed to learn a balanced realization of the system up to a desired accuracy with high probability.

Posted Content
TL;DR: This work develops a variational inference framework for deep latent Gaussian models via stochastic automatic differentiation in Wiener space, where the variational approximations to the posterior are obtained by Girsanov (mean-shift) transformation of the standard Wiener process and the computation of gradients is based on the theory of stochastic flows.
Abstract: In deep latent Gaussian models, the latent variable is generated by a time-inhomogeneous Markov chain, where at each time step we pass the current state through a parametric nonlinear map, such as a feedforward neural net, and add a small independent Gaussian perturbation. This work considers the diffusion limit of such models, where the number of layers tends to infinity, while the step size and the noise variance tend to zero. The limiting latent object is an Ito diffusion process that solves a stochastic differential equation (SDE) whose drift and diffusion coefficient are implemented by neural nets. We develop a variational inference framework for these \textit{neural SDEs} via stochastic automatic differentiation in Wiener space, where the variational approximations to the posterior are obtained by Girsanov (mean-shift) transformation of the standard Wiener process and the computation of gradients is based on the theory of stochastic flows. This permits the use of black-box SDE solvers and automatic differentiation for end-to-end inference. Experimental results with synthetic data are provided.

Journal ArticleDOI
TL;DR: This paper focuses on the Markov chain-based spectral clustering method and proposes a novel essential tensor learning method to explore the high-order correlations for multi-view representation and achieves superior performance over other state-of-the-art methods.
Abstract: Recently, multi-view clustering attracts much attention, which aims to take advantage of multi-view information to improve the performance of clustering. However, most recent work mainly focuses on the self-representation-based subspace clustering, which is of high computation complexity. In this paper, we focus on the Markov chain-based spectral clustering method and propose a novel essential tensor learning method to explore the high-order correlations for multi-view representation. We first construct a tensor based on multi-view transition probability matrices of the Markov chain. By incorporating the idea from the robust principal component analysis, tensor singular value decomposition (t-SVD)-based tensor nuclear norm is imposed to preserve the low-rank property of the essential tensor, which can well capture the principal information from multiple views. We also employ the tensor rotation operator for this task to better investigate the relationship among views as well as reduce the computation complexity. The proposed method can be efficiently optimized by the alternating direction method of multipliers (ADMM). Extensive experiments on seven real-world datasets corresponding to five different applications show that our method achieves superior performance over other state-of-the-art methods.
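
The per-view building block — a transition probability matrix derived from an affinity matrix — can be sketched as a generic row normalization (the affinity values below are made up):

```python
import numpy as np

def transition_matrix(affinity):
    """Row-normalize a nonnegative affinity (similarity) matrix into the
    transition probability matrix of a Markov chain, the per-view
    ingredient of Markov chain-based spectral clustering."""
    degrees = affinity.sum(axis=1, keepdims=True)
    return affinity / degrees

# One illustrative view: a small symmetric affinity matrix.
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
P = transition_matrix(W)
```

In the multi-view setting, one such matrix per view is stacked into the tensor the paper regularizes.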

Journal ArticleDOI
TL;DR: Nested sampling is introduced to phylogenetics and its performance is analysed under different scenarios and compared to established methods to conclude that NS is a competitive and attractive algorithm for phylogenetic inference.
Abstract: Bayesian inference methods rely on numerical algorithms for both model selection and parameter inference. In general, these algorithms require a high computational effort to yield reliable estimates. One of the major challenges in phylogenetics is the estimation of the marginal likelihood. This quantity is commonly used for comparing different evolutionary models, but its calculation, even for simple models, incurs high computational cost. Another interesting challenge relates to the estimation of the posterior distribution. Often, long Markov chains are required to get sufficient samples to carry out parameter inference, especially for tree distributions. In general, these problems are addressed separately by using different procedures. Nested sampling (NS) is a Bayesian computation algorithm, which provides the means to estimate marginal likelihoods together with their uncertainties, and to sample from the posterior distribution at no extra cost. The methods currently used in phylogenetics for marginal likelihood estimation lack practicality due to their dependence on many tuning parameters and the inability of most implementations to provide a direct way to calculate the uncertainties associated with the estimates, unlike NS. In this article, we introduce NS to phylogenetics. Its performance is analysed under different scenarios and compared to established methods. We conclude that NS is a competitive and attractive algorithm for phylogenetic inference. An implementation is available as a package for BEAST 2 under the LGPL licence, accessible at https://github.com/BEAST2-Dev/nested-sampling.

Journal ArticleDOI
TL;DR: A new stochastic reliable nonuniform sampling controller with Markov switching topologies is designed for the first time to reflect more realistic control behaviors and to guarantee that UCNNs are exponentially synchronized under probabilistic actuator and sensor faults.

Journal ArticleDOI
TL;DR: A fault-tolerant event-triggered control protocol is developed to obtain the leader-following consensus of the multi-agent systems and an appropriate Lyapunov–Krasovskii functional is derived.

Book ChapterDOI
TL;DR: This chapter surveys the following practical issues of interest to the user who wishes to implement the SA algorithm for its particular application: finite-time approximation of the theoretical SA, polynomial-time cooling, Markov-chain length, stopping criteria, and simulation-based evaluations.
Abstract: Simulated Annealing (SA) is one of the simplest and best-known metaheuristic methods for addressing difficult black box global optimization problems whose objective function is not explicitly given and can only be evaluated via some costly computer simulation. It is widely used in real-life applications. The main advantage of SA is its simplicity. SA is based on an analogy with the physical annealing of materials that avoids the drawback of the Monte-Carlo approach (which can be trapped in local minima), thanks to an efficient Metropolis acceptance criterion. When the evaluation of the objective function results from complex simulation processes that manipulate a large-dimension state space involving much memory, population-based algorithms are not applicable and SA is the right answer to address such issues. This chapter is an introduction to the subject. It presents the principles of local search optimization algorithms, of which simulated annealing is an extension, and the Metropolis algorithm, a basic component of SA. The basic SA algorithm for optimization is described together with two theoretical properties that are fundamental to SA: statistical equilibrium (inspired from elementary statistical physics) and asymptotic convergence (based on Markov chain theory). The chapter surveys the following practical issues of interest to the user who wishes to implement the SA algorithm for its particular application: finite-time approximation of the theoretical SA, polynomial-time cooling, Markov-chain length, stopping criteria, and simulation-based evaluations. To illustrate these concepts, this chapter presents the straightforward application of SA to two simple, classical NP-hard combinatorial optimization problems: the knapsack problem and the traveling salesman problem.
The overall SA methodology is then deployed in detail on a real-life application: a large-scale aircraft trajectory planning problem involving nearly 30,000 flights at the European continental scale. This exemplifies how to tackle today's complex problems using the simple scheme of SA by exploiting particular features of the problem, by integrating astute computer implementation within the algorithm, and by setting user-defined parameters empirically, guided by the basic SA theory presented in this chapter.
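
The basic SA loop applied to the knapsack problem mentioned in the chapter can be sketched as follows (the move rule, cooling schedule, and all parameter values are illustrative, not the chapter's tuned settings):

```python
import math
import random

def sa_knapsack(values, weights, capacity, t0=100.0, cooling=0.995,
                steps=5000, seed=0):
    """Minimal simulated-annealing sketch for the 0/1 knapsack problem:
    flip one randomly chosen item per move, accept worse moves via the
    Metropolis criterion, and cool geometrically."""
    rng = random.Random(seed)
    n = len(values)

    def score(sol):
        weight = sum(w for w, bit in zip(weights, sol) if bit)
        value = sum(v for v, bit in zip(values, sol) if bit)
        return value if weight <= capacity else -1  # penalize infeasibility

    x = [0] * n
    cur = score(x)
    best, best_score = x[:], cur
    t = t0
    for _ in range(steps):
        y = x[:]
        y[rng.randrange(n)] ^= 1          # flip one item in or out
        sy = score(y)
        # Metropolis acceptance: improvements always, worse moves sometimes.
        if sy >= cur or rng.random() < math.exp((sy - cur) / t):
            x, cur = y, sy
            if sy > best_score:
                best, best_score = y[:], sy
        t *= cooling                      # geometric cooling schedule
    return best, best_score

# Classic toy instance: the optimum packs items 2 and 3 for value 220.
best, best_score = sa_knapsack(values=[60, 100, 120],
                               weights=[10, 20, 30], capacity=50)
```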

Journal ArticleDOI
TL;DR: This paper addresses the distributed adaptive event-triggered filtering problem for a class of sector-bounded nonlinear systems over a filtering network with time-varying and switching topology; the proposed mechanism introduces a dynamic threshold parameter, which provides benefits in data scheduling.
Abstract: This paper addresses the distributed adaptive event-triggered ${H}_{\infty}$ filtering problem for a class of sector-bounded nonlinear systems over a filtering network with time-varying and switching topology. Both topology switching and adaptive event-triggered mechanisms (AETMs) between filters are simultaneously considered in the filtering network design. The communication topology evolves over time and is assumed to be governed by a nonhomogeneous Markov chain. In consideration of the limited network bandwidth, AETMs have been used in the information transmission from the sensor to the filter as well as the information exchange among filters. The proposed AETM is characterized by introducing a dynamic threshold parameter, which provides benefits in data scheduling. Moreover, the gain of the correction term in the adaptive rule varies directly with the estimation error and inversely with the transmission error. The switching filtering network is modeled by a Markov jump nonlinear system. The stochastic Markov stability theory and linear matrix inequality techniques are exploited to establish the existence of the filtering network and further derive the filter parameters. A co-design algorithm for determining ${H}_{\infty}$ filters and the event parameters is developed. Finally, some simulation results on a continuous stirred tank reactor and a numerical example are presented to show the applicability of the obtained results.

Journal ArticleDOI
TL;DR: It is proved that the transition probability from any state to an invariant subset (or to a fixed point) is nondecreasing in time, which is an important property in establishing stability criteria and in calculating or estimating the transient period.
Abstract: We propose a new concept, stability in distribution (SD) of a probabilistic Boolean network (PBN), which determines whether the probability distribution converges to the distribution of the target state (namely, a one-point distributed random variable). In a PBN, stability with probability one, stability in the stochastic sense, and SD are equivalent. The SD is easily generalized to subset stability, i.e., to set stability in distribution (SSD). We prove that the transition probability from any state to an invariant subset (or to a fixed point) is nondecreasing in time. This monotonicity is an important property in establishing stability criteria and in calculating or estimating the transient period. We also obtain a verifiable, necessary, and sufficient condition for SD of PBNs with independently and identically distributed switching. We then show that SD problems of PBNs with Markovian switching and PBN synchronizations can be recast as SSD problems of Markov chains. After calculating the largest invariant subset of a Markov chain in a given set by the newly proposed algorithm, we propose a necessary and sufficient condition for SSDs of Markov chains. The proposed method and results are supported by examples.
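
The monotonicity property — the probability of occupying an invariant subset never decreases over time — can be checked numerically on a toy chain (the chain and the absorbing state below are illustrative, not from the paper):

```python
import numpy as np

# Toy 3-state chain; state 2 is absorbing, so {2} is an invariant subset.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])

def prob_in_set(P, start, invariant_set, horizon):
    """Probability of occupying `invariant_set` at times 0..horizon-1,
    starting from `start`; for an invariant subset this sequence is
    nondecreasing, matching the monotonicity property above."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    probs = []
    for _ in range(horizon):
        probs.append(dist[list(invariant_set)].sum())
        dist = dist @ P   # one step of the chain
    return probs

probs = prob_in_set(P, start=0, invariant_set={2}, horizon=20)
```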

Journal ArticleDOI
01 Jan 2019
TL;DR: This article covers key analyses appropriate for trajectory data generated by conventional simulation methods such as molecular dynamics and (single Markov chain) Monte Carlo and provides guidance for analyzing some 'enhanced' sampling approaches.
Abstract: The quantitative assessment of uncertainty and sampling quality is essential in molecular simulation. Many systems of interest are highly complex, often at the edge of current computational capabilities. Modelers must therefore analyze and communicate statistical uncertainties so that "consumers" of simulated data understand its significance and limitations. This article covers key analyses appropriate for trajectory data generated by conventional simulation methods such as molecular dynamics and (single Markov chain) Monte Carlo. It also provides guidance for analyzing some 'enhanced' sampling approaches. We do not discuss systematic errors arising, e.g., from inaccuracy in the chosen model or force field.
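
One standard analysis for correlated trajectory data of this kind is block averaging, sketched below (the block count and the AR(1) stand-in for a trajectory observable are illustrative):

```python
import numpy as np

def block_average_sem(x, n_blocks=10):
    """Estimate the standard error of the mean of a correlated time
    series by block averaging: split the trajectory into contiguous
    blocks and use the spread of the block means. The block count is
    illustrative; in practice one checks convergence in block size."""
    n = len(x) // n_blocks
    means = np.array([x[i * n:(i + 1) * n].mean() for i in range(n_blocks)])
    return means.std(ddof=1) / np.sqrt(n_blocks)

# Demo: an AR(1) "trajectory" with strong positive correlation; the
# naive i.i.d. formula badly underestimates the true uncertainty.
rng = np.random.default_rng(1)
n = 10000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + rng.normal()
naive_sem = x.std(ddof=1) / np.sqrt(n)
block_sem = block_average_sem(x)
```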

Journal ArticleDOI
17 Jul 2019
TL;DR: This work proposes to integrate the techniques of GCN and MRF to solve the problem of semi-supervised community detection in attributed networks with semantic information and exploits both network topology and node semantic information in a complete end-to-end deep network architecture.
Abstract: Community detection is a fundamental problem in network science with various applications. The problem has attracted much attention and many approaches have been proposed. Among the existing approaches are the latest methods based on Graph Convolutional Networks (GCN) and on statistical modeling of Markov Random Fields (MRF). Here, we propose to integrate the techniques of GCN and MRF to solve the problem of semi-supervised community detection in attributed networks with semantic information. Our new method takes advantage of salient features of GCN and MRF and exploits both network topology and node semantic information in a complete end-to-end deep network architecture. Our extensive experiments demonstrate the superior performance of the new method over state-of-the-art methods and its scalability on several large benchmark problems.

Proceedings Article
24 May 2019
TL;DR: This paper proposes the Graph Markov Neural Network (GMNN) that combines the advantages of both worlds, and demonstrates that GMNN achieves state-of-the-art results on object classification, link classification, and unsupervised node representation learning.
Abstract: This paper studies semi-supervised object classification in relational data, which is a fundamental problem in relational data modeling. The problem has been extensively studied in the literature of both statistical relational learning (e.g. relational Markov networks) and graph neural networks (e.g. graph convolutional networks). Statistical relational learning methods can effectively model the dependency of object labels through conditional random fields for collective classification, whereas graph neural networks learn effective object representations for classification through end-to-end training. In this paper, we propose the Graph Markov Neural Network (GMNN) that combines the advantages of both worlds. A GMNN models the joint distribution of object labels with a conditional random field, which can be effectively trained with the variational EM algorithm. In the E-step, one graph neural network learns effective object representations for approximating the posterior distributions of object labels. In the M-step, another graph neural network is used to model the local label dependency. Experiments on object classification, link classification, and unsupervised node representation learning show that GMNN achieves state-of-the-art results.

Journal ArticleDOI
TL;DR: A mode-dependent intermediate temperature matrix is designed, which constructs an intermediate estimator to estimate faulty temperature values obtained by the IoT network, and the efficiency of the presented approach is verified with the results obtained in the conducted case study.
Abstract: This paper investigates distributed continuous-time fault estimation for multiple devices in the Internet-of-Things (IoT) networks by using a hybrid between cooperative control and state prediction techniques. First, a mode-dependent intermediate temperature matrix is designed, which constructs an intermediate estimator to estimate faulty temperature values obtained by the IoT network. Second, the transition matrix of the continuous-time Markov chain, the output temperatures, and sufficient stability conditions for auto-correcting the error of the IoT network temperatures are considered. Moreover, faulty devices are replaced by virtual devices to ensure continuous and robust monitoring of the IoT network, preventing in this way false data collection. Finally, the efficiency of the presented approach is verified with the results obtained in the conducted case study.

Journal ArticleDOI
TL;DR: The purpose is to design a controller via the event-triggered scheme which guarantees that the resulting error networks are finite-time bounded with a prescribed level of H∞ performance.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed α-Rank, a principled evolutionary dynamics methodology for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic solution concept called Markov-Conley chains (MCCs), which leverages continuous-time and discrete-time evolutionary dynamical systems applied to empirical games.
Abstract: We introduce α-Rank, a principled evolutionary dynamics methodology, for the evaluation and ranking of agents in large-scale multi-agent interactions, grounded in a novel dynamical game-theoretic solution concept called Markov-Conley chains (MCCs). The approach leverages continuous-time and discrete-time evolutionary dynamical systems applied to empirical games, and scales tractably in the number of agents, in the type of interactions (beyond dyadic), and the type of empirical games (symmetric and asymmetric). Current models are fundamentally limited in one or more of these dimensions, and are not guaranteed to converge to the desired game-theoretic solution concept (typically the Nash equilibrium). α-Rank automatically provides a ranking over the set of agents under evaluation and provides insights into their strengths, weaknesses, and long-term dynamics in terms of basins of attraction and sink components. This is a direct consequence of the correspondence we establish to the dynamical MCC solution concept when the underlying evolutionary model’s ranking-intensity parameter, α, is chosen to be large, which exactly forms the basis of α-Rank. In contrast to the Nash equilibrium, which is a static solution concept based solely on fixed points, MCCs are a dynamical solution concept based on the Markov chain formalism, Conley’s Fundamental Theorem of Dynamical Systems, and the core ingredients of dynamical systems: fixed points, recurrent sets, periodic orbits, and limit cycles. Our α-Rank method runs in polynomial time with respect to the total number of pure strategy profiles, whereas computing a Nash equilibrium for a general-sum game is known to be intractable. We introduce mathematical proofs that not only provide an overarching and unifying perspective of existing continuous- and discrete-time evolutionary evaluation models, but also reveal the formal underpinnings of the α-Rank methodology. 
We illustrate the method in canonical games and empirically validate it in several domains, including AlphaGo, AlphaZero, MuJoCo Soccer, and Poker.
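For a fixed, large ranking-intensity α, the core of the ranking step reduces to computing the stationary distribution of a Markov chain over pure strategy profiles. A minimal sketch, using a made-up 3×3 transition matrix (in α-Rank itself this matrix is constructed from empirical payoffs and α):

```python
import numpy as np

# Hypothetical row-stochastic transition matrix over three pure
# strategy profiles; in α-Rank this is built from payoffs and a
# large ranking-intensity parameter α.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to 1.  Profiles are then ranked by their mass.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()

ranking = np.argsort(-pi)  # profile indices, highest stationary mass first
print(pi, ranking)
```

This toy chain is irreducible, so the stationary distribution is unique; the paper's polynomial-time claim corresponds to this eigenvector computation scaling polynomially in the number of pure strategy profiles.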

Journal ArticleDOI
TL;DR: The findings show that the CA-Markov model is more sensitive to neighborhood size than to cell size or neighborhood type considering individual component effects, and the bilateral and trilateral interactions between neighborhood and cell size result in a more remarkable scale effect than that of a single cell size.
Abstract: Understanding the spatial scale sensitivity of cellular automata is crucial for improving the accuracy of land use change simulation. We propose a framework based on a response surface method to co...
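The Markov component of a CA-Markov model projects aggregate land-use class proportions forward with a transition matrix, while the cellular-automata component allocates those quantities spatially using neighborhood rules. A minimal sketch of the Markov step, with an entirely hypothetical transition matrix (in practice it is estimated from two land-cover maps at different dates):

```python
import numpy as np

# Hypothetical transition probabilities between three land-use
# classes (urban, farmland, forest); rows sum to 1.
T = np.array([
    [0.90, 0.05, 0.05],   # urban    -> urban/farmland/forest
    [0.15, 0.80, 0.05],   # farmland -> ...
    [0.05, 0.10, 0.85],   # forest   -> ...
])

shares = np.array([0.2, 0.5, 0.3])  # current class proportions

# One Markov step projects aggregate demand per class; the CA step
# (not shown) would then allocate it to cells using neighborhood
# rules -- the component whose scale sensitivity the paper studies.
next_shares = shares @ T
print(next_shares)
```

The neighborhood-size sensitivity the paper reports lives entirely in the omitted CA allocation step, not in this aggregate projection.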

Journal ArticleDOI
TL;DR: Stability and stabilization of Boolean networks with stochastic delays are studied via the semi-tensor product of matrices, and an equivalent condition for the existence of feedback controllers is provided in terms of a convex programming problem, which can be easily solved and conveniently applied to design controller gains.
Abstract: In this paper, stability and stabilization of Boolean networks with stochastic delays are studied via the semi-tensor product of matrices. The stochastic delays, randomly attaining finite values, are modeled by Markov chains. By utilizing an augmented method, the considered Boolean network is first converted into two coupled Markovian switching systems without delays. Then, some stochastic stability results are obtained based on stability results for positive systems. Subsequently, the stabilization of Boolean networks with stochastic delays is further investigated, and an equivalent condition for the existence of feedback controllers is provided in terms of a convex programming problem, which can be easily solved and conveniently applied to design controller gains. Finally, numerical examples are given to illustrate the feasibility of the obtained results.
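The semi-tensor product (STP) underlying this line of work generalizes matrix multiplication to mismatched inner dimensions: A ⋉ B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with t = lcm(n, p), where A is m×n and B is p×q. A minimal sketch, not tied to the paper's delay model, showing how a logical operator acts on a vector-encoded Boolean variable:

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Semi-tensor product A ⋉ B, generalizing ordinary matrix
    multiplication to mismatched inner dimensions."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Under the STP framework a Boolean variable is encoded as a vector:
# True -> [1, 0]^T, False -> [0, 1]^T.
x = np.array([[1], [0]])           # x = True
M_not = np.array([[0, 1],          # structure matrix of logical NOT
                  [1, 0]])
print(stp(M_not, x))               # -> [[0], [1]], i.e. False
```

When the inner dimensions already match (as here, 2×2 times 2×1), the STP reduces to the ordinary matrix product; its value is that the same formalism chains through multi-variable logical dynamics.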

Posted Content
TL;DR: In this paper, a new family of Markov chains called Recombination (or ReCom) is introduced to sample from the vast space of districting plans for identifying partisan gerrymanders.
Abstract: Redistricting is the problem of partitioning a set of geographical units into a fixed number of districts, subject to a list of often-vague rules and priorities. In recent years, the use of randomized methods to sample from the vast space of districting plans has been gaining traction in courts of law for identifying partisan gerrymanders, and it is now emerging as a possible analytical tool for legislatures and independent commissions. In this paper, we set up redistricting as a graph partition problem and introduce a new family of Markov chains called Recombination (or ReCom) on the space of graph partitions. The main point of comparison will be the commonly used Flip walk, which randomly changes the assignment label of a single node at a time. We present evidence that ReCom mixes efficiently, especially in contrast to the slow-mixing Flip, and provide experiments that demonstrate its qualitative behavior. We demonstrate the advantages of ReCom on real-world data and explain both the challenges of the Markov chain approach and the analytical tools that it enables. We close with a short case study involving the Virginia House of Delegates.
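The Flip walk that ReCom is compared against can be sketched in a few lines. The following toy proposal acts on a hypothetical 2×4 grid of units split into two districts; the real chain additionally enforces contiguity, population balance, and other acceptance constraints before accepting a move:

```python
import random

# Toy graph: a 2x4 grid of units (adjacency list) partitioned into
# two districts, left half (0) vs right half (1).
adj = {
    0: [1, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [2, 7],
    4: [0, 5], 5: [1, 4, 6], 6: [2, 5, 7], 7: [3, 6],
}
assignment = {n: (0 if n % 4 < 2 else 1) for n in adj}

def flip_step(assignment, rng):
    """One Flip proposal: pick a boundary node and relabel it to a
    neighboring district.  (Validity checks omitted.)"""
    boundary = [n for n in adj
                if any(assignment[m] != assignment[n] for m in adj[n])]
    node = rng.choice(boundary)
    other = rng.choice([assignment[m] for m in adj[node]
                        if assignment[m] != assignment[node]])
    new = dict(assignment)
    new[node] = other
    return new

rng = random.Random(0)
proposal = flip_step(assignment, rng)
changed = [n for n in adj if proposal[n] != assignment[n]]
print(changed)  # exactly one node relabeled
```

Because each Flip move changes a single node's label, the walk explores the space of partitions very locally, which is the intuition behind the slow mixing the paper reports; ReCom instead merges two districts and re-splits them along a random spanning tree, changing many assignments at once.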

Proceedings ArticleDOI
03 Nov 2019
TL;DR: This paper systematically formalizes the meta-path guided random walk as a higher-order Markov chain process, and presents a heterogeneous personalized spacey random walk to efficiently and effectively attain the expected stationary distribution among nodes.
Abstract: Heterogeneous information network (HIN) embedding has gained increasing interest recently. However, current random-walk-based HIN embedding methods have paid little attention to the higher-order Markov chain nature of meta-path guided random walks, especially to the stationarity issue. In this paper, we systematically formalize the meta-path guided random walk as a higher-order Markov chain process, and present a heterogeneous personalized spacey random walk to efficiently and effectively attain the expected stationary distribution among nodes. Then we propose a generalized, scalable framework that leverages the heterogeneous personalized spacey random walk to learn embeddings for multiple types of nodes in an HIN guided by a meta-path, a meta-graph, and a meta-schema, respectively. We conduct extensive experiments on several heterogeneous networks and demonstrate that our methods substantially outperform the existing state-of-the-art network embedding algorithms.
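The "spacey" idea can be illustrated independently of HINs: a second-order Markov chain conditions the next state on both the current state and the previous one, while a spacey walker "spaces out" and draws the remembered state from its empirical occupation frequencies instead of the true history. A minimal sketch on two states with a made-up transition tensor:

```python
import random

# Toy second-order transition tensor (values are made up):
# P[k][j][i] = Pr(next = i | current = j, remembered = k).
P = [
    [[0.9, 0.1], [0.5, 0.5]],   # remembered state 0
    [[0.5, 0.5], [0.1, 0.9]],   # remembered state 1
]

def spacey_walk(steps, rng):
    counts = [1, 1]              # occupation counts (add-one smoothing)
    state = 0
    history = [state]
    for _ in range(steps):
        # "Spacey": draw the remembered state from the empirical
        # occupation frequencies rather than the true previous state.
        k = rng.choices([0, 1], weights=counts)[0]
        state = rng.choices([0, 1], weights=P[k][state])[0]
        counts[state] += 1
        history.append(state)
    return history

walk = spacey_walk(1000, random.Random(42))
```

The long-run occupation frequencies of such a walk converge (under conditions) to a limiting distribution of the higher-order chain, which is the stationarity property the paper exploits for meta-path guided walks.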

Journal ArticleDOI
TL;DR: An on-line, non-intrusive load monitoring machine learning algorithm is proposed, combining two methodologies: 1) unsupervised event-based profiling and 2) Markov chain appliance load modeling.
Abstract: In this paper, we address the problem of providing fast and on-line household appliance load detection in a non-intrusive way from aggregate electric energy consumption data. Enabling on-line load detection is a relevant research problem as it can unlock new grid services such as demand-side management and increases interactivity in energy awareness, possibly leading to greener behaviors. To this purpose, we propose an on-line, non-intrusive load monitoring machine learning algorithm combining two methodologies: 1) unsupervised event-based profiling and 2) Markov chain appliance load modeling. The event-based part performs event detection through contiguous and transient data segments, event clustering, and matching. The resulting features are used to build household-specific appliance models from generic appliance models. Disaggregation is then performed on-line using an additive factorial hidden Markov model from the generated appliance model parameters. Our solution is implemented on the cloud and tested with public benchmark datasets. Accuracy results are presented and compared with literature solutions, showing that the proposed solution achieves on-line detection with performance comparable to that of non-on-line approaches.
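A toy illustration of the two ingredients, not the paper's algorithm: a two-state appliance modeled as a Markov chain generates an aggregate power signal, and on/off events are detected by thresholding step changes. All power levels and switching probabilities here are invented:

```python
import random

# Hypothetical two-state appliance as a Markov chain.
P_ON, P_OFF = 0.1, 0.2            # made-up switching probabilities
POWER = 1500.0                    # watts drawn when the appliance is on
BASE = 200.0                      # always-on background load

rng = random.Random(7)
state, signal = 0, []
for _ in range(200):
    if state == 0 and rng.random() < P_ON:
        state = 1                 # appliance switches on
    elif state == 1 and rng.random() < P_OFF:
        state = 0                 # appliance switches off
    signal.append(BASE + state * POWER)

# Event detection: flag any step change larger than half the
# appliance's rated power.
events = [t for t in range(1, len(signal))
          if abs(signal[t] - signal[t - 1]) > POWER / 2]
```

In the paper's pipeline the detected events are clustered and matched to build appliance models, and disaggregation of multiple overlapping appliances is handled by an additive factorial hidden Markov model rather than this single-appliance threshold.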