Showing papers on "Stochastic process" published in 2011


Proceedings ArticleDOI
09 May 2011
TL;DR: It is experimentally shown that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.
Abstract: We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produce an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness costs is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use, and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.
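The update the abstract describes can be pictured with a minimal numerical sketch: perturb the current trajectory with sampled noise, evaluate only the gradient-free cost of each rollout, and combine the perturbations with softmax weights so that low-cost rollouts dominate. This is an illustrative toy version, not the paper's STOMP implementation; the cost function, noise model, and parameter values below are hypothetical.

```python
import numpy as np

def stomp_like_update(theta, cost_fn, rng, n_samples=20, noise_std=0.05,
                      temperature=1.0):
    """One noise-weighted trajectory update (toy sketch of the idea above).

    theta   : (T, d) trajectory of T waypoints in d dimensions
    cost_fn : maps a trajectory to a scalar cost (obstacles + smoothness);
              no gradients are required.
    """
    eps = rng.normal(0.0, noise_std, size=(n_samples,) + theta.shape)
    eps[:, 0] = 0.0          # keep start fixed
    eps[:, -1] = 0.0         # keep goal fixed
    costs = np.array([cost_fn(theta + e) for e in eps])
    # Softmax weights: low-cost rollouts contribute more to the update.
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    delta = np.tensordot(w, eps, axes=1)   # weighted combination of the noise
    return theta + delta

# Hypothetical usage: a 1-D trajectory penalized for passing near x = 0.5
# (a stand-in "obstacle") plus a smoothness term.
def toy_cost(traj):
    obstacle = np.exp(-50 * (traj - 0.5) ** 2).sum()
    smooth = np.sum(np.diff(traj, axis=0) ** 2)
    return obstacle + 10 * smooth

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 1.0, 30)[:, None]
for _ in range(100):
    theta = stomp_like_update(theta, toy_cost, rng)
```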

817 citations


Journal ArticleDOI
TL;DR: The main theorem shows that when the probability that each individual observes some other individual from the recent past converges to one as the social network becomes large, unbounded private beliefs are sufficient to ensure asymptotic learning.
Abstract: We study the (perfect Bayesian) equilibrium of a sequential learning model over a general social network. Each individual receives a signal about the underlying state of the world, observes the past actions of a stochastically generated neighbourhood of individuals, and chooses one of two possible actions. The stochastic process generating the neighbourhoods defines the network topology. We characterize pure strategy equilibria for arbitrary stochastic and deterministic social networks and characterize the conditions under which there will be asymptotic learning—convergence (in probability) to the right action as the social network becomes large. We show that when private beliefs are unbounded (meaning that the implied likelihood ratios are unbounded), there will be asymptotic learning as long as there is some minimal amount of “expansion in observations”. We also characterize conditions under which there will be asymptotic learning when private beliefs are bounded.

678 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a simple and very general theory of compressive sensing, in which the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all standard models (e.g., Gaussian, frequency measurements) discussed in the literature, but also provides a framework for new measurement strategies as well.
Abstract: This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all standard models (e.g., Gaussian, frequency measurements) discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) to hold near the sparsity level in question, nor a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s log n Fourier coefficients that are contaminated with noise.
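As a rough illustration of the measurement model (not the paper's recovery guarantee or algorithm), the sketch below takes on the order of s log n random Fourier measurements of an s-sparse signal and recovers it by l1 minimization; the ISTA solver and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 256, 5                        # signal length, sparsity
m = int(4 * s * np.log(n))           # on the order of s*log(n) measurements

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.normal(size=s)

rows = rng.choice(n, m, replace=False)                 # random frequencies
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
A = F[rows, :]                                         # partial DFT sensing matrix
y = A @ x + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))

# l1 recovery via ISTA (iterative soft thresholding), an illustrative solver.
lam, L = 0.01, np.linalg.norm(A, 2) ** 2
xk = np.zeros(n, dtype=complex)
for _ in range(500):
    g = xk + (A.conj().T @ (y - A @ xk)) / L           # gradient step
    xk = np.maximum(np.abs(g) - lam / L, 0) * np.exp(1j * np.angle(g))  # shrink
print("relative recovery error:", np.linalg.norm(xk.real - x) / np.linalg.norm(x))
```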

520 citations


Journal ArticleDOI
TL;DR: In this article, the authors give some basic and important properties of typical Banach spaces of functions of G-Brownian motion paths induced by a sublinear expectation, which can be applied to continuous-time dynamic and coherent risk measures in finance, in particular for path-dependent risky positions under situations of volatility model uncertainty.
Abstract: In this paper we give some basic and important properties of several typical Banach spaces of functions of G-Brownian motion paths induced by a sublinear expectation, the G-expectation. Many results can also be applied to more general situations. A generalized version of Kolmogorov's criterion for continuous modification of a stochastic process is also obtained. The results can be applied to continuous-time dynamic and coherent risk measures in finance, in particular for path-dependent risky positions under situations of volatility model uncertainty.

514 citations


Proceedings ArticleDOI
09 May 2011
TL;DR: The algorithm incrementally constructs a graph of trajectories through state space while efficiently searching over candidate paths through the graph at each iteration; this results in a search tree in belief space that provably converges to the optimal path.
Abstract: In this paper we address the problem of motion planning in the presence of state uncertainty, also known as planning in belief space. The work is motivated by planning domains involving nontrivial dynamics, spatially varying measurement properties, and obstacle constraints. To make the problem tractable, we restrict the motion plan to a nominal trajectory stabilized with a linear estimator and controller. This allows us to predict distributions over future states given a candidate nominal trajectory. Using these distributions to ensure a bounded probability of collision, the algorithm incrementally constructs a graph of trajectories through state space, while efficiently searching over candidate paths through the graph at each iteration. This process results in a search tree in belief space that provably converges to the optimal path. We analyze the algorithm theoretically and also provide simulation results demonstrating its utility for balancing information gathering to reduce uncertainty and finding low cost paths.
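The graph search itself is omitted here, but the prediction step the abstract relies on can be sketched directly: with the nominal trajectory stabilized by a linear (Kalman) estimator, the distribution over future estimation errors propagates in closed form and can be checked against a chance constraint on collision. The double-integrator model, noise levels, and 0.5 m safety margin below are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def propagate_belief(A, C, Q, R, P0, steps):
    """Predict the estimation-error covariance along a nominal trajectory."""
    P = P0.copy()
    out = []
    for _ in range(steps):
        P = A @ P @ A.T + Q                             # open-loop prediction
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        P = (np.eye(len(P)) - K @ C) @ P                # Kalman measurement update
        out.append(P.copy())
    return out

# Hypothetical 1-D double integrator with noisy position measurements.
dt = 0.1
A = np.array([[1, dt], [0, 1]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.05]])
P_list = propagate_belief(A, C, Q, R, 0.1 * np.eye(2), steps=50)

# Chance constraint: probability that the position error exceeds a 0.5 m margin.
for k, P in enumerate(P_list[::10]):
    sigma = np.sqrt(P[0, 0])
    p_violate = 2 * (1 - norm.cdf(0.5 / sigma))
    print(f"step {10 * k}: P(|position error| > 0.5) ~ {p_violate:.3f}")
```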

490 citations


Journal ArticleDOI
TL;DR: A detailed derivation of this distribution is given, and its use as a prior is illustrated in an infinite latent feature model and in probabilistic models, such as bipartite graphs, in which the size of at least one class of nodes is unknown.
Abstract: The Indian buffet process is a stochastic process defining a probability distribution over equivalence classes of sparse binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features, or that involve bipartite graphs in which the size of at least one class of nodes is unknown. We give a detailed derivation of this distribution, and illustrate its use as a prior in an infinite latent feature model. We then review recent applications of the Indian buffet process in machine learning, discuss its extensions, and summarize its connections to other stochastic processes.
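A minimal sampling sketch (not taken from the paper) of the culinary-metaphor construction behind this distribution: customer i takes each previously sampled dish k with probability m_k/i, where m_k is the number of earlier customers who took it, and then tries a Poisson(alpha/i) number of new dishes.

```python
import numpy as np

def sample_ibp(alpha, n_customers, rng):
    """Draw a binary feature matrix Z from the Indian buffet process."""
    dish_counts = []            # how many customers have taken each dish
    rows = []
    for i in range(1, n_customers + 1):
        row = []
        for k, m_k in enumerate(dish_counts):
            take = rng.random() < m_k / i        # popular dishes are re-sampled
            row.append(int(take))
            dish_counts[k] += int(take)
        n_new = rng.poisson(alpha / i)           # new dishes for customer i
        row.extend([1] * n_new)
        dish_counts.extend([1] * n_new)
        rows.append(row)
    K = len(dish_counts)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(alpha=2.0, n_customers=10, rng=np.random.default_rng(0))
print(Z.shape, Z.sum(axis=0))    # unbounded number of columns, finite rows
```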

428 citations


Journal ArticleDOI
TL;DR: This paper discusses statistical properties and convergence of the Stochastic Dual Dynamic Programming method applied to multistage linear stochastic programming problems, and argues that the computational complexity of the corresponding SDDP algorithm is almost the same as in the risk neutral case.

399 citations


Journal ArticleDOI
TL;DR: In this paper, passivity analysis is conducted for discrete-time stochastic neural networks with both Markovian jumping parameters and mixed time delays by introducing a Lyapunov functional that accounts for the mixed time delays.
Abstract: In this paper, passivity analysis is conducted for discrete-time stochastic neural networks with both Markovian jumping parameters and mixed time delays. The mixed time delays consist of both discrete and distributed delays. The Markov chain in the underlying neural networks is finite piecewise homogeneous. By introducing a Lyapunov functional that accounts for the mixed time delays, a delay-dependent passivity condition is derived in terms of the linear matrix inequality approach. The case of Markov chain with partially unknown transition probabilities is also considered. All the results presented depend upon not only discrete delay but also distributed delay. A numerical example is included to demonstrate the effectiveness of the proposed methods.

355 citations


Journal ArticleDOI
TL;DR: The objective of this paper is the analysis of the noise sources in diffusion-based MC using tools from signal processing, statistics, and communication engineering; simulations evaluate the capability of the stochastic model to express the diffusion-based noise sources represented by the physical model.
Abstract: Molecular communication (MC) is a promising bio-inspired paradigm, in which molecules are used to encode, transmit and receive information at the nanoscale. Very limited research has addressed the problem of modeling and analyzing the MC in nanonetworks. One of the main challenges in MC is the proper study and characterization of the noise sources. The objective of this paper is the analysis of the noise sources in diffusion-based MC using tools from signal processing, statistics and communication engineering. The reference diffusion-based MC system for this analysis is the physical end-to-end model introduced in a previous work by the same authors. The particle sampling noise and the particle counting noise are analyzed as the most relevant diffusion-based noise sources. The analysis of each noise source results in two types of models, namely, the physical model and the stochastic model. The physical model mathematically expresses the processes underlying the physics of the noise source. The stochastic model captures the noise source behavior through statistical parameters. The physical model results in block schemes, while the stochastic model results in the characterization of the noises using random processes. Simulations are conducted to evaluate the capability of the stochastic model to express the diffusion-based noise sources represented by the physical model.

344 citations


Book
29 Jun 2011
TL;DR: Table of Contents Chapter 1 Introduction Chapter 2 Fundamentals of Probability and Random Variables Chapter 3 Expected Values of Random Variables Chapter 4 Analysis of Stochastic Processes Chapter 5 Time Domain Linear Vibration Analysis Chapter 6 Frequency Domain Analysis Chapter 7 Frequency, Bandwidth, and Amplitude Chapter 8 Matrix Analysis of Linear Systems.
Abstract: Table of Contents Chapter 1 Introduction Chapter 2 Fundamentals of Probability and Random Variables Chapter 3 Expected Values of Random Variables Chapter 4 Analysis of Stochastic Processes Chapter 5 Time Domain Linear Vibration Analysis Chapter 6 Frequency Domain Analysis Chapter 7 Frequency, Bandwidth, and Amplitude Chapter 8 Matrix Analysis of Linear Systems Chapter 9 Direct Stochastic Analysis of Linear Systems Chapter 10 Introduction to Nonlinear Stochastic Vibration Chapter 11 Failure Analysis Chapter 12 Effect of Parameter Uncertainty Appendices A and B

339 citations


Journal ArticleDOI
TL;DR: In this article, a detailed analysis of the time averaged mean squared displacement for systems governed by anomalous diffusion, considering both unconfined and restricted (corralled) motion is presented.
Abstract: Anomalous diffusion has been widely observed by single particle tracking microscopy in complex systems such as biological cells. The resulting time series are usually evaluated in terms of time averages. Often anomalous diffusion is connected with non-ergodic behaviour. In such cases the time averages remain random variables and hence irreproducible. Here we present a detailed analysis of the time averaged mean squared displacement for systems governed by anomalous diffusion, considering both unconfined and restricted (corralled) motion. We discuss the behaviour of the time averaged mean squared displacement for two prominent stochastic processes, namely, continuous time random walks and fractional Brownian motion. We also study the distribution of the time averaged mean squared displacement around its ensemble mean, and show that this distribution preserves typical process characteristics even for short time series. Recently, velocity correlation functions were suggested to distinguish between these processes. We here present analytical expressions for the velocity correlation functions. The knowledge of the results presented here is expected to be relevant for the correct interpretation of single particle trajectory data in complex systems.
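For concreteness, the time-averaged mean squared displacement discussed in the abstract is, for a discretely sampled trajectory, the average of squared increments at a fixed lag over the whole observation time. The sketch below is an illustration rather than the paper's analysis: it computes the quantity for an ordinary Brownian path, where it should stay close to the ensemble value 2*D*Delta; for CTRW-type anomalous diffusion the two generally differ.

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged MSD of a single trajectory x at the given integer lags."""
    out = []
    for lag in lags:
        disp = x[lag:] - x[:-lag]
        out.append(np.mean(disp ** 2))
    return np.array(out)

rng = np.random.default_rng(0)
dt, n, D = 0.01, 100_000, 0.5
x = np.cumsum(np.sqrt(2 * D * dt) * rng.normal(size=n))   # Brownian path

lags = np.array([1, 10, 100, 1000])
ta_msd = time_averaged_msd(x, lags)
for lag, m in zip(lags, ta_msd):
    print(f"Delta = {lag * dt:.2f}: time-avg MSD = {m:.4f}, 2*D*Delta = {2 * D * lag * dt:.4f}")
```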

Journal ArticleDOI
TL;DR: A review of the operational methods that have been developed for analyzing stochastic data in time and scale can be found in this paper, where a basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale.

Book ChapterDOI
01 Jan 2011
TL;DR: The Skorokhod embedding problem (SEP), as discussed in this paper, is to find, for a given stochastic process X and a measure μ on its state space, a stopping time τ such that the stopped process X_τ has law μ.
Abstract: This set of lecture notes is concerned with the following pair of ideas and concepts: 1. The Skorokhod Embedding problem (SEP) is, given a stochastic process X=(X_t)_{t≥0} and a measure μ on the state space of X, to find a stopping time τ such that the stopped process X_τ has law μ. Most often we take the process X to be Brownian motion, and μ to be a centred probability measure. 2. The standard approach for the pricing of financial options is to postulate a model and then to calculate the price of a contingent claim as the suitably discounted, risk-neutral expectation of the payoff under that model. In practice we can observe traded option prices, but know little or nothing about the model. Hence the question arises, if we know vanilla option prices, what can we infer about the underlying model?

Journal ArticleDOI
TL;DR: It is shown that the fraction of misclassified network nodes converges in probability to zero under maximum likelihood fitting when the number of classes is allowed to grow as the root of the network size and the average network degree grows at least poly-logarithmically in this size.
Abstract: We present asymptotic and finite-sample results on the use of stochastic blockmodels for the analysis of network data. We show that the fraction of misclassified network nodes converges in probability to zero under maximum likelihood fitting when the number of classes is allowed to grow as the root of the network size and the average network degree grows at least poly-logarithmically in this size. We also establish finite-sample confidence bounds on maximum-likelihood blockmodel parameter estimates from data comprising independent Bernoulli random variates; these results hold uniformly over class assignment. We provide simulations verifying the conditions sufficient for our results, and conclude by fitting a logit parameterization of a stochastic blockmodel with covariates to a network data example comprising self-reported school friendships, resulting in block estimates that reveal residual structure.
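Purely as an illustration of the model being fitted (not the paper's estimator or its asymptotic analysis), the sketch below generates a network from a two-class stochastic blockmodel and evaluates the Bernoulli log-likelihood of a candidate class assignment, the quantity that maximum likelihood fitting maximizes over assignments.

```python
import numpy as np

def sample_blockmodel(z, B, rng):
    """Adjacency matrix with P(edge i~j) = B[z[i], z[j]] (undirected, no loops)."""
    n = len(z)
    P = B[np.ix_(z, z)]
    A = np.triu(rng.random((n, n)) < P, 1).astype(int)
    return A + A.T

def blockmodel_loglik(A, z, B):
    """Bernoulli log-likelihood of A under assignment z and block matrix B."""
    P = B[np.ix_(z, z)]
    iu = np.triu_indices_from(A, 1)
    a, p = A[iu], P[iu]
    return np.sum(a * np.log(p) + (1 - a) * np.log(1 - p))

rng = np.random.default_rng(0)
z_true = np.repeat([0, 1], 50)                      # two classes of 50 nodes
B = np.array([[0.20, 0.02], [0.02, 0.15]])
A = sample_blockmodel(z_true, B, rng)
print(blockmodel_loglik(A, z_true, B))              # higher than for a shuffled z
print(blockmodel_loglik(A, rng.permutation(z_true), B))
```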

Journal ArticleDOI
01 Apr 2011
TL;DR: The proposed LMI-based criteria are quite general since many factors, such as noise perturbations, Markovian jump parameters, and mixed time delays, are considered; these factors are more general than those discussed in the previous literature.
Abstract: In this paper, the problem of exponential stability is investigated for a class of stochastic neural networks with both Markovian jump parameters and mixed time delays. The jumping parameters are modeled as a continuous-time finite-state Markov chain. Based on a Lyapunov-Krasovskii functional and the stochastic analysis theory, a linear matrix inequality (LMI) approach is developed to derive some novel sufficient conditions, which guarantee the exponential stability of the equilibrium point in the mean square. The proposed LMI-based criteria are quite general since many factors, such as noise perturbations, Markovian jump parameters, and mixed time delays, are considered. In particular, the mixed time delays in this paper simultaneously consist of constant, time-varying, and distributed delays, which are more general than those discussed in the previous literature. In the latter, only constant and distributed delays or time-varying and distributed delays are included. Therefore, the results obtained in this paper generalize and improve those given in the previous literature. Two numerical examples are provided to show the effectiveness of the theoretical results and demonstrate that the stability criteria used in the earlier literature fail.

Posted Content
Ivan Corwin1
TL;DR: In this paper, the authors present a survey of the development of the Kardar-Parisi-Zhang (KPZ) universality class and its connections with directed polymers in random media.
Abstract: Brownian motion is a continuum scaling limit for a wide class of random processes, and there has been great success in developing a theory for its properties (such as distribution functions or regularity) and expanding the breadth of its universality class. Over the past twenty five years a new universality class has emerged to describe a host of important physical and probabilistic models (including one dimensional interface growth processes, interacting particle systems and polymers in random environments) which display characteristic, though unusual, scalings and new statistics. This class is called the Kardar-Parisi-Zhang (KPZ) universality class and underlying it is, again, a continuum object -- a non-linear stochastic partial differential equation -- known as the KPZ equation. The purpose of this survey is to explain the context for, as well as the content of, a number of mathematical breakthroughs which have culminated in the derivation of the exact formula for the distribution function of the KPZ equation started with narrow wedge initial data. In particular we emphasize three topics: (1) The approximation of the KPZ equation through the weakly asymmetric simple exclusion process; (2) The derivation of the exact one-point distribution of the solution to the KPZ equation with narrow wedge initial data; (3) Connections with directed polymers in random media. As the purpose of this article is to survey and review, we make precise statements but provide only heuristic arguments with indications of the technical complexities necessary to make such arguments mathematically rigorous.

Journal ArticleDOI
TL;DR: This technical note introduces a new class of discrete-time networked nonlinear systems with mixed random delays and packet dropouts; the H∞ filtering problem for such systems is investigated, and sufficient conditions for the existence of an admissible filter are established.
Abstract: In this technical note, a new class of discrete-time networked nonlinear systems with mixed random delays and packet dropouts is introduced, and the H∞ filtering problem for such systems is investigated. The mixed stochastic time-delays consist of both discrete and infinite distributed delays, and the packet dropout phenomenon occurs in a random way. Furthermore, new techniques are presented to deal with the infinite distributed delay in the discrete-time domain. Sufficient conditions for the existence of an admissible filter are established, which ensure the asymptotic stability as well as a prescribed H∞ performance. Finally, examples are given to demonstrate the effectiveness of the proposed filter design scheme in this technical note.

Journal ArticleDOI
TL;DR: Comparisons reveal that this moment closure technique based on derivative-matching provides more accurate estimates of the moment dynamics, especially when the population size is small, and it is shown that the accuracy of the proposed moment closure scheme can be arbitrarily increased by incurring additional computational effort.
Abstract: In the stochastic formulation of chemical kinetics, the differential equation that describes the time evolution of the lower-order statistical moments for the number of molecules of the different species involved, is generally not closed, in the sense that the right-hand side of this equation depends on higher-order moments. Recent work has proposed a moment closure technique based on derivative-matching, which closes the moment equations by approximating higher-order moments as nonlinear functions of lower-order moments. We here provide a mathematical proof of this moment closure technique, and highlight its performance through comparisons with alternative methods. These comparisons reveal that this moment closure technique based on derivative-matching provides more accurate estimates of the moment dynamics, especially when the population size is small. Finally, we show that the accuracy of the proposed moment closure scheme can be arbitrarily increased by incurring additional computational effort.
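A rough sketch of how such a closure is used in practice, for a hypothetical logistic birth-death process (birth rate a*x, death propensity b*x^2), where the equation for E[x^2] involves E[x^3]. The product-of-powers approximation E[x^3] ≈ E[x^2]^3 / E[x]^3 is used here as an example of the derivative-matching form in the univariate case; this specific formula and the example system are assumptions for illustration, and the paper should be consulted for the general construction.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 0.01      # hypothetical birth/death rate constants

def closed_moment_odes(t, m):
    m1, m2 = m
    m3 = m2 ** 3 / m1 ** 3      # closure: E[x^3] approximated from lower moments
    dm1 = a * m1 - b * m2                       # d/dt E[x]
    dm2 = a * (2 * m2 + m1) + b * (m2 - 2 * m3) # d/dt E[x^2]
    return [dm1, dm2]

# Deterministic initial condition x(0) = 20, so E[x] = 20 and E[x^2] = 400.
sol = solve_ivp(closed_moment_odes, (0, 10), [20.0, 400.0])
m1, m2 = sol.y[:, -1]
print("mean ~", m1, " variance ~", m2 - m1 ** 2)
```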

Journal ArticleDOI
TL;DR: A model describing the evolution of predicted tube scalings simplifies the computation of stochastic tubes, and the resulting MPC scheme has a low online computational load even for long prediction horizons, thus allowing for performance improvements.
Abstract: Stochastic model predictive control (MPC) strategies can provide guarantees of stability and constraint satisfaction, but their online computation can be formidable. This difficulty is avoided in the current technical note through the use of tubes of fixed cross section and variable scaling. A model describing the evolution of predicted tube scalings facilitates the computation of stochastic tubes; furthermore this procedure can be performed offline. The resulting MPC scheme has a low online computational load even for long prediction horizons, thus allowing for performance improvements. The efficacy of the approach is illustrated by numerical examples.

Journal ArticleDOI
TL;DR: This approach reduces the problem of calculating the Fisher information matrix to solving a set of ordinary differential equations and is the first method to compute Fisher information for stochastic chemical kinetics models without the need for Monte Carlo simulations.
Abstract: We present a novel and simple method to numerically calculate Fisher information matrices for stochastic chemical kinetics models. The linear noise approximation is used to derive model equations and a likelihood function that leads to an efficient computational algorithm. Our approach reduces the problem of calculating the Fisher information matrix to solving a set of ordinary differential equations. This is the first method to compute Fisher information for stochastic chemical kinetics models without the need for Monte Carlo simulations. This methodology is then used to study sensitivity, robustness, and parameter identifiability in stochastic chemical kinetics models. We show that significant differences exist between stochastic and deterministic models as well as between stochastic models with time-series and time-point measurements. We demonstrate that these discrepancies arise from the variability in molecule numbers, correlations between species, and temporal correlations and show how this approach can be used in the analysis and design of experiments probing stochastic processes at the cellular level. The algorithm has been implemented as a Matlab package and is available from the authors upon request.
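Not the paper's implementation, but the final step can be sketched under the Gaussian assumption that the linear noise approximation provides: once the mean and covariance of the observations (and their parameter sensitivities) are available from ODE solutions, the Fisher information follows from the standard Gaussian formula. The decay model below is hypothetical, and sensitivities are taken by finite differences for brevity.

```python
import numpy as np

def gaussian_fim(theta, mean_fn, cov_fn, eps=1e-5):
    """Fisher information for y ~ N(mean_fn(theta), cov_fn(theta))."""
    p = len(theta)
    mu, S = mean_fn(theta), cov_fn(theta)
    Sinv = np.linalg.inv(S)
    dmu, dS = [], []
    for i in range(p):                       # finite-difference sensitivities
        tp = theta.copy(); tp[i] += eps
        dmu.append((mean_fn(tp) - mu) / eps)
        dS.append((cov_fn(tp) - S) / eps)
    F = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            F[i, j] = dmu[i] @ Sinv @ dmu[j] \
                      + 0.5 * np.trace(Sinv @ dS[i] @ Sinv @ dS[j])
    return F

# Hypothetical example: exponential decay observed at three times, with an
# LNA-style variance proportional to the mean (an assumption for illustration).
t_obs = np.array([1.0, 2.0, 3.0])
mean_fn = lambda th: th[0] * np.exp(-th[1] * t_obs)
cov_fn = lambda th: np.diag(mean_fn(th)) + 1e-6 * np.eye(3)
print(gaussian_fim(np.array([100.0, 0.5]), mean_fn, cov_fn))
```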

Book ChapterDOI
Michael Koller1
01 Jan 2011
TL;DR: In this appendix, the authors provide all necessary definitions and results with respect to martingales and stochastic integration, referring to the literature for proofs of the underlying theorems rather than proving them.
Abstract: The aim of this appendix is to provide all necessary definitions and results with respect to martingales and stochastic integration. Since some of the underlying properties and theorems require a lot of advanced mathematics, we do not aim to prove the different theorems. For valuable literature we refer to (Protter 1990) and (Ikeda and Watanabe 1981).

Journal ArticleDOI
TL;DR: Two different approaches to robust output-feedback controller design are developed for the underlying T-S fuzzy affine systems with unreliable communication links, with the solutions formulated in the form of linear matrix inequalities (LMIs).
Abstract: This paper investigates the problem of robust output-feedback control for a class of networked nonlinear systems with multiple packet dropouts. The nonlinear plant is represented by Takagi-Sugeno (T-S) fuzzy affine dynamic models with norm-bounded uncertainties, and stochastic variables that satisfy the Bernoulli random binary distribution are adopted to characterize the data-missing phenomenon. The objective is to design an admissible output-feedback controller that guarantees the stochastic stability of the resulting closed-loop system with a prescribed disturbance attenuation level. It is assumed that the plant premise variables, which are often the state variables or their functions, are not measurable so that the controller implementation with state-space partition may not be synchronous with the state trajectories of the plant. Based on a piecewise quadratic Lyapunov function combined with an S-procedure and some matrix inequality convexifying techniques, two different approaches to robust output-feedback controller design are developed for the underlying T-S fuzzy affine systems with unreliable communication links. The solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are provided to illustrate the effectiveness of the proposed approaches.

Journal ArticleDOI
TL;DR: In this paper, the authors develop stochastic analysis simultaneously under a general family of probability measures that are not dominated by a single probability measure, treating a larger class of non-smooth processes under a restricted family of non-dominated measures.
Abstract: This paper is on developing stochastic analysis simultaneously under a general family of probability measures that are not dominated by a single probability measure. The interest in this question originates from the probabilistic representations of fully nonlinear partial differential equations and applications to mathematical finance. The existing literature relies either on the capacity theory (Denis and Martini), or on the underlying nonlinear partial differential equation (Peng). In both approaches, the resulting theory requires certain smoothness, the so-called quasi-sure continuity, of the corresponding processes and random variables in terms of the underlying canonical process. In this paper, we investigate this question for a larger class of ``non-smooth" processes, but with a restricted family of non-dominated probability measures. For smooth processes, our approach leads to similar results as in previous literature, provided the restricted family satisfies an additional density property.

Journal ArticleDOI
TL;DR: The ability to model endogenous or random fluctuations on hidden neuronal (and physiological) states provides a new and possibly more plausible perspective on how regionally specific signals in fMRI are generated.

Journal ArticleDOI
TL;DR: An algorithm for decentralized multi-agent estimation of parameters in linear discrete-time regression models is proposed in the form of a combination of local stochastic approximation algorithms and a global consensus strategy, and an analysis of the asymptotic properties of the proposed algorithm is presented.
Abstract: In this paper, an algorithm for decentralized multi-agent estimation of parameters in linear discrete-time regression models is proposed in the form of a combination of local stochastic approximation algorithms and a global consensus strategy. An analysis of the asymptotic properties of the proposed algorithm is presented, taking into account both the multi-agent network structure and the probabilities of getting local measurements and implementing exchange of inter-agent messages. In the case of non-vanishing gains in the stochastic approximation algorithms, an asymptotic estimation error covariance matrix bound is defined as the solution of a Lyapunov-like matrix equation. In the case of asymptotically vanishing gains, the mean-square convergence is proved and the rate of convergence estimated. In the discussion, the problem of additive communication noise is treated in a methodologically consistent way. It is also demonstrated how the consensus scheme in the algorithm can contribute to the overall reduction of measurement noise influence. Some simulation results illustrate the obtained theoretical results.
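A toy sketch (not the authors' algorithm or its convergence analysis) of the two ingredients the abstract combines: a local stochastic-approximation (LMS-type) update whenever a measurement happens to be available, followed by consensus averaging of neighbors' estimates over a communication graph. The ring network, gain, and measurement probability below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
n_agents, d = 5, 3

# Row-stochastic consensus weights for a ring of 5 agents (hypothetical graph).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in (i - 1, i, i + 1):
        W[i, j % n_agents] = 1 / 3

theta = np.zeros((n_agents, d))
gain, p_meas = 0.05, 0.7                 # SA gain, prob. of a local measurement
for t in range(5000):
    for i in range(n_agents):
        if rng.random() < p_meas:        # local stochastic approximation step
            phi = rng.normal(size=d)
            y = phi @ theta_true + 0.1 * rng.normal()
            theta[i] += gain * phi * (y - phi @ theta[i])
    theta = W @ theta                    # consensus step with neighbors

print(np.round(theta, 3))                # all agents end up near theta_true
```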

Journal ArticleDOI
TL;DR: The GIKF error process remains stochastically bounded, irrespective of the instability of the random process dynamics; and the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semidefinite matrices.
Abstract: The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where intersensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1) the GIKF error process remains stochastically bounded, irrespective of the instability of the random process dynamics; and 2) the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semidefinite matrices (independent of the initial state). To prove these results, we interpret the filtering states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a nonstationary Markov chain on the network graph.

Journal ArticleDOI
TL;DR: A set of general nonlinear equations described by sector-bounded nonlinearities is utilized to model the system and sensors in networks and a sufficient condition is derived to guarantee the H∞ performance as well as the exponential mean-square stability of the resulting filtering error dynamics.
Abstract: In this paper, the problem of distributed H∞ filtering in sensor networks using a stochastic sampled-data approach is investigated. A set of general nonlinear equations described by sector-bounded nonlinearities is utilized to model the system and sensors in networks. Each sensor receives the information from both the system and its neighbors. The signal received by each sensor is sampled by a sampler separately with stochastic sampling periods before it is employed by the corresponding filter. By converting the sampling periods into bounded time-delays, the design problem of the stochastic sampled-data based distributed H∞ filters amounts to solving the H∞ filtering problem for a class of stochastic nonlinear systems with multiple bounded time-delays. Then, by constructing a new Lyapunov functional and employing both the Gronwall's inequality and the Jensen integral inequality, a sufficient condition is derived to guarantee the H∞ performance as well as the exponential mean-square stability of the resulting filtering error dynamics. Subsequently, the desired sampled-data based distributed H∞ filters are designed in terms of the solution to certain matrix inequalities that can be solved effectively by using available software. Finally, a numerical simulation example is exploited to demonstrate the effectiveness of the proposed sampled-data distributed H∞ filtering scheme.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a one-PRNG-per-kernel-call-per-thread scheme, in which a micro-stream of pseudorandom numbers is generated in each thread and kernel call.
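The paper targets GPU kernels; as a hedged CPU-side analogy rather than the authors' implementation, NumPy's SeedSequence can key an independent micro-stream on the pair (kernel call, thread id), so every thread in every kernel call draws from its own short, reproducible stream.

```python
import numpy as np

def micro_stream(global_seed, kernel_call, thread_id, n):
    """Independent, reproducible stream keyed by (kernel call, thread id)."""
    ss = np.random.SeedSequence(entropy=global_seed,
                                spawn_key=(kernel_call, thread_id))
    return np.random.default_rng(ss).random(n)

# The same (call, thread) pair always reproduces the same numbers; different
# pairs give statistically independent micro-streams.
a = micro_stream(global_seed=42, kernel_call=3, thread_id=17, n=4)
b = micro_stream(global_seed=42, kernel_call=3, thread_id=17, n=4)
c = micro_stream(global_seed=42, kernel_call=4, thread_id=17, n=4)
assert np.allclose(a, b) and not np.allclose(a, c)
```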

Journal ArticleDOI
TL;DR: Sufficient conditions are presented to guarantee the convergence of the estimation error systems for all admissible stochastic disturbances, randomly varying nonlinearities, and missing measurements.
Abstract: This paper deals with the distributed state estimation problem for a class of sensor networks described by discrete-time stochastic systems with randomly varying nonlinearities and missing measurements. In the sensor network, there is no centralized processor capable of collecting all the measurements from the sensors, and therefore each individual sensor needs to estimate the system state based not only on its own measurement but also on its neighboring sensors' measurements according to certain topology. The stochastic Brownian motions affect both the dynamical plant and the sensor measurement outputs. The randomly varying nonlinearities and missing measurements are introduced to reflect more realistic dynamical behaviors of the sensor networks that are caused by noisy environment as well as by probabilistic communication failures. Through available output measurements from each individual sensor, we aim to design distributed state estimators to approximate the states of the networked dynamic system. Sufficient conditions are presented to guarantee the convergence of the estimation error systems for all admissible stochastic disturbances, randomly varying nonlinearities, and missing measurements. Then, the explicit expressions of individual estimators are derived to facilitate the distributed computing of state estimation from each sensor. Finally, a numerical example is given to verify the theoretical results.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the conceptually different definitions used for the non-Markovianity of classical and quantum processes, and compare these definitions and their relations to the classical notion of non-Markovianity by employing a large class of non-Markovian processes, known as semi-Markov processes, which admit a natural extension to the quantum case.
Abstract: We discuss the conceptually different definitions used for the non-Markovianity of classical and quantum processes. The well-established definition of non-Markovianity of a classical stochastic process represents a condition on the Kolmogorov hierarchy of the n-point joint probability distributions. Since this definition cannot be transferred to the quantum regime, quantum non-Markovianity has recently been defined and quantified in terms of the underlying quantum dynamical map, using either its divisibility properties or the behavior of the trace distance between pairs of initial states. Here, we investigate and compare these definitions and their relations to the classical notion of non-Markovianity by employing a large class of non-Markovian processes, known as semi-Markov processes, which admit a natural extension to the quantum case. A number of specific physical examples are constructed that allow us to study the basic features of the classical and the quantum definitions and to evaluate explicitly the measures of quantum non-Markovianity. Our results clearly demonstrate several fundamental differences between the classical and the quantum notion of non-Markovianity, as well as between the various quantum measures of non-Markovianity. In particular, we show that the divisibility property in the classical case does not coincide with Markovianity and that the non-Markovianity measure based on divisibility assigns equal infinite values to different dynamics, which can be distinguished by exploiting the trace distance measure. A simple exact expression for the latter is also obtained in a special case.
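An illustrative sketch (not from the paper) of the trace-distance criterion mentioned in the abstract: D(rho_1, rho_2) = (1/2)||rho_1 - rho_2||_1 is non-increasing in time under any divisible (Markovian) evolution, so any interval on which it grows between a pair of evolved states signals non-Markovianity. The amplitude-damping family and its oscillating damping schedule below are hypothetical stand-ins for a genuinely non-Markovian evolution.

```python
import numpy as np

def trace_distance(r1, r2):
    """D(r1, r2) = 0.5 * sum of singular values of (r1 - r2)."""
    return 0.5 * np.linalg.svd(r1 - r2, compute_uv=False).sum()

def amplitude_damping(rho, p):
    """Apply an amplitude-damping channel with damping probability p."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho_a = np.array([[1, 0], [0, 0]], dtype=complex)         # |0><0|
rho_b = np.array([[0, 0], [0, 1]], dtype=complex)         # |1><1|

# A hypothetical non-monotonic damping schedule p(t); under a divisible
# (Markovian) evolution the trace distance could only decrease in time.
ts = np.linspace(0, 10, 200)
p_t = np.sin(0.4 * ts) ** 2
D = [trace_distance(amplitude_damping(rho_a, p), amplitude_damping(rho_b, p))
     for p in p_t]
print("intervals where D(t) increases (non-Markovian signature):",
      np.any(np.diff(D) > 1e-12))
```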