
Showing papers on "Stochastic process published in 2016"


01 Jan 2016
Stochastic Processes and Filtering Theory.

646 citations


Journal ArticleDOI
TL;DR: The theory and applications of random walks on networks are surveyed, restricted to the simple cases of single and non-adaptive random walkers, and three main types are distinguished: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks.
Abstract: Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
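As a concrete illustration of the discrete-time random walk discussed above (an independent sketch, not code from the survey), the following snippet simulates a walker on a small made-up undirected graph and compares its empirical occupation frequencies with the degree-proportional stationary distribution; the graph and step count are arbitrary choices.

```python
import numpy as np

# Discrete-time random walk on a small undirected graph (toy example).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]                 # row i: jump to a uniformly random neighbour of i

rng = np.random.default_rng(0)
node, visits = 0, np.zeros(len(A))
for _ in range(200_000):             # simulate the walk and count node visits
    node = rng.choice(len(A), p=P[node])
    visits[node] += 1

print("empirical occupation :", np.round(visits / visits.sum(), 3))
print("degree-based theory  :", np.round(deg / deg.sum(), 3))   # stationary pi_i proportional to degree
```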

397 citations


Journal ArticleDOI
TL;DR: It is proved that, as long as b is below a certain threshold, any predefined accuracy can be reached with less overall work than without mini-batching, and that the mini-batching scheme is suitable for further acceleration by parallelization.
Abstract: We propose mS2GD: a method incorporating a mini-batching scheme for improving the theoretical complexity and practical performance of semi-stochastic gradient descent (S2GD). We consider the problem of minimizing a strongly convex function represented as the sum of an average of a large number of smooth convex functions, and a simple nonsmooth convex regularizer. Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps. The process is repeated a few times with the last iterate becoming the new starting point. The novelty of our method is in the introduction of mini-batching into the computation of stochastic steps. In each step, instead of choosing a single function, we sample $b$ functions, compute their gradients, and compute the direction based on this. We analyze the complexity of the method and show that it benefits from two speedup effects. First, we prove that as long as $b$ is below a certain threshold, we can reach any predefined accuracy with less overall work than without mini-batching. Second, our mini-batching scheme admits a simple parallel implementation, and hence is suitable for further acceleration by parallelization.
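The outer/inner loop structure described above can be sketched as follows. This is an illustrative reimplementation on a toy l2-regularized logistic-regression problem, not the authors' code: the nonsmooth regularizer (and its proximal step), the step size, the mini-batch size, and the distribution of the number of inner steps are all simplified or invented here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: l2-regularised logistic regression (smooth, strongly convex).
n, d, lam = 500, 10, 0.1
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n) * 2.0 - 1.0          # labels in {-1, +1}

def grad(w, idx):
    """Average gradient of the sampled component functions, plus the l2 term."""
    z = y[idx] * (X[idx] @ w)
    return -(y[idx] / (1.0 + np.exp(z))) @ X[idx] / len(idx) + lam * w

def objective(w):
    return np.mean(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * w @ w

w_tilde = np.zeros(d)
h, b, m = 0.05, 10, 200                 # step size, mini-batch size, max inner steps
for epoch in range(30):
    mu = grad(w_tilde, np.arange(n))    # deterministic step: full gradient at the snapshot
    w = w_tilde.copy()
    # Random number of stochastic inner steps (uniform here; the paper prescribes
    # a specific distribution for this count).
    for _ in range(rng.integers(1, m + 1)):
        idx = rng.choice(n, size=b, replace=False)     # sample a mini-batch of b functions
        v = grad(w, idx) - grad(w_tilde, idx) + mu     # variance-reduced search direction
        w -= h * v
    w_tilde = w                          # last iterate becomes the new starting point

print("final objective:", objective(w_tilde))
```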

289 citations


Book
23 Aug 2016
TL;DR: Fractional Brownian motion (fBm), as discussed in this book, is a stochastic process which deviates significantly from Brownian motion, semimartingales, and other processes classically used in probability theory.
Abstract: Fractional Brownian motion (fBm) is a stochastic process which deviates significantly from Brownian motion and semimartingales, and others classically used in probability theory. As a centered Gaussian process, it is characterized by the stationarity of its increments and a medium- or long-memory property which is in sharp contrast with martingales and Markov processes. FBm has become a popular choice for applications where classical processes cannot model these non-trivial properties; for instance long memory, which is also known as persistence, is of fundamental importance for financial data and in internet traffic. The mathematical theory of fBm is currently being developed vigorously by a number of stochastic analysts, in various directions, using complementary and sometimes competing tools. This book is concerned with several aspects of fBm, including the stochastic integration with respect to it, the study of its supremum and its appearance as limit of partial sums involving stationary sequences, to name but a few. The book is addressed to researchers and graduate students in probability and mathematical statistics. With very few exceptions (where precise references are given), every stated result is proved.
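A minimal way to sample fBm paths, assuming only the standard covariance formula (a generic sketch, not material from the book): draw a Gaussian vector with covariance E[B_H(s)B_H(t)] = (s^{2H} + t^{2H} - |t-s|^{2H})/2 via a Cholesky factor, then check the sign of the lag-1 increment correlation, which is the persistence/anti-persistence property mentioned above. The Hurst exponents and grid size below are arbitrary.

```python
import numpy as np

def sample_fbm(n_steps=300, T=1.0, H=0.7, rng=None):
    """One fBm path on (0, T], sampled exactly from its covariance via Cholesky.

    Covariance: E[B_H(s) B_H(t)] = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2.
    O(n^3) cost -- fine for short paths; FFT/circulant methods scale better.
    """
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))   # tiny jitter for stability
    return t, L @ rng.standard_normal(n_steps)

rng = np.random.default_rng(0)
for H in (0.3, 0.5, 0.7):
    corr = []
    for _ in range(200):
        _, x = sample_fbm(H=H, rng=rng)
        inc = np.diff(x)
        corr.append(np.corrcoef(inc[:-1], inc[1:])[0, 1])
    print(f"H = {H}: lag-1 increment correlation ~ {np.mean(corr):+.3f}")
# H > 1/2: persistent (long-memory) increments; H = 1/2: Brownian motion; H < 1/2: anti-persistent.
```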

234 citations


Journal ArticleDOI
TL;DR: The main purpose of this brief is to design a filter to guarantee that the augmented Markovian jump fuzzy neural networks are stable in the mean-square sense and satisfy a prescribed passivity performance index, by employing the Lyapunov method and the stochastic analysis technique.
Abstract: In this brief, the problems of mixed H-infinity and passivity performance analysis and design are investigated for discrete time-delay neural networks with Markovian jump parameters represented by a Takagi–Sugeno fuzzy model. The main purpose of this brief is to design a filter to guarantee that the augmented Markovian jump fuzzy neural networks are stable in the mean-square sense and satisfy a prescribed passivity performance index by employing the Lyapunov method and the stochastic analysis technique. Applying matrix decomposition techniques, sufficient conditions are provided for the solvability of the problems, which can be formulated in terms of linear matrix inequalities. A numerical example is also presented to illustrate the effectiveness of the proposed techniques.

231 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule, with calculations that can be performed in just a few CPU hours.
Abstract: Development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes it one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method also allows efficient calculation of excited state energies, which we illustrate with benchmark results for the excited states of C2.

226 citations


Journal ArticleDOI
TL;DR: Based on Lyapunov functions, the Halanay inequality, and linear matrix inequalities, sufficient conditions that depend on the probability distribution of the delay coupling and the impulsive delay are obtained, and numerical simulations are used to show the effectiveness of the theoretical results.
Abstract: This paper deals with the exponential synchronization of coupled stochastic memristor-based neural networks with probabilistic time-varying delay coupling and time-varying impulsive delay. There is one probabilistic transmittal delay in the delayed coupling that is modeled by a Bernoulli stochastic variable satisfying a conditional probability distribution. The disturbance is described by a Wiener process. Based on Lyapunov functions, the Halanay inequality, and linear matrix inequalities, sufficient conditions that depend on the probability distribution of the delay coupling and the impulsive delay are obtained. Numerical simulations are used to show the effectiveness of the theoretical results.

192 citations


Journal ArticleDOI
TL;DR: In this paper, a data-driven risk-averse stochastic unit commitment model is proposed, where risk aversion stems from the worst-case probability distribution of the renewable energy generation amount, and the corresponding solution methods to solve the problem are developed.
Abstract: Considering recent development of deregulated energy markets and the intermittent nature of renewable energy generation, it is important for power system operators to ensure cost effectiveness while maintaining the system reliability. To achieve this goal, significant research progress has recently been made to develop stochastic optimization models and solution methods to improve the reliability unit commitment run practice, which is used in the day-ahead market for ISOs/RTOs to ensure sufficient generation capacity available in real time to accommodate uncertainties. Most stochastic optimization approaches assume the renewable energy generation amounts follow certain distributions. However, in practice, the distributions are unknown and instead, a certain amount of historical data are available. In this research, we propose a data-driven risk-averse stochastic unit commitment model, where risk aversion stems from the worst-case probability distribution of the renewable energy generation amount, and develop the corresponding solution methods to solve the problem. Given a set of historical data, our proposed approach first constructs a confidence set for the distributions of the uncertain parameters using statistical inference and solves the corresponding risk-averse stochastic unit commitment problem. Then, we show that the conservativeness of the proposed stochastic program vanishes as the number of historical data increases to infinity. Finally, the computational results numerically show how the risk-averse stochastic unit commitment problem converges to the risk-neutral one, which indicates the value of data.
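The "worst-case distribution within a tolerance set" idea can be illustrated with a toy scenario calculation (not the paper's confidence-set construction or unit commitment model; the costs, probabilities, and radius below are invented): over all distributions within an L1 distance theta of the empirical one, the worst-case expected cost has a simple greedy solution, and it collapses to the empirical (risk-neutral) expectation as theta goes to zero.

```python
import numpy as np

def worst_case_mean(costs, p_hat, theta):
    """Max of E_p[cost] over distributions p with ||p - p_hat||_1 <= theta.

    Greedy (optimal for this small LP): shift up to theta/2 of probability mass
    from the cheapest scenarios onto the costliest one.
    """
    p = p_hat.astype(float).copy()
    budget = theta / 2.0
    worst = int(np.argmax(costs))
    for i in np.argsort(costs):          # cheapest scenarios first
        if budget <= 0:
            break
        if i == worst:
            continue
        move = min(p[i], budget)
        p[i] -= move
        p[worst] += move
        budget -= move
    return float(p @ costs)

costs = np.array([100.0, 120.0, 150.0, 90.0, 300.0])   # dispatch cost per scenario (invented)
p_hat = np.full(5, 0.2)                                 # empirical scenario probabilities
for theta in (0.0, 0.2, 0.5):
    print(f"theta = {theta:.1f}  worst-case expected cost = {worst_case_mean(costs, p_hat, theta):.1f}")
# theta -> 0 recovers the empirical (risk-neutral) expectation, mirroring the
# convergence described in the abstract as the data set grows.
```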

182 citations


Journal ArticleDOI
TL;DR: Through comparisons with other methods, the proposed method demonstrates its superiority in describing stochastic degradation processes and predicting the machine RUL.
Abstract: Remaining useful life (RUL) prediction is a key process in condition-based maintenance for machines. It contributes to reducing risks and maintenance costs and increasing the maintainability, availability, reliability, and productivity of machines. This paper proposes a new method based on stochastic process models for machine RUL prediction. First, a new stochastic process model is constructed considering the multiple variability sources of machine stochastic degradation processes simultaneously. Then the Kalman particle filtering algorithm is used to estimate the system states and predict the RUL. The effectiveness of the method is demonstrated using simulated degradation processes and accelerated degradation tests of rolling element bearings. Through comparisons with other methods, the proposed method demonstrates its superiority in describing the stochastic degradation processes and predicting the machine RUL.
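To make the filtering step concrete, here is a minimal bootstrap particle filter applied to a toy Wiener-process (linear drift plus Brownian noise) degradation model, with the RUL read off as the time for the filtered state to drift up to a failure threshold. This is a generic sketch under invented parameters, not the paper's Kalman particle filter or its multi-variability degradation model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy degradation: Wiener process with drift, observed with noise (an illustrative
# stand-in for the paper's multi-source-variability model).
drift, sigma, meas_std, dt, threshold = 0.05, 0.02, 0.05, 1.0, 5.0

def propagate(x):
    return x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)

# One "true" degradation path and its noisy measurements.
true_x, ys = 0.0, []
for _ in range(60):
    true_x = propagate(np.array([true_x]))[0]
    ys.append(true_x + meas_std * rng.standard_normal())

# Bootstrap particle filter (the paper couples this idea with a Kalman step).
n_p = 2000
particles = np.zeros(n_p)
for y in ys:
    particles = propagate(particles)                         # predict
    w = np.exp(-0.5 * ((y - particles) / meas_std) ** 2)     # weight by measurement likelihood
    w /= w.sum()
    particles = particles[rng.choice(n_p, size=n_p, p=w)]    # resample

# Point RUL estimate: time for the filtered state to drift up to the failure threshold.
rul = (threshold - particles.mean()) / drift
print(f"true state {true_x:.2f}, filtered {particles.mean():.2f}, predicted RUL ~ {rul:.0f} steps")
```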

182 citations


Journal ArticleDOI
TL;DR: A set of event-based state estimators is constructed so as to reduce unnecessary data transmissions in the communication channel, and a combination of the stochastic analysis approach and Lyapunov theory is used to obtain sufficient conditions ensuring the existence of the desired estimators.
Abstract: In this paper, the state estimation problem is investigated for a class of discrete-time complex networks subject to nonlinearities, mixed delays, and stochastic noises. A set of event-based state estimators is constructed so as to reduce unnecessary data transmissions in the communication channel. Compared with the traditional state estimator whose measurement signal is received under a periodic clock-driven rule, the event-based estimator only updates the measurement information from the sensors when the prespecified “event” is violated. Attention is focused on the analysis and design problem of the event-based estimators for the addressed discrete-time complex networks such that the estimation error is exponentially bounded in mean square. A combination of the stochastic analysis approach and Lyapunov theory is employed to obtain sufficient conditions for ensuring the existence of the desired estimators and the upper bound of the estimation error is also derived. By using the convex optimization technique, the gain parameters of the desired estimators are provided in an explicit form. Finally, a simulation example is used to demonstrate the effectiveness of the proposed estimation strategy.
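The event-triggered idea can be illustrated with a scalar toy system (an invented example, not the paper's complex-network model or its design conditions): the sensor transmits a measurement only when it leaves a band around the last transmitted value, and the estimator runs on the held measurement in between.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar toy system x_{k+1} = a x_k + w_k, measurement y_k = x_k + v_k.
a, delta, L = 0.95, 0.3, 0.6      # dynamics, event threshold, estimator gain (all invented)
x, x_hat, y_last = 1.0, 0.0, None
transmissions = 0

for k in range(200):
    x = a * x + 0.05 * rng.standard_normal()
    y = x + 0.05 * rng.standard_normal()

    # Event rule: transmit only if the measurement has left the band around
    # the last transmitted value (the prespecified "event" is violated).
    if y_last is None or abs(y - y_last) > delta:
        y_last = y
        transmissions += 1

    # The estimator always uses the latest *transmitted* measurement.
    x_hat = a * x_hat + L * (y_last - x_hat)

print(f"transmitted {transmissions}/200 samples, final estimation error {abs(x - x_hat):.3f}")
```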

178 citations


Journal ArticleDOI
TL;DR: This paper investigates the tracking and optimization problems for a class of industrial processes by utilizing output feedback fault-tolerant control (FTC) and a predictive compensation strategy, and simulation results further demonstrate the effectiveness of the proposed method.
Abstract: This paper investigates the tracking and optimization problems for a class of industrial processes by utilizing output feedback fault-tolerant control (FTC) and a predictive compensation strategy. At the device layer, the tracking problem for device layer subsystems that are subject to random failures and random network-induced delays is investigated. These two different random processes are modeled as Markovian chains. Device layer controllers are designed to guarantee the tracking performance at an $H_{\infty}$ disturbance attenuation level. At the operation layer, a nonlinear model predictive control (NMPC) strategy is proposed to stabilize the upper operation layer system. Then, by considering the effect of a radial basis function (RBF) performance index and random packet dropout phenomena, a predictive compensator is designed to guarantee the input-to-state practical stability (ISpS) of the resulting system. In addition, networked flotation processes are considered in the simulation part, and the simulation results further demonstrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: This work shows that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
Abstract: What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law ∼ τ^{-(1+α)}, α > 0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α < 1 to one that is time independent for α > 1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
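A quick numerical check of the picture above (an independent sketch, not the paper's exact solution): simulate one-dimensional diffusion that is reset to the origin at Pareto-distributed waiting times and compare the typical spread at two different times, for α below and above 1. The diffusion constant, cutoff τ0, horizons, and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def median_abs_x(alpha, T, dt=0.05, D=0.5, n_traj=5000, tau0=1.0):
    """Median |x(T)| for 1D diffusion reset to the origin at Pareto(alpha) waiting times.

    Waiting times are Pareto distributed: p(tau) ~ tau^{-(1+alpha)} for tau >= tau0.
    """
    x = np.zeros(n_traj)
    next_reset = tau0 * (1.0 - rng.random(n_traj)) ** (-1.0 / alpha)
    t = 0.0
    while t < T:
        due = t >= next_reset
        x[due] = 0.0                                              # abrupt reset to the origin
        next_reset[due] += tau0 * (1.0 - rng.random(due.sum())) ** (-1.0 / alpha)
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)  # diffusive step
        t += dt
    return np.median(np.abs(x))

for alpha in (0.5, 1.5):
    print(f"alpha = {alpha}: median |x| at T=50: {median_abs_x(alpha, 50):.2f}, "
          f"at T=200: {median_abs_x(alpha, 200):.2f}")
# For alpha < 1 the typical spread keeps growing with time (ever-spreading distribution),
# whereas for alpha > 1 it barely changes with time, in line with the abstract above.
```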

Journal ArticleDOI
TL;DR: In this article, the authors obtain the general solution for the distribution of processes in which waiting times between reset events are drawn from an arbitrary distribution, which allows for the investigation of a broader class of much more realistic processes.
Abstract: Stochastic processes that are randomly reset to an initial condition serve as a showcase to investigate non-equilibrium steady states. However, all existing results have been restricted to the special case of memoryless resetting protocols. Here, we obtain the general solution for the distribution of processes in which waiting times between reset events are drawn from an arbitrary distribution. This allows for the investigation of a broader class of much more realistic processes. As an example, our results are applied to the analysis of the efficiency of constrained random search processes.

Journal ArticleDOI
TL;DR: This paper is concerned with the extended Kalman filtering problem for a class of stochastic nonlinear systems under cyber attacks, wherein the discussed cyber attacks occur in a random way in the data transmission from sensor nodes to remote filter nodes.

Journal ArticleDOI
TL;DR: The intensity matching approach for tractable performance evaluation and optimization of cellular networks is introduced and is conveniently formulated for unveiling the impact of several system parameters, e.g., the density of base stations and blockages.
Abstract: The intensity matching approach for tractable performance evaluation and optimization of cellular networks is introduced. It assumes that the base stations are modeled as the points of a Poisson point process (PPP) and leverages stochastic geometry for system-level analysis. Its rationale relies on observing that system-level performance is determined by the intensity measure of transformations of the underlying spatial PPP. By approximating the original system model with a simplified one, whose performance is determined by a mathematically convenient intensity measure, tractable yet accurate integral expressions for computing area spectral efficiency and potential throughput are provided. The considered system model accounts for many practical aspects that, for tractability, are typically neglected, e.g., line-of-sight (LOS) and non-LOS propagation, antenna radiation patterns, traffic load, practical cell associations, and general fading channels. The proposed approach, more importantly, is conveniently formulated for unveiling the impact of several system parameters, e.g., the density of base stations and blockages. The effectiveness of this novel and general methodology is validated with the aid of empirical data for the locations of base stations and for the footprints of buildings in dense urban environments.
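For context, here is a Monte Carlo evaluation of the classical single-tier PPP baseline that the intensity-matching approach generalizes (this is the textbook setup with Rayleigh fading and nearest-BS association, not the paper's model): coverage probability for a user at the origin, checked against the well-known closed form for a path-loss exponent of 4 in the interference-limited regime. Density, threshold, and simulation sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def coverage_prob(T_dB=0.0, lam=1.0, alpha=4.0, radius=20.0, n_mc=3000):
    """Monte Carlo SIR coverage for a typical user at the origin: PPP base stations,
    nearest-BS association, Rayleigh fading, interference-limited (no noise)."""
    T = 10.0 ** (T_dB / 10.0)
    covered = 0
    area = np.pi * radius ** 2
    for _ in range(n_mc):
        n = rng.poisson(lam * area)
        if n < 2:
            continue
        r = radius * np.sqrt(rng.random(n))     # uniform distances within a disc
        g = rng.exponential(size=n)             # Rayleigh fading -> exponential power gains
        p = g * r ** (-alpha)
        k = int(np.argmin(r))                   # serve from the nearest base station
        covered += p[k] / (p.sum() - p[k]) > T
    return covered / n_mc

T_dB = 0.0
pc_sim = coverage_prob(T_dB)
T = 10.0 ** (T_dB / 10.0)
pc_theory = 1.0 / (1.0 + np.sqrt(T) * np.arctan(np.sqrt(T)))   # classical PPP result for alpha = 4
print(f"simulated coverage {pc_sim:.3f} vs closed form {pc_theory:.3f}")
```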

Book
29 Oct 2016
TL;DR: This chapter discusses the development of linear Finite-Dimensional Stochastic Systems with Inputs, and some Topics in Linear Algebra and Hilbert Space Theory.
Abstract: Contents: Introduction; Geometry of Second-Order Random Processes; Spectral Representation of Stationary Processes; Innovations, Wold Decomposition, and Spectral Factorization; Wold Decomposition and Spectral Factorization in Continuous Time; Linear Finite-Dimensional Stochastic Systems; The Geometry of Splitting Subspaces; Markovian Representations; Proper Markovian Representations in Hardy Space; Stochastic Realization Theory in Continuous Time; Stochastic Balancing and Model Reduction; Finite-Interval Stochastic Realization and Partial Realization Theory; Subspace Identification for Time Series; Zero Dynamics and the Geometry of the Riccati Inequality; Smoothing and Interpolation; Acausal Linear Stochastic Models and Spectral Factorization; Stochastic Systems with Inputs; Appendix A: Basic Principles of Deterministic Realization Theory; Appendix B: Some Topics in Linear Algebra and Hilbert Space Theory.

Journal ArticleDOI
TL;DR: It is shown that SDP relaxations also achieve the sharp recovery threshold in the following cases: 1) binary SBM with two clusters of sizes proportional to network size but not necessarily equal; 2) SBM with a fixed number of equal-sized clusters; and 3) binary censored block model with the background graph being Erdős–Rényi.
Abstract: Resolving a conjecture of Abbe, Bandeira, and Hall, the authors have recently shown that the semidefinite programming (SDP) relaxation of the maximum likelihood estimator achieves the sharp threshold for exactly recovering the community structure under the binary stochastic block model (SBM) of two equal-sized clusters. The same was shown for the case of a single cluster and outliers. Extending the proof techniques, in this paper, it is shown that SDP relaxations also achieve the sharp recovery threshold in the following cases: 1) binary SBM with two clusters of sizes proportional to network size but not necessarily equal; 2) SBM with a fixed number of equal-sized clusters; and 3) binary censored block model with the background graph being Erdős–Rényi. Furthermore, a sufficient condition is given for an SDP procedure to achieve exact recovery for the general case of a fixed number of clusters plus outliers. These results demonstrate the versatility of SDP relaxation as a simple, general purpose, computationally feasible methodology for community detection.
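A small sketch of the SDP relaxation for the simplest case mentioned above (two equal-sized communities), written with cvxpy; the SBM parameters and problem size are made up and kept tiny because generic SDP solvers are slow, and the rounding by a leading eigenvector is one common heuristic rather than the paper's analysis.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)

# Binary SBM: two equal communities, within-community edge prob p, across prob q.
n, p, q = 40, 0.7, 0.1
labels = np.repeat([1.0, -1.0], n // 2)
edge_prob = np.where(np.outer(labels, labels) > 0, p, q)
A = (rng.random((n, n)) < edge_prob).astype(float)
A = np.triu(A, 1)
A = A + A.T                                             # symmetric adjacency, no self-loops

# SDP relaxation of maximum likelihood:  max <A, Y>  s.t.  Y psd, diag(Y) = 1, sum(Y) = 0.
Y = cp.Variable((n, n), symmetric=True)
constraints = [Y >> 0, cp.diag(Y) == 1, cp.sum(Y) == 0]
cp.Problem(cp.Maximize(cp.trace(A @ Y)), constraints).solve()

# Round: read the communities off the leading eigenvector of the SDP solution.
eigvals, eigvecs = np.linalg.eigh(Y.value)
est = np.sign(eigvecs[:, -1])
accuracy = max(np.mean(est == labels), np.mean(est == -labels))  # labels known up to a global flip
print(f"fraction of nodes correctly recovered: {accuracy:.2f}")
```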

Journal ArticleDOI
TL;DR: In this article, a probabilistic power flow analysis technique based on the stochastic response surface method is proposed to estimate the probability distributions and statistics of power flow responses without using series expansions such as the Gram-Charlier, Cornish-Fisher or Edgeworth series.
Abstract: This paper proposes a probabilistic power flow analysis technique based on the stochastic response surface method. The probability distributions and statistics of power flow responses can be accurately and efficiently estimated by the proposed method without using series expansions such as the Gram-Charlier, Cornish-Fisher, or Edgeworth series. Stochastic continuous input variables that follow normal distributions (such as loads) or non-normal distributions (such as photovoltaic generation and wind power), together with their multiple correlations, can be easily modeled. The correctness, effectiveness, and adaptability of the proposed method are demonstrated by comparing the probabilistic power flow results for the IEEE 14-bus and 57-bus standard test systems obtained from the proposed method, the point estimate method, and the Monte Carlo simulation method.
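The stochastic-response-surface idea in miniature (an invented stand-in response, not an actual power-flow solver): fit a low-order Hermite polynomial-chaos surrogate to a nonlinear function of standard-normal inputs by regression, then obtain the response statistics cheaply from the surrogate. Correlated and non-normal inputs, which the paper handles via suitable transformations, are omitted here.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(7)

# Stand-in "power flow response": a smooth nonlinear function of two standard-normal
# inputs (think of a normalized load and a normalized wind injection).
def response(xi):
    return (1.0 + 0.3 * xi[:, 0] - 0.2 * xi[:, 1]
            + 0.1 * xi[:, 0] * xi[:, 1] + 0.05 * xi[:, 1] ** 2)

# Total-degree-2 probabilists' Hermite chaos basis in two variables.
def pce_basis(xi, deg=2):
    H0 = hermevander(xi[:, 0], deg)
    H1 = hermevander(xi[:, 1], deg)
    cols = [H0[:, i] * H1[:, j] for i in range(deg + 1)
            for j in range(deg + 1) if i + j <= deg]
    return np.column_stack(cols)

# Fit the surrogate by regression on a modest number of sampled collocation points.
xi_fit = rng.standard_normal((200, 2))
coef, *_ = np.linalg.lstsq(pce_basis(xi_fit), response(xi_fit), rcond=None)

# Cheap "probabilistic power flow": evaluate the surrogate on many samples.
xi_mc = rng.standard_normal((100_000, 2))
y_surrogate = pce_basis(xi_mc) @ coef
y_direct = response(xi_mc)
print(f"mean  {y_surrogate.mean():.4f} vs {y_direct.mean():.4f}")
print(f"std   {y_surrogate.std():.4f} vs {y_direct.std():.4f}")
```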

Journal ArticleDOI
TL;DR: It is argued that the relationship for the Fano factor of the entropy production rate, var σ / mean σ ≥ 2, is the most significant realization of the loose bound.
Abstract: We connect two recent advances in the stochastic analysis of nonequilibrium systems: the (loose) uncertainty principle for the currents, which states that statistical errors are bounded by thermodynamic dissipation, and the analysis of thermodynamic consistency of the currents in the light of symmetries. Employing the large deviation techniques presented by Gingrich et al. [Phys. Rev. Lett. 116, 120601 (2016)] and Pietzonka, Barato, and Seifert [Phys. Rev. E 93, 052145 (2016)], we provide a short proof of the loose uncertainty principle, and prove a tighter uncertainty relation for a class of thermodynamically consistent currents J. Our bound involves a measure of partial entropy production, that we interpret as the least amount of entropy that a system sustaining current J can possibly produce, at a given steady state. We provide a complete mathematical discussion of quadratic bounds which allows one to determine which are optimal, and finally we argue that the relationship for the Fano factor of the entropy production rate, var σ / mean σ ≥ 2, is the most significant realization of the loose bound. We base our analysis both on the formalism of diffusions, and of Markov jump processes in the light of Schnakenberg's cycle analysis.
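The Fano-factor bound can be checked on the simplest possible example (an invented biased ring walk, not the paper's general setting): for forward jump rate p and backward rate q the jump counts over a window are independent Poisson variables, and the entropy production (in units of k_B) is the net number of forward jumps times ln(p/q).

```python
import numpy as np

rng = np.random.default_rng(8)

# Biased random walk on a ring: forward jump rate p, backward jump rate q.
p, q, T, n_traj = 2.0, 1.0, 50.0, 20_000
aff = np.log(p / q)                       # entropy produced per net forward jump (k_B = 1)

# Forward and backward jump counts over [0, T] are independent Poisson variables.
n_fwd = rng.poisson(p * T, size=n_traj)
n_bwd = rng.poisson(q * T, size=n_traj)
sigma = (n_fwd - n_bwd) * aff             # total entropy production per trajectory

fano = sigma.var() / sigma.mean()
exact = (p + q) * np.log(p / q) / (p - q)
print(f"Fano factor of entropy production: {fano:.2f} (exact {exact:.2f}, bound >= 2)")
# The exact value (p+q)ln(p/q)/(p-q) is always >= 2 and approaches 2 as p -> q.
```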

Journal ArticleDOI
TL;DR: A new generative model for cluster-centric D2D networks is developed that allows one to study the effect of intra-cluster interfering devices that are more likely to lie closer to the cluster center.
Abstract: This paper develops a comprehensive analytical framework with foundations in stochastic geometry to characterize the performance of cluster-centric content placement in a cache-enabled device-to-device (D2D) network. Different from device-centric content placement, cluster-centric placement focuses on placing content in each cluster, such that the collective performance of all the devices in each cluster is optimized. Modeling the locations of the devices by a Poisson cluster process, we define and analyze the performance for three general cases: 1) $k$-Tx case: the receiver of interest is chosen uniformly at random in a cluster and its content of interest is available at the $k$th closest device to the cluster center; 2) $\ell$-Rx case: the receiver of interest is the $\ell$th closest device to the cluster center and its content of interest is available at a device chosen uniformly at random from the same cluster; and 3) baseline case: the receiver of interest is chosen uniformly at random in a cluster and its content of interest is available at a device chosen independently and uniformly at random from the same cluster. Easy-to-use expressions for the key performance metrics, such as coverage probability and area spectral efficiency of the whole network, are derived for all three cases. Our analysis concretely demonstrates significant improvement in the network performance when the device on which content is cached or the device requesting content from the cache is biased to lie closer to the cluster center compared with the baseline case. Based on this insight, we develop and analyze a new generative model for cluster-centric D2D networks that allows one to study the effect of intra-cluster interfering devices that are more likely to lie closer to the cluster center.
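The geometric advantage of cluster-centric placement can be seen in a stripped-down experiment (fixed cluster size, Gaussian scatter, no interference, all parameters invented, so only a caricature of the paper's Poisson cluster process): compare the receiver-to-content distance when the content sits at the k-th closest device to the cluster center versus at a uniformly chosen device.

```python
import numpy as np

rng = np.random.default_rng(11)

# One representative cluster with a fixed number of devices scattered around the
# cluster centre (Gaussian spread, Thomas-process flavour). Compare the receiver-to-
# content distance for cluster-centric vs uniformly random content placement.
n_dev, sigma_c, k, n_mc = 10, 1.0, 1, 50_000
d_ktx, d_base = [], []
for _ in range(n_mc):
    pts = sigma_c * rng.standard_normal((n_dev, 2))
    rx = pts[rng.integers(n_dev)]                                 # receiver: random device
    order = np.argsort(np.linalg.norm(pts, axis=1))
    d_ktx.append(np.linalg.norm(rx - pts[order[k - 1]]))          # content at k-th closest to centre
    d_base.append(np.linalg.norm(rx - pts[rng.integers(n_dev)]))  # content at a random device
print(f"mean serving distance, k-Tx case : {np.mean(d_ktx):.2f}")
print(f"mean serving distance, baseline  : {np.mean(d_base):.2f}")
# Biasing the content holder towards the cluster centre shortens the typical link,
# which is the geometric source of the performance gain described above.
```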

Journal ArticleDOI
TL;DR: This paper aims to design a non-fragile state estimator such that, in the presence of all admissible gain variations, the estimation error converges to zero exponentially.

Journal ArticleDOI
TL;DR: In this paper, a generalised projection identification algorithm (or a finite data window stochastic gradient identification algorithm) for time-varying systems is presented and its convergence is analysed by using stochastic process theory.
Abstract: The least mean square methods include two typical parameter estimation algorithms: the projection algorithm and the stochastic gradient algorithm; the former is sensitive to noise and the latter is not capable of tracking time-varying parameters. On the basis of these two typical algorithms, this study presents a generalised projection identification algorithm (or a finite data window stochastic gradient identification algorithm) for time-varying systems and studies its convergence by using stochastic process theory. The analysis indicates that the generalised projection algorithm can track the time-varying parameters and requires less computational effort compared with the forgetting factor recursive least squares algorithm. The way of choosing the data window length is discussed so that the minimum parameter estimation error upper bound can be obtained. Numerical examples are provided.
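For reference, here is the basic projection recursion that the generalised algorithm builds on, applied to a toy time-varying regression. The drifting parameters, regularisation constant c, and noise level are invented, and the paper's finite-data-window variant averages over a window of past samples instead of using only the newest one.

```python
import numpy as np

rng = np.random.default_rng(9)

# Identify a slowly time-varying parameter vector theta_t from y_t = phi_t' theta_t + noise,
# using the basic projection algorithm.
T, d, c = 400, 2, 1.0                       # horizon, parameter dimension, regularisation constant
theta_hat = np.zeros(d)
errors = []

for t in range(T):
    theta_true = np.array([1.0 + 0.002 * t, -0.5 * np.cos(0.01 * t)])  # drifting parameters
    phi = rng.standard_normal(d)                                       # regressor
    y = phi @ theta_true + 0.05 * rng.standard_normal()                # noisy observation

    # Projection update: move theta_hat just enough to explain the newest sample.
    theta_hat = theta_hat + phi / (c + phi @ phi) * (y - phi @ theta_hat)
    errors.append(np.linalg.norm(theta_hat - theta_true))

print(f"tracking error: early {errors[5]:.3f} -> late {np.mean(errors[-20:]):.3f}")
```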

Journal ArticleDOI
16 Jun 2016-Nature
TL;DR: An analytical approach is introduced to calculate the mean first-passage time of a Gaussian non-Markovian random walker to a target in the limit of a large confining volume, and reveals, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.
Abstract: The first-passage time, defined as the time a random walker takes to reach a target point in a confining domain, is a key quantity in the theory of stochastic processes. Its importance comes from its crucial role in quantifying the efficiency of processes as varied as diffusion-limited reactions, target search processes or the spread of diseases. Most methods of determining the properties of first-passage time in confined domains have been limited to Markovian (memoryless) processes. However, as soon as the random walker interacts with its environment, memory effects cannot be neglected: that is, the future motion of the random walker does not depend only on its current position, but also on its past trajectory. Examples of non-Markovian dynamics include single-file diffusion in narrow channels, or the motion of a tracer particle either attached to a polymeric chain or diffusing in simple or complex fluids such as nematics, dense soft colloids or viscoelastic solutions. Here we introduce an analytical approach to calculate, in the limit of a large confining volume, the mean first-passage time of a Gaussian non-Markovian random walker to a target. The non-Markovian features of the dynamics are encompassed by determining the statistical properties of the fictitious trajectory that the random walker would follow after the first-passage event takes place, which are shown to govern the first-passage time kinetics. This analysis is applicable to a broad range of stochastic processes, which may be correlated at long times. Our theoretical predictions are confirmed by numerical simulations for several examples of non-Markovian processes, including the case of fractional Brownian motion in one and higher dimensions. These results reveal, on the basis of Gaussian processes, the importance of memory effects in first-passage statistics of non-Markovian random walkers in confinement.

Book
18 Feb 2016
TL;DR: This book, the first title in SIAM's Financial Mathematics book series, is based on the author's lecture notes; it is written for young researchers and newcomers to stochastic control and stochastic differential games, and is illustrated by applications to models of systemic risk, macroeconomic growth, flocking/schooling, crowd behavior, and predatory trading, among others.
Abstract: The goal of this textbook is to introduce students to the stochastic analysis tools that play an increasing role in the probabilistic approach to optimization problems, including stochastic control and stochastic differential games. While optimal control is taught in many graduate programs in applied mathematics and operations research, the author was intrigued by the lack of coverage of the theory of stochastic differential games. This is the first title in SIAM's Financial Mathematics book series and is based on the author's lecture notes. It will be helpful to students who are interested in stochastic differential equations (forward, backward, forward-backward); the probabilistic approach to stochastic control (dynamic programming and the stochastic maximum principle); and mean field games and control of McKean-Vlasov dynamics. The theory is illustrated by applications to models of systemic risk, macroeconomic growth, flocking/schooling, crowd behavior, and predatory trading, among others. Audience: This book is written for young researchers and newcomers to stochastic control and stochastic differential games.

Journal ArticleDOI
TL;DR: Several easily verified conditions for the existence of an asynchronously switched distributed controller are derived such that stochastic delayed multi-agent systems with asynchronous switching and nonlinear dynamics can achieve global exponential consensus.
Abstract: In this paper, the distributed exponential consensus of stochastic delayed multi-agent systems with nonlinear dynamics is investigated under asynchronous switching. The asynchronous switching considered here accounts for the time needed to identify the active modes of the multi-agent systems. After confirmation of a mode switching is received, the matched controller can be applied, which means that the switching time of the matched controller in each node usually lags behind that of the system switching. In order to handle the coexistence of switched signals and stochastic disturbances, a comparison principle of stochastic switched delayed systems is first proved. By means of this extended comparison principle, several easily verified conditions for the existence of an asynchronously switched distributed controller are derived such that stochastic delayed multi-agent systems with asynchronous switching and nonlinear dynamics can achieve global exponential consensus. Two examples are given to illustrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: Sufficient criteria for global asymptotic stability in probability and stochastic input-to-state stability are given, and the importance and effectiveness of the extended asynchronous switching model are verified.
Abstract: An extended asynchronous switching model is investigated for a class of switched stochastic nonlinear retarded systems in the presence of both detection delay and false alarm, where the extended asynchronous switching is described by two independent and exponentially distributed stochastic processes, and further simplified as Markovian. Based on the Razumikhin-type theorem incorporated with average dwell-time approach, the sufficient criteria for global asymptotic stability in probability and stochastic input-to-state stability are given, whose importance and effectiveness are finally verified by numerical examples.

Journal ArticleDOI
TL;DR: The approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured within a flexible class of distances from a suitable baseline model.
Abstract: This paper deals with the problem of quantifying the impact of model misspecification when computing general expected values of interest. The methodology that we propose is applicable in great generality, in particular, we provide examples involving path-dependent expectations of stochastic processes. Our approach consists in computing bounds for the expectation of interest regardless of the probability measure used, as long as the measure lies within a prescribed tolerance measured in terms of a flexible class of distances from a suitable baseline model. These distances, based on optimal transportation between probability measures, include Wasserstein’s distances as particular cases. The proposed methodology is well-suited for risk analysis, as we demonstrate with a number of applications. We also discuss how to estimate the tolerance region non-parametrically using Skorokhod-type embeddings in some of these applications.

Journal ArticleDOI
01 May 2016-Energy
TL;DR: A new neural network architecture is established in this work, which combines a multilayer perceptron and an Elman recurrent neural network (ERNN) with a stochastic time-effective function to improve the forecasting accuracy of crude oil price fluctuations.

Journal ArticleDOI
TL;DR: The most prominent feature of the intermediate scattering function is an oscillatory behavior at intermediate wavenumbers reflecting the persistent swimming motion, whereas bare translational diffusion emerges at small length scales and an enhanced effective diffusion at large length scales.
Abstract: Various challenges are faced when animalcules such as bacteria, protozoa, algae, or sperm cells move autonomously in aqueous media at low Reynolds number. These active agents are subject to strong stochastic fluctuations that compete with the directed motion. So far, most studies consider only the lowest-order moments of the displacements, while more general spatio-temporal information on the stochastic motion is provided in scattering experiments. Here we derive analytically exact expressions for the directly measurable intermediate scattering function for a mesoscopic model of a single, anisotropic active Brownian particle in three dimensions. The mean-square displacement and the non-Gaussian parameter of the stochastic process are obtained as derivatives of the intermediate scattering function. These display different temporal regimes dominated by effective diffusion and directed motion due to the interplay of translational and rotational diffusion, which is rationalized within the theory. The most prominent feature of the intermediate scattering function is an oscillatory behavior at intermediate wavenumbers reflecting the persistent swimming motion, whereas bare translational diffusion emerges at small length scales and an enhanced effective diffusion at large length scales. We anticipate that our characterization of the motion of active agents will serve as a reference for more realistic models and experimental observations.
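A numerical counterpart of the quantity discussed above (a two-dimensional simulation with invented parameters, whereas the paper treats the three-dimensional model analytically): simulate an active Brownian particle and estimate the intermediate scattering function F(q,t) = ⟨exp(iq·Δr(t))⟩ at a wavenumber comparable to the persistence length, where the oscillatory signature of directed swimming shows up.

```python
import numpy as np

rng = np.random.default_rng(10)

# Active Brownian particle in 2D: speed v0, translational diffusion D, rotational diffusion Dr.
v0, D, Dr, dt = 1.0, 0.02, 0.1, 0.01
n_steps, n_traj, q = 2000, 3000, 2.0          # q probes lengths comparable to the persistence length

theta = 2.0 * np.pi * rng.random(n_traj)      # random initial orientations
pos = np.zeros((n_traj, 2))

print("  t     F(q, t)")
for k in range(1, n_steps + 1):
    heading = np.column_stack((np.cos(theta), np.sin(theta)))
    pos += v0 * heading * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal((n_traj, 2))
    theta += np.sqrt(2.0 * Dr * dt) * rng.standard_normal(n_traj)
    if k % 200 == 0:
        # F(q, t) = <exp(i q . dr(t))>; by isotropy, averaging cos(q * dx) suffices.
        print(f"{k * dt:5.1f}   {np.mean(np.cos(q * pos[:, 0])):+.3f}")
# Sign changes at intermediate times reflect the persistent swimming motion; at long
# times the ISF decays as orientation decorrelates and enhanced effective diffusion sets in.
```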