
Showing papers on "Stochastic process published in 2014"


Journal ArticleDOI
TL;DR: This Perspective is intended as a guidebook for both experimentalists and theorists working on systems that exhibit anomalous diffusion, and pays special attention to the ergodicity breaking parameters for the different anomalous stochastic processes.
Abstract: Modern microscopic techniques following the stochastic motion of labelled tracer particles have uncovered significant deviations from the laws of Brownian motion in a variety of animate and inanimate systems. Such anomalous diffusion can have different physical origins, which can be identified from careful data analysis. In particular, single particle tracking provides the entire trajectory of the traced particle, which allows one to evaluate different observables to quantify the dynamics of the system under observation. We here provide an extensive overview of different popular anomalous diffusion models and their properties. We pay special attention to their ergodic properties, highlighting the fact that in several of these models the long time averaged mean squared displacement shows a distinct disparity from the regular, ensemble averaged mean squared displacement. In these cases, data obtained from time averages cannot be interpreted by the standard theoretical results for the ensemble averages. Here we therefore provide a comparison of the main properties of the time averaged mean squared displacement and its statistical behaviour in terms of the scatter of the amplitudes between the time averages obtained from different trajectories. We especially demonstrate how anomalous dynamics may be identified for systems that, at first sight, appear to be Brownian. Moreover, we discuss the ergodicity breaking parameters for the different anomalous stochastic processes and showcase the physical origins of the various behaviours. This Perspective is intended as a guidebook for both experimentalists and theorists working on systems that exhibit anomalous diffusion.
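For ordinary Brownian motion the time-averaged and ensemble-averaged MSD coincide, which is exactly the ergodic behaviour the Perspective contrasts with anomalous models. A minimal numerical sketch of the two averages (my own illustration with NumPy, not code from the paper):

```python
import numpy as np

def tamsd(x, lag):
    # time-averaged MSD of a single trajectory at a given lag
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

def ensemble_msd(trajs, lag):
    # ensemble-averaged MSD over many trajectories
    return np.mean((trajs[:, lag] - trajs[:, 0]) ** 2)

rng = np.random.default_rng(0)
# 200 Brownian trajectories with unit-variance steps: MSD(lag) = lag
trajs = np.cumsum(rng.normal(size=(200, 1000)), axis=1)
ta = np.mean([tamsd(x, lag=10) for x in trajs])
en = ensemble_msd(trajs, lag=10)
# for an ergodic process both averages approach 10; for, e.g., a
# continuous-time random walk with heavy-tailed waiting times they differ
```

For an anomalous, weakly non-ergodic process the single-trajectory values of `tamsd` would additionally scatter between trajectories, which is the diagnostic the paper emphasizes.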

1,390 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a mathematical framework to model random blockages and analyze their impact on cellular network performance, and showed that the probability that a link is not intersected by any blockage decays exponentially with the link length.
Abstract: Large-scale blockages such as buildings affect the performance of urban cellular networks, especially at higher frequencies. Unfortunately, such blockage effects are either neglected or characterized by oversimplified models in the analysis of cellular networks. Leveraging concepts from random shape theory, this paper proposes a mathematical framework to model random blockages and analyze their impact on cellular network performance. Random buildings are modeled as a process of rectangles with random sizes and orientations whose centers form a Poisson point process on the plane. The distribution of the number of blockages in a link is proven to be a Poisson random variable with parameter dependent on the length of the link. Our analysis shows that the probability that a link is not intersected by any blockages decays exponentially with the link length. A path loss model that incorporates the blockage effects is also proposed, which matches experimental trends observed in prior work. The model is applied to analyze the performance of cellular networks in urban areas in the presence of buildings, in terms of connectivity, coverage probability, and average rate. Our results show that the base station density should scale superlinearly with the blockage density to maintain the network connectivity. Our analyses also show that while buildings may block the desired signal, they may still have a positive impact on the SIR coverage probability and achievable rate since they can block significantly more interference.
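For a Boolean model of this kind, the expected number of rectangles crossing a link of length R has the affine form βR + p, so the line-of-sight probability is exp(−(βR + p)), exponential in R. A small sketch of that calculation (parameter names are mine; the exact constants are an assumption based on the standard random-shape-theory result):

```python
import math

def los_probability(link_len, density, mean_len, mean_wid):
    # Rectangles with independent random sides and uniform orientation,
    # centres forming a Poisson point process of the given density.
    # Expected number of rectangles intersecting a link of length R:
    #   beta * R + p,  beta = 2*density*(E[L]+E[W])/pi,  p = density*E[L]*E[W]
    # so P(no blockage) = exp(-(beta*R + p)) decays exponentially in R.
    beta = 2.0 * density * (mean_len + mean_wid) / math.pi
    p = density * mean_len * mean_wid
    return math.exp(-(beta * link_len + p))
```

Note that exp(−(β·2R + p)) · exp(−p) = [exp(−(βR + p))]², i.e. doubling the link length squares the length-dependent factor.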

650 citations


Book
17 Feb 2014
TL;DR: The author provides a solid introduction to discrete and continuous stochastic processes, tackling a complex field in a way that instils a deep understanding of the relevant mathematical principles, and develops an intuitive grasp of the way these principles can be applied to modelling real-world systems.
Abstract: This definitive textbook provides a solid introduction to discrete and continuous stochastic processes, tackling a complex field in a way that instils a deep understanding of the relevant mathematical principles, and develops an intuitive grasp of the way these principles can be applied to modelling real-world systems. It includes a careful review of elementary probability and detailed coverage of Poisson, Gaussian and Markov processes with richly varied queuing applications. The theory and applications of inference, hypothesis testing, estimation, random walks, large deviations, martingales and investments are developed. Written by one of the world's leading information theorists, evolving over twenty years of graduate classroom teaching and enriched by over 300 exercises, this is an exceptional resource for anyone looking to develop their understanding of stochastic processes.

502 citations


Journal ArticleDOI
TL;DR: By using Lyapunov analysis, it is proven that all the signals of the closed-loop system are semiglobally uniformly ultimately bounded in probability and the system output tracks the reference signal to a bounded compact set.
Abstract: This paper studies an adaptive tracking control for a class of nonlinear stochastic systems with unknown functions. The considered systems are in the nonaffine pure-feedback form, and this is the first work to control this class of systems with stochastic disturbances. The fuzzy-neural networks are used to approximate unknown functions. Based on the backstepping design technique, the controllers and the adaptation laws are obtained. Compared to most of the existing stochastic systems, the proposed control algorithm has fewer adjustable parameters and thus can reduce the online computation load. By using Lyapunov analysis, it is proven that all the signals of the closed-loop system are semiglobally uniformly ultimately bounded in probability and the system output tracks the reference signal to a bounded compact set. A simulation example is given to illustrate the effectiveness of the proposed control algorithm.

447 citations


Book
19 Nov 2014
TL;DR: This book covers stochastic processes, diffusion processes, stochastic differential equations, the Fokker-Planck equation, modelling with stochastic differential equations, the Langevin equation and its derivation, exit problems for diffusions, and linear response theory.
Abstract: Stochastic Processes.- Diffusion Processes.- Introduction to Stochastic Differential Equations.- The Fokker-Planck Equation.- Modelling with Stochastic Differential Equations.- The Langevin Equation.- Exit Problems for Diffusions.- Derivation of the Langevin Equation.- Linear Response Theory.- Appendix A Frequently Used Notations.- Appendix B Elements of Probability Theory.

358 citations


Book
11 Aug 2014
TL;DR: This book offers graduate students and researchers powerful tools for understanding uncertainty quantification for risk analysis; theory is developed in tandem with state-of-the-art computational methods through worked examples, exercises, theorems and proofs.
Abstract: Part I. Deterministic Differential Equations: 1. Linear analysis 2. Galerkin approximation and finite elements 3. Time-dependent differential equations Part II. Stochastic Processes and Random Fields: 4. Probability theory 5. Stochastic processes 6. Stationary Gaussian processes 7. Random fields Part III. Stochastic Differential Equations: 8. Stochastic ordinary differential equations (SODEs) 9. Elliptic PDEs with random data 10. Semilinear stochastic PDEs.

284 citations


Journal ArticleDOI
TL;DR: In this article, a stochastic modeling and simulation technique for analyzing the impacts of electric vehicle charging demands on a distribution network is proposed, where the feeder daily load models, electric vehicle start charging times, and battery states of charge used in the impact study are derived from actual measurements and survey data.
Abstract: A stochastic modeling and simulation technique for analyzing the impacts of electric vehicle charging demands on a distribution network is proposed in this paper. Different from previous deterministic approaches, the feeder daily load models, electric vehicle start charging times, and battery states of charge used in the impact study are derived from actual measurements and survey data. Distribution operation security risk information, such as over-current and under-voltage, is obtained from three-phase distribution load flow studies that use stochastic parameters drawn by Roulette wheel selection. Voltage and congestion impact indicators are defined, and a comparison of the deterministic and stochastic analytical approaches in providing the information required in distribution network reinforcement planning is presented. Numerical results illustrate the capability of the proposed stochastic models to reflect system losses and security impacts due to electric vehicle integration. The effectiveness of a controlled charging algorithm aimed at relieving the system operation problem is also presented.
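Roulette wheel selection, used above to draw stochastic parameters such as the start-charging time, simply samples an index with probability proportional to its weight. A generic sketch (not the authors' code; the histogram values are hypothetical):

```python
import random

def roulette_wheel(weights, rng=random):
    # draw index i with probability weights[i] / sum(weights)
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off

# e.g. a survey histogram of start-charging time slots (made-up numbers)
hist = [1, 2, 5, 10, 4]
random.seed(0)
draws = [roulette_wheel(hist) for _ in range(20000)]
```

Over many draws, the empirical frequency of each slot converges to its histogram weight divided by the total, which is what lets the load-flow study reproduce the measured charging-time distribution.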

269 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a general theory which allows one to accurately evaluate the mean first-passage time (FPT) for regular random walks in bounded domains, and its extensions to related first-passage observables such as splitting probabilities and occupation times.
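As a concrete instance of a first-passage computation: the mean first-passage time of a symmetric random walk on {0, …, N} to either boundary, started at k, is exactly k(N − k). A Monte Carlo sketch of this (my own illustration, not the framework of the paper):

```python
import random

def mfpt_1d(start, n_sites, trials=20000, rng=random):
    # average number of steps for a symmetric walk on {0, ..., n_sites}
    # to first hit 0 or n_sites; the exact answer is start * (n_sites - start)
    total = 0
    for _ in range(trials):
        pos, steps = start, 0
        while 0 < pos < n_sites:
            pos += rng.choice((-1, 1))
            steps += 1
        total += steps
    return total / trials
```

Starting from the midpoint of a 10-site interval, the estimate converges to 5 · (10 − 5) = 25 steps.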

249 citations


Book
08 Oct 2014
TL;DR: This book covers diffusion in cells, stochastic ion channels, polymers and molecular motors, stochastic gene expression and regulatory networks, transport processes, self-organization in cells (active processes and reaction-diffusion models), the WKB method and large deviation theory, and probability theory and martingales.
Abstract: Introduction- Diffusion in Cells: Random walks and Brownian Motion- Stochastic Ion Channels- Polymers and Molecular Motors- Sensing the Environment: Adaptation and Amplification in Cells- Stochastic Gene Expression and Regulatory Networks- Transport Processes in Cells- Self-Organization in Cells I: Active Processes- Self-Organization in Cells II: Reaction-Diffusion Models- The WKB Method and Large Deviation Theory- Probability Theory and Martingales

249 citations


Journal ArticleDOI
TL;DR: In this article, a chance-constrained stochastic programming formulation with economic and reliability metrics is presented for the day-ahead scheduling, where reserve requirements and line flow limits are formulated as chance constraints in which power system reliability requirements are to be satisfied with a presumed level of high probability.
Abstract: This paper proposes a day-ahead stochastic scheduling model in electricity markets. The model considers hourly forecast errors of system loads and variable renewable sources as well as random outages of power system components. A chance-constrained stochastic programming formulation with economic and reliability metrics is presented for the day-ahead scheduling. Reserve requirements and line flow limits are formulated as chance constraints in which power system reliability requirements are to be satisfied with a presumed level of high probability. The chance-constrained stochastic programming formulation is converted into a linear deterministic problem and a decomposition-based method is utilized to solve the day-ahead scheduling problem. Numerical tests are performed and the results are analyzed for a modified 31-bus system and an IEEE 118-bus system. The results show the viability of the proposed formulation for the day-ahead stochastic scheduling. Comparative evaluations of the proposed chance-constrained method and the Monte Carlo simulation (MCS) method are presented in the paper.
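The conversion of a chance constraint into a deterministic one is straightforward when the forecast error is Gaussian: P(reserve ≥ error) ≥ 1 − ε becomes reserve ≥ μ + z₁₋ε·σ via the inverse normal CDF. A minimal sketch of that step (illustrative only; the paper's chance constraints couple reserves and line flows and are handled by decomposition):

```python
from statistics import NormalDist

def deterministic_reserve(err_mean, err_sigma, epsilon):
    # Chance constraint P(reserve >= forecast error) >= 1 - epsilon,
    # with error ~ N(mu, sigma^2), converts to the deterministic limit
    #   reserve >= mu + z_{1-eps} * sigma
    z = NormalDist().inv_cdf(1.0 - epsilon)
    return err_mean + z * err_sigma
```

With ε = 0.05 the requirement is μ + 1.645σ, i.e. the reserve covers the forecast error in 95% of scenarios.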

222 citations


Posted Content
TL;DR: A* sampling as mentioned in this paper is a generic sampling algorithm that searches for the maximum of a Gumbel process using A* search, which makes more efficient use of bound and likelihood evaluations than the most closely related adaptive rejection sampling based algorithms.
Abstract: The problem of drawing samples from a discrete distribution can be converted into a discrete optimization problem. In this work, we show how sampling from a continuous distribution can be converted into an optimization problem over continuous space. Central to the method is a stochastic process recently described in mathematical statistics that we call the Gumbel process. We present a new construction of the Gumbel process and A* sampling, a practical generic sampling algorithm that searches for the maximum of a Gumbel process using A* search. We analyze the correctness and convergence time of A* sampling and demonstrate empirically that it makes more efficient use of bound and likelihood evaluations than the most closely related adaptive rejection sampling-based algorithms.
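The discrete case mentioned first is the classical Gumbel-max trick, which A* sampling generalizes to continuous spaces: perturb each log-weight with independent Gumbel noise and take the argmax. A sketch of the discrete trick (my own illustration, not the paper's A* algorithm):

```python
import math
import random

def gumbel_max_sample(log_weights, rng=random):
    # add independent Gumbel(0,1) noise to each log-weight and take the
    # argmax; index i is returned with probability w_i / sum_j w_j
    best_i, best_v = 0, -math.inf
    for i, lw in enumerate(log_weights):
        g = -math.log(-math.log(rng.random()))  # Gumbel(0,1) sample
        if lw + g > best_v:
            best_i, best_v = i, lw + g
    return best_i

random.seed(0)
logw = [math.log(w) for w in (1.0, 2.0, 7.0)]
draws = [gumbel_max_sample(logw) for _ in range(30000)]
```

A* sampling replaces the finite argmax with an A* search over subsets of a continuous space, using bounds on the log-density to prune.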

Journal ArticleDOI
TL;DR: This work modifies the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refers to the resulting algorithm as weighted ℓ1-minimization.

Journal ArticleDOI
TL;DR: The obtained results show that stochastic functional differential equations with/without Markovian switching may be pth moment exponentially stabilized by impulses.
Abstract: In this paper, the pth moment exponential stability for a class of impulsive stochastic functional differential equations with Markovian switching is investigated. Based on the Lyapunov function, Dynkin formula and Razumikhin technique with stochastic version as well as stochastic analysis theory, many new sufficient conditions are derived to ensure the pth moment exponential stability of the trivial solution. The obtained results show that stochastic functional differential equations with/without Markovian switching may be pth moment exponentially stabilized by impulses. Moreover, our results generalize and improve some results obtained in the literature. Finally, a numerical example and its simulations are given to illustrate the theoretical results.

Journal ArticleDOI
TL;DR: In this paper, the problem of stochastic stability for a class of semi-Markovian systems with mode-dependent time-variant delays is investigated by Lyapunov function approach, together with a piecewise analysis method.
Abstract: Semi-Markovian jump systems, owing to the relaxed conditions on the underlying stochastic process and their time-varying transition rates, can describe a larger class of dynamical systems than conventional full Markovian jump systems. In this paper, the problem of stochastic stability for a class of semi-Markovian systems with mode-dependent time-variant delays is investigated. By a Lyapunov function approach, together with a piecewise analysis method, a sufficient condition is proposed to guarantee the stochastic stability of the underlying systems. As more time-delay information is used, our results are much less conservative than some existing ones in the literature. Finally, two examples are given to show the effectiveness and advantages of the proposed techniques.

Book
01 Apr 2014
TL;DR: This book covers least squares inverse problems, model selection criteria including the AIC under the framework of least squares estimation, the estimation of probability measures using aggregate population data via the Prohorov metric framework, and consistency of the PMF estimator.
Abstract: Introduction Probability and Statistics Overview Probability and Probability Space Random Variables and Their Associated Distribution Functions Statistical Averages of Random Variables Characteristic Functions of a Random Variable Special Probability Distributions Convergence of a Sequence of Random Variables Mathematical and Statistical Aspects of Inverse Problems Least Squares Inverse Problem Formulations Methodology: Ordinary, Weighted, and Generalized Least Squares Asymptotic Theory: Theoretical Foundations Computation of Σ_N, Standard Errors, and Confidence Intervals Investigation of Statistical Assumptions Bootstrapping vs. Asymptotic Error Analysis The "Corrective" Nature of Bootstrapping Covariance Estimates and Their Effects on Confidence Intervals Some Summary Remarks on Asymptotic Theory vs. Bootstrapping Model Selection Criteria Introduction Likelihood-Based Model Selection Criteria: Akaike Information Criterion and Its Variations The AIC under the Framework of Least Squares Estimation Example: CFSE Label Decay Residual Sum of Squares Based Model Selection Criterion Estimation of Probability Measures Using Aggregate Population Data Motivation Type I: Individual Dynamics/Aggregate Data Inverse Problems Type II: Aggregate Dynamics/Aggregate Data Inverse Problems Aggregate Data and the Prohorov Metric Framework Consistency of the PMF Estimator Further Remarks Nonparametric Maximum Likelihood Estimation Final Remarks Optimal Design Introduction Mathematical and Statistical Models Algorithmic Considerations Example: HIV Model Propagation of Uncertainty in a Continuous Time Dynamical System Introduction to Stochastic Processes Stochastic Differential Equations Random Differential Equations Relationships between Random and Stochastic Differential Equations A Stochastic System and Its Corresponding Deterministic System Overview of Multivariate Continuous Time Markov Chains Simulation Algorithms for Continuous Time Markov Chain Models Density Dependent Continuous Time Markov Chains and Kurtz's Limit Theorem Biological Application: Vancomycin-Resistant Enterococcus Infection in a Hospital Unit Biological Application: HIV Infection within a Host Application in Agricultural Production Networks Overview of Stochastic Systems with Delays Simulation Algorithms for Stochastic Systems with Fixed Delays Application in the Pork Production Network with a Fixed Delay Simulation Algorithms for Stochastic Systems with Random Delays Application in the Pork Production Network with a Random Delay Frequently Used Notations and Abbreviations Index References appear at the end of each chapter.

Journal ArticleDOI
TL;DR: In this article, the second-order Boltzmann-Gibbs principle is used to replace local functionals of a conservative, one-dimensional stochastic process by a possibly nonlinear function of the conserved quantity.
Abstract: We introduce what we call the second-order Boltzmann–Gibbs principle, which allows one to replace local functionals of a conservative, one-dimensional stochastic process by a possibly nonlinear function of the conserved quantity. This replacement opens the way to obtain nonlinear stochastic evolutions as the limit of the fluctuations of the conserved quantity around stationary states. As an application of this second-order Boltzmann–Gibbs principle, we introduce the notion of energy solutions of the KPZ and stochastic Burgers equations. Under minimal assumptions, we prove that the density fluctuations of one-dimensional, stationary, weakly asymmetric, conservative particle systems are sequentially compact and that any limit point is given by energy solutions of the stochastic Burgers equation. We also show that the fluctuations of the height function associated to these models are given by energy solutions of the KPZ equation in this sense. Unfortunately, we lack a uniqueness result for these energy solutions. We conjecture these solutions to be unique, and we show some regularity results for energy solutions of the KPZ/Burgers equation, supporting this conjecture.

Journal ArticleDOI
TL;DR: Using the completing squares method and stochastic analysis techniques, necessary and sufficient conditions are established for the existence of the desired finite-horizon H∞ fault estimator whose parameters are then obtained by solving coupled backward recursive Riccati difference equations (RDEs).

Journal ArticleDOI
TL;DR: A scalable and analytically tractable probabilistic model for the cascading failure dynamics in power grids is constructed while retaining key physical attributes and operating characteristics of the power grid.
Abstract: A scalable and analytically tractable probabilistic model for the cascading failure dynamics in power grids is constructed while retaining key physical attributes and operating characteristics of the power grid. The approach is based upon extracting a reduced abstraction of large-scale power grids using a small number of aggregate state variables while modeling the system dynamics using a continuous-time Markov chain. The aggregate state variables represent critical power-grid attributes, which have been shown, from prior simulation-based and historical-data-based analysis, to strongly influence the cascading behavior. The transition rates among states are formulated in terms of certain parameters that capture grid's operating characteristics comprising loading level, error in transmission-capacity estimation, and constraints in performing load shedding. The model allows the prediction of the evolution of blackout probability in time. Moreover, the asymptotic analysis of the blackout probability enables the calculation of the probability mass function of the blackout size. A key benefit of the model is that it enables the characterization of the severity of cascading failures in terms of the operating characteristics of the power grid.

Journal ArticleDOI
TL;DR: The proposed method is shown to outperform deterministic model predictive control in terms of average EV charging cost, and an enhancement to the classical discrete stochastic dynamic programming method is proposed.
Abstract: This paper investigates the application of stochastic dynamic programming to the optimization of charging and frequency regulation capacity bids of an electric vehicle (EV) in a smart electric grid environment. We formulate a Markov decision problem to minimize an EV's expected cost over a fixed charging horizon. We account for both Markov random prices and a Markov random regulation signal. We also propose an enhancement to the classical discrete stochastic dynamic programming method. This enhancement allows optimization over a continuous space of decision variables via linear programming at each state. Simple stochastic process models are built from real data and used to simulate the implementation of the proposed method. The proposed method is shown to outperform deterministic model predictive control in terms of average EV charging cost.
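The classical discrete stochastic dynamic programming step is a backward induction over the value function, V_t(s) = min_a [c(t,s,a) + E V_{t+1}(s')]. A generic finite-horizon sketch (not the authors' EV-specific formulation or their LP enhancement; the toy problem in the test is mine):

```python
def backward_induction(horizon, states, actions, transition, stage_cost,
                       terminal_cost):
    # V_T(s) = terminal_cost(s)
    # V_t(s) = min_a [ stage_cost(t,s,a) + sum_{s'} P(s'|t,s,a) * V_{t+1}(s') ]
    V = {s: terminal_cost(s) for s in states}
    policy = []
    for t in reversed(range(horizon)):
        newV, pol = {}, {}
        for s in states:
            q_best, a_best = float("inf"), None
            for a in actions(s):
                q = stage_cost(t, s, a) + sum(
                    p * V[s2] for s2, p in transition(t, s, a))
                if q < q_best:
                    q_best, a_best = q, a
            newV[s], pol[s] = q_best, a_best
        V = newV
        policy.insert(0, pol)
    return V, policy
```

For the EV problem, the state would bundle the battery state of charge with the Markov price and regulation-signal states; the paper's enhancement additionally solves a linear program at each state to optimize over continuous bids.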

Book
20 Oct 2014
TL;DR: This book presents the theory of stochastic processes ruled by linear stochastic differential equations that admit a parsimonious representation in a matched wavelet-like basis; the statistical property of infinite divisibility leads to two distinct types of behaviour - Gaussian and sparse - and the structural link to spline functions is exploited to simplify the mathematical analysis.
Abstract: Providing a novel approach to sparsity, this comprehensive book presents the theory of stochastic processes that are ruled by linear stochastic differential equations, and that admit a parsimonious representation in a matched wavelet-like basis. Two key themes are the statistical property of infinite divisibility, which leads to two distinct types of behaviour - Gaussian and sparse - and the structural link between linear stochastic processes and spline functions, which is exploited to simplify the mathematical analysis. The core of the book is devoted to investigating sparse processes, including a complete description of their transform-domain statistics. The final part develops practical signal-processing algorithms that are based on these models, with special emphasis on biomedical image reconstruction. This is an ideal reference for graduate students and researchers with an interest in signal/image processing, compressed sensing, approximation theory, machine learning, or statistics.

Journal ArticleDOI
TL;DR: This work considers random fields that are in the domain of attraction of a widely used class of max-stable processes, namely those constructed via manipulation of log-Gaussian random functions, and performs full likelihood inference by exploiting the methods of Stephenson & Tawn (2005), assessing the improvements in inference from both methods over pairwise likelihood methodology.
Abstract: Max-stable processes arise as the only possible nontrivial limits for maxima of affinely normalized identically distributed stochastic processes, and thus form an important class of models for the extreme values of spatial processes. Until recently, inference for max-stable processes has been restricted to the use of pairwise composite likelihoods, due to intractability of higher-dimensional distributions. In this work we consider random fields that are in the domain of attraction of a widely used class of max-stable processes, namely those constructed via manipulation of log-Gaussian random functions. For this class, we exploit limiting d-dimensional multivariate Poisson process intensities of the underlying process for inference on all d-vectors exceeding a high marginal threshold in at least one component, employing a censoring scheme to incorporate information below the marginal threshold. We also consider the d-dimensional distributions for the equivalent max-stable process, and perform full likelihood inference by exploiting the methods of Stephenson & Tawn (2005), where information on the occurrence times of extreme events is shown to dramatically simplify the likelihood. The Stephenson-Tawn likelihood is in fact simply a special case of the censored Poisson process likelihood. We assess the improvements in inference from both methods over pairwise likelihood methodology by simulation.

Book
24 Aug 2014

Journal ArticleDOI
TL;DR: The proposed stochastic approach is scalable for analyzing large circuits and can further account for various fault models as well as calculate the soft error rate (SER); these results are supported by extensive simulations and detailed comparison with existing approaches.
Abstract: Reliability is fast becoming a major concern due to the nanometric scaling of CMOS technology. Accurate analytical approaches for the reliability evaluation of logic circuits, however, have a computational complexity that generally increases exponentially with circuit size. This makes the reliability analysis of large circuits intractable. This paper initially presents novel computational models based on stochastic computation; using these stochastic computational models (SCMs), a simulation-based analytical approach is then proposed for the reliability evaluation of logic circuits. In this approach, signal probabilities are encoded in the statistics of random binary bit streams, and non-Bernoulli sequences of random permutations of binary bits are used for initial input and gate error probabilities. By leveraging the bit-wise dependencies of random binary streams, the proposed approach takes into account signal correlations and evaluates the joint reliability of multiple outputs. Therefore, it accurately determines the reliability of a circuit; its precision is limited only by the random fluctuations inherent in the stochastic sequences. Based on both simulation and analysis, the SCM approach offers both ease of implementation and accuracy of evaluation. The use of non-Bernoulli sequences as initial inputs further increases the evaluation efficiency and accuracy compared to the conventional use of Bernoulli sequences, so the proposed stochastic approach is scalable for analyzing large circuits. It can further account for various fault models as well as calculate the soft error rate (SER). These results are supported by extensive simulations and detailed comparison with existing approaches.
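The core idea — encoding signal and error probabilities as random bit streams and pushing them through gates bitwise — can be sketched in a few lines. This is my own illustration with plain Bernoulli streams (the paper's non-Bernoulli permutation sequences would reduce the estimation variance); precision scales as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # bit-stream length; estimation error ~ 1/sqrt(N)

def stream(p):
    # encode probability p as a Bernoulli bit stream of length N
    return rng.random(N) < p

a, b = stream(0.9), stream(0.8)
and_out = a & b                  # AND gate: P(1) ~ 0.9 * 0.8 = 0.72
eps = 0.05
faulty = and_out ^ stream(eps)   # gate whose output flips w.p. eps
# P(faulty = 1) ~ 0.72 * (1 - eps) + 0.28 * eps = 0.698
```

Because correlated signals are represented by the same (or bitwise-derived) streams, reconvergent fan-out correlations are handled automatically, which is the advantage the abstract highlights.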

Journal ArticleDOI
TL;DR: In this paper, a solution is proposed which addresses the correlation amongst input random variables, as well as an input variable truncation approach for addressing the large number of random input variables, such that a PEM can be effectively used to obtain POPF output distributions.
Abstract: Increasing levels of wind power integration pose a challenge in system operation, owing to the uncertainty and non-dispatchability of wind generation. The probabilistic nature of wind speed inputs dictates that in an optimization of the system, all output variables will themselves be probabilistic. In order to determine the distributions resulting from system optimization, a probabilistic optimal power flow (POPF) method may be applied. While Monte Carlo (MC) techniques are a traditional approach, recent research into point estimate methods (PEMs) has displayed their capabilities to obtain output distributions while reducing computational burden. Unfortunately both spatial and temporal correlation amongst the input wind speed random variables complicates the application of PEM for solving the POPF. Further complications may arise due to the large number of random input variables present when performing a multi-period POPF. In this paper, a solution is proposed which addresses the correlation amongst input random variables, as well as an input variable truncation approach for addressing the large number of random input variables, such that a PEM can be effectively used to obtain POPF output distributions.

Journal ArticleDOI
28 Jul 2014-PLOS ONE
TL;DR: This work combines the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series and tests the performance and robustness of the implementation on data from numerical simulations of stochastic processes.
Abstract: Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. 
While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
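To make concrete what quantity is being estimated, the following is a minimal plug-in transfer entropy estimator for discrete states that pools observations over an ensemble of trials, in the spirit of the ensemble method. This is an illustrative sketch only: the paper uses a more sophisticated (nearest-neighbour, GPU-accelerated) estimator, and all names here are our own.

```python
import numpy as np

def transfer_entropy_discrete(x, y):
    """Plug-in estimate of transfer entropy TE(X -> Y) in bits for
    integer-valued sequences, pooling observations over an ensemble
    of trials rather than assuming stationarity over time.

    x, y: arrays of shape (n_trials, n_samples).
    TE = sum p(y_f, y_p, x_p) * log2[ p(y_f | y_p, x_p) / p(y_f | y_p) ]
    where y_f = y[t+1], y_p = y[t], x_p = x[t].
    """
    triples, pairs_yx, pairs_yy, singles_y = {}, {}, {}, {}
    n = 0
    for xi, yi in zip(x, y):
        for t in range(len(yi) - 1):
            yf, yp, xp = yi[t + 1], yi[t], xi[t]
            triples[(yf, yp, xp)] = triples.get((yf, yp, xp), 0) + 1
            pairs_yx[(yp, xp)] = pairs_yx.get((yp, xp), 0) + 1
            pairs_yy[(yf, yp)] = pairs_yy.get((yf, yp), 0) + 1
            singles_y[yp] = singles_y.get(yp, 0) + 1
            n += 1
    te = 0.0
    for (yf, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yp, xp)]          # p(y_f | y_p, x_p)
        p_cond_self = pairs_yy[(yf, yp)] / singles_y[yp]  # p(y_f | y_p)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# X drives Y with a one-step lag: TE(X -> Y) should be ~1 bit,
# TE(Y -> X) close to zero (small positive finite-sample bias).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(200, 100))   # 200 trials, 100 samples each
y = np.zeros_like(x)
y[:, 1:] = x[:, :-1]                      # y copies x with lag 1
print(transfer_entropy_discrete(x, y))    # ~1 bit
print(transfer_entropy_discrete(y, x))    # ~0 bits
```

A genuinely non-stationary variant would restrict the pooling to a single time index t across trials; the version above pools over both trials and time purely to keep the example short.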

Journal ArticleDOI
TL;DR: The value process is characterized as the unique viscosity solution of the corresponding path dependent Bellman-Isaacs equation, a notion recently introduced by Ekren et al.
Abstract: In this paper we study a two person zero sum stochastic differential game in weak formulation. Unlike the standard literature, which uses strategy type controls, the weak formulation allows us to consider the game with control against control. We shall prove the existence of game value under natural conditions. Another main feature of the paper is that we allow for non-Markovian structure, and thus the game value is a random process. We characterize the value process as the unique viscosity solution of the corresponding path dependent Bellman-Isaacs equation, a notion recently introduced by Ekren et al. (Ann. Probab., 42 (2014), pp. 204-236) and Ekren, Touzi, and Zhang (Stochastic Process., to appear; preprint, arXiv:1210.0006v2; preprint, arXiv:1210.0007v2).

Book
20 May 2014
TL;DR: This book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area and the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling.
Abstract: Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science, along with practitioners using simulation or analytical models for performance analysis and capacity planning, will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model-based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model that describes a sequence of measurements from a real system, to model, for example, the inter-arrival times of packets in a computer network or failure times of components in a manufacturing plant. Typical application areas are performance and dependability analysis of computer systems, communication networks, logistics or manufacturing systems, but also the analysis of biological or chemical reaction networks and similar problems. Often the measured values have a high variability and are correlated. It has been known for a long time that Markov-based models like phase-type distributions or Markovian arrival processes are very general and allow one to capture even complex behaviors. However, the parameterization of these models often results in a complex and non-linear optimization problem. Only recently have several new results about the modeling capabilities of Markov-based models, and algorithms to fit the parameters of those models, been published.
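To make the notion of a phase-type (PH) distribution concrete: a PH distribution is the time to absorption of a continuous-time Markov chain with transient generator T and initial phase distribution alpha, so one can sample from it by simulating that chain and check the result against the analytic mean, mean = -alpha T^{-1} 1. The sketch below is independent of the book's fitting algorithms; the Coxian example and all names are illustrative assumptions.

```python
import numpy as np

# A 2-phase Coxian distribution as a simple PH example: start in phase 0,
# leave it at rate 2; with probability 1/2 absorb, otherwise move to
# phase 1 and absorb from there at rate 3.
alpha = np.array([1.0, 0.0])      # initial phase probabilities
T = np.array([[-2.0, 1.0],        # transient generator
              [0.0, -3.0]])
exit_rates = -T.sum(axis=1)       # absorption rate out of each phase

def sample_ph(alpha, T, rng):
    """Draw one PH sample by simulating the absorbing Markov chain."""
    phase = rng.choice(len(alpha), p=alpha)
    t = 0.0
    while True:
        total = -T[phase, phase]              # total rate of leaving phase
        t += rng.exponential(1.0 / total)     # holding time in this phase
        if rng.random() < exit_rates[phase] / total:
            return t                          # absorbed: t is the PH sample
        probs = T[phase].copy()               # otherwise jump to another
        probs[phase] = 0.0                    # transient phase
        probs /= probs.sum()
        phase = rng.choice(len(alpha), p=probs)

rng = np.random.default_rng(1)
samples = [sample_ph(alpha, T, rng) for _ in range(50000)]
analytic_mean = -alpha @ np.linalg.inv(T) @ np.ones(2)  # = 1/2 + (1/2)(1/3)
print(np.mean(samples), analytic_mean)  # both ~0.667
```

Fitting, which is the book's actual subject, runs in the opposite direction: given measured samples, find (alpha, T) whose distribution matches them, typically by EM or moment-matching algorithms.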

Journal ArticleDOI
TL;DR: This paper introduces and studies a new stability criterion: the mean-square exponential input-to-state stability, which has never been discussed in the field of stochastic recurrent neural networks.

Journal ArticleDOI
TL;DR: General criteria ensuring global exponential synchronization in mean square of the addressed TS fuzzy complex networks are obtained, and the theory of minimum values of functions is utilized to reduce the conservativeness of the obtained synchronization criteria.

Journal ArticleDOI
TL;DR: This work presents a simple and general framework to simulate statistically correct realizations of a system of non-Markovian discrete stochastic processes and finds that the generalized Gillespie algorithm is the most general because it can be implemented very easily in cases in which other algorithms do not work or need adapted versions that are less efficient in computational terms.
Abstract: We present a simple and general framework to simulate statistically correct realizations of a system of non-Markovian discrete stochastic processes. We give the exact analytical solution and a practical and efficient algorithm like the Gillespie algorithm for Markovian processes, with the difference being that now the occurrence rates of the events depend on the time elapsed since the event last took place. We use our non-Markovian generalized Gillespie stochastic simulation methodology to investigate the effects of nonexponential interevent time distributions in the susceptible-infected-susceptible model of epidemic spreading. Strikingly, our results unveil the drastic effects that very subtle differences in the modeling of non-Markovian processes have on the global behavior of complex systems, with important implications for their understanding and prediction. We also assess our generalized Gillespie algorithm on a system of biochemical reactions with time delays. As compared to other existing methods, we find that the generalized Gillespie algorithm is the most general because it can be implemented very easily in cases (such as for delays coupled to the evolution of the system) in which other algorithms do not work or need adapted versions that are less efficient in computational terms.
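A minimal sketch of the non-Markovian next-event idea for the special case of *independent* renewal processes (the paper's generalized algorithm also covers occurrence rates coupled to the evolution of the system, which this sketch does not): keep a tentative next event time for each process, always execute the earliest event, then redraw only the process that fired. All names are ours.

```python
import math
import numpy as np

def simulate_renewal_ensemble(draw_funcs, t_max, rng):
    """Next-reaction-style simulation of independent non-Markovian
    renewal processes.  draw_funcs is a list of callables, each
    returning one interevent time; returns event counts per process
    up to time t_max."""
    next_times = np.array([f(rng) for f in draw_funcs])  # tentative times
    counts = np.zeros(len(draw_funcs), dtype=int)
    t = next_times.min()
    while t <= t_max:
        i = int(next_times.argmin())          # earliest pending event fires
        counts[i] += 1
        next_times[i] = t + draw_funcs[i](rng)  # redraw only that process
        t = next_times.min()
    return counts

rng = np.random.default_rng(2)
procs = [
    lambda r: r.exponential(1.0),  # Markovian: mean interevent time 1
    lambda r: r.weibull(2.0),      # non-Markovian: mean Gamma(1.5) ~ 0.886
]
counts = simulate_renewal_ensemble(procs, t_max=10000.0, rng=rng)
# Long-run event rates approach 1/mean interevent time:
print(counts / 10000.0)  # ~[1.0, 1.128]
```

In the Markovian special case every interevent distribution is exponential and this reduces to the standard Gillespie picture; the non-exponential Weibull process above is exactly the kind of subtle modeling change whose system-level effects the paper investigates.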