
Showing papers on "Stochastic process published in 2015"


Journal ArticleDOI
TL;DR: The results show that the improved model is able to select an appropriate FPT and reduce random errors of the stochastic process, and that it consequently performs better in the RUL prediction of rolling element bearings than the original exponential model.
Abstract: The remaining useful life (RUL) prediction of rolling element bearings has attracted substantial attention recently due to its importance for bearing health management. The exponential model is one of the most widely used methods for RUL prediction of rolling element bearings. However, two shortcomings exist in the exponential model: 1) the first predicting time (FPT) is selected subjectively; and 2) random errors of the stochastic process decrease the prediction accuracy. To deal with these two shortcomings, an improved exponential model is proposed in this paper. In the improved model, an adaptive FPT selection approach is established based on the 3σ interval, and particle filtering is utilized to reduce random errors of the stochastic process. In order to demonstrate the effectiveness of the improved model, a simulation and four tests of bearing degradation processes are utilized for the RUL prediction. The results show that the improved model is able to select an appropriate FPT and reduce random errors of the stochastic process. Consequently, it performs better in the RUL prediction of rolling element bearings than the original exponential model.

412 citations
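The 3σ-interval idea behind the adaptive FPT selection can be illustrated with a short sketch. This is not the paper's implementation: the health indicator, baseline length, and degradation signal below are all invented for illustration.

```python
import numpy as np

def select_fpt(health_indicator, n_baseline=50, k=3.0):
    """Return the first index whose value leaves the mean +/- k*sigma band
    of the baseline (healthy-stage) data, or None if it never does."""
    baseline = health_indicator[:n_baseline]
    mu, sigma = baseline.mean(), baseline.std()
    out_of_band = np.abs(health_indicator - mu) > k * sigma
    return int(np.argmax(out_of_band)) if out_of_band.any() else None

# Synthetic indicator: a flat healthy stage with a small deterministic
# wobble, followed by exponential degradation starting at index 100.
t = np.arange(200)
signal = 1.0 + 0.01 * np.sin(0.5 * t)
signal[100:] += 0.5 * (np.exp(0.03 * (t[100:] - 100)) - 1)
print(select_fpt(signal))  # -> 102, shortly after degradation onset
```

In the paper's scheme, RUL prediction with particle filtering would then start from the detected FPT onward rather than from an arbitrarily chosen time.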


Journal ArticleDOI
TL;DR: The β-GPP, an intermediate class between the PPP and the GPP, is introduced and promoted as a model for wireless networks whose nodes exhibit repulsion, and it is found that the fitted β-GPP can closely model the deployment of actual base stations in terms of coverage probability and other statistics.
Abstract: The spatial structure of transmitters in wireless networks plays a key role in evaluating mutual interference and, hence, performance. Although the Poisson point process (PPP) has been widely used to model the spatial configuration of wireless networks, it is not suitable for networks with repulsion. The Ginibre point process (GPP) is one of the main examples of determinantal point processes that can be used to model random phenomena where repulsion is observed. Considering the accuracy, tractability, and practicability tradeoffs, we introduce and promote the β-GPP, which is an intermediate class between the PPP and the GPP, as a model for wireless networks when the nodes exhibit repulsion. To show that the model leads to analytically tractable results in several cases of interest, we derive the mean and variance of the interference using two different approaches: the Palm measure approach and the reduced second-moment approach, and then provide approximations of the interference distribution by three known probability density functions. In addition, to show that the model is relevant for cellular systems, we derive the coverage probability of a typical user and find that the fitted β-GPP can closely model the deployment of actual base stations in terms of coverage probability and other statistics.

255 citations


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation (MCS) based approach for efficient evaluation of the system failure probability P_f,s of slope stability in spatially variable soils is presented.
Abstract: Monte Carlo simulation (MCS) provides a conceptually simple and robust method to evaluate the system reliability of slope stability, particularly in spatially variable soils. However, it suffers from a lack of efficiency at small probability levels, which are of great interest in geotechnical design practice. To address this problem, this paper develops a MCS-based approach for efficient evaluation of the system failure probability P_f,s of slope stability in spatially variable soils. The proposed approach allows explicit modeling of the inherent spatial variability of soil properties in a system reliability analysis of slope stability. It facilitates the slope system reliability analysis using representative slip surfaces (i.e., dominating slope failure modes) and multiple stochastic response surfaces. Based on the stochastic response surfaces, the values of P_f,s are efficiently calculated using MCS with negligible computational effort. For illustration, the proposed MCS-based system reliab...

253 citations
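Direct MCS over a spatially correlated strength profile can be sketched as follows. This is a toy 1-D profile with a crude limit state, not the paper's response-surface-accelerated slope analysis; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative 1-D "soil profile": 20 correlated points of undrained
# shear strength (kPa) with an exponential autocorrelation structure.
n, mean_su, cov_su, corr_len = 20, 40.0, 0.3, 4.0
z = np.arange(n)
corr = np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
cov = (mean_su * cov_su) ** 2 * corr

def failure_prob(demand, n_sim=100_000):
    # Direct MCS: draw strength profiles, declare failure when the
    # profile-average mobilised strength falls below the demand.
    su = rng.multivariate_normal(np.full(n, mean_su), cov, size=n_sim)
    fs = su.mean(axis=1) / demand          # crude factor of safety
    return np.mean(fs < 1.0)

pf = failure_prob(demand=30.0)
print(pf)   # roughly 0.07 for these invented parameters
```

The paper's point is precisely that this brute-force loop becomes too expensive at small failure probabilities, which is what the representative slip surfaces and stochastic response surfaces are designed to fix.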


Journal ArticleDOI
TL;DR: For a high-order system, attention is focused on constructing a reduced-order model that not only approximates the original system well with a Hankel-norm performance but also translates it into a lower-dimensional fuzzy switched system.
Abstract: In this paper, the model approximation problem is investigated for a Takagi–Sugeno fuzzy switched system with stochastic disturbance. For a high-order considered system, our attention is focused on the construction of a reduced-order model, which not only approximates the original system well with a Hankel-norm performance but translates it into a lower dimensional fuzzy switched system as well. By using the average dwell time approach and the piecewise Lyapunov function technique, a sufficient condition is first proposed to guarantee the mean-square exponential stability with a Hankel-norm error performance for the error system. The model approximation is then converted into a convex optimization problem by using a linearization procedure. Finally, simulations are provided to illustrate the effectiveness of the proposed theory.

203 citations


Book
26 Jun 2015
TL;DR: In this article, moment bounds for fully and partially drift-implicit Euler methods and for a class of new explicit approximation methods which require only a few more arithmetical operations than the Euler-Maruyama method were established.
Abstract: Many stochastic differential equations (SDEs) in the literature have a superlinearly growing nonlinearity in their drift or diffusion coefficient. Unfortunately, moments of the computationally efficient Euler-Maruyama approximation method diverge for these SDEs in finite time. This article develops a general theory for studying integrability properties such as moment bounds for discrete-time stochastic processes. Using this approach, we establish moment bounds for fully and partially drift-implicit Euler methods and for a class of new explicit approximation methods which require only a few more arithmetical operations than the Euler-Maruyama method. These moment bounds are then used to prove strong convergence of the proposed schemes. Finally, we illustrate our results for several SDEs from finance, physics, biology and chemistry.

200 citations
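One well-known explicit scheme of this kind is the "tamed" Euler method of Hutzenthaler, Jentzen and Kloeden, in which the drift increment is normalised so it cannot blow up. A minimal sketch for the SDE dX = -X³ dt + dW, with step count and sample size chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def tamed_euler(x0, T=1.0, n=1000):
    # Tamed Euler for dX = -X^3 dt + dW: the drift increment is divided by
    # (1 + h*|drift|), which prevents the moment blow-up that plain
    # Euler-Maruyama suffers under this superlinear drift.
    h = T / n
    x = x0
    for _ in range(n):
        drift = -x ** 3
        x = x + h * drift / (1 + h * abs(drift)) + np.sqrt(h) * rng.standard_normal()
    return x

samples = np.array([tamed_euler(2.0) for _ in range(200)])
print(samples.mean(), np.abs(samples).max())  # all trajectories stay bounded
```

With the taming factor removed, the same recursion applied to a superlinear drift can produce iterates whose moments diverge as the step size shrinks, which is the failure mode the article analyses.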


Journal ArticleDOI
TL;DR: In this article, the authors studied the input-to-state stability analysis for a class of impulsive stochastic Cohen-Grossberg neural networks with mixed delays and obtained sufficient conditions to ensure that the considered system with/without impulse control is mean-square exponentially stable.
Abstract: In this paper, we study an issue of input-to-state stability analysis for a class of impulsive stochastic Cohen–Grossberg neural networks with mixed delays. The mixed delays consist of time-varying delays and continuously distributed delays. To the best of our knowledge, the input-to-state stability problem for this class of stochastic system has still not been solved, despite its practical importance. The main aim of this paper is to fill the gap. By constructing several novel Lyapunov–Krasovskii functionals and using techniques such as the Itô formula, the Dynkin formula, impulse theory, stochastic analysis theory, and mathematical induction, we obtain some new sufficient conditions to ensure that the considered system with/without impulse control is mean-square exponentially input-to-state stable. Moreover, the obtained results are illustrated with two numerical examples and their simulations.

196 citations


Journal ArticleDOI
TL;DR: Under the assumption that the time-varying delays exist in the system output, only one NN is employed to compensate for all unknown nonlinear terms depending on the delayed output, so the number of NN parameters to be estimated is greatly reduced and the online learning time is dramatically decreased.
Abstract: This paper presents an adaptive output-feedback neural network (NN) control scheme for a class of stochastic nonlinear time-varying delay systems with unknown control directions. To make the controller design feasible, the unknown control coefficients are grouped together and the original system is transformed into a new system using a linear state transformation technique. Then, the Nussbaum function technique is incorporated into the backstepping recursive design technique to solve the problem of unknown control directions. Furthermore, under the assumption that the time-varying delays exist in the system output, only one NN is employed to compensate for all unknown nonlinear terms depending on the delayed output. Moreover, by estimating the maximum of NN parameters instead of the parameters themselves, the NN parameters to be estimated are greatly decreased and the online learning time is also dramatically decreased. It is shown that all the signals of the closed-loop system are bounded in probability. The effectiveness of the proposed scheme is demonstrated by the simulation results.

195 citations


Journal ArticleDOI
TL;DR: A mapping between general stochastic models of gene expression and systems studied in queueing theory is invoked to derive exact analytical expressions for the moments associated with mRNA/protein steady-state distributions, and approaches for accurate estimation of burst parameters are developed.
Abstract: Gene expression in individual cells is highly variable and sporadic, often resulting in the synthesis of mRNAs and proteins in bursts. Such bursting has important consequences for cell-fate decisions in diverse processes ranging from HIV-1 viral infections to stem-cell differentiation. It is generally assumed that bursts are geometrically distributed and that they arrive according to a Poisson process. On the other hand, recent single-cell experiments provide evidence for complex burst arrival processes, highlighting the need for analysis of more general stochastic models. To address this issue, we invoke a mapping between general stochastic models of gene expression and systems studied in queueing theory to derive exact analytical expressions for the moments associated with mRNA/protein steady-state distributions. These results are then used to derive noise signatures, i.e. explicit conditions based entirely on experimentally measurable quantities, that determine if the burst distributions deviate from the geometric distribution or if burst arrival deviates from a Poisson process. For non-Poisson arrivals, we develop approaches for accurate estimation of burst parameters. The proposed approaches can lead to new insights into transcriptional bursting based on measurements of steady-state mRNA/protein distributions.

189 citations
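The baseline model referred to above (Poisson burst arrivals, geometrically distributed burst sizes, first-order decay) is easy to simulate exactly, and the steady-state mean can be checked against Little's law: mean count = burst rate × mean burst size / decay rate. A sketch with invented parameter values, not the paper's general queueing machinery:

```python
import numpy as np

rng = np.random.default_rng(7)

def mrna_count(a=2.0, b=4.0, gamma=1.0, T=50.0):
    """One steady-state mRNA count: bursts arrive as a Poisson process of
    rate a, each burst makes a geometric number of molecules (mean b),
    and each molecule degrades after an independent Exp(gamma) lifetime."""
    n_bursts = rng.poisson(a * T)
    t_bursts = rng.uniform(0, T, n_bursts)       # burst times, given the count
    sizes = rng.geometric(1.0 / b, n_bursts)     # support {1, 2, ...}, mean b
    alive = 0
    for t, s in zip(t_bursts, sizes):
        lifetimes = rng.exponential(1.0 / gamma, s)
        alive += np.sum(t + lifetimes > T)       # molecules still alive at T
    return alive

counts = np.array([mrna_count() for _ in range(2000)])
# Little's-law check: steady-state mean = a * b / gamma = 8.
print(counts.mean())
```

The paper's contribution is the reverse direction: given measured steady-state moments like these, decide whether geometric bursts and Poisson arrivals are actually consistent with the data.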


Journal ArticleDOI
TL;DR: It is proved that the expected makespan of scheduling stochastic tasks is greater than or equal to the makespan of scheduling deterministic tasks, where all processing times and communication times are replaced by their expected values.
Abstract: Generally, a parallel application consists of precedence constrained stochastic tasks, where task processing times and intertask communication times are random variables following certain probability distributions. Scheduling such precedence constrained stochastic tasks with communication times on a heterogeneous cluster system with processors of different computing capabilities to minimize a parallel application's expected completion time is an important but very difficult problem in parallel and distributed computing. In this paper, we present a model of scheduling stochastic parallel applications on heterogeneous cluster systems. We discuss stochastic scheduling attributes and methods to deal with various random variables in scheduling stochastic tasks. We prove that the expected makespan of scheduling stochastic tasks is greater than or equal to the makespan of scheduling deterministic tasks, where all processing times and communication times are replaced by their expected values. To solve the problem of scheduling precedence constrained stochastic tasks efficiently and effectively, we propose a stochastic dynamic level scheduling (SDLS) algorithm, which is based on stochastic bottom levels and stochastic dynamic levels. Our rigorous performance evaluation results clearly demonstrate that the proposed stochastic task scheduling algorithm significantly outperforms existing algorithms in terms of makespan, speedup, and makespan standard deviation.

170 citations
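The inequality proved here is essentially Jensen's inequality for the max: replacing random times by their means can only shorten the makespan, since E[max(C₁, C₂)] ≥ max(E[C₁], E[C₂]). A two-chain toy check (the distributions are invented; the paper treats general precedence graphs):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two independent task chains with random completion times; the makespan
# is the max of the two. The expected stochastic makespan must be at
# least the deterministic makespan built from the mean times.
c1 = rng.exponential(10.0, 100_000)   # chain 1 completion time, mean 10
c2 = rng.exponential(10.0, 100_000)   # chain 2 completion time, mean 10
stochastic = np.maximum(c1, c2).mean()
deterministic = max(c1.mean(), c2.mean())
print(stochastic, deterministic)      # about 15 vs about 10
```

The gap (here roughly 50%) is why scheduling against expected values alone can badly underestimate the real expected completion time.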


Journal ArticleDOI
TL;DR: An input–output (IO) approach to delay-dependent stability analysis and H∞ controller synthesis is proposed for a class of continuous-time Markovian jump linear systems (MJLSs), with conditions for the underlying MJLSs formulated in terms of linear matrix inequalities.
Abstract: This paper proposes an input–output (IO) approach to the delay-dependent stability analysis and H∞ controller synthesis for a class of continuous-time Markovian jump linear systems (MJLSs). The systems under consideration have a time-varying delay in the state and deficient mode information in the Markov stochastic process, which simultaneously involves exactly known, partially unknown and uncertain transition rates. It is first shown that the original system with time-varying delay can be reformulated by a new IO model through a process of two-term approximation and the stability problem of the original system can be transformed into the scaled small gain (SSG) problem of the IO model. Then, based on a Markovian Lyapunov–Krasovskii formulation of the SSG condition together with some convexification techniques, the stability analysis and state-feedback H∞ controller synthesis conditions for the underlying MJLSs are formulated in terms of linear matrix inequalities. Simulation studies are provided to illustrate the effectiveness and superiority of the proposed analysis and design methods.

168 citations


Journal ArticleDOI
TL;DR: An intuitive approach to the stochastic network calculus is contributed, where the method uses moment generating functions, known from the theory of effective bandwidths, to characterize traffic arrivals and network service.
Abstract: The aim of the stochastic network calculus is to comprehend statistical multiplexing and scheduling of non-trivial traffic sources in a framework for end-to-end analysis of multi-node networks. To date, several models, some of them with subtle yet important differences, have been explored to achieve these objectives. Capitalizing on previous works, this paper contributes an intuitive approach to the stochastic network calculus, where we seek to obtain its fundamental results in the possibly easiest way. In detail, the method assembled in this work uses moment generating functions, known from the theory of effective bandwidths, to characterize traffic arrivals and network service. From these, affine envelope functions with an exponentially decaying overflow profile are derived to compute statistical end-to-end backlog and delay bounds for networks.
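The core MGF manipulation here is the Chernoff bound: for an arrival sum A and any θ > 0 with finite MGF, P(A > x) ≤ e^{-θx} E[e^{θA}]. A toy check with i.i.d. exponential(1) increments (the numbers are invented; the network calculus composes such bounds across traffic sources and nodes):

```python
import numpy as np

rng = np.random.default_rng(5)

# Chernoff/MGF bound of the kind used in stochastic network calculus:
# for A = sum of n i.i.d. exponential(1) increments and any theta in
# (0, 1), P(A > x) <= exp(-theta*x) * M_A(theta), M_A(theta) = (1-theta)^(-n).
n, x = 20, 40.0
thetas = np.linspace(0.01, 0.99, 99)
bounds = np.exp(-thetas * x) * (1 - thetas) ** (-n)
best_bound = bounds.min()              # optimise the free parameter theta

samples = rng.exponential(1.0, (200_000, n)).sum(axis=1)
empirical = np.mean(samples > x)
print(empirical, best_bound)           # empirical tail <= optimised bound
```

The exponentially decaying overflow profile mentioned in the abstract is exactly this e^{-θx} factor, carried through the end-to-end analysis.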

Journal ArticleDOI
TL;DR: In this paper, the envelope-constrained H∞ filtering problem is investigated for a class of discrete time-varying stochastic systems over a finite horizon that involves fading measurements, randomly occurring nonlinearities and mixed noises.

Journal ArticleDOI
TL;DR: A general event-triggered framework is developed to deal with the finite-horizon H∞ filtering problem for discrete time-varying systems with fading channels, randomly occurring nonlinearities and multiplicative noises and a recursive linear matrix inequality approach is employed to design the desired filter gains.
Abstract: In this paper, a general event-triggered framework is developed to deal with the finite-horizon H∞ filtering problem for discrete time-varying systems with fading channels, randomly occurring nonlinearities and multiplicative noises. An event indicator variable is constructed and the corresponding event-triggered scheme is proposed. Such a scheme is based on the relative error with respect to the measurement signal in order to determine whether the measurement output should be transmitted to the filter or not. The fading channels are described by modified stochastic Rice fading models. Some uncorrelated random variables are introduced, respectively, to govern the phenomena of state-multiplicative noises, randomly occurring nonlinearities and fading measurements. The purpose of the addressed problem is to design a set of time-varying filters such that the influence from the exogenous disturbances onto the filtering errors is attenuated at the given level quantified by an H∞-norm in the mean-square sense. By utilizing stochastic analysis techniques, sufficient conditions are established to ensure that the dynamic system under consideration satisfies the H∞ filtering performance constraint, and then a recursive linear matrix inequality (RLMI) approach is employed to design the desired filter gains. Simulation results demonstrate the effectiveness of the developed filter design scheme.

Journal ArticleDOI
TL;DR: A scenario selection algorithm inspired by importance sampling is described to formulate the stochastic unit commitment problem, and its performance is validated by comparison with a stochastic formulation with a very large number of scenarios, which the authors are able to solve through parallelization.
Abstract: We present a parallel implementation of Lagrangian relaxation for solving stochastic unit commitment subject to uncertainty in renewable power supply and generator and transmission line failures. We describe a scenario selection algorithm inspired by importance sampling in order to formulate the stochastic unit commitment problem and validate its performance by comparing it to a stochastic formulation with a very large number of scenarios, that we are able to solve through parallelization. We examine the impact of narrowing the duality gap on the performance of stochastic unit commitment and compare it to the impact of increasing the number of scenarios in the model. We report results on the running time of the model and discuss the applicability of the method in an operational setting.

Journal ArticleDOI
TL;DR: The main purpose of the problem addressed is to design a time-varying output feedback controller over a given finite horizon such that, in the simultaneous presence of ROUs, RONs, actuator and sensor failures as well as measurement quantizations, the closed-loop system achieves a prescribed performance level in terms of the H∞-norm.

Journal ArticleDOI
TL;DR: In this paper, the problem of quantized filtering for a class of continuous-time Markovian jump linear systems with deficient mode information is investigated, where the measurement output of the plant is quantized by a mode-dependent logarithmic quantizer, and the defect in the Markov stochastic process simultaneously considers the exactly known, partially unknown, and uncertain transition rates.
Abstract: This paper investigates the problem of quantized filtering for a class of continuous-time Markovian jump linear systems with deficient mode information. The measurement output of the plant is quantized by a mode-dependent logarithmic quantizer, and the deficient mode information in the Markov stochastic process simultaneously considers the exactly known, partially unknown, and uncertain transition rates. By fully exploiting the properties of transition rate matrices, together with the convexification of uncertain domains, a new sufficient condition for quantized performance analysis is first derived, and then two approaches, namely, the convex linearization approach and iterative approach, to the filter synthesis are developed. It is shown that both the full-order and reduced-order filters can be obtained by solving a set of linear matrix inequalities (LMIs) or bilinear matrix inequalities (BMIs). Finally, two illustrative examples are given to show the effectiveness and less conservatism of the proposed design methods.

Journal ArticleDOI
Chien-Yu Peng
TL;DR: The properties of the lifetime distribution and parameter estimation using the EM-type algorithm are presented, in addition to a simple model-checking procedure to assess the validity of different stochastic processes.
Abstract: Degradation models are widely used to assess the lifetime information of highly reliable products. This study proposes a degradation model based on an inverse normal-gamma mixture of an inverse Gaussian process. This article presents the properties of the lifetime distribution and parameter estimation using the EM-type algorithm, in addition to providing a simple model-checking procedure to assess the validity of different stochastic processes. Several case applications are performed to demonstrate the advantages of the proposed model with random effects and explanatory variables. Technical details, data, and R code are available online as supplementary materials.

Journal ArticleDOI
TL;DR: In this article, a new simulation method with the first order approximation and series expansions is proposed to improve the accuracy and efficiency of the Rice/FORM method, which maps the general stochastic process of the response into a Gaussian process, whose samples are then generated by the Expansion Optimal Linear Estimation if the response is stationary or by the Orthogonal Series Expansion if the response is non-stationary.
Abstract: Time-variant reliability is often evaluated by Rice's formula combined with the First Order Reliability Method (FORM). To improve the accuracy and efficiency of the Rice/FORM method, this work develops a new simulation method with the first order approximation and series expansions. The approximation maps the general stochastic process of the response into a Gaussian process, whose samples are then generated by the Expansion Optimal Linear Estimation if the response is stationary or by the Orthogonal Series Expansion if the response is non-stationary. As the computational cost largely comes from estimating the covariance of the response at expansion points, a cheaper surrogate model of the covariance is built and allows for significant reduction in computational cost. In addition to its superior accuracy and efficiency over the Rice/FORM method, the proposed method can also produce the failure rate and probability of failure with respect to time for a given period of time within only one reliability analysis.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a transmission-constrained unit commitment method that combines the cost-efficient but computationally demanding stochastic optimization and the expensive but tractable interval optimization techniques to manage uncertainty on the expected net load.
Abstract: This paper proposes a new transmission-constrained unit commitment method that combines the cost-efficient but computationally demanding stochastic optimization and the expensive but tractable interval optimization techniques to manage uncertainty on the expected net load. The proposed hybrid unit commitment approach applies the stochastic formulation to the initial operating hours of the optimization horizon, during which the wind forecasts are more accurate, and then switches to the interval formulation for the remaining hours. The switching time is optimized to balance the cost of unhedged uncertainty from the stochastic unit commitment against the cost of the security premium of the interval unit commitment formulation. These hybrid, stochastic, and interval formulations are compared using Monte Carlo simulations on a modified 24-bus IEEE Reliability Test System. The results demonstrate that the proposed unit commitment formulation results in the least expensive day-ahead schedule among all formulations and can be solved in the same amount of time as a full stochastic unit commitment. However, if the range of the switching time is reduced, the hybrid formulation in the parallel computing implementation outperforms the stochastic formulation in terms of computing time.

Journal ArticleDOI
TL;DR: In this article, the authors point out that the failure of the complex Langevin method can be attributed to the breakdown of the relation between the complex weight that satisfies the Fokker-Planck equation and the probability distribution associated with the stochastic process.
Abstract: The complex Langevin method aims at performing path integral with a complex action numerically based on complexification of the original real dynamical variables. One of the poorly understood issues concerns occasional failure in the presence of logarithmic singularities in the action, which appear, for instance, from the fermion determinant in finite density QCD. We point out that the failure should be attributed to the breakdown of the relation between the complex weight that satisfies the Fokker-Planck equation and the probability distribution associated with the stochastic process. In fact, this problem can occur, in general, when the stochastic process involves a singular drift term. We show, however, in a simple example that there exists a parameter region in which the method works, although the standard reweighting method is hardly applicable.

Journal ArticleDOI
Zhaojing Wu
TL;DR: A theoretical framework for the stability of RDEs is constructed, distinguished from the existing framework for SDEs, and several estimation methods for stochastic processes are presented to explain the reasonableness of the assumptions used in the theorems.
Abstract: Stochastic differential equations (SDEs) are widely adopted to describe systems with stochastic disturbances, while they are not necessarily the best models in some specific situations. This paper considers nonlinear systems described by random differential equations (RDEs). The notions and the corresponding criteria of noise-to-state stability, asymptotic gain and asymptotic stability are proposed, in the m-th moment or in probability. Several estimation methods for stochastic processes are presented to explain the reasonableness of the assumptions used in the theorems. As applications of the stability criteria, some examples involving stabilization, regulation and tracking are considered, respectively. A theoretical framework on the stability of RDEs is finally constructed, which is distinguished from the existing framework of SDEs.

Journal ArticleDOI
TL;DR: This paper addresses the exponential H∞ filtering problem for a class of discrete-time switched neural networks with random time-varying delays using a piecewise Lyapunov-Krasovskii functional together with linear matrix inequality (LMI) approach and average dwell time method.
Abstract: This paper addresses the exponential H∞ filtering problem for a class of discrete-time switched neural networks with random time-varying delays. The involved delays are assumed to be randomly time-varying and are characterized by introducing a Bernoulli stochastic variable. Effects of both the variation range and the distribution probability of the time delays are considered. The nonlinear activation functions are assumed to satisfy sector conditions. Our aim is to estimate the state by designing a full-order filter such that the filter error system is globally exponentially stable with an expected decay rate and an H∞ performance attenuation level. The filter is designed by using a piecewise Lyapunov–Krasovskii functional together with a linear matrix inequality (LMI) approach and the average dwell time method. First, a set of sufficient LMI conditions is established to guarantee the exponential mean-square stability of the augmented system, and then the parameters of the full-order filter are expressed in terms of solutions to a set of LMI conditions. The proposed LMI conditions can be easily solved by using standard software packages. Finally, numerical examples based on practical problems are provided to illustrate the effectiveness of the proposed filter design.

Journal ArticleDOI
TL;DR: This work pushes the speed of a quantum random number generator to 68 Gbps by operating a laser around its threshold level, and develops a practical interferometer with active feedback, instead of common temperature control, to meet the stability requirement.
Abstract: The speed of a quantum random number generator is essential for practical applications, such as high-speed quantum key distribution systems. Here, we push the speed of a quantum random number generator to 68 Gbps by operating a laser around its threshold level. To achieve the rate, not only high-speed photodetector and high sampling rate are needed but also a very stable interferometer is required. A practical interferometer with active feedback instead of common temperature control is developed to meet the requirement of stability. Phase fluctuations of the laser are measured by the interferometer with a photodetector and then digitalized to raw random numbers with a rate of 80 Gbps. The min-entropy of the raw data is evaluated by modeling the system and is used to quantify the quantum randomness of the raw data. The bias of the raw data caused by other signals, such as classical and detection noises, can be removed by Toeplitz-matrix hashing randomness extraction. The final random numbers can pass through the standard randomness tests. Our demonstration shows that high-speed quantum random number generators are ready for practical usage.
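Toeplitz-matrix hashing, the extraction step mentioned above, can be sketched in a few lines. The sizes here are toy values; a real extractor chooses the output length m from the estimated min-entropy of the raw data.

```python
import numpy as np

rng = np.random.default_rng(9)

def toeplitz_extract(raw_bits, m, seed_bits):
    """Compress n raw bits to m < n nearly uniform bits by multiplying
    (mod 2) with a random m x n Toeplitz matrix built from n + m - 1
    seed bits."""
    n = len(raw_bits)
    assert len(seed_bits) == n + m - 1
    # T[i, j] = seed_bits[i - j + n - 1] gives constant diagonals,
    # i.e. a Toeplitz matrix fully determined by the seed.
    i, j = np.indices((m, n))
    T = seed_bits[i - j + n - 1]
    return T.dot(raw_bits) % 2

raw = rng.integers(0, 2, 256)            # stands in for biased raw bits
seed = rng.integers(0, 2, 256 + 128 - 1)
out = toeplitz_extract(raw, 128, seed)
print(out[:8])
```

Because only n + m - 1 seed bits define the whole matrix, the multiplication can be implemented with FFT-based convolution at high rates, which is what makes this family of extractors practical at tens of Gbps.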

Journal ArticleDOI
TL;DR: An extended hierarchy equation of motion (HEOM) is proposed and applied to study the dynamics of the spin-boson model; a complete dynamic basis set is constructed from the system reduced density matrix and auxiliary fields composed of bath-expansion functions, and the extended HEOM is derived for the time derivative of each element.
Abstract: An extended hierarchy equation of motion (HEOM) is proposed and applied to study the dynamics of the spin-boson model. In this approach, a complete set of orthonormal functions is used to expand an arbitrary bath correlation function. As a result, a complete dynamic basis set is constructed by including the system reduced density matrix and auxiliary fields composed of these expansion functions, where the extended HEOM is derived for the time derivative of each element. The reliability of the extended HEOM is demonstrated by comparison with the stochastic Hamiltonian approach under room-temperature classical ohmic and sub-ohmic noises and the multilayer multiconfiguration time-dependent Hartree theory under zero-temperature quantum ohmic noise. Upon increasing the order in the hierarchical expansion, the result obtained from the extended HEOM systematically converges to the numerically exact answer.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the edge-reinforced random walk (ERRW) can be interpreted as an annealed version of the vertex-reinforced jump process (VRJP).
Abstract: Edge-reinforced random walk (ERRW), introduced by Coppersmith and Diaconis in 1986, is a random process that takes values in the vertex set of a graph G, which is more likely to cross edges it has visited before. We show that it can be interpreted as an annealed version of the vertex-reinforced jump process (VRJP), conceived by Werner and first studied by Davis and Volkov (2002, 2004), a continuous-time process favouring sites with more local time. We calculate, for any finite graph G, the limiting measure of the centred occupation time measure of VRJP, and interpret it as a supersymmetric hyperbolic sigma model in quantum field theory. This enables us to deduce that VRJP is recurrent in any dimension for large reinforcement, using a localisation result of Disertori and Spencer (2010).

Journal ArticleDOI
TL;DR: It is demonstrated how the time-delay approach to networked control systems with scheduling protocols, variable delays and variable sampling intervals allows treating network-induced delays larger than the sampling intervals in the presence of collisions.
Abstract: This note develops the time-delay approach to networked control systems with scheduling protocols, variable delays and variable sampling intervals. The scheduling of sensor communication is defined by a stochastic protocol. Two classes of protocols are considered. The first one is defined by an independent and identically distributed stochastic process. The activation probability of each sensor node for this protocol is a given constant, whereas it is assumed that collisions occur with a certain probability. The resulting closed-loop system is a stochastic impulsive system with delays both in the continuous dynamics and in the reset equations, where the system matrices have stochastic parameters with Bernoulli distributions. The second scheduling protocol is defined by a discrete-time Markov chain with a known transition probability matrix taking into account collisions. The resulting closed-loop system is a Markovian jump impulsive system with delays both in the continuous dynamics and in the reset equations. Sufficient conditions for exponential mean-square stability of the resulting closed-loop system are derived via a Lyapunov-Krasovskii-based method. The efficiency of the method is illustrated on an example of a batch reactor. It is demonstrated how the time-delay approach allows treating network-induced delays larger than the sampling intervals in the presence of collisions.
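The Bernoulli-scheduling idea can be caricatured with a scalar toy loop (my own example; the paper's batch reactor and Lyapunov-Krasovskii analysis are not reproduced here): the sensor's measurement reaches the controller only when its i.i.d. Bernoulli slot succeeds, modelling random access with possible collisions, and the empirical second moment shows how the success probability governs mean-square behaviour:

```python
import random

def simulate(p_success=0.7, a=1.2, b=1.0, K=0.9, steps=200, runs=2000, seed=1):
    """Scalar plant x_{k+1} = a*x_k + b*u_k with u_k = -K*x_hat_k.
    At each step the fresh measurement reaches the controller only if
    an i.i.d. Bernoulli(p_success) transmission succeeds (no collision);
    otherwise the controller keeps the last received value.
    Returns the Monte Carlo estimate of E[x_k^2] over time."""
    rng = random.Random(seed)
    ms = [0.0] * steps  # running sum of x_k^2 across runs
    for _ in range(runs):
        x, x_hat = 1.0, 1.0
        for k in range(steps):
            ms[k] += x * x
            if rng.random() < p_success:  # successful, collision-free slot
                x_hat = x
            x = a * x + b * (-K * x_hat)
    return [m / runs for m in ms]

ms = simulate()
print(ms[0], ms[-1])  # empirical second moment at start and end
```

With `p_success = 1` the loop contracts deterministically (closed-loop gain a - bK = 0.3), while `p_success = 0` leaves the controller acting on a permanently stale measurement and the state diverges; intermediate values interpolate between these regimes.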

Journal ArticleDOI
TL;DR: The superstatistical approach is applied to migration trajectories of tumour cells in two and three dimensions, demonstrating its superior ability to discriminate cell migration strategies in different environments.
Abstract: Stochastic time series are ubiquitous in nature. In particular, random walks with time-varying statistical properties are found in many scientific disciplines. Here we present a superstatistical approach to analyse and model such heterogeneous random walks. The time-dependent statistical parameters can be extracted from measured random walk trajectories with a Bayesian method of sequential inference. The distributions and correlations of these parameters reveal subtle features of the random process that are not captured by conventional measures, such as the mean-squared displacement or the step width distribution. We apply our new approach to migration trajectories of tumour cells in two and three dimensions, and demonstrate the superior ability of the superstatistical method to discriminate cell migration strategies in different environments. Finally, we show how the resulting insights can be used to design simple and meaningful models of the underlying random processes.
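The flavour of the approach can be sketched in a few lines (synthetic data and a simple rolling-window estimator of my own, not the paper's Bayesian sequential inference): a 1-D walk whose step scale switches halfway through is analysed window by window, recovering the time-dependent parameter that a plain mean-squared displacement would blur together:

```python
import random, math

def heterogeneous_walk(n=4000, sigma1=0.5, sigma2=2.0, seed=42):
    """Steps of a random walk whose standard deviation switches
    from sigma1 to sigma2 at the midpoint -- a heterogeneous process."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, sigma1 if k < n // 2 else sigma2) for k in range(n)]

def rolling_sigma(steps, window=200):
    """Windowed estimate of the local step standard deviation."""
    out = []
    for k in range(0, len(steps) - window, window):
        chunk = steps[k:k + window]
        mean = sum(chunk) / window
        out.append(math.sqrt(sum((s - mean) ** 2 for s in chunk) / window))
    return out

est = rolling_sigma(heterogeneous_walk())
print(est[0], est[-1])  # estimates from the first and last windows
```

The estimator recovers sigma near 0.5 early on and near 2.0 at the end; the paper's Bayesian method does this sequentially and also yields distributions and correlations of the inferred parameters.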

Journal ArticleDOI
TL;DR: The results suggest that the normal MA is preferable to the other MAs studied; the new software package MOCA, which enables the automated numerical analysis of various MA methods in a graphical user interface, was used to perform the comparative analysis presented in this paper.
Abstract: In recent years, moment-closure approximations (MAs) of the chemical master equation have become a popular method for the study of stochastic effects in chemical reaction systems. Several different MA methods have been proposed and applied in the literature, but it remains unclear how they perform with respect to each other. In this paper, we study the normal, Poisson, log-normal, and central-moment-neglect MAs by applying them to understand the stochastic properties of chemical systems whose deterministic rate equations show the properties of bistability, ultrasensitivity, and oscillatory behaviour. Our results suggest that the normal MA is favourable over the other studied MAs. In particular, we found that (i) the size of the region of parameter space where a closure gives physically meaningful results, e.g., positive mean and variance, is considerably larger for the normal closure than for the other three closures, (ii) the accuracy of the predictions of the four closures (relative to simulations using the stochastic simulation algorithm) is comparable in those regions of parameter space where all closures give physically meaningful results, and (iii) the Poisson and log-normal MAs are not uniquely defined for systems involving conservation laws in molecule numbers. We also describe the new software package MOCA which enables the automated numerical analysis of various MA methods in a graphical user interface and which was used to perform the comparative analysis presented in this paper. MOCA allows the user to develop novel closure methods and can treat polynomial, non-polynomial, as well as time-dependent propensity functions, thus being applicable to virtually any chemical reaction system.
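To make the closure idea concrete, here is a hedged toy example (the reaction system and rates are mine, not from the paper): for production 0 -> X at rate k1 and dimerisation X + X -> 0 with propensity k2*n(n-1)/2, the exact moment equations couple <n> and <n^2> to the third moment <n^3>; the normal MA closes them with the Gaussian identity <n^3> = 3<n^2><n> - 2<n>^3, giving a self-contained ODE system:

```python
def moments_rhs(m1, m2, k1, k2):
    """Normal-closure moment equations for 0 -> X (rate k1) and
    X + X -> 0 (propensity k2*n*(n-1)/2, stoichiometry -2):
      d<n>/dt   = k1 - k2*(<n^2> - <n>)
      d<n^2>/dt = k1*(2<n> + 1) - 2*k2*(<n^3> - 2<n^2> + <n>)
    with <n^3> closed by the Gaussian identity."""
    m3 = 3.0 * m2 * m1 - 2.0 * m1 ** 3                     # normal closure
    f1 = k1 - k2 * (m2 - m1)
    f2 = k1 * (2.0 * m1 + 1.0) - 2.0 * k2 * (m3 - 2.0 * m2 + m1)
    return f1, f2

def integrate(k1=10.0, k2=0.1, dt=1e-3, T=60.0):
    """Forward-Euler integration of the closed moment equations."""
    m1, m2 = 0.0, 0.0
    for _ in range(int(T / dt)):
        f1, f2 = moments_rhs(m1, m2, k1, k2)
        m1, m2 = m1 + dt * f1, m2 + dt * f2
    return m1, m2

m1, m2 = integrate()
print(m1, m2 - m1 ** 2)  # closure's steady-state mean and variance
```

For k1/k2 = 100 the deterministic rate equation predicts a steady state of exactly 10; the closed equations settle slightly above it with a finite variance, illustrating the fluctuation corrections that MAs are designed to capture.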

Journal ArticleDOI
TL;DR: In this article, anomalous stochastic processes based on the renewal continuous time random walk model with different forms for the probability density of waiting times between individual jumps are considered, and the generalized diffusion and Fokker-Planck-Smoluchowski equations with corresponding memory kernels are derived.
Abstract: We consider anomalous stochastic processes based on the renewal continuous time random walk model with different forms for the probability density of waiting times between individual jumps. In the corresponding continuum limit we derive the generalized diffusion and Fokker-Planck-Smoluchowski equations with the corresponding memory kernels. We calculate the qth order moments in the unbiased and biased cases, and demonstrate that the generalized Einstein relation for the considered dynamics remains valid. The relaxation of modes in the case of an external harmonic potential and the convergence of the mean squared displacement to the thermal plateau are analyzed.
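A direct simulation shows the qualitative content (the exponent, jump law, and parameters are illustrative choices of mine): with Pareto waiting times psi(tau) ~ tau^(-1-alpha), alpha < 1, the renewal CTRW's ensemble mean squared displacement grows like t^alpha rather than linearly in t:

```python
import random

def ctrw_position(t_max, alpha, rng):
    """Position at time t_max of a renewal CTRW with unit jumps and
    Pareto(alpha) waiting times (inverse-transform sampled, scale 1)."""
    t, x = 0.0, 0
    while True:
        t += rng.random() ** (-1.0 / alpha)  # heavy-tailed waiting time
        if t > t_max:
            return x
        x += rng.choice((-1, 1))             # unbiased unit jump

def msd(t_max, alpha=0.6, runs=10000, seed=0):
    """Monte Carlo estimate of the ensemble mean squared displacement."""
    rng = random.Random(seed)
    return sum(ctrw_position(t_max, alpha, rng) ** 2 for _ in range(runs)) / runs

m10, m1000 = msd(10.0), msd(1000.0)
print(m10, m1000)  # subdiffusive: ratio far below the factor 100 of normal diffusion
```

For alpha = 0.6 the asymptotic prediction is msd(1000)/msd(10) close to 100^0.6, roughly 16, well below the factor of 100 that ordinary diffusion would give over the same time span.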

Journal ArticleDOI
TL;DR: The notion of robustness is extended to stochastic systems, and it is shown how to exploit this notion to address the system design problem, where the goal is to optimise some control parameters of a stochastic model in order to maximise robustness of the desired specifications.