
Showing papers on "Stochastic process published in 2008"


Journal ArticleDOI
TL;DR: The mathematical theory behind the simple random walk is introduced, its relation to Brownian motion and diffusive processes in general is explained, and it is shown how a reinforced random walk can be used to model movement where the individual changes its environment.
Abstract: Mathematical modelling of the movement of animals, micro-organisms and cells is of great relevance in the fields of biology, ecology and medicine. Movement models can take many different forms, but the most widely used are based on the extensions of simple random walk processes. In this review paper, our aim is twofold: to introduce the mathematics behind random walks in a straightforward manner and to explain how such models can be used to aid our understanding of biological processes. We introduce the mathematical theory behind the simple random walk and explain how this relates to Brownian motion and diffusive processes in general. We demonstrate how these simple models can be extended to include drift and waiting times or be used to calculate first passage times. We discuss biased random walks and show how hyperbolic models can be used to generate correlated random walks. We cover two main applications of the random walk model. Firstly, we review models and results relating to the movement, dispersal and population redistribution of animals and micro-organisms. This includes direct calculation of mean squared displacement, mean dispersal distance, tortuosity measures, as well as possible limitations of these model approaches. Secondly, oriented movement and chemotaxis models are reviewed. General hyperbolic models based on the linear transport equation are introduced and we show how a reinforced random walk can be used to model movement where the individual changes its environment. We discuss the applications of these models in the context of cell migration leading to blood vessel growth (angiogenesis). Finally, we discuss how the various random walk models and approaches are related and the connections that underpin many of the key processes involved.
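The diffusive scaling at the heart of the review can be illustrated in a few lines (a generic sketch, not the paper's code): for a simple symmetric random walk, the mean squared displacement grows linearly in time.

```python
import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps = 5000, 200

# simple symmetric random walk: unit steps left or right
steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
paths = np.cumsum(steps, axis=1)

# mean squared displacement at each time; diffusive scaling gives MSD ~ t
msd = (paths.astype(float) ** 2).mean(axis=0)
```

With unit step variance, `msd[t-1] / t` hovers near 1 for all t, which is the signature of normal (Brownian-like) diffusion.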

1,313 citations


Journal ArticleDOI
01 Feb 2008
TL;DR: This work marginalizes out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings, resulting in a nonparametric model for dynamical systems that accounts for uncertainty in the model.
Abstract: We introduce Gaussian process dynamical models (GPDMs) for nonlinear time series analysis, with applications to learning models of human pose and motion from high-dimensional motion capture data. A GPDM is a latent variable model. It comprises a low-dimensional latent space with associated dynamics, as well as a map from the latent space to an observation space. We marginalize out the model parameters in closed form by using Gaussian process priors for both the dynamical and the observation mappings. This results in a nonparametric model for dynamical systems that accounts for uncertainty in the model. We demonstrate the approach and compare four learning algorithms on human motion capture data, in which each pose is 50-dimensional. Despite the use of small data sets, the GPDM learns an effective representation of the nonlinear dynamics in these spaces.
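The Gaussian-process machinery underlying a GPDM can be shown in miniature. The sketch below is a toy GP posterior-mean computation with an RBF kernel (my own illustration, not the paper's model); a GPDM applies the same marginalization to both its dynamics and observation mappings.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # squared-exponential covariance between two 1-D input sets
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

X = np.array([-2.0, -1.0, 0.0, 1.5])    # training inputs
y = np.sin(X)                           # training targets
Xs = np.linspace(-3.0, 3.0, 61)         # test inputs

K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
alpha = np.linalg.solve(K, y)
mean = rbf(X, Xs).T @ alpha             # GP posterior mean at test inputs
```

The posterior mean interpolates the training data exactly (up to the jitter), which is the closed-form marginalization property the paper exploits.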

1,026 citations


Book
19 Feb 2008
TL;DR: This introduction to multiscale methods gives readers a broad overview of their many uses and applications, sets the theoretical foundations of the subject area, and develops a unified approach to the simplification of a wide range of problems possessing multiple scales, via perturbation expansions.
Abstract: This introduction to multiscale methods gives readers a broad overview of the many uses and applications of the methods. The book begins by setting the theoretical foundations of the subject area, and moves on to develop a unified approach to the simplification of a wide range of problems which possess multiple scales, via perturbation expansions; differential equations and stochastic processes are studied in one unified framework. The book concludes with an overview of a range of theoretical tools used to justify the simplified models derived via the perturbation expansions. The presentation of the material is particularly suited to the range of mathematicians, scientists and engineers who want to exploit multiscale methods in applications. Extensive use of examples shows how to apply multiscale methods to solving a variety of problems. Exercises then enable readers to build their own skills and put them into practice. Extensions and generalizations of the results presented in the book, as well as references to the literature, are provided in the Discussion and Bibliography section at the end of each chapter. All of the twenty-one chapters are supplemented with exercises.

919 citations


Book
02 Sep 2008
TL;DR: In this article, the authors review probability theory, discrete random processes, and random fields, and apply them to reliability-based geotechnical design, covering problems ranging from groundwater flow and foundation settlement to slope stability and mine pillar capacity.
Abstract: Preface. Acknowledgements. PART 1: THEORY. Chapter 1: Review of Probability Theory. 1.1 Introduction. 1.2 Basic Set Theory. 1.3 Probability. 1.4 Conditional Probability. 1.5 Random Variables and Probability Distributions. 1.6 Measures of Central Tendency, Variability, and Association. 1.7 Linear Combinations of Random Variables. 1.8 Functions of Random Variables. 1.9 Common Discrete Probability Distributions. 1.10 Common Continuous Probability Distributions. 1.11 Extreme-Value Distributions. Chapter 2: Discrete Random Processes. 2.1 Introduction. 2.2 Discrete-Time, Discrete-State Markov Chains. 2.3 Continuous-Time Markov Chains. 2.4 Queueing Models. Chapter 3: Random Fields. 3.1 Introduction. 3.2 Covariance Function. 3.3 Spectral Density Function. 3.4 Variance Function. 3.5 Correlation Length. 3.6 Some Common Models. 3.7 Random Fields in Higher Dimensions. Chapter 4: Best Estimates, Excursions, and Averages. 4.1 Best Linear Unbiased Estimation. 4.2 Threshold Excursions in One Dimension. 4.3 Threshold Excursions in Two Dimensions. 4.4 Averages. Chapter 5: Estimation. 5.1 Introduction. 5.2 Choosing a Distribution. 5.3 Estimation in Presence of Correlation. 5.4 Advanced Estimation Techniques. Chapter 6: Simulation. 6.1 Introduction. 6.2 Random-Number Generators. 6.3 Generating Nonuniform Random Variables. 6.4 Generating Random Fields. 6.5 Conditional Simulation of Random Fields. 6.6 Monte Carlo Simulation. Chapter 7: Reliability-Based Design. 7.1 Acceptable Risk. 7.2 Assessing Risk. 7.3 Background to Design Methodologies. 7.4 Load and Resistance Factor Design. 7.5 Going Beyond Calibration. 7.6 Risk-Based Decision Making. PART 2: PRACTICE. Chapter 8: Groundwater Modeling. 8.1 Introduction. 8.2 Finite-Element Model. 8.3 One-Dimensional Flow. 8.4 Simple Two-Dimensional Flow. 8.5 Two-Dimensional Flow Beneath Water-Retaining Structures. 8.6 Three-Dimensional Flow. 8.7 Three-Dimensional Exit Gradient Analysis. Chapter 9: Flow Through Earth Dams. 9.1 Statistics of Flow Through Earth Dams. 9.2 Extreme Hydraulic Gradient Statistics. Chapter 10: Settlement of Shallow Foundations. 10.1 Introduction. 10.2 Two-Dimensional Probabilistic Foundation Settlement. 10.3 Three-Dimensional Probabilistic Foundation Settlement. 10.4 Strip Footing Risk Assessment. 10.5 Resistance Factors for Shallow-Foundation Settlement Design. Chapter 11: Bearing Capacity. 11.1 Strip Footings on c-φ Soils. 11.2 Load and Resistance Factor Design of Shallow Foundations. 11.3 Summary. Chapter 12: Deep Foundations. 12.1 Introduction. 12.2 Random Finite-Element Method. 12.3 Monte Carlo Estimation of Pile Capacity. 12.4 Summary. Chapter 13: Slope Stability. 13.1 Introduction. 13.2 Probabilistic Slope Stability Analysis. 13.3 Slope Stability Reliability Model. Chapter 14: Earth Pressure. 14.1 Introduction. 14.2 Passive Earth Pressures. 14.3 Active Earth Pressures: Retaining Wall Reliability. Chapter 15: Mine Pillar Capacity. 15.1 Introduction. 15.2 Literature. 15.3 Parametric Studies. 15.4 Probabilistic Interpretation. 15.5 Summary. Chapter 16: Liquefaction. 16.1 Introduction. 16.2 Model Size: Soil Liquefaction. 16.3 Monte Carlo Analysis and Results. 16.4 Summary. PART 3: APPENDIXES. APPENDIX A: PROBABILITY TABLES. A.1 Normal Distribution. A.2 Inverse Student t-Distribution. A.3 Inverse Chi-Square Distribution. APPENDIX B: NUMERICAL INTEGRATION. B.1 Gaussian Quadrature. APPENDIX C: COMPUTING VARIANCES AND COVARIANCES OF LOCAL AVERAGES. C.1 One-Dimensional Case. C.2 Two-Dimensional Case. C.3 Three-Dimensional Case. Index.
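The "Generating Random Fields" machinery of Part 1 can be sketched generically via Cholesky factorization of a covariance matrix (my own minimal example, not the book's code; the Markov-covariance convention with unit variance is an assumption).

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx, ell = 200, 0.05, 1.0
x = np.arange(n) * dx

# exponential (Markov) covariance with unit variance and correlation length ell
C = np.exp(-2.0 * np.abs(x[:, None] - x[None, :]) / ell)

# Cholesky factorization maps iid standard normals to a correlated field
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
field = L @ rng.standard_normal(n)   # one zero-mean, unit-variance realization
```

Multiplying fresh standard-normal vectors by `L` yields independent realizations sharing the prescribed covariance structure, which is the basic building block for the Monte Carlo analyses in Part 2.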

751 citations


Book
23 Oct 2008
TL;DR: This book develops the theory of stochastic filtering and its numerical solution, including a continuous-time particle filter and particle filtering in discrete time.
Abstract: Filtering Theory - The Stochastic Process - The Filtering Equations - Uniqueness of the Solution to the Zakai and the Kushner-Stratonovich Equations - The Robust Representation Formula - Finite-Dimensional Filters - The Density of the Conditional Distribution of the Signal - Numerical Algorithms - Numerical Methods for Solving the Filtering Problem - A Continuous Time Particle Filter - Particle Filters in Discrete Time

638 citations


Journal ArticleDOI
22 May 2008-Nature
TL;DR: It is shown that it is possible to engineer an optical material in which light waves perform a Lévy flight, and the key parameters that determine the transport behaviour can be easily tuned, making this an ideal experimental system in which to study Lévy flights in a controlled way.
Abstract: Translucent materials such as milk, clouds and biological tissues owe their appearance to the way they interact with light, randomly scattering an incident ray many times before it re-emerges. This process, analogous to the Brownian motion of particles in a fluid, is called a random walk, a concept central to statistical physics. It is used, for example, to describe the diffusion of heat, light and sound. An extension of this idea is the Lévy flight, where a moving entity can occasionally take unusually large steps, thereby transforming a system's behaviour. Lévy flights have been recognized in systems as diverse as earthquakes and animal food searches. Barthelemy et al. have now engineered such behaviour into an optical material (titanium dioxide particles in a glass matrix). In the resulting 'Lévy glass', rather than undergoing regular diffusion, light waves perform a Lévy flight, in which photons spread extremely efficiently. This will be an ideal model for studying Lévy flights, and may also lead to novel optical materials. The cover shows the photons' path, with the light source at top right. Photo by Diederik and Leonardo Wiersma. An extension of the concept of a random walk is the Lévy flight, in which the moving entity can occasionally take unusually large steps. Pierre Barthelemy and colleagues show how such behaviour can be engineered into an optical material. A random walk is a stochastic process in which particles or waves travel along random trajectories. The first application of a random walk was in the description of particle motion in a fluid (Brownian motion); now it is a central concept in statistical physics, describing transport phenomena such as heat, sound and light diffusion [1]. Lévy flights are a particular class of generalized random walk in which the step lengths during the walk are described by a 'heavy-tailed' probability distribution. They can describe all stochastic processes that are scale invariant [2,3].
Lévy flights have accordingly turned out to be applicable to a diverse range of fields, describing animal foraging patterns [4], the distribution of human travel [5] and even some aspects of earthquake behaviour [6]. Transport based on Lévy flights has been extensively studied numerically [7,8,9], but experimental work has been limited [10,11] and, to date, it has not seemed possible to observe and study Lévy transport in actual materials. For example, experimental work on heat, sound, and light diffusion is generally limited to normal, Brownian, diffusion. Here we show that it is possible to engineer an optical material in which light waves perform a Lévy flight. The key parameters that determine the transport behaviour can be easily tuned, making this an ideal experimental system in which to study Lévy flights in a controlled way. The development of a material in which the diffusive transport of light is governed by Lévy statistics might even permit the development of new optical functionalities that go beyond normal light diffusion.

630 citations


Proceedings ArticleDOI
07 Dec 2008
TL;DR: This paper briefly describes the nature and relevance of Monte Carlo simulation, how to perform such simulations and analyze their results, and the underlying mathematical techniques required for performing them.
Abstract: This is an introductory tutorial on Monte Carlo simulation, a type of simulation that relies on repeated random sampling and statistical analysis to compute the results. In this paper, we will briefly describe the nature and relevance of Monte Carlo simulation, the way to perform these simulations and analyze results, and the underlying mathematical techniques required for performing these simulations. We will present a few examples from various areas where Monte Carlo simulation is used, and also touch on the current state of software in this area.
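The "repeated random sampling and statistical analysis" the tutorial describes is captured by the classic Monte Carlo estimate of π (a standard textbook example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# repeated random sampling: uniform points in the unit square
pts = rng.random((n, 2))

# statistical analysis: the fraction landing inside the quarter circle
inside = (pts ** 2).sum(axis=1) <= 1.0
pi_hat = 4.0 * inside.mean()   # Monte Carlo estimate of pi
```

The standard error of the estimate shrinks like 1/√n, which is the convergence behaviour any Monte Carlo study must budget for.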

467 citations


Journal ArticleDOI
TL;DR: It is shown that both a first-law-like energy balance involving exchanged heat and an entropy production entering refinements of the second law can consistently be defined along single stochastic trajectories.
Abstract: Stochastic thermodynamics provides a framework for describing small systems like colloids or biomolecules driven out of equilibrium but still in contact with a heat bath. Both a first-law-like energy balance involving exchanged heat and an entropy production entering refinements of the second law can consistently be defined along single stochastic trajectories. Various exact relations involving the distribution of such quantities, like integral and detailed fluctuation theorems for total entropy production and the Jarzynski relation, follow from such an approach based on Langevin dynamics. Analogues of these relations can be proven for any system obeying a stochastic master equation, in particular (bio)chemically driven enzymes or whole reaction networks. The perspective of investigating such relations for stochastic field equations like the Kardar-Parisi-Zhang equation is sketched as well.
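One of the exact relations mentioned, the Jarzynski relation, can be checked numerically for an overdamped Langevin particle dragged in a harmonic trap (my own sketch; parameters and units are arbitrary, and the free-energy change is zero for a merely translated trap):

```python
import numpy as np

rng = np.random.default_rng(3)
beta, k, v, T, dt, n = 1.0, 1.0, 0.5, 1.0, 1e-3, 20000

# start in equilibrium in the trap centered at 0
x = rng.normal(0.0, np.sqrt(1.0 / (beta * k)), n)
W = np.zeros(n)
t = 0.0
for _ in range(int(T / dt)):
    # work increment dW = (dH/dt) dt for H = (k/2)(x - v t)^2
    W += -k * v * (x - v * t) * dt
    # overdamped Langevin step (unit mobility, D = 1/beta)
    x += -k * (x - v * t) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(n)
    t += dt

# Jarzynski: <exp(-beta W)> = exp(-beta * dF) = 1 here, even though <W> > 0
jarzynski = np.exp(-beta * W).mean()
```

The ensemble average of exp(−βW) approaches 1 despite the mean dissipated work being strictly positive, which is exactly the content of the relation along single stochastic trajectories.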

462 citations


Journal ArticleDOI
TL;DR: In this article, a simple continuous-time monotone Markov stochastic process is proposed and used to make inference on a partially observed monotone process: the stochastically varying rate of sedimentation in lake sediment cores.
Abstract: Summary. We propose a new and simple continuous Markov monotone stochastic process and use it to make inference on a partially observed monotone stochastic process. The process is piecewise linear, based on additive independent gamma increments arriving in a Poisson fashion. An independent increments variation allows very simple conditional simulation of sample paths given known values of the process. We take advantage of a reparameterization involving the Tweedie distribution to provide efficient computation. The motivating problem is the establishment of a chronology for samples taken from lake sediment cores, i.e. the attribution of a set of dates to samples of the core given their depths, knowing that the age–depth relationship is monotone. The chronological information arises from radiocarbon (14C) dating at a subset of depths. We use the process to model the stochastically varying rate of sedimentation.
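The unconditional version of such a process is easy to simulate (a sketch with invented parameter values; the paper's contribution is the conditional simulation and Tweedie reparameterization, not shown here): gamma increments arriving at Poisson times, joined piecewise linearly.

```python
import numpy as np

rng = np.random.default_rng(7)
rate, shape, scale, T = 2.0, 1.5, 0.5, 10.0

# Poisson arrival times of the increments on [0, T]
n_jumps = rng.poisson(rate * T)
times = np.sort(rng.uniform(0.0, T, n_jumps))

# independent positive gamma increments give a monotone cumulative path
values = np.concatenate([[0.0], np.cumsum(rng.gamma(shape, scale, n_jumps))])
knots = np.concatenate([[0.0], times])

def path(t):
    # piecewise-linear interpolation between the jump knots
    return np.interp(t, knots, values)
```

Monotonicity is automatic because every increment is positive, which is what makes the construction natural for age-depth chronologies.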

454 citations


Posted Content
TL;DR: In this article, basic properties of several Banach spaces of functions of G-Brownian motion paths induced by a sublinear G-expectation are given; a generalized version of Kolmogorov's criterion for continuous modification of a stochastic process is also obtained, with applications to continuous-time dynamic and coherent risk measures in finance, in particular for path-dependent risky positions under volatility model uncertainty.
Abstract: In this paper we give some basic and important properties of several typical Banach spaces of functions of $G$-Brownian motion paths induced by a sublinear expectation, the G-expectation. Many results can also be applied to more general situations. A generalized version of Kolmogorov's criterion for continuous modification of a stochastic process is also obtained. The results can be applied to continuous-time dynamic and coherent risk measures in finance, in particular for path-dependent risky positions under situations of volatility model uncertainty.

450 citations


Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition for almost sure asymptotic consensus using simple ergodicity and probabilistic arguments is presented. This easily verifiable condition uses the spectrum of the average weight matrix.
Abstract: We consider the consensus problem for stochastic discrete-time linear dynamical systems. The underlying graph of such systems at a given time instance is derived from a random graph process, independent of other time instances. For such a framework, we present a necessary and sufficient condition for almost sure asymptotic consensus using simple ergodicity and probabilistic arguments. This easily verifiable condition uses the spectrum of the average weight matrix. Finally, we investigate a special case for which the linear dynamical system converges to a fixed vector with probability 1.
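The flavor of such almost-sure consensus results can be seen in a toy randomized gossip simulation (a special case of random-graph consensus, my own sketch rather than the paper's general framework):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
x = rng.standard_normal(n)   # initial agent states
avg0 = x.mean()              # pairwise averaging preserves the mean

for _ in range(300):
    # random graph at each time instance: here a single random edge (gossip)
    i, j = rng.choice(n, size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])

# states contract toward the fixed consensus vector avg0 * ones
spread = x.max() - x.min()
```

Each averaging step is doubly stochastic, so the initial average is invariant and the states converge with probability 1 to a fixed consensus vector, mirroring the special case analyzed in the paper.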

Journal ArticleDOI
TL;DR: The dynamic equations provide an alternative way of determining the epidemic threshold where large-scale epidemics are expected to occur, and below which epidemic behavior is limited to finite-sized outbreaks.
Abstract: Random networks with specified degree distributions have been proposed as realistic models of population structure, yet the problem of dynamically modeling SIR-type epidemics in random networks remains complex. I resolve this dilemma by showing how the SIR dynamics can be modeled with a system of three nonlinear ODE’s. The method makes use of the probability generating function (PGF) formalism for representing the degree distribution of a random network and makes use of network-centric quantities such as the number of edges in a well-defined category rather than node-centric quantities such as the number of infecteds or susceptibles. The PGF provides a simple means of translating between network and node-centric variables and determining the epidemic incidence at any time. The theory also provides a simple means of tracking the evolution of the degree distribution among susceptibles or infecteds. The equations are used to demonstrate the dramatic effects that the degree distribution plays on the final size of an epidemic as well as the speed with which it spreads through the population. Power law degree distributions are observed to generate an almost immediate expansion phase yet have a smaller final size compared to homogeneous degree distributions such as the Poisson. The equations are compared to stochastic simulations, which show good agreement with the theory. Finally, the dynamic equations provide an alternative way of determining the epidemic threshold where large-scale epidemics are expected to occur, and below which epidemic behavior is limited to finite-sized outbreaks.
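The decisive role of the degree distribution can be illustrated with the standard configuration-model epidemic threshold, expressed through moments of the degree distribution in the same PGF formalism (a textbook result consistent with, but much simpler than, the paper's ODE system):

```python
import numpy as np
from math import exp, factorial

def critical_transmissibility(pk):
    # T_c = <k> / (<k^2> - <k>), from the PGF of the degree distribution
    k = np.arange(len(pk), dtype=float)
    mean_k = (k * pk).sum()
    mean_k2 = (k ** 2 * pk).sum()
    return mean_k / (mean_k2 - mean_k)

# Poisson degree distribution with mean 5 (truncated at k = 49)
lam = 5.0
pk = np.array([exp(-lam) * lam ** i / factorial(i) for i in range(50)])
Tc = critical_transmissibility(pk)   # analytic value for Poisson: 1/lam
```

Heavy-tailed degree distributions inflate the second moment and drive the threshold toward zero, which is the mechanism behind the near-immediate expansion phase the paper observes for power-law networks.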

Journal ArticleDOI
TL;DR: In this article, a Markov chain Monte Carlo (MCMC) method is used for the direct generation of synthetic time series of wind power output; modeling directly in the power domain leads to a reduced number of states and a lower order of the Markov chain at equal power data resolution.
Abstract: This paper contributes a Markov chain Monte Carlo (MCMC) method for the direct generation of synthetic time series of wind power output. It is shown that obtaining a stochastic model directly in the wind power domain leads to reduced number of states and to lower order of the Markov chain at equal power data resolution. The estimation quality of the stochastic model is positively influenced since in the power domain, a lower number of independent parameters is estimated from a given amount of recorded data. The simulation results prove that this method offers excellent fit for both the probability density function and the autocorrelation function of the generated wind power time series. The method is a first step toward simple stochastic black-box models for wind generation.
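A toy version of the power-domain approach (the synthetic "recorded" series, state count, and parameters below are all invented for illustration): quantize a normalized power series into a few states, estimate a first-order transition matrix by counting, and sample a synthetic series from the fitted chain.

```python
import numpy as np

rng = np.random.default_rng(11)

# toy "recorded" normalized power series (stand-in for measured wind power)
p = np.clip(0.5 + 0.4 * np.sin(np.linspace(0.0, 60.0, 5000))
            + rng.normal(0.0, 0.05, 5000), 0.0, 1.0)
states = np.minimum((p * 10).astype(int), 9)   # 10 power levels

# estimate the first-order transition matrix by counting transitions
P = np.zeros((10, 10))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1.0
P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)  # guard unvisited states

# generate a synthetic state sequence from the fitted chain
s = [int(states[0])]
for _ in range(1000):
    s.append(int(rng.choice(10, p=P[s[-1]])))
```

Working in the power domain means the state space is the quantized power level itself, so fewer states and fewer independent parameters must be estimated from the same amount of recorded data, as the paper argues.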

Journal ArticleDOI
TL;DR: In this paper, it is shown that, with probability one, the Hausdorff dimension of the chordal SLE curve is equal to min(2, 1 + κ/8).
Abstract: Let γ be the curve generating a Schramm–Loewner Evolution (SLE) process, with parameter κ ≥ 0. We prove that, with probability one, the Hausdorff dimension of γ is equal to min(2, 1 + κ/8). Introduction. It has been conjectured by theoretical physicists that various lattice models in statistical physics (such as percolation, Potts model, Ising model, uniform spanning trees), taken at their critical point, have a continuous conformally invariant scaling limit when the mesh of the lattice tends to 0. Recently, Oded Schramm [15] introduced a family of random processes which he called Stochastic Loewner Evolutions (or SLE), that are the only possible conformally invariant scaling limits of random cluster interfaces (which are very closely related to all above-mentioned models). An SLE process is defined using the usual Loewner equation, where the driving function is a time-changed Brownian motion. More specifically, in the present paper we will be mainly concerned with SLE in the upper-half plane (sometimes called chordal SLE), defined by the following PDE:
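The abstract is cut off at the defining equation; for reference, the standard form of the chordal Loewner evolution it refers to, with driving function given by time-scaled Brownian motion, reads:

```latex
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z, \qquad W_t = \sqrt{\kappa}\, B_t,
```

where $B_t$ is a standard one-dimensional Brownian motion and $g_t$ maps the complement of the curve back to the upper half-plane.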

Proceedings ArticleDOI
25 Oct 2008
TL;DR: It is proved that the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ, and that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring, which means that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail.
Abstract: For many random constraint satisfaction problems, by now there exist asymptotically tight estimates of the largest constraint density for which solutions exist. At the same time, for many of these problems, all known polynomial-time algorithms stop finding solutions at much smaller densities. For example, it is well-known that it is easy to color a random graph using twice as many colors as its chromatic number. Indeed, some of the simplest possible coloring algorithms achieve this goal. Given the simplicity of those algorithms, one would expect room for improvement. Yet, to date, no algorithm is known that uses (2 − ε)χ colors, in spite of efforts by numerous researchers over the years. In view of the remarkable resilience of this factor of 2 against every algorithm hurled at it, we find it natural to inquire into its origin. We do so by analyzing the evolution of the set of k-colorings of a random graph, viewed as a subset of {1,...,k}^n, as edges are added. We prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set. Roughly speaking, we prove that the set of k-colorings looks like a giant ball for k ≥ 2χ, but like an error-correcting code for k ≤ (2 − ε)χ. We also prove that an analogous phase transition occurs both in random k-SAT and in random hypergraph 2-coloring, and that for each of these three problems, the location of the transition corresponds to the point where all known polynomial-time algorithms fail. To prove our results we develop a general technique that allows us to establish rigorously much of the celebrated 1-step replica-symmetry-breaking hypothesis of statistical physics for random CSPs.

Journal ArticleDOI
TL;DR: In this paper, the authors develop more general limit theorems, based on triangular arrays, for sequences of continuous-time random walks (CTRWs), which are used in physics to model anomalous diffusion.

Journal ArticleDOI
TL;DR: The concepts of asymptotic Nash equilibrium in probability and almost surely are introduced and the relationship between them is illuminated, providing necessary tools for analyzing the optimality of the decentralized control laws.
Abstract: The interaction of interest-coupled decision-makers and the uncertainty of individual behavior are prominent characteristics of multiagent systems (MAS). How to break through the framework of conventional control theory, which aims at single decision-maker and single decision objective, and to extend the methodology and tools in the stochastic adaptive control theory to analyze MAS are of great significance. In this paper, a preliminary exploration is made in this direction, and the decentralized control problem is considered for large population stochastic MAS with coupled cost functions. Different from the deterministic discounted costs in the existing differential game models, a time-averaged stochastic cost function is adopted for each agent. The decentralized control law is constructed based on the state aggregation method and tracking-like quadratic optimal control. By using probability limit theory, the stability and optimality of the closed-loop system are analyzed. The main contributions of this paper include the following points. 1) The concepts of asymptotic Nash-equilibrium in probability and almost surely, respectively, are introduced and the relationship between these concepts is illuminated, which provide necessary tools for analyzing the optimality of the decentralized control laws. 2) The closed-loop system is shown to be almost surely uniformly stable, and bounded independently of the number of agents N . 3) The population state average (PSA) is shown to converge to the infinite population mean (IPM) trajectory in the sense of both L2-norm and time average almost surely, as N increases to infinity. 4) The decentralized control law is designed and shown to be almost surely asymptotically optimal; the cost of each agent based on local measurements converges to that based on global measurements almost surely, as N increases to infinity.

Journal ArticleDOI
TL;DR: This paper presents conditions for a general class of nonlinear NCS with exogenous disturbances using stochastic protocols, in the presence of packet dropouts, random packet transmission times and collisions, that are sufficient for Lp stability from exogenous disturbance to NCS state with a linear finite expected gain.
Abstract: This paper introduces a new definition of stochastic protocols for networked control systems (NCS) and the stochastic analog of the notion of uniform persistency of excitation of protocols first presented in the Proceedings of the 44th IEEE Conference on Decision and Control. Our framework applies directly to common wireless and wireline NCS, including those built on carrier-sense multiple access (CSMA) style protocols, with Ethernet and 802.11a/b/g as prime examples of this class. We present conditions for a general class of nonlinear NCS with exogenous disturbances using stochastic protocols in the presence of packet dropouts, random packet transmission times and collisions that are sufficient for Lp stability from exogenous disturbance to NCS state with a linear finite expected gain. Within the same framework, we extend the results of Nesic and Teel (see IEEE Trans. Autom. Control, vol. 49, no. 10, pp. 1650-1667, Oct. 2004) to provide an analysis of deterministic protocols, including try-once-discard (TOD), in the presence of random packet dropouts and intertransmission times and provide a stochastic analog of the Lyapunov-theoretic stability properties for network protocols introduced therein.

Journal ArticleDOI
TL;DR: The approach is to mathematically model degradation-based signals from a population of components using stochastic models that combine three main sources of information: real-time degradation characteristics of the component, obtained by observing its in-situ degradation signal; the degradation characteristics of the component's population; and the real-time status of the environmental conditions under which the component is operating.
Abstract: This paper presents a degradation modeling framework for computing condition-based residual life distributions of partially degraded systems and/or components functioning under time-varying environmental and/or operational conditions. Our approach is to mathematically model degradation-based signals from a population of components using stochastic models that combine three main sources of information: real-time degradation characteristics of the component, obtained by observing the component's in-situ degradation signal; the degradation characteristics of the component's population; and the real-time status of the environmental conditions under which the component is operating. Prior degradation information is used to estimate the model coefficients. The resulting generalized stochastic degradation model is then used to predict an initial residual life distribution for the component being monitored. In-situ degradation signals, along with real-time information related to the environmental conditions, are then used to update the residual life distributions in real time. Because these updated distributions capture current health information and the latest environmental conditions, they provide precise lifetime estimates. The performance of the proposed models is evaluated using real-world vibration-based degradation signals from a rotating machinery application.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a new spatial individual-based forest model that includes a perfect plasticity formulation for crown shape and derived a series of analytical results including equilibrium abundances for trees of different crown shapes, stability conditions, transient behaviors, such as the constant yield law and self-thinning exponents.
Abstract: Individual-based forest simulators, such as TASS and SORTIE, are spatial stochastic processes that predict properties of populations and communities by simulating the fate of every plant throughout its life cycle. Although they are used for forest management and are able to predict dynamics of real forests, they are also analytically intractable, which limits their usefulness to basic scientists. We have developed a new spatial individual-based forest model that includes a perfect plasticity formulation for crown shape. Its structure allows us to derive an accurate approximation for the individual-based model that predicts mean densities and size structures using the same parameter values and functional forms, and also it is analytically tractable. The approximation is represented by a system of von Foerster partial differential equations coupled with an integral equation that we call the perfect plasticity approximation (PPA). We have derived a series of analytical results including equilibrium abundances for trees of different crown shapes, stability conditions, transient behaviors, such as the constant yield law and self-thinning exponents, and two species coexistence conditions.

Journal ArticleDOI
TL;DR: In this article, the existence and uniqueness of the solution of a general backward stochastic differential equation with quadratic growth driven by martingales are proved, together with a comparison theorem.

Journal ArticleDOI
TL;DR: The algorithm is intended both for online safety applications in a vehicle and for offline data analysis, and several techniques are presented to increase performance without increasing computational load.
Abstract: This paper presents a threat-assessment algorithm for general road scenes. A road scene consists of a number of known objects, and the threat level of the scene is based on their current positions and velocities. The future driver inputs of the surrounding objects are unknown and are modeled as random variables. In order to capture realistic driver behavior, a dynamic driver model is implemented as a probabilistic prior, which computes the likelihood of a potential maneuver. A distribution of possible future scenarios can then be approximated using Monte Carlo sampling. Based on this distribution, different threat measures can be computed, e.g., probability of collision or time to collision. Since the algorithm is based on Monte Carlo sampling, it is computationally demanding, and several techniques are presented to increase performance without increasing the computational load. The algorithm is intended both for online safety applications in a vehicle and for offline data analysis.
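The probability-of-collision threat measure described above can be sketched with a toy one-dimensional Monte Carlo example. Here the unknown future driver input is reduced to a single constant speed drawn from a uniform prior, and a "collision" means the object comes within a threshold distance of a stationary ego vehicle; all names, the prior, and the scene geometry are illustrative assumptions, not the paper's implementation.

```python
import random

def collision_probability(ego_pos, obj_pos, speed_range, horizon,
                          n_samples=10_000, threshold=1.0, seed=0):
    """Monte Carlo estimate of a probability-of-collision threat measure.

    Toy 1-D sketch: the object's future speed is sampled from a uniform
    prior; a 'collision' means closing within `threshold` of the
    (stationary) ego vehicle within `horizon` seconds.
    """
    rng = random.Random(seed)
    times = [horizon * k / 100 for k in range(101)]  # discretized horizon
    hits = 0
    for _ in range(n_samples):
        speed = rng.uniform(*speed_range)            # sample a future driver input
        gap = min(abs(obj_pos + speed * t - ego_pos) for t in times)
        if gap < threshold:
            hits += 1
    return hits / n_samples
```

Other threat measures (e.g., time to collision) can be estimated from the same sampled scenario distribution, which is the practical appeal of the Monte Carlo formulation.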

Journal ArticleDOI
TL;DR: Using the Kronecker product as an effective tool, a linear matrix inequality (LMI) approach is developed to derive several sufficient criteria ensuring that the coupled delayed neural networks are globally, robustly, and exponentially synchronized in the mean square.
Abstract: This paper is concerned with the robust synchronization problem for an array of coupled stochastic discrete-time neural networks with time-varying delay. The individual neural network is subject to parameter uncertainty, stochastic disturbance, and time-varying delay, where the norm-bounded parameter uncertainties exist in both the state and weight matrices, the stochastic disturbance is in the form of a scalar Wiener process, and the time delay enters into the activation function. For the array of coupled neural networks, the constant coupling and delayed coupling are simultaneously considered. We aim to establish easy-to-verify conditions under which the addressed neural networks are synchronized. By using the Kronecker product as an effective tool, a linear matrix inequality (LMI) approach is developed to derive several sufficient criteria ensuring that the coupled delayed neural networks are globally, robustly, and exponentially synchronized in the mean square. The LMI-based conditions obtained are dependent not only on the lower bound but also on the upper bound of the time-varying delay, and can be solved efficiently via the Matlab LMI Toolbox. Two numerical examples are given to demonstrate the usefulness of the proposed synchronization scheme.
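The notion of mean-square exponential synchronization can be illustrated with a toy simulation, quite apart from the paper's LMI machinery. The sketch below couples two small discrete-time networks diffusively and drives both with a common stochastic disturbance (an assumption of this sketch, chosen so the disturbance cancels in the error dynamics); the network, activation, and parameter values are all illustrative.

```python
import math
import random

def simulate_sync(steps=60, c=0.2, sigma=0.1, seed=7):
    """Toy simulation of two coupled discrete-time 3-node networks
    driven by a COMMON stochastic disturbance.

    Error dynamics: e_{k+1} = g(x_k) - g(y_k) - 2c*e_k, where g below
    has Lipschitz constant 0.3, so ||e_k|| decays at worst like
    (0.3 + 2c)^k = 0.7^k here, i.e., exponential synchronization.
    """
    A = [[0.10, 0.10, 0.10],
         [0.15, 0.05, 0.10],
         [0.00, 0.20, 0.10]]        # absolute row sums <= 0.3
    rng = random.Random(seed)
    x = [1.0, 0.5, -0.3]            # network 1 state
    y = [-0.2, 0.1, 0.8]            # network 2 state

    def g(z):
        return [sum(A[i][j] * math.tanh(z[j]) for j in range(3)) for i in range(3)]

    for _ in range(steps):
        w = [rng.gauss(0.0, 1.0) for _ in range(3)]   # shared disturbance
        gx, gy = g(x), g(y)
        x_new = [gx[i] + c * (y[i] - x[i]) + sigma * w[i] for i in range(3)]
        y_new = [gy[i] + c * (x[i] - y[i]) + sigma * w[i] for i in range(3)]
        x, y = x_new, y_new
    return max(abs(x[i] - y[i]) for i in range(3))
```

With independent disturbances the error would only shrink to a noise floor rather than to zero, which is one reason mean-square (rather than almost-sure) criteria are the natural target in the stochastic setting.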

Journal ArticleDOI
TL;DR: The framework proposed and the numerical methods derived from it provide a new and powerful tool for the exploration of neural behaviors at different scales and shed some new light on such neural mass models as the one of Jansen and Rit (1995).
Abstract: We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.
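The flavor of such a constructive method can be conveyed by a much simpler scalar analogue. The sketch below runs a Picard (fixed-point) iteration on a stationary mean-field self-consistency equation mu = J*S(mu) + I with a sigmoid S; the equation, parameters, and contraction argument are illustrative stand-ins for the paper's functional-equation setting, not its actual model.

```python
import math

def mean_field_fixed_point(J=0.5, I=0.2, tol=1e-10, max_iter=1000):
    """Picard iteration for a scalar stationary mean-field equation
    mu = J * S(mu) + I, with sigmoid S(x) = tanh(x).

    The map is a contraction when |J| * sup|S'| < 1 (here |J| < 1),
    so the iteration converges geometrically to the unique solution,
    mirroring the structure of a constructive fixed-point method.
    """
    mu = 0.0
    for _ in range(max_iter):
        nxt = J * math.tanh(mu) + I
        if abs(nxt - mu) < tol:
            return nxt
        mu = nxt
    raise RuntimeError("fixed-point iteration did not converge")
```

The same contraction-mapping logic, lifted to a space of stochastic processes on a finite time interval, is what underlies well-posedness results of this kind.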

Journal ArticleDOI
TL;DR: Further advantages of the matrix formulation of dynamics are described, providing simple exact methods for evaluating expected eradication times of diseases, for comparing expected total costs of possible control programmes and for estimation of disease parameters.
Abstract: Models that deal with the individual level of populations have shown the importance of stochasticity in ecology, epidemiology and evolution. An increasingly common approach to studying these models is through stochastic (event-driven) simulation. One striking disadvantage of this approach is the need for a large number of replicates to determine the range of expected behaviour. Here, for a class of stochastic models called Markov processes, we present results that overcome this difficulty and provide valuable insights, but which have been largely ignored by applied researchers. For these models, the so-called Kolmogorov forward equation (also called the ensemble or master equation) allows one to simultaneously consider the probability of each possible state occurring. Irrespective of the complexities and nonlinearities of population dynamics, this equation is linear and has a natural matrix formulation that provides many analytical insights into the behaviour of stochastic populations and allows rapid evaluation of process dynamics. Here, using epidemiological models as a template, these ensemble equations are explored and results are compared with traditional stochastic simulations. In addition, we describe further advantages of the matrix formulation of dynamics, providing simple exact methods for evaluating expected eradication (extinction) times of diseases, for comparing expected total costs of possible control programmes and for estimation of disease parameters.
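The "simple exact methods for evaluating expected eradication times" can be made concrete for a stochastic SIS epidemic. Because the master-equation formulation is linear, the expected time to hit the absorbing disease-free state satisfies a tridiagonal linear system that is solved exactly below; the SIS rates and parameter values are illustrative choices, not taken from the paper.

```python
def expected_eradication_times(N=20, beta=1.5, gamma=1.0):
    """Exact expected eradication times for a stochastic SIS model with
    N individuals, infection rate beta*i*(N-i)/N and recovery rate gamma*i.

    tau[i] = expected time to reach the absorbing state 0 from i infecteds.
    First-step analysis gives the tridiagonal linear system
        mu_i*tau[i-1] - (lam_i + mu_i)*tau[i] + lam_i*tau[i+1] = -1,
    solved here with the Thomas algorithm (tau[0] = 0, lam_N = 0).
    """
    lam = [beta * i * (N - i) / N for i in range(N + 1)]  # infection rates
    mu = [gamma * i for i in range(N + 1)]                # recovery rates
    n = N
    a = [mu[i] for i in range(1, N + 1)]        # sub-diagonal (a[0] hits tau[0]=0)
    b = [-(lam[i] + mu[i]) for i in range(1, N + 1)]
    c = [lam[i] for i in range(1, N + 1)]       # super-diagonal (c[-1] = 0)
    d = [-1.0] * n
    for i in range(1, n):                       # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    tau = [0.0] * n
    tau[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        tau[i] = (d[i] - c[i] * tau[i + 1]) / b[i]
    return [0.0] + tau
```

A single linear solve replaces the many stochastic replicates that would otherwise be needed to estimate these expectations, which is precisely the advantage of the matrix formulation.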

Journal ArticleDOI
TL;DR: This study examines a non-Archimedean copula from the Plackett family that is founded on the theory of constant cross-product ratio, and suggests that it provides further flexibility for multivariate stochastic analyses of rainfall.
Abstract: Recent hydrologic studies on multivariate stochastic analysis have indicated that copulas perform well for bivariate problems. In particular, the Frank family of Archimedean copulas has been a popular choice for a dependence model. However, there are limitations to extending such Archimedean copulas to trivariate and higher dimensions, with very specific restrictions on the kinds of dependencies that can be modeled. In this study, we examine a non-Archimedean copula from the Plackett family that is founded on the theory of constant cross-product ratio. It is shown that the Plackett family not only performs well at the bivariate level, but also allows a trivariate stochastic analysis where the lower-level dependencies between variables can be fully preserved while allowing for specificity at the trivariate level as well. The feasible range of Plackett parameters that would result in valid 3-copulas is determined numerically. The trivariate Plackett family of copulas is then applied to the study of temporal distribution of extreme rainfall events for several stations in Indiana where the estimated parameters lie in the feasible region. On the basis of a given rainfall depth and duration, conditional expectations of rainfall features such as expected peak intensity, time to peak, and percentage cumulative rainfall at 10% cumulative time increments are evaluated. The results of this study suggest that while the constant cross-product ratio theory was conventionally applied to discrete type random variables, it is also applicable to continuous random variables, and that it provides further flexibility for multivariate stochastic analyses of rainfall.
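At the bivariate level the Plackett family has a closed form worth writing out. The sketch below implements the textbook bivariate Plackett copula, where the parameter theta is the constant cross-product ratio; this is standard material on the family the paper uses, not the paper's trivariate construction.

```python
import math

def plackett_copula(u, v, theta):
    """Bivariate Plackett copula C(u, v; theta), theta > 0.

    theta is the constant cross-product (odds) ratio; theta = 1 gives
    the independence copula C(u, v) = u*v, theta > 1 positive dependence,
    theta < 1 negative dependence.
    """
    if theta == 1.0:
        return u * v
    s = 1.0 + (theta - 1.0) * (u + v)
    disc = s * s - 4.0 * u * v * theta * (theta - 1.0)
    return (s - math.sqrt(disc)) / (2.0 * (theta - 1.0))
```

One can check directly that the uniform-margin boundary conditions C(u, 1) = u and C(u, 0) = 0 hold, which is what makes this a valid copula for any theta > 0.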

Journal ArticleDOI
TL;DR: Two types of stochastic approximation methods are proposed for solving stochastic variational inequality problems (SVIP) in which the underlying functions are expected values of stochastic functions: methods based on projections and methods based on reformulations of the SVIP.
Abstract: Stochastic approximation methods have been extensively studied in the literature for solving systems of stochastic equations and stochastic optimization problems where function values and first order derivatives are not observable but can be approximated through simulation. In this paper, we investigate stochastic approximation methods for solving stochastic variational inequality problems (SVIP) where the underlying functions are the expected value of stochastic functions. Two types of methods are proposed: stochastic approximation methods based on projections and stochastic approximation methods based on reformulations of SVIP. Global convergence results of the proposed methods are obtained under appropriate conditions.
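The projection-type scheme can be sketched on a toy one-dimensional SVIP. Below, F(x) = E[x - xi] with xi ~ Uniform(0, 1) over the feasible set C = [0, 1], so the solution is x* = 0.5; the iteration x_{k+1} = Proj_C(x_k - a_k * f(x_k, xi_k)) uses only noisy samples of F, in the spirit of (though far simpler than) the methods the paper analyzes.

```python
import random

def solve_svip_projection(n_iter=200_000, seed=1):
    """Projection-based stochastic approximation for a toy SVIP:
    find x* in C = [0, 1] such that F(x*)*(y - x*) >= 0 for all y in C,
    where F(x) = E[x - xi], xi ~ Uniform(0, 1), hence x* = 0.5.

    Only unbiased samples f(x, xi) = x - xi of F(x) are observed,
    mimicking the simulation-based setting.
    """
    rng = random.Random(seed)
    x = 0.0
    for k in range(1, n_iter + 1):
        xi = rng.random()
        sample = x - xi                 # unbiased sample of F(x)
        x = x - sample / k              # diminishing step size a_k = 1/k
        x = min(1.0, max(0.0, x))       # projection onto C = [0, 1]
    return x
```

The diminishing step sizes satisfy the usual Robbins-Monro conditions (sum a_k = infinity, sum a_k^2 < infinity), which is what the convergence analysis of such schemes typically requires.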

Journal ArticleDOI
TL;DR: A general expression for the cumulant generating function (CGF) of non-Markovian quantum stochastic transport processes is derived and the effects of dissipation on the zero-frequency cumulants of high orders and the finite-frequency noise are studied.
Abstract: We derive a general expression for the cumulant generating function (CGF) of non-Markovian quantum stochastic transport processes. The long-time limit of the CGF is determined by a single dominating pole of the resolvent of the memory kernel, from which we extract the zero-frequency cumulants of the current using a recursive scheme. The finite-frequency noise is expressed not only in terms of the resolvent, but also in terms of initial system-environment correlations. As an illustrative example we consider electron transport through a dissipative double quantum dot, for which we study the effects of dissipation on the zero-frequency cumulants of high orders and the finite-frequency noise.
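The relation between a CGF and the cumulants it generates is easy to illustrate numerically. The sketch below takes the well-known CGF of unidirectional Poissonian transport, S(chi) = Gamma*t*(exp(chi) - 1), whose cumulants of all orders equal Gamma*t, and extracts the first three by finite differencing at chi = 0; this is a generic numerical illustration, not the paper's recursive resolvent-based scheme.

```python
import math

def cgf_poisson(chi, rate, t):
    """CGF of a unidirectional Poissonian transport process:
    S(chi) = rate * t * (exp(chi) - 1); all its cumulants equal rate*t."""
    return rate * t * (math.exp(chi) - 1.0)

def cumulants_from_cgf(cgf, h=1e-2):
    """First three cumulants via central finite differences of the
    cumulant generating function at chi = 0 (errors are O(h^2))."""
    f = cgf
    c1 = (f(h) - f(-h)) / (2 * h)
    c2 = (f(h) - 2 * f(0.0) + f(-h)) / h**2
    c3 = (f(2 * h) - 2 * f(h) + 2 * f(-h) - f(-2 * h)) / (2 * h**3)
    return c1, c2, c3
```

Naive differentiation of this kind becomes numerically unstable for high-order cumulants, which is one practical motivation for recursive schemes built on the pole structure of the resolvent.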

Journal ArticleDOI
01 Aug 2008
TL;DR: By utilizing the Lyapunov functional method combined with linear matrix inequality (LMI) techniques, several sufficient delay-dependent conditions are obtained that ensure the coupled networks to be globally exponentially synchronized in the mean square.
Abstract: In this paper, the synchronization control problem is considered for two coupled discrete-time complex networks with time delays. The network under investigation is quite general and reflects realistic situations: the state delays are allowed to be time varying with given lower and upper bounds, and the stochastic disturbances are assumed to be Brownian motions that affect not only the network coupling but also the overall networks. By utilizing the Lyapunov functional method combined with linear matrix inequality (LMI) techniques, we obtain several sufficient delay-dependent conditions that ensure the coupled networks to be globally exponentially synchronized in the mean square. A control law is designed to synchronize the addressed coupled complex networks in terms of certain LMIs that can be readily solved using the Matlab LMI toolbox. Two numerical examples are presented to show the validity of our theoretical analysis results.

Proceedings ArticleDOI
07 Dec 2008
TL;DR: The basic theory of kriging is extended, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables.
Abstract: We extend the basic theory of kriging, as applied to the design and analysis of deterministic computer experiments, to the stochastic simulation setting. Our goal is to provide flexible, interpolation-based metamodels of simulation output performance measures as functions of the controllable design or decision variables. To accomplish this we characterize both the intrinsic uncertainty inherent in a stochastic simulation and the extrinsic uncertainty about the unknown response surface. We use tractable examples to demonstrate why it is critical to characterize both types of uncertainty, derive general results for experiment design and analysis, and present a numerical example that illustrates the stochastic kriging method.