
Showing papers on "Stochastic process published in 2006"


BookDOI
01 Jan 2006
TL;DR: This book develops combinatorial stochastic processes, covering exchangeable random partitions, coagulation and fragmentation processes, random walks and random forests, the Brownian forest, Brownian local times, branching and Bessel processes, Brownian bridge asymptotics for random mappings, and the link between random forests and the additive coalescent.
Abstract: Preliminaries.- Bell polynomials, composite structures and Gibbs partitions.- Exchangeable random partitions.- Sequential constructions of random partitions.- Poisson constructions of random partitions.- Coagulation and fragmentation processes.- Random walks and random forests.- The Brownian forest.- Brownian local times, branching and Bessel processes.- Brownian bridge asymptotics for random mappings.- Random forests and the additive coalescent.

1,371 citations


Proceedings ArticleDOI
18 Dec 2006
TL;DR: The heart of the approach is to exploit two important properties shared by many real graphs: linear correlations and block-wise, community-like structure, and exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman-Morrison lemma for matrix inversion.
Abstract: How closely related are two nodes in a graph? How to compute this score quickly, on huge, disk-resident, real graphs? Random walk with restart (RWR) provides a good relevance score between two nodes in a weighted graph, and it has been successfully used in numerous settings, like automatic captioning of images, generalizations to the "connection subgraphs", personalized PageRank, and many more. However, the straightforward implementations of RWR do not scale for large graphs, requiring either quadratic space and cubic pre-computation time, or slow response time on queries. We propose fast solutions to this problem. The heart of our approach is to exploit two important properties shared by many real graphs: (a) linear correlations and (b) block-wise, community-like structure. We exploit the linearity by using low-rank matrix approximation, and the community structure by graph partitioning, followed by the Sherman-Morrison lemma for matrix inversion. Experimental results on the Corel image and DBLP datasets demonstrate that our proposed methods achieve significant savings over the straightforward implementations: they can save several orders of magnitude in pre-computation and storage cost, and they achieve up to 150x speedup with 90%+ quality preservation.
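To make the linear-algebra trick concrete, here is a minimal numpy sketch of RWR scoring with a low-rank factorization and the Sherman-Morrison-Woodbury identity. It is not the authors' partition-based algorithm: the toy graph, the SVD-based factorization, and the parameter values are illustrative assumptions.

```python
import numpy as np

def rwr_scores_lowrank(W, seed, c=0.9, rank=3):
    """Approximate Random-Walk-with-Restart relevance scores.

    W    : column-stochastic transition matrix (n x n), dense here for clarity
    seed : index of the query node
    c    : probability of continuing the walk (1 - restart probability)
    rank : rank of the low-rank approximation of W

    The exact answer is r = (1 - c) * (I - c*W)^{-1} * e_seed.
    With W ~ U @ V (rank-k), the Sherman-Morrison-Woodbury identity gives
    (I - c U V)^{-1} = I + c U (I - c V U)^{-1} V, so only a k x k system
    has to be handled per query.
    """
    n = W.shape[0]
    # rank-k factorization of W via truncated SVD (the paper also uses
    # graph partitioning; that part is omitted in this sketch)
    U_svd, s, Vt = np.linalg.svd(W, full_matrices=False)
    U = U_svd[:, :rank] * s[:rank]        # n x k
    V = Vt[:rank, :]                      # k x n

    e = np.zeros(n); e[seed] = 1.0
    core = np.linalg.inv(np.eye(rank) - c * (V @ U))   # k x k, precomputable
    r = (1 - c) * (e + c * (U @ (core @ (V @ e))))
    return r

# usage on a toy 4-node graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)                     # column-normalize
print(rwr_scores_lowrank(W, seed=0, c=0.9, rank=3))
```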

1,148 citations


Journal ArticleDOI
TL;DR: The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms.
Abstract: This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet a prespecified tolerance on the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods.
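The following sketch illustrates the FSP recipe on a toy birth-death master equation (constant production, linear degradation). The reaction rates and tolerance are illustrative, and the expansion strategy (doubling the state count) is a simplification of the systematic expansion described above.

```python
import numpy as np
from scipy.linalg import expm

def fsp_birth_death(k_prod, k_deg, p0_index, t, tol=1e-6, n0=16):
    """Finite State Projection sketch for a birth-death master equation
    (production at rate k_prod, degradation at rate k_deg * n).

    States are molecule counts {0, 1, ..., N-1}. The projection size N is
    grown until the probability mass lost to truncation is below `tol`,
    which is the FSP accuracy certificate described above.
    """
    N = n0
    while True:
        # generator of the truncated CME: dp/dt = A p
        A = np.zeros((N, N))
        for n in range(N):
            if n + 1 < N:
                A[n + 1, n] += k_prod          # birth n -> n+1
            A[n, n] -= k_prod                  # outflow (includes leakage at the boundary)
            if n > 0:
                A[n - 1, n] += k_deg * n       # death n -> n-1
                A[n, n] -= k_deg * n
        p0 = np.zeros(N); p0[p0_index] = 1.0
        p = expm(A * t) @ p0
        leaked = 1.0 - p.sum()                 # FSP error certificate
        if leaked <= tol:
            return p, leaked
        N *= 2                                 # enlarge the projection and retry

p, err = fsp_birth_death(k_prod=10.0, k_deg=1.0, p0_index=0, t=2.0)
print(len(p), err)
```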

796 citations


Journal ArticleDOI
TL;DR: This work presents a new method that makes maximum likelihood estimation feasible for partially-observed nonlinear stochastic dynamical systems (also known as state-space models) where this was not previously the case.
Abstract: Nonlinear stochastic dynamical systems are widely used to model systems across the sciences and engineering. Such models are natural to formulate and can be analyzed mathematically and numerically. However, difficulties associated with inference from time-series data about unknown parameters in these models have been a constraint on their application. We present a new method that makes maximum likelihood estimation feasible for partially-observed nonlinear stochastic dynamical systems (also known as state-space models) where this was not previously the case. The method is based on a sequence of filtering operations which are shown to converge to a maximum likelihood parameter estimate. We make use of recent advances in nonlinear filtering in the implementation of the algorithm. We apply the method to the study of cholera in Bangladesh. We construct confidence intervals, perform residual analysis, and apply other diagnostics. Our analysis, based upon a model capturing the intrinsic nonlinear dynamics of the system, reveals some effects overlooked by previous studies.
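For orientation, a bootstrap particle filter, the filtering operation at the core of the method, is sketched below for a toy one-parameter state-space model. The actual iterated-filtering procedure additionally perturbs the parameter particle-wise and shrinks the perturbations over successive filtering passes to obtain the maximum likelihood estimate; the model and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_loglik(y, theta, n_particles=2000):
    """Bootstrap particle filter log-likelihood for a toy state-space model
        x_t = theta * x_{t-1} + process noise,   y_t ~ Normal(x_t, 1).
    Only the filtering building block is shown; the paper's method repeats
    such filtering passes with perturbed parameters to climb the likelihood.
    """
    x = rng.normal(size=n_particles)                 # initial particle cloud
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(size=n_particles)             # propagate
        w = np.exp(-0.5 * (yt - x) ** 2) / np.sqrt(2 * np.pi)    # measurement density
        loglik += np.log(w.mean() + 1e-300)
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)                 # resample
    return loglik

# crude likelihood profile over theta for a short synthetic series
y_obs = np.array([0.5, 0.9, 0.4, 1.1, 0.8, 0.2, 0.6])
for th in (0.3, 0.5, 0.7, 0.9):
    print(th, round(particle_loglik(y_obs, th), 2))
```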

485 citations


Journal ArticleDOI
TL;DR: It is shown that the addressed stochastic Cohen-Grossberg neural networks with mixed delays are globally asymptotically stable in the mean square if two LMIs are feasible, where the feasibility of LMIs can be readily checked by the Matlab LMI toolbox.
Abstract: In this letter, the global asymptotic stability analysis problem is considered for a class of stochastic Cohen-Grossberg neural networks with mixed time delays, which consist of both discrete and distributed time delays. Based on a Lyapunov-Krasovskii functional and the stochastic stability analysis theory, a linear matrix inequality (LMI) approach is developed to derive several sufficient conditions guaranteeing the global asymptotic convergence of the equilibrium point in the mean square. It is shown that the addressed stochastic Cohen-Grossberg neural networks with mixed delays are globally asymptotically stable in the mean square if two LMIs are feasible, where the feasibility of LMIs can be readily checked by the Matlab LMI toolbox. It is also pointed out that the main results comprise some existing results as special cases. A numerical example is given to demonstrate the usefulness of the proposed global stability criteria.
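The "stable if an LMI is feasible" recipe can be reproduced with any semidefinite-programming tool. Below is a hedged sketch using cvxpy (standing in for the Matlab LMI toolbox) for a plain Lyapunov LMI, not the delay-dependent LMIs derived in the letter; the system matrix is made up.

```python
import numpy as np
import cvxpy as cp

# Generic Lyapunov-type LMI feasibility check, in the spirit of the letter's
# approach: stability is certified if a positive definite P exists with
# A^T P + P A negative definite. The actual conditions in the letter involve
# delay terms and additional matrix variables.
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])            # hypothetical system matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```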

433 citations


Book
31 Oct 2006
TL;DR: This book discusses the general theory of large deviations (large deviations and exponential tightness, large deviations for stochastic processes), large deviations for Markov processes via semigroup convergence, and nonlinear semigroup convergence using viscosity solutions.
Abstract: Introduction: Introduction.- An overview. The general theory of large deviations: Large deviations and exponential tightness.- Large deviations for stochastic processes. Large deviations for Markov processes and semigroup convergence: Large deviations for Markov processes and nonlinear semigroup convergence.- Large deviations and nonlinear semigroup convergence using viscosity solutions.- Extensions of viscosity solution methods.- The Nisio semigroup and a control representation of the rate function. Examples of large deviations and the comparison principle: The comparison principle.- Nearly deterministic processes in $R^d$.- Random evolutions.- Occupation measures.- Stochastic equations in infinite dimensions. Appendix: Operators and convergence in function spaces.- Variational constants, rate of growth and spectral theory for the semigroup of positive linear operators.- Spectral properties for discrete and continuous Laplacians.- Results from mass transport theory. Bibliography. Index.

433 citations


Journal ArticleDOI
TL;DR: A stochastic discrete-time susceptible-exposed-infectious-recovered (SEIR) model for infectious diseases is developed with the aim of estimating parameters from daily incidence and mortality time series for an outbreak of Ebola in the Democratic Republic of Congo in 1995.
Abstract: A stochastic discrete-time susceptible-exposed-infectious-recovered (SEIR) model for infectious diseases is developed with the aim of estimating parameters from daily incidence and mortality time series for an outbreak of Ebola in the Democratic Republic of Congo in 1995. The incidence time series exhibit many low integers as well as zero counts requiring an intrinsically stochastic modeling approach. In order to capture the stochastic nature of the transitions between the compartmental populations in such a model we specify appropriate conditional binomial distributions. In addition, a relatively simple temporally varying transmission rate function is introduced that allows for the effect of control interventions. We develop Markov chain Monte Carlo methods for inference that are used to explore the posterior distribution of the parameters. The algorithm is further extended to integrate numerically over state variables of the model, which are unobserved. This provides a realistic stochastic model that can be used by epidemiologists to study the dynamics of the disease and the effect of control interventions.
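A minimal sketch of the chain-binomial SEIR mechanics described above (constant transmission rate, no MCMC step; all parameter values are illustrative, not fitted to the Ebola data):

```python
import numpy as np

rng = np.random.default_rng(1)

def seir_chain_binomial(beta, sigma, gamma, N, I0, days):
    """Discrete-time stochastic SEIR with conditional binomial transitions,
    in the spirit of the model described above. The paper additionally uses
    a time-varying transmission rate and MCMC over the parameters.
    """
    S, E, I, R = N - I0, 0, I0, 0
    incidence = []
    for _ in range(days):
        p_inf = 1.0 - np.exp(-beta * I / N)    # prob. a susceptible is exposed today
        p_inc = 1.0 - np.exp(-sigma)           # prob. an exposed becomes infectious
        p_rec = 1.0 - np.exp(-gamma)           # prob. an infectious recovers/dies
        new_E = rng.binomial(S, p_inf)
        new_I = rng.binomial(E, p_inc)
        new_R = rng.binomial(I, p_rec)
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        incidence.append(new_I)
    return incidence

print(seir_chain_binomial(beta=0.3, sigma=1/6, gamma=1/7, N=5000, I0=3, days=30))
```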

414 citations


Book
01 Jan 2006
TL;DR: The authors presented the main concepts and results in measure theory and probability theory in a simple and easy-to-understand way, and further provided heuristic explanations behind the theory to help students see the big picture.
Abstract: This is a graduate level textbook on measure theory and probability theory. It presents the main concepts and results in measure theory and probability theory in a simple and easy-to-understand way. It further provides heuristic explanations behind the theory to help students see the big picture. The book can be used as a text for a two semester sequence of courses in measure theory and probability theory, with an option to include supplemental material on stochastic processes and special topics. Prerequisites are kept to the minimal level and the book is intended primarily for first year Ph.D. students in mathematics and statistics.

396 citations


Book
01 Jan 2006
TL;DR: The class of random-cluster models is a unification of a variety of stochastic processes of significance for probability and statistical physics, including percolation, Ising, and Potts models; in addition, their study has impact on the theory of certain random combinatorial structures and of electrical networks as mentioned in this paper.
Abstract: The class of random-cluster models is a unification of a variety of stochastic processes of significance for probability and statistical physics, including percolation, Ising, and Potts models; in addition, their study has impact on the theory of certain random combinatorial structures, and of electrical networks. Much (but not all) of the physical theory of Ising/Potts models is best implemented in the context of the random-cluster representation. This systematic summary of random-cluster models includes accounts of the fundamental methods and inequalities, the uniqueness and specification of infinite-volume measures, the existence and nature of the phase transition, and the structure of the subcritical and supercritical phases. The theory for two-dimensional lattices is better developed than for three and more dimensions. There is a rich collection of open problems, including some of substantial significance for the general area of disordered systems, and these are highlighted when encountered. Amongst the major open questions, there is the problem of ascertaining the exact nature of the phase transition for general values of the cluster-weighting factor q, and the problem of proving that the critical random-cluster model in two dimensions, with 1 ≤ q ≤ 4, converges when re-scaled to a stochastic Loewner evolution (SLE). Overall the emphasis is upon the random-cluster model for its own sake, rather than upon its applications to Ising and Potts systems.
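As a concrete illustration of the random-cluster measure $\pi(\omega) \propto p^{o(\omega)}(1-p)^{c(\omega)}q^{k(\omega)}$, the sketch below implements single-edge heat-bath updates: an edge whose endpoints are already joined by other open edges is open with probability p, otherwise with probability p/(p + q(1-p)). This is only a toy dynamics on a hand-picked graph, not a method taken from the book; serious simulations use cluster algorithms or dynamic connectivity structures.

```python
import random

def heat_bath_sweep(edges, state, p, q):
    """One heat-bath sweep for the random-cluster measure on a finite graph.
    `state` maps each edge to True (open) or False (closed)."""
    def connected(u, v, skip):
        # BFS over currently open edges, excluding the edge being updated
        frontier, seen = [u], {u}
        while frontier:
            x = frontier.pop()
            if x == v:
                return True
            for edge in edges:
                if edge is skip or not state[edge]:
                    continue
                a, b = edge
                if a == x and b not in seen:
                    seen.add(b); frontier.append(b)
                elif b == x and a not in seen:
                    seen.add(a); frontier.append(a)
        return False

    for e in edges:
        u, v = e
        # conditional probability that e is open, given all other edges
        prob = p if connected(u, v, skip=e) else p / (p + q * (1 - p))
        state[e] = random.random() < prob
    return state

# tiny example: a square with one diagonal, q = 2 (Ising-like weighting)
edges = [(0, 1), (1, 3), (3, 2), (2, 0), (0, 3)]
state = {e: False for e in edges}
for _ in range(100):
    heat_bath_sweep(edges, state, p=0.6, q=2.0)
print(state)
```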

396 citations


Journal ArticleDOI
TL;DR: A new generalized correlation measure is developed that includes the information of both the distribution and that of the time structure of a stochastic process.
Abstract: With an abundance of tools based on kernel methods and information theoretic learning, a void still exists in incorporating both the time structure and the statistical distribution of the time series in the same functional measure. In this paper, a new generalized correlation measure is developed that includes the information of both the distribution and that of the time structure of a stochastic process. It is shown how this measure can be interpreted from both a kernel method and an information theoretic learning point of view, demonstrating some relevant properties. To underscore the effectiveness of the new measure, a simple blind equalization problem is considered using a coded signal.
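A minimal sketch of the kind of measure discussed above: the sample (auto)correntropy V(τ) = E[κ_σ(x_t − x_{t−τ})] with a Gaussian kernel, which mixes distributional and temporal information. The kernel width and test signals are illustrative assumptions.

```python
import numpy as np

def correntropy(x, max_lag, sigma=1.0):
    """Sample estimate of the generalized correlation (autocorrentropy)
    V(tau) = E[k_sigma(x_t - x_{t-tau})] with a Gaussian kernel k_sigma."""
    x = np.asarray(x, dtype=float)
    out = []
    for tau in range(1, max_lag + 1):
        d = x[tau:] - x[:-tau]
        out.append(np.mean(np.exp(-d ** 2 / (2 * sigma ** 2))
                           / (np.sqrt(2 * np.pi) * sigma)))
    return np.array(out)

# white noise versus a slowly varying signal: the latter has larger
# correntropy at small lags because the lagged differences stay small
rng = np.random.default_rng(0)
print(correntropy(rng.normal(size=1000), max_lag=5))
print(correntropy(np.sin(0.05 * np.arange(1000)), max_lag=5))
```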

395 citations


Journal ArticleDOI
TL;DR: By analyzing simultaneous recordings of global and neuronal activities, this work confirms the 1/f scaling of global variables for selected frequency bands, but shows that neuronal activity is not consistent with critical states.
Abstract: Many complex systems display self-organized critical states characterized by $1/f$ frequency scaling of power spectra. Global variables, such as the electroencephalogram, scale as $1/f$, which could be the sign of self-organized critical states in neuronal activity. By analyzing simultaneous recordings of global and neuronal activities, we confirm the $1/f$ scaling of global variables for selected frequency bands, but show that neuronal activity is not consistent with critical states. We propose a model of $1/f$ scaling which does not rely on critical states, and which is testable experimentally.
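A hedged sketch of the standard analysis step used in this kind of study: estimating the spectral exponent α in S(f) ∝ 1/f^α over a chosen frequency band. The band limits and the test signal are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

def spectral_exponent(x, fs, fmin, fmax):
    """Fit S(f) ~ 1/f^alpha: alpha is the negative slope of log-power
    versus log-frequency over the band [fmin, fmax]."""
    f, Pxx = welch(x, fs=fs, nperseg=4096)
    band = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(np.log10(f[band]), np.log10(Pxx[band]), 1)
    return -slope

# sanity check: white noise has a flat spectrum, so the fitted alpha is near 0
rng = np.random.default_rng(0)
x = rng.normal(size=2 ** 16)
print(spectral_exponent(x, fs=1000.0, fmin=1.0, fmax=100.0))
```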

Proceedings ArticleDOI
25 Jun 2006
TL;DR: Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, is applied to the training of Conditional Random Fields (CRFs) and the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS.
Abstract: We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large data sets, the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS, the leading method reported to date. We report results for both exact and inexact inference techniques.
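For reference, the SMD update with per-parameter gain adaptation can be sketched as follows on a toy quadratic objective. The learning rates and decay are illustrative, and the exact Hessian-vector product stands in for the cheap approximations one would use when training CRFs; this is not the authors' CRF code.

```python
import numpy as np

def smd_minimize(grad, hess_vec, theta0, steps=300, eta0=0.02, mu=0.1, lam=0.99):
    """Stochastic Meta-Descent sketch (per-parameter gain adaptation).
    grad(theta)        -> gradient of the objective
    hess_vec(theta, v) -> Hessian-vector product (exact here; in practice
                          obtained cheaply, e.g. by finite differences)
    The CRF training in the paper plugs a stochastic gradient into this
    same update rule; this toy uses a deterministic quadratic for clarity.
    """
    theta = np.array(theta0, dtype=float)
    eta = np.full_like(theta, eta0)       # per-parameter gains
    v = np.zeros_like(theta)              # tracks d theta / d log(eta) online
    for _ in range(steps):
        g = grad(theta)
        eta *= np.maximum(0.5, 1.0 - mu * g * v)   # meta-descent on the gains
        theta -= eta * g                           # ordinary gradient step
        v = lam * v - eta * (g + lam * hess_vec(theta, v))
    return theta

# toy quadratic 0.5 * theta^T A theta with badly scaled curvature;
# the gains adapt so that both coordinates converge near the origin
A = np.diag([1.0, 50.0])
theta = smd_minimize(lambda t: A @ t, lambda t, v: A @ v, theta0=[3.0, 3.0])
print(theta)
```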

Proceedings ArticleDOI
14 May 2006
TL;DR: A new technique for efficiently acquiring and reconstructing signals based on convolution with a fixed FIR filter having random taps, which is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals.
Abstract: We propose and study a new technique for efficiently acquiring and reconstructing signals based on convolution with a fixed FIR filter having random taps. The method is designed for sparse and compressible signals, i.e., ones that are well approximated by a short linear combination of vectors from an orthonormal basis. Signal reconstruction involves a non-linear Orthogonal Matching Pursuit algorithm that we implement efficiently by exploiting the nonadaptive, time-invariant structure of the measurement process. While simpler and more efficient than other random acquisition techniques like Compressed Sensing, random filtering is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals. Extensive numerical experiments demonstrate its efficacy for acquiring and reconstructing signals sparse in the time, frequency, and wavelet domains, as well as piecewise smooth signals and Poisson processes.
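A small sketch of the acquisition/reconstruction pipeline: circular convolution with a random FIR filter, downsampling, and recovery with Orthogonal Matching Pursuit (here via scikit-learn, with an explicit measurement matrix rather than the fast implicit operator the paper exploits). Signal length, sparsity, and filter length are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k, taps = 256, 64, 8, 16        # signal length, measurements, sparsity, filter taps

# sparse signal in the time domain
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# measurement operator: circular convolution with a random FIR filter,
# followed by downsampling (built as an explicit matrix for clarity)
h = rng.normal(size=taps)
C = np.zeros((n, n))
for i in range(n):
    for j in range(taps):
        C[i, (i - j) % n] = h[j]
Phi = C[:: n // m]                    # keep every (n/m)-th filtered sample
y = Phi @ x

# nonlinear reconstruction by Orthogonal Matching Pursuit
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```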

Journal ArticleDOI
TL;DR: The usual continuum product based on the Kolmogorov construction together with the Lebesgue measure, as well as the usual finitely additive measure-theoretic framework, is shown to be unsuitable for modeling individual risks.

Journal ArticleDOI
Manuela Royer
TL;DR: So far, the notion of nonlinear expectations has been studied only when the underlying filtration is generated by a Brownian motion; in this article, the author studies real-valued backward stochastic differential equations with jumps, together with their applications to nonlinear expectations.

Journal ArticleDOI
TL;DR: It is shown that transport in the presence of entropic barriers exhibits peculiar characteristics which make it distinctly different from that occurring through energy barriers, and this interesting property can be utilized to effectively control transport through quasi-one-dimensional structures in which irregularities or tortuosity of the boundaries cause entropic effects.
Abstract: We show that transport in the presence of entropic barriers exhibits peculiar characteristics which make it distinctly different from that occurring through energy barriers. The constrained dynamics yields a scaling regime for the particle current and the diffusion coefficient in terms of the ratio between the work done on the particles and the available thermal energy. This interesting property, genuine to the entropic nature of the barriers, can be utilized to effectively control transport through quasi-one-dimensional structures in which irregularities or tortuosity of the boundaries cause entropic effects. The accuracy of the kinetic description has been corroborated by simulations. Applications to different dynamic situations involving entropic barriers are outlined.

Journal ArticleDOI
TL;DR: A framework for optimal routing policy problems in stochastic time-dependent networks is established, which the author believes is the first in the literature.
Abstract: We study optimal routing policy problems in stochastic time-dependent networks, where link travel times are modeled as random variables with time-dependent distributions. These are fundamental network optimization problems for a wide variety of applications, such as transportation and telecommunication systems. The routing problems studied can be viewed as counterparts of shortest path problems in deterministic networks. A routing policy is defined as a decision rule that specifies what node to take next at each decision node based on realized link travel times and the current time. We establish a framework for optimal routing policy problems in stochastic time-dependent networks, which we believe is the first in the literature. We give a comprehensive taxonomy and an in-depth discussion of variants of the problem. We then study in detail one variant that is particularly pertinent in traffic networks, where both link-wise and time-wise stochastic dependencies of link travel times are considered and online information is represented. We give an exact algorithm for this variant, analyze its complexity and point out the importance of finding good approximations to the exact solution. We then overview several approximations, and present a summary of a theoretical and computational analysis of their effectiveness against the exact algorithm.
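To fix ideas, the sketch below computes an optimal routing policy by backward dynamic programming for a simplified variant with independent, time-dependent link travel time distributions; the paper's main variant additionally models link-wise and time-wise dependence and online information. The toy network, distributions, and horizon are made up.

```python
def optimal_routing_policy(succ, travel_time_dist, dest, horizon):
    """Dynamic-programming sketch for a simplified routing-policy variant.

    succ[i]                   : list of successor nodes of i
    travel_time_dist(i, j, t) : list of (probability, travel_time) pairs for
                                leaving i toward j at time t
    A policy maps (node, departure time) to the next node that minimizes
    expected time-to-destination within the horizon.
    """
    INF = float("inf")
    V = {(dest, t): 0.0 for t in range(horizon + 1)}   # expected remaining time
    policy = {}
    for t in range(horizon - 1, -1, -1):
        for i in succ:
            if i == dest:
                continue
            best, best_j = INF, None
            for j in succ[i]:
                exp_cost = 0.0
                for prob, tau in travel_time_dist(i, j, t):
                    arrival = min(t + tau, horizon)
                    exp_cost += prob * (tau + V.get((j, arrival), INF))
                if exp_cost < best:
                    best, best_j = exp_cost, j
            V[(i, t)] = best
            policy[(i, t)] = best_j
    return policy, V

# tiny example: at node 0, the link to node 1 is fast but unreliable after time 2
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
def dist(i, j, t):
    if (i, j) == (0, 1):
        return [(0.5, 1), (0.5, 6)] if t >= 2 else [(1.0, 1)]
    return [(1.0, 2)]

policy, V = optimal_routing_policy(succ, dist, dest=3, horizon=12)
print(policy[(0, 0)], policy[(0, 3)])   # the best next node depends on the departure time
```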

Journal ArticleDOI
TL;DR: First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains, and have been investigated as models for survival data.
Abstract: Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails or the individual experiences a clinical endpoint when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, threshold state and time scale may depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research.
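For the canonical special case, the first hitting time of a threshold by a Wiener process with drift follows an inverse Gaussian law. The sketch below evaluates that density and notes in the comments how threshold regression would tie its parameters to covariates; the numbers and the covariate link are illustrative assumptions.

```python
import numpy as np

def wiener_fht_pdf(t, x0, mu, sigma=1.0):
    """Density of the first time a Wiener process with drift mu and
    diffusion sigma, started at x0 > 0, hits the threshold 0 -- the inverse
    Gaussian law underlying many first-hitting-time survival models
    (a negative drift gives a distribution with total mass one)."""
    t = np.asarray(t, dtype=float)
    return (x0 / np.sqrt(2 * np.pi * sigma ** 2 * t ** 3)
            * np.exp(-(x0 + mu * t) ** 2 / (2 * sigma ** 2 * t)))

# In threshold regression, the starting level and drift are linked to
# covariates z, e.g. log(x0) = a0 + a1*z and mu = b0 + b1*z, and the
# coefficients are estimated by maximum likelihood using this density
# for the observed failure or event times.
print(wiener_fht_pdf(np.array([0.5, 1.0, 2.0]), x0=2.0, mu=-1.0))
```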

Journal ArticleDOI
Chai Wah Wu
TL;DR: Rather than using Lyapunov type methods, results from the theory of inhomogeneous Markov chains are used in the authors' analysis and it is shown that they are useful for deterministic consensus problems and more general random graph processes.
Abstract: Recently, methods from stochastic control have been used to study the synchronization properties of a nonautonomous discrete-time linear system x(k+1)=G(k)x(k) where the matrices G(k) are derived from a random graph process. The purpose of this note is to extend this analysis to directed graphs and more general random graph processes. Rather than using Lyapunov type methods, we use results from the theory of inhomogeneous Markov chains in our analysis. These results have been used successfully in deterministic consensus problems and we show that they are useful for these problems as well. Sufficient conditions are derived that depend on the types of graphs that have nonvanishing probabilities. For instance, if a scrambling graph occurs with nonzero probability, then the system synchronizes.
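A minimal simulation of the setting x(k+1) = G(k)x(k) with row-stochastic matrices drawn from a random directed graph process: when sufficiently connected graphs occur with positive probability, the spread of the state contracts toward consensus. The graph model and parameters are assumptions for illustration, not the note's conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph_consensus(n=6, steps=200, p_edge=0.3):
    """Iterate x(k+1) = G(k) x(k), where each G(k) is the row-stochastic
    averaging matrix of an independently drawn directed random graph
    (every node averages its own value with its randomly chosen
    in-neighbours). Returns the final spread max(x) - min(x)."""
    x = rng.normal(size=n)
    for _ in range(steps):
        A = (rng.random((n, n)) < p_edge).astype(float)
        np.fill_diagonal(A, 1.0)                 # each node keeps its own value
        G = A / A.sum(axis=1, keepdims=True)     # row-stochastic weights
        x = G @ x
    return x.max() - x.min()                     # near zero means synchronized

print(random_graph_consensus())
```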

Journal ArticleDOI
TL;DR: The key technical ingredient of the approach is a deep result on stochastic processes indicating that samples taken from consecutive steps of a random walk on an expander graph can achieve statistical properties similar to independent sampling.

Journal ArticleDOI
TL;DR: A hybrid method is presented here, which jointly propagates probabilistic and possibilistic uncertainty and produces results in the form of a random fuzzy interval.
Abstract: Random variability and imprecision are two distinct facets of the uncertainty affecting parameters that influence the assessment of risk. While random variability can be represented by probability distribution functions, imprecision (or partial ignorance) is better accounted for by possibility distributions (or families of probability distributions). Because practical situations of risk computation often involve both types of uncertainty, methods are needed to combine these two modes of uncertainty representation in the propagation step. A hybrid method is presented here, which jointly propagates probabilistic and possibilistic uncertainty. It produces results in the form of a random fuzzy interval. This paper focuses on how to properly summarize this kind of information and how to address questions pertaining to the potential violation of some tolerance threshold. While exploitation procedures proposed previously entertain a confusion between variability and imprecision, thus yielding overly conservative results, a new approach is proposed, based on the theory of evidence, and is illustrated using synthetic examples.
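A hedged sketch of the joint propagation step: Monte Carlo sampling of the probabilistic input combined with alpha-cut interval propagation of a triangular possibility distribution, producing one fuzzy interval per draw, i.e. a random fuzzy interval. The toy risk model and numbers are made up, and the evidence-theory post-processing proposed in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_propagation(f, sample_a, fuzzy_b, n_mc=1000, alphas=(0.0, 0.5, 1.0)):
    """Hybrid probabilistic/possibilistic propagation sketch.

    The random variable `a` is Monte-Carlo sampled, while the imprecise
    variable `b`, given as a triangular fuzzy number (lo, mode, hi), is
    propagated by interval computation on its alpha-cuts. Each draw yields
    a fuzzy interval for f(a, b); `f` is assumed monotone in `b` so the
    cut endpoints are enough.
    """
    lo, mode, hi = fuzzy_b
    results = []
    for _ in range(n_mc):
        a = sample_a()
        cuts = {}
        for alpha in alphas:
            b_lo = lo + alpha * (mode - lo)
            b_hi = hi - alpha * (hi - mode)
            cuts[alpha] = (f(a, b_lo), f(a, b_hi))
        results.append(cuts)
    return results

# toy risk model: dose = concentration * intake, concentration lognormal,
# intake only known as "between 1 and 3, most likely 2" (fuzzy)
out = hybrid_propagation(lambda a, b: a * b,
                         sample_a=lambda: rng.lognormal(mean=0.0, sigma=0.5),
                         fuzzy_b=(1.0, 2.0, 3.0))
print(out[0])   # the fuzzy interval produced by the first Monte-Carlo draw
```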

Journal ArticleDOI
TL;DR: This work presents a statistical approach to probabilistic model checking, employing hypothesis testing and discrete-event simulation, and can at least bound the probability of generating an incorrect answer to a verification problem.
Abstract: Probabilistic verification of continuous-time stochastic processes has received increasing attention in the model-checking community in the past 5 years, with a clear focus on developing numerical solution methods for model checking of continuous-time Markov chains. Numerical techniques tend to scale poorly with an increase in the size of the model (the "state space explosion problem"), however, and are feasible only for restricted classes of stochastic discrete-event systems. We present a statistical approach to probabilistic model checking, employing hypothesis testing and discrete-event simulation. Since we rely on statistical hypothesis testing, we cannot guarantee that the verification result is correct, but we can at least bound the probability of generating an incorrect answer to a verification problem.
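The hypothesis test underlying this approach can be sketched with Wald's sequential probability ratio test: simulate trajectories one at a time and stop as soon as the accumulated evidence crosses an acceptance boundary. The thresholds, indifference region, and the stand-in simulator below are illustrative assumptions.

```python
import math
import random

def sprt_verify(simulate, theta, delta=0.01, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test for statistical model checking.

    `simulate()` runs one discrete-event simulation and returns True iff the
    sampled trajectory satisfies the property. Tests H0: p >= theta + delta
    against H1: p <= theta - delta with error probabilities alpha and beta.
    """
    p0, p1 = theta + delta, theta - delta
    a = math.log(beta / (1 - alpha))        # accept-H0 boundary (lower)
    b = math.log((1 - beta) / alpha)        # accept-H1 boundary (upper)
    llr, n = 0.0, 0
    while a < llr < b:
        n += 1
        if simulate():
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
    return (llr <= a), n   # True: "property holds with probability >= theta" accepted

# hypothetical system in which the property holds on 35% of trajectories
holds, samples = sprt_verify(lambda: random.random() < 0.35, theta=0.3)
print(holds, samples)
```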

Journal ArticleDOI
TL;DR: In this paper, the global asymptotic stability analysis problem for a class of uncertain stochastic Hopfield neural networks with discrete and distributed time-delays was studied.

Journal ArticleDOI
TL;DR: A numerical method based on Wiener chaos expansion is proposed and applied to solve the stochastic Burgers and Navier-Stokes equations driven by Brownian motion, and it is demonstrated that for short-time solutions the numerical method is more efficient and accurate than approaches based on Monte Carlo simulations.

Journal ArticleDOI
TL;DR: In this paper, a generalized Bouc-Wen model with sufficient flexibility in shape control is proposed to describe highly asymmetric hysteresis loops, and a mathematical relation between the shape-control parameters and the slopes of the hysteretic loops is introduced.
Abstract: Bouc-Wen class models have been widely used to efficiently describe smooth hysteretic behavior in time history and random vibration analyses. This paper proposes a generalized Bouc-Wen model with sufficient flexibility in shape control to describe highly asymmetric hysteresis loops. Also introduced is a mathematical relation between the shape-control parameters and the slopes of the hysteresis loops, so that the model parameters can be identified systematically in conjunction with available parameter identification methods. For use in nonlinear random vibration analysis by the equivalent linearization method, closed-form expressions are derived for the coefficients of the equivalent linear system in terms of the second moments of the response quantities. As an example application, the proposed model is successfully fitted to the highly asymmetric hysteresis loops obtained in laboratory experiments for flexible connectors used in electrical substations. The model is then employed to investigate the effect of dynamic interaction between interconnected electrical substation equipment by nonlinear time-history and random vibration analyses.
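For context, the classical (symmetric) Bouc-Wen oscillator that the paper generalizes can be simulated as below; the parameter values and forcing are illustrative, not those identified from the substation-connector experiments, and the asymmetric shape-control terms of the proposed model are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bouc_wen_response(m=1.0, c=0.1, k=1.0, alpha=0.5,
                      A=1.0, beta=0.5, gamma=0.5, n=1.0,
                      forcing=lambda t: np.sin(1.2 * t), t_end=60.0):
    """Simulate a single-degree-of-freedom oscillator with classical
    Bouc-Wen hysteresis:
        m u'' + c u' + alpha k u + (1 - alpha) k z = F(t)
        z'   = A u' - beta |u'| |z|^{n-1} z - gamma u' |z|^n
    """
    def rhs(t, y):
        u, v, z = y
        du = v
        dv = (forcing(t) - c * v - alpha * k * u - (1 - alpha) * k * z) / m
        dz = A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
        return [du, dv, dz]

    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0, 0.0], max_step=0.01)
    return sol.t, sol.y     # displacement u = sol.y[0], hysteretic variable z = sol.y[2]

t, y = bouc_wen_response()
print(y[0, -1], y[2, -1])   # plotting z against u traces the hysteresis loop
```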

Journal ArticleDOI
TL;DR: A formulation is presented for the impact of data limitations, associated with the calibration of model parameters, on overall predictive accuracy, and a new method is obtained for the characterization of stochastic processes from corresponding experimental observations.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: This work presents a method for secrecy extraction from jointly Gaussian random sources that has applications in enhancing security for wireless communications and is closely related to some well known lossy source coding problems.
Abstract: We present a method for secrecy extraction from jointly Gaussian random sources. The approach is motivated by and has applications in enhancing security for wireless communications. The problem is also found to be closely related to some well known lossy source coding problems.

Journal ArticleDOI
TL;DR: In this paper, the authors considered a class of uncertain stochastic neural networks with time delays and parameter uncertainties, and established easily verifiable conditions under which the delayed neural network is robustly asymptotically stable in the mean square for all admissible parameter uncertainties.
Abstract: In this paper, the asymptotic stability analysis problem is considered for a class of uncertain stochastic neural networks with time delays and parameter uncertainties. The delays are time-invariant, and the uncertainties are norm-bounded and enter into all the network parameters. The aim of this paper is to establish easily verifiable conditions under which the delayed neural network is robustly asymptotically stable in the mean square for all admissible parameter uncertainties. By employing a Lyapunov–Krasovskii functional and conducting the stochastic analysis, a linear matrix inequality (LMI) approach is developed to derive the stability criteria. The proposed criteria can be checked readily by using some standard numerical packages, and no tuning of parameters is required. Examples are provided to demonstrate the effectiveness and applicability of the proposed criteria.

Book
08 May 2006
TL;DR: In this paper, the term structure of interest rates is analyzed using infinite dimensional stochastic analysis and the Malliavin calculus, and generalized models for the term structure of interest rates are presented.
Abstract: The Term Structure of Interest Rates.- Data and Instruments of the Term Structure of Interest Rates.- Term Structure Factor Models.- Infinite Dimensional Stochastic Analysis.- Infinite Dimensional Integration Theory.- Stochastic Analysis in Infinite Dimensions.- The Malliavin Calculus.- Generalized Models for the Term Structure of Interest Rates.- General Models.- Specific Models.

Journal ArticleDOI
TL;DR: In this paper, the authors review the fundamental theory of coupled continuous time random walks and present two applications with tick-by-tick stock and futures data, in which the probability density functions of the limit process solve fractional partial differential equations that yield descriptions of long-term price changes, based on a high-resolution model of individual trades.
Abstract: Continuous time random walks (CTRWs) are used in physics to model anomalous diffusion, by incorporating a random waiting time between particle jumps. In finance, the particle jumps are log-returns and the waiting times measure delay between transactions. These two random variables (log-return and waiting time) are typically not independent. For these coupled CTRW models, we can now compute the limiting stochastic process (just like Brownian motion is the limit of a simple random walk), even in the case of heavy tailed (power-law) price jumps and/or waiting times. The probability density functions for this limit process solve fractional partial differential equations. In some cases, these equations can be explicitly solved to yield descriptions of long-term price changes, based on a high-resolution model of individual trades that includes the statistical dependence between waiting times and the subsequent log-returns. In the heavy tailed case, this involves operator stable space-time random vectors that generalize the familiar stable models. In this paper, we will review the fundamental theory and present two applications with tick-by-tick stock and futures data.
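A small simulation of a coupled CTRW, with heavy-tailed waiting times and log-returns whose scale depends on the preceding wait, illustrates the kind of dependence the limit theory above is built for. The tail index and the coupling form are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_ctrw(n_jumps=10000, wait_tail=0.7, coupling=0.5):
    """Simulate a coupled continuous time random walk: Pareto (heavy-tailed)
    waiting times between trades, and Gaussian log-returns whose standard
    deviation grows with the preceding waiting time, making the two
    statistically dependent as in coupled CTRW models."""
    waits = rng.pareto(wait_tail, size=n_jumps) + 1.0            # waiting times >= 1
    jumps = rng.normal(scale=1.0 + coupling * np.log(waits))     # coupled log-returns
    times = np.cumsum(waits)
    log_price = np.cumsum(jumps)
    return times, log_price

t, logp = coupled_ctrw()
print(t[-1], logp[-1])   # total elapsed time and final log-price of the sample path
```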