
Showing papers on "Stochastic process published in 1996"


Journal ArticleDOI
TL;DR: It is shown that nonlinear rescalings of a Gaussian linear stochastic process cannot be accounted for by a simple amplitude adjustment of the surrogates, which can lead to spurious detection of nonlinearity.
Abstract: Current tests for nonlinearity compare a time series to the null hypothesis of a Gaussian linear stochastic process. For this restricted null assumption, random surrogates can be constructed which are constrained by the linear properties of the data. We propose a more general null hypothesis allowing for nonlinear rescalings of a Gaussian linear process. We show that such rescalings cannot be accounted for by a simple amplitude adjustment of the surrogates which leads to spurious detection of nonlinearity. An iterative algorithm is proposed to make appropriate surrogates which have the same autocorrelations as the data and the same probability distribution.
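The iterative scheme the abstract describes alternates between imposing the data's amplitude spectrum and restoring its amplitude distribution. A minimal pure-Python sketch of that idea is below; the naive DFT helpers, the function name `iaaft`, and all parameter choices are illustrative assumptions, not the authors' code.

```python
import cmath
import random

def dft(x):
    # naive O(n^2) discrete Fourier transform, enough for a small demo
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def iaaft(data, n_iter=20, seed=0):
    """Iterative surrogate: same amplitude spectrum (approximately) and
    exactly the same value distribution as the data."""
    rng = random.Random(seed)
    target_amp = [abs(c) for c in dft(data)]
    sorted_vals = sorted(data)
    s = data[:]
    rng.shuffle(s)  # start from a random permutation of the data
    for _ in range(n_iter):
        # step 1: impose the target amplitudes, keep the current phases
        spec = dft(s)
        spec = [a * (c / abs(c)) if abs(c) > 1e-12 else a
                for a, c in zip(target_amp, spec)]
        s = idft(spec)
        # step 2: restore the exact value distribution by rank ordering
        ranks = sorted(range(len(s)), key=lambda i: s[i])
        out = [0.0] * len(s)
        for r, i in enumerate(ranks):
            out[i] = sorted_vals[r]
        s = out
    return s

data = [((t * 37) % 11) / 11.0 for t in range(32)]  # arbitrary test series
surr = iaaft(data)
```

Because the last step of each iteration is the rank remap, the surrogate's value distribution matches the data exactly, while the spectrum match is approximate.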

1,364 citations


Journal ArticleDOI
TL;DR: It is shown that the coevolutionary dynamic can be envisaged as a directed random walk in the community's trait space and a quantitative description of this stochastic process in terms of a master equation is derived.
Abstract: In this paper we develop a dynamical theory of coevolution in ecological communities. The derivation explicitly accounts for the stochastic components of evolutionary change and is based on ecological processes at the level of the individual. We show that the coevolutionary dynamic can be envisaged as a directed random walk in the community's trait space. A quantitative description of this stochastic process in terms of a master equation is derived. By determining the first jump moment of this process we abstract the dynamic of the mean evolutionary path. To first order the resulting equation coincides with a dynamic that has frequently been assumed in evolutionary game theory. Apart from recovering this canonical equation we systematically establish the underlying assumptions. We provide higher order corrections and show that these can give rise to new, unexpected evolutionary effects including shifting evolutionary isoclines and evolutionary slowing down of mean paths as they approach evolutionary equilibria. Extensions of the derivation to more general ecological settings are discussed. In particular we allow for multi-trait coevolution and analyze coevolution under nonequilibrium population dynamics.

1,147 citations


Journal ArticleDOI
Jorma Rissanen
TL;DR: A sharper code length is obtained as the stochastic complexity and the associated universal process are derived for a class of parametric processes by taking into account the Fisher information and removing an inherent redundancy in earlier two-part codes.
Abstract: By taking into account the Fisher information and removing an inherent redundancy in earlier two-part codes, a sharper code length, serving as the stochastic complexity, and the associated universal process are derived for a class of parametric processes. The main condition required is that the maximum-likelihood estimates satisfy the central limit theorem. The same code length is also obtained from the so-called maximum-likelihood code.
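As a sketch of the kind of expansion the abstract describes (reconstructed from standard accounts of stochastic complexity, not quoted from this paper), the refined code length for data $x^n$ under a $k$-parameter model class takes the form:

```latex
-\log \hat p(x^n)
  = -\log p\bigl(x^n \mid \hat\theta(x^n)\bigr)
  + \frac{k}{2}\log\frac{n}{2\pi}
  + \log \int_{\Theta} \sqrt{\det I(\theta)}\,\mathrm{d}\theta
  + o(1)
```

where $\hat\theta(x^n)$ is the maximum-likelihood estimate and $I(\theta)$ the Fisher information matrix; the integral term is the model-class contribution that earlier two-part codes counted redundantly.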

906 citations



Journal ArticleDOI
TL;DR: In this article, the authors developed a model and a solution technique for the problem of generating electric power when demands are not certain, and provided techniques for improving the current methods used in solving the traditional unit commitment problem.
Abstract: The authors develop a model and a solution technique for the problem of generating electric power when demands are not certain. They also provide techniques for improving the current methods used in solving the traditional unit commitment problem. The solution strategy can be run in parallel due to the separable nature of the relaxation used. Numerical results indicate significant savings in the cost of operating power generating systems when the stochastic model is used instead of the deterministic model.
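The claimed savings of a stochastic over a deterministic model can be illustrated with a deliberately tiny commitment toy; the two hypothetical units, the demand scenarios, and the penalty value below are all invented for illustration and have nothing to do with the paper's test systems.

```python
from itertools import product

# hypothetical two-unit system: (startup cost, marginal cost per MW, capacity MW)
UNITS = [(100.0, 10.0, 80.0), (20.0, 30.0, 60.0)]
# hypothetical demand scenarios as (probability, MW)
SCENARIOS = [(0.5, 60.0), (0.5, 100.0)]

def total_cost(on, demand):
    # startup costs of committed units plus cheapest feasible dispatch
    cost = sum(s for (s, _, _), u in zip(UNITS, on) if u)
    remaining = demand
    for (_, marg, cap), u in sorted(zip(UNITS, on), key=lambda z: z[0][1]):
        if u:
            served = min(cap, remaining)
            cost += marg * served
            remaining -= served
    return cost + 1e6 * remaining  # heavy penalty for unserved demand

def expected_cost(on):
    return sum(p * total_cost(on, d) for p, d in SCENARIOS)

# stochastic plan: minimize expected cost over all scenarios
stoch = min(product([0, 1], repeat=2), key=expected_cost)
# deterministic plan: optimize for the mean demand only
mean_d = sum(p * d for p, d in SCENARIOS)
det = min(product([0, 1], repeat=2), key=lambda on: total_cost(on, mean_d))
```

Here the deterministic plan commits only the cheap unit (enough for the mean demand of 80 MW) and is badly exposed in the high-demand scenario, while the stochastic plan commits both units and has a far lower expected cost.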

593 citations


Book
01 Jan 1996
TL;DR: This book provides a wide-angle view of stochastic approximation, linear and non-linear models, controlled Markov chains, estimation and adaptive control, and learning, together with algorithms with good performance and reasonably easy computation.
Abstract: From the Publisher: The recent development of computation and automation has led to quick advances in the theory and practice of recursive methods for stabilization, identification and control of complex stochastic models (guiding a rocket or a plane, organizing multi-access broadcast channels, self-learning of neural networks...). This book provides a wide-angle view of those methods: stochastic approximation, linear and non-linear models, controlled Markov chains, estimation and adaptive control, learning... Mathematicians familiar with the basics of Probability and Statistics will find here a self-contained account of many approaches to those theories, some of them classical, some of them leading up to current and future research. Each chapter can form the core material for lectures on stochastic processes. Engineers having to control complex systems will find here algorithms with good performances and reasonably easy computation.

504 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that, given strong uniqueness, the solution of an Itô stochastic equation with discontinuous coefficients can be constructed on any probability space by using Euler's polygonal approximations.
Abstract: Given strong uniqueness for an Itô stochastic equation with discontinuous coefficients, we prove that its solution can be constructed on “any” probability space by using, for example, Euler's polygonal approximations. Stochastic equations in ℝ^d and in domains in ℝ^d are considered.
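Euler's polygonal approximation is simple to state in code. The sketch below uses a drift that is discontinuous at the origin, in the spirit of the abstract; the specific coefficients and step counts are illustrative assumptions only.

```python
import math
import random

def euler_maruyama(b, sigma, x0, T, n, seed=0):
    """Euler polygonal approximation of dX = b(X) dt + sigma(X) dW."""
    rng = random.Random(seed)
    dt = T / n
    x, path = x0, [x0]
    for _ in range(n):
        # one Euler step: drift increment plus Gaussian noise increment
        x = x + b(x) * dt + sigma(x) * rng.gauss(0.0, math.sqrt(dt))
        path.append(x)
    return path

# discontinuous drift pushing toward the origin
drift = lambda x: -1.0 if x > 0 else 1.0
path = euler_maruyama(drift, lambda x: 0.5, x0=2.0, T=10.0, n=1000)
```

The paper's point is that, under strong uniqueness, such polygonal approximations converge to the solution regardless of the probability space carrying the driving noise.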

466 citations


Journal ArticleDOI
TL;DR: In this paper, a simulation algorithm is proposed to generate ergodic sample functions of a stationary, multivariate stochastic process according to its prescribed cross-spectral density matrix.
Abstract: A simulation algorithm is proposed to generate sample functions of a stationary, multivariate stochastic process according to its prescribed cross-spectral density matrix. If the components of the vector process correspond to different locations in space, then the process is nonhomogeneous in space. The ensemble cross-correlation matrix of the generated sample functions is identical to the corresponding target. The simulation algorithm generates ergodic sample functions in the sense that the temporal cross-correlation matrix of each and every generated sample function is identical to the corresponding target, when the length of the generated sample function is equal to one period (the generated sample functions are periodic). The proposed algorithm is based on an extension of the spectral representation method and is very efficient computationally since it takes advantage of the fast Fourier transform technique. The generated sample functions are Gaussian in the limit as the number of terms in the frequency discretization of the cross-spectral density matrix approaches infinity. An example involving simulation of turbulent wind velocity fluctuations is presented in order to demonstrate the capabilities and efficiency of the proposed algorithm.
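A univariate, scalar version of the spectral representation method conveys the core construction: a sum of cosines whose amplitudes come from the power spectral density and whose phases are iid random. This is a minimal sketch only, assuming a one-sided PSD; the paper's algorithm handles the multivariate case with a cross-spectral density matrix and FFT acceleration.

```python
import math
import random

def spectral_sim(G, w_max, N, times, seed=0):
    """Spectral representation (univariate sketch): sum of cosines with
    amplitudes sqrt(2 G(w_n) dw) and iid uniform random phases."""
    rng = random.Random(seed)
    dw = w_max / N
    comps = [(math.sqrt(2.0 * G((n + 0.5) * dw) * dw),  # amplitude
              (n + 0.5) * dw,                           # frequency
              rng.uniform(0.0, 2.0 * math.pi))          # random phase
             for n in range(N)]
    return [sum(a * math.cos(w * t + p) for a, w, p in comps) for t in times]

# hypothetical band-limited spectrum: G(w) = 1 for w < 1, else 0
G = lambda w: 1.0 if w < 1.0 else 0.0
times = [0.01 * i for i in range(4096)]
x = spectral_sim(G, w_max=1.0, N=64, times=times)
```

The target variance is recovered as the sum of squared amplitudes over two, i.e. the discretized integral of the PSD, and the sample functions become Gaussian as N grows.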

446 citations


Journal ArticleDOI
TL;DR: A new scheme for handling random time delays, based on stochastic control theory, is developed and compared favorably with previous schemes; a separation property is shown to hold for the optimal controller.

420 citations


Journal ArticleDOI
TL;DR: A spectral-representation-based simulation algorithm is used in this paper to generate sample functions of a non-stationary, multivariate stochastic process with evolutionary power, according to its prescribed non-stationary cross-spectral density matrix.

370 citations


Journal ArticleDOI
TL;DR: The theory of spatial models over lattices, or random fields, has developed significantly in recent years; this book provides a graduate-level introduction assuming only a basic knowledge of probability and statistics, finite Markov chains, and the spectral theory of second-order processes.
Abstract: The theory of spatial models over lattices, or random fields as they are known, has developed significantly over recent years. This book provides a graduate-level introduction to the subject which assumes only a basic knowledge of probability and statistics, finite Markov chains, and the spectral theory of second-order processes. A particular strength of this book is its emphasis on examples - both to motivate the theory which is being developed, and to demonstrate the applications which range from statistical mechanics to image analysis and from statistics to stochastic algorithms.

Journal ArticleDOI
TL;DR: It is shown that the Hodgkin-Huxley equations can be approximated by a one-dimensional bistable Langevin equation; spontaneous action potentials can arise from channel fluctuations and are analogous to escape of a particle over a potential barrier.

Journal ArticleDOI
TL;DR: A model of the force network in bead packs is presented, with exact results for certain contact angle probability distributions; it reproduces many aspects of the force distribution observed both in experiment and in numerical simulations of sphere packings.
Abstract: We study theoretically the complex network of forces that is responsible for the static structure and properties of granular materials. We present detailed calculations for a model in which the fluctuations in the force distribution arise because of variations in the contact angles and the constraints imposed by the force balance on each bead of the pile. We compare our results for the force distribution function for this model, including exact results for certain contact angle probability distributions, with numerical simulations of force distributions in random sphere packings. This model reproduces many aspects of the force distribution observed both in experiment and in numerical simulations of sphere packings. Our model is closely related to some that have been studied in the context of self-organized criticality. We present evidence that in the force distribution context, "critical" power-law force distributions occur only when a parameter (hidden in other interpretations) is tuned. Our numerical, mean field, and exact results all indicate that for almost all contact distributions the distribution of forces decays exponentially at large forces.
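The flavor of such models can be seen in a scalar layered-lattice sketch: each bead adds its own weight and passes its total load to two neighbors below in random proportions. This is a simplified stand-in written for illustration (uniform splitting fractions, periodic boundaries), not the authors' model with its contact-angle geometry.

```python
import random

def bead_pack(width, depth, seed=0):
    """Scalar force-propagation sketch: each bead splits its load between
    two beads in the layer below with a uniform random fraction."""
    rng = random.Random(seed)
    w = [1.0] * width  # top layer: unit own-weight per bead
    for _ in range(depth):
        nxt = [1.0] * width  # own weight of the next layer's beads
        for i, load in enumerate(w):
            q = rng.random()
            nxt[i] += q * load
            nxt[(i + 1) % width] += (1.0 - q) * load  # periodic boundary
        w = nxt
    return w

forces = bead_pack(width=50, depth=200)
```

Total weight is conserved exactly layer by layer, while the individual bead forces fluctuate strongly; in models of this type the large-force tail of the distribution decays exponentially, as the abstract states for almost all contact distributions.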

Journal ArticleDOI
01 May 1996
TL;DR: A new stochastic decomposition method well suited to large-scale unit commitment problems is presented: random disturbances are modeled as scenario trees, and prices attached to nodes of the scenario trees are updated by the coordination level.
Abstract: This paper presents a new stochastic decomposition method well-suited to deal with large-scale unit commitment problems. In this approach, random disturbances are modeled as scenario trees. Optimization consists in minimizing the average generation cost over this "tree-shaped future". An augmented Lagrangian technique is applied to this problem. At each iteration, nonseparable terms introduced by the augmentation are linearized so as to obtain a decomposition algorithm. This algorithm may be considered as a generalization of price decomposition methods, which are now classical in this field, to the stochastic framework. At each iteration, for each unit, a stochastic dynamic subproblem has to be solved. Prices attached to nodes of the scenario trees are updated by the coordination level. This method has been applied to a daily generation scheduling problem. The use of an augmented Lagrangian technique provides satisfactory convergence properties to the decomposition algorithm. Moreover, numerical simulations show that compared to a classical deterministic optimization with reserve constraints, this new approach achieves substantial savings.

Journal ArticleDOI
TL;DR: In this paper, a theory of space-time rainfall, applicable to fields advecting without deformation of the coordinates, is presented and tested, where spatial rainfall fields are constructed from discrete multiplicative cascades of independent and identically distributed (iid) random variables called generators.
Abstract: Following a brief review of relevant theoretical and empirical spatial results, a theory of space-time rainfall, applicable to fields advecting without deformation of the coordinates, is presented and tested. In this theory, spatial rainfall fields are constructed from discrete multiplicative cascades of independent and identically distributed (iid) random variables called generators. An extension to space-time assumes that these generators are iid stochastic processes indexed by time. This construction preserves the spatial structure of the cascades, while enabling it to evolve in response to a nonstationary large-scale forcing, which is specified externally. The construction causes the time and space dimensions to have fundamentally different stochastic structures. The time dimension of the process has an evolutionary behavior that distinguishes between past and future, while the spatial dimensions have an isotropic stochastic structure. This anisotropy between time and space leads to the prediction of the breakdown of G. I. Taylor's hypothesis of fluid turbulence after a short time, as is observed empirically. General, nonparametric, predictions of the theory regarding the spatial scaling properties of two-point temporal cross moments are developed and applied to a tracked rainfall field in a case study. These include the prediction of the empirically observed increase of correlation times as resolution decreases and the scaling of temporal cross moments, a new finding suggested by this theory.
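The spatial building block of this theory, a discrete multiplicative cascade, is compact enough to sketch. The generator below (a two-point mean-one variable) is an assumption chosen for illustration; the paper's generators are iid stochastic processes indexed by time.

```python
import random

def cascade(levels, gen, seed=0):
    """Discrete multiplicative cascade on [0, 1]: each dyadic subinterval
    gets its parent's mass times an iid generator, halved, so a mean-one
    generator conserves mass on average."""
    rng = random.Random(seed)
    masses = [1.0]
    for _ in range(levels):
        masses = [0.5 * m * gen(rng) for m in masses for _ in (0, 1)]
    return masses

# hypothetical two-point generator with mean one
field = cascade(8, lambda rng: rng.choice([0.5, 1.5]))
```

With the degenerate generator W = 1 the cascade is exactly mass-conserving at every level, which is a convenient sanity check on the construction.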

Journal ArticleDOI
TL;DR: In this article, it was shown that multiplicative random processes in (not necessarily equilibrium or steady state) stochastic systems with many degrees of freedom lead to Boltzmann distributions when the dynamics is expressed in terms of the logarithm of the elementary variables.
Abstract: Multiplicative random processes in (not necessarily equilibrium or steady state) stochastic systems with many degrees of freedom lead to Boltzmann distributions when the dynamics is expressed in terms of the logarithm of the elementary variables. In terms of the original variables this gives a power-law distribution. This mechanism implies certain relations between the constraints of the system, the power of the distribution and the dispersion law of the fluctuations. These predictions are validated by Monte Carlo simulations and experimental data. We speculate that stochastic multiplicative dynamics might be the natural origin for the emergence of criticality and scale hierarchies without fine-tuning.
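The mechanism is easy to demonstrate numerically: a multiplicative random walk with a lower bound produces a power-law tail rather than the lognormal spread of an unconstrained multiplicative process. The drift, step size, and floor below are illustrative assumptions, not parameters from the paper.

```python
import math
import random

def bounded_multiplicative(T, seed=0, floor=1.0):
    """x_{t+1} = max(lambda_t * x_t, floor), with E[log lambda] < 0, so the
    process keeps returning to the floor and its tail becomes a power law."""
    rng = random.Random(seed)
    x, xs = 1.0, []
    for _ in range(T):
        lam = math.exp(rng.gauss(-0.05, 0.3))  # contracting on average
        x = max(lam * x, floor)                # reflecting lower bound
        xs.append(x)
    return xs

xs = bounded_multiplicative(100_000)
```

In log coordinates this is a random walk with negative drift reflected at zero, whose stationary distribution is exponential, i.e. a Boltzmann distribution; transforming back to the original variable gives the power law described in the abstract.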

Posted Content
TL;DR: The authors developed asymptotic distribution theory for generalized method of moments (GMM) estimators and test statistics when some of the parameters are well identified, but others are poorly identified because of weak instruments.
Abstract: This paper develops asymptotic distribution theory for generalized method of moments (GMM) estimators and test statistics when some of the parameters are well identified, but others are poorly identified because of weak instruments. The asymptotic theory entails applying empirical process theory to obtain a limiting representation of the (concentrated) objective function as a stochastic process. The general results are specialized to two leading cases, linear instrumental variables regression and GMM estimation of Euler equations obtained from the consumption-based capital asset pricing model with power utility. Numerical results of the latter model confirm that finite sample distributions can deviate substantially from normality, and indicate that these deviations are captured by the weak instruments asymptotic approximations.

Journal ArticleDOI
TL;DR: In this paper, the continuous-time moving average fractional process (SIFT) model is proposed to reconcile two competing types of long memory models: fractional integration of ARMA processes and fractional Brownian motion.

Journal ArticleDOI
01 Sep 1996-Nature
TL;DR: A computational procedure is presented, based on a comparison of the prediction power of linear and nonlinear models of the Volterra–Wiener form, which is capable of robust and highly sensitive statistical detection of deterministic dynamics, including chaotic dynamics, in experimental time series.
Abstract: The accurate identification of deterministic dynamics in an experimentally obtained time series [1-5] can lead to new insights regarding underlying physical processes, or enable prediction, at least on short timescales. But deterministic chaos arising from a nonlinear dynamical system can easily be mistaken for random noise [6-8]. Available methods to distinguish deterministic chaos from noise can be quite effective, but their performance depends on the availability of long data sets, and is severely degraded by measurement noise. Moreover, such methods are often incapable of detecting chaos in the presence of strong periodicity, which tends to hide underlying fractal structures [9]. Here we present a computational procedure, based on a comparison of the prediction power of linear and nonlinear models of the Volterra–Wiener form [10], which is capable of robust and highly sensitive statistical detection of deterministic dynamics, including chaotic dynamics, in experimental time series. This method is superior to other techniques [1-6, 11, 12] when applied to short time series, either continuous or discrete, even when heavily contaminated with noise, or in the presence of strong periodicity.
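The core comparison, linear versus nonlinear one-step prediction, can be sketched on the logistic map, where a quadratic predictor is exact and a linear one is not. The least-squares helper and feature choices below are illustrative assumptions; the paper's nonlinear models are general Volterra–Wiener expansions.

```python
def lstsq(X, y):
    """Least squares via the normal equations and Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yy for r, yy in zip(X, y))] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))  # partial pivot
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

def mse(X, y, b):
    return sum((sum(bi * xi for bi, xi in zip(b, r)) - yy) ** 2
               for r, yy in zip(X, y)) / len(y)

# deterministic chaotic series from the logistic map x -> 4x(1-x)
xs = [0.3]
for _ in range(500):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))

lin = [[1.0, x] for x in xs[:-1]]          # linear predictor features
quad = [[1.0, x, x * x] for x in xs[:-1]]  # quadratic (Volterra-like) features
b_lin, b_quad = lstsq(lin, xs[1:]), lstsq(quad, xs[1:])
```

The dramatic gap between the two prediction errors is the statistical signature of determinism that the procedure exploits: for genuinely random data the nonlinear model gains essentially nothing over the linear one.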

Proceedings ArticleDOI
27 Oct 1996
TL;DR: A novel approach to assist the user in exploring appropriate transfer functions for the visualization of volumetric datasets that shields the user from the complex and tedious "trial and error" approach and demonstrates effective and convenient generation of transfer functions.
Abstract: This paper presents a novel approach to assist the user in exploring appropriate transfer functions for the visualization of volumetric datasets. The search for a transfer function is treated as a parameter optimization problem and addressed with stochastic search techniques. Starting from an initial population of (random or pre-defined) transfer functions, the evolution of the stochastic algorithms is controlled by either direct user selection of intermediate images or automatic fitness evaluation using user-specified objective functions. This approach essentially shields the user from the complex and tedious "trial and error" approach, and demonstrates effective and convenient generation of transfer functions.

Book
01 Jan 1996
TL;DR: In this book, the authors prove transfer theorems and limit laws for "growing" random sums and for random sums in the double array scheme, and give necessary and sufficient conditions for the convergence of random sums of independent identically distributed random variables.
Abstract: Examples: Examples Related to Generalized Poisson Laws. A Remarkable Formula of Queueing Theory. Other Examples.
Doubling with Repair: Mathematical Model. A Limit Theorem for the Trouble-Free Performance Duration. The Class of Limit Laws. Some Properties of Limit Distributions. Domains of Geometrical Attraction of the Laws from Class C.
Limit Theorems for "Growing" Random Sums: A Transfer Theorem. Limit Laws. Necessary and Sufficient Conditions for Convergence. Convergence to Distributions from Identifiable Families. Limit Theorems for Risk Processes. Some Models of Financial Mathematics. Rarefied Renewal Processes.
Limit Theorems for Random Sums in the Double Array Scheme: Transfer Theorems. Limit Laws. Converses of the Transfer Theorems. Necessary and Sufficient Conditions for the Convergence of Random Sums of Independent Identically Distributed Random Variables. More on Some Models of Financial Mathematics. Limit Theorems for Supercritical Galton-Watson Processes. Randomly Infinitely Divisible Distributions.
Mathematical Theory of Reliability Growth (A Bayesian Approach): Bayesian Reliability Growth Models. Conditionally Geometrical Models. Conditionally Exponential Models. Renewing Models. Models with Independent Decrements of Volumes of Defective Sets. Order-Statistics-Type (Mosaic) Reliability Growth Models. Generalized Conditionally Exponential Models. Statistical Prediction of Reliability by Renewing Models. Statistical Prediction of Reliability by Order-Statistics-Type Models.
Appendix 1 (Information Properties of Probability Distributions): Mathematical Models of Information and Uncertainty. Limit Theorems of Probability Theory and the Universal Principle of Non-Decrease of Uncertainty.
Appendix 2 (Asymptotic Behavior of Generalized Doubly Stochastic Poisson Processes): General Information on Doubly Stochastic Poisson Processes. A General Limit Theorem for Superpositions of Random Processes. Limit Theorem for Cox Processes. Limit Theorems for Generalized Cox Processes. Convergence Rate Estimates in Limit Theorems for Generalized Cox Processes. Asymptotic Expansions for Generalized Cox Processes. Estimates for the Concentration Functions of Generalized Cox Processes.
Bibliographical Commentary. Index. References.

Book
05 Dec 1996
TL;DR: In this book, the authors present time- and frequency-domain vibration analysis for linear and nonlinear systems with stochastic inputs, covering Gaussian and non-Gaussian processes, linear systems with multiple inputs and outputs, state-space analysis, and stochastic fatigue damage.
Abstract: 1. Introduction. 2. Analysis of Stochastic Processes. 3. Time Domain Linear Vibration Analysis. 4. Frequency Domain Analysis. 5. Gaussian and Non-Gaussian Stochastic Processes. 6. Occurrence Rates and Distributions of Extremes. 7. Linear Systems with Multiple Inputs and Outputs. 8. State-Space Analysis. 9. Introduction to Nonlinear Stochastic Vibration. 10. Stochastic Analysis of Fatigue Damage. Appendix A. Analysis of Random Variables. Appendix B. Gaussian Random Variables. Appendix C. Dirac Delta Functions. Appendix D. Fourier Analysis. References.

BookDOI
01 Jan 1996
TL;DR: The problem of optimal decisions under uncertainty can be seen as getting simulation and optimization effectively combined; Optimization of Stochastic Models: The Interface Between Simulation and Optimization is suitable as a text for a graduate level course on stochastic models, or as a secondary text for a graduate level course in operations research.
Abstract: Stochastic models are everywhere. In manufacturing, queuing models are used for modeling production processes, realistic inventory models are stochastic in nature. Stochastic models are considered in transportation and communication. Marketing models use stochastic descriptions of the demands and buyer's behaviors. In finance, market prices and exchange rates are assumed to be certain stochastic processes, and insurance claims appear at random times with random amounts. To each decision problem, a cost function is associated. Costs may be direct or indirect, like loss of time, quality deterioration, loss in production or dissatisfaction of customers. In decision making under uncertainty, the goal is to minimize the expected costs. However, in practically all realistic models, the calculation of the expected costs is impossible due to the model complexity. Simulation is the only practicable way of getting insight into such models. Thus, the problem of optimal decisions can be seen as getting simulation and optimization effectively combined. The field is quite new and yet the number of publications is enormous. This book does not even try to touch all work done in this area. Instead, many concepts are presented and treated with mathematical rigor and necessary conditions for the correctness of various approaches are stated. Optimization of Stochastic Models: The Interface Between Simulation and Optimization is suitable as a text for a graduate level course on Stochastic Models or as a secondary text for a graduate level course in Operations Research.

Journal ArticleDOI
TL;DR: Kalman filtering methods are derived to track the channel by employing a multichannel autoregressive description of the time-varying taps in a decision-feedback equalization framework using higher-order statistics in order to estimate the model parameters from input/output data.

Journal ArticleDOI
TL;DR: In this paper, a stochastic control problem arising in financial economics is studied to maximize expected logarithmic utility from terminal wealth and/or consumption, where the portfoilo is allowed to anticipate the future, i.e. the terminal values of the prices or of the driving Brownian motion.
Abstract: We study a classical stochastic control problem arising in financial economics: to maximize expected logarithmic utility from terminal wealth and/or consumption. The novel feature of our work is that the portfolio is allowed to anticipate the future, i.e. the terminal values of the prices, or of the driving Brownian motion, are known to the investor, either exactly or with some uncertainty. Results on the finiteness of the value of the control problem are obtained in various setups, using techniques from the so-called enlargement of filtrations. When the value of the problem is finite, we compute it explicitly and exhibit an optimal portfolio in closed form.
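For orientation, the non-anticipating baseline of this problem has a classical closed form: with log utility and geometric Brownian motion prices, the optimal constant risky fraction is (mu - r) / sigma^2. The sketch below verifies that by direct maximization of the exact expected log-wealth formula; the market parameters are hypothetical, and the insider-information setups of the paper are not modeled here.

```python
# hypothetical market parameters: riskless rate, drift, volatility, horizon
r, mu, sigma, T = 0.02, 0.10, 0.40, 1.0

def expected_log_wealth(pi):
    """Exact E[log(W_T / W_0)] for a constant risky fraction pi under GBM:
    (r + pi*(mu - r) - pi^2 * sigma^2 / 2) * T."""
    return (r + pi * (mu - r) - 0.5 * pi * pi * sigma * sigma) * T

# brute-force search over a grid of risky fractions
grid = [i / 1000.0 for i in range(-1000, 2001)]
best = max(grid, key=expected_log_wealth)
merton_fraction = (mu - r) / sigma ** 2  # classical optimum
```

The paper's contribution is what happens beyond this baseline: when the filtration is enlarged with future information, the value can become infinite, and when finite it is computed explicitly.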

Journal ArticleDOI
TL;DR: In this paper, a weakly singular integral equation for the prediction weight function is solved for the fractional Brownian motion Z with Hurst parameter H ∈ (1/2, 1).
Abstract: Integration with respect to the fractional Brownian motion Z with Hurst parameter H ∈ (1/2, 1) is discussed. The predictor E[Z_a | Z_s, s ∈ (-T, 0)] is represented as an integral with respect to Z, by solving a weakly singular integral equation for the prediction weight function.
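A concrete feel for the process itself helps here. The sketch below samples a fractional Brownian motion path exactly from its covariance function via a plain Cholesky factorization; the grid size and Hurst index are illustrative choices, and this is a generic simulation recipe, not the paper's prediction machinery.

```python
import math
import random

def fbm(n, H, seed=0):
    """Sample (Z_{1/n}, ..., Z_1) of fractional Brownian motion with Hurst
    index H, using cov(Z_s, Z_t) = (s^2H + t^2H - |t - s|^2H) / 2."""
    t = [(i + 1) / n for i in range(n)]
    cov = [[0.5 * (a ** (2 * H) + b ** (2 * H) - abs(a - b) ** (2 * H))
            for b in t] for a in t]
    # plain Cholesky factorization: cov = L L^T
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(cov[i][i] - s) if i == j else (cov[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # correlate the iid Gaussians with L
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

path = fbm(64, H=0.75)
```

For H in (1/2, 1) the increments are positively correlated, which is exactly why the predictor in the abstract can extract information from the observed past (-T, 0).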

Journal ArticleDOI
TL;DR: The variability in extracellular records of action potentials is studied using microwire electrode pairs recording from primary somatosensory cortex of awake, behaving rats; the results are used to construct a filter that optimizes the detection of differences between single-unit waveforms.
Abstract: 1. Here we study the variability in extracellular records of action potentials. Our work is motivated, in part, by the need to construct effective algorithms to classify single-unit waveforms from multiunit recordings. 2. We used microwire electrode pairs (stereotrodes) to record from primary somatosensory cortex of awake, behaving rat. Our data consist of continuous records of extracellular activity and segmented records of extracellular spikes. Spectral and principal component techniques are used to analyze mean single-unit waveforms, the variability between different instances of a single-unit waveform, and the underlying background activity. 3. The spectrum of the variability between different instances of a single-unit waveform is not white, and falls off above 1 kHz with a frequency dependence of roughly f^-2. This spectrum is different from that of the mean spike waveforms, which falls off roughly as f^-4, but is essentially identical with the spectrum of background activity. The spatial coherence of the variability on the 10-micron scale also falls off at high frequencies. 4. The variability between different instances of a single-unit waveform is dominated by a relatively small number of principal components. As a consequence, there is a large anisotropy in the cluster of the spike waveforms. 5. The background noise cannot be represented as a stationary Gaussian random process. In particular, we observed that the spectrum changes significantly between successive 20-ms intervals. Furthermore, the total power in the background activity exhibits larger fluctuations than is consistent with a stationary Gaussian random process. 6. Roughly half of the single-unit spike waveforms exhibit systematic changes as a function of the interspike interval. Although this results in a non-Gaussian distribution in the space of waveforms, the distribution can be modeled by a scalar function of the interspike interval. 7. We use a set of 44 mean single-unit waveforms to define the space of differences between spike waveforms. This characterization, together with that of the background activity, is used to construct a filter that optimizes the detection of differences between single-unit waveforms. Further, an information theoretic measure is defined that characterizes the detectability.

Journal ArticleDOI
TL;DR: A LOGIT type assignment that does not restrict the assignment paths is presented, and it is shown that the proposed approach can be easily extended to the flow dependent case (i.e., stochastic equilibrium assignment).
Abstract: Dial's stochastic assignment algorithm restricts the assignment path set to “efficient paths.” As a result, it sometimes produces the unrealistic flow pattern in which no flow is loaded on some paths on which many vehicles actually travel. To remove this drawback of Dial's algorithm, this paper presents a LOGIT type assignment that does not restrict the assignment paths. We first show the theoretical relation between the proposed model and Sasaki's assignment model through a Markov process. This analysis makes it clear that the proposed assignment model can be calculated by matrix operations. Next, we propose an efficient algorithm that requires neither matrix operations nor path enumeration over a network. The algorithm solves an equivalent program based on the entropy decomposition derived from the Markov property of the LOGIT model. Finally, it is shown that the proposed approach can be easily extended to the flow dependent case (i.e. stochastic equilibrium assignment).
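The loading rule at the heart of a LOGIT assignment is simply a multinomial logit split over paths. The toy below makes that rule concrete on a hypothetical three-path origin-destination pair; note that the paper's actual algorithm avoids exactly this kind of explicit path enumeration, so this illustrates the model, not the algorithm.

```python
import math

def logit_assignment(path_costs, demand, theta=1.0):
    """Multinomial logit split of one OD demand over an explicit path set:
    flow on path p is proportional to exp(-theta * cost_p)."""
    w = {p: math.exp(-theta * c) for p, c in path_costs.items()}
    total = sum(w.values())
    return {p: demand * wp / total for p, wp in w.items()}

# hypothetical OD pair with three paths and fixed travel costs
flows = logit_assignment({"A": 10.0, "B": 11.0, "C": 15.0}, demand=1000.0)
```

Every path receives strictly positive flow, cheaper paths receiving more, which is precisely how the unrestricted LOGIT model avoids Dial's zero-flow artifact on plausible routes.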

01 Jan 1996
TL;DR: A new class of stochastic Petri nets in which one or more places can hold fluid rather than discrete tokens is introduced, and equations for their transient and steady-state behavior are provided.
Abstract: In this paper we introduce a new class of stochastic Petri nets in which one or more places can hold fluid rather than discrete tokens. We define a class of fluid stochastic Petri nets in such a way that the discrete and continuous portions may affect each other. Following this definition we provide equations for their transient and steady-state behavior. We present several examples showing the utility of the construct in communication network modeling and reliability analysis, and discuss important special cases. We then discuss numerical methods for computing the transient behavior of such nets. Finally, some numerical examples are presented.
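The defining interaction, a discrete marking modulating a continuous fluid level, can be sketched with a time-discretized on/off fluid buffer. This toy is an illustration of the idea only (fixed switching probabilities per step, one fluid place), not the transient or steady-state equations developed in the paper.

```python
import random

def fspn_buffer(steps, dt, rate_on=2.0, drain=1.0, p_on=0.1, p_off=0.05, seed=0):
    """Toy fluid stochastic Petri net, discretized in time: a discrete
    marking (pump on/off) switches at random instants and modulates the
    drift of a continuous fluid place (the buffer level)."""
    rng = random.Random(seed)
    on, level, levels = False, 0.0, []
    for _ in range(steps):
        if rng.random() < (p_off if on else p_on):
            on = not on  # a discrete transition fires
        inflow = rate_on if on else 0.0
        level = max(0.0, level + (inflow - drain) * dt)  # fluid place, >= 0
        levels.append(level)
    return levels

levels = fspn_buffer(steps=1000, dt=0.1)
```

The coupling runs both ways in general FSPNs: here the discrete state drives the fluid rate, and in the full formalism fluid levels may in turn enable or inhibit discrete transitions.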

Journal ArticleDOI
TL;DR: In this article, the authors present the first treatment of diffusion in the stochastic method for nonlinear reaction–diffusion processes, which is equivalent to solving the time evolution of the spatially inhomogeneous master equation.