
Showing papers on "Randomness published in 2008"


Journal ArticleDOI
TL;DR: It is shown that good quality random bit sequences can be generated at very fast bit rates using physical chaos in semiconductor lasers, which means that the performance of random number generators can be greatly improved by using chaotic laser devices as physical entropy sources.
Abstract: Random number generators in digital information systems make use of physical entropy sources such as electronic and photonic noise to add unpredictability to deterministically generated pseudo-random sequences [1,2]. However, there is a large gap between the generation rates achieved with existing physical sources and the high data rates of many computation and communication systems; this is a fundamental weakness of these systems. Here we show that good quality random bit sequences can be generated at very fast bit rates using physical chaos in semiconductor lasers. Streams of bits that pass standard statistical tests for randomness have been generated at rates of up to 1.7 Gbps by sampling the fluctuating optical output of two chaotic lasers. This rate is an order of magnitude faster than that of previously reported devices for physical random bit generators with verified randomness. This means that the performance of random number generators can be greatly improved by using chaotic laser devices as physical entropy sources. Random-number generators are important in digital information systems. However, the speed at which current sources operate is much slower than the typical data rates used in communication and computing. Chaos in semiconductor lasers might help to bridge the gap.
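The bit-generation step described above lends itself to a simple illustration. The sketch below stands in synthetic noise for the digitized laser intensities and combines two streams by one-bit thresholding and XOR; the threshold-and-XOR combination is an assumption made for illustration, not the paper's exact acquisition circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the digitized intensity waveforms of two chaotic lasers;
# in the experiment these would come from fast photodetectors and ADCs.
laser_a = rng.standard_normal(1_000_000)
laser_b = rng.standard_normal(1_000_000)

def one_bit_digitize(signal):
    """Threshold each sample at the median, yielding one bit per sample."""
    return (signal > np.median(signal)).astype(np.uint8)

# XOR two independent bit streams to suppress residual bias and correlation.
bits = one_bit_digitize(laser_a) ^ one_bit_digitize(laser_b)

print("number of bits   :", bits.size)
print("fraction of ones :", bits.mean())   # should be close to 0.5
```

A real implementation would additionally verify the output with the standard statistical test suites for randomness mentioned in this literature.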

823 citations


Journal ArticleDOI
TL;DR: This paper focuses on randomized consensus algorithms, reviewing algorithms from the literature and proposing new ones, and establishes probabilistic concentration results that give a stronger significance to previous convergence results.
Abstract: Various randomized consensus algorithms have been proposed in the literature. In some cases randomness is due to the choice of a randomized network communication protocol. In other cases, randomness is simply caused by the potential unpredictability of the environment in which the distributed consensus algorithm is implemented. Conditions ensuring the convergence of these algorithms have already been proposed in the literature. As for the rate of convergence of such algorithms, two approaches can be taken: one is based on a mean square analysis, while the other is based on the concept of the Lyapunov exponent. In this paper, using concentration results, we prove that the mean square convergence analysis is the right approach when the number of agents is large. Unlike the existing literature, we do not restrict attention to average-preserving algorithms. Instead, we allow consensus to be reached at a point which may differ from the average of the initial states. The advantage of such algorithms is that they do not require bidirectional communication among agents and thus apply to more general contexts. Moreover, in many important contexts it is possible to prove that the displacement from the initial average tends to zero as the number of agents goes to infinity.
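As a minimal illustration of a randomized consensus iteration that converges without preserving the initial average (the broadcast-style update below is an illustrative choice, not one of the specific protocols analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                          # number of agents
x = rng.uniform(0.0, 1.0, n)     # initial states
initial_average = x.mean()

# Broadcast-style gossip: at each step a randomly chosen agent broadcasts its
# value and every agent moves a fraction eps toward it.  The update is
# row-stochastic but not doubly stochastic, so the average is not preserved.
eps = 0.1
for _ in range(5000):
    i = rng.integers(n)
    x = (1.0 - eps) * x + eps * x[i]

print("spread after iterations :", x.max() - x.min())
print("initial average         :", initial_average)
print("consensus value (approx):", x.mean())
```

The consensus point is random, but its expectation equals the initial average, and its displacement from that average shrinks as the number of agents grows, in line with the abstract.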

385 citations


Journal ArticleDOI
18 Jan 2008-Science
TL;DR: In this article, a generalized theory for stochastic gene expression is presented, formulating the variance in protein abundance in terms of the randomness of the individual gene expression events.
Abstract: Many cellular components are present in such low numbers per cell that random births and deaths of individual molecules can cause substantial "noise" in concentrations. But biochemical events do not necessarily occur in single steps of individual molecules. Some processes are greatly randomized when synthesis or degradation occurs in large bursts of many molecules during a short time interval. Conversely, each birth or death of a macromolecule could involve several small steps, creating a memory between individual events. We present a generalized theory for stochastic gene expression, formulating the variance in protein abundance in terms of the randomness of the individual gene expression events. We show that common types of molecular mechanisms can produce gestation and senescence periods that reduce noise without requiring higher abundances, shorter lifetimes, or any concentration-dependent control loops. We also show that most single-cell experimental methods cannot distinguish between qualitatively different stochastic principles, although this in turn makes such methods better suited for identifying which components introduce fluctuations. Characterizing the random events that give rise to noise in concentrations instead requires dynamic measurements with single-molecule resolution.
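A toy illustration of the burst effect described above (this is the standard bursty-production model with first-order degradation, not the paper's generalized theory, and the parameter values are arbitrary): for geometrically distributed bursts of mean size b, the steady-state Fano factor is expected to be about 1 + b rather than the Poisson value of 1.

```python
import numpy as np

rng = np.random.default_rng(2)
k, gamma, b = 1.0, 0.05, 10.0    # burst arrival rate, degradation rate, mean burst size

n, t, t_end = 0, 0.0, 20000.0
w_sum = w_n = w_n2 = 0.0         # time-weighted accumulators for the moments
while t < t_end:
    total = k + gamma * n
    dt = rng.exponential(1.0 / total)
    w_sum += dt; w_n += n * dt; w_n2 += n * n * dt
    t += dt
    if rng.uniform() < k / total:
        n += rng.geometric(1.0 / (b + 1.0)) - 1   # geometric burst on {0,1,...}, mean ~ b
    else:
        n -= 1                                    # single-molecule degradation

mean = w_n / w_sum
fano = (w_n2 / w_sum - mean ** 2) / mean
print("mean copy number:", round(mean, 1))
print("Fano factor     :", round(fano, 2), " (bursty prediction ~", 1 + b, ")")
```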

351 citations


Journal ArticleDOI
01 Mar 2008
TL;DR: This paper mathematically and experimentally proves that the simultaneous consideration of randomness and opposition is more advantageous than pure randomness, and applies this result to accelerate differential evolution (DE).
Abstract: For many soft computing methods, we need to generate random numbers to use either as initial estimates or during the learning and search process. Recently, results for evolutionary algorithms, reinforcement learning and neural networks have been reported which indicate that the simultaneous consideration of randomness and opposition is more advantageous than pure randomness. This new scheme, called opposition-based learning, has the apparent effect of accelerating soft computing algorithms. This paper mathematically and experimentally proves this advantage and, as an application, applies it to accelerate differential evolution (DE). By taking advantage of random numbers and their opposites, the optimization, search or learning process in many soft computing techniques can be accelerated when there is no a priori knowledge about the solution. The mathematical proofs and the results of the conducted experiments confirm each other.
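A minimal sketch of the opposition idea for population initialization (the bounds, population size and toy objective below are illustrative assumptions, not the paper's DE benchmark setup): for each random candidate x drawn from [a, b], its opposite a + b − x is also evaluated, and the better of the pair is kept.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    """Toy objective to minimize (illustrative stand-in for a real fitness function)."""
    return np.sum((x - 0.7) ** 2, axis=-1)

a, b = -5.0, 5.0            # search bounds, identical in every dimension
pop_size, dim = 20, 10

# Random population and its opposite population.
pop = rng.uniform(a, b, size=(pop_size, dim))
opp = a + b - pop           # opposition-based candidates

# Keep, candidate by candidate, whichever of the pair has the better fitness.
better = sphere(opp) < sphere(pop)
init = np.where(better[:, None], opp, pop)

print("best fitness, random only        :", sphere(pop).min())
print("best fitness, random + opposition:", sphere(init).min())
```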

303 citations


Journal ArticleDOI
Jie Li1, Jianbing Chen1
TL;DR: A uniform and rigorous theoretical basis is provided for the family of newly developed probability density evolution methods, and the principle of preservation of probability is revisited from two descriptions: the state space description and the random event description.

275 citations


Journal ArticleDOI
TL;DR: In this article, the authors present numerical-model experiments to investigate the dynamics of tropical cyclone amplification and its predictability in three dimensions, and conclude that the flow on the convective scales exhibits a degree of randomness, and only those asymmetric features that survive in an ensemble average of many realizations can be regarded as robust.
Abstract: We present numerical-model experiments to investigate the dynamics of tropical-cyclone amplification and its predictability in three dimensions. For the prototype amplification problem beginning with a weak-tropical-storm-strength vortex, the emergent flow becomes highly asymmetric and dominated by deep convective vortex structures, even though the problem as posed is essentially axisymmetric. The asymmetries that develop are highly sensitive to the boundary-layer moisture distribution. When a small random moisture perturbation is added in the boundary layer at the initial time, the pattern of evolution of the flow asymmetries is changed dramatically, and a non-negligible spread in the local and azimuthally-averaged intensity results. We conclude, first, that the flow on the convective scales exhibits a degree of randomness, and only those asymmetric features that survive in an ensemble average of many realizations can be regarded as robust; and secondly, that there is an intrinsic uncertainty in the prediction of maximum intensity using either maximum-wind or minimum-surface-pressure metrics. There are clear implications for the possibility of deterministic forecasts of the mesoscale structure of tropical cyclones, which may have a major impact on the intensity and on rapid intensity changes. Some other aspects of vortex structure are addressed also, including vortex-size parameters, and sensitivity to the inclusion of different physical processes or higher spatial resolution. We investigate also the analogous problem on a β-plane, a prototype problem for tropical-cyclone motion. A new perspective on the putative role of the wind-evaporation feedback process for tropical-cyclone intensification is offered also. The results provide new insight into the fluid dynamics of the intensification process in three dimensions, and at the same time suggest limitations of deterministic prediction for the mesoscale structure. Larger-scale characteristics, such as the radius of gale-force winds and β-gyres, are found to be less variable than their mesoscale counterparts.

244 citations


Journal ArticleDOI
TL;DR: There is a difference between the optimal rates of fixed-length source coding and intrinsic randomness when the second-order asymptotic behavior of the rates is considered; this difference proves that the outputs of fixed-length source codes are not uniformly distributed.
Abstract: There is a difference between the optimal rates of fixed-length source coding and intrinsic randomness when we care about the second-order asymptotics. We prove this difference for general information sources and then investigate independent and identically distributed (i.i.d.) random variables and Markovian variables as examples. The difference is demonstrated through an investigation of the second-order asymptotic behavior of the rates. A universal fixed-length source code attaining the second-order optimal rate is also proposed. The difference between the rates of fixed-length source coding and intrinsic randomness proves that the outputs of fixed-length source codes are not uniformly distributed.
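For orientation, "second-order asymptotics" refers to the 1/sqrt(n) correction to the first-order (entropy) rate. A standard form of such an expansion for i.i.d. sources, written here for context rather than quoted from the paper, is

```latex
% Second-order (Gaussian) expansion of the optimal fixed-length coding rate at
% block length n and error probability \varepsilon for an i.i.d. source:
% H(X) is the entropy, V(X) the variance of -\log p_X(X), and \Phi the
% standard normal CDF.  Illustrative, not quoted from the paper.
R^{*}(n,\varepsilon) \;=\; H(X) \;+\; \sqrt{\frac{V(X)}{n}}\,\Phi^{-1}(1-\varepsilon) \;+\; o\!\left(\frac{1}{\sqrt{n}}\right)
```

The abstract's claim is that, at this 1/sqrt(n) order, the optimal rates of fixed-length source coding and of intrinsic randomness differ, which is what rules out uniformly distributed code outputs.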

187 citations


Journal ArticleDOI
TL;DR: In this article, a large-eddy simulation of flow over random urban-like obstacles is presented to gain a deeper insight into the effects of randomness in the obstacle topology; results such as the spatially-averaged mean velocity, Reynolds stresses, turbulence kinetic energy and dispersive stresses are compared with previous LES and direct numerical simulation data for uniform cubes.
Abstract: Further to our previous large-eddy simulation (LES) of flow over a staggered array of uniform cubes, a simulation of flow over random urban-like obstacles is presented. To gain a deeper insight into the effects of randomness in the obstacle topology, the current results, e.g. spatially-averaged mean velocity, Reynolds stresses, turbulence kinetic energy and dispersive stresses, are compared with our previous LES data and direct numerical simulation data of flow over uniform cubes. Significantly different features in the turbulence statistics are observed within and immediately above the canopy, although there are some similarities in the spatially-averaged statistics. It is also found that the relatively high pressures on the tallest buildings generate contributions to the total surface drag that are far in excess of their proportionate frontal area within the array. Details of the turbulence characteristics (like the stress anisotropy) are compared with those in regular roughness arrays and attempts to find some generality in the turbulence statistics within the canopy region are discussed.

175 citations


Journal ArticleDOI
TL;DR: Contrary to the almost uniformly negative perception of probability matching, it is concluded that a potentially smart strategy can underlie probability matching.

172 citations


Journal ArticleDOI
TL;DR: The statistical distribution of capture times, obtained from Monte Carlo calculations, shows a crossover from power-law to exponential behavior; the distribution function for a lattice with perfect mixing is also predicted.

168 citations


Journal ArticleDOI
TL;DR: The state of the art in discrete stochastic and multiscale algorithms for simulation of biochemical systems and the StochKit software toolkit are reviewed.
Abstract: Traditional deterministic approaches for simulation of chemically reacting systems fail to capture the randomness inherent in such systems at scales common in intracellular biochemical processes. In this manuscript, we briefly review the state of the art in discrete stochastic and multiscale algorithms for simulation of biochemical systems and we present the StochKit software toolkit.
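For readers unfamiliar with the discrete stochastic algorithms being surveyed, the sketch below is the textbook Gillespie direct method for a reversible isomerization A <-> B (a generic illustration; it does not use StochKit's own interfaces, and the reaction system and rate constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Reversible isomerization A <-> B with rate constants k1 (A -> B) and k2 (B -> A).
k1, k2 = 1.0, 0.5
state = np.array([100, 0])                  # copy numbers [A, B]
stoich = np.array([[-1, 1], [1, -1]])       # state change for each reaction

t, t_end = 0.0, 50.0
while t < t_end:
    props = np.array([k1 * state[0], k2 * state[1]])   # propensities
    a0 = props.sum()
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)          # time to the next reaction
    r = rng.choice(2, p=props / a0)         # which reaction fires
    state += stoich[r]

print("final state [A, B]:", state, "at t =", round(t, 2))
```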

Journal ArticleDOI
TL;DR: In this paper, the authors propose a framework for seismic loss estimation based on the Pacific Earthquake Engineering Research (PEER) Center methodology, which involves breaking the analysis into separate components associated with ground-motion hazard, structural response, damage to components, and repair costs.

Journal ArticleDOI
14 Oct 2008-Chaos
TL;DR: In this article, the authors use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions.
Abstract: Intrinsic computation refers to how dynamical systems store, structure, and transform historical and spatial information. By graphing a measure of structural complexity against a measure of randomness, complexity-entropy diagrams display the different kinds of intrinsic computation across an entire class of systems. Here, we use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions, Markov chains, and probabilistic minimal finite-state machines. Since complexity-entropy diagrams are a function only of observed configurations, they can be used to compare systems without reference to system coordinates or parameters. It has been known for some time that in special cases complexity-entropy diagrams reveal that high degrees of information processing are associated with phase transitions in the underlying process space, the so-called “edge of chaos.” Generally, though, complexity-entropy diagrams differ substantially in character, demonstrating a genuine diversity of distinct kinds of intrinsic computation.
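A crude empirical flavor of a complexity-entropy point can be had from block entropies alone (the paper uses statistical complexity measures from computational mechanics; the excess-entropy-style estimate below is a simplified stand-in, assumed for illustration): a fair coin lands near maximal entropy rate with little structure, while a periodic sequence has zero entropy rate but nonzero structure.

```python
import numpy as np
from collections import Counter

def block_entropy(seq, L):
    """Empirical Shannon entropy (bits) of length-L blocks of a 0/1 sequence."""
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def complexity_entropy_point(seq, L=10):
    """Crude (entropy-rate, excess-entropy) estimate from block entropies."""
    H_L, H_Lm1 = block_entropy(seq, L), block_entropy(seq, L - 1)
    h = H_L - H_Lm1            # entropy-rate estimate, bits per symbol
    E = H_L - L * h            # excess-entropy estimate (structure / memory)
    return h, E

rng = np.random.default_rng(5)
coin = rng.integers(0, 2, 20000)          # maximal randomness, no structure
period4 = np.tile([0, 0, 1, 1], 5000)     # zero entropy rate, finite structure

print("fair coin    (h, E):", complexity_entropy_point(coin))
print("period-4 seq (h, E):", complexity_entropy_point(period4))
```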


Journal ArticleDOI
TL;DR: By explicitly representing the reaction times of discrete chemical systems as the firing times of independent, unit-rate Poisson processes, a new adaptive tau-leaping procedure is developed in which accuracy is guaranteed by performing postleap checks.
Abstract: By explicitly representing the reaction times of discrete chemical systems as the firing times of independent, unit rate Poisson processes, we develop a new adaptive tau-leaping procedure. The procedure developed is novel in that accuracy is guaranteed by performing postleap checks. Because the representation we use separates the randomness of the model from the state of the system, we are able to perform the postleap checks in such a way that the statistics of the sample paths generated will not be biased by the rejections of leaps. Further, since any leap condition is ensured with a probability of one, the simulation method naturally avoids negative population values.
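The Poisson-process representation referred to above is, in its standard random time-change form (written here for context, not quoted from the paper),

```latex
% Random time-change representation of a discrete chemical system with
% reactions k = 1..M, propensities \lambda_k, stoichiometric vectors \nu_k,
% and independent unit-rate Poisson processes Y_k.
X(t) \;=\; X(0) \;+\; \sum_{k=1}^{M} Y_k\!\left(\int_0^{t} \lambda_k\bigl(X(s)\bigr)\,ds\right)\nu_k
```

A tau-leap approximates each counting process over the leap by a Poisson variate with the propensity frozen at the start of the leap; the postleap checks described in the abstract then verify accuracy after the leap, with rejections arranged so that the sample-path statistics are not biased.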

Journal ArticleDOI
TL;DR: In this paper, an improved perturbation method is developed for the statistical identification of structural parameters by using the measured modal parameters with randomness, which enables structural design and analysis, damage detection, condition assessment, and evaluation in the framework of probability and statistics.
Abstract: In this paper, an improved perturbation method is developed for the statistical identification of structural parameters by using the measured modal parameters with randomness. On the basis of the first-order perturbation method and sensitivity-based finite element (FE) model updating, two recursive systems of equations are derived for estimating the first two moments of random structural parameters from the statistics of the measured modal parameters. A regularization technique is introduced to alleviate the ill-conditioning in solving the equations. The numerical studies of stochastic FE model updating of a truss bridge are presented to verify the improved perturbation method under three different types of uncertainties, namely natural randomness, measurement noise, and the combination of the two. The results obtained using the perturbation method are in good agreement with, although less accurate than, those obtained using the Monte Carlo simulation (MCS) method. It is also revealed that neglecting the correlation of the measured modal parameters may result in an unreliable estimation of the covariance matrix of the updating parameters. The statistically updated FE model enables structural design and analysis, damage detection, condition assessment, and evaluation in the framework of probability and statistics.

Journal ArticleDOI
TL;DR: This paper focuses on uncertain systems where the randomness is assumed spatial; traditional computational approaches, which usually use some form of perturbation or Monte Carlo simulation, are contrasted with more recent methods based on stochastic Galerkin approximations.
Abstract: Uncertainty estimation arises at least implicitly in any kind of modelling of the real world, and it is desirable to actually quantify the uncertainty in probabilistic terms. Here the emphasis is on uncertain systems, where the randomness is assumed spatial. Traditional computational approaches usually use some form of perturbation or Monte Carlo simulation. This is contrasted here with more recent methods based on stochastic Galerkin approximations. Also some approaches to an adaptive uncertainty quantification are pointed out.
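A minimal one-dimensional sketch of the spectral (polynomial chaos) idea behind stochastic Galerkin methods, as opposed to sampling (the quantity of interest, the truncation order, and the use of a single Gaussian variable below are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Quantity of interest depending on a standard normal random variable Z
# (an illustrative stand-in for a spatially random model input).
u = lambda z: np.exp(0.3 * z)

# Gauss-Hermite(e) quadrature for expectations with respect to N(0, 1):
# hermegauss weights integrate against exp(-z^2 / 2), so divide by sqrt(2*pi).
nodes, weights = He.hermegauss(40)
expect = lambda f: np.sum(weights * f(nodes)) / sqrt(2.0 * pi)

# Projection onto probabilists' Hermite polynomials He_n, with E[He_n(Z)^2] = n!.
order = 6
coeffs = []
for n in range(order + 1):
    e_n = np.zeros(n + 1)
    e_n[n] = 1.0                                   # coefficient vector selecting He_n
    c_n = expect(lambda z: u(z) * He.hermeval(z, e_n)) / factorial(n)
    coeffs.append(c_n)

# Mean and variance from the chaos coefficients, compared with exact lognormal moments.
mean_pc = coeffs[0]
var_pc = sum(c ** 2 * factorial(n) for n, c in enumerate(coeffs[1:], start=1))
print("PC mean:", mean_pc, " exact:", np.exp(0.3 ** 2 / 2))
print("PC var :", var_pc, " exact:", (np.exp(0.3 ** 2) - 1) * np.exp(0.3 ** 2))
```

A handful of deterministic coefficients reproduces the output statistics that Monte Carlo would need many samples to estimate, which is the efficiency argument made for the Galerkin-type methods above.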

Proceedings ArticleDOI
Paul Cuff1
06 Jul 2008
TL;DR: This work characterizes the optimal tradeoff between the amount of common randomness used and the required description rate for simulating a discrete memoryless channel.
Abstract: Two familiar notions of correlation are re-discovered as extreme operating points for simulating a discrete memoryless channel, in which a channel output is generated based only on a description of the channel input. Wyner's "common information" coincides with the minimum description rate needed. However, when common randomness independent of the input is available, the necessary description rate reduces to Shannon's mutual information. This work characterizes the optimal tradeoff between the amount of common randomness used and the required rate of description.

Journal ArticleDOI
TL;DR: It is proved that certain extractors are suitable for key expansion in the bounded-storage model where the adversary has a limited amount of quantum memory.
Abstract: An extractor is a function that is used to extract randomness. Given an imperfect random source X and a uniform seed Y, the output E(X,Y) is close to uniform. We study properties of such functions in the presence of prior quantum information about X, with a particular focus on cryptographic applications. We prove that certain extractors are suitable for key expansion in the bounded-storage model where the adversary has a limited amount of quantum memory. For extractors with one-bit output we show that the extracted bit is essentially equally secure as in the case where the adversary has classical resources. We prove the security of certain constructions that output multiple bits in the bounded-storage model.

Journal ArticleDOI
TL;DR: In this article, it is shown how a noise-filtering setup with an operator theoretic interpretation can be relevant for analyzing the intrinsic stochasticity in jump processes described by master equations.
Abstract: Life processes in single cells and at the molecular level are inherently stochastic. Quantifying the noise is, however, far from trivial, as a major contribution comes from intrinsic fluctuations, arising from the randomness in the times between discrete jumps. It is shown in this paper how a noise-filtering setup with an operator theoretic interpretation can be relevant for analyzing the intrinsic stochasticity in jump processes described by master equations. Such interpretation naturally exists in linear noise approximations, but it also provides an exact description of the jump process when the transition rates are linear. As an important example, it is shown in this paper how, by addressing the proximity of the underlying dynamics in an appropriate topology, a sequence of coupled birth-death processes, which can be relevant in gene expression, tends to a pure delay; this implies important limitations in noise suppression capabilities. Despite the exactness, in a linear regime, of the analysis of noise in conjunction with the network dynamics, we emphasize in this paper the importance of also analyzing dynamic behavior when transition rates are highly nonlinear; otherwise, steady-state solutions can be misinterpreted. The examples are taken from systems with macroscopic models leading to bistability. It is discussed that bistability in the deterministic mass action kinetics and bimodality in the steady-state solution of the master equation neither always imply one another nor do they necessarily lead to efficient switching behaviours: the underlying dynamics need to be taken into account. Finally, we explore some of these issues in relation to a model of the lac operon.

Journal ArticleDOI
TL;DR: A formalism based on the master equation is adopted and it is shown how the probability density for the position of a molecular motor at a given time can be solved exactly in Fourier-Laplace space.
Abstract: Dynamic biological processes such as enzyme catalysis, molecular motor translocation, and protein and nucleic acid conformational dynamics are inherently stochastic processes. However, when such processes are studied on a nonsynchronized ensemble, the inherent fluctuations are lost, and only the average rate of the process can be measured. With the recent development of methods of single-molecule manipulation and detection, it is now possible to follow the progress of an individual molecule, measuring not just the average rate but the fluctuations in this rate as well. These fluctuations can provide a great deal of detail about the underlying kinetic cycle that governs the dynamical behavior of the system. However, extracting this information from experiments requires the ability to calculate the general properties of arbitrarily complex theoretical kinetic schemes. We present here a general technique that determines the exact analytical solution for the mean velocity and for measures of the fluctuations. We adopt a formalism based on the master equation and show how the probability density for the position of a molecular motor at a given time can be solved exactly in Fourier-Laplace space. With this analytic solution, we can then calculate the mean velocity and fluctuation-related parameters, such as the randomness parameter (a dimensionless ratio of the diffusion constant and the velocity) and the dwell time distributions, which fully characterize the fluctuations of the system, both commonly used kinetic parameters in single-molecule measurements. Furthermore, we show that this formalism allows calculation of these parameters for a much wider class of general kinetic models than demonstrated with previous methods.
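As a small numerical illustration of the randomness parameter mentioned above (the code uses its equivalent dwell-time form, r = (⟨τ²⟩ − ⟨τ⟩²)/⟨τ⟩², for a toy cycle of N sequential exponential steps; this is a sketch, not the paper's Fourier-Laplace formalism): for N identical rate-limiting steps the parameter should come out near 1/N.

```python
import numpy as np

rng = np.random.default_rng(6)

def randomness_parameter(n_steps, rate=1.0, n_cycles=200_000):
    """Dwell-time randomness parameter for a cycle of n_steps exponential steps."""
    dwell = rng.exponential(1.0 / rate, size=(n_cycles, n_steps)).sum(axis=1)
    return dwell.var() / dwell.mean() ** 2

for N in (1, 2, 5, 10):
    print(f"N = {N:2d} steps:  r = {randomness_parameter(N):.3f}  (expected ~ {1 / N:.3f})")
```

This is the sense in which the randomness parameter reports on the number of rate-limiting steps hidden inside an apparently single kinetic transition.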

Journal ArticleDOI
TL;DR: In this article, a strategy for a priori and purposeful tailoring of the randomness of textured surfaces is proposed to increase the number of absorbed photons in a solar cell relative to flat-surface solar cells.
Abstract: Photon management by means of random textured surfaces is known to be a promising route to increase the light absorption in a solar cell. To date, this randomness has only been assessed a posteriori and related to the absorption. Here, we outline a meaningful strategy for a priori and purposely tailoring the randomness. By defining appropriate angular scattering functions and optimizing the surface profiles, it is shown that the number of absorbed photons can be enhanced by 55% compared to flat-surface solar cells.

Journal ArticleDOI
TL;DR: It is shown that Fisher-Shannon and LMC complexities are qualitatively and numerically equivalent for these systems, and new complexity candidates are defined, computed, and compared using several information-theoretic magnitudes.
Abstract: Fisher-Shannon (FS) and Lopez-Ruiz, Mancini, and Calbet (LMC) complexity measures, detecting not only randomness but also structure, are computed by using near Hartree-Fock wave functions for neutral atoms with nuclear charge Z=1-103 in position, momentum, and product spaces. It is shown that FS and LMC complexities are qualitatively and numerically equivalent for these systems. New complexity candidates are defined, computed, and compared by using the following information-theoretic magnitudes: Shannon entropy, Fisher information, disequilibrium, and variance. Localization-delocalization planes are constructed for each complexity measure, where the subshell pattern of the periodic table is clearly shown. The complementary use of r and p spaces provides a compact and more complete understanding of the information content of these planes.
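For orientation, a discrete toy version of the LMC measure (the paper evaluates these measures on near-Hartree-Fock atomic densities in position and momentum space; the distributions below are arbitrary illustrations): the LMC complexity multiplies a randomness term (Shannon entropy) by a structure term (disequilibrium, the squared distance from the uniform distribution), so both perfectly ordered and perfectly random distributions score low.

```python
import numpy as np

def lmc_complexity(p):
    """LMC complexity C = H * D for a discrete probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    H = -np.sum(nz * np.log(nz))            # Shannon entropy (randomness)
    D = np.sum((p - 1.0 / p.size) ** 2)     # disequilibrium (structure)
    return H * D

uniform = np.ones(8) / 8                          # maximal randomness -> C = 0
peaked = np.array([0.93] + [0.01] * 7)            # nearly ordered -> small C
mixed = np.array([0.4, 0.3, 0.1, 0.1, 0.05, 0.03, 0.01, 0.01])

for name, p in [("uniform", uniform), ("peaked", peaked), ("mixed", mixed)]:
    print(f"{name:8s} C = {lmc_complexity(p):.4f}")
```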

Journal ArticleDOI
Yingting Luo1, Yunmin Zhu1, Dandan Luo1, Jie Zhou1, Enbin Song1, Donghua Wang1 
08 Dec 2008-Sensors
TL;DR: It is proved that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance.
Abstract: This paper proposes a new distributed Kalman filtering fusion with random state transition and measurement matrices, i.e., random parameter matrices Kalman filtering. It is proved that under a mild condition the fused state estimate is equivalent to the centralized Kalman filtering using all sensor measurements; therefore, it achieves the best performance. More importantly, this result can be applied to Kalman filtering with uncertain observations, including measurements with a false alarm probability as a special case, as well as randomly variant dynamic systems with multiple models. Numerical examples are given which support our analysis and show the significant performance loss of ignoring the randomness of the parameter matrices.

Journal ArticleDOI
TL;DR: A general proof of aging for trap models using the arcsine law for stable subordinators is given, based on abstract conditions on the potential theory of the underlying graph and on the randomness of the trapping landscape.
Abstract: We give a general proof of aging for trap models using the arcsine law for stable subordinators. This proof is based on abstract conditions on the potential theory of the underlying graph and on the randomness of the trapping landscape. We apply this proof to aging for trap models on large, two-dimensional tori and for trap dynamics of the random energy model on a broad range of time scales.

Journal ArticleDOI
TL;DR: In this article, symmetry-based methods are introduced to test for isotropy in cosmic microwave background radiation; the results show that anisotropy is not confined to the low-l region but extends over a much larger l range.
Abstract: We introduce new symmetry-based methods to test for isotropy in cosmic microwave background radiation. Each angular multipole is factored into unique products of power eigenvectors, related multipoles and singular values that provide 2 new rotationally invariant measures mode by mode. The power entropy and directional entropy are new tests of randomness that are independent of the usual CMB power. Simulated galactic plane contamination is readily identified, and the new procedures mesh perfectly with linear transformations employed for windowed-sky analysis. The ILC-WMAP data maps show 7 axes well aligned with one another and the direction of Virgo. Parameter-free statistics find 12 independent cases of extraordinary axial alignment, low power entropy, or both having 5% probability or lower in an isotropic distribution. Isotropy of the ILC maps is ruled out to confidence levels of better than 99.9%, whether or not coincidences with other puzzles coming from the Virgo axis are included. Our work shows that anisotropy is not confined to the low-l region, but extends over a much larger l range.

Journal Article
TL;DR: An optimal construction for such codes is given and their success probability is found exactly; it is less than in the quantum case.
Abstract: We consider a communication method, where the sender encodes n classical bits into 1 qubit and sends it to the receiver who performs a certain measurement depending on which of the initial bits must be recovered. This procedure is called an (n,1,p) quantum random access code (QRAC), where p > 1/2 is its success probability. It is known that (2,1,0.85) and (3,1,0.79) QRACs (with no classical counterparts) exist and that a (4,1,p) QRAC with p > 1/2 is not possible. We extend this model with shared randomness (SR) that is accessible to both parties. Then an (n,1,p) QRAC with SR and p > 1/2 exists for any n > 0. We give an upper bound on its success probability (the known (2,1,0.85) and (3,1,0.79) QRACs match this upper bound). We discuss some particular constructions for several small values of n. We also study the classical counterpart of this model where n bits are encoded into 1 bit instead of 1 qubit and SR is used. We give an optimal construction for such codes and find their success probability exactly; it is less than in the quantum case. Interactive 3D quantum random access codes are available on-line at this http URL.
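A small numerical check of the (2,1,0.85) QRAC mentioned above (the encoding and measurement conventions below are one standard choice, assumed for illustration): the two bits are encoded in the Bloch vector ((-1)^x1, 0, (-1)^x2)/sqrt(2), and the receiver measures sigma_x to recover the first bit or sigma_z to recover the second.

```python
import itertools
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def encode(x1, x2):
    """Qubit state for bits (x1, x2): Bloch vector ((-1)^x1, 0, (-1)^x2)/sqrt(2)."""
    r = np.array([(-1) ** x1, 0.0, (-1) ** x2]) / np.sqrt(2.0)
    return 0.5 * (I2 + r[0] * sx + r[2] * sz)

def p_correct(rho, bit_value, observable):
    """Probability that measuring `observable` yields eigenvalue (-1)^bit_value."""
    proj = 0.5 * (I2 + (-1) ** bit_value * observable)
    return np.real(np.trace(rho @ proj))

probs = []
for x1, x2 in itertools.product((0, 1), repeat=2):
    rho = encode(x1, x2)
    probs.append(p_correct(rho, x1, sx))   # receiver asks for bit 1
    probs.append(p_correct(rho, x2, sz))   # receiver asks for bit 2

print("worst-case success probability:", min(probs))
print("cos^2(pi/8)                   :", np.cos(np.pi / 8) ** 2)
```

Every case succeeds with probability (1 + 1/sqrt(2))/2 = cos^2(pi/8) ≈ 0.85, matching the quoted figure.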

Journal ArticleDOI
TL;DR: In this article, the effect of input data uncertainty on model output for hazardous geophysical mass flows is analyzed using classical Monte Carlo and Latin hypercube sampling, stochastic collocation, polynomial chaos, spectral projection, and a newly developed extension named polynomial chaos quadrature.
Abstract: This paper presents several standard and new methods for characterizing the effect of input data uncertainty on model output for hazardous geophysical mass flows. Note that we do not attempt here to characterize the inherent randomness of such flow events. We focus here on the problem of characterizing uncertainty in model output due to lack of knowledge of such input for a particular event. Methods applied include classical Monte Carlo and Latin hypercube sampling and more recent stochastic collocation, polynomial chaos, spectral projection, and a newly developed extension thereof named polynomial chaos quadrature. The simple and robust sampling-based Monte Carlo-type methods are usually computationally intractable for reasonable physical models, while the more sophisticated and computationally efficient polynomial chaos method often breaks down for complex models. The spectral projection and polynomial chaos quadrature methods discussed here produce results of quality comparable to the polynomial chaos-type methods while preserving the simplicity and robustness of the Monte Carlo-type sampling-based approaches at much lower cost. The computational efficiency, however, degrades with increasing numbers of random variables. A procedure for converting the output uncertainty characterization into a map showing the probability of a hazard threshold being exceeded is also presented. The uncertainty quantification procedures are applied first in simple settings to illustrate the procedure and then subsequently applied to the 1991 block-and-ash flows at Colima Volcano, Mexico.
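A minimal sketch of the sampling end of such a study (Latin hypercube sampling of two uncertain inputs followed by an exceedance-probability estimate; the toy runout model, parameter ranges, and hazard threshold are illustrative assumptions, not the Colima Volcano setup):

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n_samples, n_dims):
    """One random Latin hypercube sample on the unit hypercube."""
    # Each dimension is stratified into n_samples equal bins with one point per
    # bin; the bin order is shuffled independently per dimension.
    cols = [(rng.permutation(n_samples) + rng.uniform(size=n_samples)) / n_samples
            for _ in range(n_dims)]
    return np.column_stack(cols)

def toy_runout(volume, friction):
    """Illustrative stand-in for a mass-flow model: runout distance in km."""
    return 2.0 * volume ** 0.4 / friction

n = 1000
u = latin_hypercube(n, 2)
volume = 0.5 + 2.5 * u[:, 0]        # uncertain flow volume (arbitrary units)
friction = 0.2 + 0.4 * u[:, 1]      # uncertain friction coefficient

runout = toy_runout(volume, friction)
threshold = 8.0                     # hypothetical hazard threshold (km)
print("P(runout > threshold) ~", np.mean(runout > threshold))
```

Evaluating such exceedance probabilities on a spatial grid of thresholds is what produces the hazard-probability maps described in the abstract.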

Journal ArticleDOI
TL;DR: In this paper, the authors studied the asymptotic behavior of a non-autonomous stochastic reaction-diffusion equation with memory and proved the existence of a random pullback attractor.
Abstract: We study the asymptotic behaviour of a non-autonomous stochastic reaction-diffusion equation with memory. In fact, we prove the existence of a random pullback attractor for our stochastic parabolic PDE with memory. The randomness enters in our model as an additive Hilbert valued noise. We first prove that the equation generates a random dynamical system (RDS) in an appropriate phase space. Due to the fact that the memory term takes into account the whole past history of the phenomenon, we are not able to prove compactness of the generated RDS, but its asymptotic compactness, ensuring thus the existence of the random pullback attractor.

Journal ArticleDOI
TL;DR: In this article, it was shown that massless Dirac particles with Coulomb repulsion and quenched random gauge field are described by a manifold of fixed points which can be accessed perturbatively in disorder and interaction strength, thereby confirming and extending the results of arXiv:0707.4171.
Abstract: We argue that massless Dirac particles in two spatial dimensions with $1/r$ Coulomb repulsion and quenched random gauge field are described by a manifold of fixed points which can be accessed perturbatively in disorder and interaction strength, thereby confirming and extending the results of arXiv:0707.4171. At small interaction and small randomness, there is an infra-red stable fixed curve which merges with the strongly interacting infra-red unstable line at a critical endpoint, along which the dynamical critical exponent $z=1$.