
Showing papers on "Randomness" published in 1999


Book
01 Nov 1999
TL;DR: Basic Concept of Reliability, Commonly Used Probability Distributions, and Determination of Distributions and Parameters from Observed Data.
Abstract: Basic Concept of Reliability. Mathematics of Probability. Modeling of Uncertainty. Commonly Used Probability Distributions. Determination of Distributions and Parameters from Observed Data. Randomness in Response Variables. Fundamentals of Reliability Analysis. Advanced Topics on Reliability Analysis. Simulation Techniques. Appendices. Conversion Factors. References. Index.

1,456 citations


Journal ArticleDOI
TL;DR: It is contended that the small-world network model displays a normal continuous phase transition with a divergent correlation length as the degree of randomness tends to zero; a real-space renormalization group transformation is proposed and shown to be exact in the limit of large system size.

1,202 citations


Journal ArticleDOI
TL;DR: In this paper, a real-space renormalization group transformation for the model is proposed and the scaling form for the average number of degrees of separation between two nodes on the network as a function of the three independent variables is derived.
Abstract: We study the small-world network model, which mimics the transition between regular-lattice and random-lattice behavior in social networks of increasing size. We contend that the model displays a normal continuous phase transition with a divergent correlation length as the degree of randomness tends to zero. We propose a real-space renormalization group transformation for the model and demonstrate that the transformation is exact in the limit of large system size. We use this result to calculate the exact value of the single critical exponent for the system, and to derive the scaling form for the average number of "degrees of separation" between two nodes on the network as a function of the three independent variables. We confirm our results by extensive numerical simulation.

1,076 citations
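The model the authors study can be reproduced in a few lines. Below is a minimal, illustrative sketch (not the paper's code; function names and parameters are my own) of a ring lattice with random rewiring, plus a BFS routine measuring the average "degrees of separation" as the rewiring probability p moves the network from regular toward random.

```python
import random
from collections import deque

def small_world(n, k, p, seed=0):
    """Ring of n nodes, each linked to its k nearest neighbours on either
    side; every edge is rewired to a random node with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    for v in range(n):
        for j in range(1, k + 1):
            if rng.random() < p:
                w, u = (v + j) % n, rng.randrange(n)
                if u != v and u not in adj[v]:
                    adj[v].discard(w); adj[w].discard(v)
                    adj[v].add(u); adj[u].add(v)
    return adj

def mean_separation(adj):
    """Average shortest-path length over reachable pairs, via BFS."""
    total = pairs = 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for nb in adj[v]:
                if nb not in dist:
                    dist[nb] = dist[v] + 1
                    queue.append(nb)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs
```

Even modest rewiring sharply reduces the mean separation relative to the regular lattice, which is the crossover whose scaling form the paper derives.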


Journal ArticleDOI
17 Oct 1999
TL;DR: This work provides a novel algorithmic analysis via a model of robust concept learning (closely related to “margin classifiers”), and shows that a relatively small number of examples are sufficient to learn rich concept classes.
Abstract: We study the phenomenon of cognitive learning from an algorithmic standpoint. How does the brain effectively learn concepts from a small number of examples despite the fact that each example contains a huge amount of information? We provide a novel analysis for a model of robust concept learning (closely related to "margin classifiers"), and show that a relatively small number of examples are sufficient to learn rich concept classes (including threshold functions, Boolean formulae and polynomial surfaces). As a result, we obtain simple intuitive proofs for the generalization bounds of Support Vector Machines. In addition, the new algorithms have several advantages: they are faster, conceptually simpler, and highly resistant to noise. For example, a robust half-space can be PAC-learned in linear time using only a constant number of training examples, regardless of the number of attributes. A general (algorithmic) consequence of the model, that "more robust concepts are easier to learn", is supported by a multitude of psychological studies.

396 citations
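As an illustration of the margin idea (a hedged sketch, not the authors' algorithm): the classic perceptron learns a half-space from margin-separated examples with a mistake bound depending only on the margin, not the number of attributes. The target half-space and data-generation names below are my own.

```python
import random

def perceptron(samples, labels, epochs=100):
    """Classic perceptron on separable data: total mistakes are bounded by
    (R / margin)^2, independent of the number of attributes."""
    w = [0.0] * len(samples[0])
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(samples, labels):
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:   # a clean pass: the data are separated
            break
    return w

# Hypothetical target half-space sign(x0 + x1); keep only examples whose
# margin is at least 0.3, mimicking the "robust concept" assumption.
rng = random.Random(0)
data = []
while len(data) < 50:
    x = (rng.uniform(-1, 1), rng.uniform(-1, 1))
    if abs(x[0] + x[1]) >= 0.3:
        data.append((x, 1 if x[0] + x[1] > 0 else -1))
w = perceptron([x for x, _ in data], [y for _, y in data])
```

With margin 0.3 and points of norm at most sqrt(2), Novikoff's bound caps the total mistakes at roughly 45, so convergence within 100 epochs is guaranteed here.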


Journal ArticleDOI
TL;DR: This paper presents a new tool for constructing explicit extractors and gives two new constructions that greatly improve upon previous results, and shows how to build good explicit mergers, and how mergers can be used to build better extractors.

234 citations


Journal ArticleDOI
TL;DR: In this article, a method of using a fully random force to numerically generate statistically stationary homogeneous turbulence is developed; the forcing is implemented in spectral space, concentrated at small wave numbers, so that power is input at large scales.
Abstract: A method of using a fully random force to numerically generate statistically stationary homogeneous turbulence has been developed. The forcing is implemented in spectral space where it is concentrated at small wave numbers. Hence, the power input is introduced into the flow at large scales. The randomness in time makes the force neutral, in the sense that it does not directly correlate with any of the time scales of the turbulent flow, and it also makes the power input determined solely by the force-force correlation. This means that it is possible to generate different desirable turbulence states, such as axisymmetric turbulence, where the degree of anisotropy of the forcing can be chosen a priori through forcing parameters. In particular, the total amount of power input from the forcing can be set to balance a desired dissipation at a statistically stationary state. In order to only get a contribution from the force-force correlation to the input power in the discrete equations, the force is determined ...

229 citations
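A minimal one-dimensional sketch of the forcing idea (illustrative only; the paper works in three-dimensional spectral space, and all names here are my own): random phases are assigned to a few small wave numbers, Hermitian symmetry keeps the physical-space force real, and Parseval's theorem fixes the power input from the force-force correlation alone.

```python
import cmath
import math
import random

def random_force(n=64, kmax=3, amplitude=1.0, seed=0):
    """Real 1-D random force supported on wave numbers |k| <= kmax, so all
    power enters at large scales; the phase of each mode is fully random."""
    rng = random.Random(seed)
    coef = [0j] * n
    for k in range(1, kmax + 1):
        c = amplitude * cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi))
        coef[k] = c
        coef[n - k] = c.conjugate()   # Hermitian symmetry => real signal
    # Inverse discrete Fourier transform, written out directly.
    return [
        sum(coef[k] * cmath.exp(2j * math.pi * k * x / n) for k in range(n)).real / n
        for x in range(n)
    ]
```

Because the prescribed amplitudes fix the mean-square force regardless of the random phases, the power input can be set a priori to balance a desired dissipation, as the abstract describes.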


Journal ArticleDOI
TL;DR: The notion of random numbers and their generation using computers is introduced, and the use of these computer-generated random numbers for statistical computing and simulations is discussed in this series.
Abstract: Elements of statistical computing are discussed in this series. We begin with the notion of random numbers and their generation using computers. Using these computer generated random numbers for statistical computing and simulations is discussed in the later parts of the series.

209 citations
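The series' starting point, computer generation of random numbers, is typically a simple recurrence such as a linear congruential generator. A minimal sketch using the well-known Park-Miller "minimal standard" parameters (the class name is my own):

```python
class LCG:
    """Park-Miller 'minimal standard' linear congruential generator:
    state_{n+1} = 16807 * state_n mod (2**31 - 1)."""

    M = 2**31 - 1
    A = 16807

    def __init__(self, seed=1):
        if not 0 < seed < self.M:
            raise ValueError("seed must lie strictly between 0 and M")
        self.state = seed

    def next_float(self):
        """Uniform float in (0, 1)."""
        self.state = (self.A * self.state) % self.M
        return self.state / self.M
```

Generators of this kind are adequate for the statistical simulations discussed in the series, though not for cryptographic use.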


Proceedings ArticleDOI
01 May 1999
TL;DR: In this article, the authors showed that a weaker notion of "combinatorial design" suffices for the Nisan-Wigderson pseudorandom generator, which underlies the recent extractor of Trevisan.
Abstract: We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log^2 n) additional random bits, and can extract all the min-entropy using O(log^3 n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log^3 n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) - (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. Our extractors are obtained by observing that a weaker notion of "combinatorial design" suffices for the Nisan-Wigderson pseudorandom generator, which underlies the recent extractor of Trevisan. We give near-optimal constructions of such "weak designs" which achieve much better parameters than possible with the notion of designs used by Nisan-Wigderson and Trevisan. We also show how to improve our constructions (and Trevisan's construction) when the required statistical difference ε from the uniform distribution is relatively small. This improvement is obtained by using multilinear error-correcting codes over finite fields, rather than the arbitrary error-correcting codes used by Trevisan.

192 citations
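The entropy-loss bookkeeping in the abstract is simple arithmetic; the helpers below (names my own, with the O(1) term taken as zero purely for illustration) make the definition and the 2 log(1/ε) lower bound concrete.

```python
import math

def entropy_loss(min_entropy, seed_bits, output_bits):
    """Entropy loss of an extractor, as defined in the abstract:
    (source min-entropy) + (# truly random bits used) - (# output bits)."""
    return min_entropy + seed_bits - output_bits

def optimal_loss(eps):
    """The 2*log2(1/eps) part of the optimal loss 2*log(1/eps) + O(1);
    the additive constant is omitted here for illustration."""
    return 2.0 * math.log2(1.0 / eps)
```

For example, an extractor taking a source of min-entropy 100 plus a 10-bit seed and emitting 106 bits loses 4 bits, which suffices for statistical distance ε = 1/4 under the bound.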


Journal Article
TL;DR: Near-optimal constructions of such "weak designs" which achieve much better parameters than possible with the notion of designs used by Nisan-Wigderson and Trevisan are given.
Abstract: We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log^2 n) additional random bits, and can extract all the min-entropy using O(log^3 n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log^3 n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) - (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. Our extractors are obtained by observing that a weaker notion of "combinatorial design" suffices for the Nisan-Wigderson pseudorandom generator, which underlies the recent extractor of Trevisan. We give near-optimal constructions of such "weak designs" which achieve much better parameters than possible with the notion of designs used by Nisan-Wigderson and Trevisan. We also show how to improve our constructions (and Trevisan's construction) when the required statistical difference ε from the uniform distribution is relatively small. This improvement is obtained by using multilinear error-correcting codes over finite fields, rather than the arbitrary error-correcting codes used by Trevisan.

191 citations


01 Jan 1999
TL;DR: New metrics that may be employed to investigate the randomness of cryptographic RNGs are developed and issues such as statistical test suites, evaluation frameworks, and the interpretation of results are addressed.
Abstract: Random Number Generators (RNGs) are an important building block for algorithms and protocols in cryptography. They are paramount in the construction of encryption keys and other cryptographic algorithm parameters. In practice, statistical testing is employed to gather evidence that a generator indeed produces numbers that appear to be random. Few resources are readily available to researchers in academia and industry who wish to analyze their newly developed RNG. To address this problem, NIST has developed new metrics that may be employed to investigate the randomness of cryptographic RNGs. In this paper, issues such as statistical test suites, evaluation frameworks, and the interpretation of results are addressed.

184 citations
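A concrete example of the kind of metric such a suite contains is the frequency (monobit) test. This is a hedged sketch of the standard construction, not NIST's reference code:

```python
import math

def monobit_pvalue(bits):
    """NIST-style frequency (monobit) test: under the hypothesis of i.i.d.
    fair bits, s_obs = |#ones - #zeros| / sqrt(n) yields
    p = erfc(s_obs / sqrt(2)); a small p-value is evidence against randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))
```

A full evaluation framework runs many such tests over many sequences and interprets the resulting distribution of p-values, which is exactly the kind of issue the paper addresses.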


Journal ArticleDOI
TL;DR: A Monte Carlo simulation model for the random packing of unequal spherical particles is presented and the randomness, homogeneity, and isotropy, which have not been evaluated before for packing of distributed particles, are examined.
Abstract: A Monte Carlo simulation model for the random packing of unequal spherical particles is presented in this paper. With this model, the particle radii obeying a given distribution are generated and randomly placed within a cubic packing domain with high packing density and many overlaps. Then a relaxation iteration is applied to reduce or eliminate the overlaps, while the packing space is gradually expanded. The simulation is completed once the mean overlap value falls below a preset value. To simulate the random close packing, a ``vibration'' process is applied after the relaxation iteration. For log-normal distributed particles, the effect of particle size standard deviation, and for bidisperse particles, the effects of particle size ratio and the volume fraction of large particles on packing density and on coordination number are investigated. Simulation results show good agreement with that obtained by experiments and by other simulations. The randomness, homogeneity, and isotropy, which have not been evaluated before for packing of distributed particles, are also examined using statistical measures.
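The relaxation step of the model can be sketched in two dimensions with equal disks (illustrative only; the paper treats unequal spheres in 3-D with gradual domain expansion, and all names here are my own): overlapping pairs are pushed apart until the mean overlap falls below a threshold.

```python
import math

def relax_disks(centers, radius, box, iters=200):
    """Push apart overlapping equal disks inside a square box until no pair
    overlaps (or iters is exhausted); a 2-D sketch of the relaxation step."""
    pts = [list(p) for p in centers]
    for _ in range(iters):
        moved = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx = pts[j][0] - pts[i][0]
                dy = pts[j][1] - pts[i][1]
                d = math.hypot(dx, dy)
                overlap = 2.0 * radius - d
                if overlap > 1e-9 and d > 1e-12:
                    push = overlap / (2.0 * d)
                    for axis, delta in ((0, dx), (1, dy)):
                        pts[i][axis] = min(max(pts[i][axis] - delta * push, radius), box - radius)
                        pts[j][axis] = min(max(pts[j][axis] + delta * push, radius), box - radius)
                    moved = True
        if not moved:
            break
    return pts

def mean_overlap(pts, radius):
    """Mean pairwise overlap, the convergence criterion used by the model."""
    total, pairs = 0.0, 0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = math.hypot(pts[j][0] - pts[i][0], pts[j][1] - pts[i][1])
            total += max(0.0, 2.0 * radius - d)
            pairs += 1
    return total / pairs
```

The full model adds a size distribution, periodic expansion of the packing space, and a "vibration" stage for close packing, none of which is attempted here.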

Book
06 Dec 1999
TL;DR: In this book, the author develops single orbit dynamics: topological dynamics, invariant measures, ergodicity and unique ergodicity, ergodic and uniquely ergodic orbits, translation invariant graphs and recurrence, patterns in large sets, and entropy and disjointness.
Abstract: What is single orbit dynamics? Topological dynamics. Invariant measures, ergodicity and unique ergodicity. Ergodic and uniquely ergodic orbits. Translation invariant graphs and recurrence. Patterns in large sets. Entropy and disjointness. What is randomness? Recurrence rates and entropy. Universal schemes.

Proceedings ArticleDOI
01 May 1999
TL;DR: This paper introduces the idea of recycling the state S of the machine M at time i as part of the random string for the same machine at a later time, using the entropy of the random variable S to save truly random bits later on.
Abstract: Let M be a logarithmic space Turing machine (or a polynomial width branching program) that uses up to k (read once) random bits. For a fixed input, let P_i(S) be the probability (over the random string) that at time i the machine M is in state S, and assume that some weak estimation of the probabilities P_i(S) is known or given or can be easily computed. We construct a logarithmic space pseudo-random generator that uses only a logarithmic number of truly random bits and outputs a sequence of k bits that looks random to M. This means that a very weak estimation of the state probabilities of M is sufficient for a full derandomization of M and for constructing pseudo-random sequences for M. We have several applications of the main theorem, as stated within. To prove our theorem, we introduce the idea of recycling the state S of the machine M at time i as part of the random string for the same machine at a later time. That is, we use the entropy of the random variable S in order to save truly random bits later on. Our techniques and results can both be generalized to larger space bounds.

Journal ArticleDOI
TL;DR: A stochastic optimization technique that was recently developed to reconstruct realizations of random media (given limited microstructural information in the form of correlation functions) is investigated further, critically assessed, and refined.
Abstract: Random media abound in nature and in manmade situations. Examples include porous media, biological materials, and composite materials. A stochastic optimization technique that we have recently developed to reconstruct realizations of random media (given limited microstructural information in the form of correlation functions) is investigated further, critically assessed, and refined. The reconstruction method is based on the minimization of the sum of squared differences between the calculated and reference correlation functions. We examine several examples, including one that has appreciable short-range order, and focus more closely on the kinetics of the evolution process. The method is generally successful in reconstructing or constructing random media with target correlation functions, but one must be careful in implementing an earlier proposed time-saving step when treating random media possessing significant short-range order. The issue of the uniqueness of the obtained solutions is also discussed.
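A toy version of the reconstruction procedure for a one-dimensional binary medium (a hedged sketch under simplifying assumptions: greedy acceptance instead of full simulated annealing, a single two-point correlation function as the target, names my own):

```python
import random

def two_point(s, rmax):
    """Two-point correlation S2(r): probability that two sites at (periodic)
    distance r are both phase 1."""
    n = len(s)
    return [sum(s[i] * s[(i + r) % n] for i in range(n)) / n for r in range(rmax + 1)]

def energy(s, target):
    """Sum of squared differences between current and target correlations,
    the quantity the reconstruction method minimizes."""
    return sum((a - b) ** 2 for a, b in zip(two_point(s, len(target) - 1), target))

def reconstruct(target, n, ones, steps=3000, seed=0):
    """Greedy pixel-swap reconstruction: exchange a 0 and a 1, keep the swap
    iff the energy does not increase (zero-temperature annealing)."""
    rng = random.Random(seed)
    s = [1] * ones + [0] * (n - ones)
    rng.shuffle(s)
    e = energy(s, target)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if s[i] == s[j]:
            continue
        s[i], s[j] = s[j], s[i]
        e_new = energy(s, target)
        if e_new <= e:
            e = e_new
        else:
            s[i], s[j] = s[j], s[i]   # reject: undo the swap
    return s, e
```

The zero-temperature acceptance rule is exactly the shortcut the authors caution about for media with significant short-range order, where true annealing is needed to escape local minima.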

Journal ArticleDOI
TL;DR: Exact nonlocal conditional moment equations for transient flow in bounded domains driven by random forcing terms (sources, initial head, and boundary conditions), developed in a recent paper, are extended here so that the conditional second moment equations also account for the randomness of the forcing terms.
Abstract: In a recent paper we developed exact nonlocal conditional moment equations for transient flow in bounded domains driven by random forcing terms (sources, initial head, and boundary conditions). Whereas our conditional mean equations took into account the randomness of forcing terms, our conditional second moment equations did not. We do so in this brief addendum.

Journal ArticleDOI
TL;DR: In this paper, a single defect approximation (SDA) scheme is proposed for the analysis of diffusion on a random lattice with fluctuating site connectivities, which is shown to be induced by geometrical defects, that is sites with abnormally low or large connectivities.
Abstract: Geometrical disorder is present in many physical situations giving rise to eigenvalue problems. The simplest case of diffusion on a random lattice with fluctuating site connectivities is studied analytically and by exact numerical diagonalizations. Localization of eigenmodes is shown to be induced by geometrical defects, that is, sites with abnormally low or large connectivities. We present a 'single defect approximation' (SDA) scheme founded on this mechanism that provides an accurate quantitative description of both extended and localized regions of the spectrum. We then present a systematic diagrammatic expansion allowing one to use SDA for finite-dimensional problems, e.g. to determine the localized harmonic modes of amorphous media. Since Anderson's fundamental work (1), physical systems in the presence of disorder are well known for exhibiting localization effects (2). While most attention has been paid so far to Hamiltonians with random potentials (e.g. stemming from impurities), there are situations in which disorder also originates from geometry. Of particular interest among these are the harmonic vibrations of amorphous materials such as liquids, colloids, glasses, etc., around random particle configurations. Recent experiments on sound propagation in granular media (3, 4) have stressed the possible presence of localization effects, highly correlated with the microscopic structure of the sample. The existing theoretical framework for calculating the density of harmonic modes in amorphous systems was developed in liquid theory. In this context, microscopic configurations are not frozen, but instantaneous normal modes (INM) give access to short-time dynamics (5). Wu and Loring (6) and Wan and Stratt (7) have calculated good estimates of the density of INM for Lennard-Jones liquids, averaged over instantaneous particle configurations. However, localization-delocalization properties of the eigenvectors have not been considered.
Diffusion on random lattices is another problem where geometrical randomness plays a crucial role (8). Long-time dynamics is deeply related to the small eigenvalues of the Laplacian on the lattice and therefore to its spectral dimension. Campbell suggested that diffusion on a random lattice could also mimic the dynamics taking place in a complicated phase space, e.g. for glassy systems (9). From this point of view, sites on the lattice represent microscopic configurations and edges represent allowed moves from one configuration to another. At low temperatures, most edges correspond to very improbable jumps and may be erased. The tail of the density of states of the Laplacian on random graphs was studied by means of heuristic arguments by Bray and Rodgers (10). Localized eigenvectors, closely related to metastable states are of particular relevance for asymptotic dynamics.

Journal ArticleDOI
TL;DR: Of independent interest is the randomness-efficient Leftover Hash Lemma, a key tool for extracting randomness from weak random sources, and applications to time-space tradeoffs, expander constructions, and to the hardness of approximation.
Abstract: We give an efficient algorithm to extract randomness from a very weak random source using a small additional number t of truly random bits. Our work extends that of Nisan and Zuckerman [J. Comput. System Sci., 52 (1996), pp. 43--52] in that t remains small even if the entropy rate is well below constant. A key application of this is in running randomized algorithms using such a very weak source of randomness. For any fixed $\gamma > 0$, we show how to simulate RP algorithms in time $n^{O(\log n)}$ using the output of a $\delta$-source with min-entropy $R^\gamma$. Such a weak random source is asked once for $R$ bits; it outputs an $R$-bit string according to any probability distribution that places probability at most $2^{-R^\gamma}$ on each string. If $\gamma > 1/2$, our simulation also works for BPP; for $\gamma > 1-1/(k+1)$, our simulation takes time $n^{O(\log^{(k)} n)}$ ($\log^{(k)}$ is the logarithm iterated $k$ times). We also give a polynomial-time BPP simulation using Chor--Goldreich sources of min-entropy $R^{\Omega(1)}$, which is optimal. We present applications to time-space tradeoffs, expander constructions, and to the hardness of approximation. Of independent interest is our randomness-efficient Leftover Hash Lemma, a key tool for extracting randomness from weak random sources.

Journal ArticleDOI
TL;DR: In this paper, the authors present exact results for the effect of disorder on the critical properties of an anisotropic XY spin chain in a transverse field, and show that in the Griffiths phase near the Ising transition the ground-state energy has an essential singularity.
Abstract: We present some exact results for the effect of disorder on the critical properties of an anisotropic XY spin chain in a transverse field. The continuum limit of the corresponding fermion model is taken and in various cases results in a Dirac equation with a random mass. Exact analytic techniques can then be used to evaluate the density of states and the localization length. In the presence of disorder the ferromagnetic-paramagnetic or Ising transition of the model is in the same universality class as the random transverse field Ising model solved by Fisher using a real-space renormalization-group decimation technique (RSRGDT). If there is only randomness in the anisotropy of the magnetic exchange then the anisotropy transition (from a ferromagnet in the x direction to a ferromagnet in the y direction) is also in this universality class. However, if there is randomness in the isotropic part of the exchange or in the transverse field then in a nonzero transverse field the anisotropy transition is destroyed by the disorder. We show that in the Griffiths phase near the Ising transition the ground-state energy has an essential singularity. The results obtained for the dynamical critical exponent, typical correlation length, and for the temperature dependence of the specific heat near the Ising transition agree with the results of the RSRGDT and numerical work. [S0163-1829(99)07125-8].

Book ChapterDOI
TL;DR: The main result of this paper is that the non-branching plan existence problem in unobservable domains with an expressive operator formalism is EXPSPACE-complete.
Abstract: Planning with incomplete information may mean a number of different things; that certain facts of the initial state are not known, that operators can have random or nondeterministic effects, or that the plans created contain sensing operations and are branching. Study of the complexity of incomplete information planning has so far been concentrated on probabilistic domains, where a number of results have been found. We examine the complexity of planning in nondeterministic propositional domains. This differs from domains involving randomness, which has been well studied, in that for a nondeterministic choice, not even a probability distribution over the possible outcomes is known. The main result of this paper is that the non-branching plan existence problem in unobservable domains with an expressive operator formalism is EXPSPACE-complete. We also discuss several restrictions, which bring the complexity of the problem down to PSPACE-complete, and extensions to the fully and partially observable cases.

Journal ArticleDOI
TL;DR: In this paper, a generalized version of Van Hove's method was proposed to deal with the case of non-ordinary statistical mechanics, that being phenomena with no time-scale separation.
Abstract: We generalize the method of Van Hove [Physica (Amsterdam) 21, 517 (1955)] so as to deal with the case of nonordinary statistical mechanics, that being phenomena with no time-scale separation. We show that in the case of ordinary statistical mechanics, even if the adoption of the Van Hove method imposes randomness upon Hamiltonian dynamics, the resulting statistical process is described using normal calculus techniques. On the other hand, in the case where there is no time-scale separation, this generalized version of Van Hove's method not only imposes randomness upon the microscopic dynamics, but it also transmits randomness to the macroscopic level. As a result, the correct description of macroscopic dynamics has to be expressed in terms of the fractional calculus.

DOI
01 Sep 1999
TL;DR: An internal report lists several characteristics which an encryption algorithm exhibiting random behavior should possess, describes how the output for each candidate algorithm was evaluated for randomness, discusses what has been learned utilizing the NIST statistical tests, and finally provides an interpretation of the results.
Abstract: One of the criteria used to evaluate the Advanced Encryption Standard candidate algorithms was their demonstrated suitability as random number generators. That is, the evaluation of their output utilizing statistical tests should not provide any means by which to computationally distinguish them from a truly random source. This internal report lists several characteristics which an encryption algorithm exhibiting random behavior should possess, describes how the output for each candidate algorithm was evaluated for randomness, discusses what has been learned utilizing the NIST statistical tests, and finally provides an interpretation of the results.

Journal ArticleDOI
TL;DR: In this paper, the spectrum of the random non-hermitian Hamiltonian on a ring which models the physics of vortex line pinning in superconductors is shown to be one-dimensional.

Journal ArticleDOI
TL;DR: In this article, a Schwartz-type distribution associated with Pollicott-Ruelle resonances is shown to exist for chemio-hydrodynamic modes with exponential decay.
Abstract: Microscopic chaos is the dynamical randomness in the collisional motion of atoms and molecules in fluids. This chaos animates different mesoscopic stochastic phenomena and, in particular, the reaction-diffusion processes. For different chemical reactions, we show how the reaction rate can be related to the characteristic quantities of chaos like the Lyapunov exponents and the Kolmogorov-Sinai entropy which are associated with a fractal repeller. In spatially extended deterministic chaotic systems, chemio-hydrodynamic modes with exponential decay are shown to exist as Schwartz-type distributions associated with Pollicott-Ruelle resonances. The problem of entropy production is also discussed.

Journal ArticleDOI
TL;DR: In this article, the authors analyze the theoretical foundations of the efficient market hypothesis by stressing the efficient use of information and its effect upon price volatility, and show that price volatility reflects the output of a higher order dynamic system with an underlying stochastic foundation.
Abstract: We analyze the theoretical foundations of the efficient market hypothesis by stressing the efficient use of information and its effect upon price volatility. The “random walk” hypothesis assumes that price volatility is exogenous and unexplained. Randomness means that a knowledge of the past cannot help to predict the future. We accept the view that randomness appears because information is incomplete. The larger the subset of information available and known, the less emphasis one must place upon the generic term randomness. We construct a general and well accepted intertemporal price determination model, and show that price volatility reflects the output of a higher order dynamic system with an underlying stochastic foundation. Our analysis is used to explain the learning process and the efficient use of information in our archetype model. We estimate a general unrestricted system for financial and agricultural markets to see which specifications we can reject. What emerges is that a system very close to our archetype model is consistent with the evidence. We obtain an equation for price volatility which looks a lot like the GARCH equation. The price variability is a serially correlated variable which is affected by the Bayesian error, and the Bayesian error is a serially correlated variable which is affected by the noisiness of the system. In this manner we have explained some of the determinants of what has been called the “randomness” of price changes.
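The GARCH-like volatility equation the authors arrive at can be simulated directly. Here is a hedged sketch of a standard GARCH(1,1) process (parameter values are my own, purely illustrative), showing serially correlated price variability driven by an underlying stochastic foundation:

```python
import math
import random

def garch_simulate(n, omega=1e-4, alpha=0.10, beta=0.85, seed=0):
    """Simulate returns r_t = sigma_t * z_t, z_t ~ N(0, 1), with GARCH(1,1)
    conditional variance sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    returns, variances = [], []
    for _ in range(n):
        r = math.sqrt(var) * rng.gauss(0.0, 1.0)
        returns.append(r)
        variances.append(var)
        var = omega + alpha * r * r + beta * var
    return returns, variances
```

With alpha + beta < 1 the process is covariance stationary and the unconditional variance is omega / (1 - alpha - beta), so large shocks raise volatility persistently, the clustering behavior the paper's Bayesian-error analysis explains.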

Journal ArticleDOI
TL;DR: This paper provides numerical evidence that the replica symmetric solution to this problem is the correct one, and notes a surprising result concerning the distribution of k-nearest neighbor links in optimal tours, and invites a theoretical understanding of this phenomenon.
Abstract: We study the random link traveling salesman problem, where lengths l_ij between city i and city j are taken to be independent, identically distributed random variables. We discuss a theoretical approach, the cavity method, that has been proposed for finding the optimum tour length over this random ensemble, given the assumption of replica symmetry. Using finite size scaling and a renormalized model, we test the cavity predictions against the results of simulations, and find excellent agreement over a range of distributions. We thus provide numerical evidence that the replica symmetric solution to this problem is the correct one. Finally, we note a surprising result concerning the distribution of kth-nearest neighbor links in optimal tours, and invite a theoretical understanding of this phenomenon.

Journal ArticleDOI
John Cardy1
TL;DR: In this article, it was shown that there is an asymptotically exact mapping to the random field Ising model, at the level of the interface between the ordered and disordered phases.
Abstract: A rigorous theorem due to Aizenman and Wehr asserts that there can be no latent heat in a two-dimensional system with quenched random impurities. We examine this result, and its possible extensions to higher dimensions, in the context of several models. For systems whose pure versions undergo a strong first-order transition, we show that there is an asymptotically exact mapping to the random field Ising model, at the level of the interface between the ordered and disordered phases. This provides a physical explanation for the above result and also implies a correspondence between the problems in higher dimensions, including scaling relations between their exponents. The particular example of the q -state Potts model in two dimensions has been considered in detail by various authors and we review the numerical results obtained for this case. Turning to weak, fluctuation-driven first-order transitions, we describe analytic renormalization group calculations which show how the continuous nature of the transition is restored by randomness in two dimensions.

Journal ArticleDOI
TL;DR: In this article, a random Schrodinger operator in an external magnetic field is considered, where the random potential consists of delta functions of random strengths situated on the sites of a regular two-dimensional lattice.
Abstract: We consider a random Schrodinger operator in an external magnetic field. The random potential consists of delta functions of random strengths situated on the sites of a regular two-dimensional lattice. We characterize the spectrum in the lowest N Landau bands of this random Hamiltonian when the magnetic field is sufficiently strong, depending on N. We show that the spectrum in these bands is entirely pure point, that the energies coinciding with the Landau levels are infinitely degenerate and that the eigenfunctions corresponding to energies in the remainder of the spectrum are localized with a uniformly bounded localization length. By relating the Hamiltonian to a lattice operator we are able to use the Aizenman–Molchanov method to prove localization.

Journal ArticleDOI
TL;DR: In this paper, randomness is introduced both through the time between impulses, which is distributed exponentially, and through the sign of the impulses which are fixed in amplitude and orientation, and a general theorem which establishes the existence of invariant measures for the randomly forced problem is proved.

01 Jan 1999
TL;DR: This internal report lists several characteristics which an encryption algorithm exhibiting random behavior should possess, describes how the output for each candidate algorithm was evaluated for randomness, discusses what has been learned utilizing the NIST statistical tests, and finally provides an interpretation of the results.
Abstract: One of the criteria used to evaluate the AES candidate algorithms was their demonstrated suitability as random number generators. That is, the evaluation of their output utilizing statistical tests should not provide any means by which to computationally distinguish them from a truly random source. This internal report lists several characteristics which an encryption algorithm exhibiting random behavior should possess, describes how the output for each candidate algorithm was evaluated for randomness, discusses what has been learned utilizing the NIST statistical tests, and finally provides an interpretation of the results.