
Showing papers on "Randomness published in 1979"


Journal ArticleDOI
TL;DR: In this article, the authors investigated the power of tests of randomness for spatial point patterns and found that edge-correction can substantially reduce the sampling fluctuations of a statistic and so boost the power of a test based on it.
Abstract: Tests of "randomness" and methods of edge-correction for spatial point patterns are surveyed. The asymptotic distribution theory and power of tests based on the nearest-neighbour distances and estimates of the variance function are investigated. A map of small objects is often described as "random" if it is consistent with the null hypothesis of a binomial or Poisson process. The usual first step in the analysis of such a pattern is a test of this null hypothesis; indeed the analysis is often confined to quoting a test statistic or its significance level as a "measure of non-randomness". The aim of this paper is to investigate the power of such tests, particularly tests based on nearest-neighbour distances, interpoint distances and estimators of moment measures, and to assess the efficiency of various corrections for edge-effects. One interesting conclusion is that edge-correction, such as that applied in the K-function estimator of Ripley (1977), can substantially reduce the sampling fluctuations of a statistic and so boost the power of a test based on it.

416 citations
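The nearest-neighbour tests the paper studies can be sketched in a few lines. Below is a minimal Clark-Evans-style ratio with no edge-correction; the function name, point count, and seed are my illustrative choices, not the paper's estimator, and the boundary bias that edge-correction removes is exactly what this naive version suffers from.

```python
import math
import random

def clark_evans_index(points, area=1.0):
    """Ratio of the observed mean nearest-neighbour distance to its
    expectation 1/(2*sqrt(density)) under complete spatial randomness
    (CSR). Values near 1 suggest randomness; <1 clustering, >1
    regularity. No edge-correction is applied here."""
    n = len(points)
    nn = []
    for i, (xi, yi) in enumerate(points):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(points) if j != i)
        nn.append(d)
    observed = sum(nn) / n
    expected = 0.5 / math.sqrt(n / area)
    return observed / expected

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
r = clark_evans_index(pts)
print(r)  # close to 1 for a CSR pattern
```

Under CSR the ratio hovers near 1; uncorrected boundary effects inflate it slightly, and the paper's point is that a corrected estimator both removes that bias and fluctuates less.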



Book ChapterDOI
30 Jun 1979
TL;DR: In this article, fast probabilistic algorithms, motivated by the success of the Rabin-Strassen-Solovay primality algorithm, are presented for testing polynomial identities and properties of systems of polynomials.
Abstract: The startling success of the Rabin-Strassen-Solovay primality algorithm, together with the intriguing foundational possibility that axioms of randomness may constitute a useful fundamental source of mathematical truth independent of the standard axiomatic structure of mathematics, suggests a vigorous search for probabilistic algorithms. In illustration of this observation, we present various fast probabilistic algorithms, with probability of correctness guaranteed a priori, for testing polynomial identities and properties of systems of polynomials. Ancillary fast algorithms for calculating resultants and Sturm sequences are given. Theorems of elementary geometry can be proved much more efficiently by the techniques presented than by any known artificial intelligence approach.

68 citations
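The flavour of these probabilistic identity tests can be shown with a random-evaluation sketch: evaluate both sides at random points modulo a prime and compare. The helper `probably_identical`, the modulus, and the trial count are my illustrative choices, not the paper's algorithms.

```python
import random

def probably_identical(f, g, nvars, trials=20, prime=2 ** 31 - 1):
    """Random-evaluation polynomial identity test: if the two
    polynomials ever disagree at a random point the identity is
    definitely false; if they always agree, it holds with high
    probability (the error shrinks with the field size and trials,
    as in Schwartz-Zippel-style bounds)."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(nvars)]
        if f(*point) % prime != g(*point) % prime:
            return False
    return True

random.seed(1)
lhs = lambda x, y: (x + y) ** 2
rhs = lambda x, y: x * x + 2 * x * y + y * y
bad = lambda x, y: x * x + x * y + y * y      # differs by the xy term
print(probably_identical(lhs, rhs, 2))  # True
print(probably_identical(lhs, bad, 2))  # False with overwhelming probability
```

A disagreeing pair is rejected unless every random point happens to kill the difference polynomial, which for a fixed low degree is astronomically unlikely.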


Journal ArticleDOI
A. Nadas1
01 May 1979
TL;DR: The probability distribution of the critical pathlength turns out to be a solution of an unconstrained minimization problem, which can be recast as a convex programming problem with linear constraints.
Abstract: A solution is offered to the problem of determining a probability distribution for the length of the longest path from source (start) to sink (finish) in an arbitrary PERT network (directed acyclic graph), as well as determining associated probabilities that the various paths are critical ("bottleneck probabilities"). It is assumed that the durations of delays encountered at a node are random variables having known but arbitrary probability distributions with finite expected values. The solution offered is, in a certain sense, a worst-case bound over all possible joint distributions of delays for given marginal distributions for delays. This research was motivated by the engineering problem of the timing analysis of computer hardware logic block graphs where randomness in circuit delay is associated with manufacturing variations. The probability distribution of the critical pathlength turns out to be a solution of an unconstrained minimization problem, which can be recast as a convex programming problem with linear constraints. The probability that a given path is critical turns out to be the Lagrange multiplier associated with the constraint determined by the path. The discrete version of the problem can be solved numerically by means of various parametric linear programming formulations, in particular by one which is efficiently solved by Fulkerson's network flow algorithm for project cost curves.

53 citations
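For intuition about the underlying longest-path computation, here is a Monte Carlo sketch with independently sampled node delays. Note that the paper's bound is worst-case over joint laws, whereas this simulation assumes independence; the toy network, samplers, and helper name are hypothetical.

```python
import random

def critical_path_samples(edges, delays, source, sink, nsamples=1000):
    """Sample a random delay at every node and take the longest
    source-to-sink path by dynamic programming over a topological
    order. `edges` maps node -> successors; `delays` maps node -> a
    zero-argument sampler."""
    order, seen = [], set()

    def visit(u):                      # depth-first topological sort
        if u in seen:
            return
        seen.add(u)
        for v in edges.get(u, []):
            visit(v)
        order.append(u)

    visit(source)
    order.reverse()

    samples = []
    for _ in range(nsamples):
        d = {u: delays[u]() for u in order}
        longest = {u: float('-inf') for u in order}
        longest[source] = d[source]
        for u in order:
            for v in edges.get(u, []):
                longest[v] = max(longest[v], longest[u] + d[v])
        samples.append(longest[sink])
    return samples

random.seed(2)
# toy PERT network: two parallel activities between start and finish
edges = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t'], 't': []}
delays = {'s': lambda: 0.0, 'a': lambda: random.expovariate(1.0),
          'b': lambda: random.expovariate(0.5), 't': lambda: 0.0}
xs = critical_path_samples(edges, delays, 's', 't')
mean_len = sum(xs) / len(xs)
print(mean_len)  # estimates E[max(Exp(1), Exp(1/2))] = 7/3
```

The same dynamic program, run over the discrete delay supports rather than samples, is what the parametric linear programming formulations operate on.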


Journal ArticleDOI
TL;DR: In this paper, a technique used in pseudorandom number generation is to combine two or more different generators with the goal of producing a new generator with improved randomness properties, and it is shown that in a strong sense the combined generator does offer improvement.
Abstract: A technique used in pseudorandom number generation is to combine two or more different generators with the goal of producing a new generator with improved randomness properties. We study such a class of generators and show that in a strong sense the combined generator does offer improvement. Our approach applies results from majorization theory.

31 citations
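A minimal sketch of the idea, assuming the classical add-modulo-1 combination scheme; the specific LCG parameters, including RANDU as a deliberately poor input, are my choices for illustration.

```python
def lcg(seed, a, c, m):
    """A (possibly weak) linear congruential generator yielding
    values in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def combined(g1, g2):
    """Combine two streams by adding outputs modulo 1, one of the
    classical combination schemes of the kind the paper analyses via
    majorization: the sum mod 1 is at least as close to uniform as
    either input stream."""
    for u, v in zip(g1, g2):
        yield (u + v) % 1.0

gen = combined(lcg(1, 1103515245, 12345, 2 ** 31),
               lcg(7, 65539, 0, 2 ** 31))   # second input is RANDU
sample = [next(gen) for _ in range(10000)]
mean = sum(sample) / len(sample)
print(mean)  # near 0.5 if the combined stream is roughly uniform
```

Even with the notoriously bad RANDU as one input, the combined stream's marginal distribution is no worse than the better of the two inputs, which is the qualitative content of the majorization result.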



Journal ArticleDOI
TL;DR: In this article, a numerical study of two-dimensional electron systems in strong magnetic fields in the presence of both random potential and electron-electron interaction is performed, and the results exhibit behaviour ranging from the Wigner crystallisation or the charge density wave with strong Coulomb interaction to the previously reported Anderson localisation with dominant random potential.
Abstract: A numerical study of two-dimensional electron systems in strong magnetic fields in the presence of both random potential and electron-electron interaction is performed. The interplay of randomness and many-body effects in this quantum system is investigated by varying the relative strength of random potential and Coulomb interaction. The electron-electron interaction is treated in the self-consistent Hartree-Fock approximation. The formalism covers arbitrary value of the relative strength of random potential and Coulomb interaction, and the results exhibit behaviour ranging from the Wigner crystallisation or the charge density wave with strong Coulomb interaction to the previously reported Anderson localisation with dominant random potential. It is shown that, in the general case when both random potential and electron-electron interaction are present with comparable magnitudes, the ground state of the system consists of an amorphous array of charge density maxima, which may be called a 'Wigner glass' or a 'charge density glass'. A discussion is made of the behaviour of the charge density in the ground state and its dependence on the occupancy of a Landau sub-band.

23 citations


Journal ArticleDOI
TL;DR: In this article, a renormalization-group method is used to study localization of electrons in a random system, and a position-space method yields a localization edge in three dimensions.
Abstract: A renormalization-group method is used to study localization of electrons in a random system. A position-space method yields a localization edge in three dimensions. In two dimensions our results indicate localization even for small randomness in agreement with Abrahams, Anderson, Licciardello, and Ramakrishnan; however we cannot rule out the existence of a localization edge.

18 citations


Journal ArticleDOI
TL;DR: Extensive computer experiments have demonstrated that this is a convenient method for generating sequences of pseudo-random numbers and despite the eventual domination of cumulative roundoff errors the asymptotic statistical features of the mixing are preserved.

15 citations


Proceedings ArticleDOI
29 Oct 1979
TL;DR: It is shown that in certain situations parallelism and stochastic features ('distributed random choices') are provably more powerful than either parallelism or randomness alone.
Abstract: We study the power of RAM acceptors with several instruction sets. We exhibit several instances where the availability of the division operator increases the power of the acceptors. We also show that in certain situations parallelism and stochastic features ('distributed random choices') are provably more powerful than either parallelism or randomness alone. We relate the class of probabilistic Turing machine computations to random access machines with multiplication (but without boolean vector operations). Again, the availability of integer division seems to play a crucial role in these results.

13 citations


Journal ArticleDOI
TL;DR: In this article, the eigenvectors of W are studied, and evidence is given that they behave in a completely chaotic manner, described in terms of the normalized uniform (Haar) measure on the group of orthogonal transformations of a finite-dimensional space.
Abstract: A model for the generation of neural connections at birth led to the study of W, a random, symmetric, nonnegative definite linear operator defined on a finite, but very large, dimensional Euclidean space [1]. A limit law, as the dimension increases, on the eigenvalue spectrum of W was proven, implying that realizations of W (being identified with organisms in a species) appear totally different on the microscopic level and yet have almost identical spectral densities.The present paper considers the eigenvectors of W. Evidence is given to support the conjecture that, contrary to the deterministic aspect of the eigenvalues, the eigenvectors behave in a completely chaotic manner, which is described in terms of the normalized uniform (Haar) measure on the group of orthogonal transformations on a finite dimensional space. The validity of the conjecture would imply a tabula rasa property on the ensemble (“species”) of all realizations of W.


Journal ArticleDOI
TL;DR: In this article, the problem of propagation of scalar waves over a random surface was formulated as a wave equation in the new coordinates with an additional term, the fluctuation operator, which depends on derivatives of the surface in space and time.
Abstract: We give a formulation of the problem of propagation of scalar waves over a random surface. By a judicious choice of variables we are able to show that this situation is equivalent to propagation of these waves through a medium of random fluctuations with fluctuating source and receiver. The wave equation in the new coordinates has an additional term, the fluctuation operator, which depends on derivatives of the surface in space and time. An expansion in the fluctuation operator is given which guarantees the desired boundary conditions at every order. We treat both the case where the surface is time dependent, such as the sea surface, and the case where it is fixed in time. Also discussed is the situation where the source and receiver lie between the random surface and another, possibly also random, surface. In detail we consider acoustic waves for which the surfaces are pressure release. The method is directly applicable to electromagnetic waves and other boundary conditions.

Journal ArticleDOI
TL;DR: In this paper, the authors give limiting results for arrays {X_ij(m, n) : (i, j) ∈ D_mn} of binary random variables distributed as particular types of Markov random fields over m × n rectangular lattices D_mn.
Abstract: In this article we give limiting results for arrays {X_ij(m, n) : (i, j) ∈ D_mn} of binary random variables distributed as particular types of Markov random fields over m × n rectangular lattices D_mn. Under some sparseness conditions which restrict the number of X_ij(m, n)'s which are equal to one, we show that the random variables indexed by l = 1, ···, r converge to independent Poisson random variables for 0 < d_1 < d_2 < ··· < d_r as m → ∞ and n → ∞. The particular types of Markov random fields considered here provide clustering (or repulsion) alternatives to randomness and involve several parameters. The limiting results are used to consider statistical inference for these parameters. Finally, a simulation study is presented which examines the adequacy of the Poisson approximation and the inference techniques when the lattice dimensions are only moderately large.

Journal ArticleDOI
Masaki Goda1
TL;DR: In this paper, the authors generalized Matsuda and Ishii's theory, based on Furstenberg's convergence theorem on products of random matrices, to one-dimensional infinite disordered systems with off-diagonal randomness.
Abstract: The problem of localization of eigenstates is examined for one-dimensional infinite disordered systems with off-diagonal randomness. For this purpose Matsuda and Ishii's theory, based on Furstenberg's convergence theorem on products of random matrices, is generalized by introducing "irreducible sequences" S and "irreducible transfer matrices" Q* as useful mathematical tools. A Furstenberg-type theorem is established for the product of matrices associated with a Markov chain. This theorem leads to some conclusions about the localization of eigenstates, which are very similar, except for some minor differences, to those obtained by Matsuda and Ishii for systems with diagonal randomness only.
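The connection between products of random transfer matrices and localization can be illustrated numerically. The sketch below estimates the Lyapunov exponent for a tight-binding chain with random hoppings, a standard stand-in for off-diagonal randomness; the model, energy, and disorder law are my illustrative assumptions, not the paper's irreducible-matrix construction.

```python
import math
import random

def lyapunov_exponent(nsteps=200000, seed=3):
    """Estimate the Lyapunov exponent of the product of random 2x2
    transfer matrices for psi_{n+1} = (E/t_n) psi_n -
    (t_{n-1}/t_n) psi_{n-1}, a 1-D chain with random hopping
    amplitudes t_n (off-diagonal disorder). A positive exponent
    signals exponentially localized eigenstates, the situation
    Furstenberg-type theorems address."""
    random.seed(seed)
    E = 1.0                       # energy away from the band centre
    v = (1.0, 0.0)                # current (psi_n, psi_{n-1}) direction
    log_norm = 0.0
    t_prev = random.uniform(0.5, 1.5)
    for _ in range(nsteps):
        t = random.uniform(0.5, 1.5)          # random hopping
        a, b = (E / t) * v[0] - (t_prev / t) * v[1], v[0]
        norm = math.hypot(a, b)
        log_norm += math.log(norm)            # accumulate growth rate
        v = (a / norm, b / norm)              # renormalize the vector
        t_prev = t
    return log_norm / nsteps

gamma = lyapunov_exponent()
print(gamma)  # positive: states are localized at this energy
```

The inverse of the estimated exponent gives a rough localization length in lattice units.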

Book ChapterDOI
TL;DR: Expressions are given, in terms of Kolmogorov's and other complexities of the initial segments of x, whose difference from d(x|P) is bounded by a constant.

Journal ArticleDOI
TL;DR: In this paper, the effect on cable reliability of random cyclic loading, such as that generated by the wave-induced rocking of ocean vessels deploying these cables, is examined, and exact results are obtained concerning the distribution of bundle or cable failure time, and especially the mean failure time.
Abstract: The effect on cable reliability of random cyclic loading such as that generated by the wave-induced rocking of ocean vessels deploying these cables is examined. A simple model yielding exact formulas is first explored. In this model, the failure time of a single fiber under a constant load is assumed to be exponentially distributed, and the random loadings are a two-state stationary Markov process. The effect of load on failure time is assumed to follow a power law breakdown rule. In this setting, exact results concerning the distribution of bundle or cable failure time, and especially the mean failure time, are obtained. Where the fluctuations in load are frequent relative to bundle life, such as may occur in long-lived cables, it is shown that randomness in load tends to decrease mean bundle life, but it is suggested that the reduction in mean life often can be restored by modestly reducing the base load on the structure or by modestly increasing the number of elements in the bundle. In later pages this simple model is extended to cover a broader range of materials and random loadings. Asymptotic distributions and mean failure times are given where fibers follow a Weibull distribution of failure time under constant load, and loads are general non-negative stationary processes subject only to some mild condition of asymptotic independence. When the power law breakdown exponent is large, the mean time to bundle failure depends heavily on the exact form of the marginal probability distribution for the random load process and cannot be summarized by the first two moments of this distribution alone.
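A constant-load special case of this model is easy to simulate: exponential fibers under equal load sharing with a power-law breakdown rule. The helper below is a hypothetical sketch of mine; the paper's Markov-modulated loading is not included.

```python
import random

def mean_bundle_life(n, total_load, rho, nsims=2000, seed=4):
    """Monte Carlo estimate of mean failure time for an
    equal-load-sharing bundle of n exponential fibers, where a fiber
    carrying load l fails at hazard rate l**rho (power-law breakdown).
    With k intact fibers each carries total_load/k, and the first of
    the k failures arrives at rate k * (total_load/k)**rho."""
    random.seed(seed)
    total = 0.0
    for _ in range(nsims):
        t = 0.0
        for k in range(n, 0, -1):           # k fibers still intact
            per_fiber = total_load / k
            rate = k * per_fiber ** rho
            t += random.expovariate(rate)   # time to next fiber failure
        total += t
    return total / nsims

m = mean_bundle_life(n=10, total_load=10.0, rho=2.0)
print(m)
```

For this case the exact mean is the sum over k of 1/(k (L/k)^ρ); with n = 10, L = 10, ρ = 2 that is 0.55, a useful check on the simulation.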

Journal ArticleDOI
TL;DR: The generation of long, high quality random number sequences for Monte Carlo simulations using minicomputers is considered and a recommendation is given to authors of Monte Carlo papers to specify their random number generator and to describe the randomness testing which that generator has undergone.
Abstract: The generation of long, high quality random number sequences for Monte Carlo simulations using minicomputers is considered. The importance of the thorough testing of Monte Carlo random number generators is emphasized. A recommendation is given to authors of Monte Carlo papers to specify their random number generator and to describe the randomness testing which that generator has undergone.
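In the spirit of the paper's recommendation, a minimal empirical test one might run and report is a chi-square test of uniformity; the bin count and the use of Python's built-in generator are my illustrative choices.

```python
import random

def chi_square_uniformity(sample, nbins=10):
    """Chi-square statistic for uniformity of (0, 1) numbers over
    equal-width bins. Under the null it is approximately chi-square
    with nbins - 1 degrees of freedom; for 9 d.f. the 5% critical
    value is about 16.92."""
    counts = [0] * nbins
    for u in sample:
        counts[min(int(u * nbins), nbins - 1)] += 1
    expected = len(sample) / nbins
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(5)
sample = [random.random() for _ in range(10000)]
stat = chi_square_uniformity(sample)
print(stat)  # a good generator usually stays below the 5% critical value
```

A single test like this is of course only one entry in the battery of empirical tests the paper urges authors to describe.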

Journal ArticleDOI
TL;DR: In this paper, the transition point of the random Ising model on the square lattice is exactly calculated under the assumption of quenched randomness of bond defects using duality transformation and replica method.
Abstract: The transition point of the random Ising model on the square lattice is exactly calculated under the assumption of quenched randomness of bond defects. The author makes use of a duality transformation and the replica method. The transition point thus obtained is given by exp(-2K_c) = 2^(1 - 1/(2p)) - 1, which satisfies the boundary conditions: T_c → 0 as p → p_c = 1/2, and K_c → (1/2)ln(√2 + 1) as p → 1.
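The quoted transition point can be checked numerically (note that exp((1 − 1/(2p)) ln 2) is just 2^(1 − 1/(2p))); a small sketch verifying both boundary conditions stated in the abstract:

```python
import math

def kc(p):
    """Critical coupling from the quoted duality result,
    exp(-2*Kc) = 2**(1 - 1/(2*p)) - 1, defined for p > 1/2."""
    return -0.5 * math.log(2 ** (1 - 1 / (2 * p)) - 1)

print(kc(1.0))    # equals (1/2) ln(sqrt(2) + 1), the pure-lattice value
print(kc(0.51))   # large Kc, i.e. Tc -> 0, as p approaches 1/2
```

Kc grows without bound as p decreases toward 1/2, matching the stated limit Tc → 0.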

Journal ArticleDOI
TL;DR: In this paper, alternative aggregation assumptions in the context of probabilistic spatial choice models for localized samples and single-choice data are discussed in terms of cognitive-behavioral geography.
Abstract: Alternative aggregation assumptions are discussed in the context of probabilistic spatial choice models for localized samples and single-choice data. Three aggregation schemes are treated: sampling randomness, simply-aggregated intrinsic randomness and randomly-aggregated intrinsic randomness. The first and the last are consistent with heterogeneous populations, but simple aggregation imposes strong conditions on population homogeneity. Some implications for cognitive-behavioral geography are considered in reference to the constant and random utility models.

Journal ArticleDOI
TL;DR: In this article, the authors apply the homomorphic cluster coherent potential approximation (HCPA) to systems with site-diagonal and/or offdiagonal randomness, and show numerically that the average Green's function obtained on the basis of the HCPA is analytic off the real axis on the complex energy plane.
Abstract: We apply the homomorphic cluster coherent potential approximation (HCPA) to systems with site-diagonal and/or off-diagonal randomness. We show numerically that the average Green's function obtained on the basis of the HCPA is analytic off the real axis on the complex energy plane. We also show that the HCPA reproduces the ordinary single-site CPA and the molecular CPA when systems include only diagonal disorder. The HCPA is advantageous in that it can deal with both site-diagonal and off-diagonal disorder in a unified manner. We also present a numerical result for a disordered chain by taking both kinds of disorder into account. The density of states calculated in the HCPA is in good agreement with the result of a computer simulation.

Journal ArticleDOI
TL;DR: Von Neumann's method of generating random variables with the exponential distribution and Forsythe's method for obtaining distributions with densities of the form e^(-G(x)) are generalized to apply to certain power series representations.
Abstract: Von Neumann's method of generating random variables with the exponential distribution and Forsythe's method for obtaining distributions with densities of the form e^(-G(x)) are generalized to apply to certain power series representations. The flexibility of the power series methods is illustrated by algorithms for the Cauchy and geometric distributions.
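Von Neumann's comparison method, which the paper generalizes, can be stated in a few lines. This sketch is the classical Exp(1) version, not the paper's power-series generalization: draw uniforms while they decrease, accept on an odd run length, otherwise add 1 and retry, using only comparisons and no logarithms.

```python
import random

def von_neumann_exponential():
    """Von Neumann's comparison method for Exp(1) variates. Within a
    round, uniforms are drawn while they keep decreasing; if the run
    length is odd the first uniform of the round is accepted (the run
    -length parity makes its density proportional to e^{-u} on (0,1)),
    otherwise the integer part is incremented and a new round starts."""
    x = 0
    while True:
        u1 = u = random.random()
        k = 1
        while True:
            u2 = random.random()
            if u2 > u:          # the decreasing run has ended
                break
            u = u2
            k += 1
        if k % 2 == 1:          # odd run length: accept
            return x + u1
        x += 1                  # even run length: shift and retry

random.seed(6)
sample = [von_neumann_exponential() for _ in range(20000)]
mean = sum(sample) / len(sample)
print(mean)  # near 1, the Exp(1) mean
```

The acceptance probability per round is 1 − 1/e, so only a handful of uniforms is consumed per variate on average.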

Journal ArticleDOI
TL;DR: In this article, the response of a movable rigid sphere embedded in an elastic medium to random compressional wave disturbances is considered, and the exact solution for the case of a narrow-band process is obtained numerically.

Journal ArticleDOI
TL;DR: Pielou's index of non-randomness α, which measures the distribution of spatial point patterns in the plane, is generalized to an index in n-dimensional Euclidean space E^n that measures the extent of regularity, randomness, or aggregation in distributions of spatial point patterns in E^n.


Journal ArticleDOI
TL;DR: A FORTRAN program—RANTEST—was written that performs various statistical and empirical tests of randomness on uniform (0, 1) pseudorandom numbers.

Journal ArticleDOI
TL;DR: In this article, it is shown that the decay time of a radioactive atom is an exponentially distributed random variable and that the decay times of individual atoms in an aggregate of identical radioactive atoms are independent.
Abstract: The good‐as‐new postulate applied to radioactivity says: The conditional probability that a radioactive atom will ’’live’’ past time s+t, given that it has lived past time s, is equal to the unconditional probability that it would have lived past time t, starting from time zero. This postulate leads directly to the conclusion that the decay time of a radioactive atom is an exponentially distributed random variable. Add the postulate that the decay times of individual atoms in an aggregate of identical radioactive atoms are independent, and a complete description of the random behavior of an aggregate can be constructed. This approach stresses from the outset the randomness inherent in the radioactive decay process and is offered as a complement to the deterministic approach that treats fluctuations about mean values. All results are exact within the context of the model and are applicable to aggregates of any size—no assumptions about large numbers are required.
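The good-as-new postulate is directly checkable by simulation: for exponential decay times, the conditional probability of surviving past s + t given survival past s should match the unconditional probability of surviving past t. The rates and horizons below are arbitrary illustrative values.

```python
import random

def memoryless_check(rate=1.0, s=0.7, t=1.2, n=200000, seed=7):
    """Compare P(T > s + t | T > s) with P(T > t) for exponential
    decay times T; the memoryless property says the two agree."""
    random.seed(seed)
    times = [random.expovariate(rate) for _ in range(n)]
    survivors = [x for x in times if x > s]
    cond = sum(x > s + t for x in survivors) / len(survivors)
    uncond = sum(x > t for x in times) / n
    return cond, uncond

cond, uncond = memoryless_check()
print(cond, uncond)  # both near exp(-1.2), about 0.30
```

Any non-exponential lifetime distribution would make the two estimates drift apart, which is the sense in which the postulate characterizes exponential decay.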

01 Jan 1979
TL;DR: In this paper, the probability of failure of elastic-plastic one-degree-of-freedom structures excited by stochastic infrequent dynamic loads is investigated; the structure is assumed to fail when the inelastic displacement exceeds a given permissible limit.
Abstract: The "probability of failure" analysis of elastic-plastic one-degree-of-freedom structures excited by stochastic infrequent dynamic loads is investigated. The structure is assumed to fail when the inelastic displacement exceeds a given permissible limit. The randomness of the structural response to a single excitation is analysed making use of simulation methods. The time-history of the inelastic displacement stochastic process is described by a suitable Markov process model. This approach is presented and its computational aspects discussed. Finally a numerical example is considered: the reliability of a portal frame subject to deterministic vertical loads in a seismic region is analysed and its sensitivity to changes in the problem parameters is discussed.

Proceedings ArticleDOI
01 Jan 1979
TL;DR: In this article, the authors survey several modeling approaches in this area, and also existing computer implementations of the models; it is the author's view that no model is really complete unless numbers are available for comparison with experimental data.
Abstract: There are many sources of acoustic signal fluctuations which set practical limits on coherent processing in space and time. In particular, the problem of acoustic fluctuations due to randomness in the ocean medium has received considerable recent attention. The purpose of this paper is to survey recent modeling approaches in this area, and also existing computer implementations of the models; it is the author's view that no model is really complete unless numbers are available for comparison with experimental data.