
Showing papers on "Randomness published in 1990"


Journal ArticleDOI
TL;DR: It is shown that it is possible to design "special quasirandom structures" (SQS's) that mimic for small N (even N=8) the first few, physically most relevant radial correlation functions of an infinite, perfectly random structure far better than the standard technique does.
Abstract: Structural models needed in calculations of properties of substitutionally random A_{1-x}B_x alloys are usually constructed by randomly occupying each of the N sites of a periodic cell by A or B. We show that it is possible to design "special quasirandom structures" (SQS's) that mimic for small N (even N=8) the first few, physically most relevant radial correlation functions of an infinite, perfectly random structure far better than the standard technique does. These SQS's are shown to be short-period superlattices of 4-16 atoms/cell whose layers are stacked in rather nonstandard orientations (e.g., [113], [331], and [115]). Since these SQS's mimic well the local atomic structure of the random alloy, their electronic properties, calculable via first-principles techniques, provide a representation of the electronic structure of the alloy. We demonstrate the usefulness of these SQS's by applying them to semiconductor alloys. We calculate their electronic structure, total energy, and equilibrium geometry, and compare the results to experimental data.

771 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that, starting from an arbitrary matching, the process of allowing randomly chosen blocking pairs to match will converge to a stable matching with probability one, and that every stable matching can arise.
Abstract: EMPIRICAL STUDIES OF TWO-SIDED MATCHING have so far concentrated on markets in which certain kinds of market failures were addressed by resorting to centralized, deterministic matching procedures. Loosely speaking, the results of these studies are that those centralized procedures which achieved stable outcomes resolved the market failures, while those markets organized through procedures that yielded unstable outcomes continued to fail. So the market failures seem to be associated with instability of the outcomes. But many entry-level labor markets and other two-sided matching situations don't employ centralized matching procedures, and yet aren't observed to experience such failures. So we can conjecture that at least some of these markets may reach stable outcomes by means of decentralized decision making. And decentralized decision making in complex environments presumably introduces some randomness into what matchings are achieved. However, as far as we are aware, no nondeterministic models leading to stable outcomes have yet been studied. The present paper demonstrates that, starting from an arbitrary matching, the process of allowing randomly chosen blocking pairs to match will converge to a stable matching with probability one. (This resolves an open question raised by Knuth (1976), who showed that such a process may cycle.) Furthermore, every stable matching can arise in this way.
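The random process the paper studies is easy to simulate. The sketch below (hypothetical helper names; complete preference lists and a one-to-one market are assumed) repeatedly satisfies a randomly chosen blocking pair until none remains; by the paper's result, this terminates at a stable matching with probability one:

```python
import random

def is_blocking(m, w, match_m, match_w, pref_m, pref_w):
    """(m, w) block a matching if each prefers the other to their current partner."""
    def prefers(prefs, new, cur):
        return cur is None or prefs.index(new) < prefs.index(cur)
    return prefers(pref_m[m], w, match_m[m]) and prefers(pref_w[w], m, match_w[w])

def random_paths_to_stability(pref_m, pref_w, seed=0):
    """Satisfy randomly chosen blocking pairs until no blocking pair remains."""
    rng = random.Random(seed)
    match_m = {m: None for m in pref_m}
    match_w = {w: None for w in pref_w}
    while True:
        blocking = [(m, w) for m in pref_m for w in pref_m[m]
                    if is_blocking(m, w, match_m, match_w, pref_m, pref_w)]
        if not blocking:
            return match_m  # no blocking pair left: the matching is stable
        m, w = rng.choice(blocking)
        # divorce the current partners of m and w, then match m with w
        if match_m[m] is not None:
            match_w[match_m[m]] = None
        if match_w[w] is not None:
            match_m[match_w[w]] = None
        match_m[m], match_w[w] = w, m
```

Note that the theorem is about the random process itself, not this particular implementation; with a fixed seed the simulation merely exhibits one of the convergent sample paths.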

365 citations


Proceedings ArticleDOI
22 Oct 1990
TL;DR: The authors present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent, and two of the constructions are based on bit sequences that are widely believed to possess randomness properties.
Abstract: The authors present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is O(log log n + k + log 1/ε), where ε is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by J. Naor and M. Naor (1990). An advantage of the present constructions is their simplicity. Two of the constructions are based on bit sequences that are widely believed to possess randomness properties, and the results can be viewed as an explanation and establishment of these beliefs.
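For intuition about small sample spaces, a classical exact pairwise-independence (k = 2) construction, not one of the paper's three, uses inner products over GF(2): with an m-bit seed s and distinct nonzero vectors a_i, the n = 2^m - 1 bits ⟨a_i, s⟩ are individually uniform and pairwise independent, so a sample space of only 2^m points suffices instead of 2^n:

```python
def inner_product_bits(seed, vectors):
    """bit_i = <a_i, seed> over GF(2), i.e. the parity of popcount(a_i & seed)."""
    return [bin(a & seed).count("1") % 2 for a in vectors]
```

Pairwise independence holds because any two distinct nonzero vectors over GF(2) are linearly independent, so the map from seeds to a pair of output bits is a surjective linear map whose fibers all have equal size; this can be checked exhaustively for small m.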

292 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an approach towards a random number generator that passes all of the stringent tests for randomness they have put to it, and that is able to produce exactly the same sequence of uniform random variables on a wide variety of computers, including the TRS-80, Apple, Macintosh, Commodore, Kaypro, IBM PC, AT, PC and AT clones, Sun, VAX, IBM 360/370, 3090, Amdahl, CDC Cyber, and even 205 and ETA supercomputers.

276 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that in d=2 dimensions the Gibbs state of the Random-Field Ising Model is unique for almost all field configurations, that randomness makes the latent heat of the Random-Bond Potts Model vanish at the transition point, and that continuous symmetry breaking is suppressed by randomness in dimensions d ≤ 4.
Abstract: Frozen-in disorder in an otherwise homogeneous system is modeled by interaction terms with random coefficients, given by independent random variables with a translation-invariant distribution. For such systems, it is proven that in d=2 dimensions there can be no first-order phase transition associated with discontinuities in the thermal average of a quantity coupled to the randomized parameter. Discontinuities which would amount to a continuous symmetry breaking, in systems which are (stochastically) invariant under the action of a continuous subgroup of O(N), are suppressed by the randomness in dimensions d ≤ 4. Specific implications are found in the Random-Field Ising Model, for which we conclude that in d=2 dimensions, at all (β, h), the Gibbs state is unique for almost all field configurations, and in the Random-Bond Potts Model, where the general phenomenon is manifested in the vanishing of the latent heat at the transition point. The results are explained by the argument of Imry and Ma [1]. The proofs involve the analysis of fluctuations of free energy differences, which are shown (using martingale techniques) to be Gaussian on the suitable scale.

267 citations


Patent
16 Apr 1990
TL;DR: In this paper, information-theoretic notions are employed to establish the predictability of a random number generated from a circuit exhibiting chaos in order to obtain a number from a sequence of numbers with a known level of randomness and security.
Abstract: Information-theoretic notions are employed to establish the predictability of a random number generated from a circuit exhibiting chaos in order to obtain a number from a sequence of numbers with a known level of randomness and security. The method provides a measure of information loss whereby one may select the number of iterations before or between bit sampling in order to extract a secure pseudo-random number. A chaotic output is obtained by use of a sample and hold circuit coupled in a feedback loop to a variable frequency oscillator, such as a voltage controlled oscillator circuit, and operated with a positive Lyapunov exponent. A source signal generator, such as a periodic wave generator, provides a driving signal to the sample and hold circuit.
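The patent's generator is an analog sample-and-hold/VCO circuit, but the underlying idea of iterating a chaotic system with a positive Lyapunov exponent and discarding iterations between bit samples can be sketched in software. The logistic map below is an illustrative stand-in for the circuit, and the parameter choices are arbitrary, not taken from the patent:

```python
def chaotic_bits(x0, n_bits, skip=8):
    """Extract bits from the logistic map x -> 4x(1-x), which has Lyapunov
    exponent ln 2 > 0, discarding `skip` iterations between samples so that
    successive bits are less correlated (the patent's 'iterations between
    bit sampling' idea)."""
    x, bits = x0, []
    for _ in range(n_bits):
        for _ in range(skip + 1):
            x = 4.0 * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

In floating point this is only pseudo-chaos (the orbit is eventually periodic), which is one reason the patent works with a physical circuit and an information-theoretic measure of predictability rather than a purely digital iteration.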

141 citations


Journal ArticleDOI
TL;DR: The transition to chaos for random dynamical systems is studied and the long-time particle distribution that evolves from an initial smooth distribution exhibits an extreme form of temporally intermittent bursting whose scaling is investigated.
Abstract: We study the transition to chaos for random dynamical systems. Near the transition, on the chaotic side, the long-time particle distribution (which is fractal) that evolves from an initial smooth distribution exhibits an extreme form of temporally intermittent bursting whose scaling we investigate. As a physical example, the problem of the distribution of particles floating on the surface of a fluid whose flow velocity has a complicated time dependence is considered.

139 citations


Journal ArticleDOI
TL;DR: In this paper, the authors take a step towards selecting the optimal yield randomness, jointly with lot sizing decisions, and derive conditions for the superiority of diversification between two sources with distinct yield distributions over a single source.
Abstract: Existing production/inventory models with random (variable) yield take the yield distribution as given. This work takes a step towards selecting the optimal yield randomness, jointly with lot sizing decisions. First, we analyze an EOQ model where yield variance and lot size are to be selected simultaneously. Two different cost structures are considered. Secondly, we consider source diversification (‘second sourcing’) as a means of reducing effective yield randomness, and trade its benefits against its costs. Conditions for the superiority of diversification between two sources with distinct yield distributions over a single source are derived. The optimal number of identical sources is also analyzed. Some comments on the congruence of the results with recent JIT practices are provided.

133 citations


Journal ArticleDOI
TL;DR: The Lyapunov number partition function method is used to calculate the spectra of generalized dimensions and of scaling indices for these attractors and special attention is devoted to the numerical implementation of the method and the evaluation of statistical errors due to the finite number of sample orbits.
Abstract: We consider qualitative and quantitative properties of "snapshot attractors" of random maps. By a random map we mean that the parameters that occur in the map vary randomly from iteration to iteration according to some probability distribution. By a "snapshot attractor" we mean the measure resulting from many iterations of a cloud of initial conditions viewed at a single instant (i.e., iteration). In this paper we investigate the multifractal properties of these snapshot attractors. In particular, we use the Lyapunov number partition function method to calculate the spectra of generalized dimensions and of scaling indices for these attractors; special attention is devoted to the numerical implementation of the method and the evaluation of statistical errors due to the finite number of sample orbits. This work was motivated by problems in the convection of particles by chaotic fluid flows.

114 citations


Book ChapterDOI
Ivan Damgård
01 Feb 1990
TL;DR: This paper proposes a new candidate problem, namely the problem of predicting a sequence of consecutive Legendre (Jacobi) symbols modulo a prime (composite), when the starting point and possibly also the prime is unknown.
Abstract: Most of the work done in cryptography in the last few years depends on the hardness of a few specific number-theoretic problems, such as factoring, discrete log, etc. Since no one has so far been able to prove that these problems are genuinely hard, it is clearly of interest to find new candidates for hard problems. In this paper, we propose such a new candidate problem, namely the problem of predicting a sequence of consecutive Legendre (Jacobi) symbols modulo a prime (composite), when the starting point and possibly also the prime is unknown. Clearly, if this problem turns out to be hard, it can be used directly to construct a cryptographically strong pseudorandom bit generator. Its complexity seems to be unrelated to any of the well-known number-theoretical problems, whence it may be able to survive the discovery of fast factoring or discrete log algorithms. Although the randomness of Legendre sequences has been part of the folklore in number theory at least since the thirties, they have apparently not been considered for use in cryptography before.
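The proposed generator is simple to sketch. Using Euler's criterion to evaluate the Legendre symbol (adequate for illustration, though slower than quadratic reciprocity), consecutive symbols are mapped to bits; the prime below is tiny and purely illustrative, and in practice arguments divisible by p (symbol 0) would be skipped:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p is 1 for quadratic residues, p-1 for non-residues."""
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def legendre_bits(start, n, p):
    """Map the Legendre symbols of n consecutive arguments to bits
    (+1 -> 1, -1 -> 0). The hardness conjecture is that these bits are
    unpredictable when start (and possibly p) is kept secret; a small
    public p as used here offers no security whatsoever."""
    return [1 if legendre(start + i, p) == 1 else 0 for i in range(n)]
```

For p = 11 the quadratic residues are {1, 3, 4, 5, 9}, so the symbols starting at 1 begin +1, -1, +1, +1, +1.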

108 citations


Journal ArticleDOI
TL;DR: The Laplace equation can be solved in any two and three-dimensional porous medium by means of a vectorized numerical code as mentioned in this paper, which is applied to several structures such as random media derived from site percolation.
Abstract: The Laplace equation can be solved in any two‐ and three‐dimensional porous medium by means of a vectorized numerical code. It is applied to several structures such as random media derived from site percolation; close to the percolation threshold, the critical exponents are found to be very close to the ones corresponding to networks; the results are usefully compared to previous variational upper bounds and to the prediction of an approximate space renormalization. Media with double porosity such as catalyst pellets are also addressed. Finally the conductivity of most fractals is shown to follow an Archie’s law in the limit of large generation numbers; the exponents of the power laws can be retrieved by various renormalization arguments.

Journal ArticleDOI
TL;DR: In this article, the authors distinguish three faces of randomness (stochasticness, chaoticness, and typicalness) and survey the ways these lead to a mathematical definition of a random sequence.
Abstract: CONTENTS
Introduction.
Chapter I. The main notions and facts. § 1.1. The notion of randomness depends on a given probability distribution. § 1.2. Three faces of randomness: stochasticness, chaoticness, typicalness. § 1.3. Typical, chaotic and stochastic sequences: ways to a mathematical definition (1.3.1. Typicalness; 1.3.2. Chaoticness; 1.3.3. Stochasticness; 1.3.4. Comments). § 1.4. Typical and chaotic sequences: basic definitions, for the case of the uniform Bernoulli distribution (1.4.1. Typicalness; 1.4.2. Chaoticness).
Chapter II. Effectively null sets, constructive support, and typical sequences. § 2.1. Effectively null sets, computable distributions, and the statement of Martin-Löf's theorem. § 2.2. Proof of Martin-Löf's theorem. § 2.3. Different versions of the definition of the notion of typicalness (2.3.1. Schnorr's definition of typicalness; 2.3.2. Solovay's criterion for typicalness; 2.3.3. The axiomatic approach to the definition of typicalness).
Chapter III. Complexity, entropy, and chaotic sequences. § 3.1. Computable mappings. § 3.2. Kolmogorov's theorem; monotone entropy. § 3.3. Chaotic sequences.
Chapter IV. What is a random sequence? § 4.1. The proof of the Levin-Schnorr theorem for the uniform Bernoulli distribution. § 4.2. The case of an arbitrary probability distribution. § 4.3. The proofs of the lemmas.
Chapter V. Probabilistic machines, a priori probability, and randomness. § 5.1. Probabilistic machines. § 5.2. A priori probability. § 5.3. A priori probability and entropy. § 5.4. A priori probability and randomness.
Chapter VI. The frequency approach to the definition of a random sequence. § 6.1. Von Mises' approach; the Church and Kolmogorov-Loveland definitions. § 6.2. Relations between different definitions; Ville's construction; Muchnik's theorem; Lambalgen's example (6.2.1. Relations between different definitions; 6.2.2. Ville's example; 6.2.3. Muchnik's theorem; 6.2.4. Lambalgen's example). § 6.3. A game-theoretic criterion for typicalness.
Addendum. A timid criticism regarding probability theory. References.

Journal ArticleDOI
TL;DR: In this article, the statistical properties of random quantum states are examined for four different kinds of random state: a pure state chosen at random with respect to the uniform measure on the unit sphere in a finite-dimensional Hilbert space; a random pure state in a real space; a pure state chosen at random except that a certain expectation value is fixed; and a random mixed state with fixed eigenvalues.
Abstract: This paper examines the statistical properties of random quantum states, for four different kinds of random state: (1) a pure state chosen at random with respect to the uniform measure on the unit sphere in a finite-dimensional Hilbert space; (2) a random pure state in a real space; (3) a pure state chosen at random except that a certain expectation value is fixed; (4) a random mixed state with fixed eigenvalues. For the first two of these, we give examples of simple states of a model system, the kicked top, which have the statistical properties of random states. Interestingly, examples of both kinds of randomness can be found in the same system. In studying the last two kinds of random state, we obtain new results concerning the application of information theory to quantum systems.

Journal ArticleDOI
TL;DR: This work directly considers the phenomenon of wetting, which plays an important role in porous media; a wetting phase diagram is derived, and the experimentally observed features are found to be qualitatively consistent with the model, which contains no randomness.
Abstract: The wetting behavior of two-phase systems confined inside cylindrical pores is studied theoretically. The confined geometry gives rise to wetting configurations, or microstructures, which have no analog in the well-studied planar case. Many features observed in experiments on binary liquid mixtures in porous media, previously interpreted in terms of random fields, are shown to be consistent with wetting in a confined geometry with no randomness.

01 Jan 1990
TL;DR: The mathematical theory of Kolmogorov complexity has its roots in probability theory, combinatorics, and philosophical notions of randomness, and came to fruition using the recent development of the theory of algorithms as discussed by the authors.
Abstract: This chapter focuses on Kolmogorov complexity and its applications. The mathematical theory of Kolmogorov complexity contains deep and sophisticated mathematics. Yet, the amount of this mathematics that should be known to apply the notions fruitfully in widely divergent areas, from recursive function theory to chip technology, is very little. However, formal knowledge does not necessarily imply the wherewithal to apply it, especially so in the case of Kolmogorov complexity. Kolmogorov complexity has its roots in probability theory, combinatorics, and philosophical notions of randomness, and came to fruition using the recent development of the theory of algorithms. Shannon's classical information theory assigns a quantity of information to an ensemble of possible messages. All messages in the ensemble being equally probable, this quantity is the number of bits needed to count all possibilities. Each message in the ensemble can be communicated using this number of bits. However, it does not say anything about the number of bits needed to convey any individual message in the ensemble.

Journal ArticleDOI
TL;DR: In this article, it was shown that unless the randomness is nonessential, in the sense that lim Ψ_V/|V| has a unique value in the absolute (i.e., not just probabilistic) sense, the variance of such a quantity grows as the volume of V.
Abstract: An extensive quantity is a family of functions Ψ_V of random parameters, indexed by the finite regions V (subsets of ℤ^d) over which Ψ_V are additive up to corrections satisfying the boundary estimate stated below. It is shown that unless the randomness is nonessential, in the sense that lim Ψ_V/|V| has a unique value in the absolute (i.e., not just probabilistic) sense, the variance of such a quantity grows as the volume of V. Of particular interest is the free energy of a system with random couplings; for such Ψ_V, bounds are derived also for the generating function E(e^{tΨ}). In a separate application, variance bounds are used for an inequality concerning the characteristic exponents of directed polymers in a random environment.

Journal ArticleDOI
TL;DR: It is shown that the following two conditions are equivalent: (1) the existence of pseudorandom generators; (2) the existence of a pair of efficiently constructible distributions that are computationally indistinguishable but statistically very different.

Book
01 May 1990
TL;DR: In this paper, the importance of random phenomena occurring in nature is discussed, such as Brownian motion, certain reactions in Physical Chemistry and Biology, and intermittency in magnetic field generation by turbulent fluid motion.
Abstract: This book is about the importance of random phenomena occurring in nature. Cases are selected in which randomness is most important or crucial, such as Brownian motion, certain reactions in physical chemistry and biology, and intermittency in magnetic field generation by turbulent fluid motion. Due to "almighty chance", structures can originate from chaos even in linear problems. This idea complements as well as competes with a basic concept of synergetics, where structures appear mainly due to the non-linear nature of phenomena. The book takes a new look at the problem of structure formation in random media, giving a qualitative physical representation of modern conceptions: intermittency, fractals, percolation, and many examples from different fields of science.

Journal ArticleDOI
Miki Wadati
TL;DR: In this article, it is shown that the amplitude of a soliton propagating in a random medium decreases asymptotically as x^{-1/2}, x being the distance of propagation.
Abstract: A partial differential equation which describes nonlinear wave propagations in random media is presented. Based on the equation, behaviors of soliton propagations can be analysed exactly. Under the assumption of Gaussian white randomness, it is shown that the amplitude of a soliton decreases asymptotically as x^{-1/2}, x being the distance of propagation.

Journal ArticleDOI
TL;DR: This work extends the theory of the Hermitian optical phase operator to analyze the quantum phase properties of pairs of electromagnetic field modes and reveals the fundamental property of two-mode squeezed states that the phase sum is locked to the argument of the squeezing parameter.
Abstract: We extend the theory of the Hermitian optical phase operator to analyze the quantum phase properties of pairs of electromagnetic field modes. The operators representing the sum and difference of the two single-mode phases are simply the sum and difference of the two single-mode phase operators. The eigenvalue spectra of the sum and difference operators have widths of 4π, but phases differing by 2π are physically indistinguishable. This means that the phase sum and difference probability distributions must be cast into a 2π range. We obtain mod(2π) probability distributions for the phase sum and difference that unambiguously reveal the signatures of randomness, phase correlations, and phase locking. We use our approach to investigate the phase sum and difference properties for uncorrelated modes in random and partial phase states and the phase-locked properties of the two-mode squeezed vacuum states. We reveal the fundamental property of two-mode squeezed states that the phase sum is locked to the argument of the squeezing parameter. The variance of the phase sum depends dilogarithmically on 1+tanh r, where r is the magnitude of the squeezing parameter, vanishing in the large squeezing limit.

Book ChapterDOI
11 Aug 1990
TL;DR: The present work begins to answer this question by establishing that a single weakly random source of either model cannot be used to obtain a secure "one-time-pad" type of cryptosystem.
Abstract: The properties of weak sources of randomness have been investigated in many contexts and using several models of weakly random behaviour. For two such models, developed by Santha and Vazirani, and Chor and Goldreich, it is known that the output from one such source cannot be "compressed" to produce nearly random bits. At the same time, however, a single source is sufficient to solve problems in the randomized complexity classes BPP and RP. It is natural to ask exactly which tasks can be done using a single, weak source of randomness and which cannot. The present work begins to answer this question by establishing that a single weakly random source of either model cannot be used to obtain a secure "one-time-pad" type of cryptosystem.

Journal ArticleDOI
TL;DR: In this paper, a stochastic optimal control problem where the randomness is essentially concentrated in the stopping time terminating the process is formulated as an infinite-horizon optimization problem.
Abstract: This paper deals with a stochastic optimal control problem where the randomness is essentially concentrated in the stopping time terminating the process. If the stopping time is characterized by an intensity depending on the state and control variables, one can reformulate the problem equivalently as an infinite-horizon optimal control problem. Applying dynamic programming and minimum principle techniques to this associated deterministic control problem yields specific optimality conditions for the original stochastic control problem. It is also possible to characterize extremal steady states. The model is illustrated by an example related to the economics of technological innovation.
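The reformulation can be sketched as follows, under the assumptions that the state dynamics are deterministic between jumps, the process terminates at the random time τ with controlled intensity λ(x, u), f is a running cost, and Φ is a terminal cost paid at τ (the notation here is illustrative, not the paper's):

```latex
% Survival probability of the process under a state/control-dependent intensity:
\Pr(\tau > t) \;=\; e^{-\Lambda(t)}, \qquad
\Lambda(t) \;=\; \int_0^t \lambda\big(x(s),u(s)\big)\,ds .
% Expected cost up to the random stopping time, rewritten deterministically:
E\!\left[\int_0^{\tau} f\big(x(t),u(t)\big)\,dt \;+\; \Phi\big(x(\tau)\big)\right]
\;=\; \int_0^{\infty} e^{-\Lambda(t)}
      \Big[ f\big(x(t),u(t)\big) + \lambda\big(x(t),u(t)\big)\,\Phi\big(x(t)\big) \Big] dt .
```

The right-hand side is an infinite-horizon deterministic control problem in which λ acts as a state- and control-dependent discount rate, which is what makes dynamic programming and minimum principle techniques applicable.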

Journal ArticleDOI
TL;DR: It is suggested to test microphysical undecidability by physical processes with low extrinsic complexity, such as polarized laser light, so that the resulting sequence can be safely applied for all purposes requiring stochasticity and high complexity.

Journal ArticleDOI
TL;DR: In this paper, an extended Bose condensate can be stable in a random potential for a suitable weak-repulsive limit of a dense Bose gas, even though the non-interacting case is pathological.
Abstract: The authors demonstrate that an extended Bose condensate can be stable in a random potential for a suitable weak-repulsive limit of a dense Bose gas, even though the non-interacting case is pathological. The condensate exists primarily because the interactions allow screening of the random potential. This may happen even when the chemical potential is in the Lifshitz tails of the single-particle case. Indeed, the authors argue that there are no Lifshitz tail states in their dense but weakly-interacting system. Using a number-phase representation, they calculate the increase in the depletion of the condensate with increasing randomness (at fixed density) which indicates the eventual destruction of the condensed phase-perhaps to a localized phase. The physical picture discussed should be relevant to the understanding of helium thin films.

Journal ArticleDOI
TL;DR: In this paper, an extensive computer simulation study is performed for the 2D Ising model with randomness in the lattice couplings; the results on the scaling of the maximum of the specific heat and on the magnetization critical exponent show pure Ising-model critical behaviour for the amount of randomness studied in the present work.

Book ChapterDOI
03 Jan 1990
TL;DR: The theory of nonlinear systems has become increasingly useful and relevant to the study of empirical dynamics and, as outlined by Rossler (1983), more complex structures “beyond chaos” may await discovery.
Abstract: The theory of nonlinear systems has become increasingly useful and relevant to the study of empirical dynamics. When a system produces irregularity in one or more of its variables, it is of interest whether this behavior results from randomness (meaning that the number of degrees of freedom is infinite) or whether a finite, and possibly small, number of degrees of freedom has produced the chaos (meaning that the system is deterministic). Our understanding of deterministic systems was greatly enhanced when Lorenz (1963) discovered that a simple system with as few as three differential equations can generate totally irregular fluctuations of the system’s variables — a phenomenon nowadays generally referred to as deterministic chaos. The prominent features of chaos are unpredictability over extended time periods, and sensitive dependence on initial conditions. Once started with specific initial values, the system’s future might be totally different from what it would have been if it had been started under slightly different initial conditions. Chaos may not be the ultimate description for a system’s irregular dynamic. As outlined by Rossler (1983), more complex structures “beyond chaos” may await discovery.

Journal ArticleDOI
TL;DR: This paper introduces a time-domain model based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation; the accompanying algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present.
Abstract: While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

Journal ArticleDOI
TL;DR: A general method for designing chaotic circuits with prescribed random properties is proposed and numerous examples are given to illustrate the theory and methods developed in this paper.
Abstract: We have designed, built and tested a switched-capacitor circuit for generating pseudo-random signals. The circuit is an electronic implementation of a one-dimensional discrete map, which has been fully understood mathematically. The time series of the circuit has extremely rich dynamics, e.g. aperiodicity, non-asymptoticity, ergodicity and fractal dimension. That is, the circuit is indeed chaotic. Furthermore, a general method for designing chaotic circuits with prescribed random properties is proposed. Numerous examples are given to illustrate the theory and methods developed in this paper.

Journal ArticleDOI
TL;DR: In this article, a maximum-likelihood estimation procedure for Poisson regression models was developed and a set of measures of the influence of data on the fit and the parameter estimates was obtained.
Abstract: The validity of least-squares procedures commonly used nowadays for the analysis of single-crystal X-ray and neutron diffraction data is examined. An improved methodology that rests on sound statistical theory is proposed and turns out to be a fruitful way to consider any crystallographic refinement. A maximum-likelihood estimation procedure is developed for Poisson regression models. Measures of the goodness of fit (other than the R factor), generalized residuals and diagnostic plots are described. Confidence regions and intervals are also discussed. A set of measures of the influence of data on the fit and the parameter estimates is obtained for Poisson statistics. Finally, the effect of under- or over-dispersion of the data randomness with respect to a true Poisson distribution is considered, and model-independent estimates of this dispersion are discussed.
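In its simplest one-parameter form, maximum-likelihood Poisson regression reduces to Newton's method on the score equation. The model and data below are illustrative only, not the paper's crystallographic refinement setup:

```python
import math

def fit_poisson_rate(x, y, beta=0.0, tol=1e-10, max_iter=50):
    """Maximum-likelihood fit of the one-parameter log-linear Poisson model
    lambda_i = exp(beta * x_i), by Newton's method on the score equation
    sum_i x_i * (y_i - exp(beta * x_i)) = 0."""
    for _ in range(max_iter):
        mu = [math.exp(beta * xi) for xi in x]
        score = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        info = sum(xi * xi * mi for xi, mi in zip(x, mu))  # Fisher information
        step = score / info
        beta += step
        if abs(step) < tol:
            return beta
    return beta
```

Unlike least squares, the score equation weights each observation by its Poisson variance exp(beta * x_i), which is exactly the distinction between the least-squares and maximum-likelihood fits the paper examines.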

Journal ArticleDOI
TL;DR: A new type of Monte Carlo algorithm to calculate packing fraction and general particle dispersion characteristics for arbitrary random packs of spherical particles is presented, using a dimension‐reducing trick to turn a computationally intractable problem into a tractable one.
Abstract: A new type of Monte Carlo algorithm to calculate packing fraction and general particle dispersion characteristics for arbitrary random packs of spherical particles is presented. Given arbitrary quantities of arbitrary sizes with arbitrary mass densities, the algorithms calculate the close random packing fraction. If desired, they can return the position and type of each particle in the pack. Since every detail of the positions and types of particles in the pack is known, any pack characteristic can be calculated. The algorithms use a dimension‐reducing trick to turn a computationally intractable problem into a tractable one. Planned extensions and improvements of the algorithms are discussed.