
Showing papers on "Randomness published in 1996"


Journal ArticleDOI
TL;DR: In this article, it was shown that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T + poly(S).

650 citations


Proceedings Article
01 Jan 1996
TL;DR: Of independent interest is the main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.
Abstract: We show that any randomized algorithm that runs in space S and time T and uses poly(S) random bits can be simulated using only O(S) random bits in space S and time T + poly(S). A deterministic simulation in space S follows. Of independent interest is our main technical tool: a procedure which extracts randomness from a defective random source using a small additional number of truly random bits.

513 citations


Journal ArticleDOI
TL;DR: ApEn (approximate entropy) is used to define maximal randomness for sequences of arbitrary length, with applicability to sequences as short as N = 5 points, and an infinite-sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case.
Abstract: The fundamental question "Are sequential data random?" arises in myriad contexts, often with severe data length constraints. Furthermore, there is frequently a critical need to delineate nonrandom sequences in terms of closeness to randomness--e.g., to evaluate the efficacy of therapy in medicine. We address both these issues from a computable framework via a quantification of regularity. We use ApEn (approximate entropy) to define maximal randomness for sequences of arbitrary length, indicating its applicability to sequences as short as N = 5 points. An infinite sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case. In the infinite sequence setting, we indicate how the "foundational" definition of independence in probability theory, and the definition of normality in number theory, reduce to limit theorems without rates of convergence, from which we utilize ApEn to address rates of convergence (of a deficit from maximal randomness), refining the aforementioned concepts in a computationally essential manner. Representative applications among many are indicated to assess (i) random number generation output; (ii) well-shuffled arrangements; and (iii) (the quality of) bootstrap replicates.
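As a concrete illustration of the quantity discussed above, the sketch below computes ApEn(m, r) for short sequences in Python. It is not the authors' code; the embedding dimension m = 2, tolerance r = 0.2, and the two toy sequences are illustrative choices only. An irregular (i.i.d. uniform) sequence should score well above a strictly periodic one.

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r): Phi_m(r) - Phi_{m+1}(r), where Phi_m
    averages the log-fraction of length-m templates within tolerance r."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def phi(mm):
        templates = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        frac = np.mean(dist <= r, axis=1)       # self-matches keep frac > 0
        return np.mean(np.log(frac))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
iid = rng.random(300)                 # irregular: i.i.d. uniform sequence
periodic = np.tile([0.1, 0.9], 150)   # regular: strictly alternating sequence
print("ApEn, i.i.d. uniform:", round(apen(iid), 3))       # near maximal for this (m, r)
print("ApEn, periodic      :", round(apen(periodic), 3))  # close to 0
```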

467 citations


Book ChapterDOI
TL;DR: This chapter presents and compares two statistical approaches to computer experiments; one introduces randomness by modeling the function f as a realization of a Gaussian process, the other by taking random input points.
Abstract: This chapter presents and compares two statistical approaches to computer experiments. Randomness is required to generate probability or confidence intervals. The first approach introduces randomness by modeling the function, f, as a realization of a Gaussian process; the second approach does so by taking random input points. Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Some of the most widely used computer models arise in the design of the semiconductors used in the computers themselves. There are two main statistical approaches to computer experiments, one based on Bayesian statistics and a frequentist one based on sampling techniques. A Bayesian approach to modeling simulator output can be based on a spatial model adapted from the geostatistical Kriging model. This approach treats the bias or systematic departure of the response surface from a linear model as the realization of a stationary random function. This model gives exact predictions at the observed responses and predicts with increasing error variance as the prediction point moves away from all the design points.
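A minimal sketch of the first (Bayesian/Kriging) approach described above, assuming a squared-exponential covariance and a noise-free scalar simulator; the kernel, its hyperparameters, and the toy function are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def kriging_predict(X, y, Xstar, length=0.3, var=1.0, jitter=1e-10):
    """Gaussian-process (Kriging) predictive mean and variance, treating the
    deterministic simulator output y = f(X) as a realization of a stationary
    Gaussian process with a squared-exponential covariance."""
    def k(A, B):
        return var * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length ** 2)
    K = k(X, X) + jitter * np.eye(len(X))        # jitter only for numerical stability
    Ks, Kss = k(Xstar, X), k(Xstar, Xstar)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.maximum(np.diag(cov), 0.0)

f = lambda x: np.sin(3 * x) + x                  # toy deterministic "computer model"
X = np.linspace(0.0, 1.0, 6)                     # design points
Xs = np.array([0.0, 0.1, 0.5, 0.9, 1.3])         # prediction points (last one extrapolates)
mean, pvar = kriging_predict(X, f(X), Xs)
print("prediction:", np.round(mean, 3))
print("error var :", np.round(pvar, 4))          # ~0 at design points, grows away from them
```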

326 citations


Proceedings ArticleDOI
01 Jul 1996
TL;DR: It is shown how to generate OT – in the sense of random number generation – using any one-way function in a black-box manner, thus placing OT on an equal footing with random number generation, and resolving an artificial asymmetry in the analysis of randomness and partially correlated randomness.
Abstract: The race to find the weakest possible assumptions on which to base cryptographic primitives such as oblivious transfer was abruptly halted by Impagliazzo's and Rudich's surprising result: basing oblivious transfer or other related problems on a black-box one-way permutation (as opposed to a one-way trapdoor permutation) is tantamount to showing P ≠ NP. In contrast, we show how to generate OT – in the sense of random number generation – using any one-way function in a black-box manner. That is, an initial "seed" of k OTs suffices to generate O(k^c) OTs. In turn, we show that such generation is impossible in an information-theoretic setting, thus placing OT on an equal footing with random number generation, and resolving an artificial asymmetry in the analysis of randomness and partially correlated randomness.

255 citations


Journal ArticleDOI
TL;DR: In this paper, the authors quantify the degree of local crystallization through an order parameter and study it as a function of time and initial conditions to determine the necessary conditions to obtain truly random systems.
Abstract: We present comprehensive results of large-scale molecular dynamics and Monte Carlo simulations of systems of dense hard spheres at volume fraction φ along the disordered, metastable branch of the phase diagram from the freezing point φf to the random-close-packing volume φc. It is shown that many previous simulations contained deficiencies caused by crystallization and finite-size effects. We quantify the degree of local crystallization through an order parameter and study it as a function of time and initial conditions to determine the necessary conditions to obtain truly random systems. This ordering criterion is used to show that previous methods employed to ascertain the degree of randomness are inadequate. A careful study of the pressure is also carried out along the entire metastable branch. In the vicinity of the random-close-packing fraction, we show that the pressure scales as (φc−φ)^(−γ), where γ = 1 and φc = 0.644 ± 0.005. Contrary to previous studies, we find no evidence of a thermodynamic glass transition.
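A small numeric illustration of the reported scaling near random close packing, using the fitted values γ = 1 and φc = 0.644 quoted above; the prefactor is arbitrary, so only the divergence as φ approaches φc is meaningful.

```python
# Divergence of the pressure near random close packing,
# p ~ (phi_c - phi)^(-gamma), with the fitted values quoted in the abstract.
phi_c, gamma = 0.644, 1.0
for phi in (0.58, 0.60, 0.62, 0.63, 0.64):
    print(f"phi = {phi:.2f}   p (arbitrary units) = {1.0 / (phi_c - phi) ** gamma:8.1f}")
```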

253 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the distribution of roots of polynomials of high degree with random coefficients which appear naturally in the context of quantum chaotic dynamics and showed that under quite general conditions their roots tend to concentrate near the unit circle in the complex plane.
Abstract: We investigate the distribution of roots of polynomials of high degree with random coefficients which, among others, appear naturally in the context of "quantum chaotic dynamics." It is shown that under quite general conditions their roots tend to concentrate near the unit circle in the complex plane. In order to further increase this tendency, we study in detail the particular case of self-inversive random polynomials and show that for them a finite portion of all roots lies exactly on the unit circle. Correlation functions of these roots are also computed analytically, and compared to the correlations of eigenvalues of random matrices. The problem of ergodicity of chaotic wavefunctions is also considered. For that purpose we introduce a family of random polynomials whose roots spread uniformly over phase space. While these results are consistent with random matrix theory predictions, they provide a new and different insight into the problem of quantum ergodicity. Special attention is devoted to the role of symmetries in the distribution of roots of random polynomials.
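The two statements above are easy to check numerically. The sketch below draws a random polynomial with complex Gaussian coefficients (an illustrative ensemble, not necessarily the paper's) and a self-inversive one built from the coefficient symmetry a_k = conj(a_{n−k}), then looks at how the root moduli distribute around the unit circle.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                              # polynomial degree

# generic complex Gaussian coefficients: roots crowd toward |z| = 1
a = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
r = np.abs(np.roots(a))
print("generic       : median |root| =", round(float(np.median(r)), 3),
      " fraction within 5% of the circle =", round(float(np.mean(np.abs(r - 1) < 0.05)), 2))

# self-inversive coefficients a_k = conj(a_{n-k}): a finite fraction of roots
# sits exactly on the unit circle (up to numerical precision here)
b = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
r_si = np.abs(np.roots(b + np.conj(b[::-1])))
print("self-inversive: fraction with | |root| - 1 | < 1e-6 =",
      round(float(np.mean(np.abs(r_si - 1) < 1e-6)), 2))
```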

171 citations


Journal ArticleDOI
TL;DR: In the case of Ornstein, Prohorov and other distances of the Kantorovich-Vasershtein type, it is shown that the finite-precision resolvability is equal to the rate-distortion function with a fidelity criterion derived from the accuracy measure, which leads to new results on nonstationary rate-distortion theory.
Abstract: We study the randomness necessary for the simulation of a random process with given distributions, in terms of the finite-precision resolvability of the process. Finite-precision resolvability is defined as the minimal random-bit rate required by the simulator as a function of the accuracy with which the distributions are replicated. The accuracy is quantified by means of various measures: variational distance, divergence, Ornstein (1973), Prohorov (1956) and related measures of distance between the distributions of the random process. In the case of Ornstein, Prohorov and other distances of the Kantorovich-Vasershtein type, we show that the finite-precision resolvability is equal to the rate-distortion function with a fidelity criterion derived from the accuracy measure. This connection leads to new results on nonstationary rate-distortion theory. In the case of variational distance, the resolvability of stationary ergodic processes is shown to equal entropy rate regardless of the allowed accuracy. In the case of normalized divergence, explicit expressions for finite-precision resolvability are obtained in many cases of interest, and connections with data compression with minimum probability of block error are shown.

161 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply large deviation theory to particle systems with a random mean-field interaction in the McKean-Vlasov limit and describe large deviations and normal fluctuations around the McKean-Vlasov equation.
Abstract: We apply large-deviation theory to particle systems with a random mean-field interaction in the McKean-Vlasov limit. In particular, we describe large deviations and normal fluctuations around the McKean-Vlasov equation. Due to the randomness in the interaction, the McKean-Vlasov equation is a collection of coupled PDEs indexed by the state space of the single components in the medium. As a result, the study of its solution and of the finite-size fluctuation around this solution requires some new ingredient as compared to existing techniques for nonrandom interaction.

134 citations


Journal ArticleDOI
TL;DR: The results convincingly show that the amino acid sequences in proteins differ from what is expected from random sequences in a statistically significant way, and can be interpreted as originating from anticorrelations in terms of an Ising spin model for the hydrophobicities.
Abstract: The question of whether proteins originate from random sequences of amino acids is addressed. A statistical analysis is performed in terms of blocked and random walk values formed by binary hydrophobic assignments of the amino acids along the protein chains. Theoretical expectations of these variables from random distributions of hydrophobicities are compared with those obtained from functional proteins. The results, which are based upon proteins in the SWISS-PROT data base, convincingly show that the amino acid sequences in proteins differ from what is expected from random sequences in a statistically significant way. By performing Fourier transforms on the random walks, one obtains additional evidence for nonrandomness of the distributions. We have also analyzed results from a synthetic model containing only two amino acid types, hydrophobic and hydrophilic. With reasonable criteria on good folding properties in terms of thermodynamical and kinetic behavior, sequences that fold well are isolated. Performing the same statistical analysis on the sequences that fold well indicates similar deviations from randomness as for the functional proteins. The deviations from randomness can be interpreted as originating from anticorrelations in terms of an Ising spin model for the hydrophobicities. Our results, which differ from some previous investigations using other methods, might have an impact on how permissive the protein folding process is with respect to sequence specificity: only sequences with nonrandom hydrophobicity distributions fold well. Other distributions give rise to energy landscapes with poor folding properties and hence did not survive the evolution.
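The flavor of the blocked random-walk analysis can be conveyed with a generic sketch: encode residues as ±1 hydrophobicities and compare the variance of block sums against the i.i.d. (random-sequence) expectation. The block size, the ±1 encoding, and the toy sequences are assumptions for illustration; this is not the authors' statistic.

```python
import numpy as np

def block_variance(seq, block):
    """Variance of block sums of a +/-1 hydrophobicity sequence. For an i.i.d.
    random sequence this is ~ block; anticorrelated sequences fall below it."""
    s = np.asarray(seq)
    nblocks = len(s) // block
    return s[:nblocks * block].reshape(nblocks, block).sum(axis=1).var()

rng = np.random.default_rng(2)
N, block = 10_000, 8
iid = rng.choice([-1, 1], size=N)          # null model: random hydrophobicity sequence
anti = np.tile([-1, 1], N // 2)            # toy maximally anticorrelated sequence
print("random sequence block variance :", round(float(block_variance(iid, block)), 2),
      "(i.i.d. expectation ~", block, ")")
print("anticorrelated block variance  :", round(float(block_variance(anti, block)), 2))
```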

112 citations


01 Jan 1996
TL;DR: Extractors are Boolean functions that allow, in some precise sense, extraction of randomness from somewhat random distributions as discussed by the authors, and the closely related "Dispersers", exhibit some of the most "random-like" properties of explicitly constructed combinatorial structures.
Abstract: Extractors are Boolean functions that allow, in some precise sense, extraction of randomness from somewhat random distributions. Extractors, and the closely related "Dispersers", exhibit some of the most "random-like" properties of explicitly constructed combinatorial structures. In turn, extractors and dispersers have many applications in "removing randomness" in various settings and in making randomized constructions explicit. This manuscript surveys extractors and dispersers: what they are, how they can be designed, and some of their applications. The work described is due to of a long list of research papers by various authors-most notably by David Zuckerman.

Journal ArticleDOI
TL;DR: The theoretical basis for the annealing method is derived, a novel design algorithm is developed from it, and its effectiveness and superior performance are demonstrated in the design of practical classifiers for some of the most popular structures currently in use.
Abstract: A global optimization method is introduced that minimizes the rate of misclassification. We first derive the theoretical basis for the method, on which we base the development of a novel design algorithm and demonstrate its effectiveness and superior performance in the design of practical classifiers for some of the most popular structures currently in use. The method, grounded in ideas from statistical physics and information theory, extends the deterministic annealing approach for optimization, both to incorporate structural constraints on data assignments to classes and to minimize the probability of error as the cost objective. During the design, data are assigned to classes in probability so as to minimize the expected classification error given a specified level of randomness, as measured by Shannon's entropy. The constrained optimization is equivalent to a free-energy minimization, motivating a deterministic annealing approach in which the entropy and expected misclassification cost are reduced with the temperature while enforcing the classifier's structure. In the limit, a hard classifier is obtained. This approach is applicable to a variety of classifier structures, including the widely used prototype-based, radial basis function, and multilayer perceptron classifiers. The method is compared with learning vector quantization, back propagation (BP), several radial basis function design techniques, as well as with paradigms for more directly optimizing all these structures to minimize probability of error. The annealing method achieves significant performance gains over other design methods on a number of benchmark examples from the literature, while often retaining design complexity comparable with or only moderately greater than that of strict descent methods. Substantial gains, both inside and outside the training set, are achieved for complicated examples involving high-dimensional data and large class overlap.
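A minimal sketch of the annealing idea for a prototype-based classifier: data are assigned to prototypes in probability through a Gibbs distribution at temperature T, and T is lowered until the assignments harden. This toy version is closer to deterministic-annealing clustering than to the full misclassification-cost design described above; the schedule, data, and initialization are made-up, and the labels are used only to score the final hard classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy 1-D, 2-class data
x = np.concatenate([rng.normal(-1.0, 0.5, 100), rng.normal(1.0, 0.5, 100)])
labels = np.array([0] * 100 + [1] * 100)

w = np.array([-0.1, 0.1])                         # one prototype per class, poor initial guess
for T in (5.0, 2.0, 1.0, 0.5, 0.1, 0.02):         # annealing schedule (assumed)
    d2 = (x[:, None] - w[None, :]) ** 2           # squared distance to each prototype
    g = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)   # shift for numerical stability
    p = g / g.sum(axis=1, keepdims=True)          # soft (Gibbs) assignments at temperature T
    w = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)   # re-estimate prototypes

hard = np.argmin((x[:, None] - w[None, :]) ** 2, axis=1)   # T -> 0 limit: hard classifier
print("prototypes    :", np.round(w, 2))
print("training error:", float(np.mean(hard != labels)))
```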

Journal ArticleDOI
TL;DR: In this paper, the authors show that a state-dependent control is optimal for continuous-time random walks in a random environment, which is a generalization of the celebrated Kelly strategy.
Abstract: We derive optimal gambling and investment policies for cases in which the underlying stochastic process has parameter values that are unobserved random variables. For the objective of maximizing logarithmic utility when the underlying stochastic process is a simple random walk in a random environment, we show that a state-dependent control is optimal, which is a generalization of the celebrated Kelly strategy: the optimal strategy is to bet a fraction of current wealth equal to a linear function of the posterior mean increment. To approximate more general stochastic processes, we consider a continuous-time analog involving Brownian motion. To analyze the continuous-time problem, we study the diffusion limit of random walks in a random environment. We prove that they converge weakly to a Kiefer process, or tied-down Brownian sheet. We then find conditions under which the discrete-time process converges to a diffusion, and analyze the resulting process. We analyze in detail the case of the natural conjugate prior, where the success probability has a beta distribution, and show that the resulting limit diffusion can be viewed as a rescaled Brownian motion. These results allow explicit computation of the optimal control policies for the continuous-time gambling and investment problems without resorting to continuous-time stochastic-control procedures. Moreover they also allow an explicit quantitative evaluation of the financial value of randomness, the financial gain of perfect information and the financial cost of learning in the Bayesian problem.
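A small simulation of the Bayesian Kelly rule described above for even-money bets on a random walk with unknown success probability p: with a Beta posterior, the log-optimal bet is the posterior mean increment 2E[p] − 1. The uniform prior, the value p = 0.6, and the horizon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

p_true = 0.6                      # unknown success probability of the walk
a, b = 1.0, 1.0                   # Beta(1, 1) prior on p (assumed)
wealth, n_bets = 1.0, 2000
for _ in range(n_bets):
    post_mean = a / (a + b)
    frac = max(0.0, 2.0 * post_mean - 1.0)        # bet the posterior mean increment E[X] = 2p - 1
    win = rng.random() < p_true
    wealth *= (1.0 + frac) if win else (1.0 - frac)
    a, b = a + win, b + (not win)                 # conjugate Beta update

print("posterior mean of p :", round(a / (a + b), 3))
print("log growth per bet  :", round(float(np.log(wealth)) / n_bets, 4))
# with p known, the Kelly fraction 2p - 1 = 0.2 earns about 0.02 nats per bet on average
```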

Proceedings ArticleDOI
12 May 1996
TL;DR: A behavioral model for the simulation of oscillator-based random number generators, using the random frequency variations of free-running ring oscillators, is presented for the purpose of addressing design issues.
Abstract: The design of integrated-circuit random number generators is receiving increased attention for the purpose of secure communications. Many high-speed cryptographic circuit-systems require a nondeterministic source of random bits. The security of these systems depends on the predictability or level of randomness of the generated bit stream. One popular method of generating random bits is to use the random frequency variations of free-running ring oscillators. This paper presents a behavioral model for the simulation of oscillator-based random number generators. The method of random number generation using oscillators is described and important design issues are stated. A model is developed and simulation results are presented for the purpose of addressing these design issues.
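A behavioral sketch in the spirit of the paper: a fast free-running oscillator accumulates period jitter between samples of a much slower clock, and the latched phase provides the output bit. Collapsing the per-cycle jitter into one Gaussian increment per sample is a simplification, and the frequencies and jitter level are made-up values, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

T_fast = 1.0                # nominal ring-oscillator period (arbitrary units)
sigma_j = 0.03              # per-cycle period jitter, std-dev (assumed Gaussian)
cycles_per_sample = 5000    # the sampling clock is ~5000x slower than the oscillator
n_bits = 20000

# accumulated oscillator phase (in periods) at each sampling instant; each interval
# adds cycles_per_sample periods plus the jitter accumulated over those cycles
increments = cycles_per_sample + (sigma_j / T_fast) * np.sqrt(cycles_per_sample) * rng.normal(size=n_bits)
phase = np.cumsum(increments)
bits = (np.mod(phase, 1.0) < 0.5).astype(int)     # latch: output is high for half of each period

print("bias        :", round(float(bits.mean()), 3))                             # ~0.5 expected
print("serial corr :", round(float(np.corrcoef(bits[:-1], bits[1:])[0, 1]), 3))  # ~0 expected
```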

Journal ArticleDOI
TL;DR: In this paper, an analysis of acoustic wave propagation in a random shallow-water waveguide with an energy absorbing sub-bottom is presented, in which deviations of the index of refraction are a stochastic process.
Abstract: An analysis of acoustic wave propagation in a random shallow-water waveguide with an energy absorbing sub-bottom is presented, in which deviations of the index of refraction are a stochastic process. The specific model studied is motivated by the oceanic waveguide in shallow waters, in which the sub-bottom sediment leads to energy loss from the acoustic field, and the stochastic process results from internal (i.e., density) waves. In terms of the normal modes of the waveguide, the randomness leads to mode coupling while the energy loss results from different attenuation rates for the various modes (i.e., mode stripping). The distinction in shallow water is that there exists a competition between the mode-coupling terms, which redistribute the modal energies, and mode stripping, which results in an irreversible loss of energy. Theoretically, averaged equations are formulated for both the modal intensities and fluctuations (the second and fourth moments of acoustic pressure, respectively), similar to previous formulations which, however, did not include the effects of sub-bottom absorption of acoustic energy. The theory developed here predicts that there is a mismatch between the decay rates of the second and fourth moments, implying that the scintillation index (which is a measure of the strength of the random scattering) grows exponentially in range. Thus the usual concept of equilibrium or saturated statistics must be modified. This theoretical prediction is generally valid and depends only on assuming the forward scattering approximation, the Markov approximation (i.e., the short-range nature of the correlations between sound-speed fluctuations) and neglecting the cross-modal coherences. In order to assess the importance of these assumptions, Monte Carlo simulations of stochastic coupled-mode equations are presented. For these simulations, models of internal-wave processes, deterministic shallow-water acoustic environments, and sub-bottom attenuation that simplify the numerical computation were chosen. While these models are unrealistic, they illustrate the theoretically predicted behavior.

Journal ArticleDOI
TL;DR: In this article, a steady 2D non-uniform groundwater flow is considered in a statistically isotropic field of hydraulic conductivity K(x), with a single production well and a uniform base gradient.

Journal ArticleDOI
TL;DR: In this paper, the problem of a medium with two layers separated by an interface randomly fluctuating in space was formulated using the second-moments characteristics of the interface spatial fluctuations, and the Karhunen-Loeve and polynomial chaos expansions were used to transform the problem into a computationally tractable form.
Abstract: This paper addresses the problem of a medium with two layers separated by an interface randomly fluctuating in space. The medium is subjected to an in-plane strain field simulating the effect of a surface foundation. The second-moments characteristics of the interface spatial fluctuations are used to formulate the problem. The Karhunen-Loeve and the polynomial chaos expansions are utilized to transform the problem into a computationally tractable form, thus resulting in a system of linear algebraic equations to solve. The difficulty in this problem stems from the geometric nature of the randomness, resulting in a stiffness matrix that is nonlinear in the randomness. This leads to a nonlinear stochastic problem, the solution of which is accomplished by relying on the polynomial chaos representation of stochastic processes.
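A minimal sketch of the Karhunen-Loeve step mentioned above: realizations of a random interface profile are generated from the eigen-decomposition of an assumed covariance on a grid. The exponential covariance, its parameters, and the truncation to 10 modes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 200
x = np.linspace(0.0, 10.0, n)                    # points along the interface
sigma, ell = 0.3, 2.0                            # fluctuation std-dev and correlation length (assumed)

# exponential covariance of the interface elevation, C(x, x') = sigma^2 exp(-|x - x'|/ell)
C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Karhunen-Loeve: eigen-decomposition of the covariance, truncated to the dominant modes
vals, vecs = np.linalg.eigh(C)
order = np.argsort(vals)[::-1][:10]
lam, modes = vals[order], vecs[:, order]

xi = rng.normal(size=10)                         # independent standard normal coordinates
h = modes @ (np.sqrt(lam) * xi)                  # one realization of the random interface

print("variance captured by 10 modes:", round(float(lam.sum() / vals.sum()), 3))
print("sample interface values      :", np.round(h[:5], 3))
```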

Journal ArticleDOI
TL;DR: It is found that dimerization is a relevant perturbation at the random singlet fixed point, and it is conjectured that random integer spin chains in the Haldane phase exhibit similar thermodynamic and topological properties.
Abstract: Using an asymptotically exact real space renormalization procedure, we find that the dimerized spin-1/2 chain is extremely stable against bond randomness. For weak dimerization or, equivalently, strong randomness, it is in a Griffiths phase with short-range spin-spin correlations and a divergent susceptibility. The string topological order persists. We conjecture that random integer spin chains in the Haldane phase exhibit similar thermodynamic and topological properties.

Journal ArticleDOI
TL;DR: The effect of bond randomness on tricritical and critical end-point phenomena is studied by renormalization-group theory in three dimensions, which indicates a violation of the empirical universality principle.
Abstract: The effect of bond randomness on tricritical and critical end-point phenomena is studied by renormalization-group theory. In three dimensions, the pure-system tricritical point is replaced by a line segment of second-order transitions dominated by randomness and bounded by a multicritical point and a random-bond tricritical point, which reaches zero temperature at threshold randomness. This topology indicates a violation of the empirical universality principle. The random-bond tricritical point renormalizes onto the fixed distribution of random-field Ising criticality.

Journal ArticleDOI
TL;DR: It is proposed that apparent fractal behavior observed experimentally over a limited range may often have its origin in underlying randomness.
Abstract: The fractal properties of models of randomly placed $n$-dimensional spheres ($n=1, 2, 3$) are studied using standard techniques for calculating fractal dimensions in empirical data (the box counting and Minkowski-sausage techniques). Using analytical and numerical calculations it is shown that in the regime of low volume fraction occupied by the spheres, apparent fractal behavior is observed for a range of scales between physically relevant cutoffs. The width of this range, typically spanning between one and two orders of magnitude, is in very good agreement with the typical range observed in experimental measurements of fractals. The dimensions are not universal and depend on density. These observations are applicable to spatial, temporal, and spectral random structures. Polydispersivity in sphere radii and impenetrability of the spheres (resulting in short range correlations) are also introduced and are found to have little effect on the scaling properties. We thus propose that apparent fractal behavior observed experimentally over a limited range may often have its origin in underlying randomness.
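A quick numerical version of the kind of measurement described above: disks placed at random at low area fraction in the unit square, box-counted over a limited range of scales, with the apparent dimension read off a log-log fit. The disk radius, density, and scale range are arbitrary illustrative choices; the point is only that the fitted slope can sit well below the embedding dimension.

```python
import numpy as np

rng = np.random.default_rng(7)

n_disks, r0 = 300, 0.004                    # randomly placed disks, low area fraction
centers = rng.random((n_disks, 2))

def box_count(eps):
    """Boxes of side eps touched by at least one disk (each disk rasterized
    by its bounding square, adequate for eps on the order of r0 or larger)."""
    nb = int(np.ceil(1.0 / eps))
    occupied = set()
    for cx, cy in centers:
        for i in range(max(int((cx - r0) / eps), 0), min(int((cx + r0) / eps), nb - 1) + 1):
            for j in range(max(int((cy - r0) / eps), 0), min(int((cy + r0) / eps), nb - 1) + 1):
                occupied.add((i, j))
    return len(occupied)

scales = np.array([0.005, 0.01, 0.02, 0.04, 0.08])
counts = np.array([box_count(e) for e in scales])
slope = np.polyfit(np.log(1.0 / scales), np.log(counts), 1)[0]
print("apparent box-counting dimension over this range:", round(float(slope), 2))  # well below 2
```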

Proceedings ArticleDOI
01 Jul 1996
TL;DR: A new tool, a “merger”, is devised, which is a function that accepts d strings, one of which is uniformly distributed, and outputs a single string that is guaranteed to be uniformly distributed.
Abstract: We deal with the problem of extracting as much randomness as possible from a defective random source. We devise a new tool, a "merger", which is a function that accepts d strings, one of which is uniformly distributed, and outputs a single string that is guaranteed to be uniformly distributed. We show how to build good explicit mergers, and how mergers can be used to build better extractors. Previous work has succeeded in extracting "some" of the randomness from sources with "large" min-entropy. We improve on this in two respects. First, we build extractors for any source, whatever its min-entropy is, and second, we extract all the randomness in the given source. Efficient extractors have many applications, and we show that using our extractor we get better results in many of these applications, e.g., we achieve the first explicit N-superconcentrators of linear size and polyloglog(N) depth.

Journal ArticleDOI
TL;DR: It is shown that for strong randomness there is a second order transition with critical properties that can be determined exactly by use of a renormalization group procedure.
Abstract: We study zero temperature phase transitions in two classes of random quantum systems---the $q$-state quantum Potts and clock models. For models with purely ferromagnetic interactions in one dimension, we show that for strong randomness there is a second order transition with critical properties that can be determined exactly by use of a renormalization group procedure. Somewhat surprisingly, the critical behavior is completely independent of $q$. For the $q>4$ clock model, we suggest the existence of a novel multicritical point at intermediate randomness. We also consider the $T=0$ transition from a paramagnet to a spin glass in an infinite-range model, and find $q$ independent exponents.

Proceedings Article
01 Jan 1996
TL;DR: In this paper, the phase matrix of a dense discrete random medium is developed by relaxing the far-field approximation and accounting for the effect of volume fraction and randomness properties characterized by the variance and correlation function of scatterer positions within the medium.
Abstract: In the derivation of the conventional scattering phase matrix of a discrete random medium, the far-field approximation is usually assumed. In this paper, the phase matrix of a dense discrete random medium is developed by relaxing the far-field approximation and accounting for the effect of volume fraction and randomness properties characterized by the variance and correlation function of scatterer positions within the medium. The final expression for the phase matrix differs from the conventional one in two major aspects: there is an amplitude and a phase correction. The concept used in the derivation is analogous to the antenna array theory. The phase matrix for a collection of scatterers is found to be the Stokes matrix of the single scatterer multiplied by a dense medium phase correction factor. The close spacing amplitude correction appears inside the Stokes matrix. When the scatterers are uncorrelated, the phase correction factor approaches unity. The phase matrix is used to calculate the volume scattering coefficients for a unit volume of spherical scatterers, and the results are compared with calculations from other theories, numerical simulations, and laboratory measurements. Results indicate that there should be a distinction between physically dense medium and electrically dense medium.

Journal ArticleDOI
TL;DR: A systematic analysis of the amount of randomness needed by secret sharing schemes and secure key distribution schemes is given and a lower bound is provided, thus showing the optimality of a recently proposed key distribution protocol.
Abstract: Randomness is a useful computation resource due to its ability to enhance the capabilities of other resources. Its interaction with resources such as time, space, interaction with provers and its role in several areas of computer science has been extensively studied. In this paper we give a systematic analysis of the amount of randomness needed by secret sharing schemes and secure key distribution schemes. We give both upper and lower bounds on the number of random bits needed by secret sharing schemes. The bounds are tight for several classes of secret sharing schemes. For secure key distribution schemes we provide a lower bound on the amount of randomness needed, thus showing the optimality of a recently proposed key distribution protocol.
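For context on what counting random bits means here, a textbook Shamir (t, n) threshold scheme is sketched below: sharing one secret consumes t − 1 uniform field elements, i.e. roughly (t − 1)·log2(q) random bits. This is a standard example, not one of the schemes analyzed in the paper.

```python
import random

def shamir_share(secret, t, n, q=2_147_483_647):   # q is a prime (2^31 - 1)
    """Split `secret` into n shares, any t of which reconstruct it.
    Randomness used: t - 1 uniform field elements (~(t - 1) * log2(q) bits)."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q
    return [(x, poly(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares, q=2_147_483_647):
    """Lagrange interpolation at x = 0 over GF(q)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        secret = (secret + yi * num * pow(den, q - 2, q)) % q
    return secret

shares = shamir_share(secret=123456, t=3, n=5)
print("any 3 of 5 shares recover the secret:", shamir_reconstruct(shares[:3]))
```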

Journal ArticleDOI
TL;DR: In this paper, the phase matrix of a dense discrete random medium is developed by relaxing the far-field approximation and accounting for the effect of volume fraction and randomness properties characterized by the variance and correlation function of scatterer positions within the medium.
Abstract: In the derivation of the conventional scattering phase matrix of a discrete random medium, the far-field approximation is usually assumed. In this paper, the phase matrix of a dense discrete random medium is developed by relaxing the far-field approximation and accounting for the effect of volume fraction and randomness properties characterized by the variance and correlation function of scatterer positions within the medium. The final expression for the phase matrix differs from the conventional one in two major aspects: there is an amplitude and a phase correction. The concept used in the derivation is analogous to the antenna array theory. The phase matrix for a collection of scatterers is found to be the Stokes matrix of the single scatterer multiplied by a dense medium phase correction factor. The close spacing amplitude correction appears inside the Stokes matrix. When the scatterers are uncorrelated, the phase correction factor approaches unity. The phase matrix is used to calculate the volume scattering coefficients for a unit volume of spherical scatterers, and the results are compared with calculations from other theories, numerical simulations, and laboratory measurements. Results indicate that there should be a distinction between physically dense medium and electrically dense medium.

Proceedings ArticleDOI
01 Jul 1996
TL;DR: A constructive O(log n) round protocol for leader election in the full information model, resilient against any coalition of size βn for any constant β < 1/2, is given as one of two applications of the new sampling and extraction tools.
Abstract: We present the first efficient universal oblivious sampler that uses an optimal number of random bits, up to an arbitrary constant factor bigger than 1. Specifically, for any α > 0 and any γ(m) that is no smaller than exponentially small in m, our sampler uses (1 + α)(m + log(1/γ)) random bits to output d = poly(1/ε, log(1/γ), m) sample points z1, ..., zd ∈ {0, 1}^m such that, for any function f : {0, 1}^m → [0, 1], the average of f over the sample points is within ε of its true mean with probability at least 1 − γ. Our proof is based on an improved extractor construction. An extractor is a procedure which takes as input the output of a defective random source and a small number of truly random bits, and outputs a nearly-random string. We present the first optimal extractor, up to constant factors, for defective random sources with constant entropy rate. We give two applications of these tools. First, we exhibit a constructive O(log n) round protocol for leader election in the full information model that is resilient against any coalition of size βn for any constant β < 1/2. Each player sends only log n bits per round. Second, given a 2g(n) round AM proof for L in which Arthur sends l(n) random bits per round and Merlin responds with a q(n) bit string, we construct a g(n) round AM proof for L in which Arthur sends O(l(n) + q(n)) random bits per round and Merlin's response remains of polynomial length.
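To make the sampler guarantee concrete, the sketch below runs the naive Monte Carlo baseline: d independent m-bit sample points estimate the mean of f to within ε with high probability, but consume d·m truly random bits, whereas the construction above needs only about (1 + α)(m + log(1/γ)). The test function and parameters are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)

m, d, eps = 16, 4000, 0.02
f = lambda z: (bin(z).count("1") % 3) / 2.0          # arbitrary test function {0,1}^m -> [0, 1]

true_mean = np.mean([f(z) for z in range(2 ** m)])   # exhaustive, for comparison only

samples = rng.integers(0, 2 ** m, size=d)            # naive sampler: d independent uniform points
estimate = np.mean([f(int(z)) for z in samples])

print("true mean        :", round(float(true_mean), 4))
print("sample estimate  :", round(float(estimate), 4))
print("|error| <= eps ? :", bool(abs(estimate - true_mean) <= eps))
print("truly random bits consumed by the naive sampler:", d * m)
```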

Book ChapterDOI
01 Jan 1996
TL;DR: In this paper, a continuous semi-Markov process is proposed to model the growth of fatigue cracks, after a review of the fracture mechanics of fatigue and the Paris-Erdogan law for their mean behavior.
Abstract: Metal fatigue is a major cause of failure of mechanical and structural components. We review the fracture mechanics of fatigue and the Paris-Erdogan law for the mean behavior. After a consideration of the experimental data reported by Virkler et al. (1979), we propose a continuous semi-Markov process to model crack growth. The model accounts for the material randomness and treats crack growth as motion in a random field.
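For reference, the Paris-Erdogan law for the mean behavior, da/dN = C(ΔK)^m with ΔK = Y·Δσ·sqrt(πa), can be integrated cycle by cycle as below. The material constants, geometry factor, and loading are made-up illustrative values, and the randomness that the semi-Markov model adds is deliberately left out.

```python
import math

C, m_exp = 1e-11, 3.0      # Paris-law constants (illustrative; units m/cycle and MPa*sqrt(m))
Y = 1.12                   # geometry factor (assumed edge crack)
dsigma = 100.0             # stress range, MPa
a = 1e-3                   # initial crack length, m
a_crit = 25e-3             # crack length treated as failure, m

cycles = 0
while a < a_crit:
    dK = Y * dsigma * math.sqrt(math.pi * a)   # stress-intensity range
    a += C * dK ** m_exp                       # Paris-Erdogan law: da/dN = C (dK)^m
    cycles += 1

print("cycles to grow from 1 mm to 25 mm:", cycles)
```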

Proceedings ArticleDOI
24 May 1996
TL;DR: This manuscript surveys extractors and dispersers: what they are, how they can be designed, and some of their applications.
Abstract: Extractors are Boolean functions that allow, in some precise sense, extraction of randomness from somewhat random distributions. Extractors, and the closely related "Dispersers", exhibit some of the most "random-like" properties of explicitly constructed combinatorial structures. In turn, extractors and dispersers have many applications in "removing randomness" in various settings and in making randomized constructions explicit. This manuscript surveys extractors and dispersers: what they are, how they can be designed, and some of their applications. The work described is due to a long list of research papers by various authors, most notably David Zuckerman.

Journal ArticleDOI
TL;DR: In this paper, the authors deal with a settlement analysis of shallow foundations resting on a layered subsoil, which is based on the finite element method coupled with stochastic versions of the perturbation and the Neumann expansion methods.

Journal ArticleDOI
Dilip Sarkar1
01 May 1996
TL;DR: A novel method for measuring generalization ability is defined, and it is shown that if the correct-classification probability of a single network is greater than one half, then the generalization ability of a voting network increases as the number of networks is increased.
Abstract: Among several models of neurons and their interconnections, feedforward artificial neural networks (FFANNs) are most popular, because of their simplicity and effectiveness. Difficulties such as long learning time and local minima may not affect FFANNs as much as the question of generalization ability, because a network needs only one training, and then it may be used for a long time. This paper reports our observations about randomness in the generalization ability of FFANNs. A novel method for measuring generalization ability is defined. This method can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It has been shown that if the correct classification probability of a single network is greater than half, then the generalization ability of a voting network increases as the number of networks is increased. Further analysis has shown that the VC-dimension of the voting network model may increase monotonically as the number of networks in the voting network is increased.
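The voting-model claim can be checked with a direct binomial calculation: if each of n (odd) independent networks classifies correctly with probability p > 1/2, the probability that the majority vote is correct increases toward 1 as n grows. The independence assumption and the value p = 0.6 are simplifications for illustration, not the paper's full model.

```python
from math import comb

def majority_correct(p, n):
    """Probability that a majority of n independent classifiers, each correct
    with probability p, yields the correct class (n odd)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range((n // 2) + 1, n + 1))

p = 0.6
for n in (1, 3, 5, 11, 21, 51):
    print(f"{n:3d} networks -> P(correct vote) = {majority_correct(p, n):.3f}")
```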