
Showing papers on "Randomness published in 2001"


Journal ArticleDOI
TL;DR: The dynamics of networks poised between order and randomness, the characteristics of small-world networks, and the small-world phenomenon, as treated in Watts' book and in related surveys on the structure and dynamics of networks.
Abstract: This entry refers to D. J. Watts (1999), Small Worlds: The Dynamics of Networks Between Order and Randomness, covering the dynamics of networks between order and randomness, the small-world phenomenon, the characteristics and efficient behavior of small-world networks, and related material such as The Structure and Dynamics of Networks by Mark Newman.

1,218 citations


BookDOI
01 Jan 2001
TL;DR: This volume covers the hierarchy of climate models; the emergence of randomness through chaos, averaging, and limit theorems; tools and methods including SDEs, dynamical systems, SPDEs, and multiscale techniques; and reduced stochastic models and particular techniques.
Abstract: The hierarchy of climate models. The emergence of randomness: chaos, averaging, limit theorems. Tools and methods: SDEs, dynamical systems, SPDEs, multiscale techniques. Reduced stochastic models and particular techniques.

887 citations


Book
31 Jul 2001
TL;DR: In this book, word frequency distributions are modeled with non-parametric, parametric, and mixture models, and the role of the randomness assumption in these models is examined.
Abstract: 1. Word Frequencies. 2. Non-parametric models. 3. Parametric models. 4. Mixture distributions. 5. The Randomness Assumption. 6. Examples of Applications. A. List of Symbols. B. Solutions of the exercises. C. Software. D. Data sets. Bibliography. Index.

606 citations


Journal ArticleDOI
TL;DR: This paper shows that the causal-state representation, the ε-machine, is the minimal one consistent with accurate prediction, and establishes several results on ε-machine optimality and uniqueness and on how ε-machines compare to alternative representations.
Abstract: Computational mechanics, an approach to structural complexity, defines a process's causal states and gives a procedure for finding them. We show that the causal-state representation—an ε-machine—is the minimal one consistent with accurate prediction. We establish several results on ε-machine optimality and uniqueness and on how ε-machines compare to alternative representations. Further results relate measures of randomness and structural complexity obtained from ε-machines to those from ergodic and information theories.

492 citations


Journal ArticleDOI
TL;DR: In this article, the authors computed the density and microstructure dependence of the Young's modulus (E) and Poisson's ratio (PR) for several different isotropic random models based on Voronoi tessellations and level-cut Gaussian random fields.

403 citations


Journal ArticleDOI
TL;DR: In this paper, a unified model of atmospheric turbulence is proposed to determine the 3D gust-excited response of structures, all parameters are assigned through first and second order statistical moments derived from a wide set of selected experimental measurements.

283 citations


Journal ArticleDOI
TL;DR: Numerical evidence is reported that an epidemiclike model, which can be interpreted as the propagation of a rumor, exhibits critical behavior at a finite randomness of the underlying small-world network.
Abstract: We report numerical evidence that an epidemic-like model, which can be interpreted as the propagation of a rumor, exhibits critical behavior at a finite randomness of the underlying small-world network. The transition occurs between a regime where the rumor "dies" in a small neighborhood of its origin, and a regime where it spreads over a finite fraction of the whole population. Critical exponents are evaluated through finite-size scaling analysis, and the dependence of the critical randomness on the network connectivity is studied. The behavior of this system as a function of the small-world randomness bears noticeable similarities with an epidemiological model reported recently [M. Kuperman and G. Abramson, Phys. Rev. Lett. 86, 2909 (2001)], in spite of substantial differences in the respective dynamical rules.
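As a rough illustration (not the paper's exact dynamical rules), the sketch below runs a standard ignorant/spreader/stifler rumor model on a Watts–Strogatz small-world graph and reports the final fraction of the population reached for several rewiring probabilities p. The use of networkx, the parameter values, and the stifling rule are assumptions made for the example.

```python
# Minimal sketch (not the paper's exact rules): an ignorant/spreader/stifler
# rumor model on a Watts-Strogatz small-world graph, to illustrate how the
# rewiring probability p ("randomness") affects how far a rumor spreads.
import random
import networkx as nx

def spread_rumor(n=1000, k=4, p=0.1, seed=0):
    rng = random.Random(seed)
    g = nx.watts_strogatz_graph(n, k, p, seed=seed)
    state = {v: "ignorant" for v in g}     # ignorant / spreader / stifler
    start = rng.randrange(n)
    state[start] = "spreader"
    spreaders = {start}
    while spreaders:
        u = rng.choice(list(spreaders))
        v = rng.choice(list(g[u]))
        if state[v] == "ignorant":          # rumor is passed on
            state[v] = "spreader"
            spreaders.add(v)
        else:                               # contact already knows it: u loses interest
            state[u] = "stifler"
            spreaders.discard(u)
    reached = sum(1 for s in state.values() if s != "ignorant")
    return reached / n                      # final fraction that ever heard the rumor

if __name__ == "__main__":
    for p in (0.0, 0.05, 0.2, 1.0):
        frac = sum(spread_rumor(p=p, seed=s) for s in range(10)) / 10
        print(f"rewiring p={p:<4}  mean reached fraction={frac:.2f}")
```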

276 citations


Journal ArticleDOI
TL;DR: The resulting diversity, and the combinatorial complexity created by so many dimensions of random variation, mean that the failure of a simple test of statistical independence performed on iris patterns can serve as a reliable rapid basis for automatic personal identification.
Abstract: We investigated the randomness and uniqueness of human iris patterns by mathematically comparing 2.3 million different pairs of eye images. The phase structure of each iris pattern was extracted by demodulation with quadrature wavelets spanning several scales of analysis. The resulting distribution of phase sequence variation among different eyes was precisely binomial, revealing 244 independent degrees of freedom. This amount of statistical variability corresponds to an entropy (information density) of about 3.2 bits mm(-2) over the iris. It implies that the probability of two different irides agreeing by chance in more than 70% of their phase sequence is about one in 7 billion. We also compared images of genetically identical irides, from the left and right eyes of 324 persons, and from monozygotic twins. Their relative phase sequence variation generated the same statistical distribution as did unrelated eyes. This indicates that apart from overall form and colour, iris patterns are determined epigenetically by random events in the morphogenesis of this tissue. The resulting diversity, and the combinatorial complexity created by so many dimensions of random variation, mean that the failure of a simple test of statistical independence performed on iris patterns can serve as a reliable rapid basis for automatic personal identification.
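The comparison statistic behind these results is a normalized Hamming distance between binary phase codes. The toy sketch below is not Daugman's actual IrisCode pipeline; the code length and pair counts are arbitrary. It compares random binary codes and shows how the spread of impostor scores translates into an effective number of degrees of freedom, the quantity the paper estimates from real iris comparisons.

```python
# Toy sketch, not the actual IrisCode pipeline: compare binary "iris codes"
# with a normalized Hamming distance, the statistic the paper's binomial
# analysis is built on. Codes from unrelated eyes should score near 0.5.
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two equal-length binary codes."""
    return float(np.mean(code_a != code_b))

rng = np.random.default_rng(0)
n_bits = 2048                      # illustrative code length (assumption)
n_pairs = 10_000

# Unrelated ("impostor") pairs: independent random codes.
scores = [
    hamming_distance(rng.integers(0, 2, n_bits), rng.integers(0, 2, n_bits))
    for _ in range(n_pairs)
]
mean, std = float(np.mean(scores)), float(np.std(scores))
print(f"impostor HD: mean={mean:.3f}  std={std:.4f}")

# With N truly independent bits, std ~ sqrt(0.25 / N); the paper inverts this
# logic: the observed spread of real iris comparisons implies about 244
# effective degrees of freedom, far fewer than the raw number of bits.
print(f"implied degrees of freedom ~ {0.25 / std**2:.0f}")
```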

272 citations


Journal ArticleDOI
TL;DR: A study on the applicability of different kinds of neural networks for the probabilistic analysis of structures, when the sources of randomness can be modeled as random variables, is summarized.

250 citations


Journal ArticleDOI
TL;DR: This work applies to edge detection a recently introduced method for computing geometric structures in a digital image, without any a priori information, to define and compute edges and boundaries in an image by a parameter-free method.
Abstract: We apply to edge detection a recently introduced method for computing geometric structures in a digital image, without any a priori information. According to a basic principle of perception due to Helmholtz, an observed geometric structure is perceptually “meaningful” if its number of occurences would be very small in a random situation: in this context, geometric structures are characterized as large deviations from randomness. This leads us to define and compute edges and boundaries (closed edges) in an image by a parameter-free method. Maximal detectable boundaries and edges are defined, computed, and the results compared with the ones obtained by classical algorithms.
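The criterion can be made concrete with a binomial tail bound: a candidate structure is flagged when its expected number of chance occurrences among all tested candidates (a number of false alarms) is small. The sketch below is a hedged illustration of that computation; all counts and thresholds are chosen for the example rather than taken from the paper.

```python
# Hedged sketch of the "large deviation from randomness" criterion: an
# observed structure (k aligned points out of l, each aligned by chance with
# probability p) is flagged meaningful when its expected number of chance
# occurrences among n_tests candidates -- the number of false alarms -- is small.
from math import comb

def binomial_tail(l: int, k: int, p: float) -> float:
    """P[B(l, p) >= k]."""
    return sum(comb(l, j) * p**j * (1 - p) ** (l - j) for j in range(k, l + 1))

def nfa(n_tests: int, l: int, k: int, p: float) -> float:
    """Expected number of chance occurrences at least this strong."""
    return n_tests * binomial_tail(l, k, p)

# Example: a candidate edge of length 100 on which 40 pixels agree with the
# local gradient direction, when chance agreement has probability 1/8, tested
# against roughly 10^6 candidate curves (all numbers illustrative).
print(f"NFA = {nfa(10**6, 100, 40, 1/8):.3e}   (flagged as meaningful if < 1)")
```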

232 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: A clean method is given for overcoming this bottleneck by constructing loss-less condensers, which compress the n-bit input source without losing any min-entropy, using O(log n) additional random bits.
Abstract: An extractor is a procedure which extracts randomness from a defective random source using a few additional random bits. Explicit extractor constructions have numerous applications, and obtaining such constructions is an important derandomization goal. Trevisan recently introduced an elegant extractor construction, but the number of truly random bits required is suboptimal when the input source has low min-entropy. Significant progress toward overcoming this bottleneck has been made, but so far has required complicated recursive techniques that lose the simplicity of Trevisan's construction. We give a clean method for overcoming this bottleneck by constructing loss-less condensers, which compress the n-bit input source without losing any min-entropy, using O(log n) additional random bits. Our condensers are built using a simple modification of Trevisan's construction, and yield the best extractor constructions to date. Loss-less condensers also produce unbalanced bipartite expander graphs with small (polylogarithmic) degree D and very strong expansion of (1−ε)D. We give other applications of our construction, including dispersers with entropy loss O(log n), depth-two super-concentrators whose size is within a polylog of optimal, and an improved hardness of approximation result.
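The condenser and extractor constructions in this line of work are algebraic and not reducible to a short snippet. As a much simpler illustration of what "extracting randomness from a defective source" means, the sketch below implements the classical von Neumann extractor for a biased but independent bit source; it is not the paper's construction, which handles far weaker sources using a short truly-random seed.

```python
# Not the paper's construction: the classical von Neumann extractor, shown
# only to illustrate the goal of extraction. It turns independent coin flips
# of unknown bias into exactly unbiased bits by looking at pairs:
# 01 -> 0, 10 -> 1, and 00/11 are discarded.
import random

def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)          # P(01) == P(10) for independent flips
    return out

if __name__ == "__main__":
    rng = random.Random(1)
    biased = [1 if rng.random() < 0.8 else 0 for _ in range(20000)]  # 80/20 source
    extracted = von_neumann_extract(biased)
    print(f"kept {len(extracted)} bits, fraction of ones = "
          f"{sum(extracted) / len(extracted):.3f}")   # close to 0.5
```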

Proceedings ArticleDOI
14 Oct 2001
TL;DR: A simple, self-contained extractor construction that produces good extractors for all min-entropies, and a hitting set generator with optimal seed length that outputs s^Ω(1) bits when given a function that requires circuits of size s (for any s).
Abstract: We present a simple, self-contained extractor construction that produces good extractors for all min-entropies (min-entropy measures the amount of randomness contained in a weak random source). Our construction is algebraic and builds on a new polynomial-based approach introduced by A. Ta-Shma et al. (2001). Using our improvements, we obtain, for example, an extractor with output length m = k^(1−δ) and seed length O(log n). This matches the parameters of L. Trevisan's (1999) breakthrough result and additionally achieves those parameters for small min-entropies k. Our construction gives a much simpler and more direct solution to this problem. Applying similar ideas to the problem of building pseudo-random generators, we obtain a new pseudo-random generator construction that is not based on the NW generator (N. Nisan and A. Wigderson, 1994), and turns worst-case hardness directly into pseudo-randomness. The parameters of this generator are strong enough to obtain a new proof that P = BPP if E requires exponential size circuits. Essentially the same construction yields a hitting set generator with optimal seed length that outputs s^Ω(1) bits when given a function that requires circuits of size s (for any s). This implies a hardness versus randomness trade-off for RP and BPP that is optimal (up to polynomial factors), solving an open problem raised by R. Impagliazzo et al. (1999). Our generators can also be used to derandomize AM.

Book ChapterDOI
01 Jan 2001
TL;DR: It is shown how to solve network combinatorial optimization problems using a randomized algorithm based on the cross-entropy method, and it is shown numerically that for a finite sample the algorithm converges with very high probability to a very small subset of the optimal values.
Abstract: We show how to solve network combinatorial optimization problems using a randomized algorithm based on the cross-entropy method. The proposed algorithm employs an auxiliary random mechanism, like a Markov chain, which converts the original deterministic network into an associated stochastic one, called the associated stochastic network (ASN). Depending on the particular problem, we introduce the randomness in the ASN by making either the nodes or the edges of the network random. Each iteration of the randomized algorithm based on the ASN involves the following two phases: 1. Generation of trajectories using the random mechanism and calculation of the associated paths (objective functions) and some related quantities, such as rare-event probabilities. 2. Updating the parameters associated with the random mechanism, like the probability matrix P of the Markov chain, on the basis of the data collected in the first phase. We show that asymptotically the matrix P converges to a degenerate matrix P*_d, in the sense that in each row of P*_d only a single element equals unity, while the remaining elements in each row are zeros. Moreover, the unity elements of each row uniquely define the optimal solution. We also show numerically that for a finite sample the algorithm converges with very high probability to a very small subset of the optimal values. We finally show that the proposed method can also be used for noisy networks, namely where the deterministic edge distances in the network are replaced by random variables with unknown expected values. Supporting numerical results are given as well. Our numerical studies suggest that the proposed algorithm typically has polynomial complexity in the size of the network.
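The two-phase loop described above is the core of the cross-entropy method. The sketch below shows a generic CE optimizer on a toy binary objective; it is my own example, not the paper's associated-stochastic-network algorithm. Candidates are sampled from a parametric random mechanism, the parameters are re-fit to the elite samples, and the distribution becomes nearly degenerate on the optimum, mirroring the convergence of P to a degenerate matrix described in the abstract.

```python
# Generic cross-entropy (CE) sketch on a toy problem, illustrating the two
# phases: (1) generate random candidates from a parametric mechanism,
# (2) update the parameters from the best ("elite") candidates.
import numpy as np

def cross_entropy_maximize(score, n_vars, n_samples=200, elite_frac=0.1,
                           n_iters=50, smoothing=0.7, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n_vars, 0.5)                      # Bernoulli parameters
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = rng.random((n_samples, n_vars)) < p             # phase 1
        scores = np.array([score(s) for s in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]
        p = smoothing * p + (1 - smoothing) * elite.mean(axis=0)  # phase 2
    return (p > 0.5).astype(int), p

if __name__ == "__main__":
    # Toy objective with a unique optimum (a fixed hidden pattern).
    target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
    best, p = cross_entropy_maximize(lambda s: -np.sum(s != target), len(target))
    print("recovered:", best, " target:", target)
    print("final probabilities (nearly degenerate):", np.round(p, 2))
```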

Journal ArticleDOI
TL;DR: A theory to explain random behavior for the digits in the expansions of fundamental mathematical constants and proofs of base-2 normality for a collection of celebrated constants, including π, log 2, ζ(3), and others are proposed.
Abstract: We propose a theory to explain random behavior for the digits in the expansions of fundamental mathematical constants. At the core of our approach is a general hypothesis concerning the distribution of the iterates generated by dynamical maps. On this main hypothesis, one obtains proofs of base-2 normality—namely bit randomness in a specific technical sense—for a collection of celebrated constants, including π, log 2, ζ(3), and others. Also on the hypothesis, the number ζ(5) is either rational or normal to base 2. We indicate a research connection between our dynamical model and the theory of pseudorandom number generators.
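For reference, the notion of normality at stake can be stated as follows (standard definition, base 2 in the paper):

```latex
% Base-b normality (base 2 in the paper): writing the base-b digits of the
% fractional part of alpha as d_1 d_2 d_3 ..., alpha is normal to base b iff
% every fixed block w of k digits occurs with limiting frequency b^{-k}:
\lim_{N \to \infty}
  \frac{\#\{\, n \le N \;:\; d_n d_{n+1} \cdots d_{n+k-1} = w \,\}}{N}
  \;=\; b^{-k}
\qquad \text{for all } k \ge 1 \text{ and all } w \in \{0, 1, \dots, b-1\}^k .
```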

Journal ArticleDOI
TL;DR: It is proved that there are uncountably many sets that are low for the class of Schnorr random reals and it is shown that they all have Turing degree incomparable to 0′.
Abstract: We prove that there are uncountably many sets that are low for the class of Schnorr random reals. We give a purely recursion theoretic characterization of these sets and show that they all have Turing degree incomparable to 0′. This contrasts with a result of Kucera and Terwijn [5] on sets that are low for the class of Martin-Lof random reals.

Journal ArticleDOI
TL;DR: In this article, the Clausius–Mossotti (Maxwell–Garnett) formula is extended to non-dilute mixtures by adding higher-order terms in concentration, and the effect of randomness in the fiber locations is evaluated qualitatively.
Abstract: An important area of materials science is the study of effective dielectric, thermal and electrical properties of two-phase composite materials with very different properties of the constituents. The case of small concentration is well studied, and analytical formulas such as Clausius–Mossotti (Maxwell–Garnett) are successfully used by physicists and engineers. We investigate analytically the case of an arbitrary number of unidirectional circular fibers in the periodicity cell when the concentration of the fibers is not small, i.e., we account for interactions of all orders (pair, triplet, etc.). We next consider a transversely random unidirectional composite of parallel fibers and obtain a closed-form representation for the effective conductivity (as a power series in the concentration v). We express the coefficients in this expansion in terms of integrals of the elliptic Eisenstein functions. These integrals are evaluated, and the explicit dependence on the parameter d, which characterizes the random positions of the fiber centers, is obtained. Thus we have extended the Clausius–Mossotti formula to non-dilute mixtures by adding the higher-order terms in concentration and qualitatively evaluated the effect of randomness in the fiber locations. In particular, we have proven that the periodic array provides an extremum for the effective conductivity in our class of random arrays (“shaking” geometries). Our approach is based on complex analysis techniques and functional equations, which are solved by the successive approximations method.
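For orientation, the dilute-limit formula being extended is the two-dimensional Clausius–Mossotti (Maxwell–Garnett) estimate for the transverse effective conductivity of aligned circular fibers; the paper's contribution is the higher-order terms in v and their dependence on the randomness parameter d. The statement below is my transcription of the standard textbook form, not the paper's notation:

```latex
% Standard two-dimensional Maxwell-Garnett / Clausius-Mossotti estimate for
% aligned circular fibers (conductivity sigma_f) at volume fraction v in a
% matrix (conductivity sigma_m); it agrees with the exact result to first
% order in v, while the higher-order coefficients depend on the arrangement
% (e.g. the randomness parameter d studied in the paper):
\sigma_{\mathrm{eff}} \;\approx\; \sigma_m \,\frac{1 + \rho v}{1 - \rho v},
\qquad
\rho = \frac{\sigma_f - \sigma_m}{\sigma_f + \sigma_m}.
```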

Journal ArticleDOI
01 Jun 2001-Fractals
TL;DR: In this article, the authors introduce a method of statistical analysis, alternative to compression procedures, that bypasses the limitations of the traditional Kolmogorov-Sinai (KS) approach, and prove that this method makes it possible to build a memory detector which signals the presence of even very weak memory, provided that it is persistent over large time intervals.
Abstract: We argue that a process of social interest is a balance of order and randomness, thereby producing a departure from a stationary diffusion process. The strength of this departure effect vanishes if the order to randomness intensity ratio vanishes, and this property allows us to reveal, although in an indirect way, the existence of a finite order to randomness intensity ratio. We aim at detecting this effect. We introduce a method of statistical analysis alternative to the compression procedures, with which the limitations of the traditional Kolmogorov-Sinai (KS) approach are bypassed. We prove that this method makes it possible for us to build up a memory detector, which signals the presence of even very weak memory, provided that this is persistent over large time intervals. We apply the analysis to the study of the teen birth phenomenon and we find that the unmarried teen births are a manifestation of a social process with a memory more intense than that of the married teens. We attempt to give a social interpretation of this effect.

Journal ArticleDOI
TL;DR: In this article, the authors solve the problem posed by S. A. Kalikow of whether the event that the $x$-coordinate of a random walk in a two-dimensional random environment approaches $\infty$ necessarily has probability either zero or one.
Abstract: We solve the problem posed by S. A. Kalikow whether the event that the $x$-coordinate of a random walk in a two-dimensional random environment approaches $\infty$ has necessarily probability either zero or one. The answer is yes if we assume the environment to be i.i.d., and in general no if we allow the environment to be just stationary and ergodic.

Journal ArticleDOI
TL;DR: This paper investigated the possibility that the randomness mechanism lies not within the individual players but in the interaction between the players, and a model of this process was developed and shown to be capable of generating chaos-like behaviors as an emergent property.

BookDOI
01 Jan 2001
TL;DR: In this book, chapters cover statistical continuum mechanics (M. J. Beran), random structure models for homogenization and fracture statistics (D. Jeulin), mechanics of random materials (M. Ostoja-Starzewski), the randomness of fatigue and fracture behaviour in metallic materials and mechanical structures (A. Pineau), and lectures on the mechanics of random media (J. R. Willis).
Abstract: Preface Statistical Continuum Mechanics: an Introduction by M. J. Beran.- Random Structure Models for Homogenization and Fracture Statistics by D. Jeulin.- Mechanics of Random Materials: Stochastics, Scale Effects, and Computation by M. Ostoja-Starzewski.- The Randomness of Fatigue and Fracture Behaviour in Metallic Materials and Mechanical Structures by A. Pineau.- Lectures on Mechanics of Random Media by J. R. Willis.

Journal ArticleDOI
TL;DR: It is shown that choosing different partitions to obtain coarse-grained symbolic sequences may lead to spurious results for the estimated entropy and will not fully reveal the randomness of the sequence.
Abstract: The concept of symbolic dynamics, entropy and complexity measures has been widely utilized for the analysis of measured time series. However, little attention has been devoted to investigating the effects of choosing different partitions to obtain the coarse-grained symbolic sequences. Because the theoretical concepts of generating partitions mostly fail in the case of empirical data, one commonly introduces a homogeneous partition which ensures roughly equidistributed symbols. We will show that such a choice may lead to spurious results for the estimated entropy and will not fully reveal the randomness of the sequence.
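A minimal numerical illustration of partition dependence (the chaotic map and the two partitions are my own choices, not the paper's data): the same time series, symbolized with two different binary partitions, yields noticeably different block-entropy estimates.

```python
# Small illustration of partition dependence: coarse-grain the same chaotic
# time series with two different binary partitions and compare the estimated
# block entropies of the resulting symbol sequences.
from collections import Counter
from math import log2

def logistic_series(n, x0=0.4, r=3.99):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def block_entropy(symbols, k=3):
    """Shannon entropy (bits) of overlapping length-k symbol blocks."""
    blocks = [tuple(symbols[i:i + k]) for i in range(len(symbols) - k + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

xs = logistic_series(100_000)

# Partition A: split at the empirical median ("homogeneous" partition,
# roughly equidistributed symbols).
median = sorted(xs)[len(xs) // 2]
sym_a = [0 if x < median else 1 for x in xs]
# Partition B: an arbitrary fixed threshold.
sym_b = [0 if x < 0.8 else 1 for x in xs]

print("block entropy, median partition   :", round(block_entropy(sym_a), 3))
print("block entropy, threshold-0.8 split:", round(block_entropy(sym_b), 3))
```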

Posted Content
20 Nov 2001
TL;DR: In this article, the authors develop the theory of the recursive-function statistic, its maximum and minimum value, and the existence of absolutely nonstochastic objects (that have maximal sophistication), and determine its relation with the more restricted model classes of finite sets and computable probability distributions, in particular with respect to the algorithmic (Kolmogorov) minimal sufficient statistic, the relation to the halting problem, and further algorithmic properties.
Abstract: The information in an individual finite object (like a binary string) is commonly measured by its Kolmogorov complexity. One can divide that information into two parts: the information accounting for the useful regularity present in the object and the information accounting for the remaining accidental information. There can be several ways (model classes) in which the regularity is expressed. Kolmogorov has proposed the model class of finite sets, generalized later to computable probability mass functions. The resulting theory, known as Algorithmic Statistics, analyzes the algorithmic sufficient statistic when the statistic is restricted to the given model class. However, the most general way to proceed is perhaps to express the useful information as a recursive function. The resulting measure has been called the "sophistication" of the object. We develop the theory of the recursive-function statistic, its maximum and minimum value, the existence of absolutely nonstochastic objects (that have maximal sophistication: all the information in them is meaningful and there is no residual randomness), determine its relation with the more restricted model classes of finite sets and computable probability distributions, in particular with respect to the algorithmic (Kolmogorov) minimal sufficient statistic, the relation to the halting problem, and further algorithmic properties.

Journal ArticleDOI
TL;DR: In this article, it is shown that the competition between interactions on different length scales can cause a glass transition in a system with no explicitly quenched disorder, and a universal criterion is determined for the emergence of an exponentially large number of metastable configurations that leads to a finite configurational entropy and a landscape-dominated viscous flow.
Abstract: We show that the competition between interactions on different length scales, as relevant for the formation of stripes in doped Mott insulators, can cause a glass transition in a system with no explicitly quenched disorder. We analytically determine a universal criterion for the emergence of an exponentially large number of metastable configurations that leads to a finite configurational entropy and a landscape-dominated viscous flow. We demonstrate that glassiness is unambiguously tied to a new length scale which characterizes the typical length over which defects and imperfections in the stripe pattern are allowed to wander over long times.

Journal ArticleDOI
TL;DR: In this paper, Monte Carlo simulation is used to assess the effects of microstructural randomness on the local stress response of composite materials, where the mean, variance and spectral density functions describing the randomly varying elastic properties are required as input.

Journal ArticleDOI
TL;DR: This study verifies the prediction that independent random variation of latency at different locations will give rise to randomness of choice of target when pairs of targets are presented asynchronously.

Posted Content
TL;DR: The aim of this paper is to describe a procedure that combines Java programming and mathematical proofs to compute the exact values of the first 64 bits of a Chaitin Omega; full descriptions of programs and proofs will be given elsewhere.
Abstract: A Chaitin Omega number is the halting probability of a universal Chaitin (self-delimiting Turing) machine. Every Omega number is both computably enumerable (the limit of a computable, increasing, converging sequence of rationals) and random (its binary expansion is an algorithmic random sequence). In particular, every Omega number is strongly non-computable. The aim of this paper is to describe a procedure, which combines Java programming and mathematical proofs, for computing the exact values of the first 64 bits of a Chaitin Omega: 0000001000000100000110001000011010001111110010111011101000010000. Full description of programs and proofs will be given elsewhere.
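For reference, the halting probability whose bits are computed here is defined, for a universal prefix-free machine U, by:

```latex
% Halting probability of a universal self-delimiting (prefix-free) Turing
% machine U: the sum ranges over all programs p on which U halts, each
% weighted by 2 to the minus its length in bits.
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|} .
```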

Journal ArticleDOI
TL;DR: Using the photosensitive character of the chlorine dioxide-iodine-malonic acid reaction-diffusion system, spatial randomness is introduced in the system and Turing patterns appear and are stable at levels of average illumination that would be more than sufficient to suppress pattern formation in the case of homogeneous illumination.
Abstract: The effect of spatially correlated noise on Turing structures is analyzed both experimentally and numerically. Using the photosensitive character of the chlorine dioxide-iodine-malonic acid reaction-diffusion system, spatial randomness is introduced in the system. In the presence of noise, Turing patterns appear and are stable at levels of average illumination that would be more than sufficient to suppress pattern formation in the case of homogeneous illumination.
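A common way to produce such spatially correlated randomness numerically is to low-pass filter white noise. The sketch below is my own construction, not the experimental protocol or the paper's simulation code; it generates a correlated field with a chosen correlation length and uses it to modulate an illumination profile, with all parameter values illustrative.

```python
# Minimal sketch (my construction, not the experimental protocol): build a
# spatially correlated random field by low-pass filtering white noise, a
# generic way to impose "spatial randomness" with a chosen correlation length
# on an illumination profile.
import numpy as np
from scipy.ndimage import gaussian_filter

def correlated_noise(shape=(256, 256), corr_length=8.0, amplitude=1.0, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)
    field = gaussian_filter(white, sigma=corr_length)   # sets the correlation length
    field -= field.mean()
    field /= field.std()                                # zero mean, unit variance
    return amplitude * field

if __name__ == "__main__":
    noise = correlated_noise()
    mean_illumination = 1.0                             # arbitrary units (assumption)
    illumination = mean_illumination * (1.0 + 0.3 * noise)   # 30% relative noise
    print("illumination field:", illumination.shape,
          "min/max =", round(float(illumination.min()), 2),
          round(float(illumination.max()), 2))
```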

01 Jan 2001
TL;DR: Thomas L. Griffiths and Joshua B. Tenenbaum argue that the apparent inconsistency between people's intuitions about chance and the normative predictions of probability theory, as expressed in judgments about randomness and coincidences, can be resolved by focussing on the evidence observations provide about the processes that generated them rather than their likelihood.
Abstract: Randomness and Coincidences: Reconciling Intuition and Probability Theory. Thomas L. Griffiths & Joshua B. Tenenbaum, Department of Psychology, Stanford University, Stanford, CA 94305-2130 USA. gruffydd,jbt @psych.stanford.edu

We argue that the apparent inconsistency between people's intuitions about chance and the normative predictions of probability theory, as expressed in judgments about randomness and coincidences, can be resolved by focussing on the evidence observations provide about the processes that generated them rather than their likelihood. This argument is supported by probabilistic modeling of sequence and number production, together with two experiments that examine judgments about coincidences.

People are notoriously inaccurate in their judgments about randomness, such as whether a sequence of heads and tails like ... is more random than the sequence .... Intuitively, the former sequence seems more random, but both sequences are equally likely to be produced by a random generating process that chooses heads or tails with equal probability, such as a fair coin. This kind of question is often used to illustrate how our intuitions about chance deviate from the normative standards set by probability theory. Our intuitions about coincidental events, which seem to be defined by their improbability, have faced similar criticism from statisticians (e.g., Diaconis & Mosteller, 1989).

The apparent inconsistency between our intuitions about chance and the formal structure of probability theory has provoked attention from philosophers and mathematicians, as well as psychologists. As a result, a number of definitions of randomness exist in both the mathematical (e.g., Chaitin, 2001; Kac, 1983; Li & Vitanyi, 1997) and the psychological (e.g., Falk, 1981; Lopes, 1982) literature. These definitions vary in how well they satisfy our intuitions, and can be hard to reconcile with probability theory. In this paper, we will argue that there is a natural relationship between people's intuitions about chance and the normative standards of probability theory. Traditional criticism of people's intuitions about chance has focused on the fact that people are poor estimators of the likelihood of events being produced by a particular generating process. The models we present turn this question around, asking how much more likely a set of events makes a particular generating process. This question may be far more useful in natural inference situations, where it is often more important to reason diagnostically than predictively, attempting to infer the structure of our world from the data we observe.

Randomness

Reichenbach (1934/1949) is credited with having first suggested that mathematical novices will be unable to produce random sequences, instead showing a tendency to overestimate the frequency with which outcomes alternate. Subsequent research has provided support for this claim (reviewed in Bar-Hillel & Wagenaar, 1991; Tune, 1964; Wagenaar, 1972), with both sequences of numbers (e.g., Budescu, 1987; Rabinowitz, Dunlap, Grant, & Campione, 1989) and two-dimensional black and white grids (Falk, 1981). In producing binary sequences, people alternate with a probability of approximately 0.6, rather than the 0.5 that is seen in sequences produced by a random generating process. This preference for alternation results in subjectively random sequences containing less runs – such as an interrupted series of heads in a set of coin flips – than might be expected by chance (Lopes, ...).

Theories of subjective randomness

A number of theories have been proposed to account for the accuracy of Reichenbach's conjecture. These theories have included postulating that people develop a concept of randomness that differs from the true definition of the term (e.g., Budescu, 1987; Falk, 1981; Skinner, 1942), and that limited short-term memory might contribute to people's responses (Baddeley, 1966; Kareev, 1992; 1995; Wiegersma, 1982). Most recently, Falk and Konold (1997) suggested that the concept of randomness can be connected to the subjective complexity of a sequence, characterized by the difficulty of specifying a rule by which a sequence can be generated. This idea is related to a notion of complexity based on description length (Li & Vitanyi, 1997), and has been considered elsewhere in psychology (Chater, 1996). The account of randomness that has had the strongest influence upon the wider literature of cognitive psychology is Kahneman and Tversky's (1972) suggestion that people may be attempting to produce sequences that are "representative" of the output of a random generating process. For sequences, this means that the number of elements of each type appearing in the sequence should correspond to the overall probability with which these elements occur. Random sequences should also maintain local representativeness, such that subsequences demonstrate the appropriate probabilities.
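The paper's diagnostic move can be made concrete with a small likelihood comparison: two sequences that are equally probable under a fair coin can provide very different evidence about the process that generated them. The sketch below uses two illustrative candidate models and example sequences of my own choosing, not the paper's fitted models or stimuli.

```python
# Tiny illustration of the diagnostic framing: instead of asking "how likely
# is this sequence?", compare how strongly it favors a fair-coin process over
# a repetition-biased Markov process (both models are illustrative).
from math import log2

def loglik_fair(seq):
    """log2 P(seq | fair coin): every symbol has probability 1/2."""
    return -len(seq)

def loglik_sticky(seq, p_repeat=0.8):
    """log2 P(seq | Markov process repeating the last symbol with prob p_repeat)."""
    ll = -1.0                                  # first symbol: probability 1/2
    for prev, cur in zip(seq, seq[1:]):
        ll += log2(p_repeat if cur == prev else 1 - p_repeat)
    return ll

for seq in ["HHTHTTHTHH", "HHHHHHHHHH"]:
    bayes_factor_bits = loglik_fair(seq) - loglik_sticky(seq)
    print(f"{seq}: log2 Bayes factor (fair vs sticky) = {bayes_factor_bits:+.1f} bits")
```

Both example sequences have the same probability under the fair-coin model, yet the comparison assigns them opposite evidential weight, which is the point of reasoning about the generating process rather than the raw likelihood.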

Journal ArticleDOI
TL;DR: By extensive Monte Carlo simulations, numerical evidence for softening to a second-order transition induced by randomness is given in the strongly disordered regime of the three-dimensional four-state Potts model.
Abstract: We study by extensive Monte Carlo simulations the effect of random bond dilution on the phase transition of the three-dimensional four-state Potts model that is known to exhibit a strong first-order transition in the pure case. The phase diagram in the dilution-temperature plane is determined from the peaks of the susceptibility for sufficiently large system sizes. In the strongly disordered regime, numerical evidence for softening to a second-order transition induced by randomness is given. Here a large-scale finite-size scaling analysis, made difficult due to strong crossover effects presumably caused by the percolation fixed point, is performed.
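For orientation, the sketch below is a minimal two-dimensional toy version of the model class studied here (the paper simulates the three-dimensional four-state model at much larger scale with a finite-size scaling analysis of the susceptibility): Metropolis updates for a bond-diluted q-state Potts model with quenched random bonds. All sizes, temperatures, and the dilution level are illustrative.

```python
# Minimal 2D sketch (the paper works in 3D at much larger scale): Metropolis
# updates for a bond-diluted q-state Potts model, where each coupling is
# present with probability p_bond (quenched randomness).
import numpy as np

def run_diluted_potts(L=16, q=4, p_bond=0.7, T=1.0, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    spins = np.zeros((L, L), dtype=int)            # cold start (all aligned)
    # Quenched dilution: one coupling per (right, down) bond of each site.
    bonds = rng.random((L, L, 2)) < p_bond

    def local_energy(i, j, s):
        e = 0.0
        for (di, dj, b_i, b_j, b_dir) in ((0, 1, i, j, 0), (1, 0, i, j, 1),
                                          (0, -1, i, (j - 1) % L, 0),
                                          (-1, 0, (i - 1) % L, j, 1)):
            if bonds[b_i, b_j, b_dir] and spins[(i + di) % L, (j + dj) % L] == s:
                e -= 1.0                           # -J * delta(s_i, s_j), J = 1
        return e

    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            new = rng.integers(0, q)
            dE = local_energy(i, j, new) - local_energy(i, j, spins[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] = new

    # Order parameter: excess population of the majority Potts state.
    counts = np.bincount(spins.ravel(), minlength=q)
    return (q * counts.max() / (L * L) - 1) / (q - 1)

if __name__ == "__main__":
    for T in (0.5, 2.0):
        print(f"T={T}: order parameter ~ {run_diluted_potts(T=T):.2f}")
```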

Journal ArticleDOI
TL;DR: In this article, the low-frequency dynamical and transport properties of random quantum systems whose low temperature (T), low-energy behavior is controlled by strong-disorder fixed points are analyzed.
Abstract: We present results on the low-frequency dynamical and transport properties of random quantum systems whose low temperature (T), low-energy behavior is controlled by strong-disorder fixed points. We obtain the momentum- and frequency-dependent dynamic structure factor in the random singlet (RS) phases of both spin-1/2 and spin-1 random antiferromagnetic chains, as well as in the random dimer and Ising antiferromagnetic phases of spin-1/2 random antiferromagnetic chains. We show that the RS phases are unusual “spin metals” with divergent low-frequency spin conductivity at T=0, and we also follow the conductivity through “metal-insulator” transitions tuned by the strength of dimerization or Ising anisotropy in the spin-1/2 case, and by the strength of disorder in the spin-1 case. We work out the average spin and energy autocorrelations in the one-dimensional random transverse-field Ising model in the vicinity of its quantum critical point. All of the above calculations are valid in the frequency-dominated regime ω≳T, and rely on previously available renormalization group schemes that describe these systems in terms of the properties of certain strong-disorder fixed-point theories. In addition, we obtain some information about the behavior of the dynamic structure factor and dynamical conductivity in the opposite “hydrodynamic” regime ω