
Showing papers on "Randomness published in 2014"


Book
12 Mar 2014
TL;DR: Useful Notions of Probability Theory: Sums of Random Variables, Random Walks and the Central Limit Theorem, Large Deviations, Fractals and Multifractals, Rank-Ordering Statistics and Heavy Tails as discussed by the authors.
Abstract: Useful Notions of Probability Theory.- Sums of Random Variables, Random Walks and the Central Limit Theorem.- Large Deviations.- Power Law Distributions.- Fractals and Multifractals.- Rank-Ordering Statistics and Heavy Tails.- Statistical Mechanics: Probabilistic Point of View and the Concept of "Temperature".- Long-Range Correlations.- Phase Transitions: Critical Phenomena and First-Order Transitions.- Transitions, Bifurcations and Precursors.- The Renormalization Group.- The Percolation Model.- Rupture Models.- Mechanisms for Power Laws.- Self-Organized Criticality.- Introduction to the Physics of Random Systems.- Randomness and Long-Range Laplacian Interactions.

868 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduced a disorder regime for directed polymers in dimension 1+1 that sits between the weak and strong disorder regimes, and showed that the polymer measure under this regime has previously unseen behavior.
Abstract: We introduce a new disorder regime for directed polymers in dimension 1+1 that sits between the weak and strong disorder regimes. We call it the intermediate disorder regime. It is accessed by scaling the inverse temperature parameter β to zero as the polymer length n tends to infinity. The natural choice of scaling is β_n:=βn^(−1/4). We show that the polymer measure under this scaling has previously unseen behavior. While the fluctuation exponents of the polymer endpoint and the log partition function are identical to those for simple random walk (ζ=1/2, χ=0), the fluctuations themselves are different. These fluctuations are still influenced by the random environment, and there is no self-averaging of the polymer measure. In particular, the random distribution of the polymer endpoint converges in law (under a diffusive scaling of space) to a random absolutely continuous measure on the real line. The randomness of the measure is inherited from a stationary process A_β that has the recently discovered crossover distributions as its one-point marginals, which for large β become the GUE Tracy–Widom distribution. We also prove existence of a limiting law for the four-parameter field of polymer transition probabilities that can be described by the stochastic heat equation. In particular, in this weak noise limit, we obtain the convergence of the point-to-point free energy fluctuations to the GUE Tracy–Widom distribution. We emphasize that the scaling behaviour obtained is universal and does not depend on the law of the disorder.
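
The scaling can be explored numerically. Below is a minimal sketch (not from the paper; the Gaussian environment, polymer length, and function names are illustrative choices) that computes the point-to-point partition function of a 1+1-dimensional directed polymer under β_n = β n^(−1/4) and returns the diffusively rescaled random endpoint distribution:

```python
import numpy as np

def endpoint_distribution(n=2000, beta=1.0, seed=0):
    """Directed polymer endpoint law under intermediate disorder
    scaling beta_n = beta * n**(-1/4) (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    beta_n = beta * n ** (-0.25)
    Z = np.zeros(2 * n + 1)   # Z[k]: weight of paths ending at position k - n
    Z[n] = 1.0                # start at the origin
    for _ in range(n):
        omega = rng.standard_normal(2 * n + 1)        # fresh disorder row
        Z = 0.5 * (np.roll(Z, 1) + np.roll(Z, -1)) * np.exp(beta_n * omega)
    p = Z / Z.sum()                                   # random endpoint measure
    x = (np.arange(2 * n + 1) - n) / np.sqrt(n)       # diffusive rescaling
    return x, p
```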

227 citations


Journal ArticleDOI
TL;DR: It is found that subjects' self-created patterns were the most complex, followed by drawing movements of letters and self-paced random motion, and that participants could change the randomness of their behavior depending on context and feedback.
Abstract: Complexity is a hallmark of intelligent behavior consisting both of regular patterns and random variation. To quantitatively assess the complexity and randomness of human motion, we designed a motor task in which we translated subjects' motion trajectories into strings of symbol sequences. In the first part of the experiment participants were asked to perform self-paced movements to create repetitive patterns, copy pre-specified letter sequences, and generate random movements. To investigate whether the degree of randomness can be manipulated, in the second part of the experiment participants were asked to perform unpredictable movements in the context of a pursuit game, where they received feedback from an online Bayesian predictor guessing their next move. We analyzed symbol sequences representing subjects' motion trajectories with five common complexity measures: predictability, compressibility, approximate entropy, Lempel-Ziv complexity, as well as effective measure complexity. We found that subjects' self-created patterns were the most complex, followed by drawing movements of letters and self-paced random motion. We also found that participants could change the randomness of their behavior depending on context and feedback. Our results suggest that humans can adjust both complexity and regularity in different movement types and contexts and that this can be assessed with information-theoretic measures of the symbolic sequences generated from movement trajectories.
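
Of the five measures, Lempel-Ziv complexity is the easiest to reproduce. The sketch below (an illustration, not the authors' analysis code; names are mine) counts the distinct phrases in a greedy left-to-right parse of a symbol string; more regular sequences produce fewer phrases:

```python
def lz_phrase_count(sequence: str) -> int:
    """Count phrases in a greedy LZ78-style parse of a symbol string.
    Lower counts indicate more regular (compressible) sequences."""
    phrases, start, length = set(), 0, 1
    while start + length <= len(sequence):
        phrase = sequence[start:start + length]
        if phrase in phrases:
            length += 1          # extend until the phrase is new
        else:
            phrases.add(phrase)  # record the new phrase, restart after it
            start += length
            length = 1
    return len(phrases)

# A repetitive pattern parses into far fewer phrases than an irregular one.
print(lz_phrase_count("abab" * 16))              # low complexity
print(lz_phrase_count("abbabaabbaababba" * 4))   # higher complexity
```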

218 citations


Journal ArticleDOI
TL;DR: This work modifies the standard l1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refers to the resulting algorithm as weighted l1-minimization.
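
A minimal sketch of the idea (an ISTA-style proximal iteration of my own choosing, not the authors' solver; names and parameters are illustrative): coefficients expected to decay receive larger weights, so the weighted soft-threshold suppresses them more aggressively:

```python
import numpy as np

def weighted_l1(A, b, w, lam=0.1, iters=500):
    """min_x 0.5*||Ax - b||^2 + lam*||diag(w) x||_1 via ISTA (sketch)."""
    L = np.linalg.norm(A, 2) ** 2          # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L      # gradient step on the smooth part
        t = lam * w / L                    # per-coefficient soft thresholds
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return x

# Weights grow with the coefficient index, encoding a priori decay of the
# PC coefficients; unweighted l1 is recovered with w = np.ones(n).
rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[:5] = rng.standard_normal(5)  # low-order support
b = A @ x_true
w = np.sqrt(np.arange(1, n + 1))
x_hat = weighted_l1(A, b, w)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```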

198 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the difficulty of controlling a random laser's emission wavelength, caused by its inherent randomness, can be remedied by matching the pump laser to the specific random medium that scatters light from disordered particles to generate the optical feedback required for stimulated emission.
Abstract: Random lasers generate the optical feedback required for stimulated emission by scattering light from disordered particles. Their inherent randomness, however, makes controlling the emission wavelength difficult. It is now shown that this problem can be remedied by carefully matching the pump laser to the specific random medium. The concept is applied to a one-dimensional optofluidic device, but could also be applicable to other random lasers.

177 citations


Journal ArticleDOI
TL;DR: An efficient method to extract the amount of true randomness that can be obtained by a Quantum Random Number Generator (QRNG) by repeating the measurements of a quantum system and by swapping between two mutually unbiased bases is presented.
Abstract: We present an efficient method to extract the amount of true randomness that can be obtained by a quantum random number generator (QRNG). By repeating the measurements of a quantum system and by swapping between two mutually unbiased bases, a lower bound on the achievable true randomness can be evaluated. The bound is obtained thanks to the uncertainty principle of complementary measurements applied to min-entropy and max-entropy. We tested our method with two different QRNGs, by using a train of qubits or ququarts, and demonstrated the scalability toward practical applications.
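
A simplified, side-information-free reading of that bound is easy to evaluate (an illustrative sketch; the paper's bound also accounts for the adversary). For qubit mutually unbiased bases the overlap is c = 1/2, so the certified min-entropy of the Z outcomes is at least log2(1/c) − H_max(X):

```python
import numpy as np

def max_entropy(p):
    """Renyi entropy of order 1/2 of a distribution."""
    p = np.asarray(p, dtype=float)
    return 2 * np.log2(np.sqrt(p).sum())

def certified_min_entropy(p_x, c=0.5):
    """Lower bound H_min(Z) >= log2(1/c) - H_max(X) for complementary
    measurements with overlap c (c = 1/2 for qubit MUBs). Sketch only."""
    return max(np.log2(1 / c) - max_entropy(p_x), 0.0)

# If the X-basis statistics are nearly deterministic, the Z-basis outcomes
# are certified to carry close to one bit of true randomness each.
print(certified_min_entropy([0.97, 0.03]))   # ~0.58 bits certified
print(certified_min_entropy([0.5, 0.5]))     # 0.0: nothing certified
```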

146 citations


Journal ArticleDOI
TL;DR: It is shown that taking into account the complete set of outcome probabilities is equivalent to optimizing over all possible Bell inequalities, thereby allowing us to determine the optimal Bell inequality for certifying the maximal amount of randomness from a given set of non-local correlations.
Abstract: The majority of recent works investigating the link between non-locality and randomness, e.g. in the context of device-independent cryptography, do so with respect to some specific Bell inequality, usually the CHSH inequality. However, the joint probabilities characterizing the measurement outcomes of a Bell test are richer than just the degree of violation of a single Bell inequality. In this work we show how to take this extra information into account in a systematic manner in order to optimally evaluate the randomness that can be certified from non-local correlations. We further show that taking into account the complete set of outcome probabilities is equivalent to optimizing over all possible Bell inequalities, thereby allowing us to determine the optimal Bell inequality for certifying the maximal amount of randomness from a given set of non-local correlations.
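
For the special case of the CHSH inequality, a closed-form single-inequality bound is known (Pironio et al., Nature 2010): the min-entropy per run is at least 1 − log2(1 + sqrt(2 − S²/4)) at CHSH value S. The sketch below evaluates it; the paper's point is that optimizing over all Bell inequalities can only improve on such bounds:

```python
import numpy as np

def chsh_min_entropy(S):
    """Certified randomness (bits per run) from a CHSH value S,
    valid for 2 <= S <= 2*sqrt(2)."""
    if S <= 2:
        return 0.0                       # no violation, nothing certified
    return 1 - np.log2(1 + np.sqrt(2 - S ** 2 / 4))

for S in (2.0, 2.5, 2 * np.sqrt(2)):
    print(f"S = {S:.3f}: {chsh_min_entropy(S):.3f} bits")  # 0, ~0.27, 1
```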

124 citations


Book ChapterDOI
01 Jan 2014
TL;DR: This chapter discusses several examples where use of a true RNG is critical and shows how it can significantly improve security of cryptographic systems, and discusses industrial and research challenges that prevent widespread use of TRNGs.
Abstract: Random numbers are needed in many areas: cryptography, Monte Carlo computation and simulation, industrial testing and labeling, hazard games, gambling, etc. Random numbers, however, cannot be computed: digital computers operate deterministically and therefore cannot produce them. Instead, random numbers are best obtained using physical (true) random number generators (TRNG), which operate by measuring a well-controlled and specially prepared physical process. Randomness of a TRNG can be precisely, scientifically characterized and measured. Especially valuable are the information-theoretically provable random number generators (RNGs), which, at the state of the art, seem to be possible only by exploiting randomness inherent to certain quantum systems. On the other hand, current industry standards dictate the use of RNGs based on free-running oscillators (FRO), whose randomness is derived from electronic noise present in logic circuits and cannot be strictly proven as uniformly random, but which offer easier technological realization. The FRO approach is currently used in 3rd- and 4th-generation FPGA and ASIC hardware, which is unsuitable for realization of quantum RNGs. In this chapter we compare the weak and strong aspects of the two approaches. Finally, we discuss several examples where use of a true RNG is critical and show how it can significantly improve the security of cryptographic systems, and we discuss industrial and research challenges that prevent widespread use of TRNGs.
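
Post-processing is one standard way to bridge the gap described here between raw physical bits and provable uniformity. A classic example (my illustration, not specific to this chapter) is von Neumann debiasing, which turns independent but biased bits into exactly unbiased ones at the cost of throughput:

```python
import random

def von_neumann(bits):
    """Map bit pairs 01 -> 0, 10 -> 1; discard 00 and 11.
    Output is unbiased if input bits are independent, however biased."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

raw = [int(random.random() < 0.7) for _ in range(100_000)]  # 70/30 biased source
clean = von_neumann(raw)
print(sum(clean) / len(clean))  # close to 0.5, at reduced output rate
```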

111 citations


Journal ArticleDOI
TL;DR: In this article, the authors show how private randomness generated during a Bell test can be directly quantified from the observed correlations, without the need to process these data into an inequality; applied to the data of Christensen et al. (2013), the technique potentially extracts about twice as much randomness as previously reported.
Abstract: Correlations that cannot be reproduced with local variables certify the generation of private randomness. Usually, the violation of a Bell inequality is used to quantify the amount of randomness produced. Here, we show how private randomness generated during a Bell test can be directly quantified from the observed correlations, without the need to process these data into an inequality. The frequency with which the different measurement settings are used during the Bell test can also be taken into account. This improved analysis turns out to be very relevant for Bell tests performed with a finite collection efficiency. In particular, applying our technique to the data of a recent experiment (Christensen et al 2013 Phys. Rev. Lett. 111 130406), we show that about twice as much randomness as previously reported can be potentially extracted from this setup.

110 citations


Journal ArticleDOI
TL;DR: By computing high-order finite differences of the chaotic laser intensity time series, time series with symmetric statistical distributions are obtained that are more conducive to ultrafast random bit generation.
Abstract: This paper reports the experimental investigation of two different approaches to random bit generation based on the chaotic dynamics of a semiconductor laser with optical feedback. By computing high-order finite differences of the chaotic laser intensity time series, we obtain time series with symmetric statistical distributions that are more conducive to ultrafast random bit generation. The first approach is guided by information-theoretic considerations and could potentially reach random bit generation rates as high as 160 Gb/s by extracting 4 bits per sample. The second approach is based on pragmatic considerations and could lead to rates of 2.2 Tb/s by extracting 55 bits per sample. The randomness of the bit sequences obtained from the two approaches is tested against three standard randomness tests (ENT, Diehard, and NIST tests), as well as by calculating the statistical bias and the serial correlation coefficients on longer sequences of random bits than those used in the standard tests.
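
The post-processing chain can be emulated on synthetic data. The following sketch (with illustrative parameters, not the experimental ones) takes high-order finite differences of an intensity trace, keeps a few least significant bits per sample, and computes the two statistics the paper reports on long sequences, bias and the lag-1 serial correlation coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
intensity = np.cumsum(rng.standard_normal(1_000_000))  # stand-in for a chaotic trace

order, n_lsb = 4, 4                                    # illustrative choices
diff = np.diff(intensity, n=order)                     # symmetrizes the distribution
samples = np.round(diff * 100).astype(np.int64) & ((1 << n_lsb) - 1)

# Unpack the retained least significant bits into a bit stream.
bits = ((samples[:, None] >> np.arange(n_lsb)) & 1).ravel()

bias = abs(bits.mean() - 0.5)
serial = np.corrcoef(bits[:-1], bits[1:])[0, 1]        # lag-1 correlation
print(f"bias = {bias:.5f}, serial correlation = {serial:.5f}")
```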

109 citations


Journal ArticleDOI
TL;DR: In this paper, the S = 1/2 antiferromagnetic Heisenberg model on the triangular lattice with quenched randomness in the exchange interaction is proposed as a minimal model of the observed quantum spin-liquid behavior.
Abstract: The experimental quest for the hypothetical "quantum spin liquid" state has recently met a few promising candidate materials, including the organic salts κ-(ET)2Cu2(CN)3 and EtMe3Sb[Pd(dmit)2]2, S = 1/2 triangular-lattice Heisenberg antiferromagnets consisting of molecular dimers. These compounds exhibit neither magnetic ordering nor spin freezing down to very low temperatures, while various physical quantities exhibit gapless behaviors. Recent dielectric measurements revealed a glassy dielectric response suggesting the random freezing of the electric polarization degrees of freedom. Inspired by this observation, we propose as a minimal model of the observed quantum spin-liquid behavior the S = 1/2 antiferromagnetic Heisenberg model on the triangular lattice with quenched randomness in the exchange interaction. We study both zero- and finite-temperature properties of the model by an exact diagonalization method, and find that when the randomness exceeds a critical value the model exhibits a quantum spin-liquid ground state.
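
Exact diagonalization of a random-bond S = 1/2 Heisenberg model is compact enough to sketch. The code below (a minimal illustration on a small ring rather than the paper's triangular lattice; all parameters and names are mine) draws exchange couplings from a box distribution around J and computes the lowest levels; the randomness strength delta plays the role of the paper's critical parameter:

```python
import numpy as np

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, i, N):
    """Embed a single-site operator at site i in an N-site Hilbert space."""
    out = np.eye(1, dtype=complex)
    for k in range(N):
        out = np.kron(out, op if k == i else np.eye(2, dtype=complex))
    return out

def random_bond_ring(N=8, J=1.0, delta=0.5, seed=0):
    """Heisenberg ring with quenched random couplings J_ij in J*(1 +/- delta)."""
    rng = np.random.default_rng(seed)
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N
        Jij = J * (1 + delta * rng.uniform(-1, 1))   # quenched random bond
        for op in (sx, sy, sz):
            H += Jij * site_op(op, i, N) @ site_op(op, j, N)
    return H

E = np.linalg.eigvalsh(random_bond_ring())
print("ground-state energy:", E[0], " first excitation gap:", E[1] - E[0])
```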

Journal ArticleDOI
TL;DR: The simplest of the authors' witnesses is highly robust to technical imperfections, and can certify the use of qubits in the presence of arbitrary noise and arbitrarily low detection efficiency, suggesting applications in quantum information processing.
Abstract: We consider the problem of testing the dimension of uncharacterized classical and quantum systems in a prepare-and-measure setup. Here we assume the preparation and measurement devices to be independent, thereby making the problem nonconvex. We present a simple method for generating nonlinear dimension witnesses for systems of arbitrary dimension. The simplest of our witnesses is highly robust to technical imperfections, and can certify the use of qubits in the presence of arbitrary noise and arbitrarily low detection efficiency. Finally, we show that this witness can be used to certify the presence of randomness, suggesting applications in quantum information processing.

Journal ArticleDOI
TL;DR: Working in the class of least adversarial power, which is relevant for assessing setups operated by trusted experimentalists, this paper compares three levels of characterization of the devices and presents a systematic and efficient approach to quantifying the amount of intrinsic randomness.
Abstract: The amount of intrinsic randomness that can be extracted from measurement on quantum systems depends on several factors: notably, the power given to the adversary and the level of characterization of the devices of the authorized partners. After presenting a systematic introduction to these notions, in this paper we work in the class of least adversarial power, which is relevant for assessing setups operated by trusted experimentalists, and compare three levels of characterization of the devices. Many recent studies have focused on the so-called "device-independent" level, in which a lower bound on the amount of intrinsic randomness can be certified without any characterization. The other extreme is the case when all the devices are fully characterized: this "tomographic" level has been known for a long time. We present for this case a systematic and efficient approach to quantifying the amount of intrinsic randomness, and show that setups involving ancillas (positive-operator valued measures, pointer measurements) may not be interesting here, insofar as one may extract randomness from the ancilla rather than from the system under study. Finally, we study how much randomness can be obtained in presence of an intermediate level of characterization related to the task of "steering", in which Bob's device is fully characterized while Alice's is a black box. We obtain our results here by adapting the NPA hierarchy of semidefinite programs to the steering scenario. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to "50 years of Bell's theorem".
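
At the fully characterized, "tomographic" level, quantifying the randomness of a single measurement reduces to the Born rule. A minimal sketch (illustrative only, ignoring adversarial side information):

```python
import numpy as np

def born_min_entropy(rho, povm):
    """Min-entropy of outcomes when measuring state rho with a POVM."""
    p = np.array([np.trace(rho @ E).real for E in povm])
    return -np.log2(p.max())

# Qubit |0><0| measured in the X basis: outcomes are uniform -> 1 bit per run.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([[1], [1]]) / np.sqrt(2)
minus = np.array([[1], [-1]]) / np.sqrt(2)
povm = [plus @ plus.T.conj(), minus @ minus.T.conj()]
print(born_min_entropy(rho, povm))  # 1.0
```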

Posted Content
TL;DR: The authors show how to expand a random seed at an exponential rate without trusting the underlying quantum devices, by proving that the Renyi divergence of the outputs of the protocol (for a specific bounding operator) decreases linearly as the protocol iterates.
Abstract: Randomness is a vital resource for modern day information processing, especially for cryptography. A wide range of applications critically rely on abundant, high quality random numbers generated securely. Here we show how to expand a random seed at an exponential rate without trusting the underlying quantum devices. Our approach is secure against the most general adversaries, and has the following new features: cryptographic level of security, tolerating a constant level of imprecision in the devices, requiring only a unit size quantum memory per device component for the honest implementation, and allowing a large natural class of constructions for the protocol. In conjunction with a recent work by Chung, Shi and Wu, it also leads to robust unbounded expansion using just 2 multi-part devices. When adapted for distributing cryptographic keys, our method achieves, for the first time, exponential expansion combined with cryptographic security and noise tolerance. The proof proceeds by showing that the Renyi divergence of the outputs of the protocol (for a specific bounding operator) decreases linearly as the protocol iterates. At the heart of the proof are a new uncertainty principle on quantum measurements, and a method for simulating trusted measurements with untrusted devices.

Journal ArticleDOI
TL;DR: In this paper, a high-speed quantum random number generator is presented, where the timing of single-photon detection relative to an external time reference is measured as the raw data.
Abstract: We present a practical high-speed quantum random number generator, where the timing of single-photon detection relative to an external time reference is measured as the raw data. The bias of the raw data can be substantially reduced compared with the previous realizations. The raw random bit rate of our generator can reach 109 Mbps. We develop a model for the generator and evaluate the min-entropy of the raw data. Toeplitz matrix hashing is applied for randomness extraction, after which the final random bits are able to pass the standard randomness tests.
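
Toeplitz-matrix hashing, the extraction step named in the abstract, is a short computation over GF(2). A minimal sketch with illustrative sizes: a random seed fixes an m × n Toeplitz matrix, the extractor is a matrix-vector product mod 2, and m is chosen below the evaluated min-entropy of the n raw bits:

```python
import numpy as np

def toeplitz_extract(raw_bits, m, seed_bits):
    """Hash n raw bits down to m nearly uniform bits with a Toeplitz matrix.
    seed_bits must hold m + n - 1 bits defining the first column and row."""
    n = len(raw_bits)
    assert len(seed_bits) == m + n - 1
    i, j = np.indices((m, n))
    T = seed_bits[i - j + n - 1]          # constant along each diagonal
    return (T @ raw_bits) % 2

rng = np.random.default_rng(7)
n, m = 1024, 512                          # m set by the min-entropy estimate
raw = rng.integers(0, 2, n)
seed = rng.integers(0, 2, m + n - 1)
print(toeplitz_extract(raw, m, seed)[:16])
```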

Book ChapterDOI
01 Jan 2014
TL;DR: Any realistic model of a real-world phenomenon must take into account the possibility of randomness, which is usually accomplished by allowing the model to be probabilistic in nature.
Abstract: Any realistic model of a real-world phenomenon must take into account the possibility of randomness. That is, more often than not, the quantities we are interested in will not be predictable in advance but, rather, will exhibit an inherent variation that should be taken into account by the model. This is usually accomplished by allowing the model to be probabilistic in nature. Such a model is, naturally enough, referred to as a probability model.

Proceedings ArticleDOI
06 Mar 2014
TL;DR: Ring oscillator (RO)-based TRNGs offer the advantage of design simplicity, but previous methods using a slow jittery clock to sample a fast clock provide low randomness and are vulnerable to power supply attacks.
Abstract: True random number generators (TRNGs) use physical randomness as entropy sources and are heavily used in cryptography and security [1]. Although hardware TRNGs provide excellent randomness, power consumption and design complexity are often high. Previous work has demonstrated TRNGs based on a resistor-amplifier-ADC chain [2], oscillator jitter [1], metastability [3-5] and other device noise [6-7]. However, analog designs suffer from variation and noise, making them difficult to integrate with digital circuits. Recent metastability-based methods [3-5] provide excellent performance but often require careful calibration to remove bias. SiN MOSFETs [6] exploit larger thermal noise but require post-processing to achieve sufficient randomness. An oxide breakdown-based TRNG [7] shows high entropy but suffers from low performance and high energy/bit. Ring oscillator (RO)-based TRNGs offer the advantage of design simplicity, but previous methods using a slow jittery clock to sample a fast clock provide low randomness [1] and are vulnerable to power supply attacks [8]. In addition, the majority of previous methods cannot pass all NIST randomness tests.
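
The criticism of slow-jittery-clock sampling can be reproduced in a toy model (illustrative numbers and names, not a circuit simulation): sample the phase of a fast clock at the edges of a jittery slow clock, and watch bias and lag-1 correlation collapse only once the per-edge jitter becomes comparable to the fast period:

```python
import numpy as np

def ro_trng_bits(n_bits, t_fast=1.0, t_slow=100.0, jitter=0.05, seed=0):
    """Sample a fast square wave at the edges of a jittery slow clock."""
    rng = np.random.default_rng(seed)
    edges = np.cumsum(t_slow + jitter * rng.standard_normal(n_bits))
    return ((edges // (t_fast / 2)) % 2).astype(int)  # fast-clock phase bit

for jit in (0.01, 0.5, 5.0):  # jitter per slow period, in fast-period units
    bits = ro_trng_bits(100_000, jitter=jit)
    corr = np.corrcoef(bits[:-1], bits[1:])[0, 1]
    print(f"jitter={jit}: bias={abs(bits.mean() - 0.5):.3f}, lag-1 corr={corr:.3f}")
```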

Journal ArticleDOI
04 Apr 2014-Entropy
TL;DR: This work introduces an intersection information measure based on the Gacs-Korner common random variable that is the first to satisfy the coveted target monotonicity property.
Abstract: The introduction of the partial information decomposition generated a flurry of proposals for defining an intersection information that quantifies how much of “the same information” two or more random variables specify about a target random variable. As of yet, none is wholly satisfactory. A palatable measure of intersection information would provide a principled way to quantify slippery concepts, such as synergy. Here, we introduce an intersection information measure based on the Gacs-Korner common random variable that is the first to satisfy the coveted target monotonicity property. Our measure is imperfect, too, and we suggest directions for improvement.
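
The Gács-Körner common random variable has a concrete combinatorial form: it labels the connected components of the bipartite graph whose edges form the support of the joint distribution. The sketch below (an illustration of that ingredient only, not the paper's full intersection measure) computes its entropy from a joint probability matrix:

```python
import numpy as np

def gacs_korner_entropy(p_xy, tol=1e-12):
    """Entropy of the Gacs-Korner common part of (X, Y).

    The common variable labels connected components of the bipartite
    graph with an edge (x, y) whenever p(x, y) > 0."""
    nx, ny = p_xy.shape
    comp = [-1] * (nx + ny)               # x nodes 0..nx-1, y nodes nx..nx+ny-1
    c = 0
    for start in range(nx + ny):
        if comp[start] != -1:
            continue
        stack, comp[start] = [start], c
        while stack:                       # DFS over the support graph
            v = stack.pop()
            if v < nx:
                nbrs = [nx + y for y in range(ny) if p_xy[v, y] > tol]
            else:
                nbrs = [x for x in range(nx) if p_xy[x, v - nx] > tol]
            for u in nbrs:
                if comp[u] == -1:
                    comp[u] = c
                    stack.append(u)
        c += 1
    q = np.zeros(c)                        # component probabilities
    for x in range(nx):
        q[comp[x]] += p_xy[x].sum()
    q = q[q > 0]
    return -(q * np.log2(q)).sum() + 0.0

# Block-diagonal support -> one common bit; full support -> zero.
print(gacs_korner_entropy(np.array([[.25, .25, 0, 0], [0, 0, .25, .25]])))  # 1.0
print(gacs_korner_entropy(np.full((2, 2), .25)))                            # 0.0
```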

Journal ArticleDOI
TL;DR: A novel design method for discrete-time chaos-based true random number generators is developed, using the skew tent map as a case study, and a current-mode skew tent map circuit is designed to validate the proposed method.
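
The skew tent map itself is easy to iterate in software (the paper designs an analog current-mode circuit; this sketch only illustrates the map). Its invariant density on [0, 1] is uniform, so thresholding the state at 1/2 yields balanced bits for any skew parameter a, though raw bits remain serially correlated and need post-processing in practice:

```python
def skew_tent_bits(n, a=0.3, x0=0.123456789):
    """Iterate the skew tent map and emit one bit per iteration (sketch)."""
    x, bits = x0, []
    for _ in range(n):
        x = x / a if x < a else (1 - x) / (1 - a)
        bits.append(1 if x > 0.5 else 0)
    return bits

bits = skew_tent_bits(100_000)
print(sum(bits) / len(bits))  # close to 0.5 regardless of the skew a
```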

Journal ArticleDOI
TL;DR: In this paper, a new hybrid reliability analysis technique based on the convex modeling theory is developed for structures with multi-source uncertainties, which may contain randomness, fuzziness, and non-probabilistic boundedness.
Abstract: A new hybrid reliability analysis technique based on the convex modeling theory is developed for structures with multi-source uncertainties, which may contain randomness, fuzziness, and non-probabilistic boundedness. By solving the convex modeling reliability problem and further analyzing the correlation within uncertainties, the structural hybrid reliability is obtained. Considering various cases of uncertainties of the structure, four hybrid models are built: convex with random, convex with fuzzy random, convex with interval, and convex with the other three combined. The present hybrid models are compared with the conventional probabilistic and the non-probabilistic models by two typical numerical examples. The results demonstrate the accuracy and effectiveness of the proposed hybrid reliability analysis method.

Journal ArticleDOI
TL;DR: A system with the bimolecular irreversible kinetic reaction A + B → ∅ is studied, where the underlying transport of reactants is governed by diffusion and the local reaction term is given by the law of mass action.
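
The setup lends itself to a particle-based sketch (illustrative, not the paper's semi-analytical machinery): equal numbers of A and B walkers diffuse on a periodic one-dimensional lattice and annihilate pairwise on contact, and the surviving concentration is tracked over time:

```python
import numpy as np
rng = np.random.default_rng(0)

L, N, steps = 1000, 2000, 200   # lattice size, initial particles per species
A = rng.integers(0, L, N)
B = rng.integers(0, L, N)

for t in range(steps):
    A = (A + rng.choice((-1, 1), A.size)) % L   # diffusion of each species
    B = (B + rng.choice((-1, 1), B.size)) % L
    # A + B -> 0: annihilate min(#A, #B) pairs on every shared site.
    nA = np.bincount(A, minlength=L)
    nB = np.bincount(B, minlength=L)
    kill = np.minimum(nA, nB)
    if kill.any():
        keepA = np.ones(A.size, bool)
        keepB = np.ones(B.size, bool)
        for site in np.nonzero(kill)[0]:
            keepA[np.nonzero(A == site)[0][:kill[site]]] = False
            keepB[np.nonzero(B == site)[0][:kill[site]]] = False
        A, B = A[keepA], B[keepB]
    if t % 50 == 0:
        print(f"t={t}: concentration = {A.size / L:.3f}")
```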

Journal ArticleDOI
TL;DR: A class of uncertain random optimization for decision systems, called uncertain random multi-objective programming, is suggested in this paper, involving notions of Pareto solutions and compromise solutions as well as two compromise models.
Abstract: Uncertain random variables are used to describe the phenomenon of simultaneous appearance of both uncertainty and randomness in a complex system. For modeling multi-objective decision-making problems with uncertain random parameters, a class of uncertain random optimization is suggested for decision systems in this paper, called the uncertain random multi-objective programming. For solving the uncertain random programming, some notions of the Pareto solutions and the compromise solutions as well as two compromise models are defined. Subsequently, some properties of these models are investigated, and then two equivalent deterministic mathematical programming models under some particular conditions are presented. Some numerical examples are also given for illustration.

Journal ArticleDOI
TL;DR: A new concept of a true random number generator (TRNG) is proposed in which the direct proximity of the metastable point is not mandatory; instead, the transition times of two devices are compared.
Abstract: The paper introduces a new concept of a true random number generator (TRNG). Most metastability-based solutions operate on the uncertainty of a logical output state of a device (flip-flop, D-latch) aimed to be resolved from an exact metastable point. However, it has been shown that the metastable point of a bistable circuit (which is practically impossible to reach) does not guarantee absolute randomness or sufficient entropy. We propose the concept of a device in which the direct proximity of the metastable point is not mandatory. In our concept the transition times of two devices are compared. Such construction is less sensitive to the proximity of the metastable point, temperature fluctuations, and power supply instabilities. The paper briefly describes the metastability phenomena in general and other known metastability-based TRNG concepts. A new concept of a dual-metastability time-competitive generator is presented, analyzed both numerically and theoretically, and verified based on the sample circuit's implementation. Empirical and statistical test results are presented.
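
The core idea, racing the transition times of two nominally identical devices, can be captured in a toy Monte Carlo (my illustration; the paper's analysis is at circuit level). Each transition time is a nominal delay plus Gaussian jitter, the bit records which device switches first, and a small mean mismatch maps to bias through the Gaussian CDF:

```python
import numpy as np
from math import erf

def transition_race(n, delay=100.0, mismatch=0.2, jitter=1.0, seed=0):
    """Bit = 1 if device 1 switches before device 2 (sketch)."""
    rng = np.random.default_rng(seed)
    t1 = delay + jitter * rng.standard_normal(n)
    t2 = delay + mismatch + jitter * rng.standard_normal(n)
    return (t1 < t2).astype(int)

bits = transition_race(1_000_000)
# P(bit=1) = Phi(mismatch / (jitter*sqrt(2))) = 0.5*(1 + erf(mismatch/(2*jitter)))
p_analytic = 0.5 * (1 + erf(0.2 / (2 * 1.0)))
print(bits.mean(), p_analytic)   # both near 0.556: mismatch shows up as bias
```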

Journal ArticleDOI
TL;DR: A surprising discovery is reported: for a broad range of gate failure probability, decoders actually benefit from faults in logic gates, which serve as an inherent source of randomness and help the decoding algorithm escape from local minima associated with trapping sets.
Abstract: We propose a gradient descent type bit flipping algorithm for decoding low density parity check codes on the binary symmetric channel. Randomness introduced in the bit flipping rule makes this class of decoders not only superior to other decoding algorithms of this type, but also robust to logic-gate failures. We report a surprising discovery that for a broad range of gate failure probability our decoders actually benefit from faults in logic gates which serve as an inherent source of randomness and help the decoding algorithm to escape from local minima associated with trapping sets.
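
The decoder family is compact enough to sketch. Below is a noisy gradient-descent bit-flipping decoder on the binary symmetric channel, run on a toy Hamming(7,4) code (an illustration: the paper targets LDPC codes, and its randomness comes from gate failures rather than deliberately injected noise). Each bit's inversion function combines its agreement with the channel output and the parities of its checks; the random perturbation helps escape local minima:

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],     # toy Hamming(7,4) parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def noisy_gdbf(y, H, max_iter=50, noise=0.5, seed=0):
    """Gradient-descent bit flipping on the BSC, bipolar {-1,+1} convention.
    The random perturbation helps the search escape trapping sets."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    for _ in range(max_iter):
        # Parity of each check in bipolar form: product of its bits.
        check = np.array([np.prod(x[row == 1]) for row in H])
        if np.all(check == 1):
            return x                       # valid codeword found
        # Inversion function: channel agreement + adjacent check parities + noise.
        E = x * y + H.T @ check + noise * rng.standard_normal(x.size)
        x[np.argmin(E)] *= -1              # flip the least reliable bit
    return x

# Transmit the all-zero codeword (all +1 in bipolar form) through a BSC.
rng = np.random.default_rng(3)
y = np.where(rng.random(7) < 0.1, -1, 1)
print("received:", y, " decoded:", noisy_gdbf(y, H))
```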

Journal ArticleDOI
TL;DR: It is demonstrated the physical generation of random bits at high bit rates using optical chaos from a solitary laser diode and therefore without the complex addition of either external optical feedback or injection.
Abstract: We demonstrate the physical generation of random bits at high bit rates (> 100 Gb/s) using optical chaos from a solitary laser diode and therefore without the complex addition of either external optical feedback or injection. This striking result is obtained despite the low dimension and relatively small bandwidth of the laser chaos, i.e. two characteristics that have been so far considered as limiting the performances of optical chaos-based applications. We unambiguously attribute the successful randomness at high speed to the physics of the laser chaotic polarization dynamics and the resulting growth rate of the dynamical entropy.

Journal ArticleDOI
TL;DR: Security analysis shows that the CVQDKD scheme can securely and effectively transfer pre-determined keys under ideal conditions, and that it can resist both the entanglement and beam splitter attacks under a relatively high channel transmission efficiency.
Abstract: The distribution of deterministic keys is of significance in personal communications, but the existing continuous variable quantum key distribution protocols can only generate random keys. By exploiting the entanglement properties of two-mode squeezed states, a continuous variable quantum deterministic key distribution (CVQDKD) scheme is presented for handing over the pre-determined key to the intended receiver. The security of the CVQDKD scheme is analyzed in detail from the perspective of information theory. It shows that the scheme can securely and effectively transfer pre-determined keys under ideal conditions. The proposed scheme can resist both the entanglement and beam splitter attacks under a relatively high channel transmission efficiency.

Posted Content
TL;DR: The Equivalence Lemma, a general principle for proving composition security of untrusted-device protocols, implies that unbounded randomness expansion can be achieved simply by cross-feeding any two expansion protocols; such unbounded expansion can moreover be made robust, which was not known before.
Abstract: How to generate provably true randomness with minimal assumptions? This question is important not only for the efficiency and the security of information processing, but also for understanding how extremely unpredictable events are possible in Nature. All current solutions require special structures in the initial source of randomness, or a certain independence relation among two or more sources. Both types of assumptions are impossible to test and difficult to guarantee in practice. Here we show how this fundamental limit can be circumvented by extractors that base security on the validity of physical laws and extract randomness from untrusted quantum devices. In conjunction with the recent work of Miller and Shi (arXiv:1402.0489), our physical randomness extractor uses just a single and general weak source, produces an arbitrarily long and near-uniform output, with a close-to-optimal error, secure against all-powerful quantum adversaries, and tolerating a constant level of implementation imprecision. The source necessarily needs to be unpredictable to the devices, but otherwise can even be known to the adversary. Our central technical contribution, the Equivalence Lemma, provides a general principle for proving composition security of untrusted-device protocols. It implies that unbounded randomness expansion can be achieved simply by cross-feeding any two expansion protocols. In particular, such an unbounded expansion can be made robust, which is known for the first time. Another significant implication is that it enables the secure randomness generation and key distribution using public randomness, such as that broadcast by NIST's Randomness Beacon. Our protocol also provides a method for refuting local hidden variable theories under a weak assumption on the available randomness for choosing the measurement settings.