
Showing papers on "Randomness published in 2017"


Proceedings Article
01 Mar 2017
TL;DR: World specialists will talk about reliability tests in quantum networks; about quantum hacking, its importance and limitations, and its role in classical and quantum cryptography; about high-rate and low-cost QKD systems; about free-space quantum communication; and about future quantum repeaters for continental-scale quantum communication.
Abstract: Local randomness for true random number generators; non-local randomness for the distribution of cryptographic keys; towards faster, longer-distance, and cheaper QKD engines; quantum repeaters; device-independent quantum information processing.

681 citations


Journal ArticleDOI
TL;DR: In this article, the relationship between quantum chaos and pseudorandomness was studied by developing probes of unitary design, and it was shown that the norm squared of a generalization of out-of-time-order 2k-point correlators is proportional to the kth frame potential.
Abstract: We study the relationship between quantum chaos and pseudorandomness by developing probes of unitary design. A natural probe of randomness is the “frame potential,” which is minimized by unitary k-designs and measures the 2-norm distance between the Haar random unitary ensemble and another ensemble. A natural probe of quantum chaos is out-of-time-order (OTO) four-point correlation functions. We show that the norm squared of a generalization of out-of-time-order 2k-point correlators is proportional to the kth frame potential, providing a quantitative connection between chaos and pseudorandomness. Additionally, we prove that these 2k-point correlators for Pauli operators completely determine the k-fold channel of an ensemble of unitary operators. Finally, we use a counting argument to obtain a lower bound on the quantum circuit complexity in terms of the frame potential. This provides a direct link between chaos, complexity, and randomness.
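The frame potential described above is straightforward to estimate numerically. Below is a minimal Monte Carlo sketch (not code from the paper) that samples Haar-random unitaries and estimates F_k = E|Tr(U†V)|^(2k); for the Haar ensemble on U(d) with d ≥ k this equals k!, so the k = 1 estimate should come out close to 1.

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a Haar-random unitary via QR decomposition of a complex Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # fix column phases so the distribution is exactly Haar

def frame_potential(d, k, trials, seed=0):
    """Monte Carlo estimate of F_k = E_{U,V} |Tr(U^dagger V)|^(2k) over Haar pairs."""
    rng = np.random.default_rng(seed)
    vals = np.empty(trials)
    for t in range(trials):
        u, v = haar_unitary(d, rng), haar_unitary(d, rng)
        vals[t] = np.abs(np.trace(u.conj().T @ v)) ** (2 * k)
    return vals.mean()

f1 = frame_potential(d=8, k=1, trials=4000)  # Haar value is k! = 1
```

Any other ensemble of unitaries can be substituted for the Haar sampler; by the minimization property, its estimated F_k can only be larger.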

356 citations


Journal ArticleDOI
TL;DR: Results show that LIF and the new method proposed in this research are very efficient when dealing with nonlinear performance functions, small failure probabilities, complicated limit states, and high-dimensional engineering problems.

268 citations


Posted Content
TL;DR: This paper proposes a new defense algorithm called Random Self-Ensemble (RSE), which adds random noise layers to the neural network to prevent the strong gradient-based attacks, and ensembles the prediction over random noises to stabilize the performance.
Abstract: Recent studies have revealed the vulnerability of deep neural networks: a small adversarial perturbation that is imperceptible to humans can easily make a well-trained deep neural network misclassify. This makes it unsafe to apply neural networks in security-critical applications. In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) by combining two important concepts: randomness and ensemble. To protect a targeted model, RSE adds random noise layers to the neural network to prevent the strong gradient-based attacks, and ensembles the prediction over random noises to stabilize the performance. We show that our algorithm is equivalent to ensembling an infinite number of noisy models f_ε without any additional memory overhead, and that the proposed training procedure based on noisy stochastic gradient descent ensures the ensemble model has good predictive capability. Our algorithm significantly outperforms previous defense techniques on real data sets. For instance, on CIFAR-10 with a VGG network (which has 92% accuracy without any attack), under the strong C&W attack within a certain distortion tolerance, the accuracy of the unprotected model drops to less than 10% and the best previous defense technique achieves 48% accuracy, while our method still has 86% prediction accuracy under the same level of attack. Finally, our method is simple and easy to integrate into any neural network.
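The two ingredients, noise injection plus ensembling at prediction time, can be sketched in a few lines. This toy uses a single fixed linear map as a stand-in for a trained network; the layer placement and noisy-SGD training of the actual RSE algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))   # stand-in for trained network weights

def noisy_forward(x, noise_std=0.1):
    """Forward pass with a noise-injection layer before the linear map."""
    eps = rng.normal(0.0, noise_std, size=x.shape)
    return W @ (x + eps)

def rse_predict(x, n_draws=2000):
    """Ensemble the prediction over independent noise draws."""
    return np.mean([noisy_forward(x) for _ in range(n_draws)], axis=0)

x = rng.standard_normal(5)
clean = W @ x               # noiseless prediction
ensembled = rse_predict(x)  # approaches the clean prediction as draws grow
```

The averaging is what stabilizes accuracy: a single noisy pass is a poor predictor, but the ensemble mean concentrates while the injected noise still disrupts gradient-based attacks.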

237 citations


Journal ArticleDOI
TL;DR: The results provide an explanation for why exact-diagonalization studies on random models see an apparent scaling near the transition while also obtaining finite-size scaling exponents that strongly violate Harris-Chayes bounds that apply to disorder-driven transitions.
Abstract: We provide a systematic comparison of the many-body localization (MBL) transition in spin chains with nonrandom quasiperiodic versus random fields. We find evidence suggesting that these belong to two separate universality classes: the first dominated by "intrinsic" intrasample randomness, and the second dominated by external intersample quenched randomness. We show that the effects of intersample quenched randomness are strongly growing, but not yet dominant, at the system sizes probed by exact-diagonalization studies on random models. Thus, the observed finite-size critical scaling collapses in such studies appear to be in a preasymptotic regime near the nonrandom universality class, but showing signs of the initial crossover towards the external-randomness-dominated universality class. Our results provide an explanation for why exact-diagonalization studies on random models see an apparent scaling near the transition while also obtaining finite-size scaling exponents that strongly violate Harris-Chayes bounds that apply to disorder-driven transitions. We also show that the MBL phase is more stable for the quasiperiodic model as compared to the random one, and the transition in the quasiperiodic model suffers less from certain finite-size effects.

182 citations


Proceedings ArticleDOI
22 May 2017
TL;DR: This paper proposes two large-scale distributed protocols, RandHound and RandHerd, which provide publicly-verifiable, unpredictable, and unbiasable randomness against Byzantine adversaries.
Abstract: Bias-resistant public randomness is a critical component in many (distributed) protocols. Generating public randomness is hard, however, because active adversaries may behave dishonestly to bias public random choices toward their advantage. Existing solutions do not scale to hundreds or thousands of participants, as is needed in many decentralized systems. We propose two large-scale distributed protocols, RandHound and RandHerd, which provide publicly-verifiable, unpredictable, and unbiasable randomness against Byzantine adversaries. RandHound relies on an untrusted client to divide a set of randomness servers into groups for scalability, and it depends on the pigeonhole principle to ensure output integrity, even for non-random, adversarial group choices. RandHerd implements an efficient, decentralized randomness beacon. RandHerd is structurally similar to a BFT protocol, but uses RandHound in a one-time setup to arrange participants into verifiably unbiased random secret-sharing groups, which then repeatedly produce random output at predefined intervals. Our prototype demonstrates that RandHound and RandHerd achieve good performance across hundreds of participants while retaining a low failure probability by properly selecting protocol parameters, such as a group size and secret-sharing threshold. For example, when sharding 512 nodes into groups of 32, our experiments show that RandHound can produce fresh random output after 240 seconds. RandHerd, after a setup phase of 260 seconds, is able to generate fresh random output in intervals of approximately 6 seconds. For this configuration, both protocols operate at a failure probability of at most 0.08% against a Byzantine adversary.
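For intuition, here is a minimal commit-then-reveal round in Python, a far simpler primitive than RandHound/RandHerd, which add verifiable secret sharing and server grouping precisely because plain commit-reveal lets a dishonest participant bias the output by withholding its reveal.

```python
import hashlib
import secrets

def commit(share: bytes) -> bytes:
    """Binding commitment to a share (fine here because shares are high-entropy)."""
    return hashlib.sha256(share).digest()

# Phase 1: each of 5 participants commits to a 32-byte random share.
shares = [secrets.token_bytes(32) for _ in range(5)]
commitments = [commit(s) for s in shares]

# Phase 2: shares are revealed and checked against the commitments.
assert all(commit(s) == c for s, c in zip(shares, commitments))

# The collective output is the XOR of all shares: unpredictable as long as
# at least one honest participant's share is random.
output = bytearray(32)
for s in shares:
    output = bytearray(a ^ b for a, b in zip(output, s))
output = bytes(output)
```

A participant who sees the other reveals before publishing its own can abort to re-roll the result; replacing the bare shares with publicly verifiable secret sharing, as RandHound does, removes that option.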

176 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied a system of N particles with logarithmic, Coulomb or Riesz pairwise interactions, confined by an external potential, and proved a large deviation principle at speed N. They deduced a variational property of the sine-beta processes which arise in random matrix theory.
Abstract: We study a system of N particles with logarithmic, Coulomb or Riesz pairwise interactions, confined by an external potential. We examine a microscopic quantity, the tagged empirical field, for which we prove a large deviation principle at speed N. The rate function is the sum of an entropy term, the specific relative entropy, and an energy term, the renormalized energy introduced in previous works, coupled by the temperature. We deduce a variational property of the sine-beta processes which arise in random matrix theory. We also give a next-to-leading order expansion of the free energy of the system, proving the existence of the thermodynamic limit.

144 citations


Journal ArticleDOI
TL;DR: It is proved that one can certify any amount of random bits from a pair of qubits in a pure state as the resource, even if it is arbitrarily weakly entangled.
Abstract: Unpredictability, or randomness, of the outcomes of measurements made on an entangled state can be certified provided that the statistics violate a Bell inequality. In the standard Bell scenario where each party performs a single measurement on its share of the system, only a finite amount of randomness, of at most 4 log d bits, can be certified from a pair of entangled particles of dimension d. Our work shows that this fundamental limitation can be overcome using sequences of (nonprojective) measurements on the same system. More precisely, we prove that one can certify any amount of random bits from a pair of qubits in a pure state as the resource, even if it is arbitrarily weakly entangled. In addition, this certification is achieved by near-maximal violation of a particular Bell inequality for each measurement in the sequence.
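The Bell violation underlying such certification is easy to reproduce numerically. The sketch below (standard textbook material, not the paper's sequential protocol) evaluates the CHSH combination on a singlet state at the optimal measurement angles, reaching Tsirelson's bound 2√2 > 2.

```python
import numpy as np

# Pauli operators and the two-qubit singlet state (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def obs(theta):
    """Spin measurement at angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def corr(a, b):
    """Correlator E(a,b) = <psi| A(a) (x) B(b) |psi> for the singlet."""
    M = np.kron(obs(a), obs(b))
    return np.real(singlet.conj() @ M @ singlet)

# Optimal CHSH angles
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4
S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)  # |S| = 2*sqrt(2)
```

Any |S| > 2 rules out a local deterministic model, which is what lets the outcomes be certified as random.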

107 citations


Journal ArticleDOI
TL;DR: In this paper, the authors exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment for the generation of randomness that cannot be predicted within any physical theory.
Abstract: Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment for the generation of randomness that cannot be predicted within any physical theory that allows one to make independent measurement choices and prohibits superluminal signaling. To certify and quantify the randomness, we describe a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtained 256 new random bits, uniform to within 0.001.
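The paper's extractor is a seeded construction tailored to Bell-test data; as a much simpler illustration of turning imperfect randomness into uniform bits, here is the classic von Neumann debiasing trick.

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: map pairs 01 -> 0, 10 -> 1, drop 00 and 11.
    Assumes independent, identically biased input bits."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([0, 1, 1, 0, 0, 0, 1, 1, 1, 0]))  # -> [0, 1, 1]
```

Because the pairs 01 and 10 are equally likely for independent biased bits, the surviving bits are exactly uniform, at the cost of discarding most of the input.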

106 citations


Journal ArticleDOI
TL;DR: In this paper, the effect of quenched disorder on spin-1/2 quantum magnets was analyzed and a theory for 2d valence-bond solids subject to weak bond randomness, as well as extensions to stronger disorder regimes where we make connections with quantum spin liquids was proposed.
Abstract: We analyze the effect of quenched disorder on spin-1/2 quantum magnets in which magnetic frustration promotes the formation of local singlets. Our results include a theory for 2d valence-bond solids subject to weak bond randomness, as well as extensions to stronger disorder regimes where we make connections with quantum spin liquids. We find, on various lattices, that the destruction of a valence-bond solid phase by weak quenched disorder leads inevitably to the nucleation of topological defects carrying spin-1/2 moments. This renormalizes the lattice into a strongly random spin network with interesting low-energy excitations. Similarly, when short-ranged valence bonds would be pinned by stronger disorder, we find that this putative glass is unstable to defects that carry spin-1/2 magnetic moments, and whose residual interactions decide the ultimate low-energy fate. Motivated by these results we conjecture Lieb-Schultz-Mattis-like restrictions on ground states for disordered magnets with spin-1/2 per statistical unit cell. These conjectures are supported by an argument for 1d spin chains. We apply insights from this study to the phenomenology of YbMgGaO4, a recently discovered triangular-lattice spin-1/2 insulator which was proposed to be a quantum spin liquid. We instead explore a description based on the present theory. Experimental signatures, including unusual specific heat, thermal conductivity, and dynamical structure factor, and their behavior in a magnetic field, are predicted from the theory, and compare favorably with existing measurements on YbMgGaO4 and related materials.

103 citations


Journal ArticleDOI
TL;DR: In this article, a semi-device-independent quantum random number generator based on unambiguous quantum state discrimination is proposed, where two nonorthogonal quantum states can be prepared and a measurement device aims at unambiguously discriminating between them.
Abstract: An approach to quantum random number generation based on unambiguous quantum state discrimination is developed. We consider a prepare-and-measure protocol, where two nonorthogonal quantum states can be prepared, and a measurement device aims at unambiguously discriminating between them. Because the states are nonorthogonal, this necessarily leads to a minimal rate of inconclusive events whose occurrence must be genuinely random and which provide the randomness source that we exploit. Our protocol is semi-device-independent in the sense that the output entropy can be lower bounded based on experimental data and a few general assumptions about the setup alone. It is also practically relevant, which we demonstrate by realizing a simple optical implementation, achieving rates of 16.5 Mbits/s. Combining ease of implementation, a high rate, and a real-time entropy estimation, our protocol represents a promising approach intermediate between fully device-independent protocols and commercial quantum random number generators.
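The inconclusive rate that serves as the randomness source has a closed form: for equal priors, unambiguous discrimination of two pure states must fail with probability at least |⟨ψ0|ψ1⟩| (the Ivanovic-Dieks-Peres bound). A quick numerical check, independent of the paper's optical implementation:

```python
import numpy as np

def usd_inconclusive_rate(theta):
    """Minimal inconclusive probability for equal-prior unambiguous
    discrimination of two real pure states separated by angle theta:
    equals the overlap |<psi0|psi1>| = cos(theta)."""
    psi0 = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    psi1 = np.array([np.cos(theta / 2), -np.sin(theta / 2)])
    return abs(psi0 @ psi1)

rate = usd_inconclusive_rate(np.pi / 3)  # overlap cos(60 deg) = 0.5
```

The closer to orthogonal the two prepared states are, the lower the inconclusive rate, so the protocol trades randomness generation rate against state distinguishability.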

Journal ArticleDOI
TL;DR: In this paper, the authors examined the necessity of 3D analysis when dealing with slope with full randomness in soil properties and showed that two-dimensional (2D) plane strain analysis based on the most pessimistic cross-section generally provides a more conservative result than the corresponding full-3D analysis.
Abstract: A long slope consisting of spatially random soils is a common geographical feature. This paper examined the necessity of three-dimensional (3D) analysis when dealing with slopes with full randomness in soil properties. Although 3D random finite element analysis can well reflect the spatial variability of soil properties, it is often time-consuming for probabilistic stability analysis. For this reason, we also examined the least advantageous (or most pessimistic) cross-section of the studied slope. The concept of “most pessimistic” refers to the minimal cross-sectional average of undrained shear strength. The selection of the most pessimistic section is achievable by simulating the undrained shear strength as a 3D random field. Random finite element analysis results suggest that two-dimensional (2D) plane strain analysis based on the most pessimistic cross-section generally provides a more conservative result than the corresponding full 3D analysis. The level of conservativeness is around 15% on average. This result may have engineering implications for slope design, where computationally tractable 2D analyses based on the procedure proposed in this study could ensure conservative results.
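The selection step can be sketched as follows. The field below is an uncorrelated lognormal stand-in for a 3D random field of undrained shear strength (a real analysis would impose a spatial correlation structure and then run 2D finite element analysis on the selected section); the hypothetical grid axes and parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3D random field of undrained shear strength (kPa) on a grid:
# axis 0 runs along the slope; each slice [i, :, :] is one cross-section.
field = rng.lognormal(mean=np.log(40.0), sigma=0.3, size=(50, 20, 20))

section_means = field.mean(axis=(1, 2))  # cross-sectional averages
worst = int(np.argmin(section_means))    # index of the most pessimistic section
```

The slice `field[worst]` would then feed the 2D plane strain model in place of the full 3D analysis.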

Journal ArticleDOI
TL;DR: The typicality is considered a fundamental quantity in pattern analysis, derived directly from data and stated in a discrete form, in contrast to the traditional approach where a continuous pdf is assumed a priori and estimated from data afterward.
Abstract: In this paper, we propose an approach to data analysis, which is based entirely on the empirical observations of discrete data samples and the relative proximity of these points in the data space. At the core of the proposed new approach is the typicality—an empirically derived quantity that resembles probability. This nonparametric measure is a normalized form of the square centrality (centrality is a measure of closeness used in graph theory). It is also closely linked to the cumulative proximity and eccentricity (a measure of the tail of the distributions that is very useful for anomaly detection and analysis of extreme values). In this paper, we introduce and study two types of typicality, namely its local and global versions. The local typicality resembles the well-known probability density function (pdf), probability mass function, and fuzzy set membership but differs from all of them. The global typicality, on the other hand, resembles well-known histograms but also differs from them. A distinctive feature of the proposed new approach, empirical data analysis (EDA), is that it is not limited by restrictive impractical prior assumptions about the data generation model as the traditional probability theory and statistical learning approaches are. Moreover, it does not require an explicit and binary assumption of either randomness or determinism of the empirically observed data, their independence, or even their number (it can be as low as a couple of data samples). The typicality is considered as a fundamental quantity in the pattern analysis, which is derived directly from data and is stated in a discrete form in contrast to the traditional approach where a continuous pdf is assumed a priori and estimated from data afterward. The typicality introduced in this paper is free from the paradoxes of the pdf. Typicality is objectivist while the fuzzy sets and the belief-based branch of the probability theory are subjectivist. 
The local typicality is expressed in a closed analytical form and can be calculated recursively, and thus computationally very efficiently. The other nonparametric ensemble properties of the data introduced and studied in this paper, namely the square centrality, cumulative proximity, and eccentricity, can also be updated recursively for various types of distance metrics. Finally, a new type of classifier, called the naive typicality-based EDA class, is introduced, which is based on the newly introduced global typicality. This is only one of a wide range of possible applications of EDA, including but not limited to anomaly detection, clustering, classification, control, prediction, and rare-event analysis, which will be the subject of further research.
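One plausible reading of "a normalized form of the square centrality" is sketched below; the exact normalization and distance metric used in the paper may differ. The resulting typicalities sum to one, and outliers (points with large cumulative proximity) receive low typicality, which is what makes the quantity useful for anomaly detection.

```python
import numpy as np

def typicality(X):
    """Typicality as a normalized inverse of cumulative squared proximity.
    (An assumed normalization for illustration, not the paper's exact formula.)"""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    pi = d2.sum(axis=1)          # cumulative proximity of each point
    cent = 1.0 / pi              # centrality: close-to-everything points score high
    return cent / cent.sum()     # normalize so the values sum to 1

X = np.array([[0.0], [0.1], [0.2], [5.0]])  # three clustered points and one outlier
tau = typicality(X)
```

Because every term is an empirical distance, nothing here presumes a data-generation model, mirroring the paper's point about avoiding prior pdf assumptions.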

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration-exploitation dilemma.
Abstract: Reinforcement learning involves decision making in dynamic and uncertain environments and constitutes an important element of artificial intelligence (AI). In this work, we experimentally demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration–exploitation dilemma. To solve the MAB, a certain degree of randomness is required for exploration purposes. However, pseudorandom numbers generated using conventional electronic circuitry encounter severe limitations in terms of their data rate and the quality of randomness due to their algorithmic foundations. We generate laser chaos signals using a semiconductor laser sampled at a maximum rate of 100 GSample/s, and combine it with a simple decision-making principle called tug of war with a variable threshold, to ensure ultrafast, adaptive, and accurate decision making at a maximum adaptation speed of 1 GHz. We found that decision-making performance was maximized with an optimal sampling interval, and we highlight the exact coincidence between the negative autocorrelation inherent in laser chaos and decision-making performance. This study paves the way for a new realm of ultrafast photonics in the age of AI, where the ultrahigh bandwidth of light wave can provide new value.
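A minimal software analogue of the threshold-based tug-of-war decision rule, with a uniform pseudorandom signal standing in for the sampled laser-chaos signal (the paper's point is precisely that a physical chaotic source does this faster and with better randomness); the update rule and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
p = [0.8, 0.4]   # hidden reward probabilities of the two slot machines
tw = 0.0         # tug-of-war threshold
alpha = 0.1
picks = []

for _ in range(2000):
    s = rng.uniform(-1, 1)        # stand-in for one chaos-signal sample
    arm = 0 if s >= tw else 1     # threshold decision on the signal
    reward = rng.random() < p[arm]
    # move the threshold so the arm that just paid off gets picked more often
    good_for_0 = (reward and arm == 0) or (not reward and arm == 1)
    tw = float(np.clip(tw + (-alpha if good_for_0 else alpha), -0.9, 0.9))
    picks.append(arm)

frac_best = picks.count(0) / len(picks)  # fraction of plays on the better arm
```

Clipping the threshold keeps a little exploration alive; the decision itself needs no arithmetic on reward estimates, only a comparison per signal sample, which is what makes GHz-rate operation plausible with a fast physical source.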

Journal ArticleDOI
TL;DR: The continuous Chen chaotic system is used to perturb both the inputs and parameters of Chebyshev map to minimize the chaotic degradation phenomenon under finite precision.
Abstract: The chaotic map has complex dynamics under ideal conditions; however, it suffers from performance degradation under finite computing precision. To prevent this dynamical degradation, in this paper the continuous Chen chaotic system is used to perturb both the inputs and parameters of the Chebyshev map, minimizing the chaotic degradation phenomenon under finite precision. Experimental evaluations and the corresponding performance analysis demonstrate that the Chebyshev chaotic map has good randomness and complex dynamic performance when using the proposed perturbation method, and some attributes of the proposed system are stronger than those of the original system (e.g. chaos attractor and approximate entropy). Finally, the corresponding pseudorandom number generator (PRNG) is constructed by this method and its randomness is evaluated via the NIST SP800-22 and TestU01 test suites, respectively. Statistical test results show that the proposed PRNG has highly reliable randomness, and thus can be used for cryptography and other potential applications.
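The perturbation idea can be sketched in a few lines. Here a logistic map stands in for the continuous Chen system as the perturbation source (an assumption made purely to keep the sketch short; the paper integrates the Chen system), and one bit per iterate is taken from the sign of the state.

```python
import numpy as np

def chebyshev_prng(x0, k=6, n=1000):
    """Chebyshev map x_{t+1} = cos(k * arccos(x_t)) on [-1, 1], with a tiny
    state perturbation each step to fight finite-precision degradation."""
    x, y = x0, 0.37
    bits = []
    for _ in range(n):
        y = 3.99 * y * (1 - y)                            # perturbation source
        x = np.cos(k * np.arccos(np.clip(x, -1.0, 1.0)))  # Chebyshev map step
        x = float(np.clip(x + 1e-6 * (y - 0.5), -1.0, 1.0))  # input perturbation
        bits.append(1 if x >= 0 else 0)
    return bits

bits = chebyshev_prng(0.3)
```

The perturbation is small enough not to disturb the map's invariant density, but it keeps a finite-precision orbit from collapsing onto a short cycle.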

Journal ArticleDOI
TL;DR: A new uncertainty principle for the Schatten norm, based on a uniform convexity inequality, gives the first security proof for randomness expansion based on Kochen-Specker inequalities.
Abstract: Colbeck [Ph.D. thesis, 2006] proposed using Bell inequality violations to generate certified random numbers. While full quantum-security proofs have been given, it remains a major open problem to identify the broadest class of Bell inequalities and lowest performance requirements to achieve such security. In this paper, working within the broad class of spot-checking protocols, we prove exactly which Bell inequality violations can be used to achieve full security. Our result greatly improves the known noise tolerance for secure randomness expansion: for the commonly used CHSH game, full security was only known with a noise tolerance of 1.5% [Miller and Shi, J. ACM, 63 (2016), 33], and we improve this to 10.3%. We also generalize our results beyond Bell inequalities and give the first security proof for randomness expansion based on Kochen-Specker inequalities. The central technical contribution of the paper is a new uncertainty principle for the Schatten norm, which is based on a uniform convexity inequality.

Journal ArticleDOI
TL;DR: Experiments on two large public RS image data sets have shown that the partial randomness hashing method outperforms the state of the art in terms of both learning efficiency and retrieval accuracy.
Abstract: With the rapid progress of satellite and aerial vehicle technologies, large-scale remote sensing (RS) image retrieval has recently become an important research issue in geosciences. Hashing-based searching approaches have been widely employed in content-based image retrieval tasks. However, most hash schemes compromise between learning efficiency and retrieval accuracy, and can thus barely satisfy the precise requirements in RS data analysis. To address these shortcomings, we introduce a partial randomness scheme for learning hash functions, which is referred to as partial randomness hashing (PRH). Specifically, for constructing hash functions, part of the model parameter values are randomly generated and the remaining ones are trained based on RS images. The randomness enables efficient hash function construction, and the trained model parameters encode characteristics from RS images. The interplay between random and trained model parameters results in a learning scheme for constructing hash functions that is both efficient and effective. Experiments on two large public RS image data sets have shown that our PRH method outperforms the state of the art in terms of both learning efficiency and retrieval accuracy.
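The partial-randomness idea, fixing part of the hash projection at random and learning the rest, can be sketched with sign-threshold projections. The "trained" half below is left random as a placeholder, since the training objective on RS images is beyond this sketch; all dimensions and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_bits = 64, 16

W_random = rng.standard_normal((n_bits // 2, d))   # fixed random half
W_trained = rng.standard_normal((n_bits // 2, d))  # placeholder for the learned half
W = np.vstack([W_random, W_trained])

def hash_code(x):
    """Sign-threshold binary code from the partially random projection."""
    return (W @ x > 0).astype(np.uint8)

x = rng.standard_normal(d)                            # a feature vector
code = hash_code(x)
near = hash_code(x + 0.01 * rng.standard_normal(d))   # a slightly perturbed input
```

Random sign projections already preserve locality (nearby features get nearby codes), which is why only the remaining half needs training, cutting learning cost roughly in half.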

Journal ArticleDOI
TL;DR: In this article, the elastic properties of mechanical metamaterials are direct functions of their topological designs, and rational design approaches based on computational models could be used to devise topological designs that result in the desired properties.
Abstract: The elastic properties of mechanical metamaterials are direct functions of their topological designs. Rational design approaches based on computational models could, therefore, be used to devise topological designs that result in the desired properties. It is of particular importance to independently tailor the elastic modulus and Poisson's ratio of metamaterials. Here, we present patterned randomness as a strategy for independent tailoring of both properties. Soft mechanical metamaterials incorporating various types of patterned randomness were fabricated using an indirect additive manufacturing technique and mechanically tested. Computational models were also developed to predict the topology-property relationship in a wide range of proposed topologies. The results of this study show that patterned randomness allows for independent tailoring of the elastic properties and covering a broad area of the elastic modulus-Poisson's ratio plane. The uniform and homogeneous topologies constitute the boundaries of the covered area, while topological designs with patterned randomness fill the enclosed area.

Journal ArticleDOI
TL;DR: In this paper, a hydrodynamical description of the scrambling of quantum information in closed many-body systems, as measured by out-of-time-ordered correlation functions (OTOCs), has been proposed.
Abstract: The scrambling of quantum information in closed many-body systems, as measured by out-of-time-ordered correlation functions (OTOCs), has lately received considerable attention. Recently, a hydrodynamical description of OTOCs has emerged from considering random local circuits, aspects of which are conjectured to be universal to ergodic many-body systems, even without randomness. Here we extend this approach to systems with locally conserved quantities (e.g., energy). We do this by considering local random unitary circuits with a conserved U(1) charge and argue, with numerical and analytical evidence, that the presence of a conservation law slows relaxation in both time-ordered and out-of-time-ordered correlation functions; both can have a diffusively relaxing component or "hydrodynamic tail" at late times. We verify the presence of such tails also in a deterministic, periodically driven system. We show that for OTOCs, the combination of diffusive and ballistic components leads to a wave front with a specific, asymmetric shape, decaying as a power law behind the front. These results also explain existing numerical investigations in non-noisy ergodic systems with energy conservation. Moreover, we consider OTOCs in Gibbs states, parametrized by a chemical potential μ, and apply perturbative arguments to show that for μ ≫ 1 the ballistic front of information spreading can only develop at times exponentially large in μ, with the information traveling diffusively at earlier times. We also develop a new formalism for describing OTOCs and operator spreading, which allows us to interpret the saturation of OTOCs as a form of thermalization on the Hilbert space of operators.

Journal ArticleDOI
TL;DR: A physical broadband white chaos, generated by optical heterodyning of two ECLs, is proposed as an entropy source for constructing high-speed random bit generation (RBG) with minimal post-processing.
Abstract: Chaotic external-cavity semiconductor lasers (ECLs) are a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using a physical broadband white chaos, generated by optical heterodyning of two ECLs, as the entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance, but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectrum efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
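The quantize-and-keep-LSBs step is simple to emulate. Gaussian noise stands in for the heterodyne chaos waveform here (an assumption for the sketch); the point is that the low-order ADC bits wrap many times around the signal's amplitude distribution and therefore come out nearly uniform.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the analog chaos waveform: 1000 samples of a noisy signal.
signal = rng.normal(0.0, 0.2, size=1000)

# 8-bit ADC over a fixed full-scale range of [-1, 1] volts
lo, hi = -1.0, 1.0
codes = np.clip(((signal - lo) / (hi - lo) * 255).astype(int), 0, 255)

# Keep only the 4 least significant bits of each sample -> 4 bits per sample
lsb4 = codes & 0b1111
bits = ((lsb4[:, None] >> np.arange(4)) & 1).ravel()
```

With 4 bits kept per sample, the bit rate is four times the sampling rate, which is how the paper reaches 320 Gbps from 80-GHz sampling.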

Journal ArticleDOI
TL;DR: In this paper, the ground-state and finite-temperature properties of the bond-random s = 1/2 Heisenberg model on a honeycomb lattice with frustrated nearest-and next-nearest-neighbor antiferromagnetic interactions, J1 and J2, by the exact diagonalization and the Hams-de Raedt methods were investigated.
Abstract: We investigate the ground-state and finite-temperature properties of the bond-random s = 1/2 Heisenberg model on a honeycomb lattice with frustrated nearest- and next-nearest-neighbor antiferromagnetic interactions, J1 and J2, by the exact diagonalization and the Hams–de Raedt methods. The ground-state phase diagram of the model is constructed in the randomness versus the frustration (J2/J1) plane, with the aim of clarifying the effects of randomness and frustration in stabilizing a variety of distinct phases. We find that the randomness induces the gapless quantum spin liquid (QSL)-like state, the random-singlet state, in a wide range of parameter space. The observed robustness of the random-singlet state suggests that the gapless QSL-like behaviors might be realized in a wide class of frustrated quantum magnets possessing a certain amount of randomness or inhomogeneity, without fine-tuning the interaction parameters. Possible implications to recent experiments on the honeycomb-lattice magnets Ba3CuSb2O9...

Journal ArticleDOI
TL;DR: In this paper, an interacting many-body system can blend classical randomness through its dynamics to create quantum randomness, which plays a pivotal role in quantum information applications (such as encryption).
Abstract: Random number generation plays a pivotal role in quantum information applications (such as encryption), but generating random quantum operations requires exceptionally complex resources. A new theoretical analysis shows that an interacting many-body system can blend classical randomness through its dynamics to create quantum randomness.

Journal ArticleDOI
TL;DR: This note discusses the modeling and control of a class of Itô stochastic networked control systems (NCSs) with packet dropouts that are subject to time-varying sampling, and guarantees robust exponential mean-square stability of the system.
Abstract: In this note, we discuss the modeling and control of a class of Itô stochastic networked control systems (NCSs) with packet dropouts that are subject to time-varying sampling. The system under consideration is first modeled as a continuous-time system with input delay that is subject to double randomness via the input-delay approach. Then, by assuming a known packet drop rate and resorting to the celebrated formula of total probability, an equivalent model is established which takes full advantage of the probability distribution characteristics of both packet dropouts and sampling periods. In particular, the probability distribution values of the stochastic delay taking values in two given intervals can be explicitly obtained, which is of crucial importance for modeling and analyzing the actual problem in NCSs. Based on that, robust exponential mean-square stability of the system with H∞ performance is guaranteed, and an H∞ controller design procedure is then proposed. Finally, a numerical simulation example is exploited to show the effectiveness and applicability of the derived results, and some less conservative results are obtained.

Journal ArticleDOI
TL;DR: In this article, a stochastic, continuous state and time opinion model where each agent's opinion locally interacts with other agents' opinions in the system, and there is also exogenous randomness is considered.
Abstract: We consider a stochastic, continuous state and time opinion model in which each agent’s opinion interacts locally with the opinions of other agents in the system, and there is also exogenous randomness. The interaction tends to create clusters of common opinion. Using linear stability analysis of the associated nonlinear Fokker–Planck equation that governs the empirical density of opinions in the limit of infinitely many agents, we estimate the number of clusters, the time to cluster formation, and the critical noise strength below which clusters form. We also discuss the cluster dynamics after formation, as well as the width and the effective diffusivity of the clusters. Finally, the long-term behavior of clusters is explored numerically. Extensive numerical simulations confirm our analytical findings.
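The clustering mechanism can be illustrated with a minimal Euler–Maruyama simulation. The bounded-confidence interaction kernel, the parameter values, and the agent count below are invented for illustration; the paper's setting is more general:

```python
import math
import random

def simulate_opinions(n=50, steps=2000, dt=0.01, sigma=0.05, radius=0.5, seed=0):
    """Euler-Maruyama sketch of a noisy, locally interacting opinion model:
    agents attract each other within `radius`, and `sigma` is the strength
    of the exogenous randomness. Illustrative stand-in for the paper's model."""
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        # local mean-field attraction toward nearby opinions
        drift = [sum(xj - xi for xj in x if abs(xj - xi) < radius) / n
                 for xi in x]
        x = [xi + di * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for xi, di in zip(x, drift)]
    return x

def spread(x):
    """Empirical variance of the opinions (small once clusters collapse)."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)
```

With global interaction and no noise (`radius` larger than the opinion range, `sigma=0`), all opinions contract exponentially to their mean, which gives a quick sanity check of the drift term.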

Journal ArticleDOI
TL;DR: The proposed method uses a discrete chaotic map based on the composition of permutations, which has a virtually unlimited key space and can generate as many distinct pseudo-random sequences as other secure discrete-space chaotic methods, but with significantly lower memory requirements.
Abstract: A new method for obtaining pseudo-random numbers, based on a discrete-space chaotic map, is presented. The proposed method uses a discrete chaotic map based on the composition of permutations. The randomness of the pseudo-random sequences generated by the proposed method is verified using the NIST 800-22 test suite and TestU01. The proposed method is not affected by dynamical degradation, so the generation of pseudo-random numbers is not influenced by approximations of any kind. Its advantage is a virtually unlimited key space and the ability to generate as many distinct pseudo-random sequences as other secure discrete-space chaotic methods, but with significantly lower memory requirements. Higher levels of security and long cycle lengths can also be achieved. The small memory footprint could make the proposed PRNG applicable in devices with limited memory space.
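To make the composition-of-permutations idea concrete, here is a toy Python sketch of a generator whose state is a permutation updated by composition. The generator permutations, the data-dependent selection rule, and the output map are all invented for illustration; this is not the paper's (secure) construction:

```python
def compose(p, q):
    """Composition of permutations given as tuples: (p o q)[i] = p[q[i]]."""
    return tuple(p[i] for i in q)

class PermPRNG:
    """Toy pseudo-random generator driven by composition of permutations.
    The state stays a permutation forever, since the symmetric group is
    closed under composition. Illustrative sketch only."""

    def __init__(self, key_perm, gen_a, gen_b):
        self.state = tuple(key_perm)           # the key is the initial permutation
        self.gens = (tuple(gen_a), tuple(gen_b))

    def next_byte(self):
        # data-dependent walk: the current state selects the next generator
        g = self.gens[self.state[0] % 2]
        self.state = compose(self.state, g)
        # extract an output symbol from the state (arbitrary output map)
        return sum(i * v for i, v in enumerate(self.state)) % 256
```

Because composition never leaves the symmetric group, no real-valued approximation enters the iteration, which is one way to see why such discrete-space maps avoid the dynamical degradation mentioned above.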

Journal ArticleDOI
TL;DR: An ultra-fast physical random number generator is demonstrated, utilizing a broadband chaotic source based on a photonic integrated device together with a simple post-processing method; the generated bits pass the NIST randomness tests, indicating their suitability for practical applications.
Abstract: An ultra-fast physical random number generator is demonstrated, utilizing a broadband chaotic source based on a photonic integrated device together with a simple post-processing method. The compact chaotic source is implemented using a monolithic integrated dual-mode amplified feedback laser (AFL) with self-injection, which generates a robust chaotic signal with RF frequency coverage above 50 GHz and flatness of ±3.6 dB. By retaining the 4 least significant bits (LSBs) of the 8-bit digitization of the chaotic waveform, random sequences with a bit rate of up to 640 Gbit/s (160 GS/s × 4 bits) are realized. The generated random bits pass all fifteen NIST statistical tests (NIST SP800-22), confirming their randomness for practical applications.
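The 4-LSB post-processing step is simple enough to sketch: keep the low nibble of each 8-bit sample and emit its bits, so 160 GS/s of samples yields 4 bits per sample, i.e. 640 Gbit/s. A minimal Python version (the MSB-first bit ordering is an arbitrary choice here):

```python
def retain_lsbs(samples, n_bits=4):
    """Keep the n_bits least significant bits of each 8-bit sample and
    unpack them into a flat bit list (n_bits random bits per sample)."""
    mask = (1 << n_bits) - 1
    bits = []
    for s in samples:
        v = s & mask  # discard the slowly varying high-order bits
        bits.extend((v >> i) & 1 for i in reversed(range(n_bits)))
    return bits
```

For example, the sample `0b10110101` keeps the nibble `0101` and contributes the bits `[0, 1, 0, 1]`.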

Journal ArticleDOI
TL;DR: In this article, the effect of randomness on the convergence of numerical methods is studied in a general setting, with a regularity result that does not depend on the specific form of the collision term, the probability distribution of the random variables, or the regime the system is in, and is therefore termed "uniform".
Abstract: In this paper we study the effect of randomness in kinetic equations that preserve mass. Our focus is on proving the analyticity of the solution with respect to the randomness, which naturally leads to the convergence of numerical methods. The analysis is carried out in a general setting, with the regularity result not depending on the specific form of the collision term, the probability distribution of the random variables, or the regime the system is in, and is thereby termed "uniform." Applications include the linear Boltzmann equation, the Bhatnagar--Gross--Krook (BGK) model, and the Carleman model, among many others, and the results hold true in the kinetic, parabolic, and high-field regimes. The proof relies on explicit expressions for the high-order derivatives of the solution in the random space, and the convergence in time is mainly based on hypocoercivity, which, despite its popularity in the PDE analysis of kinetic theory, has rarely been used for numerical algorithms.

Journal ArticleDOI
TL;DR: In this article, a stochastic isogeometric analysis for free vibration of functionally graded plates with spatially varying random material properties is presented, with attention restricted to the first and second moments of the eigenvalues.

Posted Content
TL;DR: In this article, the authors demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration-exploitation dilemma.
Abstract: Reinforcement learning involves decision making in dynamic and uncertain environments, and constitutes one important element of artificial intelligence (AI). In this paper, we experimentally demonstrate that the ultrafast chaotic oscillatory dynamics of lasers efficiently solve the multi-armed bandit problem (MAB), which requires decision making concerning a class of difficult trade-offs called the exploration-exploitation dilemma. To solve the MAB, a certain degree of randomness is required for exploration purposes. However, pseudo-random numbers generated using conventional electronic circuitry encounter severe limitations in terms of their data rate and the quality of randomness due to their algorithmic foundations. We generate laser chaos signals using a semiconductor laser sampled at a maximum rate of 100 GSample/s, and combine them with a simple decision-making principle called tug-of-war with a variable threshold, to ensure ultrafast, adaptive and accurate decision making at a maximum adaptation speed of 1 GHz. We found that decision-making performance was maximized with an optimal sampling interval, and we highlight the exact coincidence between the negative autocorrelation inherent in laser chaos and decision-making performance. This study paves the way for a new realm of ultrafast photonics in the age of AI, where the ultrahigh bandwidth of photons can provide new value.
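The tug-of-war principle can be sketched in a few lines: compare a fast fluctuating signal against an adaptive displacement that accumulates reward evidence. The Python below uses a pseudo-random signal as a stand-in for laser-chaos samples, with made-up reward probabilities and update constants:

```python
import random

def tug_of_war_bandit(p_arms=(0.3, 0.7), steps=2000, alpha=0.99, step_size=0.1,
                      seed=7):
    """Sketch of tug-of-war decision making for a 2-armed bandit.
    A fluctuating signal (pseudo-random here, standing in for a laser-chaos
    sample) is added to an adaptive displacement `tw`; the sign of the sum
    selects the arm. Parameters are hypothetical, illustrative only."""
    rng = random.Random(seed)
    tw = 0.0                 # tug-of-war displacement: positive favours arm 1
    pulls = [0, 0]
    wins = 0
    for _ in range(steps):
        signal = rng.uniform(-1.0, 1.0)    # chaos-sample surrogate (exploration)
        arm = 1 if signal + tw > 0 else 0  # threshold decision
        reward = rng.random() < p_arms[arm]
        pulls[arm] += 1
        wins += reward
        # shift the displacement toward the arm the outcome favours,
        # while geometrically forgetting old evidence
        tw = alpha * tw + (step_size if reward == (arm == 1) else -step_size)
    return pulls, wins / steps
```

The randomness of the signal drives exploration early on, while the accumulated displacement `tw` gradually locks the decision onto the better arm, which is the exploration-exploitation trade-off the abstract describes.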

Posted Content
TL;DR: In this article, the authors consider approaches motivated by machine learning algorithms as a means of constructing a benchmark for the best attainable level of prediction and illustrate their methods on the task of predicting human-generated random sequences.
Abstract: When we test a theory using data, it is common to focus on correctness: do the predictions of the theory match what we see in the data? But we also care about completeness: how much of the predictable variation in the data is captured by the theory? This question is difficult to answer, because in general we do not know how much "predictable variation" there is in the problem. In this paper, we consider approaches motivated by machine learning algorithms as a means of constructing a benchmark for the best attainable level of prediction. We illustrate our methods on the task of predicting human-generated random sequences. Relative to an atheoretical machine learning algorithm benchmark, we find that existing behavioral models explain roughly 15 percent of the predictable variation in this problem. This fraction is robust across several variations on the problem. We also consider a version of this approach for analyzing field data from domains in which human perception and generation of randomness has been used as a conceptual framework; these include sequential decision-making and repeated zero-sum games. In these domains, our framework for testing the completeness of theories provides a way of assessing their effectiveness over different contexts; we find that despite some differences, the existing theories are fairly stable across our field domains in their performance relative to the benchmark. Overall, our results indicate that (i) there is a significant amount of structure in this problem that existing models have yet to capture and (ii) there are rich domains in which machine learning may provide a viable approach to testing completeness.
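The benchmarking idea above can be sketched concretely: fit an atheoretical lookup-table predictor to the sequences, then report a theory's accuracy as a fraction of the gap between a naive baseline and that benchmark. The Python below is a minimal illustration with invented details (k-gram contexts, in-sample accuracy), not the authors' procedure:

```python
from collections import Counter, defaultdict

def table_benchmark(seqs, k=3):
    """Fit a k-gram lookup table (a stand-in for the 'atheoretical machine
    learning benchmark') on next-symbol prediction and return its
    in-sample accuracy. Illustrative sketch only."""
    table = defaultdict(Counter)
    pairs = []
    for s in seqs:
        for i in range(k, len(s)):
            ctx, nxt = s[i - k:i], s[i]
            table[ctx][nxt] += 1
            pairs.append((ctx, nxt))
    # predict the most frequent continuation of each context
    hits = sum(table[ctx].most_common(1)[0][0] == nxt for ctx, nxt in pairs)
    return hits / len(pairs)

def completeness(theory_acc, naive_acc, bench_acc):
    """Share of the predictable variation captured by a theory:
    0 = no better than the naive baseline, 1 = matches the benchmark."""
    return (theory_acc - naive_acc) / (bench_acc - naive_acc)
```

For instance, a theory at 60% accuracy, a naive baseline at 50%, and a benchmark at 100% would give a completeness of 0.2, i.e. the theory captures one fifth of the predictable variation.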