
Showing papers on "Randomness published in 2019"


01 Jan 2019
TL;DR: The book presents a thorough treatment of the central ideas of Kolmogorov complexity and their applications, with a wide range of illustrative examples, and will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics.
Abstract: "The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field." (Journal of Symbolic Logic) Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas of Kolmogorov complexity and their applications. The book presents a thorough treatment of the subject with a wide range of illustrative applications, including the randomness of finite objects and infinite sequences, Martin-Löf tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it includes the necessary background from mathematics and computer science. Also included are numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, Kolmogorov-Loveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information, and others.

3,361 citations


Book ChapterDOI
04 Oct 2019
TL;DR: A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented that has applications in cryptography, random constructions, and complexity theory.
Abstract: A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented. This generator is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, into polynomial-time computable functions f_r: {1, ..., 2^k} → {1, ..., 2^k}. These f_r's cannot be distinguished from random functions by any probabilistic polynomial-time algorithm that asks for and receives the value of a function at arguments of its choice. The result has applications in cryptography, random constructions, and complexity theory. Categories and Subject Descriptors: F.0 [Theory of Computation]: General; F.1.1 [Computation by Abstract Devices]: Models of Computation - computability theory; G.0 [Mathematics of Computing]: General; G.3 [Mathematics of Computing]: Probability and Statistics - probabilistic algorithms; random number generation
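The generator described here is the classic Goldreich-Goldwasser-Micali (GGM) tree construction: a length-doubling pseudorandom generator is applied along the path selected by the input bits. A minimal sketch in Python, using SHA-256 as an illustrative stand-in for a provably secure PRG (the names `prg` and `ggm_prf` are ours, not the paper's):

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG stand-in: expand a 32-byte seed into two 32-byte
    halves. SHA-256 here is an illustrative substitute for a real PRG."""
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def ggm_prf(key: bytes, x: int, bits: int = 16) -> bytes:
    """Evaluate the GGM pseudorandom function f_key at input x by walking a
    binary tree: each input bit selects the left or right PRG half."""
    node = key
    for i in reversed(range(bits)):
        left, right = prg(node)
        node = right if (x >> i) & 1 else left
    return node

# Distinct inputs under the same key yield independent-looking outputs.
out1 = ggm_prf(b"k" * 32, 5)
out2 = ggm_prf(b"k" * 32, 6)
assert out1 != out2 and len(out1) == 32
```

The security argument reduces distinguishing the tree's leaves from random to distinguishing the PRG's output from random, one tree level at a time.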

1,679 citations


Book ChapterDOI
04 Oct 2019
TL;DR: A general algorithmic scheme for constructing polynomial-time deterministic algorithms that stretch a short secret random input into a long sequence of unpredictable pseudo-random bits is presented.
Abstract: Much effort has been devoted in the second half of this century to making precise the notion of Randomness. Let us informally recall one of these definitions, due to Kolmogorov: a sequence of bits A = a_1, a_2, ..., a_k is random if the length of the minimal program outputting A is at least k. We remark that the above definition is highly non-constructive and rules out the possibility of pseudo-random number generators. Also, the length of a program, from a Complexity Theory point of view, is a rather unnatural measure. A more operative definition of Randomness should be pursued in the light of modern Complexity Theory.
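The informal definition recalled above can be stated precisely. Writing $U$ for a fixed universal machine, the plain Kolmogorov complexity of a string and the resulting incompressibility criterion are:

```latex
C(A) \;=\; \min\{\, |p| \;:\; U(p) = A \,\}, \qquad
A = a_1 a_2 \cdots a_k \ \text{is random iff}\ C(A) \ge k .
```

Since $C$ is not computable, no algorithm can certify randomness in this sense, which is exactly why the abstract calls the definition non-constructive and turns to complexity-theoretic (pseudorandom) alternatives.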

1,216 citations


Journal ArticleDOI
TL;DR: In this paper, a deep learning method for optimal stopping problems was developed, which directly learns the optimal stopping rule from Monte Carlo samples and is broadly applicable in situations where the underlying randomness can efficiently be simulated.
Abstract: In this paper we develop a deep learning method for optimal stopping problems which directly learns the optimal stopping rule from Monte Carlo samples. As such, it is broadly applicable in situations where the underlying randomness can efficiently be simulated. We test the approach on three problems: the pricing of a Bermudan max-call option, the pricing of a callable multi barrier reverse convertible and the problem of optimally stopping a fractional Brownian motion. In all three cases it produces very accurate results in high-dimensional situations with short computing times.
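The idea of learning a stopping rule directly from simulated paths can be illustrated with a deliberately simplified version: instead of the paper's deep network, a one-parameter threshold rule is fitted to Monte Carlo paths of a geometric Brownian motion (all names and parameter values below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate paths of a geometric Brownian motion (toy Bermudan-put setting).
n_paths, n_steps, s0, k_strike, sigma, dt = 10000, 10, 1.0, 1.0, 0.2, 0.1
z = rng.standard_normal((n_paths, n_steps))
paths = s0 * np.exp(np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1))

def value_of_threshold(b: float) -> float:
    """Expected payoff of the rule 'stop the first time S_t <= b'."""
    payoff = np.maximum(k_strike - paths, 0.0)
    stop = paths <= b
    # First exercise time per path (last step if never triggered).
    idx = np.where(stop.any(axis=1), stop.argmax(axis=1), n_steps - 1)
    return payoff[np.arange(n_paths), idx].mean()

# "Learning" here is a one-parameter grid search; the paper trains a deep
# network that outputs a stopping decision at each date instead.
best_b = max(np.linspace(0.7, 1.0, 31), key=value_of_threshold)
```

The same Monte Carlo principle carries over: any stopping rule that can be evaluated on simulated paths can be optimized against the sample average payoff.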

170 citations


Book ChapterDOI
18 Aug 2019
TL;DR: In this article, the authors propose a secure MPC with no honest majority, where input-independent correlated randomness enables a lightweight "non-cryptographic" online phase once the inputs are known.
Abstract: Secure multiparty computation (MPC) often relies on correlated randomness for better efficiency and simplicity. This is particularly useful for MPC with no honest majority, where input-independent correlated randomness enables a lightweight “non-cryptographic” online phase once the inputs are known. However, since the amount of randomness typically scales with the circuit size of the function being computed, securely generating correlated randomness forms an efficiency bottleneck, involving a large amount of communication and storage.

145 citations


Journal ArticleDOI
TL;DR: It is shown that, by using only an additional 16% of LUTs, the proposed PRNG obtains a much better performance in terms of randomness, increasing the NIST passing rate from 0.252 to 0.989.
Abstract: In this paper, a new pseudorandom number generator (PRNG) based on the logistic map is proposed. To prevent the system from falling into short-period orbits, as well as to increase the randomness of the generated sequences, the proposed algorithm dynamically changes the parameters of the chaotic system. This PRNG has been implemented in a Virtex 7 field-programmable gate array (FPGA) with 32-bit fixed-point precision, using a total of 510 lookup tables (LUTs) and 120 registers. The sequences generated by the proposed algorithm have been subjected to the National Institute of Standards and Technology (NIST) randomness tests, passing all of them. By comparing the randomness with sequences generated by a raw 32-bit logistic map, it is shown that, by using only an additional 16% of LUTs, the proposed PRNG obtains much better performance in terms of randomness, increasing the NIST passing rate from 0.252 to 0.989. Finally, the proposed bitwise dynamical PRNG is compared with other previously proposed chaos-based realizations, showing great improvement in terms of resources and randomness.
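The scheme can be caricatured in a few lines: threshold the logistic map to emit bits, and perturb the parameter r as a function of the state so the orbit cannot settle into a short cycle. This is a toy floating-point sketch; the paper's design is a 32-bit fixed-point FPGA implementation, and the perturbation rule below is our illustrative choice, not theirs:

```python
def logistic_prng(seed: float, n_bytes: int, r: float = 3.99) -> bytes:
    """Toy bitwise PRNG from the logistic map x -> r*x*(1-x)."""
    x, out = seed, bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            x = r * x * (1.0 - x)
            byte = (byte << 1) | (1 if x > 0.5 else 0)  # threshold to a bit
        out.append(byte)
        # Small state-dependent perturbation of r, kept inside the chaotic
        # band r in [3.99, 3.999], loosely mimicking the dynamical parameter
        # changes that avoid short-period orbits.
        r = 3.99 + 0.009 * x
    return bytes(out)

stream = logistic_prng(0.123456, 32)
```

A real design would additionally discard a transient, fix the arithmetic to the FPGA word size, and validate the output against the NIST suite, as the paper does.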

82 citations


Journal ArticleDOI
TL;DR: In this paper, a stochastic Peierls-Nabarro (PN) model is proposed to understand how random site occupancy affects the intrinsic strength of high-entropy alloys.

78 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that local random quantum circuits generate unitary transformations whose complexity grows linearly for a long time, mirroring the behavior one expects in chaotic quantum systems and verifying conjectures by Brown and Susskind.
Abstract: The concept of quantum complexity has far-reaching implications spanning theoretical computer science, quantum many-body physics, and high energy physics. The quantum complexity of a unitary transformation or quantum state is defined as the size of the shortest quantum computation that executes the unitary or prepares the state. It is reasonable to expect that the complexity of a quantum state governed by a chaotic many-body Hamiltonian grows linearly with time for a time that is exponential in the system size; however, because it is hard to rule out a short-cut that improves the efficiency of a computation, it is notoriously difficult to derive lower bounds on quantum complexity for particular unitaries or states without making additional assumptions. To go further, one may study more generic models of complexity growth. We provide a rigorous connection between complexity growth and unitary $k$-designs, ensembles which capture the randomness of the unitary group. This connection allows us to leverage existing results about design growth to draw conclusions about the growth of complexity. We prove that local random quantum circuits generate unitary transformations whose complexity grows linearly for a long time, mirroring the behavior one expects in chaotic quantum systems and verifying conjectures by Brown and Susskind. Moreover, our results apply under a strong definition of quantum complexity based on optimal distinguishing measurements.

76 citations


Journal ArticleDOI
TL;DR: A novel plain-image encryption scheme using finite-precision error is proposed; the error is obtained by implementing a chaotic system with two different natural interval extensions, and the resulting sequence has sufficient randomness to be used in encryption.
Abstract: Chaotic systems are broadly adopted to generate the pseudo-random numbers used in encryption schemes. However, when implemented on a finite-precision computer, chaotic systems suffer dynamical degradation of their chaotic properties. Many works have been proposed to address this issue. Nevertheless, little attention has been paid to exploiting finite precision as a source of randomness rather than as a flaw to be mitigated. This paper proposes a novel plain-image encryption scheme using the finite-precision error. The error is obtained by implementing a chaotic system using two different natural interval extensions. The generated sequence has passed all NIST tests, which means it has sufficient randomness to be used in encryption. Several benchmark images have been effectively encrypted using the proposed approach.
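The entropy source is the rounding discrepancy between two "natural interval extensions" of the same map: two algebraically identical formulas that round differently in floating point, so their pseudo-orbits separate. A minimal sketch (the constants, transient length, and bit-extraction rule are our illustrative choices):

```python
import struct

def interval_extension_bits(x0: float, r: float, n: int) -> list[int]:
    """Bits from the finite-precision error between two equivalent forms
    of the logistic map: r*x*(1-x) versus r*x - r*x*x."""
    xa = xb = x0
    for _ in range(200):              # transient: let rounding differences grow
        xa = r * xa * (1.0 - xa)
        xb = r * xb - r * xb * xb
    bits = []
    for _ in range(n):
        xa = r * xa * (1.0 - xa)
        xb = r * xb - r * xb * xb
        # Least-significant mantissa bit of the discrepancy as one output bit.
        (as_int,) = struct.unpack("<Q", struct.pack("<d", abs(xa - xb)))
        bits.append(as_int & 1)
    return bits
```

In exact arithmetic both orbits would be identical and the output would be all zeros; the randomness exploited here comes entirely from rounding, which is the paper's point.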

67 citations


Journal ArticleDOI
TL;DR: It is argued that the transition in the case of random disorder exhibits universal features that are identified by constructing an appropriate model of intermediate spectral statistics which is a generalization of the family of short-range plasma models.
Abstract: Level statistics of systems that undergo the many-body localization transition are studied. An analysis of the gap ratio statistics from the perspective of inter- and intrasample randomness allows us to pinpoint differences between transitions in random and quasirandom disorder, showing the effects due to Griffiths rare events in the former case. It is argued that the transition in the case of random disorder exhibits universal features that are identified by constructing an appropriate model of intermediate spectral statistics which is a generalization of the family of short-range plasma models. The considered weighted short-range plasma model yields very good agreement both for the level spacing distribution, including its exponential tail, and for the number variance up to tens of level spacings, outperforming previously proposed models. In particular, our model grasps the critical level statistics which arise at the disorder strength for which the intersample fluctuations are the strongest. Going beyond the paradigmatic examples of many-body localization in spin systems, we show that the considered model also grasps the level statistics of disordered Bose- and Fermi-Hubbard models. The remaining deviations for long-range spectral correlations are discussed and attributed mainly to the intricacies of level unfolding.

65 citations


Journal ArticleDOI
TL;DR: This survey provides a systematic review of true random number generators (TRNGs) based on chaos, exposes the main approaches and challenges to help researchers decide which ones best suit their needs and goals, and points out a set of promising future works.
Abstract: With the rapid development of communication technology and the popularization of networks, information security has come to be highly valued in all walks of life. Random numbers are used in many cryptographic protocols, key management, identity authentication, image encryption, and so on. True random numbers (TRNs) offer better randomness and unpredictability for encryption and keys than pseudorandom numbers (PRNs). Chaos has good features of sensitive dependence on initial conditions, randomness, periodicity, and reproduction. These properties coincide with the rise of chaos-based approaches to generating TRNs. This survey paper intends to provide a systematic review of true random number generators (TRNGs) based on chaos. Firstly, the two kinds of popular chaotic systems for generating TRNs, namely continuous-time and discrete-time chaotic systems, are introduced. The main approaches and challenges are exposed to help researchers decide which ones best suit their needs and goals. Then, existing methods are reviewed, highlighting their contributions and their significance in the field. We also devote a part of the paper to reviewing TRNGs based on current-mode chaos. Next, quantitative results are given for the described methods in the settings in which they were evaluated, followed by a discussion of the results. Finally, we point out a set of promising future works and draw our own conclusions about the state of the art of TRNGs based on chaos.

Posted Content
TL;DR: It is argued that random circuits form approximate unitary $k$-designs in $O(nk)$ depth and are thus essentially optimal in both $n$ and $k$; this can be shown in the limit of large local dimension.
Abstract: Random quantum circuits are proficient information scramblers and efficient generators of randomness, rapidly approximating moments of the unitary group. We study the convergence of local random quantum circuits to unitary $k$-designs. Employing a statistical mechanical mapping, we give an exact expression of the distance to forming an approximate design as a lattice partition function. In the statistical mechanics model, the approach to randomness has a simple interpretation in terms of domain walls extending through the circuit. We analytically compute the second moment, showing that random circuits acting on $n$ qudits form approximate 2-designs in $O(n)$ depth, as is known. Furthermore, we argue that random circuits form approximate unitary $k$-designs in $O(nk)$ depth and are thus essentially optimal in both $n$ and $k$. We can show this in the limit of large local dimension, but more generally rely on a conjecture about the dominance of certain domain wall configurations.

Journal ArticleDOI
08 Apr 2019
TL;DR: A method that enables ultra-fast unpredictable quantum random number generation from quadrature fluctuations of quantum optical field without any assumptions on the input states is proposed and a new security analysis framework is established based on the extremality of Gaussian states which can be easily extended to design and analyze new semi-device-independent continuous variable QRNG protocols.
Abstract: As a fundamental phenomenon in nature, randomness has a wide range of applications in science and engineering. Among different types of random number generators (RNGs), the quantum random number generator (QRNG) is promising because it can provide provably true random numbers based on the inherent randomness of fundamental quantum processes. Nevertheless, the randomness from a QRNG can be diminished (or even destroyed) if the devices (especially the entropy-source devices) are imperfect or ill-characterized. To eliminate practical security loopholes from the source, source-independent QRNGs, which allow the source to have arbitrary and unknown dimensions, have been introduced and have become one of the most important semi-device-independent (DI) QRNGs. Herein, a method is proposed that enables ultra-fast unpredictable quantum random number generation from quadrature fluctuations of a quantum optical field without any assumptions on the input states. In particular, to estimate a lower bound on the extractable randomness that is independent of side information held by an eavesdropper, a new security analysis framework is established based on the extremality of Gaussian states, which can be easily extended to design and analyze new semi-DI continuous-variable QRNG protocols. Moreover, the practical imperfections of the QRNG, including the effects of excess noise, finite sampling range, finite resolution, and asymmetric conjugate quadratures, are taken into account and quantitatively analyzed. Finally, the proposed method is experimentally demonstrated to obtain a high secure random number generation rate of 15.07 Gbit/s in the off-line configuration and can potentially achieve 6 Gbit/s with real-time post-processing.

Journal ArticleDOI
01 Jun 2019-Chaos
TL;DR: A novel image encryption scheme based on the pseudo-orbits of 1D chaotic maps is presented; it uses the difference of two pseudo-orbits to generate a random sequence that passes all NIST tests and thus has adequate randomness to be employed in the encryption process.
Abstract: Chaotic systems have been extensively applied in image encryption as a source of randomness. However, dynamical degradation has been pointed out as an important limitation of this procedure. To overcome this limitation, this paper presents a novel image encryption scheme based on the pseudo-orbits of 1D chaotic maps. We use the difference of two pseudo-orbits to generate a random sequence. The generated sequence has been successful in all NIST tests, which implies it has adequate randomness to be employed in encryption process. Confusion and diffusion requirements are also effectively implemented. The usual low key space of 1D maps has been improved by a novelty procedure based on multiple perturbations in the transient time. A factor using the plain image is one of the perturbation conditions, which ensures a new and distinct secret key for each image to be encrypted. The proposed encryption scheme has been efficaciously verified using the Lena, Baboon, and Barbara test images.
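The core mechanism, a keystream built from the difference of two pseudo-orbits and combined with the plain data, can be sketched as follows. The byte-extraction step and constants are our simplifications; the paper adds confusion/diffusion stages and a plaintext-dependent key perturbation on top:

```python
def keystream(x0: float, r: float, n: int) -> bytes:
    """Keystream from the difference of two pseudo-orbits of the logistic
    map (two equivalent formulas evaluated in floating point)."""
    xa = xb = x0
    for _ in range(64):               # discard a transient so the orbits separate
        xa = r * xa * (1.0 - xa)
        xb = r * xb - r * xb * xb
    out = bytearray()
    for _ in range(n):
        xa = r * xa * (1.0 - xa)
        xb = r * xb - r * xb * xb
        out.append(int(abs(xa - xb) * 1e16) & 0xFF)  # one byte per step
    return bytes(out)

def xor_cipher(data: bytes, x0: float, r: float = 3.99) -> bytes:
    """XOR the data with the pseudo-orbit-difference keystream."""
    ks = keystream(x0, r, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"pixel data"
enc = xor_cipher(msg, 0.456)
assert xor_cipher(enc, 0.456) == msg  # XOR keystream encryption is an involution
```

Because encryption and decryption are the same XOR operation, correctness reduces to both sides regenerating the identical keystream from the shared key (x0, r).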

Journal ArticleDOI
TL;DR: In this article, the authors improve the second-order sublinear term of the entropy accumulation theorem and prove various bounds on the divergence variance, which might be of independent interest.
Abstract: The entropy accumulation theorem states that the smooth min-entropy of an $n$ -partite system $A = (A_{1}, \ldots, A_{n})$ is lower-bounded by the sum of the von Neumann entropies of suitably chosen conditional states up to corrections that are sublinear in $n$ . This theorem is particularly suited to proving the security of quantum cryptographic protocols, and in particular so-called device-independent protocols for randomness expansion and key distribution, where the devices can be built and preprogrammed by a malicious supplier. However, while the bounds provided by this theorem are optimal in the first order, the second-order term is bounded more crudely, in such a way that the bounds deteriorate significantly when the theorem is applied directly to protocols where parameter estimation is done by sampling a small fraction of the positions, as is done in most QKD protocols. The objective of this paper is to improve this second-order sublinear term and remedy this problem. On the way, we prove various bounds on the divergence variance, which might be of independent interest.

Journal ArticleDOI
Bruno Dupire
TL;DR: In this article, the authors extend the Ito calculus to functionals of the current path of a process to reflect the fact that often the impact of randomness is cumulative and depends on the history of the process.
Abstract: We extend some results of the Ito calculus to functionals of the current path of a process to reflect the fact that often the impact of randomness is cumulative and depends on the history of the process.

Journal ArticleDOI
TL;DR: In this paper, a 6 Gbps real-time optical quantum random number generator based on measuring vacuum fluctuation is presented, together with an optimized extraction algorithm based on parallel Toeplitz hashing that reduces the influence of classical noise due to device imperfections.
Abstract: We demonstrate a 6 Gbps real-time optical quantum random number generator by measuring vacuum fluctuation. To address the common problem that a speed gap exists between fast randomness generation and slow randomness extraction in most high-speed real-time quantum random number generator systems, we present an optimized extraction algorithm based on a parallel implementation of Toeplitz hashing to reduce the influence of classical noise due to the imperfection of devices. Notably, the real-time rate of randomness extraction we have achieved reaches the highest speed of 12 Gbps while occupying fewer computing resources, and the algorithm has the ability to support randomness extraction at hundreds of Gbps. By assuming that the eavesdropper has complete knowledge of the classical noise, our generator has a randomness generation speed of 6.83 Gbps, and this supports the generation of 6 Gbps information-theoretically provable quantum random numbers, which are output in real time through a peripheral component interconnect express (PCIe) interface.
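Toeplitz hashing compresses n raw bits into m nearly uniform bits by multiplying them with a binary Toeplitz matrix built from a seed of m + n - 1 bits; the parallelism in the paper comes from splitting the input into blocks and reusing the seed. A small dense-matrix sketch (real implementations use FFT-based or hardware multiplication; all sizes below are illustrative):

```python
import numpy as np

def toeplitz_extract(raw_bits: np.ndarray, seed_bits: np.ndarray, m: int) -> np.ndarray:
    """Multiply raw bits by an m x n binary Toeplitz matrix over GF(2).
    T[i, j] = seed_bits[i - j + n - 1], so T is constant along diagonals."""
    n = raw_bits.size
    assert seed_bits.size == m + n - 1
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    T = seed_bits[i - j + n - 1]
    return (T @ raw_bits) % 2

rng = np.random.default_rng(1)
raw = rng.integers(0, 2, 512)               # raw, only partially random bits
seed = rng.integers(0, 2, 512 + 128 - 1)    # public seed for the hash family
out = toeplitz_extract(raw, seed, 128)      # 128 extracted bits
```

The ratio m/n is set from the estimated min-entropy of the raw data (6.83 Gbps of randomness supporting 6 Gbps of output in the paper), and the leftover hash lemma guarantees the output is close to uniform.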

Journal ArticleDOI
TL;DR: In this article, the authors present an experiment that demonstrates device-independent randomness expansion (DIRNE), i.e., where the generated randomness surpasses that consumed, by developing a loophole-free Bell test setup with a single photon detection efficiency of around 81% and exploiting a spot-checking protocol.
Abstract: The ability to produce random numbers that are unknown to any outside party is crucial for many applications. Device-independent randomness generation (DIRNG) allows new randomness to be provably generated, without needing to trust the devices used for the protocol. This provides strong guarantees about the security of the output, but comes at the price of requiring the violation of a Bell inequality to implement. A further challenge is to make the bounds in the security proofs tight enough to allow expansion with contemporary technology. Thus, while randomness has been generated in recent experiments, the amount of randomness consumed in doing so has been too high to certify expansion based on existing theory. Here we present an experiment that demonstrates device-independent randomness expansion (DIRNE), i.e., where the generated randomness surpasses that consumed. By developing a loophole-free Bell test setup with a single photon detection efficiency of around 81% and exploiting a spot-checking protocol, we achieve a net gain of $2.63\times10^8$ certified bits with soundness error $5.74\times10^{-8}$. The experiment ran for 220 hours corresponding to an average rate of randomness generation of 8202 bits/s. By developing the Entropy Accumulation Theorem (EAT), we established security against quantum adversaries. We anticipate that this work will lead to further improvements that push device-independence towards commercial viability.

Journal ArticleDOI
30 Sep 2019-Entropy
TL;DR: A PRNG based on a modified logistic chaotic system with fixed system parameters is proposed; its chaotic behavior is analyzed and proved, and the generated pseudo-random sequences are shown to have good randomness and cryptographic properties and to pass the statistical tests.
Abstract: In recent years, chaotic systems have been considered important pseudo-random sources for pseudo-random number generators (PRNGs). This paper proposes a PRNG based on a modified logistic chaotic system. This chaotic system, with fixed system parameters, is convergent, and its chaotic behavior is analyzed and proved. In order to improve the complexity and randomness of the modified PRNG, the chaotic system parameter, represented by floating-point numbers generated by the chaotic system, is confused and rearranged to increase the key space and reduce the possibility of an exhaustive attack. Because the output exhibits no exploitable statistical characteristics, it is hard to infer the generated pseudo-random numbers from the chaotic behavior. The system parameters of each chaotic iteration are related to the chaotic values generated by the previous one, which allows the PRNG to produce sequences of arbitrary length. By confusing and rearranging the output sequence, the system parameters of a previous step cannot be recovered from later outputs, which ensures security. The analysis shows that the pseudo-random sequences generated by this method have good randomness and cryptographic properties and can pass the statistical tests.

Journal ArticleDOI
TL;DR: This paper reviews methods of generating randomness in various fields and suggests that by disordering a “false order,” an effective disorder can be generated to improve the function of systems.
Abstract: Randomness is far from a disturbing disorder in nature. Rather, it underlies many processes and functions. Randomness can be used to improve the efficacy of development and of systems under certain conditions. Moreover, valid unpredictable random-number generators are needed for secure communication, rendering predictable pseudorandom strings unsuitable. This paper reviews methods of generating randomness in various fields. The potential use of these methods is also discussed. It is suggested that by disordering a “false order,” an effective disorder can be generated to improve the function of systems.

Journal ArticleDOI
TL;DR: It is shown that randomness is not a requirement for this computational paradigm, and two methods are discussed for maintaining constant bit-stream lengths via approximations based on low-discrepancy sequences.
Abstract: Stochastic logic performs computation on data represented by random bit-streams. The representation allows complex arithmetic to be performed with very simple logic, but it suffers from high latency and poor precision. Furthermore, the results are always somewhat inaccurate due to random fluctuations. In this paper, we show that randomness is not a requirement for this computational paradigm. If properly structured, the same arithmetical constructs can operate on deterministic bit-streams, with the data represented uniformly by the fraction of 1's versus 0's. This paper presents three approaches for the computation: relatively prime stream lengths, rotation, and clock division. Unlike stochastic methods, all three of our deterministic methods produce completely accurate results. The cost of generating the deterministic streams is a small fraction of the cost of generating streams from random/pseudorandom sources. Most importantly, the latency is reduced by a factor of $1/2^{n}$, where $n$ is the equivalent number of bits of precision. When computing in unary, the bit-stream length increases with each level of logic. This is an inevitable consequence of the representation, but it can result in unmanageable bit-stream lengths. We discuss two methods for maintaining constant bit-stream lengths via approximations, based on low-discrepancy sequences. These methods provide the best accuracy and area $\times$ delay product. They are fast-converging and therefore offer progressive precision.
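The "relatively prime stream lengths" approach can be demonstrated in a few lines: when the two stream lengths are coprime, repeating both streams for lcm(L_a, L_b) = L_a * L_b bits pairs every bit of one stream with every bit of the other exactly once (by the Chinese remainder theorem), so AND-ing them computes the product of the represented fractions exactly. Function names are ours:

```python
from math import gcd

def unary_stream(value: int, length: int, total: int) -> list[int]:
    """Repeat a unary bit-stream ('value' ones then zeros) out to 'total' bits."""
    base = [1] * value + [0] * (length - value)
    return [base[i % length] for i in range(total)]

def deterministic_multiply(a: int, la: int, b: int, lb: int) -> float:
    """Exact bit-stream multiplication (a/la) * (b/lb): AND two streams whose
    lengths are relatively prime, over lcm(la, lb) = la * lb bits."""
    assert gcd(la, lb) == 1
    total = la * lb
    sa = unary_stream(a, la, total)
    sb = unary_stream(b, lb, total)
    ones = sum(x & y for x, y in zip(sa, sb))
    return ones / total

# (3/4) * (2/5) = 6/20 = 0.3, computed exactly, with no random fluctuation
assert deterministic_multiply(3, 4, 2, 5) == 0.3
```

A stochastic-logic version of the same AND gate would only approximate 0.3, with variance set by the stream length; here the count of 1s is exactly 3 * 2 = 6 out of 20.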

Journal ArticleDOI
TL;DR: NPEM is applied to analyze the random dynamic responses of a 3D train-bridge coupled system with random parameters, including bridge material parameters, environmental factors, and prestressing parameters; results show that NPEM can analyze the random system accurately and efficiently.

Journal ArticleDOI
TL;DR: Two approaches to incorporating dependence into Stochastic Dual Dynamic Programming (SDDP) are compared in a computational study of the long-term operational planning problem of the Brazilian interconnected power system. For the considered problem, the optimality bounds computed by the MC-SDDP method close faster than those of its TS-SDDP counterpart, and the MC-SDDP policy dominates the TS-SDDP policy.

Proceedings ArticleDOI
23 Sep 2019
TL;DR: This paper performs a controlled study of the effect of random seeds on the behaviour of attention, gradient-based, and surrogate-model-based (LIME) interpretations, and proposes a technique called Aggressive Stochastic Weight Averaging (ASWA) and an extension called Norm-filtered Aggressive Stochastic Weight Averaging (NASWA) which improve the stability of models over random seeds.
Abstract: In this paper, we focus on quantifying model stability as a function of random seed by investigating the effects of the induced randomness on model performance and the robustness of the model in general. We specifically perform a controlled study on the effect of random seeds on the behaviour of attention, gradient-based and surrogate model based (LIME) interpretations. Our analysis suggests that random seeds can adversely affect the consistency of models resulting in counterfactual interpretations. We propose a technique called Aggressive Stochastic Weight Averaging (ASWA) and an extension called Norm-filtered Aggressive Stochastic Weight Averaging (NASWA) which improves the stability of models over random seeds. With our ASWA and NASWA based optimization, we are able to improve the robustness of the original model, on average reducing the standard deviation of the model’s performance by 72%.
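Weight averaging along the optimization trajectory is the core of the proposed stabilizer. The following toy sketch applies an SWA-style running average to logistic-regression SGD; the paper's ASWA averages aggressively (per update rather than per epoch) and NASWA adds norm-based filtering, and the data and hyperparameters here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)  # separable toy labels

w = np.zeros(3)
w_avg, n_avg = np.zeros(3), 0

for step in range(500):
    i = rng.integers(0, len(X))
    p = 1.0 / (1.0 + np.exp(-X[i] @ w))
    w -= 0.1 * (p - y[i]) * X[i]          # SGD step on the logistic loss
    # Running average of the iterates (averaged every update, ASWA-style).
    n_avg += 1
    w_avg += (w - w_avg) / n_avg

def accuracy(weights: np.ndarray) -> float:
    return float((((X @ weights) > 0).astype(float) == y).mean())
```

Averaging damps the seed-dependent noise of individual SGD iterates, which is the mechanism the paper credits for the reduced variance across random seeds.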

Journal ArticleDOI
TL;DR: In this paper, it was shown that the von Neumann entropy fully characterizes single-shot state transitions in unitary quantum mechanics, as long as one has access to a catalyst-an ancillary system that can be reused after the transition-and an environment which has the effect of dephasing in a preferred basis.
Abstract: The von Neumann entropy is a key quantity in quantum information theory and, roughly speaking, quantifies the amount of quantum information contained in a state when many identical and independent (i.i.d.) copies of the state are available, in a regime that is often referred to as being asymptotic. In this Letter, we provide a new operational characterization of the von Neumann entropy which neither requires an i.i.d. limit nor any explicit randomness. We do so by showing that the von Neumann entropy fully characterizes single-shot state transitions in unitary quantum mechanics, as long as one has access to a catalyst-an ancillary system that can be reused after the transition-and an environment which has the effect of dephasing in a preferred basis. Building upon these insights, we formulate and provide evidence for the catalytic entropy conjecture, which states that the above result holds true even in the absence of decoherence. If true, this would prove an intimate connection between single-shot state transitions in unitary quantum mechanics and the von Neumann entropy. Our results add significant support to recent insights that, contrary to common wisdom, the standard von Neumann entropy also characterizes single-shot situations and opens up the possibility for operational single-shot interpretations of other standard entropic quantities. We discuss implications of these insights to readings of the third law of quantum thermodynamics and hint at potentially profound implications to holography.

Journal ArticleDOI
TL;DR: Data suggest that some randomness in microtubule structure and dynamics may contribute to their normal function and may even be part of an improved-efficacy mechanism, further supporting the concept of randomness in biological pathways as part of self-organization or accurate and enhanced function.

Journal ArticleDOI
TL;DR: In this paper, the authors report an alternative scheme for implementing generalized quantum measurements that does not require an auxiliary system and utilizes solely (a) classical randomness and postprocessing, (b) projective measurements on the relevant quantum system, and (c) postselection on not observing certain outcomes.
Abstract: We report an alternative scheme for implementing generalized quantum measurements that does not require an auxiliary system. Our method utilizes solely (a) classical randomness and postprocessing, (b) projective measurements on the relevant quantum system, and (c) postselection on not observing certain outcomes. The scheme implements an arbitrary quantum measurement in dimension $d$ with the optimal success probability $1/d$. We apply our results to bound the relative power of projective and generalized measurements for unambiguous state discrimination. Finally, we test our scheme experimentally on an IBM quantum processor. Interestingly, due to the noise involved in the implementation of entangling gates, the quality with which our scheme implements generalized qubit measurements outperforms the standard construction using an auxiliary system.
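A minimal Monte Carlo sketch of the auxiliary-free idea for rank-one POVMs (an illustration of the general mechanism, not the paper's full construction): choose a projector at random using classical randomness, measure it projectively, and postselect on the "click" outcome. For the symmetric "trine" POVM on a qubit, the success probability comes out at 1/d and the postselected statistics reproduce the Born probabilities. The state `rho` is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Trine POVM on a qubit: E_i = (2/3)|psi_i><psi_i| for three symmetric states.
d = 2
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
kets = [np.array([np.cos(t / 2), np.sin(t / 2)]) for t in angles]
weights = np.array([2 / 3] * 3)          # sum of weights equals d

rho = np.array([[0.75, 0.25], [0.25, 0.25]])   # an example qubit state
born = np.array([w * (k @ rho @ k) for w, k in zip(weights, kets)])  # Tr(rho E_i)

# Scheme: pick projector i with probability w_i/d (classical randomness),
# measure it projectively, and keep the run only on a "click" (postselection).
trials = 200_000
labels = rng.choice(3, size=trials, p=weights / d)
p_click = np.array([k @ rho @ k for k in kets])
clicks = rng.random(trials) < p_click[labels]
successes = clicks.sum()
counts = np.bincount(labels[clicks], minlength=3)

print(successes / trials)   # ~ 1/d = 0.5, the optimal success probability
print(counts / successes)   # ~ Born statistics Tr(rho E_i) of the target POVM
```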

Journal ArticleDOI
TL;DR: In this paper, the authors present a quantum-corrected version of the Fokker–Planck equation without dissipation, together with its fourth-order-corrected analytical solution for the probability distribution profile that governs particle creation events during the stochastic inflation and reheating stages of the universe.
Abstract: In this work, our prime focus is to study the one-to-one correspondence between the conduction phenomena in electrical wires with impurity and the scattering events responsible for particle production during stochastic inflation and reheating, implemented as a closed quantum mechanical system in early universe cosmology. In this connection, we also present a derivation of the quantum-corrected version of the Fokker–Planck equation without dissipation and its fourth-order-corrected analytical solution for the probability distribution profile responsible for studying the dynamical features of the particle creation events in the stochastic inflation and reheating stages of the universe. It is explicitly shown from our computation that the quantum-corrected Fokker–Planck equation describes the particle creation phenomena better for a Dirac delta type of scatterer. In this connection, we additionally discuss the Ito and Stratonovich prescriptions and the explicit role of the finite temperature effective potential in solving for the probability distribution profile. Furthermore, we extend our discussion of particle production phenomena to the quantum description of the randomness involved in the dynamics. We also derive an expression for the measure of the stochastic non-linearity (randomness or chaos) arising in the stochastic inflation and reheating epochs of the universe, often described by the Lyapunov exponent. Apart from that, we quantify the quantum chaos arising in a closed system by a stronger measure, commonly known as the spectral form factor, using the principles of random matrix theory (RMT). Additionally, we discuss the role of the out-of-time-order correlation function (OTOC) in describing quantum chaos in the present non-equilibrium field-theoretic setup and its consequences for early universe cosmology (stochastic inflation and reheating). Finally, for completeness, we also provide a bound on the measures of quantum chaos (i.e. on the Lyapunov exponent and spectral form factor) arising due to the presence of stochastic non-linear dynamical interactions in the closed quantum system of the early universe, in a completely model-independent way.
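The spectral form factor invoked above is straightforward to compute numerically. The following sketch (illustrative, not from the paper; matrix size and sample count are arbitrary choices) evaluates SFF(t) = ⟨|Tr e^{−iHt}|²⟩/N² over an ensemble of GUE random matrices, the standard RMT benchmark for quantum chaos:

```python
import numpy as np

rng = np.random.default_rng(1)

# SFF(t) = <|Tr exp(-iHt)|^2> / N^2, ensemble-averaged over GUE matrices.
N, n_samples = 64, 100
times = np.linspace(0.0, 20.0, 101)
sff = np.zeros_like(times)
for _ in range(n_samples):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2                            # Hermitian, GUE-distributed
    E = np.linalg.eigvalsh(H)
    Z = np.exp(-1j * np.outer(times, E)).sum(axis=1)    # Tr exp(-iHt)
    sff += np.abs(Z) ** 2 / N**2
sff /= n_samples

print(sff[0])    # exactly 1 at t = 0, since Tr(I) = N
print(sff[-1])   # small late-time value, settling toward the ~1/N plateau
```

The characteristic dip-ramp-plateau shape of `sff` as a function of `times` is the signature of RMT-like level statistics.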

Journal ArticleDOI
TL;DR: The trust-free experimental verification of higher-dimensional quantum steering is reported via the preparation of a class of entangled photonic qutrits, and 1.106±0.023 bits of private randomness per photon pair are extracted from the observed data, surpassing the one-bit limit for projective measurements performed on qubit systems.
Abstract: In a measurement-device-independent or quantum-refereed protocol, a referee can verify whether two parties share entanglement or Einstein-Podolsky-Rosen (EPR) steering without the need to trust either of the parties or their devices. The need for trusting a party is substituted by a quantum channel between the referee and that party, through which the referee encodes the measurements to be performed on that party's subsystem in a set of nonorthogonal quantum states. In this Letter, an EPR-steering inequality is adapted as a quantum-refereed EPR-steering witness, and the trust-free experimental verification of higher-dimensional quantum steering is reported via preparing a class of entangled photonic qutrits. Further, with two measurement settings, we extract 1.106±0.023 bits of private randomness per photon pair from our observed data, which surpasses the one-bit limit for projective measurements performed on qubit systems. Our results advance research on quantum information processing tasks beyond qubits.
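The one-bit limit mentioned above follows from a simple min-entropy count: a projective qubit measurement has at most two outcomes. A small illustration (the qutrit probabilities below are hypothetical numbers, not the experiment's data):

```python
import math

def min_entropy_bits(probs):
    """Min-entropy H_min = -log2(max_i p_i): the number of near-uniform
    random bits extractable per run from this outcome distribution."""
    return -math.log2(max(probs))

# A projective qubit measurement has at most two outcomes, so it can
# certify at most -log2(1/2) = 1 bit of randomness per photon pair.
print(min_entropy_bits([0.5, 0.5]))          # 1.0

# Near-uniform three-outcome (qutrit) statistics exceed the 1-bit cap,
# which is the kind of advantage the experiment reports.
print(min_entropy_bits([0.36, 0.33, 0.31]))  # ~1.47 bits
```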

Proceedings ArticleDOI
14 May 2019
TL;DR: This work proposes a novel game-theoretic approach for generating provably unmanipulatable pseudorandom numbers on the blockchain that allows smart contracts to access a trustworthy source of randomness that does not rely on potentially compromised miners or oracles, hence enabling the creation of a new generation of smart contracts that are not limited to being non-probabilistic and can be drawn from the much more general class of probabilistic programs.
Abstract: In today's programmable blockchains, smart contracts are limited to being deterministic and non-probabilistic. This lack of randomness is a consequential limitation, given that a wide variety of real-world financial contracts, such as casino games and lotteries, depend entirely on randomness. As a result, several ad hoc random number generation approaches have been developed for use in smart contracts. These include ideas such as using an oracle or relying on the block hash. However, these approaches are manipulatable, i.e., their output can be tampered with by parties who might not be neutral, such as the owner of the oracle or the miners. We propose a novel game-theoretic approach for generating provably unmanipulatable pseudorandom numbers on the blockchain. Our approach allows smart contracts to access a trustworthy source of randomness that does not rely on potentially compromised miners or oracles, hence enabling the creation of a new generation of smart contracts that are not limited to being non-probabilistic and can be drawn from the much more general class of probabilistic programs.
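For context, the simplest multi-party randomness construction in this space is a commit-reveal beacon. The sketch below is that common baseline, NOT the paper's protocol; the paper's contribution is precisely the game-theoretic incentive layer that removes the manipulation left open here (a last revealer aborting after seeing the others' values):

```python
import hashlib
import secrets

def commit(value: bytes, nonce: bytes) -> bytes:
    """Hash commitment: hiding and binding, assuming SHA-256."""
    return hashlib.sha256(nonce + value).digest()

# Phase 1: each party publishes only a commitment to a secret random value.
parties = []
for _ in range(3):
    value, nonce = secrets.token_bytes(32), secrets.token_bytes(32)
    parties.append((value, nonce, commit(value, nonce)))

# Phase 2: once all commitments are recorded, parties reveal their values;
# the beacon output is the XOR of all revealed values.
result = bytes(32)
for value, nonce, c in parties:
    assert commit(value, nonce) == c, "reveal does not match commitment"
    result = bytes(a ^ b for a, b in zip(result, value))

print(result.hex())   # uniform as long as at least one revealer is honest
```

The residual weakness, selective reveal withholding, is exactly what makes naive on-chain schemes manipulatable and what a game-theoretic design must make unprofitable.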