
Showing papers by "Richard D. Gill published in 2003"


Posted Content
TL;DR: This paper outlines recent developments in quantum information for an audience of statisticians and probabilists, in particular in the sense of the amount of information about unknown parameters in given observational data or accessible through various possible types of measurements.
Abstract: Recent developments in the mathematical foundations of quantum mechanics have brought the theory closer to that of classical probability and statistics. On the other hand, the unique character of quantum physics sets many of the questions addressed apart from those met classically in stochastics. Furthermore, concurrent advances in experimental techniques and in the theory of quantum computation have led to a strong interest in questions of quantum information, in particular in the sense of the amount of information about unknown parameters in given observational data or accessible through various possible types of measurements. This scenery is outlined (with an audience of statisticians and probabilists in mind).
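As background for the phrase "amount of information about unknown parameters", a standard formulation from quantum estimation theory (my notation, not quoted from the paper) is the quantum Cramér-Rao bound. For a family of states $\rho_\theta$, the symmetric logarithmic derivative $\lambda_\theta$ is defined by
$$\frac{d\rho_\theta}{d\theta} = \tfrac12\big(\lambda_\theta \rho_\theta + \rho_\theta \lambda_\theta\big),$$
and the quantum Fisher information is $I^Q(\theta) = \mathrm{tr}(\rho_\theta \lambda_\theta^2)$. For any measurement, the classical Fisher information $I_M(\theta)$ of the resulting outcome distribution satisfies $I_M(\theta) \le I^Q(\theta)$, so an unbiased estimator based on $n$ independent copies obeys $\mathrm{Var}(\hat\theta) \ge 1/\big(n\, I^Q(\theta)\big)$.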

145 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a review of the field of statistical inference in quantum systems and propose and interrelate some new concepts for an extension of classical statistical inference to the quantum context.
Abstract: Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and interrelates some new concepts for an extension of classical statistical inference to the quantum context.
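As a concrete anchor for "extension of classical statistical inference to the quantum context" (a standard textbook formulation, not taken from the paper): a quantum statistical model consists of states $\rho_\theta$ together with a measurement described by a POVM, and the Born rule turns the pair into an ordinary parametric model,
$$ p_\theta(x) = \mathrm{tr}\big(\rho_\theta M(x)\big), \qquad M(x) \ge 0, \quad \sum_x M(x) = I, $$
after which likelihoods, estimators and tests for $\theta$ can be treated by classical statistical methods; the distinctively quantum ingredient is the additional optimization over the choice of measurement $M$.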

144 citations


01 Jan 2003
TL;DR: In this paper, the effects of time dependence in the Bell inequality were analyzed and a generalized inequality was derived for the case when coincidence and non-coincidence (and hence whether or not a pair contributes to the actual data) is controlled by timing that depends on the detector settings.
Abstract: This paper analyzes effects of time dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence (and hence whether or not a pair contributes to the actual data) is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole".
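For orientation, the standard (CHSH) form of the Bell inequality that the generalized inequality modifies reads as follows (standard background, not the paper's new bound):
$$ \big|E(a,b) + E(a,b') + E(a',b) - E(a',b')\big| \le 2, \qquad E(a,b) = \int A(a,\lambda)\,B(b,\lambda)\,d\rho(\lambda), $$
with outcomes $A, B \in \{-1,+1\}$ depending only on the local setting and the hidden variable $\lambda$, while quantum mechanics can reach $2\sqrt{2}$. When only coincident pairs are retained and the coincidence criterion itself depends on the settings, the right-hand side must be corrected, and the paper quantifies how much coincidence loss can be tolerated before the observed violation is no longer conclusive.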

71 citations


01 Jan 2003
TL;DR: In this paper, the authors discuss three issues connected to Bell's theorem and Bell-CHSH-type experiments: time and the memory loophole, finite statistics (how wide are the error bars, under local realism?), and the question of whether a loophole-free experiment is feasible.
Abstract: In this contribution to the 2002 Växjö conference on the foundations of quantum mechanics and probability, I discuss three issues connected to Bell’s theorem and Bell-CHSH-type experiments: time and the memory loophole, finite statistics (how wide are the error bars, under local realism?), and the question of whether a loophole-free experiment is feasible, a surprising omission on Bell’s list of four positions to hold in the light of his results. Lévy’s (1935) theory of martingales, and Fisher’s (1935) theory of randomization in experimental design, take care of time and of finite statistics. I exploit a (classical) computer network metaphor for local realism to argue that Bell’s conclusions are independent of how one likes to interpret probability. I give a critique of some recent anti-Bellist literature.
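A minimal sketch of the computer network metaphor (my own illustration in Python, not the author's code): a source machine sends a classical message to two measurement stations, each of which outputs plus or minus one as a function of its own randomly chosen setting and the message only; the empirical CHSH statistic then cannot systematically exceed the local bound of 2.

```python
# Toy CHSH experiment run on a "classical computer network": outcomes are computed
# locally from the hidden message lam and the local setting only, so the empirical
# CHSH statistic S = E11 + E12 + E21 - E22 stays at or below 2 (here it equals 2).
import random

def run_trials(n=100_000, seed=1):
    rng = random.Random(seed)
    sums = {(a, b): 0 for a in (1, 2) for b in (1, 2)}
    counts = {(a, b): 0 for a in (1, 2) for b in (1, 2)}
    for _ in range(n):
        lam = rng.random()            # classical message from the source
        a = rng.choice((1, 2))        # Alice's randomized setting
        b = rng.choice((1, 2))        # Bob's randomized setting
        x = 1 if lam < 0.5 else -1    # Alice's outcome: depends on lam only
        y = x if b == 1 else -x       # Bob's outcome: depends on (b, lam) only
        sums[(a, b)] += x * y
        counts[(a, b)] += 1
    E = {ab: sums[ab] / counts[ab] for ab in sums}
    return E[(1, 1)] + E[(1, 2)] + E[(2, 1)] - E[(2, 2)]

print("empirical CHSH value:", run_trials())   # prints 2.0 for this local strategy
```

Randomizing the settings afresh on each trial, as in the martingale argument, is what guarantees that even a cleverly programmed network with memory cannot push the long-run value of S beyond 2 except by statistical fluke.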

68 citations


Posted Content
TL;DR: There exist numerous proofs of Bell's theorem, stating that quantum mechanics is incompatible with local realistic theories of nature, and the statistical strength of such nonlocality proofs is defined in terms of the amount of evidence against local realism provided by the corresponding experiments.
Abstract: There exist numerous proofs of Bell's theorem, stating that quantum mechanics is incompatible with local realistic theories of nature. Here we define the strength of such nonlocality proofs in terms of the amount of evidence against local realism provided by the corresponding experiments. This measure tells us how many trials of the experiment we should perform in order to observe a substantial violation of local realism. Statistical considerations show that the amount of evidence should be measured by the Kullback-Leibler or relative entropy divergence between the probability distributions over the measurement outcomes that the respective theories predict. The statistical strength of a nonlocality proof is thus determined by the experimental implementation of it that maximizes the Kullback-Leibler divergence from experimental (quantum mechanical) truth to the set of all possible local theories. An implementation includes a specification with which probabilities the different measurement settings are sampled, and hence the maximization is done over all such setting distributions. We analyze two versions of Bell's nonlocality proof (his original proof and an optimized version by Peres), and proofs by Clauser-Horne-Shimony-Holt, Hardy, Mermin, and Greenberger-Horne-Zeilinger. We find that the GHZ proof is at least four and a half times stronger than all other proofs, while of the two-party proofs, the one of CHSH is the strongest.
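In symbols (notation mine, following the abstract): writing $D(P\|Q) = \sum_x P(x)\log\big(P(x)/Q(x)\big)$ for the Kullback-Leibler divergence and $\sigma$ for the distribution over joint measurement settings, the statistical strength of a nonlocality proof is
$$ S \;=\; \sup_{\sigma} \; \inf_{\pi \in \mathrm{LR}} \; D\!\big( Q_{\sigma} \,\big\|\, P_{\sigma,\pi} \big), $$
where $Q_\sigma$ is the quantum-mechanical distribution of settings and outcomes under $\sigma$ and $P_{\sigma,\pi}$ is the corresponding distribution under a local realist theory $\pi$; the larger $S$ is, the fewer trials are needed to accumulate a given amount of evidence against local realism.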

64 citations


01 Jan 2003
TL;DR: It is shown that the randomised design of the Aspect experiment closes a loophole for local realism; the main tool is a supermartingale version of the classical Bernstein (1924) inequality, guaranteeing a not-heavier-than-Gaussian tail for the distribution of a sum of bounded supermartingale differences.

41 citations


Book ChapterDOI
01 Jan 2003
TL;DR: In this article, the authors show that the randomised design of the Aspect experiment closes the loophole left by finite statistics and the time-sequential nature of real experiments, using a supermartingale version of Bernstein's (1924) inequality that guarantees a not-heavier-than-Gaussian tail for the distribution of a sum of bounded supermartingale differences.
Abstract: An experimentally observed violation of Bell's inequality is supposed to show the failure of local realism to deal with quantum reality. However, finite statistics and the time sequential nature of real experiments still allow a loophole for local realism. We show that the randomised design of the Aspect experiment closes this loophole. Our main tool is van de Geer's (2000) supermartingale version of the classical Bernstein (1924) inequality guaranteeing, at the root n scale, a not-heavier-than-Gaussian tail of the distribution of a sum of bounded supermartingale differences. The results are used to specify a protocol for a public bet between the author and L. Accardi, who in recent papers (Accardi and Regoli, 2000a,b, 2001) has claimed to have produced a suite of computer programmes, to be run on a network of computers, which will simulate a violation of Bell's inequalities. At a sample size of thirty thousand, both error probabilities are guaranteed smaller than one in a million, provided we adhere to the sequential randomized design.
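A tail bound of the kind invoked here, stated in a standard Bernstein/Freedman form rather than in van de Geer's exact formulation: if $X_1, X_2, \ldots$ are supermartingale differences with $|X_i| \le c$ and predictable variance $\sum_{i \le n} \mathrm{Var}(X_i \mid \mathcal{F}_{i-1}) \le v$, then for all $a \ge 0$
$$ \Pr\Big( \sum_{i \le n} X_i \ge a \Big) \;\le\; \exp\!\Big( \frac{-a^2}{2(v + ca/3)} \Big), $$
a not-heavier-than-Gaussian tail at the $\sqrt{n}$ scale; it is bounds of this shape that turn the sequential randomized design into explicit error probabilities, such as the one-in-a-million figures at thirty thousand trials quoted above.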

40 citations


Journal ArticleDOI
01 Jan 2003-EPL
TL;DR: No summary is available for this EPL letter by R. D. Gill, G. Weihs, A. Zeilinger and M. Żukowski; only the author list and affiliations are reproduced below.
Authors and affiliations: R. D. Gill (1), G. Weihs (2), A. Zeilinger (3), M. Żukowski (4). (1) Mathematical Institute, University of Utrecht, Budapestlaan 6, 3584 CD Utrecht, The Netherlands; (2) Ginzton Labs, Stanford University, Stanford, CA 94305-4088, USA; (3) Institut für Experimentalphysik, University of Vienna, Boltzmanngasse 5, 1090 Wien, Austria; (4) Instytut Fizyki Teoretycznej i Astrofizyki, Uniwersytet Gdański, PL-80-952 Gdańsk, Poland.

17 citations



01 Jul 2003
TL;DR: In this paper, it was shown that a rational decision maker behaves as if the probabilistic part of quantum theory (Born's law) is true, and that Born's law is a consequence of functional and unitary invariance principles belonging to the deterministic part of quantum mechanics.
Abstract: We analyse an argument of Deutsch, which purports to show that the deterministic part of classical quantum theory together with deterministic axioms of classical decision theory implies that a rational decision maker behaves as if the probabilistic part of quantum theory (Born’s law) is true. We uncover two missing assumptions in the argument, and show that the argument also works for an instrumentalist who is prepared to accept that the outcome of a quantum measurement is random in the frequentist sense: Born’s law is a consequence of functional and unitary invariance principles belonging to the deterministic part of quantum mechanics. Unfortunately, it turns out that after the necessary corrections we have done no more than give an easier proof of Gleason’s theorem under stronger assumptions. However, for some special cases the proof method gives positive results while using different assumptions from Gleason’s. This leads to the conjecture that the proof could be improved to give the same conclusion as Gleason under unitary invariance together with a much weaker functional invariance condition. The first draft of this paper dates back to early 1999; it was posted on my webpage but never completed. It has since been partly overtaken by Barnum et al. (2000), Saunders (2002), and Wallace (2002). However, there remain new points of view, new results, and most importantly, a still open conjecture.
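For reference, the two statements being related, in their standard forms (not quoted from the paper). Born's law: if an observable has spectral projections $\{P_x\}$ and the system is in state $\rho$ (for a pure state, $\rho = |\psi\rangle\langle\psi|$), then outcome $x$ occurs with probability $p(x) = \mathrm{tr}(\rho P_x) = \langle\psi|P_x|\psi\rangle$. Gleason's theorem: on a Hilbert space of dimension at least three, every countably additive assignment of probabilities to projections, $P \mapsto p(P)$ with $p(I)=1$ and $p\big(\sum_i P_i\big) = \sum_i p(P_i)$ for mutually orthogonal $P_i$, is of the form $p(P) = \mathrm{tr}(\rho P)$ for some density matrix $\rho$. The paper's point is that the Deutsch-style decision-theoretic argument, once the missing assumptions are made explicit, delivers this same conclusion under stronger hypotheses.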

11 citations


Posted Content
TL;DR: In this paper, the authors discuss three issues connected to Bell's theorem and Bell-CHSH-type experiments: time and the memory loophole, finite statistics (how wide are the error bars, under local realism?), and the question of whether a loophole-free experiment is feasible.
Abstract: I discuss three issues connected to Bell's theorem and Bell-CHSH-type experiments: time and the memory loophole, finite statistics (how wide are the error bars, under local realism?), and the question of whether a loophole-free experiment is feasible, a surprising omission on Bell's list of four positions to hold in the light of his results. Lévy's (1935) theory of martingales, and Fisher's (1935) theory of randomization in experimental design, take care of time and of finite statistics. I exploit a (classical) computer network metaphor for local realism to argue that Bell's conclusions are independent of how one likes to interpret probability, and give a critique of some recent anti-Bellist literature.

Posted Content
TL;DR: In this paper, a review of the field of statistical inference in quantum systems is presented, and a number of new concepts for an extension of classical statistical inference to the quantum context are discussed.
Abstract: Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and interrelates a number of new concepts for an extension of classical statistical inference to the quantum context.


01 Jan 2003
TL;DR: In this paper, it was shown that the strength of a non-locality proof is determined by the experimental implementation that maximizes its statistical deviation from all possible local theories, which is best expressed by the Kullback-Leibler distance between the probability distributions over the measurement outcomes that the respective theories predict.
Abstract: The strength of a nonlocality proof is examined in terms of the amount of evidence that the corresponding experiment provides for the nonlocality of Nature. An experimental implementation of such a proof gives data whose statistics will differ from the statistics that are possible under a local description of Nature. The strength of the experiment is quantified by the expected deviation between the observed frequencies, which are given by the laws of quantum mechanics, and the closest possible local theory. Varying the frequencies of the measurement settings gives different experimental implementations of a nonlocality proof, giving each implementation its own strength. The statistical strength of a nonlocality proof is thus determined by the experimental implementation that maximizes its statistical deviation from all possible local theories. It is shown that the deviation between quantum mechanics and a local theory is best expressed by the Kullback-Leibler distance between the probability distributions over the measurement outcomes that the respective theories predict. Specifically, it is proven that the Kullback-Leibler distance is optimal for three methods of hypothesis testing: frequentist, Bayesian, and information theoretic hypothesis testing. The nonlocality proofs that are analyzed in this article are: Bell’s original proof, an improved version of Bell’s proof, the CHSH inequality, Hardy’s proof, a proof by Mermin, and the 3-party GHZ inequality. The outcome is that the GHZ proof is an order of magnitude stronger than all other proofs, while of the two-party proofs, the CHSH inequality is the strongest.
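A minimal numerical illustration of the divergence being maximized (toy numbers of my own, not values from the paper):

```python
# Kullback-Leibler divergence D(Q || P) = sum_x Q(x) * log(Q(x) / P(x)) in nats,
# between the outcome distribution Q predicted by quantum mechanics and an outcome
# distribution P allowed by some local theory; the statistical strength of a
# nonlocality proof is the best value of this quantity against the closest local theory.
import math

def kl_divergence(q, p):
    return sum(qx * math.log(qx / px) for qx, px in zip(q, p) if qx > 0)

q = [0.4268, 0.0732, 0.0732, 0.4268]   # illustrative "quantum" probabilities for one setting pair
p = [0.3750, 0.1250, 0.1250, 0.3750]   # illustrative "closest local theory" probabilities
print("D(Q||P) =", round(kl_divergence(q, p), 4), "nats per trial")
```

Since the expected log-likelihood ratio per trial equals the divergence, roughly k/D trials are needed to accumulate k nats of evidence against the local theory, which is how the divergence translates into the number of experimental runs required.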

Journal Article
TL;DR: In this paper, the authors consider the connection with a probabilistic simulation technique called rejection sampling, and pose some natural questions concerning what can be achieved and what cannot be achieved with local (or distributed) rejection sampling.

Journal Article
TL;DR: In this article, it was shown that a rational decision maker behaves as if Born's law is a consequence of functional and unitary invariance principles belonging to the deterministic part of quantum mechanics.

Journal ArticleDOI
TL;DR: In this paper, the effects of time-dependence in the Bell inequality were analyzed and a generalized inequality was derived for the case when coincidence and non-coincidence is controlled by timing that depends on the detector settings.
Abstract: This paper analyzes effects of time-dependence in the Bell inequality. A generalized inequality is derived for the case when coincidence and non-coincidence [and hence whether or not a pair contributes to the actual data] is controlled by timing that depends on the detector settings. Needless to say, this inequality is violated by quantum mechanics and could be violated by experimental data provided that the loss of measurement pairs through failure of coincidence is small enough, but the quantitative bound is more restrictive in this case than in the previously analyzed "efficiency loophole."

Posted Content
TL;DR: In this article, the authors investigate an argument of Deutsch which purports to show that the deterministic part of classical quantum theory, together with deterministic axioms of classical decision theory, implies that a rational decision maker behaves as if the probabilistic part of quantum theory (Born's law) is true.
Abstract: We analyse an argument of Deutsch, which purports to show that the deterministic part of classical quantum theory together with deterministic axioms of classical decision theory implies that a rational decision maker behaves as if the probabilistic part of quantum theory (Born's law) is true. We uncover two missing assumptions in the argument, and show that the argument also works for an instrumentalist who is prepared to accept that the outcome of a quantum measurement is random in the frequentist sense: Born's law is a consequence of functional and unitary invariance principles belonging to the deterministic part of quantum mechanics. Unfortunately, it turns out that after the necessary corrections we have done no more than give an easier proof of Gleason's theorem under stronger assumptions. However, for some special cases the proof method gives positive results while using different assumptions from Gleason's. This leads to the conjecture that the proof could be improved to give the same conclusion as Gleason under unitary invariance together with a much weaker functional invariance condition.

Journal ArticleDOI
TL;DR: In this paper, a short geometric proof of the Kochen-Specker no-go theorem for non-contextual hidden variables models is given; the construction contains the original Kochen-Specker construction as well as many others.
Abstract: We give a short geometric proof of the Kochen-Specker no-go theorem for non-contextual hidden variables models. Note added to this version: I understand from Jan-Åke Larsson that the construction we give here actually contains the original Kochen-Specker construction as well as many others (Bell, Conway and Kochen, Schütte, perhaps also Peres).
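The statement being proved, in its standard form (background, not the paper's construction): on a Hilbert space of dimension $d \ge 3$ there is no map $v$ from the one-dimensional projections to $\{0,1\}$ such that for every orthonormal basis $e_1, \ldots, e_d$ exactly one of $v(P_{e_1}), \ldots, v(P_{e_d})$ equals 1. Equivalently, no non-contextual deterministic hidden variables model can assign definite values to all quantum observables consistently with the functional relations between them.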

Posted Content
TL;DR: This work considers the connection with a probabilistic simulation technique called rejection sampling, and poses some natural questions concerning what can be achieved and what cannot be achieved with local (or distributed) rejection sampling.
Abstract: Various local hidden variables models for the singlet correlations exploit the detection loophole, or other loopholes connected with post-selection on coincident arrival times. I consider the connection with a probabilistic simulation technique called rejection sampling, and pose some natural questions concerning what can be achieved and what cannot be achieved with local (or distributed) rejection sampling. In particular, a new and more serious loophole, which we call the coincidence loophole, is introduced.
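As background on the simulation technique named above, here is a generic rejection sampler in Python (a textbook sketch with a toy target density of my own choosing, not the paper's construction); the loopholes discussed arise when, in a distributed version, the accept/reject decision at each measurement station is allowed to depend on local information such as the detector setting.

```python
# Minimal rejection sampler: draw from a target density f using a proposal g with
# f(x) <= M * g(x) for all x. Target here: the triangular density f(x) = 2x on [0, 1];
# proposal: uniform on [0, 1]; envelope constant M = 2.
import random

def target_density(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def rejection_sample(n, M=2.0, seed=0):
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        x = rng.random()                   # candidate from the uniform proposal
        u = rng.random()
        if u * M <= target_density(x):     # accept with probability f(x) / (M * g(x))
            samples.append(x)
    return samples

xs = rejection_sample(10_000)
print("sample mean:", sum(xs) / len(xs))   # close to 2/3, the mean of the target density
```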

Posted Content
TL;DR: In this article, the authors describe quantum tomography as an inverse statistical problem and show how entropy methods can be used to study the behaviour of sieved maximum likelihood estimators; many open problems remain, which the paper aims to bring to the attention of the statistical community.
Abstract: We describe quantum tomography as an inverse statistical problem and show how entropy methods can be used to study the behaviour of sieved maximum likelihood estimators. There remain many open problems, and a main purpose of the paper is to bring these to the attention of the statistical community.
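In symbols (notation mine, summarizing the setup described): the data $x_1, \ldots, x_n$ are i.i.d. with density $p_\rho(x) = \mathrm{tr}\big(\rho\, M(x)\big)$ for a known measurement $M$ and an unknown, typically infinite-dimensional, density matrix $\rho$, and a sieved maximum likelihood estimator restricts the optimization to a growing finite-dimensional family of states $\mathcal{S}_n$:
$$ \hat\rho_n \;=\; \arg\max_{\rho \in \mathcal{S}_n} \; \sum_{i=1}^n \log \mathrm{tr}\big(\rho\, M(x_i)\big), $$
for instance with $\mathcal{S}_n$ the states supported on the first $d(n)$ basis states; entropy bounds for the classes of log-likelihood functions indexed by $\mathcal{S}_n$ are then the tool for controlling consistency and rates of convergence.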

Journal Article
TL;DR: In this article, the authors describe quantum tomography as an inverse statistical problem and show how entropy methods can be used to study the behaviour of sieved maximum likelihood estimators; many open problems remain, which the paper aims to bring to the attention of the statistical community.
Abstract: We describe quantum tomography as an inverse statistical problem and show how entropy methods can be used to study the behaviour of sieved maximum likelihood estimators. There remain many open problems, and a main purpose of the paper is to bring these to the attention of the statistical community.

Journal Article
TL;DR: In this article, the authors discuss three issues connected to Bell's theorem and Bell-CHSH-type experiments: time and the memory loophole, finite statistics (how wide are the error bars, under local realism?), and the question of whether a loophole-free experiment is feasible.

01 Jan 2003
TL;DR: In this paper, the authors consider the connection with a probabilistic simulation technique called rejection sampling, and pose some natural questions concerning what can be achieved and what cannot be achieved with local (or distributed) rejection sampling.
Abstract: Various supposedly local hidden variables models for the singlet correlations exploit the detection loophole, or other loopholes connected with post-selection on coincident arrival times. I consider the connection with a probabilistic simulation technique called rejection sampling, and pose some natural questions concerning what can be achieved and what cannot be achieved with local (or distributed) rejection sampling. Mathematics Subject Classification: 65C50, 81P68, 81P05, 60-08, 62P35. http://arxiv.org/abs/quant-ph/0307217