
Showing papers by "Michael A. Nielsen published in 2010"


01 Dec 2010
TL;DR: This book covers quantum information theory, public-key cryptography and the RSA cryptosystem, and the proof of Lieb's theorem.
Abstract: Part I. Fundamental Concepts: 1. Introduction and overview; 2. Introduction to quantum mechanics; 3. Introduction to computer science. Part II. Quantum Computation: 4. Quantum circuits; 5. The quantum Fourier transform and its applications; 6. Quantum search algorithms; 7. Quantum computers: physical realization. Part III. Quantum Information: 8. Quantum noise and quantum operations; 9. Distance measures for quantum information; 10. Quantum error-correction; 11. Entropy and information; 12. Quantum information theory. Appendices. References. Index.

14,825 citations


Book ChapterDOI
01 Jan 2010
TL;DR: This chapter reviews the basic definitions and properties of entropy in both classical and quantum information theory and discusses Shannon entropy, a key concept of classical information theory.
Abstract: Entropy is a key concept of quantum information theory. It measures how much uncertainty there is in the state of a physical system. In this chapter we review the basic definitions and properties of entropy in both classical and quantum information theory. In places the chapter contains rather detailed and lengthy mathematical arguments. On a first reading these sections may be read lightly and returned to later for reference purposes. The key concept of classical information theory is the Shannon entropy. Suppose we learn the value of a random variable X. The Shannon entropy of X quantifies how much information we gain, on average, when we learn the value of X. An alternative view is that the entropy of X measures the amount of uncertainty about X before we learn its value. These two views are complementary; we can view the entropy either as a measure of our uncertainty before we learn the value of X, or as a measure of how much information we have gained after we learn the value of X. Intuitively, the information content of a random variable should not depend on the labels attached to the different values that may be taken by the random variable. For example, we expect that a random variable taking the values ‘heads’ and ‘tails’ with respective probabilities ¼ and ¾ contains the same amount of information as a random variable that takes the values 0 and 1 with respective probabilities ¼ and ¾.
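The heads/tails example above is easy to check in code. A minimal sketch (the helper name is hypothetical) computes the Shannon entropy and confirms that relabeling the outcomes leaves the information content unchanged:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum_x p(x) log2 p(x), in bits; terms with p = 0 contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The same distribution under two different labelings: the labels
# ('heads'/'tails' versus 0/1) do not affect the entropy.
coin = {"heads": 0.25, "tails": 0.75}
bit = {0: 0.25, 1: 0.75}

h_coin = shannon_entropy(coin.values())
h_bit = shannon_entropy(bit.values())
assert h_coin == h_bit  # about 0.811 bits
```

A uniform bit, by contrast, carries exactly 1 bit of entropy, the maximum for a two-outcome variable.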

57 citations



Book ChapterDOI
01 Jan 2010
TL;DR: Instead of looking at quantum systems purely as phenomena to be explained as they are found in nature, the pioneers of the field looked at them as systems that can be designed: a small change in perspective, but one with profound implications.
Abstract: Quantum mechanics has the curious distinction of being simultaneously the most successful and the most mysterious of our scientific theories. It was developed in fits and starts over a remarkable period from 1900 to the 1920s, maturing into its current form in the late 1920s. In the decades following the 1920s, physicists had great success applying quantum mechanics to understand the fundamental particles and forces of nature, culminating in the development of the standard model of particle physics. Over the same period, physicists had equally great success in applying quantum mechanics to understand an astonishing range of phenomena in our world, from polymers to semiconductors, from superfluids to superconductors. But, while these developments profoundly advanced our understanding of the natural world, they did only a little to improve our understanding of quantum mechanics. This began to change in the 1970s and 1980s, when a few pioneers were inspired to ask whether some of the fundamental questions of computer science and information theory could be applied to the study of quantum systems. Instead of looking at quantum systems purely as phenomena to be explained as they are found in nature, they looked at them as systems that can be designed. This seems a small change in perspective, but the implications are profound. No longer is the quantum world taken merely as presented, but instead it can be created.

13 citations


Book ChapterDOI
01 Dec 2010
TL;DR: This chapter explores some of the guiding principles and model systems for the physical implementation of quantum information processing devices, and elaborates a set of conditions sufficient for the experimental realization of quantum computation in Section 7.2.
Abstract: Computers in the future may weigh no more than 1.5 tons. – Popular Mechanics, forecasting the relentless march of science, 1949 I think there is a world market for maybe five computers. – Thomas Watson, chairman of IBM, 1943 Quantum computation and quantum information is a field of fundamental interest because we believe quantum information processing machines can actually be realized in Nature. Otherwise, the field would be just a mathematical curiosity! Nevertheless, experimental realization of quantum circuits, algorithms, and communication systems has proven extremely challenging. In this chapter we explore some of the guiding principles and model systems for physical implementation of quantum information processing devices and systems. We begin in Section 7.1 with an overview of the tradeoffs in selecting a physical realization of a quantum computer. This discussion provides perspective for an elaboration of a set of conditions sufficient for the experimental realization of quantum computation in Section 7.2. These conditions are illustrated in Sections 7.3 through 7.7, through a series of case studies, which consider five different model physical systems: the simple harmonic oscillator, photons and nonlinear optical media, cavity quantum electrodynamics devices, ion traps, and nuclear magnetic resonance with molecules. For each system, we briefly describe the physical apparatus, the Hamiltonian which governs its dynamics, means for controlling the system to perform quantum computation, and its principal drawbacks.

9 citations


Book ChapterDOI
01 Jan 2010
TL;DR: The Polymath Project as discussed by the authors was an experiment in what Gowers termed "massively collaborative mathematics" where a large number of mathematicians would contribute, and their collective intelligence would make easy work of what would ordinarily be a difficult problem.
Abstract: At first appearance, the paper which follows this essay [7] appears to be a typical mathematical paper. It poses and partially answers several combinatorial questions, and follows the standard forms of mathematical discourse, with theorems, proofs, conjectures, and so on. Appearances are deceiving, however, for the paper has an unusual origin, a clue to which is in the name of the author, one D. H. J. Polymath. Behind this unusual name is a bold experiment in how mathematics is done. This experiment was initiated in January of 2009 by W. Timothy Gowers [5], and was an experiment in what Gowers termed “massively collaborative mathematics”. The idea, in brief, was to attempt to solve a mathematical research problem working entirely in the open, using Gowers’s blog as a medium for mathematical collaboration. The hope was that a large number of mathematicians would contribute, and that their collective intelligence would make easy work of what would ordinarily be a difficult problem. Gowers dubbed the project the “Polymath Project”. In this essay I describe how the Polymath Project proceeded, and reflect on similarities to online collaborations in the open source and open science communities. Although I followed the Polymath Project closely, my background is in theoretical physics, not combinatorics, and so I did not participate directly in the mathematical discussions. The perspective is that of an interested outsider, one whose main creative interests are in open science and collective intelligence.

6 citations


Book ChapterDOI
01 Jan 2010
TL;DR: This chapter describes the quantum operations formalism, a set of tools for describing quantum noise and the behavior of open quantum systems; this is a central topic of the third part of the book.
Abstract: Until now we have dealt almost solely with the dynamics of closed quantum systems, that is, with quantum systems that do not suffer any unwanted interactions with the outside world. Although fascinating conclusions can be drawn about the information processing tasks which may be accomplished in principle in such ideal systems, these observations are tempered by the fact that in the real world there are no perfectly closed systems, except perhaps the universe as a whole. Real systems suffer from unwanted interactions with the outside world. These unwanted interactions show up as noise in quantum information processing systems. We need to understand and control such noise processes in order to build useful quantum information processing systems. This is a central topic of the third part of this book, which begins in this chapter with the description of the quantum operations formalism, a powerful set of tools enabling us to describe quantum noise and the behavior of open quantum systems. What is the distinction between an open and a closed system? A swinging pendulum like that found in some mechanical clocks can be a nearly ideal closed system. A pendulum interacts only very slightly with the rest of the world – its environment – mainly through friction. However, to properly describe the full dynamics of the pendulum and why it eventually ceases to move one must take into account the damping effects of air friction and imperfections in the suspension mechanism of the pendulum.
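The quantum operations formalism mentioned above represents a noise process by Kraus operators {E_k}, mapping a density matrix ρ to Σ_k E_k ρ E_k†. A minimal sketch with plain 2×2 matrices, using amplitude damping as an illustrative channel (chosen here for concreteness, not drawn from this chapter's specific examples):

```python
import math

def mul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    """Conjugate transpose of a 2x2 matrix."""
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def apply_channel(kraus_ops, rho):
    """Quantum operation: rho -> sum_k E_k rho E_k^dagger."""
    out = [[0j, 0j], [0j, 0j]]
    for e in kraus_ops:
        t = mul(mul(e, rho), dagger(e))
        out = [[out[i][j] + t[i][j] for j in range(2)] for i in range(2)]
    return out

# Amplitude damping: the excited state |1> decays with probability gamma.
gamma = 0.3
e0 = [[1.0, 0.0], [0.0, math.sqrt(1 - gamma)]]
e1 = [[0.0, math.sqrt(gamma)], [0.0, 0.0]]

rho = [[0.0, 0.0], [0.0, 1.0]]          # the excited state |1><1|
rho_out = apply_channel([e0, e1], rho)  # population 0.3 in |0>, 0.7 in |1>
```

Since the operators satisfy Σ_k E_k† E_k = I, the map is trace-preserving: the output is again a valid density matrix.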

5 citations


Book ChapterDOI
01 Dec 2010
TL;DR: In this article, it was shown that any single qubit gate can be approximated to arbitrary accuracy using a finite set of gates, such as the controlled-not gate, Hadamard gate H, phase gate S, and π/8 gate.
Abstract: In Chapter 4 we showed that an arbitrary unitary operation U may be implemented on a quantum computer using a circuit consisting of single qubit and controlled-not gates. Such universality results are important because they ensure the equivalence of apparently different models of quantum computation. For example, the universality results ensure that a quantum computer programmer may design quantum circuits containing gates which have four input and output qubits, confident that such gates can be simulated by a constant number of controlled-not and single qubit unitary gates. An unsatisfactory aspect of the universality of controlled-not and single qubit unitary gates is that the single qubit gates form a continuum, while the methods for fault-tolerant quantum computation described in Chapter 10 work only for a discrete set of gates. Fortunately, also in Chapter 4 we saw that any single qubit gate may be approximated to arbitrary accuracy using a finite set of gates, such as the controlled-not gate, Hadamard gate H, phase gate S, and π/8 gate. We also gave a heuristic argument that approximating the chosen single qubit gate to an accuracy ε required only Θ(1/ε) gates chosen from the finite set. Furthermore, in Chapter 10 we showed that the controlled-not, Hadamard, phase and π/8 gates may be implemented in a fault-tolerant manner.
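The single-qubit gates in this finite set can be written down directly as 2×2 complex matrices. A small sketch verifying some standard identities among them, for example that the π/8 gate squares to the phase gate:

```python
import cmath
import math

s2 = 1 / math.sqrt(2)
H = [[s2, s2], [s2, -s2]]                       # Hadamard gate
S = [[1, 0], [0, 1j]]                           # phase gate
T = [[1, 0], [0, cmath.exp(1j * math.pi / 4)]]  # pi/8 gate
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

def mul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, tol=1e-12):
    """Entrywise comparison up to floating-point error."""
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

assert close(mul(T, T), S)          # T^2 = S
assert close(mul(S, S), Z)          # S^2 = Z
assert close(mul(mul(H, Z), H), X)  # H Z H = X
```

These identities illustrate why the set is redundant but convenient: S is included for fault-tolerance constructions even though it is just T squared.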

5 citations


Book ChapterDOI
01 Dec 2010
TL;DR: The purpose of this chapter is the development of distance measures giving quantitative answers to questions central to a theory of quantum information processing; two such measures, the trace distance and the fidelity, are discussed in detail.
Abstract: What does it mean to say that two items of information are similar? What does it mean to say that information is preserved by some process? These questions are central to a theory of quantum information processing, and the purpose of this chapter is the development of distance measures giving quantitative answers to these questions. Motivated by our two questions we will be concerned with two broad classes of distance measures, static measures and dynamic measures. Static measures quantify how close two quantum states are, while dynamic measures quantify how well information has been preserved during a dynamic process. The strategy we take is to begin by developing good static measures of distance, and then to use those static measures as the basis for the development of dynamic measures of distance. There is a certain arbitrariness in the way distance measures are defined, both classically and quantum mechanically, and the community of people studying quantum computation and quantum information has found it convenient to use a variety of distance measures over the years. Two of those measures, the trace distance and the fidelity, have particularly wide currency today, and we discuss both these measures in detail in this chapter. For the most part the properties of both are quite similar; however, for certain applications one may be easier to deal with than the other. It is for this reason, and because both are widely used within the quantum computation and quantum information community, that we discuss both measures.
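For commuting (simultaneously diagonalizable) states the two measures reduce to their classical forms over the eigenvalue distributions, which gives a quick sketch; the two distributions below are hypothetical examples:

```python
import math

def trace_distance(p, q):
    """Classical trace distance D(p, q) = (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def fidelity(p, q):
    """Classical fidelity F(p, q) = sum_i sqrt(p_i * q_i)."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

p = [0.25, 0.75]
q = [0.50, 0.50]

D = trace_distance(p, q)  # 0.25
F = fidelity(p, q)

# Identical states are at distance 0 and fidelity 1, and the two measures
# constrain each other: D <= sqrt(1 - F^2) (a Fuchs-van de Graaff bound).
assert trace_distance(p, p) == 0.0
assert abs(fidelity(p, p) - 1.0) < 1e-12
assert D <= math.sqrt(1 - F * F) + 1e-12
```

The final inequality is one reason the two measures are interchangeable for many purposes: closeness in one implies closeness in the other.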

4 citations




Journal ArticleDOI
TL;DR: An erratum to the article "More Really is Different" by Gu et al.

Book ChapterDOI
01 Jan 2010
TL;DR: This chapter explains how to do quantum information processing reliably in the presence of noise, and develops the basic theory of quantum error-correcting codes, which protect quantum information against noise.
Abstract: We have learned that it is possible to fight entanglement with entanglement. – John Preskill To be an Error and to be Cast out is part of God's Design – William Blake This chapter explains how to do quantum information processing reliably in the presence of noise. The chapter covers three broad topics: the basic theory of quantum error-correcting codes, fault-tolerant quantum computation, and the threshold theorem. We begin by developing the basic theory of quantum error-correcting codes, which protect quantum information against noise. These codes work by encoding quantum states in a special way that makes them resilient against the effects of noise, and then decoding when the original state is to be recovered. Section 10.1 explains the basic ideas of classical error-correction, and some of the conceptual challenges that must be overcome to make quantum error-correction possible. Section 10.2 explains a simple example of a quantum error-correcting code, which we then generalize into a theory of quantum error-correcting codes in Section 10.3. Section 10.4 explains some ideas from the classical theory of linear codes, and how they give rise to an interesting class of quantum codes known as Calderbank–Shor–Steane (CSS) codes. Section 10.5 concludes our introductory survey of quantum error-correcting codes with a discussion of stabilizer codes, a richly structured class of codes with a close connection to classical error-correcting codes.
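The classical ideas of Section 10.1 can be sketched with the three-bit repetition code, which protects one bit by majority voting; this is the classical intuition behind the simple quantum code, not the quantum code itself, and the noise rate and trial count below are illustrative:

```python
import random

def encode(bit):
    """Repetition code: one logical bit -> three physical bits."""
    return [bit, bit, bit]

def bit_flip_noise(codeword, p, rng):
    """Flip each physical bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword):
    """Majority vote: recovers the logical bit if at most one flip occurred."""
    return int(sum(codeword) >= 2)

rng = random.Random(1)
p, trials = 0.1, 200_000
failures = sum(
    decode(bit_flip_noise(encode(0), p, rng)) != 0 for _ in range(trials)
)
logical_rate = failures / trials
# Theory: decoding fails only on two or more flips, with probability
# 3*p^2*(1-p) + p^3 = 0.028, well below the bare error rate p = 0.1.
```

Encoding thus trades more physical bits for a lower logical error rate whenever p < 1/2, which is the pattern the quantum codes of this chapter generalize.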



Book ChapterDOI
01 Jan 2010
TL;DR: This chapter outlines the modern theory of algorithms developed by computer science, and focuses on the fundamental model for algorithms, the Turing machine: an idealized computer, rather like a modern personal computer, but with a simpler set of basic instructions and an idealized unbounded memory.
Abstract: In natural science, Nature has given us a world and we're just to discover its laws. In computers, we can stuff laws into it and create a world. – Alan Kay Our field is still in its embryonic stage. It's great that we haven't been around for 2000 years. We are still at a stage where very, very important results occur in front of our eyes. – Michael Rabin, on computer science Algorithms are the key concept of computer science. An algorithm is a precise recipe for performing some task, such as the elementary algorithm for adding two numbers which we all learn as children. This chapter outlines the modern theory of algorithms developed by computer science. Our fundamental model for algorithms will be the Turing machine. This is an idealized computer, rather like a modern personal computer, but with a simpler set of basic instructions, and an idealized unbounded memory. The apparent simplicity of Turing machines is misleading; they are very powerful devices. We will see that they can be used to execute any algorithm whatsoever, even one running on an apparently much more powerful computer. The fundamental question we are trying to address in the study of algorithms is: what resources are required to perform a given computational task? This question splits up naturally into two parts. First, we'd like to understand what computational tasks are possible, preferably by giving explicit algorithms for solving specific problems.
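The Turing machine model described above is simple enough to simulate in a few lines. A minimal interpreter (the encoding is a hypothetical choice: transitions map a (state, symbol) pair to (new state, symbol to write, head move)) running a toy machine that complements a binary string:

```python
def run_turing_machine(transitions, tape, state="start", blank=" ",
                       max_steps=10_000):
    """Simulate a one-tape Turing machine; returns the tape when it halts.

    transitions: dict mapping (state, symbol) -> (new_state, write, move),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    cells = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# A toy machine: flip every bit, then halt on reaching the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", " "): ("halt", " ", 0),
}
out = run_turing_machine(flip, "0110")  # "1001"
```

The instruction set is tiny (read, write, move, change state), yet by the Church-Turing thesis this model captures every algorithm whatsoever.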

Book ChapterDOI
01 Dec 2010
TL;DR: This chapter provides all the necessary background knowledge of quantum mechanics needed for a thorough grasp of quantum computation and quantum information; readers with some familiarity with elementary linear algebra can begin working out simple problems in a few hours, even with no prior knowledge of the subject.
Abstract: I ain't no physicist but I know what matters. – Popeye the Sailor Quantum mechanics: Real Black Magic Calculus – Albert Einstein Quantum mechanics is the most accurate and complete description of the world known. It is also the basis for an understanding of quantum computation and quantum information. This chapter provides all the necessary background knowledge of quantum mechanics needed for a thorough grasp of quantum computation and quantum information. No prior knowledge of quantum mechanics is assumed. Quantum mechanics is easy to learn, despite its reputation as a difficult subject. The reputation comes from the difficulty of some applications, like understanding the structure of complicated molecules, which aren't fundamental to a grasp of the subject; we won't be discussing such applications. The only prerequisite for understanding is some familiarity with elementary linear algebra. Provided you have this background you can begin working out simple problems in a few hours, even with no prior knowledge of the subject. Readers already familiar with quantum mechanics can quickly skim through this chapter, to become familiar with our (mostly standard) notational conventions, and to assure themselves of familiarity with all the material. Readers with little or no prior knowledge should work through the chapter in detail, pausing to attempt the exercises. If you have difficulty with an exercise, move on, and return later to make another attempt.

Book ChapterDOI
01 Jan 2010
TL;DR: A quantum computer can factor a number exponentially faster than the best known classical algorithms; this raises the question of what other problems can be solved efficiently on a quantum computer yet are infeasible on a classical computer.
Abstract: If computers that you build are quantum, Then spies everywhere will all want 'em. Our codes will all fail, And they'll read our email, Till we get crypto that's quantum, and daunt 'em. – Jennifer and Peter Shor To read our E-mail, how mean of the spies and their quantum machine; be comforted though, they do not yet know how to factorize twelve or fifteen. – Volker Strassen Computer programming is an art form, like the creation of poetry or music. – Donald Knuth The most spectacular discovery in quantum computing to date is that quantum computers can efficiently perform some tasks which are not feasible on a classical computer. For example, finding the prime factorization of an n-bit integer is thought to require exp(Θ(n^(1/3) log^(2/3) n)) operations using the best classical algorithm known at the time of writing, the so-called number field sieve. This is exponential in the size of the number being factored, so factoring is generally considered to be an intractable problem on a classical computer: it quickly becomes impossible to factor even modest numbers. In contrast, a quantum algorithm can accomplish the same task using O(n^2 log n log log n) operations. That is, a quantum computer can factor a number exponentially faster than the best known classical algorithms. This result is important in its own right, but perhaps the most exciting aspect is the question it raises: what other problems can be done efficiently on a quantum computer which are infeasible on a classical computer?
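The two scaling formulas quoted above can be compared numerically. Constants and lower-order corrections are omitted (the constant c below is illustrative), so only the shape of the gap is meaningful, not the absolute operation counts:

```python
import math

def classical_ops(n, c=1.9):
    """Number-field-sieve scaling exp(c * n^(1/3) * (ln n)^(2/3)) for an
    n-bit integer; c and any polynomial prefactors are illustrative."""
    return math.exp(c * n ** (1 / 3) * math.log(n) ** (2 / 3))

def quantum_ops(n):
    """Quantum factoring scaling n^2 * log n * log log n (constants omitted)."""
    return n ** 2 * math.log(n) * math.log(math.log(n))

# The ratio grows super-polynomially as the key size increases.
ratios = {n: classical_ops(n) / quantum_ops(n) for n in (256, 512, 1024)}
```

Because one formula is exponential in n^(1/3) and the other polynomial in n, no constant factor can close the gap at large n; doubling the bit length multiplies the classical cost far more than the quantum cost.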


Patent
29 Jan 2010
TL;DR: An oil separator is configured with a core of parallel lengths of pipe (4); when an oil-containing agent is conveyed through the channels (3) in the pipes (4), the oil molecules settle as an oil coating on the channel walls.
Abstract: When, according to the invention, an oil separator (1) is configured with a core of lengths of pipe (4) extending in parallel, and an oil-containing agent (5) is conveyed through these channels (3) in the pipes (4), the oil molecules will settle as an oil coating (6) on the channel walls. At some point, the pressure of the medium will increase as the flow is reduced, whereby the oil will be pressed out of the pipes (4). Then, upon settling, the oil separation will be resumed, and the separated oil (6) may be collected, while the agent (12) cleaned of oil may be reused and, following reuse, be fed back to the cleaning system and its oil separator.