Author

Umesh Vazirani

Bio: Umesh Vazirani is an academic researcher from the University of California, Berkeley. The author has contributed to research on the topics of quantum computers and quantum algorithms, has an h-index of 58, and has co-authored 157 publications receiving 17,237 citations. Previous affiliations of Umesh Vazirani include Harvard University and the Massachusetts Institute of Technology.


Papers
Book•
01 Jan 1994
TL;DR: Covers the probably approximately correct (PAC) learning model, Occam's razor, the Vapnik-Chervonenkis dimension, weak and strong learning, learning in the presence of noise, inherent unpredictability, reducibility in PAC learning, and learning finite automata.
Abstract: The probably approximately correct learning model; Occam's razor; the Vapnik-Chervonenkis dimension; weak and strong learning; learning in the presence of noise; inherent unpredictability; reducibility in PAC learning; learning finite automata by experimentation; appendix: some tools for probabilistic analysis.

1,765 citations

Journal Article•DOI•
TL;DR: This paper gives the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church--Turing thesis, and proves that $O(\log T)$ bits of precision suffice to support a $T$ step computation.
Abstract: In this paper we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch's model of a quantum Turing machine (QTM) [Proc. Roy. Soc. London Ser. A, 400 (1985), pp. 97--117]. This construction is substantially more complicated than the corresponding construction for classical Turing machines (TMs); in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented and introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that $O(\log T)$ bits of precision suffice to support a $T$ step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first formal evidence that quantum Turing machines violate the modern (complexity theoretic) formulation of the Church--Turing thesis. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus is not in the class BPP. The class BQP of languages that are efficiently decidable (with small error-probability) on a quantum Turing machine satisfies $BPP \subseteq BQP \subseteq P^{\#P}$. Therefore, there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory.
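The oracle separation above grows out of quantum Fourier sampling, whose simplest instance is the Bernstein-Vazirani problem from this paper: given oracle access to $f(x) = s \cdot x \bmod 2$, recover the hidden $n$-bit string $s$; a classical algorithm needs $n$ queries, while a quantum algorithm needs one. Below is a minimal state-vector simulation sketch, assuming Python with numpy; the dense-vector approach is illustrative only, not the paper's formalism.

```python
import numpy as np

def bernstein_vazirani(s_bits):
    """Simulate the one-query circuit for f(x) = s.x mod 2 and recover s."""
    n = len(s_bits)
    s = int("".join(map(str, s_bits)), 2)
    N = 2 ** n
    # H^(x)n |0...0>: the uniform superposition over all n-bit strings.
    state = np.full(N, 1.0 / np.sqrt(N))
    # Single oracle query via phase kickback: |x> -> (-1)^{s.x} |x>.
    signs = np.array([(-1) ** bin(x & s).count("1") for x in range(N)])
    state = signs * state
    # A second layer of Hadamards sends the state exactly to |s>.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = np.eye(1)
    for _ in range(n):
        Hn = np.kron(Hn, H)
    state = Hn @ state
    s_hat = int(np.argmax(np.abs(state)))
    return [int(b) for b in format(s_hat, f"0{n}b")]

print(bernstein_vazirani([1, 0, 1, 1]))  # -> [1, 0, 1, 1], one quantum query
```

After the second Hadamard layer the amplitude of $|s\rangle$ is exactly 1, so the measurement is deterministic; this is the sense in which a single quantum query replaces $n$ classical ones.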

1,706 citations

Journal Article•DOI•
TL;DR: It is proved that relative to an oracle chosen uniformly at random, with probability 1, the class NP cannot be solved on a quantum Turing machine (QTM) in time $o(2^{n/2})$.
Abstract: Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11--20, SIAM J. Comput., 26 (1997), pp. 1277--1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116--123, SIAM J. Comput., 26 (1997), pp. 1340--1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124--134] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of NP can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random, with probability 1, the class NP cannot be solved on a quantum Turing machine (QTM) in time $o(2^{n/2})$. We also show that relative to a permutation oracle chosen uniformly at random, with probability 1, the class $NP \cap coNP$ cannot be solved on a QTM in time $o(2^{n/3})$. The former bound is tight since recent work of Grover [in Proc. 28th Annual ACM Symposium Theory Comput., 1996] shows how to accept the class NP relative to any oracle on a quantum computer in time $O(2^{n/2})$.
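Grover's algorithm, cited above as the matching upper bound, finds a marked item among $N = 2^n$ candidates using roughly $(\pi/4)\sqrt{N}$ oracle calls, alternating an oracle sign-flip with a reflection about the mean amplitude. A small simulation sketch, assuming Python with numpy; the instance size and marked index are illustrative:

```python
import numpy as np

def grover_search(n, marked):
    """Find `marked` among N = 2**n items in O(sqrt(N)) oracle calls."""
    N = 2 ** n
    state = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                 # oracle: flip the marked amplitude
        state = 2 * state.mean() - state    # diffusion: reflect about the mean
    return int(np.argmax(np.abs(state))), iterations

result, queries = grover_search(10, marked=618)
print(result, queries)  # -> 618 after ~25 oracle calls, vs. up to 1024 classically
```

The $\Omega(2^{n/2})$ lower bound of this paper says the query count above cannot be improved by more than a constant factor, relative to a random oracle.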

1,265 citations

Proceedings Article•DOI•
01 Jun 1993
TL;DR: This dissertation proves that relative to an oracle chosen uniformly at random, with probability 1, the class NP cannot be solved on a quantum Turing machine in time $o(2^{n/2})$, and gives evidence suggesting that quantum Turing machines cannot efficiently solve all of NP.
Abstract: In this dissertation we study quantum computation from a complexity theoretic viewpoint. Our first result is the existence of an efficient universal quantum Turing machine in Deutsch's model of a quantum Turing machine. This construction is substantially more complicated than the corresponding construction for classical Turing machines--in fact, even simple primitives such as looping, branching, and composition are not straightforward in the context of quantum Turing machines. We establish how these familiar primitives can be implemented, and also introduce some new, purely quantum mechanical primitives, such as changing the computational basis and carrying out an arbitrary unitary transformation of polynomially bounded dimension. We also consider the precision to which the transition amplitudes of a quantum Turing machine need to be specified. We prove that $O(\log T)$ bits of precision suffice to support a $T$ step computation. This justifies the claim that the quantum Turing machine model should be regarded as a discrete model of computation and not an analog one. We give the first evidence indicating that quantum Turing machines are more powerful than classical probabilistic Turing machines. We show the existence of a problem, relative to an oracle, that can be solved in polynomial time on a quantum Turing machine but requires superpolynomial time on a bounded-error probabilistic Turing machine, and thus is not in the class BPP. In fact, we show that this problem cannot be solved in MA relative to the same oracle, thus showing that even nondeterminism together with randomness is not sufficient to solve the problem in polynomial time. The class BQP, of languages that are efficiently decidable (with small error-probability) on a quantum Turing machine, satisfies $BPP \subseteq BQP \subseteq P^{\#P}$. Therefore there is no possibility of giving a mathematical proof that quantum Turing machines are more powerful than classical probabilistic Turing machines (in the unrelativized setting) unless there is a major breakthrough in complexity theory. We also give evidence suggesting that quantum Turing machines cannot efficiently solve all of NP. Specifically, we prove that relative to an oracle chosen uniformly at random, with probability 1, the class NP cannot be solved on a quantum Turing machine in time $o(2^{n/2})$.

909 citations

Proceedings Article•DOI•
01 Apr 1990
TL;DR: This work applies the general on-line versus off-line approach, previously used for data structures, bin packing, graph coloring, and the k-server problem, to bipartite matching, and shows that a simple randomized on-line algorithm achieves the best possible performance.
Abstract: There has been a great deal of interest recently in the relative power of on-line and off-line algorithms. An on-line algorithm receives a sequence of requests and must respond to each request as soon as it is received. An off-line algorithm may wait until all requests have been received before determining its responses. One approach to evaluating an on-line algorithm is to compare its performance with that of the best possible off-line algorithm for the same problem. Thus, given a measure of "profit", the performance of an on-line algorithm can be measured by the worst-case ratio of its profit to that of the optimal off-line algorithm. This general approach has been applied in a number of contexts, including data structures [SITa], bin packing [CoGaJo], graph coloring [GyLe] and the k-server problem [MaMcSI]. Here we apply it to bipartite matching and show that a simple randomized on-line algorithm achieves the best possible performance.
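The randomized algorithm referred to here is commonly known as RANKING: fix a uniformly random ranking of the offline vertices once, then match each arriving vertex to its highest-ranked unmatched neighbor; in expectation it matches a $1 - 1/e$ fraction of what the optimal off-line matching achieves. A minimal sketch, assuming Python; the instance below is a made-up illustration:

```python
import random

def ranking_online_matching(offline, arrivals):
    """RANKING: random permutation of the offline side, then greedy by rank."""
    # Smaller rank index = higher priority; drawn once, up front.
    rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
    matched = {}                              # online vertex -> offline vertex
    for u, neighbors in arrivals:             # online vertices, in arrival order
        free = [v for v in neighbors if v not in matched.values()]
        if free:
            matched[u] = min(free, key=rank.get)  # highest-ranked free neighbor
    return matched

# Hypothetical instance: online vertices a, b, c arrive with these neighbors.
arrivals = [("a", ["x", "y"]), ("b", ["x"]), ("c", ["y", "z"])]
print(ranking_online_matching(["x", "y", "z"], arrivals))
```

Note that the only randomness is the single permutation chosen before any request arrives; each arrival is then handled deterministically and irrevocably, which is what makes the $1 - 1/e$ guarantee nontrivial.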

807 citations


Cited by
Journal Article•DOI•
Yoav Freund, Robert E. Schapire
01 Aug 1997
TL;DR: The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
Abstract: In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in $R^n$. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.
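The multiplicative weight-update rule described in the first part is the Hedge algorithm: keep a weight per expert, play the normalized weights as a distribution, and shrink each weight by $\beta^{\ell}$ after it incurs loss $\ell \in [0,1]$, so cumulative loss provably stays close to that of the best single expert in hindsight. A minimal sketch, assuming Python with numpy; the loss sequence and the value of $\beta$ are illustrative:

```python
import numpy as np

def hedge(loss_rounds, beta=0.9):
    """Hedge: multiplicative weights over experts, w_i <- w_i * beta**loss_i."""
    n = len(loss_rounds[0])
    w = np.ones(n)                        # start with uniform weights
    total = 0.0
    for losses in loss_rounds:            # each losses[i] lies in [0, 1]
        p = w / w.sum()                   # distribution played this round
        total += float(p @ np.asarray(losses))
        w *= beta ** np.asarray(losses)   # multiplicative update
    return total, w

# Hypothetical run: expert 0 is usually right, expert 1 usually wrong.
rounds = [[0.0, 1.0], [0.1, 0.9], [0.0, 1.0]] * 50
print(hedge(rounds))  # cumulative loss tracks the better expert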

15,813 citations

01 Dec 2010
TL;DR: This book discusses quantum information theory, public-key cryptography and the RSA cryptosystem, and the proof of Lieb's theorem.
Abstract: Part I. Fundamental Concepts: 1. Introduction and overview 2. Introduction to quantum mechanics 3. Introduction to computer science Part II. Quantum Computation: 4. Quantum circuits 5. The quantum Fourier transform and its application 6. Quantum search algorithms 7. Quantum computers: physical realization Part III. Quantum Information: 8. Quantum noise and quantum operations 9. Distance measures for quantum information 10. Quantum error-correction 11. Entropy and information 12. Quantum information theory Appendices References Index.

14,825 citations

Book•
01 Jan 1996
TL;DR: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols.
Abstract: From the Publisher: A valuable reference for the novice as well as for the expert who needs a wider scope of coverage within the area of cryptography, this book provides easy and rapid access of information and includes more than 200 algorithms and protocols; more than 200 tables and figures; more than 1,000 numbered definitions, facts, examples, notes, and remarks; and over 1,250 significant references, including brief comments on each paper.

13,597 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented in this book, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Book•
24 Aug 2012
TL;DR: This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Abstract: Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.

8,059 citations