Author

Reevu Maity

Bio: Reevu Maity is an academic researcher from the University of Oxford. The author has contributed to research in topics including AdaBoost and unitary transformations, has an h-index of 5, and has co-authored 6 publications receiving 83 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors show that the speed of multiqubit systems can be evaluated by measuring a set of local observables, providing exponential advantage with respect to state tomography.
Abstract: Important properties of a quantum system are not directly measurable, but they can be disclosed by how fast the system changes under controlled perturbations. In particular, asymmetry and entanglement can be verified by reconstructing the state of a quantum system. Yet, this usually requires experimental and computational resources which increase exponentially with the system size. Here we show how to detect metrologically useful asymmetry and entanglement by a limited number of measurements. This is achieved by studying how they affect the speed of evolution of a system under a unitary transformation. We show that the speed of multiqubit systems can be evaluated by measuring a set of local observables, providing exponential advantage with respect to state tomography. Indeed, the presented method requires neither the knowledge of the state and the parameter-encoding Hamiltonian nor global measurements performed on all the constituent subsystems. We implement the detection scheme in an all-optical experiment.
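For context (a standard relation from quantum metrology, not stated in the abstract): with the Uhlmann fidelity $F(\rho,\sigma)=\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}$, the rate at which a state loses fidelity under a parameter-encoding unitary is set by the quantum Fisher information $F_Q$, and for collective rotations of $N$ qubits every separable state obeys $F_Q \le N$, so a faster-than-separable evolution witnesses entanglement:

\[
F\!\left(\rho,\; e^{-i\theta H}\rho\, e^{i\theta H}\right) \simeq 1 - \frac{F_Q[\rho,H]}{8}\,\theta^{2},
\qquad
H = \tfrac{1}{2}\sum_{k=1}^{N}\sigma_{z}^{(k)},
\qquad
F_Q[\rho_{\mathrm{sep}},H] \le N .
\]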

56 citations

Posted Content
TL;DR: Numerical evidence is provided that, despite the non-convex nature of the loss landscape, gradient descent always converges to the target unitary when the sequence contains $d^2$ or more parameters.
Abstract: We study the hardness of learning unitary transformations in $U(d)$ via gradient descent on time parameters of alternating operator sequences. We provide numerical evidence that, despite the non-convex nature of the loss landscape, gradient descent always converges to the target unitary when the sequence contains $d^2$ or more parameters. Rates of convergence indicate a "computational phase transition." With less than $d^2$ parameters, gradient descent converges to a sub-optimal solution, whereas with more than $d^2$ parameters, gradient descent converges exponentially to an optimal solution.
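To make the setting concrete, the following minimal sketch (Python/NumPy, not the authors' code) runs gradient descent on the time parameters of an alternating operator sequence $e^{-iAt_1}e^{-iBt_2}\cdots$ built from two fixed random Hermitian generators, minimizing the infidelity to a random target unitary; with $2d^2$ parameters it sits in the over-parametrized regime where the paper reports convergence. The generators $A$, $B$, the learning rate, and the finite-difference gradient are illustrative assumptions.

import numpy as np
from scipy.linalg import expm
from scipy.stats import unitary_group

rng = np.random.default_rng(0)
d = 2                                   # Hilbert-space dimension
n_params = 2 * d**2                     # more than d^2 time parameters
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)); A = (A + A.conj().T) / 2
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)); B = (B + B.conj().T) / 2
U_target = unitary_group.rvs(d, random_state=0)

def sequence(t):
    # alternating sequence exp(-iA t_1) exp(-iB t_2) exp(-iA t_3) ...
    V = np.eye(d, dtype=complex)
    for k, tk in enumerate(t):
        H = A if k % 2 == 0 else B
        V = expm(-1j * H * tk) @ V
    return V

def infidelity(t):
    # 1 - |Tr(U_target^dag V)|^2 / d^2; zero iff V equals U_target up to a global phase
    return 1.0 - abs(np.trace(U_target.conj().T @ sequence(t)) / d) ** 2

t = rng.normal(scale=0.1, size=n_params)
eta, eps = 0.5, 1e-6                    # learning rate, finite-difference step
for step in range(2000):
    grad = np.array([(infidelity(t + eps * e) - infidelity(t - eps * e)) / (2 * eps)
                     for e in np.eye(n_params)])
    t -= eta * grad
    if infidelity(t) < 1e-8:
        break
print("steps:", step, "final infidelity:", infidelity(t))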

35 citations

Proceedings Article
12 Jul 2020
TL;DR: In this paper, the authors show how quantum techniques can improve the time complexity of classical AdaBoost, which is a technique that converts a weak and inaccurate machine learning algorithm into a strong accurate learning algorithm.
Abstract: Suppose we have a weak learning algorithm $\mathcal{A}$ for a Boolean-valued problem: $\mathcal{A}$ produces hypotheses whose bias $\gamma$ is small, only slightly better than random guessing (this could, for instance, be due to implementing $\mathcal{A}$ on a noisy device). Can we boost the performance of $\mathcal{A}$ so that its output is correct on $2/3$ of the inputs? Boosting is a technique that converts a weak and inaccurate machine learning algorithm into a strong, accurate learning algorithm. The AdaBoost algorithm by Freund and Schapire (for which they were awarded the G\"odel Prize in 2003) is one of the most widely used boosting algorithms, with many applications in theory and practice. Given a $\gamma$-weak learner for a Boolean concept class $C$ that takes time $R(C)$, the time complexity of AdaBoost scales as $VC(C)\cdot poly(R(C), 1/\gamma)$, where $VC(C)$ is the VC dimension of $C$. In this paper, we show how quantum techniques can improve the time complexity of classical AdaBoost. To this end, given a $\gamma$-weak quantum learner for a Boolean concept class $C$ that takes time $Q(C)$, we introduce a quantum boosting algorithm whose complexity scales as $\sqrt{VC(C)}\cdot poly(Q(C),1/\gamma)$, thereby achieving a quadratic quantum improvement over classical AdaBoost in terms of $VC(C)$.
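For reference, the classical routine being sped up looks like the following minimal AdaBoost sketch (Python/NumPy; the decision-stump weak learner and the toy data set are illustrative choices, not from the paper). Each round reweights the distribution over examples so the next weak hypothesis concentrates on previously misclassified points.

import numpy as np

def stump_learner(X, y, w):
    # Weak learner: best threshold stump under example weights w (labels in {-1, +1}).
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best                                   # (weighted error, feature, threshold, sign)

def adaboost(X, y, T=25):
    n = len(y)
    w = np.full(n, 1.0 / n)                       # distribution over examples
    ensemble = []
    for _ in range(T):
        err, j, thr, sign = stump_learner(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)     # hypothesis weight
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)

# toy usage: two overlapping Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(1.5, 1.0, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)
clf = adaboost(X, y)
print("training accuracy:", np.mean(predict(clf, X) == y))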

9 citations

Posted Content
TL;DR: In this article, the authors derive upper bounds on the minimum time required to implement a desired unitary transformation on a $d$-dimensional Hilbert space using sequences of Hamiltonian evolutions, and show that the best possible scaling in time, $O(d^2)$, can be achieved.
Abstract: Quantum computation and quantum control operate by building unitary transformations out of sequences of elementary quantum logic operations or applications of control fields. This paper puts upper bounds on the minimum time required to implement a desired unitary transformation on a d-dimensional Hilbert space when applying sequences of Hamiltonian transformations. We show that strategies of building up a desired unitary out of non-infinitesimal and infinitesimal unitaries, or equivalently, using power and band limited controls, can yield the best possible scaling in time $O(d^2)$.
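The $O(d^2)$ scaling matches a simple dimension count (a standard observation added here for context, not a claim taken from the abstract): the unitary group on a $d$-dimensional Hilbert space is a $d^2$-dimensional manifold, so reaching a generic target with bounded-strength (power- and band-limited) controls takes on the order of $d^2$ independent control segments:

\[
\dim U(d) = d^{2},
\qquad
T_{\min}\!\left(U_{\mathrm{target}}\right) = O\!\left(d^{2}\right)\ \text{with power- and band-limited controls.}
\]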

9 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors discuss and review the development of this rapidly growing research field that encompasses the characterization, quantification, manipulation, dynamical evolution, and operational application of quantum coherence.
Abstract: The coherent superposition of states, in combination with the quantization of observables, represents one of the most fundamental features that mark the departure of quantum mechanics from the classical realm. Quantum coherence in many-body systems embodies the essence of entanglement and is an essential ingredient for a plethora of physical phenomena in quantum optics, quantum information, solid state physics, and nanoscale thermodynamics. In recent years, research on the presence and functional role of quantum coherence in biological systems has also attracted a considerable interest. Despite the fundamental importance of quantum coherence, the development of a rigorous theory of quantum coherence as a physical resource has only been initiated recently. In this Colloquium we discuss and review the development of this rapidly growing research field that encompasses the characterization, quantification, manipulation, dynamical evolution, and operational application of quantum coherence.
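As one concrete instance of the quantification such a resource theory provides (a standard measure, not specific to this Colloquium), the $\ell_1$-norm of coherence of a state $\rho$ in a fixed reference basis $\{|i\rangle\}$ sums the moduli of its off-diagonal entries and vanishes exactly on incoherent (diagonal) states:

\[
C_{\ell_1}(\rho) \;=\; \sum_{i \neq j} \big|\langle i|\rho|j\rangle\big| .
\]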

1,392 citations

Journal ArticleDOI
TL;DR: In this article, the authors thank Marta Paczyńska for creating the visual representation of Einstein's gedankenexperiment (Fig. 1) and Lu (Lucy) Hou for providing the resources for Fig. 4; the work was supported by the U.S. National Science Foundation under Grant No. CHE-1648973.
Abstract: We are grateful to Marta Paczyńska for creating the visual representation of Einstein's gedankenexperiment, Fig. 1, and Lu (Lucy) Hou for providing the resources for Fig. 4. SD would like to thank Eric Lutz for many years of insightful discussions and supporting mentorship, and in particular for inciting our interest in quantum speed limits. This work was supported by the U.S. National Science Foundation under Grant No. CHE-1648973.

386 citations

Journal Article
TL;DR: This book offers an accessible introduction and essential reference for an approach to machine learning that creates highly accurate prediction rules by combining many weak and inaccurate rules of thumb.
Abstract: Boosting: Foundations and Algorithms, by Robert E. Schapire and Yoav Freund (The MIT Press, Adaptive Computation and Machine Learning series, 544 pages, ISBN-13 978-0262310413). Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate rules of thumb. The book offers an accessible introduction and essential reference for this approach; a remarkably rich theory has evolved around boosting, with connections to a range of topics including statistics, game theory, convex optimization, and information geometry.

338 citations

Journal Article
TL;DR: A simple, efficient method for simulating Hamiltonian dynamics on a quantum computer by approximating the truncated Taylor series of the evolution operator by using a method for implementing linear combinations of unitary operations together with a robust form of oblivious amplitude amplification.
Abstract: We describe a simple, efficient method for simulating Hamiltonian dynamics on a quantum computer by approximating the truncated Taylor series of the evolution operator. Our method can simulate the time evolution of a wide variety of physical systems. As in another recent algorithm, the cost of our method depends only logarithmically on the inverse of the desired precision, which is optimal. However, we simplify the algorithm and its analysis by using a method for implementing linear combinations of unitary operations together with a robust form of oblivious amplitude amplification.
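A hedged classical sketch (Python/NumPy, not the quantum circuit itself) of the truncation at the heart of the method: it compares the truncated Taylor series $\sum_{k=0}^{K}(-iHt)^{k}/k!$ with the exact evolution operator for a small random Hermitian $H$, showing the rapid decay of truncation error that the quantum algorithm exploits via a linear combination of unitaries. The dimension and evolution time are illustrative.

import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, t = 8, 0.5
H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2                      # random Hermitian "Hamiltonian"

exact = expm(-1j * H * t)
for K in range(1, 9):
    # U_K = sum_{k=0}^{K} (-i H t)^k / k!
    approx = sum(np.linalg.matrix_power(-1j * H * t, k) / math.factorial(k)
                 for k in range(K + 1))
    err = np.linalg.norm(exact - approx, ord=2)
    print(f"truncation order K={K}: spectral-norm error {err:.2e}")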

313 citations

Journal ArticleDOI
08 Dec 2020
TL;DR: In this article, a quantum-classical hybrid algorithm is investigated to understand when and why it succeeds, deepening the connection between entanglement and neural-network optimization strategies.
Abstract: A quantum-classical hybrid algorithm is investigated to understand when and why it succeeds, deepening the connection between entanglement and neural-network optimization strategies.
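Since the abstract is terse, here is a generic illustration (Python/NumPy) of what a quantum-classical hybrid loop of this variational kind looks like: a small parametrized 'circuit' prepares a trial state and a classical optimizer updates the parameters to lower the energy of a toy Hamiltonian. The two-qubit ansatz, Hamiltonian, and optimizer are assumptions for illustration, not the algorithm studied in the paper.

import numpy as np

I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
CNOT = np.eye(4)[[0, 1, 3, 2]]                                # CNOT, control on first qubit
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))   # toy Hamiltonian

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def ansatz_state(params):
    # layer of single-qubit Ry rotations, an entangling CNOT, then a second layer
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return psi

def energy(params):
    psi = ansatz_state(params)
    return float(np.real(psi.conj() @ H @ psi))

# classical outer loop: finite-difference gradient descent on the circuit parameters
params = np.random.default_rng(0).normal(scale=0.1, size=4)
eta, eps = 0.1, 1e-5
for _ in range(500):
    grad = np.array([(energy(params + eps * e) - energy(params - eps * e)) / (2 * eps)
                     for e in np.eye(4)])
    params -= eta * grad
print("variational energy:", energy(params), " exact ground energy:", np.linalg.eigvalsh(H).min())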

148 citations