scispace - formally typeset

Carlo Ciliberto

Researcher at Imperial College London

Publications -  84
Citations -  1894

Carlo Ciliberto is an academic researcher from Imperial College London. The author has contributed to research in topics: Structured prediction & iCub. The author has an h-index of 21 and has co-authored 77 publications receiving 1420 citations. Previous affiliations of Carlo Ciliberto include University of Genoa & University College London.

Papers
Journal Article

Quantum machine learning: a classical perspective.

TL;DR: This article reviews the literature in quantum machine learning and discusses perspectives for a mixed readership of classical ML and quantum computation experts, highlighting the limitations of quantum algorithms, how they compare with their best classical counterparts, and why quantum resources are expected to provide advantages for learning problems.
Proceedings Article

Learning-to-Learn Stochastic Gradient Descent with Biased Regularization

TL;DR: A key feature of the results is that, when the number of tasks grows and their variance is relatively small, the learning-to-learn approach has a significant advantage over learning each task in isolation by Stochastic Gradient Descent without a bias term.
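The bias term mentioned above can be sketched concretely. Below is a minimal, hypothetical illustration (plain NumPy, squared loss) of SGD regularized toward a bias vector h, so that h = 0 recovers learning each task in isolation; the function name and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def sgd_biased(X, y, h, lam=1.0, lr=0.01, epochs=50, seed=0):
    """SGD on the ridge-style objective
        (1/n) * sum_i (x_i . w - y_i)^2 + (lam/2) * ||w - h||^2,
    where h is the bias vector the regularizer pulls toward.
    Setting h = 0 recovers plain regularized SGD on a single task."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = h.copy()  # start at the bias vector
    for _ in range(epochs):
        for i in rng.permutation(n):
            # per-sample gradient of squared loss plus biased regularizer
            grad = 2 * (X[i] @ w - y[i]) * X[i] + lam * (w - h)
            w -= lr * grad
    return w
```

When the tasks' true weight vectors cluster around a common mean, passing that mean as h yields a solution much closer to the truth than the isolated (h = 0) run, which the regularizer shrinks toward the origin.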
Proceedings Article

Learning To Learn Around A Common Mean

TL;DR: It is shown that the LTL problem can be reformulated as a Least Squares (LS) problem, and a novel meta-algorithm is exploited to solve it efficiently; a bound on the generalization error of the meta-algorithm is presented, which suggests the right splitting parameter to choose.
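The least-squares reformulation can be illustrated with a small sketch, assuming squared loss and ridge regularization around a common vector h (all names and the train/validation splitting below are illustrative, not the paper's code): the inner ridge solution is affine in h, so the outer objective over tasks is itself a least-squares problem in h.

```python
import numpy as np

def ridge_around(X, y, h, lam=0.1):
    """Closed-form minimizer of (1/n)||Xw - y||^2 + lam*||w - h||^2."""
    n, d = X.shape
    A = X.T @ X / n + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y / n + lam * h)

def learn_common_mean(tasks, lam=0.1):
    """Each task's ridge solution is affine in h: w(h) = M h + c.
    Hence the outer objective sum_t ||X_val (M_t h + c_t) - y_val||^2
    is a least-squares problem in h, solved here in one lstsq call.
    `tasks` is a list of (X_train, y_train, X_val, y_val) tuples."""
    d = tasks[0][0].shape[1]
    rows, rhs = [], []
    for Xtr, ytr, Xva, yva in tasks:
        n = Xtr.shape[0]
        A = Xtr.T @ Xtr / n + lam * np.eye(d)
        M = lam * np.linalg.solve(A, np.eye(d))   # dw/dh
        c = np.linalg.solve(A, Xtr.T @ ytr / n)   # w(0)
        rows.append(Xva @ M)
        rhs.append(yva - Xva @ c)
    h, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return h
```

With noiseless tasks sharing a single true weight vector, the recovered h equals that vector, since choosing h at the common mean drives every validation residual to zero.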
Proceedings Article

Differential Properties of Sinkhorn Approximation for Learning with Wasserstein Distance

TL;DR: This work characterizes the differential properties of the original Sinkhorn approximation, proving that it enjoys the same smoothness as its regularized version, and explicitly provides an efficient algorithm to compute its gradient.
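For context, a minimal sketch of the standard entropic Sinkhorn iterations and the classical fact that the dual potential gives the gradient of the regularized cost with respect to a marginal. This illustrates only the regularized baseline, not the sharp Sinkhorn approximation or the gradient algorithm analyzed in the paper.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.
    a, b: source/target marginals (probability vectors); C: cost matrix.
    Returns the transport cost <P, C> and the dual potential f, which
    (up to an additive constant) is the gradient of the regularized
    cost with respect to the first marginal a."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):       # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # approximate optimal coupling
    f = eps * np.log(u)               # dual potential
    return (P * C).sum(), f
```

Since P is (approximately) a coupling of a and b, the returned cost is a convex combination of the entries of C and therefore lies between C.min() and C.max().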