
Alessandra Tosi

Researcher at University of Oxford

Publications - 14
Citations - 160

Alessandra Tosi is an academic researcher at the University of Oxford. Her work focuses on Gaussian processes and nonlinear dimensionality reduction. She has an h-index of 7 and has co-authored 14 publications receiving 137 citations. Previous affiliations include the Polytechnic University of Catalonia.

Papers
Posted Content

Metrics for Probabilistic Geometries

TL;DR: The geometrical structure of probabilistic generative dimensionality reduction models is investigated using the tools of Riemannian geometry, and it is shown that distances respecting the expected metric lead to more appropriate generation of new data.
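
To make "distances that respect the expected metric" concrete, here is a minimal sketch of measuring the Riemannian length of a latent-space curve under a position-dependent metric. The expected_metric function below is a toy stand-in for the GP-derived expected metric tensor, not the paper's code:

```python
import numpy as np

def expected_metric(z):
    # Toy position-dependent metric standing in for the paper's expected
    # metric G(z) = E[J(z)^T J(z)], where J is the Jacobian of the random
    # generative mapping.
    d = len(z)
    return np.eye(d) * (1.0 + z @ z)

def curve_length(points):
    """Approximate Riemannian length of a piecewise-linear latent curve."""
    length = 0.0
    for a, b in zip(points[:-1], points[1:]):
        dz = b - a
        mid = 0.5 * (a + b)             # evaluate the metric at the midpoint
        G = expected_metric(mid)
        length += np.sqrt(dz @ G @ dz)  # local line element sqrt(dz^T G dz)
    return length

# Straight-line curve between two latent points, discretized into segments.
z0, z1 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
curve = np.linspace(z0, z1, num=50)
print(curve_length(curve))
```

Curves that look short in Euclidean latent coordinates can be long under such a metric, which is why respecting the expected metric changes which interpolants (and hence which generated points) are preferred.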
Journal ArticleDOI

Optimal Operation of an Energy Management System Using Model Predictive Control and Gaussian Process Time-Series Modeling

TL;DR: This paper describes an optimal operation scheme for energy management systems that combines Gaussian process time-series forecasting with model predictive control, applied to grid-connected microgrids with local generation, loads, and storage; the scheme demonstrates a cost reduction of more than 2%.
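
A minimal sketch of the forecast-then-optimize pattern the paper describes: a Gaussian process forecasts net load over the horizon, and a single receding-horizon step dispatches a battery against a price profile. The toy microgrid, the price profile, and the energy-neutrality constraint are illustrative assumptions, not the paper's formulation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from scipy.optimize import linprog

# --- Gaussian process forecast of net load over the MPC horizon ---
t_past = np.arange(48).reshape(-1, 1)   # past time steps
load_past = 2.0 + np.sin(2 * np.pi * t_past.ravel() / 24) \
    + 0.1 * np.random.randn(48)
gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.01))
gp.fit(t_past, load_past)

H = 12                                  # horizon length
t_future = np.arange(48, 48 + H).reshape(-1, 1)
load_hat = gp.predict(t_future)         # forecast mean over the horizon

# --- One MPC step: choose battery power p (|p| <= p_max) to minimise the
# cost of grid imports load_hat + p; p > 0 charges the battery (extra grid
# import), p < 0 discharges it. Energy-neutrality: sum(p) = 0.
price = np.linspace(1.0, 2.0, H)        # assumed grid price profile
p_max = 1.0
res = linprog(c=price,
              A_eq=np.ones((1, H)), b_eq=[0.0],
              bounds=[(-p_max, p_max)] * H)
print("battery schedule:", res.x)
print("expected cost:", price @ (load_hat + res.x))
```

In a full receding-horizon loop, only the first dispatch decision would be applied before re-forecasting and re-solving at the next time step.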
Proceedings Article

Metrics for probabilistic geometries

TL;DR: In this article, the authors investigate the geometrical structure of probabilistic generative dimensionality reduction models using the tools of Riemannian geometry and provide the necessary algorithms to compute expected metric tensors where the distribution over mappings is given by a Gaussian process.
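
Once the GP supplies the Jacobian's mean and covariance, the expected metric tensor is cheap to compute. A sketch under the assumption of a Jacobian with independent Gaussian rows sharing one covariance, so that E[J^T J] = E[J]^T E[J] + D * Sigma_J:

```python
import numpy as np

def expected_metric_tensor(J_mean, J_row_cov):
    """Expected metric E[J^T J] for a random mapping f: R^q -> R^D whose
    Jacobian J (D x q) has independent Gaussian rows: row d has mean
    J_mean[d] and shared covariance J_row_cov (q x q). Then
        E[J^T J] = J_mean^T J_mean + D * J_row_cov.
    """
    D = J_mean.shape[0]
    return J_mean.T @ J_mean + D * J_row_cov

# Toy example: 3-dimensional observation space over a 2-dimensional latent.
J_mean = np.array([[1.0, 0.0],
                   [0.5, 1.0],
                   [0.0, 2.0]])
J_row_cov = 0.1 * np.eye(2)
G = expected_metric_tensor(J_mean, J_row_cov)
print(G)   # symmetric positive-definite 2 x 2 expected metric tensor
```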
Proceedings Article

AdaGeo: Adaptive Geometric Learning for Optimization and Sampling.

TL;DR: AdaGeo is a preconditioning framework for adaptively learning the geometry of the parameter space during optimization or sampling. It uses the Gaussian process latent variable model (GP-LVM) to represent a lower-dimensional embedding of the parameters, identifying the underlying Riemannian manifold on which the optimization or sampling takes place.
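
AdaGeo learns its embedding with a GP-LVM; the sketch below swaps that for a fixed linear decoder to show only the mechanics of taking gradient steps in a lower-dimensional latent space. The decode map, W, and the quadratic loss are illustrative assumptions:

```python
import numpy as np

# Hypothetical learned embedding theta = decode(omega), standing in for the
# GP-LVM mean mapping; here a fixed linear map W for illustration.
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 2))   # 50-dim parameters, 2-dim latent space

def decode(omega):
    return W @ omega               # latent space -> parameter space

def loss_grad(theta):
    return theta - 1.0             # gradient of 0.5 * ||theta - 1||^2

omega = np.zeros(2)
for _ in range(100):
    theta = decode(omega)
    # Chain rule: the gradient w.r.t. the latent variables is J^T grad_theta,
    # where J = d theta / d omega is the decoder Jacobian (here simply W).
    grad_omega = W.T @ loss_grad(theta)
    omega -= 0.01 * grad_omega     # gradient step taken in the latent space
print(decode(omega))               # parameters reached via the embedding
```

The payoff is that optimization runs in 2 dimensions instead of 50; the learned geometry acts as a preconditioner through the decoder's Jacobian.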
Journal ArticleDOI

Averaging of kernel functions

TL;DR: It is shown that the only feasible average for kernel learning is precisely the arithmetic average, which can be used in more general kernel optimization procedures.
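
The arithmetic average is feasible because positive semi-definiteness is preserved under convex combinations of kernels. A quick numerical check, using toy RBF and linear kernels chosen purely for illustration:

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def linear_kernel(X):
    return X @ X.T

# The arithmetic average of two positive semi-definite kernel matrices is
# itself positive semi-definite, hence again a valid kernel.
X = np.random.default_rng(1).standard_normal((20, 3))
K_avg = 0.5 * (rbf_kernel(X) + linear_kernel(X))
print(np.linalg.eigvalsh(K_avg).min() >= -1e-10)   # True: K_avg is PSD
```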