Aurelien Lucchi

Researcher at ETH Zurich

Publications - 135
Citations - 13,053

Aurelien Lucchi is an academic researcher at ETH Zurich. He has contributed to research on topics including computer science and rates of convergence, has an h-index of 35, and has co-authored 118 publications receiving 10,254 citations. His previous affiliations include Google and École Polytechnique Fédérale de Lausanne.

Papers
Proceedings ArticleDOI

Are spatial and global constraints really necessary for segmentation?

TL;DR: This investigation found no evidence of a significant performance increase attributable to the introduction of spatial and consistency constraints, and showed that similar levels of performance can be achieved with a much simpler design that essentially ignores these constraints.
Posted Content

Stabilizing Training of Generative Adversarial Networks through Regularization

TL;DR: This work proposes a new regularization approach with low computational cost that yields a stable GAN training procedure, and demonstrates the effectiveness of this regularizer across several architectures trained on common benchmark image-generation tasks.
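
For intuition, the proposed stabilization amounts to penalizing the discriminator's input gradients. Below is a minimal PyTorch sketch of a gradient-norm penalty in that spirit; the paper's exact weighting (derived from noise convolution) differs, and the names `disc`, `real_batch`, and `lam` are hypothetical.

```python
# Minimal sketch: simplified gradient-norm regularizer for a GAN discriminator
# (assumption: not the paper's exact weighting, just the same basic mechanism).
import torch

def gradient_norm_penalty(disc, x, lam=10.0):
    """Penalize ||grad_x D(x)||^2 to smooth the discriminator around the data."""
    x = x.detach().requires_grad_(True)
    out = disc(x)
    # create_graph=True so the penalty itself is differentiable w.r.t. disc's weights
    grads, = torch.autograd.grad(out.sum(), x, create_graph=True)
    return lam * grads.pow(2).flatten(1).sum(dim=1).mean()

# Used as an additive term in the discriminator loss, e.g.:
# d_loss = bce(disc(real_batch), ones) + bce(disc(fake_batch), zeros) \
#          + gradient_norm_penalty(disc, real_batch)
```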
Posted Content

Sub-sampled Cubic Regularization for Non-convex Optimization

TL;DR: This article proposes a sub-sampled version of cubic regularization for non-convex functions and presents a sampling scheme that provides gradient and Hessian approximations accurate enough to retain the strong global and local convergence guarantees of cubically regularized methods.
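
For intuition, one iteration combines sub-sampled derivative estimates with a cubically regularized model. Here is a minimal NumPy sketch assuming a finite-sum objective f(x) = (1/n) Σᵢ fᵢ(x); the helpers `grad_i` and `hess_i` and the naive inner solver are illustrative stand-ins, not the paper's implementation.

```python
# Minimal sketch of one sub-sampled cubic-regularization step
# (assumption: finite-sum objective; `grad_i`/`hess_i` are hypothetical helpers
# returning the gradient/Hessian of the i-th summand at x).
import numpy as np

def subsampled_cubic_step(x, grad_i, hess_i, n, batch, sigma, rng,
                          inner_iters=100, lr=0.01):
    idx_g = rng.choice(n, size=batch, replace=False)
    idx_h = rng.choice(n, size=batch, replace=False)
    g = np.mean([grad_i(x, i) for i in idx_g], axis=0)  # sub-sampled gradient
    H = np.mean([hess_i(x, i) for i in idx_h], axis=0)  # sub-sampled Hessian

    # Approximately minimize the cubic model
    #   m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    # by gradient descent on s; any cubic-subproblem solver could be used here.
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        s -= lr * (g + H @ s + sigma * np.linalg.norm(s) * s)
    return x + s
```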
Book ChapterDOI

Structured Image Segmentation Using Kernelized Features

TL;DR: This paper introduces an approach to "kernelize" the features so that a linear SSVM framework can leverage the power of non-linear kernels without incurring their high computational cost.
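
To illustrate the general idea of giving a linear model kernel power through an explicit feature map, here is a small scikit-learn sketch using a Nystroem approximation in front of a linear SVM. This is a standard stand-in for the concept, not the paper's SSVM construction, and the toy data is synthetic.

```python
# Minimal sketch: linear classifier on approximate kernel features
# (assumption: toy binary problem, not the paper's structured segmentation setup).
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                          # toy feature vectors
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)    # non-linear labels

# A linear SVM on Nystroem RBF features: near-kernel accuracy at linear cost.
model = make_pipeline(
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    LinearSVC(),
)
model.fit(X, y)
print(model.score(X, y))
```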
Journal ArticleDOI

The power of quantum neural networks

TL;DR: This article presents a class of quantum neural networks that outperform comparable classical feedforward networks in terms of effective dimension while also training faster, suggesting a quantum advantage.
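
The comparison rests on the notion of effective dimension, computed from the model's Fisher information. A minimal NumPy sketch of the global effective-dimension formula follows, assuming `fishers` holds suitably normalized Fisher matrices estimated at parameters sampled from the parameter space; the stand-in values here are illustrative only.

```python
# Minimal sketch of the effective dimension from Fisher information matrices
# (assumption: `fishers` are normalized so their average trace equals d).
import numpy as np

def effective_dimension(fishers, n, gamma=1.0):
    d = fishers.shape[-1]
    kappa = gamma * n / (2 * np.pi * np.log(n))
    # log sqrt(det(I + kappa * F_hat)) for each sampled parameter theta
    logdets = [np.linalg.slogdet(np.eye(d) + kappa * F)[1] / 2 for F in fishers]
    # Monte Carlo average over theta, done in log space for stability
    log_avg = np.logaddexp.reduce(logdets) - np.log(len(logdets))
    return 2 * log_avg / np.log(kappa)

fishers = np.array([np.eye(4) * 0.5 for _ in range(10)])  # stand-in samples
print(effective_dimension(fishers, n=1000))
```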