Alexandre Défossez

Researcher at Facebook

Publications: 16
Citations: 693

Alexandre Défossez is an academic researcher at Facebook. The author has contributed to research on topics including deep learning and asymptotic expansion, has an h-index of 11, and has co-authored 16 publications receiving 342 citations. Previous affiliations of Alexandre Défossez include École Normale Supérieure and the French Institute for Research in Computer Science and Automation (Inria).

Papers
Proceedings Article

Real Time Speech Enhancement in the Waveform Domain.

TL;DR: Empirical evidence shows that the proposed causal speech enhancement model, based on an encoder-decoder architecture with skip connections, can remove various kinds of background noise, including stationary and non-stationary noise as well as room reverb.
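
A minimal sketch of a causal waveform-domain encoder-decoder with skip connections, in PyTorch. This only illustrates the general architecture family described above; it is not the paper's model, and the class name, layer sizes, and the absence of a sequence-model bottleneck are assumptions.

import torch
import torch.nn as nn

class CausalEncDec(nn.Module):
    # Illustrative U-Net-style waveform model; kernel == stride means each output
    # frame depends only on its own block of input samples and lengths invert exactly.
    def __init__(self, channels=32, depth=3, kernel=4, stride=4):
        super().__init__()
        self.encoder = nn.ModuleList()
        self.decoder = nn.ModuleList()
        in_ch = 1
        for i in range(depth):
            out_ch = channels * 2 ** i
            self.encoder.append(nn.Sequential(
                nn.Conv1d(in_ch, out_ch, kernel, stride), nn.ReLU()))
            # Build the decoder in reverse; the last layer outputs a raw waveform (no ReLU).
            self.decoder.insert(0, nn.Sequential(
                nn.ConvTranspose1d(out_ch, in_ch, kernel, stride),
                nn.ReLU() if i > 0 else nn.Identity()))
            in_ch = out_ch

    def forward(self, x):  # x: (batch, 1, time), time a multiple of stride ** depth
        skips = []
        for enc in self.encoder:
            x = enc(x)
            skips.append(x)
        for dec in self.decoder:
            x = x + skips.pop()  # skip connection from the matching encoder level
            x = dec(x)
        return x

model = CausalEncDec()
noisy = torch.randn(1, 1, 4 ** 3 * 250)   # one second of 16 kHz audio, valid length
denoised = model(noisy)                   # same shape as the input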
Posted Content

Music Source Separation in the Waveform Domain

TL;DR: Demucs is proposed, a new waveform-to-waveform model whose architecture is closer to models for audio generation, with more capacity in the decoder. Human evaluations show that Demucs has significantly higher quality than Conv-Tasnet, but slightly more contamination from other sources, which explains the difference in SDR.
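
Since the comparison above is reported in SDR, here is a minimal sketch of the plain signal-to-distortion ratio between a reference source and its estimate. Published benchmarks typically use the museval/BSSEval or SI-SDR variants; this simplified definition and the function name are illustrative assumptions, not the paper's evaluation code.

import torch

def sdr(reference, estimate, eps=1e-8):
    # Plain SDR in dB: ratio of reference energy to residual (error) energy.
    error_energy = ((reference - estimate) ** 2).sum() + eps
    return 10 * torch.log10((reference ** 2).sum() / error_energy + eps)

reference = torch.randn(2, 44100)                          # one second of stereo audio
estimate = reference + 0.1 * torch.randn_like(reference)   # hypothetical separation output
print(f"SDR: {sdr(reference, estimate):.1f} dB")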
Proceedings Article

Averaged Least-Mean-Squares: Bias-Variance Trade-offs and Optimal Sampling Distributions

TL;DR: This work considers the least-squares regression problem and provides a detailed asymptotic analysis of the performance of averaged constant-step-size stochastic gradient descent, including an asymptotic expansion up to explicit exponentially decaying terms.
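
A toy sketch of the algorithm being analyzed: constant-step-size stochastic gradient descent on a least-squares objective, with Polyak-Ruppert averaging of the iterates. The dimension, step size, and noise level below are arbitrary illustrative choices, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, n_steps, step = 5, 20000, 0.05
w_star = rng.normal(size=d)                  # ground-truth regression weights

w = np.zeros(d)                              # SGD iterate
w_avg = np.zeros(d)                          # running average of the iterates
for t in range(1, n_steps + 1):
    x = rng.normal(size=d)
    y = x @ w_star + 0.1 * rng.normal()      # noisy linear observation
    grad = (x @ w - y) * x                   # stochastic gradient of 0.5 * (x @ w - y) ** 2
    w = w - step * grad                      # constant step size, no decay
    w_avg += (w - w_avg) / t                 # Polyak-Ruppert averaging

print("last iterate error:", np.linalg.norm(w - w_star))
print("averaged iterate error:", np.linalg.norm(w_avg - w_star))

The averaged iterate typically ends up much closer to w_star than the last iterate, which is the averaging effect whose bias and variance the paper analyzes.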
Posted Content

A Simple Convergence Proof of Adam and Adagrad

TL;DR: This work provides a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms when applied to smooth (possibly non-convex) objective functions with bounded gradients, and obtains the tightest dependency on the heavy-ball momentum among all previous convergence bounds.
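
For reference, the two updates the proof covers, written in their standard textbook form in NumPy. The default hyperparameters are the usual conventions, not values prescribed by the paper; beta1 is the heavy-ball momentum parameter mentioned above.

import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: exponential moving averages of the gradient and its square, with bias correction.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def adagrad_step(w, g, acc, lr=0.1, eps=1e-8):
    # Adagrad: per-coordinate steps scaled by the accumulated squared gradients.
    acc = acc + g ** 2
    return w - lr * g / (np.sqrt(acc) + eps), acc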
Posted Content

On the Convergence of Adam and Adagrad

TL;DR: It is shown that, in expectation, the squared norm of the objective gradient averaged over the trajectory has an upper bound which is explicit in the constants of the problem, the parameters of the optimizer, and the total number of iterations N.
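
The quantity being bounded is the trajectory average of the squared gradient norm. Below is a toy sketch, assuming a simple smooth objective with bounded gradients (F(w) = sum(log(cosh(w_i)))) and an Adagrad run, that simply measures this average; it illustrates the quantity, not the paper's bound or its constants.

import numpy as np

def grad_F(w):
    # Gradient of F(w) = sum(log(cosh(w_i))); each coordinate is bounded by 1.
    return np.tanh(w)

rng = np.random.default_rng(0)
w = 3.0 * rng.normal(size=10)
acc = np.zeros_like(w)
lr, eps, N = 0.5, 1e-8, 2000

squared_norms = []
for n in range(N):
    g = grad_F(w) + 0.01 * rng.normal(size=w.shape)   # noisy gradient oracle
    squared_norms.append(np.sum(grad_F(w) ** 2))      # ||grad F(x_n)||^2 along the trajectory
    acc += g ** 2                                     # Adagrad accumulator
    w -= lr * g / (np.sqrt(acc) + eps)                # Adagrad step

print("average of ||grad F||^2 over the trajectory:", np.mean(squared_norms))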