Yoshua Bengio

Researcher at Université de Montréal

Publications: 1146
Citations: 534376

Yoshua Bengio is an academic researcher at Université de Montréal. His research focuses on the topics of artificial neural networks and deep learning. He has an h-index of 202 and has co-authored 1033 publications receiving 420313 citations. His previous affiliations include McGill University and the Centre de Recherches Mathématiques.

Papers
Journal Article

Adaptive Discrete Communication Bottlenecks with Dynamic Vector Quantization

TL;DR: This work proposes learning to dynamically select discretization tightness conditioned on inputs, based on the hypothesis that data naturally contains variations in complexity that call for different levels of representational coarseness.
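
The idea lends itself to a compact sketch: a quantization bottleneck holding several codebooks of increasing size, with a small gating network choosing a discretization level per input. Everything below (the class name, argmax gating, straight-through estimator) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DynamicVQBottleneck(nn.Module):
    """Hypothetical sketch: per-input choice among codebooks of
    different sizes, i.e. different discretization tightness."""

    def __init__(self, dim, codebook_sizes=(32, 128, 512)):
        super().__init__()
        # One codebook per discretization level, coarse to fine.
        self.codebooks = nn.ModuleList(
            nn.Embedding(k, dim) for k in codebook_sizes
        )
        # Gating network selects a level conditioned on the input.
        self.gate = nn.Linear(dim, len(codebook_sizes))

    def forward(self, z):
        # z: (batch, dim) continuous representation.
        # Hard argmax gating; a Gumbel-softmax relaxation would make
        # the level choice itself trainable.
        level = self.gate(z).argmax(dim=-1)
        out = torch.empty_like(z)
        for i, cb in enumerate(self.codebooks):
            mask = level == i
            if mask.any():
                zi = z[mask]
                # Nearest-neighbour lookup in this level's codebook.
                d = torch.cdist(zi, cb.weight)
                q = cb(d.argmin(dim=-1))
                # Straight-through estimator keeps gradients flowing.
                out[mask] = zi + (q - zi).detach()
        return out
```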

Discovering Shared Structure in Manifold Learning

TL;DR: This analysis suggests non-local manifold learning algorithms that attempt to discover shared structure in the tangent planes at different positions; the model's parameters are shared across space rather than estimated from the local neighborhood, as in current non-parametric manifold learning algorithms.
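
As a rough sketch of how tangent planes with globally shared parameters might be modeled (the network, loss, and names below are assumptions, not the paper's code): a single neural network maps any point x to a basis of tangent vectors, and training penalizes how poorly the differences to x's neighbors are explained by that plane.

```python
import torch
import torch.nn as nn

class TangentPredictor(nn.Module):
    """One network predicts a tangent basis at any point, so its
    parameters are shared across the whole input space."""

    def __init__(self, input_dim, manifold_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, input_dim * manifold_dim),
        )
        self.d, self.D = manifold_dim, input_dim

    def forward(self, x):
        # (batch, manifold_dim, input_dim): a tangent basis at each x.
        return self.net(x).view(-1, self.d, self.D)

def tangent_loss(model, x, neighbors):
    # Differences to nearby points should lie in the predicted tangent
    # plane: project each difference onto the basis and penalize the
    # residual reconstruction error.
    basis = model(x).transpose(1, 2)          # (B, D, d)
    diff = (neighbors - x).unsqueeze(-1)      # (B, D, 1)
    coeffs = torch.linalg.pinv(basis) @ diff  # least-squares coefficients
    recon = basis @ coeffs                    # projection onto the plane
    return ((recon - diff) ** 2).mean()
```
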
Proceedings Article

Use of Multi-Layered Networks for Coding Speech with Phonetic Features

TL;DR: A method that combines expertise in neural networks with expertise in speech recognition is used to build the recognition systems, and a model of the human auditory system is preferred to the FFT as a front-end module for sonorant speech.
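
The pipeline can be pictured as an auditory-model front end feeding a small multi-layer network that predicts phonetic features frame by frame. The sketch below is a loose illustration under assumed names and sizes, not the paper's system.

```python
import torch
import torch.nn as nn

N_CHANNELS = 40   # auditory filterbank channels (assumed value)
N_FEATURES = 5    # binary phonetic features, e.g. sonorant (assumed)

# One sigmoid output per phonetic feature, predicted per frame.
classifier = nn.Sequential(
    nn.Linear(N_CHANNELS, 64), nn.Sigmoid(),
    nn.Linear(64, N_FEATURES), nn.Sigmoid(),
)

def predict_features(frames):
    # frames: (time, N_CHANNELS) tensor from the auditory front end,
    # used here in place of plain FFT magnitudes.
    with torch.no_grad():
        return classifier(frames) > 0.5   # binary feature decisions
```
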
Journal Article

Establishing an evaluation metric to quantify climate change image realism

TL;DR: This paper focuses on the evaluation of a conditional generative model that illustrates the consequences of climate-change-induced flooding, with the aim of encouraging public interest and awareness of the issue and of generating more realistic images of the future consequences of climate change.
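
One standard realism proxy that evaluations of generative image models often report is the Fréchet Inception Distance (FID) between feature statistics of real and generated images; whether this paper's metric is FID-based is an assumption here. A minimal sketch over pre-extracted feature matrices:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, gen_feats):
    # real_feats, gen_feats: (n_samples, feat_dim) activations from a
    # pretrained image network (e.g. Inception pool3 features).
    mu_r, mu_g = real_feats.mean(0), gen_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny numerical imaginary parts
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r + cov_g - 2 * covmean))
```
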
Posted Content

Mollifying Networks

TL;DR: In this article, the authors propose a continuation method for the optimization of highly non-convex neural networks, starting from a smoothed objective function that gradually evolves toward the original, more non-convex energy landscape during training.
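
The continuation idea can be sketched as training on a noise-smoothed surrogate whose smoothing is annealed away: perturbing the weights with Gaussian noise before each gradient step is one simple way to mollify a loss, though the paper's exact mollifier differs. The function name and annealing schedule below are assumptions.

```python
import torch

def mollified_step(model, loss_fn, batch, optimizer, sigma):
    # Estimate the gradient of the Gaussian-smoothed loss: perturb the
    # weights, evaluate the loss there, then restore the weights before
    # applying the update.
    inputs, targets = batch
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = sigma * torch.randn_like(p)
            p.add_(n)
            noises.append(n)
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():  # remove the noise before the update
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)
    optimizer.step()
    return loss.item()

# Anneal sigma toward 0 so training ends on the original objective,
# e.g. sigma_t = sigma_0 * (1 - t / T) at step t of T total steps.
```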