Yoshua Bengio
Researcher at Université de Montréal
Publications - 1146
Citations - 534376
Yoshua Bengio is an academic researcher from Université de Montréal. The author has contributed to research on topics including artificial neural networks and deep learning. The author has an h-index of 202 and has co-authored 1033 publications receiving 420313 citations. Previous affiliations of Yoshua Bengio include McGill University and the Centre de Recherches Mathématiques.
Papers
Book Chapter
On the equivalence between deep NADE and Generative Stochastic Networks
TL;DR: This work shows an alternative way to sample from a trained Orderless NADE that allows trading off computation time against sample quality, and makes a connection between the Orderless NADE training criterion and the training criterion for Generative Stochastic Networks (GSNs).
Proceedings Article
Equivalence of Equilibrium Propagation and Recurrent Backpropagation
Benjamin Scellier, Yoshua Bengio +1 more
TL;DR: In this paper, the authors show that the temporal derivatives of the neural activities in equilibrium propagation are equal to the error derivatives computed iteratively by recurrent backpropagation in the side network.
Posted Content
Spectral Dimensionality Reduction
Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Jean-François Paiement, Pascal Vincent, Marie Claude Ouimet +5 more
TL;DR: In this article, the authors put under a common framework a number of non-linear dimensionality reduction methods, such as Locally Linear Embedding, Isomap, Laplacian Eigenmaps and kernel PCA, which are based on performing an eigendecomposition.
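The shared eigendecomposition recipe can be illustrated with kernel PCA, the simplest member of the family unified by the paper. The sketch below is illustrative only, not the paper's code: the RBF kernel choice, the function name, and all parameters are assumptions.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Toy kernel PCA: build a kernel matrix, center it, and embed
    points using the top eigenvectors (illustrative sketch only)."""
    # Pairwise squared Euclidean distances, then an RBF (Gaussian) kernel
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * d2)
    # Double-center the kernel matrix, as required by kernel PCA
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; keep the components with the largest eigenvalues
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Embedding: eigenvectors scaled by the square root of their eigenvalues
    return vecs * np.sqrt(np.maximum(vals, 0.0))

X = np.random.RandomState(0).randn(50, 5)
Y = kernel_pca(X, n_components=2)
print(Y.shape)  # (50, 2)
```

The other methods in the framework (LLE, Isomap, Laplacian Eigenmaps) follow the same pattern with a different matrix in place of the centered kernel.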
Posted Content
Neural Models for Key Phrase Detection and Question Generation
TL;DR: This article proposes a two-stage neural model for question generation from documents: by training a neural key-phrase extractor on the answers in a question-answering corpus, the model first estimates the probability that a word sequence in a document is one a human would pick as a candidate answer.
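The first stage can be pictured as scoring candidate word spans in a document. Everything below is a hypothetical illustration, not the paper's model: `score_fn` stands in for the trained neural key-phrase extractor, and the stub scorer is invented purely for demonstration.

```python
def extract_key_phrases(doc_tokens, score_fn, max_len=3, top_k=2):
    """Enumerate candidate word spans and keep those a (hypothetical)
    trained scorer deems most likely to be answer-worthy."""
    candidates = []
    for i in range(len(doc_tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(doc_tokens) + 1)):
            span = doc_tokens[i:j]
            candidates.append((score_fn(span), " ".join(span)))
    candidates.sort(reverse=True)  # highest-scoring spans first
    return [phrase for _, phrase in candidates[:top_k]]

# Stub scorer: favors capitalized spans, for illustration only
stub = lambda span: sum(t[0].isupper() for t in span) / len(span)
doc = "Yoshua Bengio works on deep learning".split()
phrases = extract_key_phrases(doc, stub)
print(phrases)
```

In the paper, the extracted phrases then condition a second, sequence-to-sequence stage that generates the questions.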
Posted Content
Image Segmentation by Iterative Inference from Conditional Score Estimation
TL;DR: This work extends previous work on score estimation by denoising autoencoders to the case of a conditional distribution, with a novel use of a corrupted feedforward predictor replacing Gaussian corruption. It finds experimentally that the proposed iterative inference from conditional score estimation by conditional denoising autoencoders performs better than comparable models based on CRFs, or than models that do not explicitly model the joint conditional distribution of outputs.
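The iterative-inference idea can be sketched as repeatedly applying a conditional denoiser to refine an output estimate toward a mode of the learned conditional distribution. This is a toy sketch under stated assumptions, not the paper's model: the damped fixed-point update, the step size, and the stub denoiser are all invented for demonstration.

```python
import numpy as np

def iterative_inference(x, denoiser, y0, n_steps=10, step=0.5):
    """Refine an output estimate y by repeatedly applying a conditional
    denoiser; `denoiser` is assumed to be a trained network mapping
    (input, corrupted output) -> cleaned output. Here it is a stub."""
    y = y0
    for _ in range(n_steps):
        y = (1 - step) * y + step * denoiser(x, y)  # damped fixed-point update
    return y

# Stub "denoiser" for illustration only: pulls y toward a fixed target,
# standing in for a segmentation map the real network would predict from x.
target = np.array([0.0, 1.0, 1.0, 0.0])
stub = lambda x, y: target
y = iterative_inference(None, stub, y0=np.full(4, 0.5))
print(np.round(y, 3))
```

With a trained conditional denoising autoencoder in place of the stub, each update step moves the segmentation estimate in the direction of the estimated conditional score.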