
Yoshua Bengio

Researcher at Université de Montréal

Publications: 1146
Citations: 534376

Yoshua Bengio is an academic researcher at Université de Montréal. He has contributed to research on topics including artificial neural networks and deep learning, has an h-index of 202, and has co-authored 1033 publications receiving 420313 citations. His previous affiliations include McGill University and the Centre de Recherches Mathématiques.

Papers
Proceedings Article

h-detach: Modifying the LSTM Gradient Towards Better Optimization

TL;DR: In this paper, a stochastic algorithm called h-detach is proposed to aid LSTM optimization: it stochastically blocks gradients from flowing through the hidden-state path of the computational graph, preventing them from suppressing the gradient components that flow through the linear path (the cell state), whose suppression can keep LSTMs from capturing long-term dependencies.
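
A minimal sketch of the h-detach idea in PyTorch, assuming a hand-rolled wrapper around nn.LSTMCell; the class name, detach probability, and training-only gating are illustrative choices, not the authors' reference implementation:

import torch
import torch.nn as nn

class HDetachLSTMCell(nn.Module):
    # Wraps a standard LSTM cell; with probability p_detach the previous
    # hidden state is detached before the gates are computed, so that
    # step's gradient flows only through the linear cell-state path.
    def __init__(self, input_size, hidden_size, p_detach=0.25):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.p_detach = p_detach

    def forward(self, x, state):
        h, c = state
        if self.training and torch.rand(()).item() < self.p_detach:
            h = h.detach()  # block the gradient through the h (non-linear) path
        return self.cell(x, (h, c))

Unrolled over a sequence, detaching independently at each time step is one plausible way to apply this, letting the cell-state gradient dominate on average.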
Proceedings Article

Multi-Image Super-Resolution for Remote Sensing Using Deep Recurrent Networks

TL;DR: This work presents a data-driven, multi-image super-resolution approach based on an end-to-end deep neural network that consists of an encoder, a fusion module, and a decoder that reconstructs the super-resolved image.
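
As a rough illustration of such an encoder / fusion / decoder layout, here is a hedged PyTorch sketch; the layer sizes, the gated recurrent fusion, and the PixelShuffle upsampling are assumptions for illustration, not the paper's architecture:

import torch
import torch.nn as nn

class MultiFrameSRNet(nn.Module):
    # Encode each low-resolution frame with a shared CNN, fuse the frame
    # features with a gated recurrent update, then decode to a
    # super-resolved image.
    def __init__(self, channels=1, feat=64, scale=2):
        super().__init__()
        self.feat = feat
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.update_gate = nn.Conv2d(2 * feat, feat, 3, padding=1)
        self.candidate = nn.Conv2d(2 * feat, feat, 3, padding=1)
        self.decoder = nn.Sequential(
            nn.Conv2d(feat, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a higher-resolution grid
        )

    def forward(self, frames):
        # frames: (batch, num_frames, channels, H, W) low-resolution inputs
        b, t, c, h, w = frames.shape
        hidden = frames.new_zeros(b, self.feat, h, w)
        for i in range(t):
            f = self.encoder(frames[:, i])
            z = torch.sigmoid(self.update_gate(torch.cat([hidden, f], dim=1)))
            cand = torch.tanh(self.candidate(torch.cat([hidden, f], dim=1)))
            hidden = (1 - z) * hidden + z * cand  # recurrent fusion over frames
        return self.decoder(hidden)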

Topic Segmentation: A First Stage to Dialog-Based Information Extraction.

TL;DR: This work studies topic segmentation of manually transcribed speech as a first stage for information extraction from dialogs, modeling multi-source knowledge with hidden Markov models and experimenting with different combinations of linguistic-level cues.
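
One way to make the HMM framing concrete is a plain Viterbi decoder over topic states, where segment boundaries fall wherever the decoded state changes; the discretized cue observations and the function below are illustrative assumptions, not the paper's exact model:

import numpy as np

def viterbi_topic_segment(obs, log_init, log_trans, log_emit):
    # obs       : sequence of observation indices (e.g. discretized cue features)
    # log_init  : (K,)   log initial probabilities over topic states
    # log_trans : (K, K) log transition matrix between topic states
    # log_emit  : (K, V) log emission probabilities per topic
    T, K = len(obs), log_trans.shape[0]
    delta = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # rows index the previous state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = back[t + 1, states[t + 1]]
    boundaries = [t for t in range(1, T) if states[t] != states[t - 1]]
    return states, boundaries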
Proceedings Article

Learning the 2-D Topology of Images

TL;DR: The surprising result presented here is that as few as about a thousand images are enough to approximately recover the relative locations of about a thousand pixels.
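
A hedged sketch of the underlying idea: highly correlated pixels tend to be spatial neighbours, so a dissimilarity built from pixel-pixel correlations can be embedded in two dimensions to estimate relative pixel positions. The use of scikit-learn's metric MDS and the 1 - |corr| dissimilarity are illustrative choices, not the paper's exact procedure:

import numpy as np
from sklearn.manifold import MDS

def recover_pixel_layout(images):
    # images: (n_images, n_pixels) array of flattened images whose pixel
    # order has been shuffled or is otherwise unknown.
    corr = np.corrcoef(images.T)          # (n_pixels, n_pixels) pixel correlations
    dissim = 1.0 - np.abs(corr)           # strong correlation -> small distance
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dissim)      # (n_pixels, 2) estimated 2-D coordinates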
Posted Content

Recall Traces: Backtracking Models for Efficient Reinforcement Learning

TL;DR: The authors use a backtracking model that predicts the preceding states leading to a given high-reward state and use the resulting traces to improve a policy, which improves the sample efficiency of both on- and off-policy RL algorithms across several environments and tasks.
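
A minimal PyTorch sketch of a backtracking model in this spirit: given a state, it predicts an action and the previous state, and unrolling it backwards from a high-reward state yields a recall trace that a policy can then be trained to imitate. The deterministic output heads and layer sizes here are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn as nn

class BacktrackingModel(nn.Module):
    # Given a state, predict the action taken to reach it and the previous
    # state it was reached from (trained on observed transitions).
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.action_head = nn.Linear(hidden, action_dim)
        self.prev_state_head = nn.Linear(hidden, state_dim)

    def forward(self, state):
        h = self.body(state)
        return self.action_head(h), self.prev_state_head(h)

def recall_trace(model, high_reward_state, length):
    # Unroll the backtracking model backwards from a high-reward state,
    # then reverse so the trace reads forward in time.
    state, trace = high_reward_state, []
    with torch.no_grad():
        for _ in range(length):
            action, prev_state = model(state)
            trace.append((prev_state, action, state))
            state = prev_state
    return list(reversed(trace))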