Yoshua Bengio

Researcher at Université de Montréal

Publications: 1146
Citations: 534376

Yoshua Bengio is an academic researcher from Université de Montréal. He has contributed to research topics including artificial neural networks and deep learning. He has an h-index of 202 and has co-authored 1033 publications receiving 420313 citations. Previous affiliations of Yoshua Bengio include McGill University and the Centre de Recherches Mathématiques.

Papers
Proceedings Article

On the number of inference regions of deep feed forward networks with piece-wise linear activations

TL;DR: In this paper, the complexity of deep feed-forward networks with linear pre-synaptic couplings and rectified linear activations is compared with that of a single-layer version of the model.
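The flavor of this analysis can be illustrated empirically: every on/off pattern of the rectified linear units corresponds to one linear region of the network's input-output map. A minimal Python sketch (a toy randomly initialized network, not the paper's counting argument) tallies the distinct activation patterns met along a line through input space:

```python
# Illustrative sketch: count the linear regions a small ReLU network induces
# along a 1-D line through input space, by counting unique activation patterns.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: two hidden ReLU layers on a 2-D input.
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def activation_pattern(x):
    """Return the on/off pattern of every ReLU unit for input x."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0) + b2
    return tuple((h1 > 0).tolist() + (h2 > 0).tolist())

# Sweep a line segment through input space; each change of pattern marks
# the boundary of a new linear region crossed by the line.
ts = np.linspace(-5, 5, 20000)
patterns = {activation_pattern(np.array([t, 0.5 * t])) for t in ts}
print(f"distinct linear regions met along the line: {len(patterns)}")
```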
Posted Content

HighRes-net: Recursive Fusion for Multi-Frame Super-Resolution of Satellite Imagery

TL;DR: HighRes-net is presented, the first deep learning approach to multi-frame super-resolution (MFSR) that learns its sub-tasks in an end-to-end fashion, and it is shown that by learning deep representations of multiple views, the model can super-resolve low-resolution signals and enhance Earth Observation data at scale.
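As a rough, hypothetical illustration of recursive fusion, the sketch below encodes several low-resolution views with a shared encoder and fuses them pairwise until a single representation remains, which is then upsampled; all layer shapes and the fusion rule are assumptions for illustration, not HighRes-net's actual architecture:

```python
# Schematic sketch of recursive pairwise fusion of multiple low-res frames.
import torch
import torch.nn as nn

encode = nn.Conv2d(1, 16, 3, padding=1)   # shared per-view encoder (assumed)
fuse = nn.Conv2d(32, 16, 3, padding=1)    # shared pairwise fusion block (assumed)
decode = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)  # 2x upsampling head

views = [torch.randn(1, 1, 32, 32) for _ in range(8)]  # toy low-res frames
feats = [torch.relu(encode(v)) for v in views]

# Recursively fuse pairs until one representation remains
# (assumes a power-of-two number of views, 8 here).
while len(feats) > 1:
    pairs = zip(feats[0::2], feats[1::2])
    feats = [torch.relu(fuse(torch.cat([a, b], dim=1))) for a, b in pairs]

sr = decode(feats[0])        # super-resolved output
print(sr.shape)              # torch.Size([1, 1, 64, 64])
```

Because the encoder and fusion block are shared, the same network handles any number of views, which is one way an end-to-end multi-frame model can scale.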
Proceedings Article

Reweighted Wake-Sleep

TL;DR: In this article, the authors propose the reweighted wake-sleep algorithm, which uses importance sampling to estimate the likelihood, with the approximate inference network as the proposal distribution, and show that using a more powerful layer model such as NADE yields substantially better generative models.
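The estimator at the heart of this idea fits in a few lines: draw samples h_k from the inference network q(h|x) and average the importance weights p(x, h_k) / q(h_k|x). In the sketch below, the densities are toy Gaussian stand-ins (an assumption for illustration), not the paper's models:

```python
# Importance-sampling estimate of log p(x) with q(h|x) as the proposal.
import numpy as np

rng = np.random.default_rng(0)

def log_p_xh(x, h):
    # Toy joint: h ~ N(0, 1), x | h ~ N(h, 1).
    return -0.5 * (h**2 + (x - h) ** 2) - np.log(2 * np.pi)

def log_q_h_given_x(h, mu, sigma):
    # Toy inference network / proposal: h ~ N(mu, sigma^2).
    return -0.5 * ((h - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

x = 1.3
mu, sigma = 0.5 * x, 1.0            # hypothetical proposal parameters
K = 5000                            # number of importance samples
h = rng.normal(mu, sigma, size=K)   # h_k ~ q(h | x)

# log p(x) ≈ log (1/K) sum_k p(x, h_k) / q(h_k | x), computed in log space.
log_w = log_p_xh(x, h) - log_q_h_given_x(h, mu, sigma)
log_px = np.logaddexp.reduce(log_w) - np.log(K)
print(f"estimated log p(x): {log_px:.4f}")   # true value here is about -1.69
```

Reweighted wake-sleep uses these same importance weights to reweight the gradient updates of both the generative model and the inference network.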
Proceedings Article

Twin Networks: Matching the Future for Sequence Generation

TL;DR: This work proposes a simple technique for encouraging generative RNNs to plan ahead: a twin backward RNN runs over the sequence in reverse, and the forward states are trained to match it. The authors hypothesize that this eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states).
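A minimal PyTorch sketch of this matching idea (module names, sizes, and the penalty weight are illustrative assumptions, not the authors' exact setup): a forward RNN is trained for next-token prediction while an affine map of its state is regularized toward the state of a twin backward RNN run over the reversed sequence:

```python
# Twin-network-style regularizer: match forward states to backward states.
import torch
import torch.nn as nn

vocab, dim, lam = 50, 32, 0.5   # hypothetical sizes and penalty weight
emb = nn.Embedding(vocab, dim)
fwd = nn.GRU(dim, dim, batch_first=True)   # generative forward RNN
bwd = nn.GRU(dim, dim, batch_first=True)   # twin backward RNN
g = nn.Linear(dim, dim)        # affine map from forward to backward state
head = nn.Linear(dim, vocab)

x = torch.randint(vocab, (4, 10))          # toy batch of token sequences
e = emb(x)
hf, _ = fwd(e)                             # forward states h^f_t
hb, _ = bwd(e.flip(1))                     # run the twin on the reversed sequence
hb = hb.flip(1)                            # re-align backward states with time t

# Next-token loss on the forward network only.
logits = head(hf[:, :-1])
nll = nn.functional.cross_entropy(logits.reshape(-1, vocab), x[:, 1:].reshape(-1))

# Twin penalty: pull g(h^f_t) toward the (detached) backward state h^b_t,
# implicitly loading forward states with information about the future.
twin = ((g(hf) - hb.detach()) ** 2).mean()
(nll + lam * twin).backward()
```

Only the forward network is kept at test time; the backward twin exists solely to shape the forward states during training.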
Posted Content

Hybrid Models for Learning to Branch

TL;DR: This work answers the first question in the negative, and addresses the second by proposing a new hybrid architecture for efficient branching on CPU machines, which combines the expressive power of GNNs with computationally inexpensive multi-layer perceptrons (MLPs).
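The hybrid pattern can be sketched schematically: run an expressive (here, toy) bipartite graph network once at the root of the branch-and-bound tree, then reuse its variable embeddings inside a cheap MLP that scores branching candidates at every node. All features, shapes, and the message-passing rule below are illustrative assumptions:

```python
# Hybrid branching sketch: one expensive GNN pass at the root,
# cheap MLP scoring at every search-tree node.
import torch
import torch.nn as nn

n_vars, n_cons, d = 20, 12, 16                   # toy problem sizes
A = (torch.rand(n_cons, n_vars) < 0.3).float()   # constraint-variable incidence
var_x = torch.randn(n_vars, d)                   # root variable features
con_x = torch.randn(n_cons, d)                   # root constraint features
W_vc, W_cv = nn.Linear(d, d), nn.Linear(d, d)

# One round of bipartite message passing at the root (expensive, done once).
con_h = torch.relu(con_x + A @ W_vc(var_x))
root_emb = torch.relu(var_x + A.t() @ W_cv(con_h))   # per-variable embeddings

# Cheap per-node scorer: MLP over [root embedding, node-local features].
mlp = nn.Sequential(nn.Linear(d + 4, 32), nn.ReLU(), nn.Linear(32, 1))

def score_candidates(node_feats):
    """node_feats: (n_vars, 4) cheap per-node statistics, e.g. LP fractionality."""
    return mlp(torch.cat([root_emb, node_feats], dim=1)).squeeze(-1)

scores = score_candidates(torch.randn(n_vars, 4))
print(f"branch on variable {scores.argmax().item()}")
```

The expensive structure-aware computation is paid once per instance, while the per-node cost is a single MLP forward pass, which is what makes the approach attractive on CPU machines.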