Yoshua Bengio
Researcher at Université de Montréal
Publications: 1,146
Citations: 534,376
Yoshua Bengio is an academic researcher at Université de Montréal. His research spans topics including artificial neural networks and deep learning. He has an h-index of 202 and has co-authored 1033 publications receiving 420313 citations. Previous affiliations of Yoshua Bengio include McGill University and the Centre de Recherches Mathématiques.
Papers
Proceedings ArticleDOI
Word-level training of a handwritten word recognizer based on convolutional neural networks
TL;DR: Introduces a new approach to online recognition of handwritten words written in unconstrained mixed style, in which each pixel encodes information about trajectory direction and curvature.
Posted Content
Architectural Complexity Measures of Recurrent Neural Networks
Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, Yoshua Bengio +6 more
TL;DR: Presents a graph-theoretic framework for analyzing the connecting architectures of RNNs and proposes three architectural complexity measures: the recurrent depth, the feedforward depth, and the recurrent skip coefficient.
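The recurrent depth and recurrent skip coefficient have a simple intuition on the RNN's unfolded computation graph: the former tracks the longest computation path per time step, the latter how quickly information can jump across time via the shortest path. The toy sketch below is our own construction (not the paper's formal definitions or code), assuming a minimal model where each hidden state at step t receives an edge from each state k steps back, for every skip length k in a given set:

```python
def path_extremes(skips, n):
    """Longest and shortest path lengths (in edges) from step 0 to step n
    in an unfolded graph where step t has predecessors t-k for k in skips."""
    longest = {0: 0}
    shortest = {0: 0}
    for t in range(1, n + 1):
        preds = [t - k for k in skips if t - k >= 0]
        longest[t] = 1 + max(longest[p] for p in preds)
        shortest[t] = 1 + min(shortest[p] for p in preds)
    return longest[n], shortest[n]

n = 1000
# Vanilla RNN: only the 1-step recurrence.
lo, sh = path_extremes([1], n)
print(lo / n, n / sh)   # recurrent depth -> 1.0, skip coefficient -> 1.0

# Add a connection reaching 5 steps back: depth stays ~1, but information
# can now skip across time 5x faster.
lo, sh = path_extremes([1, 5], n)
print(lo / n, n / sh)   # recurrent depth -> 1.0, skip coefficient -> 5.0
```

Dividing the longest path by the number of time steps approximates the recurrent depth, while dividing the number of time steps by the shortest path approximates the recurrent skip coefficient.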
Book ChapterDOI
On the expressive power of deep architectures
Yoshua Bengio, Olivier Delalleau +1 more
TL;DR: Reviews some of the theoretical motivations for deep architectures, as well as some of their practical successes, and proposes directions of investigation to address the remaining challenges.
Posted Content
Residual Connections Encourage Iterative Inference
TL;DR: Shows that residual connections naturally encourage the features of residual blocks to move along the negative gradient of the loss from one block to the next, and empirical analysis suggests that ResNets perform both representation learning and iterative refinement.
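The "iterative inference" view can be illustrated with a toy 1-D example (our own construction, not the paper's experiments): if each residual branch outputs a small step against the gradient of a loss, then stacking blocks x_{k+1} = x_k + f_k(x_k) progressively refines the representation toward the loss minimum:

```python
import numpy as np

def residual_block(x, target, step=0.1):
    """One residual block whose branch f(x) = -step * dL/dx takes a
    gradient step on the quadratic loss L(x) = 0.5 * (x - target)**2."""
    return x + (-step * (x - target))

x = np.array([5.0])          # initial representation
target = np.array([1.0])     # minimizer of the toy loss
for _ in range(50):          # 50 stacked residual blocks = 50 refinement steps
    x = residual_block(x, target)
print(x)                     # converges close to the target
```

Each block leaves the input mostly intact and contributes only a small correction, which is the sense in which depth acts as iteration rather than as a sequence of unrelated transformations.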
Unsupervised and transfer learning challenge: a deep learning approach
Grégoire Mesnil, Yann N. Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, Pascal Vincent, Aaron Courville, James Bergstra +12 more
TL;DR: Describes the different kinds of layers the authors trained to learn representations for the Unsupervised and Transfer Learning Challenge, in particular one-layer learning algorithms feeding a simple linear classifier trained on a tiny number of labeled samples.