
Yoshua Bengio

Researcher at Université de Montréal

Publications: 1,146
Citations: 534,376

Yoshua Bengio is an academic researcher at Université de Montréal. He has contributed to research topics including artificial neural networks and deep learning, has an h-index of 202, and has co-authored 1,033 publications receiving 420,313 citations. His previous affiliations include McGill University and the Centre de Recherches Mathématiques.

Papers
Proceedings Article

Building Robust Ensembles via Margin Boosting

TL;DR: This work takes a principled approach to building robust ensembles and develops an algorithm for learning a maximum-margin ensemble, which outperforms not only existing ensembling techniques but also large models trained in an end-to-end fashion.
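
A minimal sketch of the max-margin ensembling idea, assuming a multiplicative-weights scheme over pre-trained base models; the update rule, data, and hyperparameters here are illustrative stand-ins, not the paper's exact game-theoretic algorithm:

```python
import numpy as np

def max_margin_weights(margins, n_iters=500, eta=0.1):
    """Learn ensemble weights that maximize the worst-case margin.

    margins: (n_examples, n_models) array; margins[i, j] is base model j's
    margin on example i (true-class score minus best other-class score).
    A simple multiplicative-weights game: focus on the example with the
    smallest ensemble margin, then boost the models that do well on it.
    """
    n_examples, n_models = margins.shape
    alpha = np.full(n_models, 1.0 / n_models)   # uniform initial weights
    for _ in range(n_iters):
        ensemble_margin = margins @ alpha        # (n_examples,)
        worst = np.argmin(ensemble_margin)       # hardest example so far
        alpha *= np.exp(eta * margins[worst])    # reward models strong there
        alpha /= alpha.sum()                     # keep a distribution
    return alpha

# Toy usage: 3 base models, 5 examples, random margins in [-1, 1].
rng = np.random.default_rng(0)
margins = rng.uniform(-1.0, 1.0, size=(5, 3))
alpha = max_margin_weights(margins)
print("weights:", alpha, "min margin:", (margins @ alpha).min())
```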
Proceedings Article

On the generalization capability of multi-layered networks in the extraction of speech properties

TL;DR: Experiments performed on 10 English vowels show a recognition rate above 95% for new speakers, suggesting that multi-layered networks (MLNs) fed with data computed by an ear model generalize well to new speakers and new sounds.
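
A minimal sketch of the speaker-independent evaluation protocol implied here, assuming synthetic feature vectors grouped by speaker; the data, feature dimension, and network size are placeholders, not the paper's ear-model features:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 20 speakers x 50 frames, 40-dim "auditory" features,
# 10 vowel classes. Real inputs would come from an ear-model front end.
n_speakers, frames, dim, n_vowels = 20, 50, 40, 10
speaker_ids = np.repeat(np.arange(n_speakers), frames)
y = rng.integers(0, n_vowels, size=n_speakers * frames)
X = rng.normal(size=(n_speakers * frames, dim)) + y[:, None] * 0.1

# Speaker-independent split: test speakers never appear in training, so
# accuracy measures generalization to new speakers, not memorization.
train = speaker_ids < 15
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X[train], y[train])
print("new-speaker accuracy:", accuracy_score(y[~train], clf.predict(X[~train])))
```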
Proceedings Article

Phonetically motivated acoustic parameters for continuous speech recognition using artificial neural networks.

TL;DR: An algorithm for the global optimization of a hybrid system for phone recognition is presented; the globally optimized system achieved a recognition accuracy of 86% on an 8-class problem (7 plosives and one class corresponding to all other phonemes).
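
A minimal sketch of global (joint) optimization of a two-stage hybrid, assuming a PyTorch pipeline in which a feature-extraction network and a phone classifier are trained under a single end-to-end loss rather than stage by stage; module sizes and data are placeholders:

```python
import torch
import torch.nn as nn

# Two stages that would traditionally be tuned separately: a network mapping
# acoustic frames to phonetically motivated parameters, and a phone
# classifier on top of those parameters.
feature_net = nn.Sequential(nn.Linear(64, 32), nn.Tanh())
phone_classifier = nn.Linear(32, 8)            # 7 plosives + 1 "other" class

# Global optimization: one loss, gradients flow through both stages.
params = list(feature_net.parameters()) + list(phone_classifier.parameters())
opt = torch.optim.SGD(params, lr=0.1)
loss_fn = nn.CrossEntropyLoss()

frames = torch.randn(256, 64)                  # placeholder acoustic frames
labels = torch.randint(0, 8, (256,))           # placeholder phone labels
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(phone_classifier(feature_net(frames)), labels)
    loss.backward()                            # backprop through both stages
    opt.step()
```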
Proceedings Article

MAgNet: Mesh Agnostic Neural PDE Solver

TL;DR: This paper designs a novel architecture that predicts the spatially continuous solution of a PDE at a given spatial query position by augmenting coordinate-based architectures with Graph Neural Networks (GNNs), enabling zero-shot generalization to new non-uniform meshes and physically consistent long-term predictions up to 250 frames ahead.
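
A minimal sketch of the mesh-agnostic query idea, assuming latent features live on mesh nodes (here produced by random placeholders rather than a real GNN encoder) and an implicit decoder predicts the solution at any continuous coordinate by interpolating nearby node latents:

```python
import torch
import torch.nn as nn

latent_dim = 16
# Stand-in for a GNN encoder's output: one latent vector per mesh node.
node_pos = torch.rand(100, 2)                  # irregular 2-D mesh nodes
node_latent = torch.randn(100, latent_dim)     # placeholder GNN features

# Implicit decoder: (interpolated latent, query coordinate) -> solution value.
decoder = nn.Sequential(nn.Linear(latent_dim + 2, 64), nn.ReLU(), nn.Linear(64, 1))

def query_solution(query):
    """Predict the PDE solution at arbitrary coordinates (mesh-free)."""
    dists = torch.cdist(query, node_pos)                  # (Q, N) distances
    w = torch.softmax(-dists, dim=-1)                     # soft nearest-node weights
    latent = w @ node_latent                              # interpolated latents
    return decoder(torch.cat([latent, query], dim=-1))    # (Q, 1)

# Because queries are continuous coordinates, the same model can be
# evaluated on a mesh it never saw during training.
print(query_solution(torch.rand(5, 2)).shape)
```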
Proceedings Article

Continuous-Time Meta-Learning with Forward Mode Differentiation

TL;DR: This work introduces Continuous-Time Meta-Learning (COMLN), a meta-learning algorithm in which adaptation follows the dynamics of a gradient vector field, and devises an efficient algorithm based on forward-mode differentiation whose memory requirements do not scale with the length of the learning trajectory, thus allowing longer adaptation in constant memory.
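
A minimal sketch of the constant-memory idea, assuming adaptation is a discretized gradient flow on a quadratic (linear-regression) loss and forward-mode differentiation propagates a tangent alongside the state; the loss, Euler discretization, and single tangent direction are simplifications, not COMLN's exact continuous-time derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
X, y = rng.normal(size=(n, d)), rng.normal(size=n)    # training task data
Xv, yv = rng.normal(size=(n, d)), rng.normal(size=n)  # validation task data

grad = lambda t: X.T @ (X @ t - y) / n                # train-loss gradient
hess = X.T @ X / n                                    # constant Hessian (quadratic loss)

def adapt_with_tangent(theta0, v, h=0.1, steps=1000):
    """Euler-discretized gradient flow with a forward-mode tangent.

    theta follows d(theta)/dt = -grad(theta); u tracks the directional
    derivative of theta(t) w.r.t. theta0 along direction v. Only the
    current (theta, u) pair is stored, so memory is constant in `steps`
    (reverse mode would store the whole adaptation trajectory).
    """
    theta, u = theta0.copy(), v.copy()
    for _ in range(steps):
        theta -= h * grad(theta)
        u -= h * (hess @ u)       # JVP of the Euler update rule
    return theta, u

theta0, v = rng.normal(size=d), np.eye(d)[0]          # probe direction e_1
theta_T, u_T = adapt_with_tangent(theta0, v)
# Directional derivative of the validation loss w.r.t. theta0 along v:
meta_dd = (Xv.T @ (Xv @ theta_T - yv) / n) @ u_T
print("d L_val / d theta0 . e1 =", meta_dd)
```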