Yoshua Bengio
Researcher at Université de Montréal
Publications - 1146
Citations - 534376
Yoshua Bengio is an academic researcher at Université de Montréal whose work focuses on artificial neural networks and deep learning. He has an h-index of 202 and has co-authored 1033 publications receiving 420313 citations. His previous affiliations include McGill University and the Centre de Recherches Mathématiques.
Papers
Posted Content
On the Spectral Bias of Deep Neural Networks
Nasim Rahaman, Devansh Arpit, Aristide Baratin, Felix Draxler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, Aaron Courville, +7 more
TL;DR: It is shown that deep networks with finite weights (or trained for a finite number of steps) are inherently biased toward representing smooth functions over the input space, and that all samples a network classifies into a given class are connected by a path along which the network's prediction does not change.
Posted Content
Incorporating Second-Order Functional Knowledge for Better Option Pricing
TL;DR: In this article, a class of functions similar to multi-layer neural networks is proposed for modeling the price of call options, where the function to be learned is non-decreasing in its two arguments and convex in one of them.
Posted Content
Large-Scale Feature Learning With Spike-and-Slab Sparse Coding
TL;DR: Spike-and-slab sparse coding (S3C) as discussed by the authors is a feature learning and extraction procedure based on a factor model for object recognition with a large number of classes.
Posted Content
Spike-and-Slab Sparse Coding for Unsupervised Feature Discovery
TL;DR: This work derives a structured variational inference procedure and employs a variational EM training algorithm to improve upon the supervised learning capabilities of both sparse coding and the ssRBM on the CIFAR-10 dataset.
Posted Content
Learning Anonymized Representations with Adversarial Neural Networks
TL;DR: A novel training objective is introduced for simultaneously training a predictor over target variables of interest (the regular labels) while preventing an intermediate representation from being predictive of the private labels.