Yoshua Bengio

Researcher at Université de Montréal

Publications: 1146
Citations: 534376

Yoshua Bengio is an academic researcher at Université de Montréal. His research topics include artificial neural networks and deep learning. He has an h-index of 202 and has co-authored 1033 publications receiving 420313 citations. Previous affiliations of Yoshua Bengio include McGill University and the Centre de Recherches Mathématiques.

Papers
Posted Content

Manifold Mixup: Encouraging Meaningful On-Manifold Interpolation as a Regularizer.

TL;DR: This work proposes Manifold Mixup, a regularizer that trains on convex combinations of the hidden-state representations of data samples, encouraging the network to produce more reasonable and less confident predictions at points with combinations of attributes not seen in the training set.
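
For illustration, a minimal PyTorch sketch of the core mixing step, assuming a generic encoder/classifier split, one-hot labels, and a Beta-sampled mixing coefficient; module names and the alpha value are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

# Minimal sketch of the Manifold Mixup step: mix *hidden* representations of
# pairs of samples (and their one-hot labels) with a Beta-sampled coefficient.
# The encoder/classifier split and alpha value are illustrative assumptions.

class ManifoldMixupNet(nn.Module):
    def __init__(self, encoder: nn.Module, classifier: nn.Module, alpha: float = 2.0):
        super().__init__()
        self.encoder, self.classifier, self.alpha = encoder, classifier, alpha

    def forward(self, x, y_onehot):
        h = self.encoder(x)                                   # hidden states
        lam = torch.distributions.Beta(self.alpha, self.alpha).sample()
        perm = torch.randperm(x.size(0))                      # random pairing
        h_mix = lam * h + (1 - lam) * h[perm]                 # convex combination
        y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]   # mixed soft labels
        return self.classifier(h_mix), y_mix                  # train with soft-target CE
```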
Posted Content

Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning

TL;DR: A novel forward-synthesis framework powered by reinforcement learning (RL) for de novo drug design, Policy Gradient for Forward Synthesis (PGFS), which embeds the concept of synthetic accessibility directly into the de novo drug design system.
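
PGFS belongs to the policy-gradient family of RL methods. Below is a generic REINFORCE skeleton of that family of training loops, with a placeholder environment standing in for the paper's chemistry stack (reactant selection, reaction execution, molecule scoring); it sketches the family of methods, not the paper's actual algorithm:

```python
import torch
import torch.nn as nn

# Generic REINFORCE skeleton for the policy-gradient family PGFS belongs to.
# `env` is a placeholder with reset()/step() returning a state tensor, a
# scalar reward, and a done flag; in the paper's setting the action would
# pick a reactant/reaction and the reward would score the product molecule.

def reinforce_episode(policy: nn.Module, env, optimizer, gamma: float = 0.99):
    state = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()                  # choose the next synthesis step
        log_probs.append(dist.log_prob(action))
        state, reward, done = env.step(action.item())
        rewards.append(reward)
    # Discounted returns, computed backwards over the episode.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs) * returns).sum()   # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return sum(rewards)
```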
Posted Content

N-BEATS: Neural basis expansion analysis for interpretable time series forecasting.

TL;DR: The authors propose a deep neural architecture for univariate time-series point forecasting based on backward and forward residual links and a very deep stack of fully connected layers. The architecture has several desirable properties: it is interpretable, applicable without modification to a wide array of target domains, and fast to train.
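
The doubly residual idea can be sketched compactly: each block emits a "backcast" that is subtracted from its input (the backward residual link) and a "forecast" that is accumulated across blocks (the forward residual link). The widths, depths, and block counts below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Sketch of doubly residual stacking in the spirit of N-BEATS.
# Layer widths, depths, and block counts are illustrative assumptions.

class NBeatsBlock(nn.Module):
    def __init__(self, backcast_len: int, forecast_len: int,
                 width: int = 256, depth: int = 4):
        super().__init__()
        layers = [nn.Linear(backcast_len, width), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ReLU()]
        self.fc = nn.Sequential(*layers)
        self.to_backcast = nn.Linear(width, backcast_len)
        self.to_forecast = nn.Linear(width, forecast_len)

    def forward(self, x):
        h = self.fc(x)
        return self.to_backcast(h), self.to_forecast(h)

class NBeats(nn.Module):
    def __init__(self, backcast_len: int, forecast_len: int, n_blocks: int = 6):
        super().__init__()
        self.blocks = nn.ModuleList(
            [NBeatsBlock(backcast_len, forecast_len) for _ in range(n_blocks)])

    def forward(self, x):
        forecast = 0.0
        for block in self.blocks:
            backcast, f = block(x)
            x = x - backcast            # backward residual link
            forecast = forecast + f     # forward residual link
        return forecast
```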
Journal Article

Using a Financial Training Criterion Rather than a Prediction Criterion

TL;DR: On noisy time series, better results are obtained when the model is trained directly to maximize the financial criterion of interest, here the gains and losses incurred during trading, rather than a prediction criterion.
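
A hedged sketch of the idea in PyTorch: replace a squared prediction error with a differentiable trading objective (profit net of transaction costs), so gradients flow through the financial criterion itself. The position encoding and cost rate below are illustrative assumptions:

```python
import torch

# Sketch of training on a financial criterion rather than a prediction
# criterion: the loss is negative trading profit net of transaction costs.
# The position encoding (exposure in [-1, 1] per step) and the cost rate
# are illustrative assumptions, not the paper's exact formulation.

def negative_trading_profit(positions, returns, cost_rate: float = 1e-3):
    # positions: (T,) model-decided exposures, e.g. tanh of a network output
    # returns:   (T,) realized per-step asset returns
    pnl = positions * returns                               # per-step gains/losses
    turnover = torch.abs(positions[1:] - positions[:-1])    # position changes
    return -(pnl.sum() - cost_rate * turnover.sum())        # minimize = maximize profit
```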
Proceedings Article

Wasserstein Dependency Measure for Representation Learning

TL;DR: It is empirically demonstrated that mutual-information-based representation learning approaches fail to learn complete representations on a number of designed and real-world tasks. A practical approximation to the theoretically motivated solution, constructed using Lipschitz-constraint techniques from the GAN literature, achieves substantially improved results on tasks where incomplete representations are a major challenge.
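
The measure can be sketched as a Lipschitz-constrained critic contrasting joint samples (x, y) with product-of-marginals samples (x, y'). Here spectral normalization stands in for the Lipschitz constraint, as in the GAN literature; the critic architecture is an illustrative assumption:

```python
import torch
import torch.nn as nn

# Sketch of a Wasserstein-style dependency bound: a Lipschitz-constrained
# critic scores joint pairs above shuffled (product-of-marginals) pairs.
# Spectral normalization approximates the Lipschitz constraint; the critic
# architecture and widths are illustrative assumptions.

class Critic(nn.Module):
    def __init__(self, dim_x: int, dim_y: int, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(dim_x + dim_y, width)),
            nn.ReLU(),
            nn.utils.spectral_norm(nn.Linear(width, 1)))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def wdm_lower_bound(critic: Critic, x, y):
    y_marginal = y[torch.randperm(y.size(0))]   # break the pairing
    return critic(x, y).mean() - critic(x, y_marginal).mean()
```

Training maximizes this bound with respect to the critic (and, in representation learning, the encoder producing x and y), by gradient ascent.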