Yoshua Bengio

Researcher at Université de Montréal

Publications: 1,146
Citations: 534,376

Yoshua Bengio is an academic researcher at Université de Montréal. His work centers on artificial neural networks and deep learning. He has an h-index of 202 and has co-authored 1,033 publications receiving 420,313 citations. Previous affiliations include McGill University and the Centre de Recherches Mathématiques.

Papers
Proceedings Article (DOI)

Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL

TL;DR: This work proposes an alternative method that recovers, in a non-uniform-prior setting, the expressiveness and desired properties of the Laplacian representation, and shows that simply augmenting the representation objective with the learned temporal abstractions improves dynamics-awareness and helps exploration.
Posted Content

Modularity Matters: Learning Invariant Relational Reasoning Tasks

TL;DR: Experimental results support the hypothesis that modularity is a robust prior for learning invariant relational reasoning, and show that very shallow ResMixNet models are capable of learning each of the two tasks well, attaining less than 2% and 1% test error on the MNIST Parity and the colorized Pentomino tasks respectively.
Journal Article (DOI)

Factorized embeddings learns rich and biologically meaningful embedding spaces using factorized tensor decomposition.

TL;DR: The factorized embeddings (FE) model is presented: a self-supervised deep learning algorithm that simultaneously learns gene and sample representation spaces by tensor factorization. The sample representation is found to capture information on both single-gene and global gene-expression patterns.
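The FE model itself is a deep self-supervised network, but the core idea the summary describes, jointly learning a sample embedding table and a gene embedding table whose interaction reconstructs expression values, can be illustrated with a minimal NumPy sketch. This is a rough linear-factorization analogue, not the authors' implementation; the array names, dimensions, and synthetic data are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes, d = 30, 40, 5  # hypothetical sizes

# Synthetic low-rank "expression" matrix standing in for real data.
true_s = rng.normal(size=(n_samples, d))
true_g = rng.normal(size=(n_genes, d))
X = true_s @ true_g.T

# Learnable sample and gene embedding tables, small random init.
S = rng.normal(scale=0.1, size=(n_samples, d))
G = rng.normal(scale=0.1, size=(n_genes, d))

lr = 0.01
for step in range(2000):
    E = S @ G.T - X              # reconstruction residual
    grad_S = E @ G / n_genes     # gradient of mean squared error w.r.t. S
    grad_G = E.T @ S / n_samples # gradient w.r.t. G
    S -= lr * grad_S
    G -= lr * grad_G

final_loss = np.mean((S @ G.T - X) ** 2)
print(final_loss)
```

After training, each row of `S` is a learned sample representation and each row of `G` a gene representation; in the FE model these tables feed a deep network rather than a plain dot product.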
Proceedings Article

Adversarial domain adaptation for stable brain-machine interfaces

TL;DR: In this paper, an adversarial domain adaptation network is used to match the empirical probability distributions of the residuals of the reconstructed neural signals, with the aim of keeping a brain-machine interface stable across recording sessions.
Posted Content

Monaural Singing Voice Separation with Skip-Filtering Connections and Recurrent Inference of Time-Frequency Mask

TL;DR: In this paper, the authors propose a method that learns and optimizes, during training, a source-dependent time-frequency mask and therefore does not need the aforementioned post-processing step. It shows an increase of 0.49 dB in signal-to-distortion ratio and 0.30 dB in signal-to-interference ratio over previous state-of-the-art approaches for monaural singing voice separation.