Yoshua Bengio
Researcher at Université de Montréal
Publications - 1146
Citations - 534376
Yoshua Bengio is an academic researcher at Université de Montréal whose work centers on artificial neural networks and deep learning. He has an h-index of 202 and has co-authored 1033 publications receiving 420313 citations. His previous affiliations include McGill University and the Centre de Recherches Mathématiques.
Papers
Proceedings Article
The Curse of Highly Variable Functions for Local Kernel Machines
TL;DR: A series of theoretical arguments is presented supporting the claim that a large class of modern learning algorithms relying solely on the smoothness prior, with similarity between examples expressed through a local kernel, is sensitive to the curse of dimensionality, or more precisely to the variability of the target function.
Proceedings Article
K-Local Hyperplane and Convex Distance Nearest Neighbor Algorithms
Pascal Vincent, Yoshua Bengio +1 more
TL;DR: Experimental results on real-world classification tasks suggest that the modified K-Nearest Neighbor algorithms often give a dramatic improvement over standard KNN and perform as well as or better than SVMs.
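The core idea of the K-local hyperplane variant is to measure the distance from a test point not to individual neighbors but to the affine hull spanned by its K nearest neighbors of each class. A minimal sketch of that general form, using a least-squares projection (an illustrative reconstruction, not the paper's actual code; function names and the least-squares formulation are assumptions):

```python
import numpy as np

def hyperplane_distance(x, neighbors):
    """Distance from x to the affine hull of a set of neighbors.

    The hull is parameterized as c + V @ a, where c is the neighbor
    centroid and the columns of V are the centered neighbors; the best
    coefficients a are found by least squares.
    """
    c = neighbors.mean(axis=0)
    V = (neighbors - c).T                      # shape (d, K)
    a, *_ = np.linalg.lstsq(V, x - c, rcond=None)
    proj = c + V @ a                           # closest point on the hull
    return np.linalg.norm(x - proj)

def hknn_classify(x, X, y, K=3):
    """Assign x to the class whose K-neighbor hyperplane is closest."""
    dists = {}
    for label in np.unique(y):
        Xc = X[y == label]
        k = min(K, len(Xc))
        idx = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        dists[label] = hyperplane_distance(x, Xc[idx])
    return min(dists, key=dists.get)
```

With K=1 this reduces to ordinary nearest-neighbor distance; larger K lets each class locally "fill in" the space between its samples, which is the intuition behind the reported improvement over standard KNN.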
Journal ArticleDOI
Predicting COVID-19 Pneumonia Severity on Chest X-ray With Deep Learning
Joseph Paul Cohen, Lan Dao, Karsten Roth, Paul Morrison, Yoshua Bengio, Almas F. Abbasi, Beiyi Shen, Hoshmand Kochi Mahsa, Marzyeh Ghassemi, Haifang Li, Timothy Q. Duong +10 more
TL;DR: The results indicate that the model’s ability to gauge the severity of COVID-19 lung infections could be used for escalation or de-escalation of care as well as monitoring treatment efficacy, especially in the ICU.
Proceedings Article
Image-to-image translation for cross-domain disentanglement
TL;DR: The proposed approach achieves better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets, and is compared against the state of the art in multi-modal image translation.
Proceedings Article
Efficient Non-Parametric Function Induction in Semi-Supervised Learning
TL;DR: The proposed non-parametric algorithms, which provide an estimated continuous label for the given unlabeled examples, are extended to function induction algorithms that minimize a regularization criterion applied to an out-of-sample example; the resulting predictor turns out to have the form of a Parzen windows regressor, and experiments support its effectiveness.
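The induction step described above has the general shape of a kernel-weighted average: an out-of-sample point receives a label from the kernel-smoothed labels of the training points, i.e. a Parzen windows (Nadaraya-Watson style) regressor. A minimal sketch of that form, assuming a Gaussian kernel and a chosen width `sigma` (illustrative only, not the paper's code):

```python
import numpy as np

def parzen_induction(x, X, y_hat, sigma=1.0):
    """Out-of-sample label via a Gaussian Parzen windows regressor.

    X holds the training points and y_hat their (possibly estimated)
    continuous labels from the transductive step; sigma is an assumed
    kernel width. Returns the kernel-weighted average label at x.
    """
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma ** 2))
    return w @ y_hat / w.sum()
```

Because the prediction at x needs no retraining, the same formula serves as the out-of-sample extension of the transductive algorithm, which is the efficiency point the paper's title emphasizes.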