Yoshua Bengio

Researcher at Université de Montréal

Publications - 1146
Citations - 534376

Yoshua Bengio is an academic researcher from Université de Montréal. The author has contributed to research in topics: Artificial neural network & Deep learning. The author has an h-index of 202, has co-authored 1033 publications, and has received 420313 citations. Previous affiliations of Yoshua Bengio include McGill University & Centre de Recherches Mathématiques.

Papers
Posted Content

Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition.

TL;DR: This paper proposes integrating multiple feature views in a quaternion-valued convolutional neural network (QCNN), used for sequence-to-sequence mapping with the CTC model, and reports that QCNNs obtain a lower phoneme error rate (PER) with fewer learning parameters than a competing model based on real-valued CNNs.
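The operation behind the parameter savings can be illustrated with a minimal NumPy sketch (not the paper's code): quaternion convolution replaces the real multiply-accumulate with the Hamilton product, so four real channels share a single quaternion weight instead of each getting its own.

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions given as (r, x, y, z) arrays."""
    r1, x1, y1, z1 = p
    r2, x2, y2, z2 = q
    return np.array([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,
        r1*x2 + x1*r2 + y1*z2 - z1*y2,
        r1*y2 - x1*z2 + y1*r2 + z1*x2,
        r1*z2 + x1*y2 - y1*x2 + z1*r2,
    ])

def quaternion_conv1d(signal, kernel):
    """Valid-mode 1-D convolution where each sample and each weight
    is a quaternion.  signal: (T, 4), kernel: (K, 4) -> (T-K+1, 4)."""
    T, K = len(signal), len(kernel)
    out = np.zeros((T - K + 1, 4))
    for t in range(T - K + 1):
        for k in range(K):
            out[t] += hamilton_product(kernel[k], signal[t + k])
    return out
```

Because one quaternion weight (4 reals) mixes all four input components at once, a quaternion layer needs roughly a quarter of the parameters of a real-valued layer connecting the same number of real channels.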
Proceedings ArticleDOI

Tell, Draw, and Repeat: Generating and Modifying Images Based on Continual Linguistic Instruction

TL;DR: This paper presents a recurrent image generation model that takes into account both the output generated up to the current step and all past instructions when generating the next image.
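The conditioning scheme described above can be sketched in a few lines of toy NumPy; the hash-based text encoder, the tanh updates, and all names here are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(size=(8, 8)) * 0.1   # recurrent weights (random stand-ins)
W_x = rng.normal(size=(8, 8)) * 0.1   # input weights

def encode_instruction(instruction, dim=8):
    """Stand-in text encoder: hash words into a fixed-size bag vector."""
    v = np.zeros(dim)
    for w in instruction.split():
        v[hash(w) % dim] += 1.0
    return v

def update_context(h, instruction):
    """Recurrent context: folds each new instruction into the running state,
    so the state summarizes ALL past instructions."""
    return np.tanh(W_h @ h + W_x @ encode_instruction(instruction))

def generate_sequence(instructions, dim=8):
    """Each output (here just a vector standing in for an image) conditions
    on the previous output and on the accumulated instruction context."""
    h, image, outputs = np.zeros(dim), np.zeros(dim), []
    for ins in instructions:
        h = update_context(h, ins)
        image = np.tanh(image + h)    # stand-in generator step
        outputs.append(image)
    return outputs
```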

Neural Models for Key Phrase Detection and Question Generation

TL;DR: This paper proposes a two-stage neural model for question generation from documents, which first estimates the probability that word sequences in a document compose "interesting" answers, using a neural model trained on a question-answering corpus.
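The two-stage pipeline can be sketched as below; the span scorer and the question generator are trivial hypothetical stand-ins for the trained neural models, kept only to show how the stages compose.

```python
def extract_candidate_spans(doc, max_len=3):
    """Stage 1a: enumerate contiguous word spans up to max_len words."""
    words = doc.split()
    return [" ".join(words[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(words) - n + 1)]

def answer_score(span):
    """Stage 1b: stand-in for the trained 'interesting answer' scorer.
    Here: fraction of capitalized words (a toy heuristic)."""
    words = span.split()
    return sum(w[0].isupper() for w in words) / len(words)

def generate_question(doc, answer):
    """Stage 2: stand-in for the seq2seq question generator."""
    return f"What does the document say about {answer}?"

def two_stage_qg(doc, top_k=1):
    """Pipeline: score candidate answers, then generate a question per answer."""
    spans = extract_candidate_spans(doc)
    best = sorted(spans, key=answer_score, reverse=True)[:top_k]
    return [(a, generate_question(doc, a)) for a in best]
```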
Proceedings Article

Learning concept embeddings for query expansion by quantum entropy minimization

TL;DR: A novel method for learning, in a supervised way, semantic representations of words and phrases by embedding queries and documents in special matrices, which offers greater representational power than existing approaches that use vector representations.
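The extra representational power of matrix embeddings can be illustrated with a minimal sketch, assuming the common density-matrix construction: a text becomes a trace-one positive semidefinite matrix built from its word vectors, and matching is a trace inner product rather than a dot product. This is an illustrative reading of "special matrices", not the paper's exact formulation.

```python
import numpy as np

def density_matrix(word_vectors):
    """Embed a text as a trace-one positive semidefinite matrix:
    the normalized sum of outer products of its (nonzero) word vectors."""
    M = sum(np.outer(v, v) for v in word_vectors)
    return M / np.trace(M)

def match_score(Q, D):
    """Similarity between two such matrices via the trace inner product;
    1.0 for identical rank-one states, 0.0 for orthogonal ones."""
    return float(np.trace(Q @ D))
```

Unlike a single vector, the matrix can represent a mixture of several meanings of a query at once, which is the sense in which it has more representational power.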
Proceedings ArticleDOI

Use of genetic programming for the search of a new learning rule for neural networks

TL;DR: Experiments on classification tasks suggest that genetic programming finds better learning rules than other optimization methods; the best rule found with genetic programming outperformed the well-known backpropagation algorithm on a given set of tasks.
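The idea of searching over a space of learning rules can be sketched as follows; a real GP system would grow and mutate rule expressions, whereas this toy simply enumerates a few hand-written candidates (all rules and constants here are illustrative, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate update rules dw = f(x, err, w): a tiny "program" space that a
# GP search would normally evolve; here we just enumerate it.
CANDIDATE_RULES = {
    "delta":   lambda x, err, w: 0.05 * err * x,      # classic delta rule
    "hebbian": lambda x, err, w: 0.05 * x * (x @ w),  # unsupervised Hebb
    "decay":   lambda x, err, w: -0.01 * w,           # pure weight decay
}

def train_with_rule(rule, X, y, steps=200):
    """Fitness evaluation: train a linear unit with the rule, return MSE."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        i = rng.integers(len(X))
        err = y[i] - X[i] @ w
        w = w + rule(X[i], err, w)
    return np.mean((y - X @ w) ** 2)

def search_best_rule(X, y):
    """Pick the rule whose trained model has the lowest final error."""
    losses = {name: train_with_rule(r, X, y)
              for name, r in CANDIDATE_RULES.items()}
    return min(losses, key=losses.get)
```

On a noiseless linear regression problem the error-driven delta rule should dominate the error-blind candidates, which is the kind of ranking the fitness function is meant to expose.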