
Showing papers by "Yoshua Bengio published in 1988"


Proceedings ArticleDOI
11 Apr 1988
TL;DR: The Boltzmann machine algorithm and the error back propagation algorithm were used to learn to recognize the place of articulation of vowels (front, center or back), represented by a static description of spectral lines, which shows a fault tolerant property of the neural nets.
Abstract: The Boltzmann machine algorithm and the error back propagation algorithm were used to learn to recognize the place of articulation of vowels (front, center or back), represented by a static description of spectral lines. The error rate is shown to depend on the coding. Results are comparable to or better than those obtained by us on the same data using hidden Markov models. The authors also show a fault tolerant property of the neural nets, i.e. that the error on the test set increases slowly and gradually when an increasing number of nodes fail.

16 citations
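The fault-tolerance property described above (test error degrading gradually as nodes fail) can be probed with a simple experiment: zero out an increasing number of hidden units in a trained network and measure how far the outputs drift. The sketch below is illustrative only; the network sizes, random weights, and distance measure are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 output classes could stand for front/center/back.
n_in, n_hidden, n_out = 8, 32, 3
W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hidden))

def forward(x, failed=()):
    """Forward pass with the hidden units listed in `failed` forced to 0."""
    h = np.tanh(W1 @ x)
    h[list(failed)] = 0.0  # simulate dead nodes
    return W2 @ h

x = rng.normal(size=n_in)
full = forward(x)

# Fail an increasing number of hidden units and watch the output drift.
for k in (0, 4, 8, 16):
    dead = rng.choice(n_hidden, size=k, replace=False)
    drift = np.linalg.norm(forward(x, dead) - full)
    print(f"{k:2d} failed units -> output drift {drift:.3f}")
```

Because each hidden unit carries only a small share of the representation, zeroing a few of them perturbs the output moderately rather than catastrophically, which is the graceful-degradation behaviour the abstract reports on real test data.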


Proceedings Article
21 Aug 1988
TL;DR: A set of Multi-Layered Networks for Automatic Speech Recognition (ASR) allows the integration of information extracted with variable resolution in the time and frequency domains and to keep the number of links between nodes of the networks small in order to allow significant generalization during learning with a reasonable training set size.
Abstract: A set of Multi-Layered Networks (MLN) for Automatic Speech Recognition (ASR) is proposed. Such a set allows the integration of information extracted with variable resolution in the time and frequency domains, and keeps the number of links between nodes of the networks small in order to allow significant generalization during learning with a reasonable training set size. Subsets of networks can be executed depending on preconditions based on descriptions of the time evolution of signal energies, allowing spectral properties that are significant in different acoustic situations to be learned. Preliminary experiments on speaker-independent recognition of the letters of the E-set are reported. Voices from 70 speakers were used for learning. Voices of 10 new speakers were used for test. An overall error rate of 9.5% was obtained in the test, showing that results better than those previously reported can be achieved.

6 citations
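The abstract's scheme of executing subsets of networks based on energy-evolution preconditions can be sketched as a dispatcher: a cheap test on the frame-energy trajectory decides which specialist network sees the input. Everything here is a hedged illustration; the energy test, network shapes, and thresholds are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_net(n_in, n_out):
    """A one-layer stand-in for a specialist Multi-Layered Network."""
    W = rng.normal(0.0, 0.3, (n_out, n_in))
    return lambda x: np.tanh(W @ x)

# Two illustrative specialists: one for bursty (transient-like) segments,
# one for steady-energy (sonorant-like) segments.
burst_net = make_net(16, 4)
steady_net = make_net(16, 4)

def classify(frames):
    """frames: (T, 16) spectral frames; route on the energy trajectory."""
    energy = (frames ** 2).sum(axis=1)          # per-frame signal energy
    rising_burst = energy[-1] > 2.0 * energy[0]  # crude, invented precondition
    net = burst_net if rising_burst else steady_net
    return net(frames.mean(axis=0))              # pooled input to the specialist

frames = rng.normal(size=(5, 16))
out = classify(frames)
```

The design motive matches the abstract: because only the subnetwork whose precondition fires is executed, each network can stay small (few links), which helps generalization from a modest training set.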


Proceedings Article
01 Jan 1988
TL;DR: A method that combines expertise on neural networks with expertise on speech recognition is used to build the recognition systems, and a model of the human auditory system is preferred to FFT as a front-end module for sonorant speech.
Abstract: Preliminary results on speaker-independent speech recognition are reported. A method that combines expertise on neural networks with expertise on speech recognition is used to build the recognition systems. For transient sounds, event-driven property extractors with variable resolution in the time and frequency domains are used. For sonorant speech, a model of the human auditory system is preferred to FFT as a front-end module.

4 citations


01 Jan 1988
TL;DR: The Boltzmann machine algorithm and the error back propagation algorithm were used to learn to recognize the place of articulation of vowels (front, center or back), represented by a static description of spectral lines.

Abstract: The Boltzmann machine algorithm and the error back propagation algorithm were used to learn to recognize the place of articulation of vowels (front, center or back), represented by a static description of spectral lines. The error rate is shown to depend on the coding. Results are comparable to or better than those obtained by us on the same data using Hidden Markov Models. We also show a fault tolerant property of the neural nets, i.e. that the error on the test set increases slowly and gradually when an increasing number of nodes fail.

1 citation