Journal ArticleDOI

Estimation of elliptical basis function parameters by the EM algorithm with application to speaker verification

Man-Wai Mak, +1 more
01 Jul 2000 · Vol. 11, Iss. 4, pp. 961-969
TLDR
Experimental results show that small EBF networks with basis function parameters estimated by the EM algorithm outperform large RBF networks trained in the conventional manner.
Abstract
This paper proposes to incorporate full covariance matrices into radial basis function (RBF) networks and to use the expectation-maximization (EM) algorithm to estimate the basis function parameters. The resulting networks, referred to as elliptical basis function (EBF) networks, are evaluated through a series of text-independent speaker verification experiments involving 258 speakers from a phonetically balanced, continuous-speech corpus (TIMIT). We propose a verification procedure using RBF and EBF networks as speaker models and show that the networks are readily applicable to verifying speakers using LP-derived cepstral coefficients as features. Experimental results show that small EBF networks with basis function parameters estimated by the EM algorithm outperform large RBF networks trained in the conventional manner. The results also show that the equal error rate achieved by the EBF networks is about two-thirds of that achieved by the vector quantization-based speaker models.
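To make the approach concrete, here is a minimal sketch of an EBF network under stated assumptions: the full-covariance Gaussian basis functions are estimated by EM using scikit-learn's GaussianMixture (the paper predates this library; it simply implements the same EM estimation), and the linear output weights are then solved by regularized least squares. The function name, basis function count, and regularization constant are all illustrative.

```python
# Minimal sketch of an elliptical basis function (EBF) network:
# full-covariance Gaussians fitted by EM form the hidden layer,
# and the output weights are solved by regularized least squares.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ebf(X, T, n_basis=8, reg=1e-6):
    # EM estimation of centers (means) and full covariance matrices
    gmm = GaussianMixture(n_components=n_basis, covariance_type="full").fit(X)
    # Hidden-layer activations: Gaussian kernels over Mahalanobis distances
    H = np.column_stack([
        np.exp(-0.5 * np.einsum("nd,dk,nk->n",
                                X - mu, np.linalg.inv(cov), X - mu))
        for mu, cov in zip(gmm.means_, gmm.covariances_)
    ])
    H = np.hstack([H, np.ones((len(X), 1))])  # bias unit
    # Regularized least-squares solution for the output weights
    W = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)
    return gmm, W
```

In a verification setting, the trained network's score on a claimant's feature vectors would then be compared against a decision threshold to accept or reject the identity claim.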


Citations
Journal ArticleDOI

Variational learning for Gaussian mixture models

TL;DR: This paper develops a hyperparameter initialization procedure for a training algorithm that combines maximum likelihood and Bayesian methodology to estimate Gaussian mixture models; the approach is applied to blind signal detection and color image segmentation.
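As a rough illustration of variational learning for Gaussian mixtures (not the cited paper's specific hyperparameter initialization), scikit-learn's BayesianGaussianMixture fits a mixture by variational inference and shrinks the weights of unneeded components, so the effective model complexity is inferred from the data:

```python
# Illustrative variational GMM: over-provision components and let the
# Bayesian prior prune the ones the data does not support.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

X = np.vstack([np.random.randn(200, 2) + c for c in ([0, 0], [5, 5])])
vb_gmm = BayesianGaussianMixture(
    n_components=10,                  # deliberately more than needed
    weight_concentration_prior=1e-2,  # small prior favors few components
    covariance_type="full",
).fit(X)
print(np.round(vb_gmm.weights_, 3))   # most weights shrink toward zero
```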
Journal ArticleDOI

Recurrent radial basis function network for time-series prediction

TL;DR: The advantage of the proposed RRBF network is that it combines the learning flexibility of the RBF network with the dynamic behavior provided by the local recurrence of its looped neurons.
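The cited architecture is not reproduced here, but a heavily simplified sketch of one way to give an RBF predictor local recurrence is to feed a delayed output back as an extra input; the window handling and parameter names below are hypothetical stand-ins for the paper's looped neurons:

```python
# Hypothetical sketch: an RBF time-series predictor whose previous
# output is fed back as an extra input (a crude stand-in for the
# looped neurons of the cited RRBF network).
import numpy as np

def rbf_layer(x, centers, width):
    # Gaussian activations of locally tuned units
    return np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * width ** 2))

def predict_series(x0, steps, centers, width, w):
    x, y_prev, out = x0.copy(), 0.0, []
    for _ in range(steps):
        h = rbf_layer(np.append(x, y_prev), centers, width)  # recurrent input
        y = h @ w
        out.append(y)
        x = np.roll(x, -1)
        x[-1] = y          # slide the window, reusing the prediction
        y_prev = y
    return np.array(out)
```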
Journal ArticleDOI

Variational Learning for Finite Dirichlet Mixture Models and Applications

TL;DR: This paper focuses on the variational learning of finite Dirichlet mixture models and argues that the approach has several advantages: over-fitting is prevented, and the complexity of the mixture model is determined automatically, simultaneously with parameter estimation, as part of the Bayesian inference procedure.
Proceedings ArticleDOI

Empirical Analysis of Optimal Hidden Neurons in Neural Network Modeling for Stock Prediction

TL;DR: This paper presents a sensitivity analysis of the number of hidden layers and hidden neurons in neural network modeling for stock price prediction; the configuration with the minimum estimated generalization error is selected as the optimum for the application.
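The selection procedure the TL;DR describes can be sketched generically: sweep candidate hidden-layer sizes and keep the one with the lowest cross-validated error as the estimate of generalization error. The model class and candidate set below are illustrative, not the paper's exact setup:

```python
# Sketch: choose the hidden-layer size with the minimum estimated
# generalization error (cross-validated MSE). Illustrative settings.
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def best_hidden_size(X, y, candidates=(2, 4, 8, 16, 32)):
    scores = {}
    for h in candidates:
        net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000)
        mse = -cross_val_score(net, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        scores[h] = mse
    return min(scores, key=scores.get), scores
```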
Patent

Speaker verification system

TL;DR: This patent describes a text-independent speaker verification system that uses mel-frequency cepstral coefficient (MFCC) analysis in its feature extraction blocks, vector-quantization template modeling in its pattern matching blocks, and an adaptive threshold with an adaptive decision verdict; it is implemented as a stand-alone device using less powerful microprocessors and smaller data storage devices.
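A compact sketch of the MFCC-plus-vector-quantization pipeline the TL;DR describes: enrollment builds a k-means codebook over a speaker's MFCC frames, and verification compares the average quantization distortion of a test utterance against a threshold. The libraries, codebook size, and fixed threshold are illustrative; the patent's adaptive threshold and decision logic are not reproduced:

```python
# Sketch of MFCC feature extraction + VQ template matching for
# speaker verification. Library choices are illustrative.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def enroll(wav_path, codebook_size=64):
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # frames x coeffs
    return KMeans(n_clusters=codebook_size).fit(mfcc)

def verify(wav_path, codebook, threshold):
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
    # Mean distance to the nearest codeword (quantization distortion)
    distortion = np.min(codebook.transform(mfcc), axis=1).mean()
    return distortion < threshold  # accept iff the template fits well
```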
References
Journal ArticleDOI

An Algorithm for Vector Quantizer Design

TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
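This reference is the classic LBG (Linde-Buzo-Gray) algorithm; a minimal NumPy sketch of its splitting-plus-Lloyd-refinement design loop follows (the perturbation constant and iteration counts are illustrative):

```python
# Minimal LBG sketch: start from the global centroid, split each
# codeword by a small perturbation, then refine with Lloyd iterations.
# Codebook size doubles per split, so target_size should be a power of 2.
import numpy as np

def lbg(X, target_size=16, eps=1e-3, n_lloyd=20):
    codebook = X.mean(axis=0, keepdims=True)
    while len(codebook) < target_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_lloyd):
            # Assign each vector to its nearest codeword
            d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            # Centroid update; keep the old codeword if a cell is empty
            for k in range(len(codebook)):
                if np.any(labels == k):
                    codebook[k] = X[labels == k].mean(axis=0)
    return codebook
```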
Journal ArticleDOI

Fast learning in networks of locally-tuned processing units

TL;DR: This work proposes a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken 1988).
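A brief sketch in the spirit of the cited scheme: unit centers from k-means, a simple global width heuristic, and linear output weights from least squares. The width heuristic and unit count are illustrative choices, not necessarily those of Moody and Darken:

```python
# Sketch of an RBF network with locally tuned units: k-means centers,
# a global width heuristic, and least-squares output weights.
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, T, n_units=10):
    centers = KMeans(n_clusters=n_units).fit(X).cluster_centers_
    # Width heuristic: mean distance between distinct centers
    dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    sigma = dists[dists > 0].mean()
    H = np.exp(-((X[:, None] - centers[None, :]) ** 2).sum(-1)
               / (2 * sigma ** 2))
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return centers, sigma, W
```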
Journal ArticleDOI

Probabilistic neural networks

TL;DR: A probabilistic neural network is formed that can compute nonlinear decision boundaries approaching the Bayes optimal, and a four-layer network of the proposed type can map any input pattern to any number of classifications.
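The decision rule of such a network can be sketched as a Parzen-window classifier: each class's density is estimated by a sum of Gaussian kernels over its training points, and a pattern is assigned to the class with the largest estimated density. The smoothing parameter below is illustrative:

```python
# Sketch of a probabilistic neural network (PNN) decision rule:
# per-class Parzen density estimates with Gaussian kernels.
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        scores[c] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)
```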