Topic

Speaker recognition

About: Speaker recognition is a research topic. Over its lifetime, 14,990 publications have been published within this topic, receiving 310,061 citations.


Papers
Patent
06 May 1999
TL;DR: A recognition system for use in a vehicle or the like includes a handwriting recognizer (44) and a voice recognizer (38) for receiving handwriting and voice signals, where the signals are associated with commands used to operate a variety of vehicle appliances.
Abstract: A recognition system (20) for use in a vehicle or the like includes a handwriting recognizer (44) and a voice recognizer (38) for receiving handwriting and voice signals, where the signals are associated with commands used to operate a variety of vehicle appliances. Such appliances may include, but are not limited to, car alarms (32), electric windows, personal computers (28), navigation systems (26), and audio (30) and telecommunications equipment (24).
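For illustration only, the sketch below shows one way such command routing could look in code: text output from a voice or handwriting recognizer is mapped to appliance actions. Every name here (the VehicleCommandRouter class, the command strings, the printed actions) is a hypothetical stand-in; the patent does not specify an implementation.

```python
# Minimal sketch of routing recognized commands to vehicle appliances.
# All class names, command strings, and actions are hypothetical illustrations.
from typing import Callable, Dict


class VehicleCommandRouter:
    """Maps command strings (from a voice or handwriting recognizer) to actions."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], None]] = {}

    def register(self, command: str, handler: Callable[[], None]) -> None:
        self._handlers[command.lower()] = handler

    def dispatch(self, recognized_text: str) -> bool:
        handler = self._handlers.get(recognized_text.strip().lower())
        if handler is None:
            return False  # unrecognized command; ignore or ask for confirmation
        handler()
        return True


router = VehicleCommandRouter()
router.register("arm alarm", lambda: print("car alarm armed"))
router.register("open window", lambda: print("driver window opening"))
router.register("start navigation", lambda: print("navigation system started"))

# A recognizer (voice or handwriting) would feed its text output here:
router.dispatch("Open Window")
```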

68 citations

Proceedings ArticleDOI
17 Oct 2005-Scopus
TL;DR: This paper distinguishes between standard American English and Indian Accented English using the second and third formant frequencies of specific accent markers to achieve a suitable classification for these two accent groups.
Abstract: Apart from the word content and the identity of a speaker, speech also conveys information about several soft biometric traits such as accent and gender. Accurate classification of these traits can have a direct impact on present speech systems. An accent-specific dictionary or word models can be used to improve the accuracy of speech recognition systems. Gender and accent information can also be used to improve the performance of speaker recognition systems. In this paper, we distinguish between standard American English and Indian-accented English using the second and third formant frequencies of specific accent markers. GMM classification is applied to the feature set for each accent group. The results show that using just the formant frequencies of these accent markers is sufficient to achieve a suitable classification for these two accent groups.
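As a hedged sketch of the classification step described in this abstract (not the authors' exact setup), one GMM can be fit per accent group on (F2, F3) formant features and a test utterance assigned to the group with the higher log-likelihood. The training data below is synthetic and the formant values are placeholders; scikit-learn's GaussianMixture stands in for whatever GMM implementation the paper used.

```python
# Sketch: per-accent GMMs over (F2, F3) formant features. Training data is
# synthetic; real features would come from formant tracking of accent markers.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder (F2, F3) samples in Hz for two accent groups.
american = rng.normal(loc=[1700.0, 2600.0], scale=[150.0, 200.0], size=(200, 2))
indian = rng.normal(loc=[1900.0, 2800.0], scale=[150.0, 200.0], size=(200, 2))

# One GMM per accent group.
gmm_us = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(american)
gmm_in = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(indian)

def classify(formants: np.ndarray) -> str:
    """Assign the utterance to the accent whose GMM gives the higher mean log-likelihood."""
    score_us = gmm_us.score(formants)   # mean per-sample log-likelihood
    score_in = gmm_in.score(formants)
    return "American English" if score_us > score_in else "Indian-accented English"

test = rng.normal(loc=[1880.0, 2790.0], scale=[150.0, 200.0], size=(30, 2))
print(classify(test))
```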

68 citations

Proceedings ArticleDOI
07 May 2001
TL;DR: The aim is to apply speaker grouping information to speaker adaptation for speech recognition, using vector quantization (VQ) distortion as the criterion; experiments show the superiority of the proposed method.
Abstract: Addresses the problem of detecting speaker changes and clustering speakers when no information is available regarding speaker classes or even the total number of classes. We assume that no previous information on speakers is available (no speaker model, no training phase) and that people do not speak simultaneously. The aim is to apply speaker grouping information to speaker adaptation for speech recognition. We use vector quantization (VQ) distortion as the criterion. A speaker model is created from successive utterances as a codebook by a VQ algorithm, and the VQ distortion is calculated between the model and an utterance. Experiments were conducted on speaker change detection and speaker clustering. The speaker change detection results were compared with those obtained using the generalized likelihood ratio and the Bayesian information criterion, and demonstrate the superiority of our proposed method.
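A minimal sketch of the VQ-distortion criterion, assuming k-means codebooks over frame-level feature vectors: a speaker change is flagged when the distortion of a new utterance against the current speaker's codebook clearly exceeds the in-speaker distortion. The features are synthetic and the codebook size and threshold are illustrative, not taken from the paper.

```python
# Sketch of VQ-distortion-based speaker change detection: build a codebook
# from the current speaker's frames, then measure how well new frames fit it.
import numpy as np
from sklearn.cluster import KMeans

def train_codebook(frames: np.ndarray, codebook_size: int = 16) -> np.ndarray:
    """VQ codebook = k-means centroids over frame-level feature vectors (e.g. MFCCs)."""
    km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(frames)
    return km.cluster_centers_

def vq_distortion(frames: np.ndarray, codebook: np.ndarray) -> float:
    """Average distance from each frame to its nearest codeword."""
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

rng = np.random.default_rng(1)
speaker_a = rng.normal(0.0, 1.0, size=(500, 12))   # stand-in for one speaker's frames
speaker_b = rng.normal(2.0, 1.0, size=(200, 12))   # a different speaker

codebook = train_codebook(speaker_a)
same = vq_distortion(rng.normal(0.0, 1.0, size=(200, 12)), codebook)
other = vq_distortion(speaker_b, codebook)

THRESHOLD = 1.5 * same   # illustrative threshold relative to in-speaker distortion
print(f"same-speaker distortion {same:.2f}, new-utterance distortion {other:.2f}")
print("speaker change detected" if other > THRESHOLD else "same speaker")
```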

68 citations

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This work formulates a learning framework for DBNs based on error feedback and statistical boosting theory and applies it to the problem of audio/visual speaker detection in an interactive kiosk environment using "off-the-shelf" visual and audio sensors.
Abstract: Design and development of novel human-computer interfaces poses a challenging problem: actions and intentions of users have to be inferred from sequences of noisy and ambiguous multi-sensory data such as video and sound. Temporal fusion of multiple sensors has been efficiently formulated using dynamic Bayesian networks (DBNs), which allow the power of statistical inference and learning to be combined with contextual knowledge of the problem. Unfortunately, simple learning methods can cause such appealing models to fail when the data exhibits complex behavior. We formulate a learning framework for DBNs based on error feedback and statistical boosting theory. We apply this framework to the problem of audio/visual speaker detection in an interactive kiosk environment using "off-the-shelf" visual and audio sensors (face, skin, texture, mouth motion, and silence detectors). Detection results obtained in this setup demonstrate the superiority of our learning framework over classical ML learning in DBNs.
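The temporal-fusion idea can be illustrated with the simplest possible DBN, a two-state hidden Markov model whose hidden "speaker present" state emits independent binary detector outputs (face, mouth motion, non-silence). The forward filter below is only a sketch of that fusion step with made-up probabilities; it does not implement the paper's error-feedback boosting.

```python
# Sketch: forward filtering in a minimal DBN (two-state HMM) that fuses three
# noisy binary detectors (face, mouth motion, non-silence) over time.
# All probabilities are made-up illustrations, not learned parameters.
import numpy as np

transition = np.array([[0.9, 0.1],    # P(next state | current = no speaker)
                       [0.2, 0.8]])   # P(next state | current = speaker)

# P(detector fires | state); columns: face, mouth motion, non-silence.
p_fire = np.array([[0.2, 0.1, 0.3],   # false-alarm rates when no speaker is present
                   [0.9, 0.7, 0.8]])  # hit rates when a speaker is present

def forward_filter(observations: np.ndarray) -> np.ndarray:
    """Return P(speaker present | observations so far) at each time step."""
    belief = np.array([0.5, 0.5])
    posteriors = []
    for obs in observations:           # obs is a length-3 vector of 0/1 detector outputs
        belief = transition.T @ belief                               # predict
        likelihood = np.prod(p_fire**obs * (1 - p_fire)**(1 - obs), axis=1)
        belief = belief * likelihood                                 # update
        belief /= belief.sum()
        posteriors.append(belief[1])
    return np.array(posteriors)

obs = np.array([[0, 0, 0], [1, 0, 1], [1, 1, 1], [1, 1, 0], [0, 0, 0]])
print(np.round(forward_filter(obs), 3))
```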

68 citations

Proceedings ArticleDOI
14 Mar 2010
TL;DR: Describes the effectiveness of phase information for speaker identification in noisy environments when MFCC is integrated with phase information.
Abstract: In conventional speaker recognition methods based on MFCC, the phase information has been ignored. Recently, we proposed a method that integrates MFCC with phase information for speaker recognition. Using the phase information, the speaker identification error rate was reduced by 78% for clean speech. In this paper, we describe the effectiveness of phase information for speaker identification in noisy environments. Integrating MFCC with phase information, the speaker identification error rates were reduced by 20% to 70% in comparison with using only MFCC in noisy environments.
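A hedged sketch of the feature-level combination (not the authors' exact phase-normalization scheme): compute MFCCs and the STFT phase of a few low-frequency bins per frame, then concatenate them into one feature matrix. It assumes the librosa library; the frame settings and the number of phase bins kept are arbitrary choices.

```python
# Sketch: concatenating MFCCs with raw STFT phase features per frame.
# Assumes librosa is installed; frame settings and the number of phase bins
# are arbitrary, and no phase normalization (as used in the paper) is applied.
import numpy as np
import librosa

def mfcc_plus_phase(path: str, n_mfcc: int = 13, n_phase_bins: int = 12,
                    n_fft: int = 512, hop_length: int = 160) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop_length)
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop_length)
    phase = np.angle(stft[:n_phase_bins, :])              # phase of the lowest bins
    # Encode each phase angle as (cos, sin) so features vary smoothly with the angle.
    phase_feat = np.concatenate([np.cos(phase), np.sin(phase)], axis=0)
    n_frames = min(mfcc.shape[1], phase_feat.shape[1])
    return np.vstack([mfcc[:, :n_frames], phase_feat[:, :n_frames]]).T  # (frames, features)

# features = mfcc_plus_phase("utterance.wav")  # would then feed a speaker model (e.g. a GMM)
```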

68 citations


Network Information
Related Topics (5)
Feature vector: 48.8K papers, 954.4K citations (83% related)
Recurrent neural network: 29.2K papers, 890K citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (81% related)
Signal processing: 73.4K papers, 983.5K citations (81% related)
Decoding methods: 65.7K papers, 900K citations (79% related)
Performance Metrics
Number of papers in the topic in previous years:

Year    Papers
2023    165
2022    468
2021    283
2020    475
2019    484
2018    420