Topic
Speaker recognition
About: Speaker recognition is a research topic. Over its lifetime, 14990 publications have been published within this topic, receiving 310061 citations.
Papers published on a yearly basis
Papers
01 Jan 2012
TL;DR: The proposed bottleneck feature extraction paradigm performs slightly worse than MFCCs but provides complementary information in combination, and the proposed combination strategy with re-training improved the EER by 14% and 18% relative over the baseline MFCC system in the same- and different-microphone tasks, respectively.
Abstract: Bottleneck neural networks have recently found success in a variety of speech recognition tasks. This paper presents an approach in which they are utilized in the front-end of a speaker recognition system. The network inputs are mel-frequency cepstral coefficients (MFCCs) from multiple consecutive frames and the outputs are speaker labels. We propose using a recording-level criterion that is optimized via an online learning algorithm. We furthermore propose retraining a network to focus on its errors when leveraging scores from an independently trained system. We ran experiments on the same- and different-microphone tasks of the 2010 NIST Speaker Recognition Evaluation. We found that the proposed bottleneck feature extraction paradigm performs slightly worse than MFCCs but provides complementary information in combination. We also found that the proposed combination strategy with re-training improved the EER by 14% and 18% relative over the baseline MFCC system in the same- and different-microphone tasks, respectively.
75 citations
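The bottleneck front-end described above can be sketched as follows. The layer sizes, context width, and random weights are illustrative stand-ins for a trained network; the paper's actual architecture, recording-level criterion, and online training are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 20 MFCCs per frame, a 9-frame context window,
# a 64-unit bottleneck, and 50 training speakers (all illustrative).
N_MFCC, CONTEXT, BOTTLENECK, N_SPEAKERS = 20, 9, 64, 50

def stack_context(mfcc, context=CONTEXT):
    """Stack `context` consecutive frames into one input vector per center frame."""
    half = context // 2
    padded = np.pad(mfcc, ((half, half), (0, 0)), mode="edge")
    return np.stack([padded[i:i + context].ravel() for i in range(len(mfcc))])

# Randomly initialized weights stand in for a network trained on speaker labels.
W1 = rng.normal(0, 0.1, (N_MFCC * CONTEXT, 256))
W2 = rng.normal(0, 0.1, (256, BOTTLENECK))         # bottleneck layer
W3 = rng.normal(0, 0.1, (BOTTLENECK, N_SPEAKERS))  # speaker-label output (unused at extraction time)

def bottleneck_features(mfcc):
    """Forward pass; the bottleneck activations become the new frame-level features."""
    x = stack_context(mfcc)
    h1 = np.tanh(x @ W1)
    return np.tanh(h1 @ W2)   # shape: (frames, BOTTLENECK)

mfcc = rng.normal(size=(100, N_MFCC))   # 100 frames of fake MFCCs
feats = bottleneck_features(mfcc)
print(feats.shape)  # (100, 64)
```

At test time only the layers up to the bottleneck are used; the speaker-label output layer exists solely to supervise training.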
24 Jun 2004
TL;DR: In this paper, a system and method enrolls a speaker with an enrollment utterance and authenticates a user with a biometric analysis of an authentication utterance, without the need for a PIN (Personal Identification Number).
Abstract: A system and method enrolls a speaker with an enrollment utterance and authenticates a user with a biometric analysis of an authentication utterance, without the need for a PIN (Personal Identification Number). During authentication, the system uses the same authentication utterance to identify who a speaker claims to be with speaker recognition, and to verify whether the speaker is actually the claimed person. Thus, it is not necessary for the speaker to identify biometric data using a PIN. The biometric analysis includes a neural tree network to determine unique aspects of the authentication utterance for comparison to the enrollment utterance. The biometric analysis leverages a statistical analysis using Hidden Markov Models before authorizing the speaker.
75 citations
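A minimal sketch of the PIN-free enroll/identify/verify flow described above, using cosine similarity on placeholder voiceprint vectors. The patent's neural tree network and HMM analysis are not reproduced; the names, threshold, and random vectors are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical enrollment database: speaker name -> voiceprint vector.
# A real system would derive voiceprints from a neural tree network / HMM
# analysis of the enrollment utterance; random vectors stand in here.
db = {name: rng.normal(size=64) for name in ("alice", "bob")}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(utterance_vec, threshold=0.7):
    """Identify the closest enrolled speaker, then verify against a threshold.
    One utterance serves both steps, so no PIN is needed."""
    name, score = max(((n, cosine(utterance_vec, v)) for n, v in db.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

# A noisy copy of alice's voiceprint should authenticate as alice.
probe = db["alice"] + rng.normal(scale=0.1, size=64)
print(authenticate(probe))
```

The key design point mirrored here is that identification (closest match) and verification (threshold check) run on the same utterance vector.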
TL;DR: A novel speaker-independent feature, the ratio of a spectral flatness measure to a spectral center (RSS), which shows small variation across speakers, is proposed for constructing a speaker-independent system.
Abstract: Emotion recognition is one of the latest challenges in human-robot interaction. This paper describes the realization of emotional interaction for a Thinking Robot, focusing on speech emotion recognition. In general, speaker-independent systems show a lower accuracy rate compared with speaker-dependent systems, as emotional feature values depend on the speaker and their gender. However, speaker-independent systems are required for commercial applications. In this paper, a novel speaker-independent feature, the ratio of a spectral flatness measure to a spectral center (RSS), which shows small variation across speakers, is proposed for constructing a speaker-independent system. Gender and emotion are hierarchically classified by using the proposed feature (RSS), pitch, energy, and the mel-frequency cepstral coefficients. An average recognition rate of 57.2% (±5.7%) at a 90% confidence interval is achieved with the proposed system in the speaker-independent mode.
75 citations
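One plausible reading of the RSS feature is the frame-level ratio of spectral flatness (geometric over arithmetic mean of the power spectrum) to the spectral centroid. The paper's exact definition may differ, so treat this sketch as an assumption rather than the authors' formula.

```python
import numpy as np

def spectral_flatness(power):
    # Geometric mean over arithmetic mean of the power spectrum (in (0, 1]).
    power = np.maximum(power, 1e-12)
    return np.exp(np.mean(np.log(power))) / np.mean(power)

def spectral_center(power, freqs):
    # Power-weighted mean frequency (spectral centroid), in Hz.
    return np.sum(freqs * power) / np.sum(power)

def rss(frame, sr=16000):
    """Assumed RSS: spectral flatness divided by spectral centroid for one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return spectral_flatness(spectrum) / spectral_center(spectrum, freqs)

rng = np.random.default_rng(1)
frame = rng.normal(size=512)  # one 32 ms frame of fake audio at 16 kHz
print(rss(frame))
```

White noise gives flatness near 1 and a centroid near mid-band; voiced speech would yield a lower flatness and a lower centroid, which is what makes the ratio informative.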
01 Dec 2019
TL;DR: Simple classifiers are used to investigate the contents encoded by x-vector embeddings for information related to the speaker, channel, transcription, and meta information about the utterance, and compare these with the information encoded by i-vectors across a varying number of dimensions.
Abstract: Deep neural network based speaker embeddings, such as x-vectors, have been shown to perform well in text-independent speaker recognition/verification tasks. In this paper, we use simple classifiers to investigate the contents encoded by x-vector embeddings. We probe these embeddings for information related to the speaker, channel, transcription (sentence, words, phones), and meta information about the utterance (duration and augmentation type), and compare these with the information encoded by i-vectors across a varying number of dimensions. We also study the effect of data augmentation during extractor training on the information captured by x-vectors. Experiments on the RedDots data set show that x-vectors capture spoken content and channel-related information, while performing well on speaker verification tasks.
75 citations
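The probing methodology above can be sketched with a linear probe on synthetic embeddings. Real x-vectors and the RedDots labels are replaced by random stand-ins here, so only the mechanics are shown: if a simple classifier recovers a property from the embeddings, that property is encoded in them.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N = 32, 400

# Synthetic stand-ins for x-vectors: a binary property (e.g. augmentation
# type) is linearly encoded along one direction, plus Gaussian noise.
labels = rng.integers(0, 2, N)
direction = rng.normal(size=DIM)
emb = rng.normal(size=(N, DIM)) + np.outer(labels * 2.0 - 1.0, direction)

def probe_accuracy(X, y, epochs=200, lr=0.1):
    """Train a logistic-regression probe by gradient descent; return train accuracy."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = p - y                              # logistic-loss gradient signal
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return np.mean((X @ w + b > 0) == y)

print(probe_accuracy(emb, labels))  # high accuracy => the property is encoded
```

In the paper's setting the same probe would be trained on held-out splits per property (speaker, channel, words, duration, augmentation) and per embedding dimensionality, comparing x-vectors against i-vectors.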