
Tomoko Matsui

Researcher at Tokyo Gakugei University

Publications -  175
Citations -  2739

Tomoko Matsui is an academic researcher from Tokyo Gakugei University. The author has contributed to research in the topics of speaker recognition and hidden Markov models. The author has an h-index of 24, has co-authored 171 publications, and has received 2558 citations. Previous affiliations of Tomoko Matsui include Eli Lilly and Company and Nagoya University.

Papers
Proceedings ArticleDOI

A Kernel for Time Series Based on Global Alignments

TL;DR: A new family of kernels for handling time series, notably speech data, is introduced within the framework of kernel methods, which includes popular algorithms such as the support vector machine; the kernels are proved to be positive definite under favorable conditions.
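The global alignment kernel scores a pair of sequences by summing the similarities of all monotonic alignments between them, rather than keeping only the single best path as dynamic time warping does. A minimal sketch of that idea, assuming a Gaussian local kernel between frames and the standard soft dynamic-programming recurrence (the function name and parameters here are illustrative, not taken from the paper):

```python
import numpy as np

def ga_kernel(x, y, sigma=1.0):
    """Global alignment kernel between two sequences of feature vectors.

    x: array of shape (n, d); y: array of shape (m, d).
    Sums the products of local Gaussian similarities over every
    monotonic alignment of x against y via dynamic programming.
    """
    n, m = len(x), len(y)
    # Local similarity of every frame pair: Gaussian kernel on distances.
    k = np.exp(-np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=2)
               / (2.0 * sigma ** 2))
    # M[i, j] accumulates the total score of all alignments of the
    # first i frames of x with the first j frames of y.
    M = np.zeros((n + 1, m + 1))
    M[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Each cell extends alignments ending one step back in x,
            # in y, or in both (the three allowed alignment moves).
            M[i, j] = k[i - 1, j - 1] * (M[i - 1, j] + M[i, j - 1]
                                         + M[i - 1, j - 1])
    return M[n, m]
```

Because every alignment contributes, the resulting score is smooth in the inputs and symmetric, which is what makes it usable inside kernel methods such as the support vector machine.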
Proceedings ArticleDOI

Comparison of text-independent speaker recognition methods using VQ-distortion and discrete/continuous HMMs

TL;DR: The distortion-intersection measure (DIM), introduced as a VQ-distortion measure to increase robustness against utterance variations, is shown to be effective, and the speaker identification rates using a continuous ergodic HMM are strongly correlated with the total number of mixtures, irrespective of the number of states.
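In VQ-distortion speaker recognition, each speaker is represented by a codebook of reference vectors, and a test utterance is scored by how closely its frames match that codebook; the speaker whose codebook yields the smallest average distortion is selected. A minimal sketch of the plain VQ-distortion score under that assumption (the paper's DIM variant further discounts unreliable frames; this function and its name are illustrative, not from the paper):

```python
import numpy as np

def vq_distortion(frames, codebook):
    """Average nearest-codeword distance of test frames to a speaker's
    VQ codebook. Lower distortion means a better speaker match.

    frames: array of shape (t, d) of test feature vectors.
    codebook: array of shape (c, d) of the speaker's codewords.
    """
    # Euclidean distance from every frame to every codeword.
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    # Each frame is quantized to its nearest codeword; the utterance
    # score is the mean of those per-frame distortions.
    return d.min(axis=1).mean()
```

Identification then reduces to computing this score against every enrolled speaker's codebook and picking the minimum.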
Proceedings ArticleDOI

Concatenated phoneme models for text-variable speaker recognition

TL;DR: Methods that create models specifying both speaker and phonetic information accurately, using only a small amount of training data for each speaker, are investigated; these methods are supplemented by adding a phoneme-independent speaker model to make up for the lack of speaker information.
Proceedings ArticleDOI

Speaker adaptation of tied-mixture-based phoneme models for text-prompted speaker recognition

TL;DR: A new method is proposed that creates speaker-specific phoneme models consisting of tied-mixture HMMs and adapts the feature space of the tied mixtures to that of the speaker through phoneme-dependent/independent iterative training.
Journal ArticleDOI

Comparison of text-independent speaker recognition methods using VQ-distortion and discrete/continuous HMM's

TL;DR: This paper compares a VQ (vector quantization)-distortion-based speaker recognition method with discrete/continuous ergodic HMM (hidden Markov model)-based methods, especially from the viewpoint of robustness against utterance variations, and shows that a continuous ergodic HMM is as robust as the VQ-distortion method when enough data is available.