Kazuya Takeda
Researcher at Nagoya University
Publications - 546
Citations - 9667
Kazuya Takeda is an academic researcher from Nagoya University. He has contributed to research topics including speech processing and speech enhancement. He has an h-index of 42 and has co-authored 495 publications receiving 7,719 citations. His previous affiliations include Kobe Women's University and the Nara Institute of Science and Technology.
Papers
Proceedings Article
Recognition of continuous speech segments of monophone units using support vector machines.
TL;DR: Proposes a close-class-set discrimination method suitable for large-class-set pattern recognition problems and studies its effectiveness in reducing the complexity of multiclass pattern recognition systems.
Proceedings Article
Direction of arrival estimation based on nonlinear microphone array
TL;DR: A new method for estimating the direction of arrival (DOA) using a nonlinear microphone array based on complementary beamforming, which can estimate DOAs even when the number of sound sources equals or exceeds the number of microphones.
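The paper's complementary-beamforming method is not detailed in this summary; as background for the problem it addresses, here is a minimal sketch of the conventional delay-and-sum approach to DOA scanning, assuming a hypothetical 4-microphone uniform linear array with illustrative spacing and frequency values:

```python
import numpy as np

def steering_vector(theta_deg, n_mics=4, spacing=0.05, freq=2000.0, c=343.0):
    # far-field steering vector for a uniform linear array
    # (all array parameters here are illustrative, not from the paper)
    theta = np.deg2rad(theta_deg)
    delays = np.arange(n_mics) * spacing * np.sin(theta) / c
    return np.exp(-2j * np.pi * freq * delays)

def estimate_doa(snapshots, grid=np.arange(-90, 91)):
    # delay-and-sum power scan: steer to each candidate angle, pick the peak
    powers = []
    for ang in grid:
        w = steering_vector(ang) / snapshots.shape[1]
        powers.append(np.mean(np.abs(np.conj(w) @ snapshots.T) ** 2))
    return int(grid[int(np.argmax(powers))])

# simulate one narrowband source at +30 degrees with light sensor noise
rng = np.random.default_rng(0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
X = np.outer(s, steering_vector(30)) + 0.01 * (
    rng.standard_normal((200, 4)) + 1j * rng.standard_normal((200, 4)))
est = estimate_doa(X)
```

Note that with one weight vector per look direction, a linear scan like this degrades once the source count reaches the microphone count, which is the regime the paper's nonlinear-array approach targets.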
Book Chapter
Maximum a posteriori probability and cumulative distribution function equalization methods for speech spectral estimation with application in noise suppression filtering
TL;DR: This work develops and compares noise suppression filtering systems based on maximum a posteriori probability (MAP) and cumulative distribution function equalization (CDFE) estimation of the speech spectrum, using double-gamma modeling for both the speech and noise spectral components.
Journal Article
Impact of acoustic similarity on efficiency of verbal information transmission via subtle prosodic cues
TL;DR: Eye-tracking and response-time results both showed that participants understood textually ambiguous sentences faster when listening to voices similar to their own, suggesting that subtle acoustic features, which carry no verbal meaning, can influence the processing of verbal information.