
Futoshi Asano

Researcher at National Institute of Advanced Industrial Science and Technology

Publications: 78
Citations: 2193

Futoshi Asano is an academic researcher from the National Institute of Advanced Industrial Science and Technology. The author has contributed to research in the topics of speech enhancement and microphone arrays. The author has an h-index of 22 and has co-authored 78 publications receiving 2106 citations.

Papers
Journal ArticleDOI

An optimum computer-generated pulse signal suitable for the measurement of very long impulse responses

TL;DR: This optimized ATSP (OATSP) has an almost ideal characteristic for measuring impulse responses shorter than its specific length N, and the paper newly shows that OATSP also has a good characteristic for measuring impulse responses longer than N.
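The summary above refers to a time-stretched pulse (TSP): a computer-generated sweep whose magnitude spectrum is flat, so that deconvolving a recording by it recovers a room impulse response. As a rough illustration (not the paper's exact construction; the function name and parameter choices are my own), a TSP-style signal can be synthesized in the frequency domain like this:

```python
import numpy as np

def tsp_signal(N, m):
    """Generate an N-point time-stretched-pulse-style test signal.

    The half spectrum is pure phase, H(k) = exp(j*4*pi*m*k^2 / N^2),
    so the magnitude spectrum is exactly flat; inverting it for
    deconvolution is then just phase conjugation. N should be a power
    of two; m (the stretch parameter, here assumed ~N/4) controls how
    far the energy is spread out in time.
    """
    k = np.arange(N // 2 + 1)
    H_half = np.exp(1j * 4.0 * np.pi * m * k**2 / N**2)
    h = np.fft.irfft(H_half, n=N)      # real time-domain sweep
    return np.roll(h, (N // 2) - m)    # roughly center the sweep
```

Because |H(k)| = 1 at every bin, convolving a measured response with the time-reversed (phase-conjugated) signal collapses the sweep back to an impulse, which is what makes such signals suitable for impulse response measurement.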
Proceedings Article

Acoustical Sound Database in Real Environments for Sound Scene Understanding and Hands-Free Speech Recognition

TL;DR: LREC2000: the 2nd International Conference on Language Resources and Evaluation, May 31 - June 2, 2000, Athens, Greece.
Journal ArticleDOI

Combined approach of array processing and independent component analysis for blind separation of acoustic signals

TL;DR: Two array signal processing techniques are combined with independent component analysis (ICA) to enhance the performance of blind separation of acoustic signals in a reflective environment; the subspace method reduces the effect of room reflections when the system is used in a room.
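For readers unfamiliar with the ICA half of this combination, here is a minimal kurtosis-based FastICA sketch for two mixtures. This is a generic textbook illustration, not the paper's algorithm, which additionally applies array preprocessing (the subspace method) to suppress reflections before separation:

```python
import numpy as np

def fastica_2src(X, iters=200, seed=0):
    """Separate two instantaneously mixed signals with minimal FastICA.

    X: (2, T) array of mixtures. Returns (2, T) source estimates
    (up to permutation, sign, and scale, as usual for blind separation).
    """
    # Center and whiten the mixtures so their covariance is identity
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Z = (E / np.sqrt(d)) @ E.T @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2, 2))
    for _ in range(iters):
        # Kurtosis fixed-point update: w <- E[z (w'z)^3] - 3w
        W = ((W @ Z) ** 3) @ Z.T / Z.shape[1] - 3 * W
        # Symmetric decorrelation keeps the rows orthonormal
        u, _, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Z
```

ICA alone assumes instantaneous mixing; in a real room, reflections make the mixing convolutive, which is exactly the gap the paper's array-processing stages are meant to close.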
Journal ArticleDOI

Speech enhancement based on the subspace method

TL;DR: A method of speech enhancement using microphone-array signal processing based on the subspace method. It reduces less-directional ambient noise by eliminating the noise-dominant subspace, and extracts the spectrum of the target source from the mixture of spectra of the multiple directional components remaining in the modified spatial correlation matrix.
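The core operation described in the summary is an eigendecomposition of the spatial correlation matrix of the array signals, keeping the eigenvectors that carry directional energy and discarding the noise-dominant ones. A minimal sketch of that step (my own simplified illustration, for a single frequency bin, assuming the number of directional sources is known):

```python
import numpy as np

def subspace_project(R, num_sources):
    """Remove the noise-dominant subspace of a spatial correlation matrix.

    R: (M, M) Hermitian spatial correlation matrix of an M-microphone
       array at one frequency bin.
    num_sources: assumed number of directional sources.
    Returns the modified correlation matrix rebuilt from only the
    signal-dominant (largest-eigenvalue) eigenvectors.
    """
    w, V = np.linalg.eigh(R)            # eigenvalues in ascending order
    Vs = V[:, -num_sources:]            # signal-dominant eigenvectors
    ws = w[-num_sources:]
    return (Vs * ws) @ Vs.conj().T      # rank-reduced correlation matrix
```

Less-directional ambient noise spreads its energy across all eigenvectors, so truncating the small-eigenvalue subspace suppresses it while the directional components, concentrated in the dominant eigenvectors, survive for the subsequent extraction step.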
Proceedings ArticleDOI

Robust speech interface based on audio and video information fusion for humanoid HRP-2

TL;DR: In this method, audio information and video information are fused by a Bayesian network to enable the detection of speech events, and the information from detected speech events is utilized in sound separation using adaptive beamforming.
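As a toy illustration of audio-visual fusion for event detection (a naive-Bayes simplification, not the paper's actual network structure; the function and its inputs are my own), Bayes' rule can combine per-modality likelihoods into a posterior speech probability:

```python
def fuse_speech_event(p_audio, p_video, prior=0.5):
    """Fuse audio and video evidence for a speech event with Bayes' rule.

    p_audio, p_video: (likelihood_given_speech, likelihood_given_no_speech)
    tuples for each modality's observation. Assumes the two modalities
    are conditionally independent given the event, which is the simplest
    possible stand-in for the paper's Bayesian network.
    Returns P(speech | audio, video).
    """
    num = prior * p_audio[0] * p_video[0]
    den = num + (1.0 - prior) * p_audio[1] * p_video[1]
    return num / den
```

The appeal of this kind of fusion is that a modality with ambiguous evidence (likelihoods near equal) barely moves the posterior, while two independently confident modalities reinforce each other; the resulting event decisions can then steer an adaptive beamformer toward the detected talker.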