DeLiang Wang
Researcher at Ohio State University
Publications: 475
Citations: 28,623
DeLiang Wang is an academic researcher at Ohio State University. He has contributed to research on speech processing and speech enhancement, has an h-index of 82, and has co-authored 440 publications receiving 23,687 citations. His previous affiliations include the Massachusetts Institute of Technology and Tsinghua University.
Papers
Journal Article (DOI)
Deep Learning Based Real-Time Speech Enhancement for Dual-Microphone Mobile Phones
TL;DR: In this paper, a novel deep learning based approach to real-time speech enhancement for dual-microphone mobile phones is proposed, which employs a new densely-connected convolutional recurrent network to perform dual-channel complex spectral mapping.
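The core idea of dual-channel complex spectral mapping is to feed the network the real and imaginary STFT components of both microphone signals and train it to predict the real and imaginary components of the clean spectrogram. The following numpy sketch shows one plausible input/target layout; the signal lengths, frame parameters, and synthetic signals are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Naive STFT: Hann-windowed frames -> one-sided complex spectra."""
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * win
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)          # (n_frames, frame_len//2 + 1)

rng = np.random.default_rng(0)
clean = rng.standard_normal(4096)               # stand-in for clean speech
mic1 = clean + 0.3 * rng.standard_normal(4096)  # noisy channel 1 (assumed mix)
mic2 = clean + 0.3 * rng.standard_normal(4096)  # noisy channel 2 (assumed mix)

S, Y1, Y2 = stft(clean), stft(mic1), stft(mic2)

# Network input: real and imaginary parts of both noisy channels,
# stacked along a feature axis -> (n_frames, 4, n_bins).
x_in = np.stack([Y1.real, Y1.imag, Y2.real, Y2.imag], axis=1)

# Training target: real and imaginary parts of the clean spectrogram,
# i.e. the "complex spectral mapping" objective -> (n_frames, 2, n_bins).
target = np.stack([S.real, S.imag], axis=1)
```

A model trained on `x_in` to regress `target` can resynthesize enhanced speech via an inverse STFT of its predicted real/imaginary outputs; the densely-connected convolutional recurrent network itself is not sketched here.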
Journal Article (DOI)
Sequential organization of speech in computational auditory scene analysis
Yang Shao, DeLiang Wang, +1 more
TL;DR: The proposed general system for sequential organization of speech based on speaker models is shown to function well with both interfering talkers and non-speech intrusions and to deal with situations where prior information about specific speakers is not available.
Proceedings Article (DOI)
Bridging the Gap Between Monaural Speech Enhancement and Recognition with Distortion-Independent Acoustic Modeling.
Peidong Wang, Ke Tan, DeLiang Wang, +2 more
TL;DR: Experimental results suggest that distortion-independent acoustic modeling is able to overcome the distortion problem, and the models investigated in this paper outperform the previous best system on the CHiME-2 corpus.
Posted Content
Multi-Microphone Complex Spectral Mapping for Speech Dereverberation
Zhong-Qiu Wang, DeLiang Wang, +1 more
TL;DR: Experimental results on multi-channel speech dereverberation demonstrate the effectiveness of the proposed approach; the integration of multi-microphone complex spectral mapping with beamforming and post-filtering is also investigated.
Proceedings Article (DOI)
A two-stage approach for improving the perceptual quality of separated speech
TL;DR: This paper proposes a two-stage algorithm that uses a soft mask in the first stage for separation and nonnegative matrix factorization (NMF) in the second stage to improve perceptual quality, where only a speech model needs to be trained.
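The first stage of such a system applies a soft (ratio-style) time-frequency mask to the mixture spectrogram. The sketch below illustrates that masking step with synthetic magnitude spectrograms; the spectrogram shapes and the simple additive mixing are assumptions for illustration, and the NMF refinement stage is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative magnitude spectrograms (freq bins x frames); in a real
# system these would be estimated speech and interference energies.
speech_mag = np.abs(rng.standard_normal((129, 40)))
noise_mag = np.abs(rng.standard_normal((129, 40)))
mixture_mag = speech_mag + noise_mag      # crude additive approximation

# Stage 1: soft mask from the energy estimates; each value lies in [0, 1].
eps = 1e-8
mask = speech_mag**2 / (speech_mag**2 + noise_mag**2 + eps)
separated = mask * mixture_mag            # first-stage separated magnitude

# Stage 2 (not shown): a trained NMF speech model would refine `separated`
# to suppress residual artifacts and improve perceptual quality.
```

Because the mask never exceeds 1, the separated magnitude is bounded by the mixture in every time-frequency unit, which is why a second stage is needed to restore quality rather than just attenuate interference.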