DeLiang Wang
Researcher at Ohio State University
Publications - 475
Citations - 28623
DeLiang Wang is an academic researcher at Ohio State University. He has contributed to research on speech processing and speech enhancement, has an h-index of 82, and has co-authored 440 publications receiving 23,687 citations. Previous affiliations of DeLiang Wang include the Massachusetts Institute of Technology and Tsinghua University.
Papers
Journal Article
Locally excitatory globally inhibitory oscillator networks
DeLiang Wang, David Terman, et al.
TL;DR: A novel class of locally excitatory, globally inhibitory oscillator networks (LEGION) is proposed and investigated, which lays a physical foundation for the oscillatory correlation theory of feature binding and may provide an effective computational framework for scene segmentation and figure/ground segregation in real time.
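LEGION's building block is a relaxation oscillator with a fast excitatory variable and a slow inhibitory one. Below is a minimal sketch of a single such oscillator under Euler integration; the parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

# Sketch of one Terman-Wang relaxation oscillator, the unit from which
# LEGION networks are built. Parameters here are illustrative only.
def simulate_oscillator(I=0.8, eps=0.02, gamma=6.0, beta=0.1,
                        dt=0.01, steps=20000):
    """Euler integration of
        dx/dt = 3x - x^3 + 2 - y + I   (fast excitatory variable)
        dy/dt = eps * (gamma * (1 + tanh(x / beta)) - y)  (slow inhibition)
    """
    x, y = -2.0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        dx = 3 * x - x ** 3 + 2 - y + I
        dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)
        x += dt * dx
        y += dt * dy
        xs[t] = x
    return xs

trace = simulate_oscillator()
# With positive input I the unit alternates between a high "active"
# branch and a low "silent" branch of the cubic nullcline.
```

In the full network, local excitatory coupling synchronizes oscillators within an object while a global inhibitor desynchronizes different objects.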
Journal Article
A Tandem Algorithm for Pitch Estimation and Voiced Speech Segregation
Guoning Hu, DeLiang Wang, et al.
TL;DR: A tandem algorithm is proposed that jointly and iteratively performs pitch estimation of a target utterance and segregation of its voiced portions; it performs substantially better than previous systems for either pitch extraction or voiced speech segregation.
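The tandem algorithm alternates between pitch estimation and mask-based segregation. As a hedged illustration of only the pitch-estimation half, here is a toy autocorrelation F0 estimator on a synthetic voiced frame; the function name and pitch-lag range are assumptions, not the paper's method.

```python
import numpy as np

def autocorr_pitch(frame, fs, fmin=70.0, fmax=400.0):
    """Estimate F0 of one voiced frame from the autocorrelation peak
    within a plausible pitch-lag range (simplified sketch)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.032 * fs)) / fs
frame = np.sin(2 * np.pi * 150 * t)   # synthetic 150 Hz "voiced" frame
f0 = autocorr_pitch(frame, fs)        # close to 150 Hz
```

In the actual tandem scheme, the estimated pitch track re-labels time-frequency units as target-dominant, and the refined segregation mask in turn improves the next pitch estimate.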
Journal Article
Learning Complex Spectral Mapping With Gated Convolutional Recurrent Networks for Monaural Speech Enhancement
Ke Tan, DeLiang Wang, et al.
TL;DR: A gated convolutional recurrent network (GCRN) for complex spectral mapping is proposed, which amounts to a causal system for monaural speech enhancement and yields significantly higher STOI and PESQ than magnitude spectral mapping and complex ratio masking.
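Complex spectral mapping means the network predicts the real and imaginary parts of the clean STFT rather than only its magnitude (which would reuse the noisy phase). A minimal sketch of how such input/target pairs can be formed; the naive STFT helper and synthetic signals are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def stft(x, n_fft=320, hop=160):
    """Naive Hann-windowed STFT (sketch; no padding refinements)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)                       # stand-in "speech"
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(fs)

S_noisy, S_clean = stft(noisy), stft(clean)
# Real and imaginary parts stacked as channels: the network maps the
# noisy pair to the clean pair, so phase is enhanced along with magnitude.
features = np.stack([S_noisy.real, S_noisy.imag], axis=-1)  # model input
targets = np.stack([S_clean.real, S_clean.imag], axis=-1)   # training target
```

Predicting both components is what lets this approach outperform magnitude-only mapping, since phase errors are addressed directly.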
Journal Article
Long short-term memory for speaker generalization in supervised speech separation
Jitong Chen, DeLiang Wang, et al.
TL;DR: A separation model based on long short-term memory (LSTM) is proposed, which naturally accounts for the temporal dynamics of speech and substantially outperforms a DNN-based model on unseen speakers and unseen noises in terms of objective speech intelligibility.
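In this line of supervised separation, the sequence model is typically trained to predict a time-frequency mask such as the ideal ratio mask (IRM) frame by frame. A sketch of the IRM training target on stand-in spectrograms; the random spectra and the additive mixture-magnitude approximation are simplifications, not the paper's setup.

```python
import numpy as np

def ideal_ratio_mask(speech_mag, noise_mag):
    """IRM(t, f) = sqrt(S^2 / (S^2 + N^2)), bounded in [0, 1]."""
    s2, n2 = speech_mag ** 2, noise_mag ** 2
    return np.sqrt(s2 / (s2 + n2 + 1e-12))

rng = np.random.default_rng(0)
speech = np.abs(rng.standard_normal((100, 161)))  # frames x freq bins
noise = np.abs(rng.standard_normal((100, 161)))   # stand-in noise spectrogram
mask = ideal_ratio_mask(speech, noise)
# At inference the predicted mask scales the mixture magnitude
# (mixture approximated additively here for the sketch).
enhanced = mask * (speech + noise)
```

An LSTM predicts this mask from a window of mixture features, and its recurrent state is what carries the temporal context that helps generalize to unseen speakers.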
Journal Article
Learning spectral mapping for speech dereverberation and denoising
TL;DR: Deep neural networks are trained to directly learn a spectral mapping from the magnitude spectrogram of corrupted speech to that of clean speech. The approach substantially attenuates distortion caused by reverberation as well as background noise, and is conceptually simple.
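Spectral mapping here means learning a regression from the corrupted log-magnitude spectrogram to the clean one. A sketch of forming one such training pair, with reverberation simulated by convolving with a synthetic exponentially decaying impulse response; all signals and parameters are illustrative assumptions.

```python
import numpy as np

fs = 16000
rng = np.random.default_rng(1)
clean = rng.standard_normal(fs)  # 1 s of noise as a stand-in for speech
# Synthetic "room impulse response": decaying noise tail (assumption).
rir = rng.standard_normal(2000) * np.exp(-np.arange(2000) / 400.0)
reverb = np.convolve(clean, rir)[:len(clean)]

def log_mag_spec(x, n_fft=320, hop=160):
    """Hann-windowed log-magnitude spectrogram (simplified sketch)."""
    win = np.hanning(n_fft)
    frames = np.array([x[i:i + n_fft] * win
                       for i in range(0, len(x) - n_fft + 1, hop)])
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-8)

features = log_mag_spec(reverb)  # network input: corrupted spectrogram
targets = log_mag_spec(clean)    # regression target: clean spectrogram
```

A feed-forward network trained on many such pairs learns to undo the spectral smearing of reverberation, which is what makes the approach conceptually simpler than explicitly estimating the room response.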