
DeLiang Wang

Researcher at Ohio State University

Publications: 475
Citations: 28623

DeLiang Wang is an academic researcher at Ohio State University. He has contributed to research on speech processing and speech enhancement, has an h-index of 82, and has co-authored 440 publications receiving 23687 citations. His previous affiliations include the Massachusetts Institute of Technology and Tsinghua University.

Papers
Journal Article (DOI)

Segregation of unvoiced speech from nonspeech interference

TL;DR: Systematic evaluation shows that the proposed system extracts a majority of unvoiced speech without including much interference, and it performs substantially better than spectral subtraction.
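Spectral subtraction, the baseline the paper compares against, is a classical enhancement method: estimate the noise magnitude spectrum (here, naively, from the first few frames, which are assumed noise-only) and subtract it from each frame's magnitude before resynthesis. A minimal NumPy sketch of that baseline, not of the paper's segregation system:

```python
import numpy as np

def spectral_subtraction(noisy, frame_len=512, hop=256, noise_frames=6):
    """Classical spectral-subtraction baseline: estimate the noise
    magnitude spectrum from the first `noise_frames` frames (assumed
    noise-only), subtract it from every frame's magnitude, keep the
    noisy phase, and resynthesize with windowed overlap-add."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack(
        [noisy[i * hop : i * hop + frame_len] * window for i in range(n_frames)]
    )
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    noise_mag = mag[:noise_frames].mean(axis=0)          # noise estimate
    clean_mag = np.maximum(mag - noise_mag, 0.01 * mag)  # spectral floor
    enhanced = np.fft.irfft(clean_mag * np.exp(1j * phase),
                            n=frame_len, axis=1)
    out = np.zeros(len(noisy))
    for i in range(n_frames):                            # overlap-add
        out[i * hop : i * hop + frame_len] += enhanced[i] * window
    return out
```

The spectral floor (`0.01 * mag`) is the standard trick to avoid negative magnitudes and reduce musical noise; a deployed implementation would also track the noise estimate adaptively rather than fixing it from the initial frames.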
Journal Article (DOI)

Improving robustness of deep neural network acoustic models via speech separation and joint adaptive training

TL;DR: A supervised speech separation system that significantly improves automatic speech recognition (ASR) performance in realistic noise conditions is presented, and a framework that unifies separation and acoustic modeling via joint adaptive training is proposed.
Journal Article (DOI)

Transforming Binary Uncertainties for Robust Speech Recognition

TL;DR: This work proposes a supervised approach that uses regression trees to learn the nonlinear transformation of uncertainty from the linear spectral domain to the cepstral domain; the transformed uncertainty is used by a decoder that exploits the variance associated with the enhanced cepstral features to improve robust speech recognition.
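The core idea, learning a nonlinear uncertainty mapping with regression trees, can be sketched with scikit-learn. Everything below is a toy stand-in, not the paper's system: the synthetic input/target pairs merely mimic "spectral-domain variances in, cepstral-domain variance out", whereas the real approach would train on uncertainties measured from parallel clean/enhanced utterances.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy training data (hypothetical): per-frame uncertainty (variance) in
# the linear spectral domain as inputs, and a corresponding scalar
# cepstral-domain uncertainty as the regression target. The log1p/mean
# target is only a stand-in for the true nonlinear relationship.
X = rng.uniform(0.0, 1.0, size=(500, 8))   # spectral-domain variances
y = np.log1p(X).mean(axis=1)               # stand-in cepstral variance

# A regression tree learns the nonlinear spectral-to-cepstral mapping.
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X, y)

# At decoding time, the predicted cepstral-domain variance would be
# passed to an uncertainty decoder for the enhanced cepstral features.
predicted_variance = tree.predict(X[:5])
```

In practice one tree (or a small ensemble) would be trained per cepstral coefficient, and the decoder would inflate its observation variances by the predicted amounts.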
Proceedings Article (DOI)

Boosted Deep Neural Networks and Multi-resolution Cochleagram Features for Voice Activity Detection

TL;DR: A new VAD algorithm based on boosted deep neural networks (bDNNs) is described that outperforms state-of-the-art VADs by a considerable margin. It employs a new acoustic feature, the multi-resolution cochleagram (MRCG), which concatenates cochleagram features at multiple spectrotemporal resolutions and shows superior speech separation results over many other acoustic features.
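The multi-resolution idea behind MRCG can be illustrated in a few lines: take a base time-frequency feature map and concatenate it with copies smoothed over progressively larger spectrotemporal windows, so each frame carries both fine and coarse context. This sketch uses plain moving-average smoothing on an arbitrary feature matrix; the actual MRCG is built from gammatone-based cochleagrams at two window lengths plus two smoothed cochleagrams.

```python
import numpy as np

def multi_resolution_features(feat, smooth_sizes=(11, 23)):
    """Concatenate a base feature map (frames x channels) with versions
    smoothed over increasingly large spectrotemporal windows, mimicking
    the multi-resolution structure of MRCG. `smooth_sizes` are
    illustrative window sizes, not the paper's settings."""
    outputs = [feat]
    for size in smooth_sizes:
        pad = size // 2
        padded = np.pad(feat, ((pad, pad), (pad, pad)), mode="edge")
        smoothed = np.zeros_like(feat)
        for t in range(feat.shape[0]):          # time frames
            for f in range(feat.shape[1]):      # frequency channels
                # Mean over a size x size spectrotemporal neighborhood.
                smoothed[t, f] = padded[t:t + size, f:f + size].mean()
        outputs.append(smoothed)
    # Per-frame concatenation: channels * (1 + len(smooth_sizes)) dims.
    return np.concatenate(outputs, axis=1)
```

The nested loops keep the sketch readable; a practical version would vectorize the smoothing (e.g. with a separable box filter) and append delta features, as MRCG-style pipelines typically do.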
Journal Article (DOI)

An algorithm to increase speech intelligibility for hearing-impaired listeners in novel segments of the same noise type

TL;DR: Substantial sentence-intelligibility benefit was observed for hearing-impaired listeners in both noise types, despite the use of unseen noise segments during the test stage. This highlights the importance of evaluating such algorithms not only in human subjects but in members of the actual target population.