DeLiang Wang

Researcher at Ohio State University

Publications -  475
Citations -  28623

DeLiang Wang is an academic researcher from Ohio State University. The author has contributed to research in topics: Speech processing & Speech enhancement. The author has an h-index of 82 and has co-authored 440 publications receiving 23,687 citations. Previous affiliations of DeLiang Wang include Massachusetts Institute of Technology & Tsinghua University.

Papers
Proceedings Article

Singing Voice Separation from Monaural Recordings.

TL;DR: A system is proposed to separate the singing voice from music accompaniment in monaural recordings, and quantitative results show that it performs well in singing voice separation.
Journal ArticleDOI

A neural model of synaptic plasticity underlying short-term and long-term habituation

TL;DR: A parsimonious model of short-term and long-term synaptic plasticity at the electrophysiological level is presented; it consists of two interacting differential equations, one describing changes to the synaptic weight and the other describing changes to the speed of recovery (forgetting).
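For illustration, a minimal numerical sketch of such a two-equation habituation model is given below. The specific equations, coupling, and parameter values are assumptions chosen to match the verbal description (a weight that depresses under stimulation and a variable that slows recovery), not the paper's model.

```python
import numpy as np

def simulate_habituation(stimulus, dt=0.01, alpha=1.0, beta=5.0, gamma=0.1):
    """Integrate a generic two-variable habituation model with forward Euler.

    y : synaptic weight, depressed by stimulation, recovering otherwise
    z : scales the recovery speed; repeated stimulation lowers it, so
        recovery (forgetting of habituation) becomes slower over time
    The exact equations are illustrative, not taken from the paper.
    """
    y, z = 1.0, 1.0
    trace = []
    for s in stimulus:
        dy = alpha * z * (1.0 - y) - beta * s * y  # depression under stimulus, recovery scaled by z
        dz = -gamma * s * z                        # each stimulus slows future recovery (long-term effect)
        y += dt * dy
        z += dt * dz
        trace.append(y)
    return np.array(trace)

# Example: repeated stimulus pulses produce progressively deeper,
# slower-recovering depression of the synaptic weight.
t = np.arange(0.0, 100.0, 0.01)
stim = ((t % 10.0) < 1.0).astype(float)  # a 1-s pulse every 10 s
weights = simulate_habituation(stim)
```
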
Proceedings ArticleDOI

Deep Learning Based Multi-Channel Speaker Recognition in Noisy and Reverberant Environments.

TL;DR: It is shown that rank-1 approximation of a speech covariance matrix based on generalized eigenvalue decomposition leads to the best results for the masking-based MVDR beamformer.
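A hedged NumPy/SciPy sketch of the idea follows: the steering vector is read off from the principal generalized eigenvector of the estimated speech and noise covariance matrices, i.e. a rank-1 approximation of the speech covariance, and plugged into the standard MVDR solution. The variable names and the per-bin formulation are textbook conventions assumed for illustration, not code from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def mvdr_weights_rank1_gevd(phi_speech, phi_noise):
    """MVDR beamformer whose steering vector comes from a rank-1 (GEVD)
    approximation of the speech covariance matrix.

    phi_speech, phi_noise : (M, M) Hermitian covariance estimates for one
    frequency bin, typically accumulated with a time-frequency mask.
    """
    # Principal generalized eigenvector of (phi_speech, phi_noise).
    _, eigvecs = eigh(phi_speech, phi_noise)
    v = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    # Steering vector implied by the rank-1 approximation of the speech covariance.
    steering = phi_noise @ v
    steering = steering / steering[0]  # normalize to a reference microphone
    # Standard MVDR solution: w = Phi_n^{-1} d / (d^H Phi_n^{-1} d).
    num = np.linalg.solve(phi_noise, steering)
    w = num / (steering.conj() @ num)
    return w  # applied per bin as y(t, f) = w^H x(t, f)
```
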
Proceedings ArticleDOI

Binaural tracking of multiple moving sources

TL;DR: A hidden Markov model (HMM) is employed to form continuous tracks and to detect the number of active sources across time, enabling the tracking of the azimuth locations of multiple moving sources based on binaural processing.
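As a toy illustration of HMM-based azimuth tracking, the Viterbi decoder below recovers a single smooth azimuth trajectory from per-frame likelihoods. The azimuth discretization, Gaussian transition model, and observation likelihoods are assumptions for illustration; the paper's HMM additionally handles multiple sources and a varying number of active sources, which this single-track sketch omits.

```python
import numpy as np

def viterbi_azimuth_track(log_likelihood, sigma_deg=5.0, azimuths=None):
    """Decode the most likely azimuth trajectory of one source with an HMM.

    log_likelihood : (T, K) per-frame log-likelihoods over K azimuth states,
                     e.g. derived from binaural cues (ITD/ILD) in each frame.
    Transitions favor small azimuth changes between frames (Gaussian penalty).
    """
    T, K = log_likelihood.shape
    if azimuths is None:
        azimuths = np.linspace(-90.0, 90.0, K)
    # Log transition matrix: Gaussian penalty on the angular step, row-normalized.
    diff = azimuths[None, :] - azimuths[:, None]
    log_trans = -0.5 * (diff / sigma_deg) ** 2
    log_trans -= np.log(np.sum(np.exp(log_trans), axis=1, keepdims=True))

    delta = log_likelihood[0].copy()       # best log-prob ending in each state
    psi = np.zeros((T, K), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # (prev state, next state)
        psi[t] = np.argmax(scores, axis=0)
        delta = scores[psi[t], np.arange(K)] + log_likelihood[t]
    # Backtrace the best state sequence.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = psi[t, path[t]]
    return azimuths[path]
```
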
Journal ArticleDOI

Sequential Organization of Speech in Reverberant Environments by Integrating Monaural Grouping and Binaural Localization

TL;DR: This paper compares localization performance to two existing methods, sequential organization performance to a model-based system that uses only monaural cues, and segregation performance to an exclusively binaural system. The results suggest that the proposed framework allows for improved source localization and robust segregation of voiced speech in environments with considerable reverberation.