Andreas Spanias
Researcher at Arizona State University
Publications - 512
Citations - 8918
Andreas Spanias is an academic researcher at Arizona State University whose work focuses on speech coding and speech processing. He has an h-index of 36 and has co-authored 490 publications that have received 7,895 citations. His previous affiliations include Arizona's Public Universities and Intel.
Papers
Proceedings Article
Sparse Manifold Learning with Applications to SAR Image Classification
Visar Berisha, Nitesh N. Shah, Donald E. Waagen, Harry A. Schmitt, S. Bellofiore, Andreas Spanias, D. Cochran, and 6 more
TL;DR: Proposes a novel method for selecting a minimal set of exemplars and performing the out-of-sample extension to a test point, applied to two-class target recognition with synthetic aperture radar (SAR) data.
Patent
Ensemble sparse models for image analysis and restoration
TL;DR: Discloses methods and systems for recovering corrupted or degraded images using approximations obtained from an ensemble of multiple sparse models, including a simple and computationally efficient dictionary design approach along with a low-complexity reconstruction procedure that may use a parallel-friendly table-lookup process.
Proceedings Article
A STEM REU Site on the Integrated Design of Sensor Devices and Signal Processing Algorithms
TL;DR: Describes an NSF Research Experiences for Undergraduates (REU) site that embeds students in research projects on integrated sensor and signal processing systems, covering the site's activities, modules, training, projects, and their assessment.
Proceedings Article
Selecting disorder-specific features for speech pathology fingerprinting
TL;DR: Proposes a novel feature selection algorithm that minimizes the effect of speaker-specific features, maximizes the effect of pathology-specific features, and simultaneously trades off between these two competing criteria.
Proceedings Article
Unsupervised audio source separation using generative priors
TL;DR: In this paper, the authors propose an unsupervised approach for audio source separation based on generative priors trained on individual sources, which simultaneously searches in the source-specific latent spaces to effectively recover the constituent sources.