Andreas Spanias
Researcher at Arizona State University
Publications - 512
Citations - 8918
Andreas Spanias is an academic researcher at Arizona State University. He has contributed to research topics including speech coding and speech processing. He has an h-index of 36 and has co-authored 490 publications receiving 7895 citations. Previous affiliations of Andreas Spanias include Arizona's Public Universities and Intel.
Papers
Proceedings Article
Direct classification from compressively sensed images via deep Boltzmann machine
TL;DR: A technique for performing classification directly on compressively sensed (CS) data, skipping the computationally expensive reconstruction step, is examined; the approach achieves a 1.21% test error rate at a sensing rate of 0.4, compared to a 0.99% error rate on non-compressive data.
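The compressive-sensing step the summary refers to can be illustrated with a minimal sketch. This is not the paper's deep Boltzmann machine pipeline; it only shows how a random Gaussian sensing matrix turns an image into the measurement vector that a classifier would consume directly, with `sensing_rate` playing the role of the 0.4 figure above (function and parameter names are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def compressive_sense(image, sensing_rate=0.4, rng=rng):
    """Project a flattened image onto random Gaussian measurements.

    A classifier can be trained on the measurement vector y directly,
    without reconstructing the image first.
    """
    x = image.reshape(-1).astype(float)
    n = x.size
    m = int(sensing_rate * n)                    # number of measurements
    phi = rng.normal(size=(m, n)) / np.sqrt(m)   # sensing matrix Phi
    return phi @ x                               # y = Phi x

# e.g. a 28x28 image is reduced to 313 measurements at rate 0.4
y = compressive_sense(np.zeros((28, 28)))
```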
Proceedings Article
Performance of distributed estimation over multiple access fading channels with partial feedback
TL;DR: It is shown that as few as 3 bits of feedback are sufficient to limit the performance loss to about 5%, and that performance is robust in the presence of feedback errors.
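As a toy illustration of partial feedback (not the paper's estimator or channel model), the sketch below uniformly quantizes a scalar, e.g. a channel-gain estimate, to a small number of bits; `n_bits=3` mirrors the 3-bit figure in the summary, and the function name and range are assumptions.

```python
import numpy as np

def quantize_feedback(value, n_bits=3, lo=0.0, hi=1.0):
    """Uniformly quantize a scalar in [lo, hi] to n_bits of feedback.

    Returns the quantizer index (what would actually be fed back)
    and the mid-point reconstruction the transmitter would use.
    """
    levels = 2 ** n_bits
    idx = int(np.clip((value - lo) / (hi - lo) * levels, 0, levels - 1))
    recon = lo + (idx + 0.5) * (hi - lo) / levels
    return idx, recon
```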
Proceedings Article
Adaptive antenna systems for mobile ad-hoc networks
S. Bellofiore,J. Foutz,R. Govindarajula,L. Bahceci,Constantine A. Balanis,Andreas Spanias,J.M. Capone,Tolga M. Duman +7 more
TL;DR: This paper focuses on the interaction and integration of several critical components of a mobile ad-hoc network (MANET) using smart antenna systems, and considers the choice of direction-of-arrival (DOA) algorithm and the length of the training sequence used by the beamforming algorithms.
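The beamforming idea behind the summary can be sketched for a uniform linear array; this is a generic delay-and-sum illustration, not the paper's adaptive algorithms, and the array size and spacing are assumptions.

```python
import numpy as np

def steering_vector(theta_deg, n_elements=8, d=0.5):
    """Steering vector of a uniform linear array (spacing d in wavelengths)."""
    k = np.arange(n_elements)
    phase = 2 * np.pi * d * k * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase)

def beam_power(weights, theta_deg):
    """Array response power toward theta_deg for given beamforming weights."""
    return np.abs(np.vdot(weights, steering_vector(theta_deg))) ** 2

# Delay-and-sum weights steered toward 30 degrees: unit gain at the
# steering angle, attenuated response elsewhere.
w = steering_vector(30.0) / 8
```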
Posted Content
Audio Source Separation via Multi-Scale Learning with Dilated Dense U-Nets
Vivek Sivaraman Narayanaswamy,Sameeksha Katoch,Jayaraman J. Thiagarajan,Huan Song,Andreas Spanias +4 more
TL;DR: This paper replaces regular 1-D convolutions with adaptive dilated convolutions, which have an innate ability to capture increased context through large temporal receptive fields, and investigates the impact of dense connections, which encourage feature reuse and better gradient flow, on the extraction process.
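The dilated-convolution idea in the summary can be shown in a few lines: spacing the kernel taps `dilation` samples apart enlarges the temporal receptive field without adding parameters. This is a plain single-channel sketch, not the paper's adaptive, learned layers.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=2):
    """'Valid' 1-D correlation with a dilated kernel: taps are spaced
    `dilation` samples apart, so a length-K kernel spans
    dilation*(K-1)+1 input samples."""
    span = dilation * (len(kernel) - 1) + 1   # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation]
                     for j in range(len(kernel)))
    return out
```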
Proceedings Article
A hybrid model for speech synthesis
TL;DR: A hybrid model for speech analysis/synthesis that relies on a time-varying autoregressive moving-average (ARMA) model and the short-time Fourier transform (STFT) is proposed; it is expected to yield robust speech synthesis at low data rates.
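The ARMA synthesis step the summary mentions amounts to filtering an excitation through a pole-zero difference equation. The sketch below is a fixed-coefficient illustration (the paper's model is time-varying, with coefficients updated per analysis frame); coefficient conventions and names are assumptions.

```python
import numpy as np

def arma_synthesize(excitation, ar, ma):
    """Filter an excitation through an ARMA (pole-zero) model:
    y[n] = sum_j ma[j]*e[n-j] - sum_j ar[j]*y[n-j]."""
    y = np.zeros(len(excitation))
    for n in range(len(excitation)):
        acc = sum(b * excitation[n - j]
                  for j, b in enumerate(ma) if n - j >= 0)
        acc -= sum(a * y[n - j]
                   for j, a in enumerate(ar, start=1) if n - j >= 0)
        y[n] = acc
    return y

# Impulse response of y[n] = e[n] + 0.5*y[n-1]: a decaying exponential.
h = arma_synthesize([1.0, 0.0, 0.0, 0.0], ar=[-0.5], ma=[1.0])
```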