Gregory D. Peterson
Researcher at University of Tennessee
Publications - 120
Citations - 5443
Gregory D. Peterson is an academic researcher at the University of Tennessee. He has contributed to research on topics including Reconfigurable computing and Speedup. He has an h-index of 24 and has co-authored 120 publications receiving 4,717 citations. Previous affiliations of Gregory D. Peterson include the University of Washington and the University of Cincinnati.
Papers
High Performance Reconfigurable Computing for Cholesky Decomposition
TL;DR: A hardware accelerator for Cholesky decomposition on FPGAs is proposed, built around a single triangular linear equation solver; it achieves a speedup of 7–13.5% over software running on an Intel Xeon quad-core microprocessor.
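The paper's FPGA design itself is not reproduced here, but the computation it accelerates is the standard Cholesky factorization A = L·Lᵀ. The sketch below is a minimal plain-Python version, assuming the textbook column-by-column algorithm; the inner loop is the triangular-solve step that the paper's single triangular solver targets.

```python
import math

def cholesky(A):
    """Factor a symmetric positive-definite matrix A as L * L^T.

    Plain-Python sketch of the textbook algorithm; an FPGA accelerator
    would map the same dependency structure to hardware.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        # Diagonal entry: remove contributions of columns already computed.
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)
        # Below-diagonal entries of column j: a triangular-solve step.
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L
```

For example, `cholesky([[4.0, 2.0], [2.0, 3.0]])` yields a lower-triangular factor whose product with its transpose reconstructs the input.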
Proceedings ArticleDOI
Compressive sensing TDOA for UWB positioning systems
TL;DR: Simulation results have demonstrated that the CS-based UWB positioning system using the FOMP TDOA algorithm has the potential to achieve sub-mm positioning accuracy while still using very low sampling rate ADCs.
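The FOMP algorithm from the paper is not spelled out in this summary; as a reference point, the sketch below shows standard orthogonal matching pursuit (OMP), the greedy sparse-recovery family FOMP belongs to. All names here (`omp`, `Phi`, `k`) are illustrative, not the paper's notation.

```python
import numpy as np

def omp(Phi, y, k):
    """Standard orthogonal matching pursuit, a sketch only.

    Greedily picks the column of the sensing matrix Phi most correlated
    with the current residual, then re-fits y on all selected columns
    by least squares; repeats for k iterations (the assumed sparsity).
    """
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Column most correlated with what is left unexplained.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # Least-squares re-fit on the selected support.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x[support] = coeffs
    return x
```

In a CS-based TDOA setting, `y` would be the sub-Nyquist measurements and the recovered sparse vector localizes the UWB signal's delay.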
Journal ArticleDOI
An Effective Execution Time Approximation Method for Parallel Computing
Junqing Sun, Gregory D. Peterson +1 more
TL;DR: This paper presents a property of extreme values that enables Effective Mean Maximum Approximation (EMMA), which is more accurate and generalizes better across computational environments than previous mean maximum execution time approximation methods.
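The EMMA formula itself is not given in this summary. The sketch below only illustrates the quantity being approximated: the mean of the maximum of per-process execution times, which governs parallel runtime, estimated by Monte Carlo and compared against the classical Gaussian extreme-value bound μ + σ·√(2 ln n). That bound is a standard result used here as an assumed reference point, not the paper's method.

```python
import math
import random

def mean_maximum(mu, sigma, n_procs, trials=20000, seed=0):
    # Monte Carlo estimate of expected parallel execution time: the mean,
    # over many runs, of the slowest of n_procs i.i.d. Gaussian
    # per-process times. This is the quantity EMMA approximates.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(mu, sigma) for _ in range(n_procs))
    return total / trials

def gaussian_max_bound(mu, sigma, n_procs):
    # Classical extreme-value bound: E[max] <= mu + sigma * sqrt(2 ln n).
    return mu + sigma * math.sqrt(2.0 * math.log(n_procs))
```

With 64 processes whose times are N(1.0, 0.1), the simulated mean maximum sits above the per-process mean but below the extreme-value bound, which is the gap an approximation method like EMMA aims to close.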
Proceedings ArticleDOI
Space-Time Turbo Bayesian Compressed Sensing for UWB Systems
TL;DR: Simulation results using experimental UWB echo signals demonstrate that the proposed STTBCS algorithm achieves good performance for UWB systems, compared with the traditional BCS and multitask BCS algorithms.
Journal ArticleDOI
Evolvable block-based neural network design for applications in dynamic environments
TL;DR: The network structure and the internal parameters, the two pieces of the multiparametric evolution of BbNNs, can be adapted intrinsically and in-field under the control of the training algorithm. This enables deployment of the platform in dynamic environments, significantly expanding the range of target applications, extending deployment lifetimes, and improving system reliability.