Shubhabrata Sengupta
Researcher at University of California, Davis
Publications - 25
Citations - 2146
Shubhabrata Sengupta is an academic researcher at the University of California, Davis. He has contributed to research topics including data structures and graphics hardware, and has an h-index of 15, having co-authored 25 publications that have received 2087 citations. Previous affiliations of Shubhabrata Sengupta include Baidu and Nvidia.
Papers
Patent
End-to-end speech recognition
Bryan Catanzaro,Jingdong Chen,Mike Chrzanowski,Erich Elsen,Jesse Engel,Christopher Fougner,Han Xu,Awni Hannun,Ryan Prenger,Sanjeev Satheesh,Shubhabrata Sengupta,Dani Yogatama,Chong Wang,Jun Zhan,Zhenyao Zhu,Dario Amodei +15 more
TL;DR: The entire pipeline of hand-engineered speech-recognition components is replaced with neural networks; end-to-end learning allows the system to handle a diverse variety of speech, including noisy environments, accents, and different languages.
Book Chapter
Efficient Parallel Scan Algorithms for Manycore GPUs
TL;DR: This paper presents efficient parallel scan and segmented scan primitives for manycore GPUs, constructed from fast intra-warp scans, and shows how they serve as building blocks for higher-level algorithms such as sorting and sparse matrix operations.
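The parallel scan at the heart of this line of work can be sketched serially. Below is a minimal Python sketch of the classic work-efficient (Blelloch) exclusive scan; on a GPU, each level of the up-sweep and down-sweep would run as a parallel pass over independent pairs, while here the loops are sequential. The function name and padding strategy are illustrative, not taken from the paper.

```python
def exclusive_scan(data, op=lambda a, b: a + b, identity=0):
    """Work-efficient (Blelloch) exclusive scan, sketched serially."""
    n = len(data)
    # Pad to a power of two so the tree-based sweeps are uniform.
    m = 1
    while m < n:
        m *= 2
    a = list(data) + [identity] * (m - n)

    # Up-sweep (reduce): build partial sums up the tree.
    d = 1
    while d < m:
        for i in range(0, m, 2 * d):
            a[i + 2 * d - 1] = op(a[i + d - 1], a[i + 2 * d - 1])
        d *= 2

    # Down-sweep: clear the root, then push prefixes back down.
    a[m - 1] = identity
    d = m // 2
    while d >= 1:
        for i in range(0, m, 2 * d):
            t = a[i + d - 1]
            a[i + d - 1] = a[i + 2 * d - 1]
            a[i + 2 * d - 1] = op(t, a[i + 2 * d - 1])
        d //= 2
    return a[:n]

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# → [0, 3, 4, 11, 11, 15, 16, 22]
```

Each pass touches disjoint pairs, which is what makes the algorithm map well onto wide SIMD hardware.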
Proceedings Article
Dynamic adaptive shadow maps on graphics hardware
TL;DR: A novel implementation of adaptive shadow maps (ASMs) that performs all shadow lookups and scene analysis on the GPU, enabling interactive rendering with ASMs while moving both the light and camera.
Proceedings Article
Octree textures on graphics hardware
TL;DR: This work implements an interactive 3D painting application that stores paint in an octree-like GPU-based adaptive data structure that enables interactive performance with high quality by supporting quadlinear (mipmapped) filtering and fast, constant-time data accesses.
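The constant-time access claim follows from bounding the octree's depth. The sketch below shows a CPU-side, pointer-based octree insert and lookup with a fixed maximum depth; the class and function names are hypothetical, and the paper's actual structure lives in GPU texture memory with an indirection-based node layout rather than Python objects.

```python
class OctreeNode:
    """Toy octree node: eight children (one per octant) and a leaf payload."""
    def __init__(self):
        self.children = [None] * 8
        self.color = None  # painted color at a leaf

def insert(root, x, y, z, color, depth):
    """Insert a color at normalized coordinates (x, y, z) in [0, 1)."""
    node = root
    for _ in range(depth):
        # Choose the octant from the leading bit of each coordinate.
        octant = (int(x >= 0.5) << 2) | (int(y >= 0.5) << 1) | int(z >= 0.5)
        x, y, z = (x * 2) % 1.0, (y * 2) % 1.0, (z * 2) % 1.0
        if node.children[octant] is None:
            node.children[octant] = OctreeNode()
        node = node.children[octant]
    node.color = color

def lookup(root, x, y, z):
    """Descend until a leaf; cost is O(max depth), i.e. effectively constant."""
    node = root
    while node.color is None:
        octant = (int(x >= 0.5) << 2) | (int(y >= 0.5) << 1) | int(z >= 0.5)
        x, y, z = (x * 2) % 1.0, (y * 2) % 1.0, (z * 2) % 1.0
        if node.children[octant] is None:
            return None  # unpainted region
        node = node.children[octant]
    return node.color

root = OctreeNode()
insert(root, 0.1, 0.6, 0.3, "red", 3)
print(lookup(root, 0.1, 0.6, 0.3))  # → red
```

Because only painted regions allocate children, the structure stays adaptive: resolution is spent where paint actually lands.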
Assessment of Graphic Processing Units (GPUs) for Department of Defense (DoD) Digital Signal Processing (DSP) Applications
TL;DR: The overheads of transferring data to the GPU and initiating GPU computation are substantial; for latency-critical applications, the CPU is therefore the superior choice.