Arnab Paul
Researcher at Intel
Publications - 24
Citations - 2471
Arnab Paul is an academic researcher from Intel. The author has contributed to research on topics including computer science and deep learning. The author has an h-index of 5, having co-authored 13 publications that have received 1,411 citations.
Papers
Journal ArticleDOI
Loihi: A Neuromorphic Manycore Processor with On-Chip Learning
Michael Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham N. Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios D. Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, Yuyun Liao, Chit-Kwan Lin, Andrew Lines, Ruokun Liu, Deepak A. Mathaikutty, Steven McCoy, Arnab Paul, Jonathan Tse, Guruguhanathan Venkataramanan, Yi-Hsin Weng, Andreas Wild, Yoon Seok Yang, Hong Wang +22 more
TL;DR: Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state of the art in modeling spiking neural networks in silicon; it can solve LASSO optimization problems with an energy-delay product over three orders of magnitude better than conventional solvers running on an iso-process/voltage/area CPU.
Proceedings ArticleDOI
e-SAFE: An Extensible, Secure and Fault Tolerant Storage System
TL;DR: e-SAFE is a scalable, utility-driven distributed storage system that offers very high availability at archival scale, reduces management overhead such as periodic repairs, and provides strong guarantees on data integrity.
Journal ArticleDOI
Stampede: a cluster programming middleware for interactive stream-oriented applications
Umakishore Ramachandran,Rishiyur S. Nikhil,James M. Rehg,Yavor Angelov,Arnab Paul,Sameer Adhikari,Kenneth Mackenzie,Nissim Harel,Kathleen Knobe +8 more
TL;DR: This work presents an overview of Stampede, its primary data abstractions, the algorithmic basis of its garbage collection, and the issues in implementing these abstractions on a cluster of SMPs, along with a set of micro-measurements and two multimedia applications implemented on top of Stampede.
Posted Content
Why does Deep Learning work? - A perspective from Group Theory
TL;DR: It is shown how the same principle, when repeated in deeper layers, can capture higher-order representations, and why representation complexity increases as the layers get deeper.