
Sahithi Rampalli

Researcher at Pennsylvania State University

Publications - 4
Citations - 54

Sahithi Rampalli is an academic researcher from Pennsylvania State University. The author has contributed to research in topics: Recurrent neural network & Crossbar switch. The author has an h-index of 2 and has co-authored 4 publications receiving 16 citations.

Papers
Proceedings ArticleDOI

GaaS-X: graph analytics accelerator supporting sparse data representation using crossbar architectures

TL;DR: This work presents GaaS-X, a graph analytics accelerator that inherently supports sparse graph data representations using an in-situ compute-enabled crossbar memory architecture, alleviating the overheads of redundant writes, sparse-to-dense conversions, and redundant computations on invalid edges that are present in state-of-the-art crossbar-based PIM accelerators.
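A minimal software analogy of the sparse-representation point above (not GaaS-X's actual crossbar dataflow): a CSR traversal visits only the stored, valid edges, whereas a dense adjacency-matrix multiply spends work on every absent (zero) edge. The graph and function below are illustrative only.

```python
import numpy as np

def spmv_csr(indptr, indices, values, x):
    """y = A @ x using CSR storage; only stored (valid) edges are visited."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += values[k] * x[indices[k]]
    return y

# Toy 4-node graph with 4 directed edges: 0->1, 0->2, 2->3, 3->0.
indptr  = np.array([0, 2, 2, 3, 4])
indices = np.array([1, 2, 3, 0])
values  = np.ones(4)
ranks   = np.full(4, 0.25)          # e.g., one PageRank-style propagation step
print(spmv_csr(indptr, indices, values, ranks))
```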
Proceedings ArticleDOI

PSB-RNN: a processing-in-memory systolic array architecture using block circulant matrices for recurrent neural networks

TL;DR: This work presents a ReRAM crossbar-based processing-in-memory (PIM) architecture with systolic dataflow incorporating block circulant compression for RNNs, and transforms the Fourier transform and point-wise operations into in-situ multiply-and-accumulate (MAC) operations mapped to ReRAM crossbars for high energy efficiency and throughput.
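A worked sketch of the generic block-circulant idea behind that summary (standard math, not PSB-RNN's specific crossbar mapping): each circulant block is fully defined by one vector, and its matrix-vector product reduces to FFT, point-wise multiply, and inverse FFT, which is the kind of computation that gets cast as in-situ MAC operations.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix defined by first column c with x via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

c = np.array([1.0, 2.0, 3.0, 4.0])   # one vector defines a 4x4 circulant block
x = np.random.randn(4)

# Reference check: build the dense circulant block explicitly.
C = np.column_stack([np.roll(c, j) for j in range(4)])
assert np.allclose(C @ x, circulant_matvec(c, x))
```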
Journal ArticleDOI

FARM: A Flexible Accelerator for Recurrent and Memory Augmented Neural Networks

TL;DR: This work presents an end-to-end hardware accelerator architecture, FARM, for the inference of RNNs and several variants of MANNs, such as the Differentiable Neural Computer (DNC), the Neural Turing Machine (NTM), and meta-learning models, and proposes the FARM-PIM architecture, which augments FARM with in-memory compute support for MAC and content-similarity operations in order to reduce data traversal costs.
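For context, a minimal sketch of the content-similarity step (content-based addressing) used in NTM/DNC-style MANNs, which is the kind of operation FARM-PIM moves in memory; the shapes and names here are illustrative, not FARM's interface.

```python
import numpy as np

def content_addressing(memory, key, beta):
    """Softmax over cosine similarity between a query key and every memory row."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    scores = np.exp(beta * sims)    # beta sharpens or flattens the distribution
    return scores / scores.sum()

M = np.random.randn(128, 64)        # 128 memory slots of width 64 (illustrative)
k = np.random.randn(64)             # query key emitted by the controller
w = content_addressing(M, k, beta=5.0)
print(w.shape, w.sum())             # (128,), sums to ~1.0
```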
Proceedings ArticleDOI

X-VS: Crossbar-Based Processing-in-Memory Architecture for Video Summarization

TL;DR: X-VS, a ReRAM processing-in-memory (PIM) hardware accelerator architecture for video summarization workloads, is presented. It augments a baseline ReRAM CNN accelerator with a systolic array-based crossbar architecture to provide efficient support for recurrent neural networks, attention and content-similarity mechanisms, and hash-based word-embedding lookup required by video summarization networks.
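As a generic illustration of the attention-style scoring common in video-summarization networks (not X-VS's actual architecture), the sketch below scores each frame's feature vector against a query vector and normalizes the scores to pick keyframes; all shapes are illustrative.

```python
import numpy as np

def frame_importance(frame_feats, query):
    """Softmax attention scores over per-frame features for a single query vector."""
    logits = frame_feats @ query / np.sqrt(frame_feats.shape[1])
    e = np.exp(logits - logits.max())
    return e / e.sum()

feats = np.random.randn(300, 512)    # 300 frames, 512-d CNN features (illustrative)
q = np.random.randn(512)             # query / context vector
scores = frame_importance(feats, q)
keyframes = np.argsort(scores)[-5:]  # select the 5 highest-scoring frames
```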