
Shihui Yin

Researcher at Arizona State University

Publications -  51
Citations -  1393

Shihui Yin is an academic researcher from Arizona State University. The author has contributed to research in the topics of static random-access memory and hardware acceleration, has an h-index of 13, and has co-authored 47 publications receiving 635 citations. Previous affiliations of Shihui Yin include Carnegie Mellon University and Arizona's Public Universities.

Papers
Proceedings ArticleDOI

XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks

TL;DR: An in-memory computing SRAM macro that computes XNOR-and-accumulate in binary/ternary deep neural networks on the bitline without row-by-row data access is presented.
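The XNOR-and-accumulate primitive that this macro (and the related papers below) evaluates in analog on the bitline can be sketched in plain Python. This is an illustration of the arithmetic only, not the paper's circuit: with weights and activations in {-1, +1} packed as bits (1 for +1, 0 for -1), XNOR marks the positions where the two vectors agree, and the dot product follows from a popcount.

```python
def xnor_accumulate(weights: int, activations: int, n_bits: int) -> int:
    """Dot product of two {-1, +1} vectors packed as bits (1 -> +1, 0 -> -1).

    Each agreeing position contributes +1 and each disagreeing position -1,
    so the sum equals 2 * popcount(XNOR) - n_bits.
    """
    mask = (1 << n_bits) - 1
    agree = ~(weights ^ activations) & mask  # bitwise XNOR, truncated to n_bits
    return 2 * bin(agree).count("1") - n_bits

# Example: w = 0b1011 (+1,-1,+1,+1), x = 0b1101 (+1,+1,-1,+1)
# 2 positions agree, 2 disagree -> dot product = 0
print(xnor_accumulate(0b1011, 0b1101, 4))  # -> 0
```

In the hardware described in these papers, the per-bit XNOR happens inside each bitcell and the accumulation is performed by summing analog contributions on a shared bitline, which is what removes the row-by-row data access.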
Proceedings ArticleDOI

XNOR-RRAM: A scalable and parallel resistive synaptic architecture for binary neural networks

TL;DR: This work proposes an RRAM synaptic architecture with a bit-cell design of complementary word lines that implements the equivalent XNOR and bit-counting operations in a parallel fashion. It also investigates the impact of sensing offsets on classification accuracy and analyzes design options with different sub-array sizes and sensing bit-levels.
Journal ArticleDOI

C3SRAM: An In-Memory-Computing SRAM Macro Based on Robust Capacitive Coupling Computing Mechanism

TL;DR: The macro is an SRAM module with circuits embedded in the bitcells and peripherals to perform hardware acceleration for neural networks with binarized weights and activations. It utilizes analog mixed-signal capacitive-coupling computing to evaluate the main computation of binary neural networks: the binary multiply-and-accumulate operation.
Journal ArticleDOI

XNOR-SRAM: In-Memory Computing SRAM Macro for Binary/Ternary Deep Neural Networks

TL;DR: XNOR-SRAM is a mixed-signal in-memory computing (IMC) SRAM macro that computes ternary XNOR-and-accumulate (XAC) operations in binary/ternary deep neural networks (DNNs) without row-by-row data access, achieving among the best tradeoffs between energy efficiency and DNN accuracy.
Journal ArticleDOI

High-Throughput In-Memory Computing for Binary Deep Neural Networks With Monolithically Integrated RRAM and 90-nm CMOS

TL;DR: This work presents a resistive-RAM-based in-memory computing (IMC) design, fabricated in 90-nm CMOS with monolithic integration of RRAM devices, and demonstrates improvements in throughput and energy-delay product (EDP) over the state-of-the-art literature.