Alexander H. Hsia

Researcher at Sandia National Laboratories

Publications: 14
Citations: 536

Alexander H. Hsia is an academic researcher at Sandia National Laboratories. He has contributed to research in topics including CMOS and adiabatic processes. He has an h-index of 8 and has co-authored 14 publications receiving 379 citations. His previous affiliations include the Massachusetts Institute of Technology.

Papers
Proceedings Article

Resistive memory device requirements for a neural algorithm accelerator

TL;DR: A general-purpose neural architecture that can accelerate many different algorithms is proposed, and the device properties needed to run backpropagation on that architecture are determined.
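As a rough illustration of the computation such an accelerator must support, below is a minimal Python sketch (a toy, not the paper's design) of the two crossbar operations backpropagation relies on: an analog vector-matrix multiply and a parallel outer-product weight update. The write_noise parameter and the [0, 1] conductance range are hypothetical stand-ins for the device nonidealities a requirements analysis targets.

import numpy as np

rng = np.random.default_rng(0)

def crossbar_vmm(G, v):
    # Ohm's law plus Kirchhoff's current law: each output current is the
    # conductance-weighted sum of the input voltages.
    return G @ v

def crossbar_update(G, x, delta, lr=0.01, write_noise=0.05):
    # Backpropagation's rank-1 weight update, applied to every cell in
    # parallel; write_noise models imperfect, stochastic conductance writes.
    dG = lr * np.outer(delta, x)
    dG += write_noise * np.abs(dG) * rng.standard_normal(dG.shape)
    return np.clip(G + dG, 0.0, 1.0)  # conductances bounded by the device range

G = rng.uniform(0.0, 1.0, size=(4, 8))  # toy 4x8 crossbar of conductances
x = rng.standard_normal(8)              # input activations
y = crossbar_vmm(G, x)                  # forward pass
G = crossbar_update(G, x, -y)           # toy error signal, delta = -y

The smaller the write noise a target accuracy can tolerate, the stricter the device requirement, which is roughly the kind of tradeoff a device-requirements analysis examines.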
Journal Article

Multiscale Co-Design Analysis of Energy, Latency, Area, and Accuracy of a ReRAM Analog Neural Training Accelerator

TL;DR: In this article, a detailed circuit- and device-level analysis of a training accelerator is presented that may serve as a foundation for further architecture-level studies; the possible gains over a similar digital-only version of the accelerator block suggest that continued optimization of analog resistive memories is valuable.
Journal Article

Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding.

TL;DR: This paper presents a crossbar kernel-based architecture that can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run at finite precision.
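A simplified reading of that scaling claim, with illustrative constants that are not from the paper: a digital implementation touches each of the N^2 weights of a vector-matrix multiply individually, while the crossbar evaluates the whole product in one analog step whose energy grows only linearly with the number of lines:

\[
E_{\text{digital}} \approx c_{\text{dig}}\, N^{2},
\qquad
E_{\text{crossbar}} \approx c_{\text{xbar}}\, N,
\qquad
\frac{E_{\text{digital}}}{E_{\text{crossbar}}} = O(N).
\]

Since sparse coding iterates this multiply many times, the factor-of-N saving carries over to the entire algorithm.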
Patent

Low-voltage differentially-signaled modulators

TL;DR: The first differentially signaled silicon resonator is demonstrated, which can provide a 5 dB extinction ratio using 3 fJ/bit and a 500 mV signal amplitude at 10 Gbps.
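For a sense of scale, assuming the common E_bit ≈ CV²/4 estimate for the energy of a capacitive load driven with random data (an assumption for illustration, not a figure from the patent), the quoted numbers imply an effective device capacitance of roughly

\[
C \approx \frac{4\,E_{\text{bit}}}{V^{2}}
  = \frac{4 \times 3\,\text{fJ}}{(0.5\,\text{V})^{2}}
  \approx 48\,\text{fF},
\]

on the order of the small junction capacitances that make silicon microresonators attractive for low-power links.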
Journal Article

Multiscale Co-Design Analysis of Energy, Latency, Area, and Accuracy of a ReRAM Analog Neural Training Accelerator

TL;DR: In this paper, a detailed circuit- and device-level analysis is given of an analog crossbar circuit block designed to process three key kernels required in training and inference of neural networks, and the design is compared to relevant designs using standard digital ReRAM and SRAM operations.
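The summary does not name the three kernels; assuming the standard training loop, they would be the forward vector-matrix multiply, the transposed multiply that backpropagates the error, and the parallel rank-1 outer-product update, as in this minimal Python sketch (illustrative only, not the paper's circuit):

import numpy as np

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((8, 4))  # toy weight crossbar

def forward_vmm(W, x):
    # Kernel 1: vector-matrix multiply, used for inference and the forward pass.
    return x @ W

def backward_vmm(W, delta):
    # Kernel 2: multiply by the transpose to backpropagate the error.
    return delta @ W.T

def outer_update(W, x, delta, lr=0.01):
    # Kernel 3: rank-1 outer-product update of every cell in parallel.
    return W - lr * np.outer(x, delta)

x = rng.standard_normal(8)
y = forward_vmm(W, x)
delta = y - np.ones(4)           # toy error against a dummy target
x_grad = backward_vmm(W, delta)  # gradient passed to the previous layer
W = outer_update(W, x, delta)

An analog crossbar can execute each of these kernels in a single parallel operation, which is the source of the gains compared against the digital ReRAM and SRAM designs.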