
Sean Kinzer

Researcher at University of California, San Diego

Publications: 11
Citations: 104

Sean Kinzer is an academic researcher from the University of California, San Diego. The author has contributed to research in the topics: Computer science & Compiler. The author has an h-index of 3, having co-authored 4 publications receiving 31 citations.

Papers
Proceedings ArticleDOI

Planaria: Dynamic Architecture Fission for Spatial Multi-Tenant Acceleration of Deep Neural Networks

TL;DR: This paper defines Planaria, a microarchitectural capability that allows a DNN accelerator to dynamically fission (break) into multiple smaller yet full-fledged DNN engines at runtime, enabling multiple DNN inference services to be spatially co-located on the same hardware for simultaneous multi-tenant acceleration.
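
As a rough illustration of the fission idea (a hypothetical software sketch, not Planaria's actual microarchitecture or scheduling policy), the snippet below models the accelerator as a pool of identical compute pods and splits it into smaller per-task engines for spatially co-located inference services:

```python
# Illustrative sketch only: model a monolithic accelerator as a pool of
# identical compute pods and "fission" it into smaller engines, one per
# co-located DNN inference task. Task names and pod counts are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    name: str          # hypothetical DNN inference service
    min_pods: int      # smallest engine that still meets its latency target

def fission(total_pods: int, tasks: list[Task]) -> dict[str, int]:
    """Give every task its minimum engine, then spread leftover pods round-robin."""
    required = sum(t.min_pods for t in tasks)
    if required > total_pods:
        raise ValueError("not enough pods to co-locate all tasks")
    alloc = {t.name: t.min_pods for t in tasks}
    leftover = total_pods - required
    for i in range(leftover):                     # hand out the spare pods
        alloc[tasks[i % len(tasks)].name] += 1
    return alloc

if __name__ == "__main__":
    tasks = [Task("resnet50", 4), Task("bert-base", 6), Task("yolo", 2)]
    print(fission(16, tasks))   # {'resnet50': 6, 'bert-base': 7, 'yolo': 3}
```
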
Posted Content

Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic

TL;DR: This paper defines a possible 3D-stacked microarchitecture, dubbed BiHiwe, that leverages clustering and a hierarchical design to best exploit the power efficiency of the mixed-signal domain and 3D stacking, and also builds models for noise, computational non-idealities, and variations.
Proceedings ArticleDOI

Mixed-Signal Charge-Domain Acceleration of Deep Neural Networks through Interleaved Bit-Partitioned Arithmetic

TL;DR: BiHiwe, as discussed by the authors, proposes a wide, bit-interleaved analog vector unit comprising low-bitwidth multiply-accumulate modules that operate in the analog domain and share a single A/D converter.
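
To make the arithmetic concrete, here is a minimal numerical sketch (a digital stand-in assuming simple unsigned operands; it does not model the charge-domain circuits or the shared A/D converter) showing that splitting operands into low-bitwidth partitions, multiplying the partitions pairwise, and shifting the partial products back into place reproduces the full-precision dot product:

```python
# Minimal sketch of bit-partitioned multiply-accumulate: each operand is split
# into low-bitwidth slices, the slices are multiplied pairwise, and the partial
# products are shifted back to their bit positions and summed.

def bit_partition(x: int, part_bits: int, n_parts: int) -> list[int]:
    """Split an unsigned integer into n_parts slices of part_bits each (LSB first)."""
    mask = (1 << part_bits) - 1
    return [(x >> (i * part_bits)) & mask for i in range(n_parts)]

def partitioned_mac(a: list[int], b: list[int], part_bits: int = 2, n_parts: int = 4) -> int:
    """Dot product of two vectors of unsigned (part_bits * n_parts)-bit values."""
    acc = 0
    for x, y in zip(a, b):
        xs = bit_partition(x, part_bits, n_parts)
        ys = bit_partition(y, part_bits, n_parts)
        for i, xi in enumerate(xs):
            for j, yj in enumerate(ys):
                acc += (xi * yj) << (part_bits * (i + j))   # shift partial product
    return acc

if __name__ == "__main__":
    a, b = [23, 200, 7], [91, 14, 255]
    assert partitioned_mac(a, b) == sum(x * y for x, y in zip(a, b))
    print(partitioned_mac(a, b))
```
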
Proceedings ArticleDOI

A Computational Stack for Cross-Domain Acceleration

TL;DR: PolyMath, as discussed by the authors, provides a unified computational stack across multiple, but not all, domains by defining a high-level cross-domain language (CDL), called PMLang, which encapsulates mathematical properties to remain expressive across those domains.
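
The sketch below illustrates only the general cross-domain idea with invented names (it is not PMLang syntax or the PolyMath compiler): a single domain-agnostic matrix-multiply node that can be lowered either to an optimized library call or to an explicit loop nest that a hardware backend could tile and schedule:

```python
# Hypothetical sketch of a cross-domain lowering step; all names are illustrative.

import numpy as np

class MatMul:
    """A single high-level node: C[i, j] = sum_k A[i, k] * B[k, j]."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def lower_to_numpy(self):
        # "Library" backend: hand the whole operation to an optimized kernel.
        return self.a @ self.b

    def lower_to_loops(self):
        # "Loop-nest" backend: the same math spelled out element by element,
        # the form a hardware code generator would tile and schedule.
        m, k = self.a.shape
        _, n = self.b.shape
        out = np.zeros((m, n))
        for i in range(m):
            for j in range(n):
                for kk in range(k):
                    out[i, j] += self.a[i, kk] * self.b[kk, j]
        return out

if __name__ == "__main__":
    A, B = np.random.rand(4, 5), np.random.rand(5, 3)
    node = MatMul(A, B)
    assert np.allclose(node.lower_to_numpy(), node.lower_to_loops())
```
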
Proceedings ArticleDOI

Glimpse: mathematical embedding of hardware specification for neural compilation

TL;DR: This paper proposes a Bayesian optimization framework, called Glimpse, that incorporates a mathematical embedding of the accelerator's hardware specification, dubbed Blueprint, to better guide the search algorithm and focus it on sub-spaces with higher potential for yielding higher-performance binaries.
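
As a hedged illustration of that style of search (using off-the-shelf scikit-optimize and a synthetic cost model; the knob names, the HW_SPEC dictionary, and the objective are hypothetical and not Glimpse's actual formulation or its Blueprint embedding):

```python
# Illustrative sketch only: Bayesian optimization over hypothetical compilation
# knobs (tile sizes, unroll factor). A fixed hardware-spec vector parameterizes
# a synthetic cost model standing in for measured binary latency.

from skopt import gp_minimize
from skopt.space import Integer

HW_SPEC = {"pe_rows": 16, "pe_cols": 16, "sram_kb": 512}   # hypothetical spec

def synthetic_latency(params):
    """Stand-in cost: penalize tiles that mismatch the PE array or overflow SRAM."""
    tile_m, tile_n, unroll = params
    mismatch = abs(tile_m - HW_SPEC["pe_rows"]) + abs(tile_n - HW_SPEC["pe_cols"])
    footprint_kb = tile_m * tile_n * 4 / 1024
    overflow = max(0.0, footprint_kb - HW_SPEC["sram_kb"])
    return mismatch + 10.0 * overflow + 1.0 / unroll

space = [Integer(1, 64, name="tile_m"),
         Integer(1, 64, name="tile_n"),
         Integer(1, 8, name="unroll")]

result = gp_minimize(synthetic_latency, space, n_calls=30, random_state=0)
print("best knobs:", result.x, "estimated cost:", result.fun)
```
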