scispace - formally typeset

Surat Teerapittayanon

Researcher at Harvard University

Publications: 28
Citations: 2815

Surat Teerapittayanon is an academic researcher from Harvard University. The author has contributed to research in topics including the MNIST database and artificial neural networks. The author has an h-index of 11 and has co-authored 23 publications receiving 1942 citations. Previous affiliations of Surat Teerapittayanon include the Massachusetts Institute of Technology and the Wyss Institute for Biologically Inspired Engineering.

Papers
Journal ArticleDOI

Rapid prototyping of 3D DNA-origami shapes with caDNAno

TL;DR: caDNAno, as described in this paper, is an open-source software package with a graphical user interface that aids the design of DNA sequences for folding 3D honeycomb-pleated shapes. A series of rectangular-block motifs was designed, assembled, and analyzed to identify a well-behaved motif that could serve as a building block for future studies.
Proceedings ArticleDOI

BranchyNet: Fast inference via early exiting from deep neural networks

TL;DR: The BranchyNet architecture is presented: a novel deep network augmented with additional side-branch classifiers that can both improve accuracy and significantly reduce the inference time of the network.
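The early-exit idea behind BranchyNet can be illustrated with a minimal sketch: samples exit at the first side branch whose softmax prediction has low entropy, so easy inputs skip the deeper layers. This is a simplified illustration, not the paper's implementation; the `branches` callables and the entropy threshold of 0.5 are hypothetical stand-ins for trained branch classifiers and a tuned per-branch threshold.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    # Shannon entropy of a probability vector (low = confident).
    return -np.sum(p * np.log(p + 1e-12))

def branchy_infer(x, branches, threshold=0.5):
    """Evaluate branch classifiers in order of depth; return
    (exit_index, predicted_class) at the first confident branch."""
    for i, branch in enumerate(branches):
        p = softmax(branch(x))
        if entropy(p) < threshold:
            return i, int(np.argmax(p))  # confident enough: exit early
    # No branch was confident: fall through to the final exit.
    return len(branches) - 1, int(np.argmax(p))
```

Here each `branch(x)` stands in for the network prefix up to that exit plus its side classifier; the saving comes from never evaluating branches deeper than the chosen exit.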
Proceedings ArticleDOI

Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices

TL;DR: In this paper, the authors proposed distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices.
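A core ingredient of the DDNN idea is aggregating information from multiple end devices before classifying higher in the hierarchy (at the edge or cloud). The sketch below shows one plausible aggregation scheme, element-wise max-pooling over device feature vectors; `edge_classifier` is a hypothetical placeholder for the edge-side portion of the network, and the pooling choice is an assumption for illustration rather than the paper's exact pipeline.

```python
import numpy as np

def ddnn_edge_infer(device_features, edge_classifier):
    """Aggregate feature vectors produced by several end devices via
    element-wise max-pooling, then classify the pooled vector at the
    edge. Returns the predicted class index."""
    agg = device_features[0]
    for f in device_features[1:]:
        agg = np.maximum(agg, f)  # element-wise max across devices
    return int(np.argmax(edge_classifier(agg)))
```

In the hierarchical setting, the same pattern repeats: if the edge is not confident, its pooled features would be forwarded to the cloud for a final decision.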
Proceedings ArticleDOI

Embedded Binarized Neural Networks

TL;DR: In this paper, the authors focus on minimizing the required memory footprint, given that these devices often have as little as tens of kilobytes (KB) of memory, and show that it is essential to minimize the memory used for temporaries, which hold intermediate results between layers in feedforward inference.
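The temporary-memory concern can be made concrete with a small back-of-the-envelope sketch: during feedforward inference, each layer needs its input and output activation buffers live at the same time, so peak temporary memory is the maximum over adjacent layer pairs. With binarized activations, each value costs one bit packed into 32-bit words. This accounting scheme is an illustrative assumption, not the paper's exact model.

```python
def peak_temporary_bytes(layer_sizes, bits_per_activation=1):
    """Estimate peak temporary memory for feedforward inference:
    at each layer, the input and output activation buffers must
    both be resident. Activations are packed into 32-bit words."""
    def buf_bytes(n):
        # Round the packed activations up to whole 32-bit words.
        words = -(-(n * bits_per_activation) // 32)
        return words * 4
    return max(buf_bytes(a) + buf_bytes(b)
               for a, b in zip(layer_sizes, layer_sizes[1:]))
```

For a toy network with layer widths 1024, 512, and 10, binarized (1-bit) activations need a peak of 192 bytes of temporaries, versus 6144 bytes with 32-bit floats, which is why binarization makes KB-scale devices feasible.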