
Eddie Yan

Researcher at University of Washington

Publications: 18
Citations: 2,258

Eddie Yan is an academic researcher at the University of Washington. His research topics include deep learning and compilers. He has an h-index of 11 and has co-authored 18 publications receiving 1,490 citations. His previous affiliations include Nvidia and the University of California, Los Angeles.

Papers
Proceedings Article

TVM: an automated end-to-end optimizing compiler for deep learning

TL;DR: TVM is a compiler that exposes graph-level and operator-level optimizations to provide performance portability for deep learning workloads across diverse hardware back-ends, such as mobile phones, embedded devices, and accelerators.
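A minimal sketch of the operator-level programming model TVM compiles, using TVM's tensor-expression (te) Python API; the module layout shown follows recent TVM releases and may differ in older or newer ones. Retargeting the same declaration only requires changing the target string.

    import tvm
    from tvm import te

    # Declare a vector-add computation symbolically (operator level).
    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.placeholder((n,), name="B")
    C = te.compute((n,), lambda i: A[i] + B[i], name="C")

    # Create a schedule and compile for a back-end; swapping the target
    # string (e.g. "cuda" instead of "llvm") retargets the same program.
    s = te.create_schedule(C.op)
    fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")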
Journal Article

Detection and spatial mapping of mercury contamination in water samples using a smart-phone.

TL;DR: A smart-phone-based hand-held platform that quantifies mercury(II) ions in water samples with parts-per-billion (ppb) sensitivity is introduced, and a mercury contamination map is generated by measuring water samples at over 50 locations in California (USA).
Journal Article

Imaging and sizing of single DNA molecules on a mobile phone.

TL;DR: This work creates a computational framework and a mobile-phone application, connected to a server back-end, for measuring the lengths of individual DNA molecules that are labeled and stretched using disposable chips.
Posted Content

TVM: End-to-End Optimization Stack for Deep Learning

TL;DR: TVM is proposed as an end-to-end optimization stack that exposes graph-level and operator-level optimizations to provide performance portability for deep learning workloads across diverse hardware back-ends; the paper discusses the optimization challenges specific to deep learning that TVM solves.
Posted Content

Learning to Optimize Tensor Programs

TL;DR: In this article, a learning-based framework is introduced to optimize tensor programs for deep learning workloads, such as matrix multiplication and high-dimensional convolution, which are key enablers of effective deep learning systems.
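A minimal sketch of how such a learned search over tensor programs is expressed in TVM's AutoTVM module, which accompanies this line of work: instead of a fixed schedule, the template declares a tunable space of loop-tiling factors that a learned cost model explores. The template name "example/matmul" is a hypothetical label chosen here for illustration.

    from tvm import te, autotvm

    @autotvm.template("example/matmul")  # hypothetical template name
    def matmul(N, L, M):
        # Declare a matrix multiplication at the operator level.
        A = te.placeholder((N, L), name="A")
        B = te.placeholder((L, M), name="B")
        k = te.reduce_axis((0, L), name="k")
        C = te.compute(
            (N, M),
            lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
            name="C",
        )
        s = te.create_schedule(C.op)

        # Declare a search space of tiling factors; the learning-based
        # tuner explores this space rather than a hand-written schedule.
        cfg = autotvm.get_config()
        y, x = s[C].op.axis
        cfg.define_split("tile_y", y, num_outputs=2)
        cfg.define_split("tile_x", x, num_outputs=2)
        yo, yi = cfg["tile_y"].apply(s, C, y)
        xo, xi = cfg["tile_x"].apply(s, C, x)
        s[C].reorder(yo, xo, yi, xi)
        return s, [A, B, C]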