
Hongxu Yin

Researcher at Princeton University

Publications: 50
Citations: 1745

Hongxu Yin is an academic researcher from Princeton University, contributing to research on computer science and artificial neural networks. The author has an h-index of 14 and has co-authored 41 publications receiving 835 citations. Previous affiliations of Hongxu Yin include Nanyang Technological University and Nvidia.

Papers
Proceedings Article

See through Gradients: Image Batch Recovery via GradInversion

TL;DR: GradInversion proposes a group consistency regularization framework in which multiple optimization agents, each starting from a different random seed, work together to find an enhanced reconstruction of the original data batch from its gradients, even for complex datasets, deep networks, and large batch sizes.
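
A minimal PyTorch sketch of the idea as summarized above: dummy batches from several random seeds are optimized to match the observed gradients, while a group-consistency term pulls each agent toward the group consensus. The names `recover_batch`, `num_seeds`, and `lambda_group` are illustrative assumptions, not the authors' API.

```python
# A sketch, not the authors' implementation. Assumes `model` (the network
# whose gradients were observed), `target_grads` (the observed per-parameter
# gradients), and the batch `labels` are available.
import torch

def recover_batch(model, target_grads, labels, shape,
                  num_seeds=4, steps=1000, lr=0.1, lambda_group=0.01):
    loss_fn = torch.nn.CrossEntropyLoss()
    # Multiple "agents": one dummy batch per random seed, optimized jointly.
    dummies = [torch.randn(shape, requires_grad=True) for _ in range(num_seeds)]
    opt = torch.optim.Adam(dummies, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        consensus = torch.stack([d.detach() for d in dummies]).mean(dim=0)
        total = 0.0
        for d in dummies:
            grads = torch.autograd.grad(loss_fn(model(d), labels),
                                        model.parameters(), create_graph=True)
            # Gradient matching: the dummy batch should reproduce the
            # observed gradients.
            grad_loss = sum(((g - t) ** 2).sum()
                            for g, t in zip(grads, target_grads))
            # Group consistency: pull every agent toward the consensus image.
            total = total + grad_loss + lambda_group * ((d - consensus) ** 2).sum()
        total.backward()
        opt.step()
    return torch.stack([d.detach() for d in dummies]).mean(dim=0)
```
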
Posted Content

Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion

TL;DR: Introduces DeepInversion, a new method for synthesizing images from the image distribution used to train a deep neural network; it optimizes the input while regularizing the distribution of intermediate feature maps using information stored in the batch normalization layers of the teacher.
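
A minimal PyTorch sketch of the batch-normalization prior described above: forward hooks compare per-batch feature statistics against the running statistics stored in the teacher's BN layers, and the summed penalty regularizes the synthesized input. The class name `BNStatsLoss` and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class BNStatsLoss:
    """Penalize divergence between current feature statistics and the
    running statistics stored in a trained (teacher) network's BN layers."""
    def __init__(self, model):
        self.losses = []
        for m in model.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        x = inputs[0]
        mean = x.mean(dim=[0, 2, 3])
        var = x.var(dim=[0, 2, 3], unbiased=False)
        self.losses.append(((mean - module.running_mean) ** 2).sum()
                           + ((var - module.running_var) ** 2).sum())

    def pop(self):
        total, self.losses = sum(self.losses), []
        return total

# Usage sketch: optimize random noise toward target labels under the BN prior.
# teacher = ...  # trained network in eval mode; targets = class labels
# bn_loss = BNStatsLoss(teacher)
# x = torch.randn(32, 3, 224, 224, requires_grad=True)
# opt = torch.optim.Adam([x], lr=0.05)
# for _ in range(2000):
#     opt.zero_grad()
#     ce = nn.functional.cross_entropy(teacher(x), targets)
#     loss = ce + 0.1 * bn_loss.pop()  # 0.1 is an assumed weighting
#     loss.backward(); opt.step()
```
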
Proceedings Article

ChamNet: Towards Efficient Network Design Through Platform-Aware Model Adaptation

TL;DR: The results show that adapting computation resources to building blocks is critical to model performance; the paper proposes a novel algorithm that searches for optimal architectures with the aid of efficient accuracy and resource (latency and/or energy) predictors.
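
A minimal sketch of predictor-guided search under a resource budget, as summarized above. Both predictors here are stand-ins (the paper learns efficient accuracy and latency/energy predictors), and random sampling stands in for the paper's search algorithm; all names and constants are assumptions for illustration.

```python
import random

def predict_accuracy(cfg):
    # Stand-in for a learned accuracy predictor over architecture configs.
    return sum(cfg["widths"]) / 1000.0

def predict_latency_ms(cfg):
    # Stand-in for a platform-specific latency lookup table or model.
    return 0.02 * sum(w * d for w, d in zip(cfg["widths"], cfg["depths"]))

def search(budget_ms=30.0, trials=5000, seed=0):
    rng = random.Random(seed)
    best, best_acc = None, -1.0
    for _ in range(trials):
        cfg = {"widths": [rng.choice([32, 64, 96, 128]) for _ in range(5)],
               "depths": [rng.choice([1, 2, 3, 4]) for _ in range(5)]}
        if predict_latency_ms(cfg) > budget_ms:
            continue  # honor the resource constraint
        acc = predict_accuracy(cfg)
        if acc > best_acc:
            best, best_acc = cfg, acc
    return best, best_acc

print(search())
```
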
Proceedings Article

Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion

TL;DR: DeepInversion uses a trained network (the teacher) to synthesize class-conditional input images starting from random noise, without using any additional information about the training dataset.
Posted Content

ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation

TL;DR: Chameleon is an efficient neural network (NN) architecture design methodology that honors given resource constraints by exploiting hardware traits and adapting computation resources to fit target latency and/or energy constraints, rather than by developing new building blocks or using computationally intensive reinforcement learning algorithms.