
Xiaoliang Dai

Researcher at Princeton University

Publications: 48
Citations: 2,938

Xiaoliang Dai is an academic researcher from Princeton University. The author has contributed to research in topics: computer science & artificial neural networks. The author has an h-index of 15 and has co-authored 37 publications receiving 1,587 citations. Previous affiliations of Xiaoliang Dai include Peking University & Facebook.

Papers
Proceedings ArticleDOI

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search

TL;DR: This work proposes a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding the need to enumerate and train individual architectures separately as in previous methods.
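The core idea of DNAS is to make the choice between candidate operations soft: each layer's output is a weighted sum over all candidates, with weights produced by a Gumbel-softmax over learnable architecture parameters, so gradients can flow to the architecture choice. A minimal numpy sketch of that relaxation (the three candidate "ops" and the single-layer setup are illustrative stand-ins, not the paper's actual search space):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Differentiable relaxation of sampling one candidate op."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

# Three hypothetical candidate ops for one layer, modeled as simple
# scalings purely for illustration (real candidates would be convs).
ops = [lambda x: x, lambda x: 1.5 * x, lambda x: 2.0 * x]

theta = np.zeros(3)   # learnable architecture parameters for this layer
x = np.ones(4)        # toy input feature vector

# Layer output is the weighted sum of all candidate ops, so gradients
# flow to theta through the soft weights instead of a hard argmax choice.
w = gumbel_softmax(theta)
out = sum(wi * op(x) for wi, op in zip(w, ops))
```

After training, the discrete architecture is read off by taking the argmax over each layer's weights.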
Posted Content

Visual Transformers: Token-based Image Representation and Processing for Computer Vision

TL;DR: This work represents an image as a set of visual tokens and applies visual transformers to densely model the relationships between visual semantic concepts; this token-based paradigm of image representation and processing drastically outperforms its convolutional counterparts on image classification and semantic segmentation.
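The pipeline described above has two stages: a tokenizer that pools a backbone's spatial feature map into a small number of visual tokens via spatial attention, and a transformer that relates those tokens to each other. A toy numpy sketch under assumed shapes (the projection `W_a` and all dimensions are hypothetical; a real model would learn them and use multi-head attention):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

HW, C, L = 16, 8, 4                # HW pixels, C channels, L tokens (L << HW)
feats = rng.normal(size=(HW, C))   # toy feature map from a conv backbone

# Tokenizer: each token is an attention-weighted average of pixel features.
W_a = rng.normal(size=(C, L))              # hypothetical learned projection
attn = softmax(feats @ W_a, axis=0)        # (HW, L) spatial attention per token
tokens = attn.T @ feats                    # (L, C) visual tokens

# Transformer step over tokens: self-attention relates semantic concepts
# to each other at token granularity rather than pixel granularity.
scores = softmax(tokens @ tokens.T / np.sqrt(C), axis=-1)
tokens = tokens + scores @ tokens          # (L, C)
```

Operating on a handful of tokens instead of thousands of pixels is what makes the attention stage cheap relative to pixel-level self-attention.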
Proceedings ArticleDOI

ChamNet: Towards Efficient Network Design Through Platform-Aware Model Adaptation

TL;DR: This work proposes a novel algorithm that searches for optimal architectures aided by efficient accuracy and resource (latency and/or energy) predictors; the results show that adapting computation resources to building blocks is critical to model performance.
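The predictor-aided search loop can be sketched as: propose a candidate adaptation, reject it if the resource predictor says it violates the budget, and otherwise score it with the cheap accuracy predictor instead of training it. A minimal sketch with made-up surrogate predictors (the width choices, predictor formulas, and budget are all illustrative assumptions, not ChamNet's actual models):

```python
import random

random.seed(0)

# Hypothetical cheap surrogates standing in for predictors trained offline.
def predicted_accuracy(widths):
    return sum(w ** 0.5 for w in widths)   # diminishing returns per block

def predicted_latency(widths):
    return sum(0.3 * w for w in widths)    # cost grows with block width

BUDGET = 6.0                               # assumed target latency budget
best, best_acc = None, -1.0
for _ in range(200):                       # random search over adaptations
    cand = [random.choice([1, 2, 4, 8]) for _ in range(4)]  # per-block widths
    if predicted_latency(cand) > BUDGET:   # resource predictor as a filter
        continue
    acc = predicted_accuracy(cand)         # no training needed to score
    if acc > best_acc:
        best, best_acc = cand, acc
```

Because every evaluation is a predictor lookup rather than a training run, thousands of candidates can be screened per platform.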
Proceedings ArticleDOI

FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions

TL;DR: This work proposes DMaskingNAS, which uses a masking mechanism for feature map reuse so that memory and computational costs stay nearly constant as the search space expands, and employs effective shape propagation to maximize per-FLOP or per-parameter accuracy.
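The masking trick for the channel dimension works by running one shared convolution at the maximum width and multiplying its output by a weighted sum of binary channel masks, one mask per candidate width, so adding more width candidates adds only masks, not extra feature maps. A toy numpy sketch (the candidate widths and architecture weights below are illustrative assumptions):

```python
import numpy as np

C_MAX = 8
candidates = [2, 4, 8]                # candidate channel counts in the space
alpha = np.array([0.2, 0.3, 0.5])     # softmax-normalized arch weights (toy)

# One binary mask per candidate width over a shared max-width feature map:
# mask for width c keeps the first c channels and zeros the rest.
masks = np.stack([np.arange(C_MAX) < c for c in candidates]).astype(float)

# Single aggregated mask: the conv runs once at C_MAX channels, so memory
# and compute stay nearly constant as more widths join the search space.
agg_mask = alpha @ masks              # (C_MAX,)

feat = np.ones(C_MAX)                 # toy feature map (channel axis only)
out = feat * agg_mask                 # masked output, gradients reach alpha
```

Gradients with respect to `alpha` then drive the search toward the channel count that best trades accuracy against cost.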
Posted Content

ChamNet: Towards Efficient Network Design through Platform-Aware Model Adaptation

TL;DR: Chameleon is an efficient neural network (NN) architecture design methodology that honors given resource constraints: rather than developing new building blocks or using computationally intensive reinforcement learning algorithms, it exploits hardware traits and adapts computation resources to fit target latency and/or energy constraints.