
An Xiao

Researcher at Tsinghua University

Publications: 21
Citations: 618

An Xiao is an academic researcher from Tsinghua University. The author has contributed to research on topics including Transformer (machine learning model) and Biological network. The author has an h-index of 10 and has co-authored 21 publications receiving 418 citations. Previous affiliations of An Xiao include Huawei.

Papers
Journal ArticleDOI

NeoDTI: neural integration of neighbor information from a heterogeneous network for discovering new drug-target interactions.

TL;DR: A new nonlinear end‐to‐end learning model is developed that integrates diverse information from heterogeneous network data and automatically learns topology‐preserving representations of drugs and targets to facilitate DTI prediction; the results suggest that NeoDTI can offer a powerful and robust tool for drug development and drug repositioning.
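The neighbor-integration idea behind this summary can be sketched as one round of message passing over a heterogeneous network. This is a simplified illustration, not NeoDTI's actual implementation; the per-edge-type weight matrices and the tanh update are hypothetical stand-ins for the model's learned projections.

```python
import numpy as np

def aggregate_neighbors(features, adjacency_by_type, weights):
    """One round of neighbor-information aggregation on a heterogeneous
    network (a simplified sketch; `weights` maps each edge type to a
    hypothetical learned projection matrix)."""
    # Output has one row per node, projected to the embedding dimension.
    agg = np.zeros_like(features @ next(iter(weights.values())))
    for etype, adj in adjacency_by_type.items():
        deg = adj.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0                  # avoid divide-by-zero for isolated nodes
        norm_adj = adj / deg                 # row-normalized neighbor averaging
        agg += norm_adj @ features @ weights[etype]
    return np.tanh(agg)                      # nonlinear, topology-aware update

# Toy usage: 3 nodes, 4 input features, 2 edge types, 2-dim embeddings.
feats = np.ones((3, 4))
adjs = {"drug-target": np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float),
        "drug-drug": np.eye(3)}
ws = {"drug-target": np.full((4, 2), 0.1), "drug-drug": np.full((4, 2), 0.1)}
embeddings = aggregate_neighbors(feats, adjs, ws)
```

Each edge type contributes its own projection of the averaged neighbor features, so information from different network views is integrated into one embedding per node.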
Posted Content

Transformer in Transformer

TL;DR: Transformer iN Transformer (TNT) is a new kind of neural architecture that encodes the input data as powerful features via the attention mechanism: the visual transformer first divides the input image into several local patches and then computes both their representations and the relationships between them.
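The inner/outer structure described above can be sketched with a minimal two-level attention pass: an inner attention over the pixels within each patch, and an outer attention over patch-level embeddings. This is a bare sketch with no learned projections or multi-head machinery, so it only illustrates the data flow, not the actual TNT block.

```python
import numpy as np

def self_attention(x):
    # Minimal single-head self-attention without learned Q/K/V projections
    # (a simplification of the real transformer block).
    scores = x @ x.T / np.sqrt(x.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)                  # softmax over keys
    return w @ x

def tnt_block(patches):
    """patches: (num_patches, pixels_per_patch, dim).
    Inner attention models pixels within each patch; outer attention
    models relationships between patch-level embeddings."""
    inner = np.stack([self_attention(p) for p in patches])  # pixel-level features
    patch_embed = inner.mean(axis=1)                        # pool pixels -> patch tokens
    outer = self_attention(patch_embed)                     # patch-level relationships
    return inner, outer

# Toy usage: 4 patches of 9 "pixels" with 8-dim features.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 9, 8))
inner, outer = tnt_block(x)
```

The key design point the summary highlights is that attention runs at both granularities, so fine pixel-level detail inside a patch is preserved alongside global patch-to-patch relations.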
Posted Content

Weight-Sharing Neural Architecture Search: A Battle to Shrink the Optimization Gap

TL;DR: A literature review of the application of NAS to computer vision problems is provided, and existing approaches are summarized into several categories according to their efforts in bridging the optimization gap.
Posted Content

A Survey on Visual Transformer

TL;DR: In this paper, a review of transformer-based models for computer vision tasks is presented, covering backbone networks, high/mid-level vision, low-level image processing, and video processing.
Book ChapterDOI

Circumventing Outliers of AutoAugment with Knowledge Distillation

TL;DR: It is revealed that AutoAugment may remove part of the discriminative information from a training image, so insisting on the ground-truth label is no longer the best option; knowledge distillation, which refers to the output of a teacher model to guide network training, is therefore used.
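The teacher-guided training described above can be sketched as a standard distillation loss that blends hard-label cross-entropy with agreement to the teacher's softened output distribution. The `alpha` and `temperature` hyperparameters here are hypothetical, and this is a generic knowledge-distillation sketch rather than the paper's exact objective.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      alpha=0.5, temperature=4.0):
    """Blend hard-label cross-entropy with KL divergence to the teacher's
    softened predictions (hypothetical alpha/temperature values)."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), scaled by T^2 as in standard distillation.
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(alpha * hard + (1 - alpha) * (temperature ** 2) * kl)

# Toy usage: one 3-class example where student and teacher agree.
s = np.array([[2.0, 0.5, 0.1]])
t = s.copy()
labels = np.array([0])
loss = distillation_loss(s, t, labels)
```

When augmentation has distorted the image, the soft teacher targets carry more reliable supervision than the possibly-invalid ground-truth label, which is the mechanism the TL;DR points at.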