
An Xiao

Researcher at Tsinghua University

Publications: 21
Citations: 618

An Xiao is an academic researcher from Tsinghua University. The author has contributed to research on topics including Transformer (machine learning model) and Biological network. The author has an h-index of 10 and has co-authored 21 publications receiving 418 citations. Previous affiliations of An Xiao include Huawei.

Papers
Posted Content

GhostSR: Learning Ghost Features for Efficient Image Super-Resolution.

TL;DR: This paper proposes using the shift operation to generate the redundant features (i.e., ghost features) in a single image super-resolution (SISR) system.
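The core idea above, generating cheap "ghost" feature maps by shifting existing ones instead of computing extra convolutions, can be sketched as follows. This is a simplified NumPy illustration (the function name and the wrap-around `np.roll` shift are stand-ins chosen here, not the paper's exact operator):

```python
import numpy as np

def ghost_features_via_shift(intrinsic, shift=1):
    """Generate 'ghost' feature maps by spatially shifting the intrinsic
    features, then concatenate them along the channel axis.

    intrinsic: array of shape (C, H, W) from an ordinary convolution.
    Returns an array of shape (2C, H, W): intrinsic + shifted copies.
    """
    # np.roll wraps around at the borders; it stands in here for a
    # padded spatial shift, which is essentially free at inference time.
    ghosts = np.roll(intrinsic, shift=(shift, shift), axis=(1, 2))
    return np.concatenate([intrinsic, ghosts], axis=0)

x = np.arange(2 * 4 * 4, dtype=np.float32).reshape(2, 4, 4)
y = ghost_features_via_shift(x)
print(y.shape)  # (4, 4, 4)
```

The appeal of the design is that the shift adds no multiply–accumulate operations, so half of the output channels come almost for free.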
Patent

Method and device for training image processing model

TL;DR: The proposed training scheme obtains soft labels for the enhanced image and then guides the training of the image processing model with those soft labels, so as to improve the performance of the model.
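Guiding training with soft labels is in the spirit of knowledge distillation: a teacher's temperature-softened predictions on the enhanced image serve as targets for the model being trained. A generic NumPy sketch of such a soft-label loss (not the patent's exact formulation; the function names and temperature value are illustrative assumptions):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student's predictions against the teacher's
    soft labels -- a standard distillation-style objective."""
    soft_targets = softmax(teacher_logits, temperature)  # the soft labels
    log_probs = np.log(softmax(student_logits, temperature))
    return -(soft_targets * log_probs).sum(axis=-1).mean()

teacher = np.array([[2.0, 0.5, -1.0]])   # teacher logits on enhanced image
student = np.array([[1.5, 0.2, -0.5]])   # model-in-training logits
loss = soft_label_loss(student, teacher)
```

Unlike hard one-hot labels, the soft targets encode the teacher's relative confidence across classes, which is the extra signal that guides the student.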
Posted Content (DOI)

Rationalizing Translation Elongation by Reinforcement Learning

TL;DR: It is demonstrated that RiboRL outperforms other state-of-the-art methods in predicting ribosome densities, and that the reinforcement-learning-based strategy generates more informative features for the prediction task than other commonly used attribution methods in deep learning.
Journal Article (DOI)

Riboexp: an interpretable reinforcement learning framework for ribosome density modeling.

TL;DR: This work develops Riboexp, a novel interpretable deep reinforcement-learning-based framework that models the determinants of the uneven distribution of ribosomes on mRNA transcripts during translation elongation.
Posted Content

Augmented Shortcuts for Vision Transformers

TL;DR: This paper theoretically analyzes the feature-collapse phenomenon and studies the relationship between shortcuts and feature diversity in vision transformers, and presents an augmented shortcut scheme that inserts additional paths with learnable parameters in parallel with the original shortcuts.
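The augmented-shortcut idea described above, adding learnable parallel paths alongside the identity shortcut of a transformer block, can be sketched minimally in NumPy (the function name, random weights, and per-path form `x @ theta` are simplifying assumptions, not the paper's exact parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)

def augmented_shortcut_block(x, attn_out, thetas):
    """Combine a module's output with the identity shortcut plus extra
    learnable linear paths running in parallel (augmented shortcuts).

    x, attn_out: arrays of shape (tokens, dim).
    thetas: list of (dim, dim) learnable matrices, one per extra path.
    """
    out = attn_out + x                 # original residual shortcut
    for theta in thetas:               # augmented parallel paths
        out = out + x @ theta          # each path: a learnable projection
    return out

tokens, dim = 4, 8
x = rng.standard_normal((tokens, dim))
attn_out = rng.standard_normal((tokens, dim))
# Small random matrices stand in for trained augmented-shortcut weights.
thetas = [rng.standard_normal((dim, dim)) * 0.01 for _ in range(2)]
y = augmented_shortcut_block(x, attn_out, thetas)
print(y.shape)  # (4, 8)
```

Because the extra paths transform the input rather than reproducing it verbatim, they inject additional feature diversity that a plain identity shortcut cannot, which is the mechanism the paper's collapse analysis motivates.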