SciSpace (formerly Typeset)

Xianbiao Qi

Researcher at Beijing University of Posts and Telecommunications

Publications -  52
Citations -  1807

Xianbiao Qi is an academic researcher from Beijing University of Posts and Telecommunications. The author has contributed to research on topics including feature extraction and computer science, has an h-index of 12, and has co-authored 43 publications receiving 1005 citations. Previous affiliations of Xianbiao Qi include the University of Oulu and Shenzhen University.

Papers
Journal ArticleDOI

DeepCrack: Learning Hierarchical Convolutional Features for Crack Detection

TL;DR: DeepCrack, an end-to-end trainable deep convolutional neural network for automatic crack detection, learns high-level hierarchical features for crack representation and outperforms the current state-of-the-art methods.
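The hierarchical-feature idea can be illustrated with a toy sketch: crack score maps predicted at several scales are upsampled to the finest resolution and fused. This is a generic multi-scale fusion sketch with assumed function names, not the actual DeepCrack architecture.

```python
import numpy as np

def upsample_nearest(score, factor):
    """Nearest-neighbor upsampling of a 2-D score map by an integer factor."""
    return np.repeat(np.repeat(score, factor, axis=0), factor, axis=1)

def fuse_hierarchical(score_maps):
    """Fuse per-scale crack score maps (finest first) into one full-resolution
    map by upsampling each to the finest resolution and averaging."""
    fused = np.zeros_like(score_maps[0], dtype=float)
    for s in score_maps:
        factor = score_maps[0].shape[0] // s.shape[0]
        fused += upsample_nearest(s, factor)
    return fused / len(score_maps)
```

Averaging is only one plausible fusion rule; a learned 1x1 convolution over the stacked maps is the kind of fusion such networks typically use.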
Journal ArticleDOI

Pairwise Rotation Invariant Co-Occurrence Local Binary Pattern

TL;DR: This work formally introduces a Pairwise Transform Invariance (PTI) principle, proposes a novel Pairwise Rotation Invariant Co-occurrence Local Binary Pattern (PRICoLBP) feature, and extends it to incorporate multi-scale, multi-orientation, and multi-channel information.
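For context, the base descriptor being extended is the local binary pattern. A minimal sketch of the standard 8-neighbor LBP code for a single pixel (this is plain LBP, not the pairwise co-occurrence construction the paper builds on top of it):

```python
import numpy as np

def lbp_code(patch):
    """Standard 8-neighbor LBP code for the center pixel of a 3x3 patch:
    each neighbor >= center contributes one bit, read clockwise from top-left."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << bit
    return code
```

PRICoLBP then encodes co-occurring LBP pairs so that the joint code stays invariant when the image rotates, which a single per-pixel code does not guarantee.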
Proceedings Article

DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR

TL;DR: A novel query formulation using dynamic anchor boxes for DETR (DEtection TRansformer) is proposed, which directly uses box coordinates as queries in Transformer decoders and dynamically updates them layer by layer, offering a deeper understanding of the role of queries in DETR.
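The layer-by-layer update can be sketched as follows. This is a toy numpy illustration with hypothetical per-layer offset heads; in the real model the offsets are predicted by each decoder layer from its features.

```python
import numpy as np

def refine_anchors(anchors, offset_fns):
    """Iteratively refine anchor-box queries (cx, cy, w, h) layer by layer:
    each decoder layer predicts a delta that is added to the current anchors,
    so later layers attend around progressively better boxes."""
    boxes = anchors.copy()
    history = [boxes.copy()]
    for predict_offset in offset_fns:  # one hypothetical head per decoder layer
        boxes = boxes + predict_offset(boxes)
        history.append(boxes.copy())
    return boxes, history
```

With dummy offset functions that each close half the gap to a target box, three "layers" move an initial anchor most of the way to the target, mimicking the coarse-to-fine refinement the formulation enables.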
Proceedings ArticleDOI

Self-Supervised Convolutional Subspace Clustering Network

TL;DR: An end-to-end trainable framework is proposed that combines the feature learning module and the self-expression module into a joint optimization framework, with a dual self-supervision that exploits the output of spectral clustering to supervise the training of both modules.
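The self-expression idea underlying such methods (standard in subspace clustering) is that each sample is represented as a combination of the other samples, X ≈ XC, and the symmetrized magnitudes |C| + |C|ᵀ serve as an affinity for spectral clustering. A closed-form ridge-regularized sketch, with assumed names (the paper learns this inside a network rather than solving it in closed form):

```python
import numpy as np

def self_expression_affinity(X, lam=0.1):
    """Closed-form ridge-regularized self-expression: find C minimizing
    ||X - XC||_F^2 + lam * ||C||_F^2, then build the symmetric affinity
    |C| + |C|^T. X holds one sample per column."""
    n = X.shape[1]
    gram = X.T @ X                                    # n x n Gram matrix
    C = np.linalg.solve(gram + lam * np.eye(n), gram)
    np.fill_diagonal(C, 0.0)                          # no self-representation
    return np.abs(C) + np.abs(C).T
```

For samples drawn from independent subspaces, the affinity is large within a subspace and (near) zero across subspaces, which is what makes the subsequent spectral clustering step work.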
Journal ArticleDOI

Dynamic texture and scene classification by transferring deep image features

TL;DR: Transferred ConvNet Feature (TCoF) applies a well-trained convolutional neural network (ConvNet) as a feature extractor to obtain mid-level features from each frame, and then forms the video-level representation by concatenating the first- and second-order statistics over the mid-level features.
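The video-level pooling step can be sketched directly; here "first- and second-order statistics" are taken to mean the per-dimension mean and variance over frames (an assumption for illustration), with the ConvNet feature extractor stubbed out.

```python
import numpy as np

def video_representation(frame_features):
    """Form a video-level descriptor from per-frame ConvNet features by
    concatenating first-order (mean) and second-order (variance) statistics
    over the temporal axis."""
    F = np.asarray(frame_features)  # shape: (num_frames, feature_dim)
    return np.concatenate([F.mean(axis=0), F.var(axis=0)])
```

The resulting descriptor has twice the per-frame feature dimension and is order-invariant over frames, which suits dynamic texture and scene classification where appearance statistics matter more than exact frame ordering.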