Guo-Jun Qi

Researcher at Huawei

Publications: 263
Citations: 12,701

Guo-Jun Qi is an academic researcher at Huawei. The author has contributed to research topics including computer science and deep learning, has an h-index of 53, and has co-authored 248 publications receiving 9,928 citations. Previous affiliations of Guo-Jun Qi include the University of Science and Technology of China.

Papers
Proceedings Article

Heterogeneous Network Embedding via Deep Architectures

TL;DR: It is demonstrated that the rich content and linkage information in a heterogeneous network can be captured by a multi-resolution deep embedding function, so that similarities among cross-modal data can be measured directly in a common embedding space.
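
Below is a minimal, hypothetical sketch (not the paper's code) of the general idea: two modality-specific encoders map image and text features into a common embedding space, and a loss over the network's links pulls linked cross-modal pairs together. The encoder sizes, margin, and feature dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps one modality's raw features into the shared embedding space."""
    def __init__(self, in_dim, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

def linkage_loss(z_a, z_b, linked, margin=0.5):
    """Pull embeddings of linked cross-modal pairs together, push unlinked apart."""
    dist = (z_a - z_b).pow(2).sum(dim=-1)
    return torch.where(linked.bool(), dist, (margin - dist).clamp(min=0)).mean()

# toy usage: 4 image nodes (2048-d features) paired with 4 text nodes (300-d)
img_enc, txt_enc = ModalityEncoder(2048), ModalityEncoder(300)
imgs, txts = torch.randn(4, 2048), torch.randn(4, 300)
linked = torch.tensor([1, 1, 0, 0])   # which cross-modal pairs are linked
loss = linkage_loss(img_enc(imgs), txt_enc(txts), linked)
loss.backward()
```
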
Proceedings Article

Correlative multi-label video annotation

TL;DR: A third paradigm is proposed that simultaneously classifies concepts and models the correlations between them in a single step, using a novel Correlative Multi-Label (CML) framework; it is compared with state-of-the-art approaches from the first and second paradigms on the widely used TRECVID data set.
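
A toy NumPy sketch of the underlying idea, assuming a small concept vocabulary: each candidate label vector is scored with unary feature-concept terms plus pairwise concept-correlation terms, so classification and correlation modeling happen in one step. The random weights and exhaustive search are illustrative assumptions, not the paper's actual training or inference procedure.

```python
import itertools
import numpy as np

def score(x, y, W_unary, W_pair):
    """x: (d,) features; y: (k,) 0/1 labels; joint unary + pairwise score."""
    s = sum(W_unary[j] @ x for j in range(len(y)) if y[j] == 1)
    s += sum(W_pair[i, j] for i in range(len(y)) for j in range(i + 1, len(y))
             if y[i] == 1 and y[j] == 1)
    return s

def predict(x, W_unary, W_pair, k):
    """Exhaustive search over label vectors (feasible only for small k)."""
    candidates = list(itertools.product([0, 1], repeat=k))
    return max(candidates, key=lambda y: score(x, np.array(y), W_unary, W_pair))

rng = np.random.default_rng(0)
d, k = 16, 4                        # feature dimension, number of concepts
W_unary = rng.normal(size=(k, d))   # per-concept weights
W_pair = rng.normal(size=(k, k))    # pairwise correlation weights
print(predict(rng.normal(size=d), W_unary, W_pair, k))
```
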
Journal Article

Unified Video Annotation via Multigraph Learning

TL;DR: This paper shows that various crucial factors in video annotation, including multiple modalities, multiple distance functions, and temporal consistency, all correspond to different relationships among video units and can therefore be represented by different graphs; it proposes optimized multigraph-based semi-supervised learning (OMG-SSL), which tackles these factors simultaneously in a unified scheme.
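
A rough illustration of the multigraph idea, assuming a handful of video shots and two affinity graphs (visual similarity and temporal adjacency): each cue contributes a normalized graph Laplacian, the Laplacians are fused with weights, and labels are propagated by solving one regularized linear system. The fusion weights and toy data are assumptions for illustration only, not the optimized scheme of the paper.

```python
import numpy as np

def normalized_laplacian(W):
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt

def multigraph_propagate(affinities, weights, y, mu=1.0):
    """affinities: list of (n, n) graphs; y: labels with 0 for unlabeled shots."""
    L = sum(a * normalized_laplacian(W) for a, W in zip(weights, affinities))
    return np.linalg.solve(np.eye(len(y)) + mu * L, y.astype(float))

# toy example: 6 video shots, two graphs (visual similarity, temporal adjacency)
rng = np.random.default_rng(0)
W_visual = rng.random((6, 6)); W_visual = (W_visual + W_visual.T) / 2
W_temporal = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
y = np.array([1, 0, 0, 0, 0, -1])       # one positive, one negative, rest unlabeled
print(multigraph_propagate([W_visual, W_temporal], [0.7, 0.3], y))
```
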
Proceedings Article

Differential Recurrent Neural Networks for Action Recognition

TL;DR: In this article, a differential recurrent neural network (dRNN) is proposed to learn complex time-series representations via high-order derivatives of states: the change in information gain caused by salient motions between successive frames is quantified by the Derivative of States (DoS), and the proposed LSTM model is therefore termed the differential RNN.
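
A hypothetical PyTorch sketch of the gist, not the authors' implementation: an LSTM-style cell whose gates are additionally conditioned on a first-order Derivative of States (the difference of the internal state between successive steps), so salient frame-to-frame changes influence the gating. The dimensions and the single derivative order are illustrative choices.

```python
import torch
import torch.nn as nn

class DifferentialLSTMCell(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        # gates are conditioned on [input, previous hidden state, DoS]
        self.gates = nn.Linear(in_dim + 2 * hidden_dim, 4 * hidden_dim)

    def forward(self, x, h, c, c_prev):
        dos = c - c_prev                              # first-order Derivative of States
        z = self.gates(torch.cat([x, h, dos], dim=-1))
        i, f, o, g = z.chunk(4, dim=-1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c_new)
        return h_new, c_new

# toy usage over a short sequence of frame features
cell = DifferentialLSTMCell(in_dim=32, hidden_dim=64)
x_seq = torch.randn(10, 32)                           # 10 frames, 32-d features each
h = c = c_prev = torch.zeros(64)
for x in x_seq:
    h, c_new = cell(x, h, c, c_prev)
    c_prev, c = c, c_new
```
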
Proceedings Article

Task Agnostic Meta-Learning for Few-Shot Learning

TL;DR: Task-Agnostic Meta-Learning (TAML) is an entropy-based approach that meta-learns an unbiased initial model with the largest uncertainty over the output labels, preventing it from over-performing on particular classification tasks.
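
A simplified, first-order PyTorch sketch of the entropy-based idea, with toy data and a hypothetical task sampler: before adapting to each sampled task, the entropy of the initial model's predictions is maximized so that the initial model stays unbiased, and a MAML-style inner step followed by a query loss supplies the meta-gradient. The learning rates, entropy weight, network, and first-order approximation are assumptions, not the paper's exact procedure.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def entropy_of(logits):
    """Mean prediction entropy; larger means a less biased (more uncertain) model."""
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1).mean()

def sample_task():
    """Stand-in for a 5-way few-shot episode sampler (random toy data here)."""
    return (torch.randn(25, 20), torch.randint(0, 5, (25,)),   # support set
            torch.randn(25, 20), torch.randint(0, 5, (25,)))   # query set

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, entropy_weight = 0.1, 0.1

for step in range(100):
    x_s, y_s, x_q, y_q = sample_task()
    meta_opt.zero_grad()

    # entropy term on the INITIAL model: maximizing it discourages the
    # initialization from over-performing on (being biased toward) any one task
    (-entropy_weight * entropy_of(model(x_s))).backward()

    # one first-order inner-loop adaptation step on a copy of the initial model
    learner = copy.deepcopy(model)
    inner_loss = F.cross_entropy(learner(x_s), y_s)
    grads = torch.autograd.grad(inner_loss, learner.parameters())
    with torch.no_grad():
        for p, g in zip(learner.parameters(), grads):
            p -= inner_lr * g

    # query loss of the adapted copy; its gradients approximate the meta-gradient
    F.cross_entropy(learner(x_q), y_q).backward()
    for p, lp in zip(model.parameters(), learner.parameters()):
        p.grad += lp.grad

    meta_opt.step()
```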