Guo-Jun Qi

Researcher at Huawei

Publications -  263
Citations -  12701

Guo-Jun Qi is an academic researcher from Huawei. The author has contributed to research in the topics of Computer science and Deep learning, has an h-index of 53, and has co-authored 248 publications receiving 9928 citations. Previous affiliations of Guo-Jun Qi include China University of Science and Technology and the University of Science and Technology of China.

Papers
Journal Article

A Temporal Order Modeling Approach to Human Action Recognition from Multimodal Sensor Data

TL;DR: A novel temporal order modeling approach to human action recognition is proposed. It explores subspace projections to extract latent temporal patterns from different human action sequences, and it introduces a sequential optimization algorithm that learns projections preserving the pairwise label similarity of the action sequences.
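A rough sketch of the general idea behind this entry, assuming a learned linear projection into a latent subspace and a pairwise contrastive-style loss over sequence descriptors; the class names, dimensions, pooling, and loss form below are illustrative assumptions, not the paper's exact formulation:

```python
# Illustrative sketch only: learn a projection of multimodal sensor sequences
# into a latent subspace so that sequences with the same action label stay
# close and differently labeled sequences are pushed apart.
import torch
import torch.nn as nn

class SubspaceProjector(nn.Module):
    def __init__(self, feat_dim: int, sub_dim: int):
        super().__init__()
        # Linear projection into the latent temporal subspace (assumed form).
        self.proj = nn.Linear(feat_dim, sub_dim, bias=False)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (T, feat_dim) frame-level sensor features for one action sequence.
        z = self.proj(seq)              # (T, sub_dim) latent temporal pattern
        return z.mean(dim=0)            # pool over time into one descriptor

def pairwise_similarity_loss(d_a, d_b, same_label: bool, margin: float = 1.0):
    # Pull same-label descriptors together, push different-label ones apart.
    dist = torch.norm(d_a - d_b)
    return dist ** 2 if same_label else torch.clamp(margin - dist, min=0.0) ** 2

# Toy usage with random data standing in for wearable-sensor features.
model = SubspaceProjector(feat_dim=64, sub_dim=8)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
seq_a, seq_b = torch.randn(50, 64), torch.randn(50, 64)
loss = pairwise_similarity_loss(model(seq_a), model(seq_b), same_label=True)
loss.backward()
opt.step()
```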
Journal Article

Improved Techniques for Model Inversion Attacks

TL;DR: A variety of new techniques that can significantly boost the performance of model inversion (MI) attacks against deep neural networks (DNNs) are presented, along with a proposal to model the private data distribution in order to better reconstruct representative data points.
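For orientation, here is a minimal sketch of a generic model inversion attack of this flavor, in which a generator trained on public data stands in for the private data distribution and its latent code is optimized against the target classifier; the function names, hyperparameters, and objective are placeholder assumptions rather than the paper's specific techniques:

```python
# Illustrative sketch of a generic model inversion attack: a generator models a
# plausible data distribution, and its latent code is optimized so the target
# classifier assigns high confidence to a chosen class.
import torch
import torch.nn.functional as F

def invert_class(generator, target_model, target_class: int,
                 latent_dim: int = 100, steps: int = 500, lr: float = 0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)                        # candidate reconstruction
        logits = target_model(x)
        # Maximize the target class probability (minimize cross-entropy).
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        opt.step()
    return generator(z).detach()                # representative data point
```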
Proceedings Article

Concurrence-Aware Long Short-Term Sub-Memories for Person-Person Action Recognition

TL;DR: Wang et al. propose a Concurrence-Aware Long Short-Term Memory (Co-LSTM) to model the long-term inter-related dynamics between two interacting people, based on the bounding boxes covering each person.
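A minimal sketch of the general two-person recurrent setup, assuming one LSTM cell per person over bounding-box features with the two hidden states fused for interaction classification; the fusion by concatenation and the shapes are assumptions, not the Co-LSTM sub-memory design itself:

```python
# Illustrative two-person sketch: one LSTM cell per person processes features
# extracted from that person's bounding box, and the two hidden states are
# combined to recognize the person-person interaction.
import torch
import torch.nn as nn

class TwoPersonLSTM(nn.Module):
    def __init__(self, feat_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.cell_a = nn.LSTMCell(feat_dim, hidden)
        self.cell_b = nn.LSTMCell(feat_dim, hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, seq_a: torch.Tensor, seq_b: torch.Tensor):
        # seq_a, seq_b: (T, batch, feat_dim) bounding-box features per person.
        T, B, _ = seq_a.shape
        h_a = c_a = seq_a.new_zeros(B, self.cell_a.hidden_size)
        h_b = c_b = seq_b.new_zeros(B, self.cell_b.hidden_size)
        for t in range(T):
            h_a, c_a = self.cell_a(seq_a[t], (h_a, c_a))
            h_b, c_b = self.cell_b(seq_b[t], (h_b, c_b))
        # Concatenate the two people's final states for interaction recognition.
        return self.classifier(torch.cat([h_a, h_b], dim=-1))

# Toy usage: 20 frames, batch of 4, 128-d features per person, 8 interaction classes.
model = TwoPersonLSTM(feat_dim=128, hidden=64, num_classes=8)
logits = model(torch.randn(20, 4, 128), torch.randn(20, 4, 128))
```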
Proceedings Article

Previewer for Multi-Scale Object Detector

TL;DR: A novel light-weight previewer block is proposed, which previews the objectness probability for the potential regression region of each prior box, using stronger features with larger receptive fields and more contextual information to make better predictions.
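As a rough illustration, the sketch below shows a light-weight head that predicts a per-anchor objectness probability and uses it to gate per-anchor class scores; the layer shapes and the multiplicative gating are assumptions standing in for the paper's previewer design:

```python
# Illustrative sketch: a light-weight head predicts an objectness probability
# for each prior box from a feature map with a larger receptive field, and that
# probability down-weights the per-anchor classification scores of the detector.
import torch
import torch.nn as nn

class PreviewerHead(nn.Module):
    def __init__(self, in_channels: int, num_anchors: int, num_classes: int):
        super().__init__()
        # Objectness previewer: one logit per prior box at each location.
        self.objectness = nn.Conv2d(in_channels, num_anchors, kernel_size=3, padding=1)
        # Regular per-anchor classification head, to be gated by objectness.
        self.cls = nn.Conv2d(in_channels, num_anchors * num_classes, kernel_size=3, padding=1)
        self.num_anchors, self.num_classes = num_anchors, num_classes

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        B, _, H, W = feat.shape
        obj = torch.sigmoid(self.objectness(feat))            # (B, A, H, W)
        cls = self.cls(feat).view(B, self.num_anchors, self.num_classes, H, W)
        # Suppress class scores for prior boxes the previewer deems background.
        return cls * obj.unsqueeze(2)

# Toy usage on a single 256-channel feature map.
head = PreviewerHead(in_channels=256, num_anchors=6, num_classes=21)
scores = head(torch.randn(1, 256, 38, 38))
```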
Posted Content

Prior-Knowledge and Attention-based Meta-Learning for Few-Shot Learning

TL;DR: A novel meta-learning paradigm with three developments that introduce an attention mechanism and prior knowledge into meta-learning is presented, alleviating the meta-learner's few-shot cognition burden.
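To make the attention idea concrete, here is a small few-shot sketch in which class prototypes are computed as query-conditioned, attention-weighted averages of support embeddings; this prototypical-style formulation and all names below are illustrative assumptions, not the paper's exact method or its use of prior knowledge:

```python
# Illustrative few-shot sketch: class prototypes are attention-weighted averages
# of support embeddings, where the query provides the attention so that more
# relevant support examples dominate each prototype.
import torch
import torch.nn.functional as F

def attentive_prototypes(support: torch.Tensor, labels: torch.Tensor,
                         query: torch.Tensor, num_classes: int) -> torch.Tensor:
    # support: (N, D) support embeddings, labels: (N,), query: (D,)
    protos = []
    for c in range(num_classes):
        members = support[labels == c]                       # (n_c, D)
        attn = F.softmax(members @ query, dim=0)             # query-conditioned weights
        protos.append((attn.unsqueeze(1) * members).sum(0))  # weighted prototype
    return torch.stack(protos)                               # (num_classes, D)

# Toy 3-way 5-shot episode with 32-d embeddings.
support = torch.randn(15, 32)
labels = torch.arange(3).repeat_interleave(5)
query = torch.randn(32)
logits = -torch.cdist(query.unsqueeze(0), attentive_prototypes(support, labels, query, 3))
pred = logits.argmax(dim=1)                                  # predicted class for the query
```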