scispace - formally typeset

Qilong Wang

Researcher at Tianjin University

Publications - 92
Citations - 5296

Qilong Wang is an academic researcher from Tianjin University. The author has contributed to research in the topics of Computer science & Covariance. The author has an h-index of 21 and has co-authored 56 publications receiving 2056 citations. Previous affiliations of Qilong Wang include Dalian University of Technology & Hong Kong Polytechnic University.

Papers
Proceedings ArticleDOI

ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

TL;DR: The Efficient Channel Attention (ECA) module, as discussed by the authors, adopts a local cross-channel interaction strategy without dimensionality reduction that can be efficiently implemented via 1D convolution, involving only a handful of parameters while bringing a clear performance gain.
Posted Content

ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

TL;DR: This paper proposes an Efficient Channel Attention (ECA) module, which involves only a handful of parameters while bringing a clear performance gain, and develops a method to adaptively select the kernel size of the 1D convolution, determining the coverage of local cross-channel interaction.
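The two ideas in the summaries above can be sketched in plain NumPy: an adaptive rule mapping the channel count to an odd 1D kernel size, following the formula in the ECA-Net paper with its default gamma = 2 and b = 1, and a channel-attention gate built from a 1D convolution over the pooled channel vector. The function names are illustrative, and the convolution weights shown here would be learned in a real network.

```python
import numpy as np

def adaptive_kernel_size(channels, gamma=2, b=1):
    """Odd kernel size k = psi(C) from the ECA-Net paper (defaults gamma=2, b=1)."""
    t = int(abs((np.log2(channels) + b) / gamma))
    return t if t % 2 else t + 1

def eca_attention(feature_map, conv_weights):
    """feature_map: (C, H, W); conv_weights: (k,) 1D kernel shared across channels."""
    pooled = feature_map.mean(axis=(1, 2))                    # global average pooling -> (C,)
    scores = np.convolve(pooled, conv_weights, mode="same")   # local cross-channel interaction
    gate = 1.0 / (1.0 + np.exp(-scores))                      # sigmoid channel weights
    return feature_map * gate[:, None, None]                  # rescale each channel
```

With zero (untrained) weights the sigmoid gate is uniformly 0.5, so the module simply halves every channel; training shapes the kernel so that informative channels are emphasized.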
Proceedings ArticleDOI

Mind the Class Weight Bias: Weighted Maximum Mean Discrepancy for Unsupervised Domain Adaptation

TL;DR: In this article, a weighted maximum mean discrepancy (MMD) model is proposed to exploit the class prior probabilities on the source and target domains; the challenge lies in the fact that class labels in the target domain are unavailable.
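The core quantity, an MMD in which source samples are reweighted by class priors, can be illustrated with a small NumPy sketch using an RBF kernel. The function names, the kernel bandwidth, and the use of a biased estimator are assumptions for illustration, not the paper's exact formulation; in the paper the weights come from estimated target class priors, which here are just an input vector.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_mmd2(Xs, Xt, w, sigma=1.0):
    """Biased estimate of the squared MMD with per-source-sample weights w
    (e.g. target-prior / source-prior for each sample's class)."""
    w = w / w.sum()                                  # normalize weights to sum to 1
    m = Xt.shape[0]
    Kss = rbf_kernel(Xs, Xs, sigma)
    Kst = rbf_kernel(Xs, Xt, sigma)
    Ktt = rbf_kernel(Xt, Xt, sigma)
    return w @ Kss @ w - 2.0 * (w @ Kst).sum() / m + Ktt.sum() / m ** 2
```

With uniform weights this reduces to the standard biased MMD estimate, which vanishes when source and target samples coincide and grows as the two distributions separate.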
Proceedings ArticleDOI

Is Second-Order Information Helpful for Large-Scale Visual Recognition?

TL;DR: A Matrix Power Normalized Covariance (MPN-COV) method that develops forward and backward propagation formulas for the involved nonlinear matrix functions, such that MPN-COV can be trained end-to-end, and analyzes both qualitatively and quantitatively its advantage over the well-known Log-Euclidean metric.
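The forward pass of matrix power normalization is commonly computed through an eigendecomposition, Sigma^alpha = U diag(lambda^alpha) U^T. A minimal NumPy sketch of that forward computation follows; function names are illustrative, and the paper's contribution additionally includes the backward formulas needed to train this layer end-to-end, which are omitted here.

```python
import numpy as np

def mpn_cov(X, alpha=0.5):
    """X: (N, d) local features. Returns the matrix-power-normalized
    covariance Sigma^alpha via eigendecomposition."""
    Xc = X - X.mean(axis=0, keepdims=True)      # center the features
    sigma = Xc.T @ Xc / X.shape[0]              # d x d sample covariance
    vals, vecs = np.linalg.eigh(sigma)          # EIG of the symmetric PSD matrix
    vals = np.clip(vals, 0.0, None)             # guard against tiny negative eigenvalues
    return (vecs * vals ** alpha) @ vecs.T      # U diag(lambda^alpha) U^T
```

With alpha = 0.5 the result is the matrix square root of the covariance, so squaring the output recovers the original covariance matrix.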
Proceedings ArticleDOI

Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization

TL;DR: This paper proposes an iterative matrix square root normalization method for fast end-to-end training of global covariance pooling networks, which consists of three consecutive nonlinear structured layers that perform pre-normalization, coupled matrix iteration, and post-compensation, respectively.
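The three layers named in the summary map naturally onto a coupled Newton-Schulz iteration for the matrix square root: divide by the trace (pre-normalization), iterate, then restore the scale (post-compensation). A hedged NumPy sketch under those assumptions, with illustrative names and a fixed iteration count; it avoids the eigendecomposition entirely, which is the source of the speedup on GPUs.

```python
import numpy as np

def isqrt_cov_sqrt(A, n_iter=7):
    """Approximate matrix square root of an SPD matrix A via pre-normalization,
    a coupled Newton-Schulz iteration, and post-compensation."""
    d = A.shape[0]
    I = np.eye(d)
    tr = np.trace(A)
    Y = A / tr                      # pre-normalization: eigenvalues now lie in (0, 1]
    Z = I
    for _ in range(n_iter):         # coupled iteration: Y -> (A/tr)^(1/2), Z -> (A/tr)^(-1/2)
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z
    return np.sqrt(tr) * Y          # post-compensation restores the original scale
```

The trace normalization guarantees convergence for any SPD input, since it places all eigenvalues inside the region where the Newton-Schulz iteration converges quadratically.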