Institution

Xidian University

Education · Xi'an, China
About: Xidian University is an education organization based in Xi'an, China. It is known for its research contributions in the topics of Antenna (radio) and Computer science. The organization has 32099 authors who have published 38961 publications receiving 431820 citations. The organization is also known as University of Electronic Science and Technology at Xi'an (Xīān Diànzǐ Kējì Dàxué).


Papers
Journal Article
TL;DR: In this article, the authors comprehensively summarize and compare methods for the generation and detection of optical, radio, and acoustic OAM, and then review its applications and technical challenges in communications, including free-space optical, optical fiber, radio, and acoustic communications.
Abstract: Orbital angular momentum (OAM) has aroused widespread interest in many fields, especially in telecommunications, due to its potential for unleashing new capacity in the severely congested spectrum of commercial communication systems. Beams carrying OAM have a helical phase front and a field strength with a singularity along the axial center, which can be used for information transmission, imaging and particle manipulation. The number of orthogonal OAM modes in a single beam is theoretically infinite, and each mode is an element of a complete orthogonal basis that can be employed for multiplexing different signals, thus greatly improving the spectrum efficiency. In this paper, we comprehensively summarize and compare the methods for generation and detection of optical OAM, radio OAM and acoustic OAM. Then, we present the applications and technical challenges of OAM in communications, including free-space optical communications, optical fiber communications, radio communications and acoustic communications. To complete our survey, we also discuss the state of the art of particle manipulation and target imaging with OAM beams.
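The orthogonality the abstract relies on can be checked numerically. Below is a minimal NumPy sketch (an illustration, not code from the paper; the topological charges and sample count are arbitrary) showing that azimuthal phase profiles exp(i·l·φ) with different integer charges are orthogonal over the azimuth, which is what allows distinct OAM modes to multiplex independent channels.

```python
# Minimal sketch (not from the paper): OAM azimuthal phase profiles exp(i*l*phi)
# with different integer topological charges are mutually orthogonal.
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)  # azimuthal angle samples

def oam_phase(l):
    """Azimuthal phase factor of an OAM mode with topological charge l."""
    return np.exp(1j * l * phi)

def overlap(l, m):
    """Normalized inner product <u_l, u_m> over the azimuth."""
    return np.mean(oam_phase(l) * np.conj(oam_phase(m)))

print(abs(overlap(3, 3)))   # ~1.0: a mode overlaps fully with itself
print(abs(overlap(3, -2)))  # ~0.0: different charges are orthogonal
```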

138 citations

Proceedings Article
01 Oct 2017
TL;DR: This paper proposes a multimodal gesture recognition method based on a ResC3D network, which leverages the advantages of both the residual and C3D models, together with a canonical correlation analysis based fusion scheme for blending features.
Abstract: Gesture recognition is an important issue in computer vision. Recognizing gestures from videos remains a challenging task due to gesture-irrelevant factors. In this paper, we propose a multimodal gesture recognition method based on a ResC3D network. One key idea is to find a compact and effective representation of video sequences. Therefore, video enhancement techniques, such as Retinex and median filtering, are applied to eliminate illumination variation and noise in the input video, and a weighted frame unification strategy is utilized to sample key frames. Upon these representations, a ResC3D network, which leverages the advantages of both the residual and C3D models, is developed to extract features, together with a canonical correlation analysis based fusion scheme for blending features. The performance of our method is evaluated in the Chalearn LAP isolated gesture recognition challenge. It reaches 67.71% accuracy and ranks first in this challenge.
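As a rough illustration of the canonical correlation analysis based fusion mentioned in the abstract, the sketch below (not the authors' code; the feature dimensions and data are toy placeholders) projects two modality feature sets into a shared correlated subspace with scikit-learn's CCA and fuses the projections.

```python
# Rough sketch of CCA-based feature fusion for two modalities (e.g. RGB and
# depth descriptors from a 3D CNN). Toy random data; not the authors' code.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
rgb_feats   = rng.normal(size=(200, 64))   # 200 clips, 64-dim RGB features (placeholder)
depth_feats = rng.normal(size=(200, 64))   # matching depth features (placeholder)

# Project both modalities into a shared subspace where their correlation is maximal.
cca = CCA(n_components=16)
cca.fit(rgb_feats, depth_feats)
rgb_c, depth_c = cca.transform(rgb_feats, depth_feats)

# One common fusion choice: sum (or concatenate) the projected features, then
# feed the fused vector to a classifier such as a linear SVM.
fused = rgb_c + depth_c
print(fused.shape)  # (200, 16)
```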

138 citations

Journal Article
TL;DR: This work develops a novel Lyapunov-based logic switching rule and then designs the desired adaptive switching controllers, whose parameters are tuned online in a switching manner according to the proposed switching logic.

138 citations

Proceedings Article
01 Jan 2018
TL;DR: This paper proposes a three-player game named KDGAN, consisting of a classifier, a teacher, and a discriminator, in which the classifier and the teacher learn from each other via distillation losses and are adversarially trained against the discriminator via adversarial losses.
Abstract: Knowledge distillation (KD) aims to train a lightweight classifier capable of providing accurate inference with constrained resources in multi-label learning. Instead of directly consuming feature-label pairs, the classifier is trained by a teacher, i.e., a high-capacity model whose training may be resource-hungry. The accuracy of the classifier trained this way is usually suboptimal because it is difficult to learn the true data distribution from the teacher. An alternative method is to adversarially train the classifier against a discriminator in a two-player game akin to generative adversarial networks (GAN), which can ensure that the classifier learns the true data distribution at the equilibrium of this game. However, it may take an excessively long time for such a two-player game to reach equilibrium due to high-variance gradient updates. To address these limitations, we propose a three-player game named KDGAN consisting of a classifier, a teacher, and a discriminator. The classifier and the teacher learn from each other via distillation losses and are adversarially trained against the discriminator via adversarial losses. By simultaneously optimizing the distillation and adversarial losses, the classifier will learn the true data distribution at the equilibrium. We approximate the discrete distribution learned by the classifier (or the teacher) with a concrete distribution. From the concrete distribution, we generate continuous samples to obtain low-variance gradient updates, which speed up the training. Extensive experiments using real datasets confirm the superiority of KDGAN in both accuracy and training speed.
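The concrete (Gumbel-softmax) relaxation mentioned in the abstract can be sketched in a few lines. The snippet below is an illustrative NumPy version, not the authors' implementation; the temperature and logits are arbitrary placeholders.

```python
# Illustrative sketch (not the authors' implementation) of sampling from a
# concrete (Gumbel-softmax) relaxation of a discrete label distribution, the
# trick used to obtain continuous, low-variance gradient samples.
import numpy as np

def concrete_sample(logits, temperature=0.5, rng=None):
    """Continuous sample that approximates a one-hot draw from softmax(logits)."""
    rng = rng if rng is not None else np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + gumbel) / temperature
    y = y - y.max()                 # numerical stability
    expy = np.exp(y)
    return expy / expy.sum()        # lies on the simplex, differentiable in the logits

logits = np.log(np.array([0.7, 0.2, 0.1]))  # a classifier's predicted label distribution
print(concrete_sample(logits))              # close to one-hot at low temperature
```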

138 citations

Journal Article
TL;DR: The authors propose a structured sparsity regularization (SSR) filter pruning scheme to speed up computation and reduce the memory overhead of CNNs, which can be well supported by various off-the-shelf deep learning libraries.
Abstract: The success of convolutional neural networks (CNNs) in computer vision applications has been accompanied by a significant increase of computation and memory costs, which prohibits their usage in resource-limited environments, such as mobile systems or embedded devices. To this end, research on CNN compression has recently emerged. In this paper, we propose a novel filter pruning scheme, termed structured sparsity regularization (SSR), to simultaneously speed up the computation and reduce the memory overhead of CNNs, which can be well supported by various off-the-shelf deep learning libraries. Concretely, the proposed scheme incorporates two different regularizers of structured sparsity into the original objective function of filter pruning, which fully coordinates the global output and local pruning operations to adaptively prune filters. We further propose an alternative updating with Lagrange multipliers (AULM) scheme to efficiently solve its optimization. AULM follows the principle of the alternating direction method of multipliers (ADMM) and alternates between promoting the structured sparsity of CNNs and optimizing the recognition loss, which leads to a very efficient solver (2.5x faster than the most recent work that directly solves the group sparsity-based regularization). Moreover, by imposing the structured sparsity, the online inference is extremely memory-light since the number of filters and the output feature maps are simultaneously reduced. The proposed scheme has been deployed to a variety of state-of-the-art CNN structures, including LeNet, AlexNet, VGGNet, ResNet, and GoogLeNet, over different data sets. Quantitative results demonstrate that the proposed scheme achieves superior performance over the state-of-the-art methods. We further demonstrate the proposed compression scheme for the task of transfer learning, including domain adaptation and object detection, which also shows exciting performance gains over the state-of-the-art filter pruning methods.
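As a simplified illustration of the structured sparsity idea (not the paper's AULM solver), the sketch below applies a group-lasso penalty over whole convolution filters and then prunes filters whose norm falls below a threshold; the layer shape, regularization weight, and threshold are arbitrary placeholders.

```python
# Simplified sketch of structured (group) sparsity over whole convolution
# filters; not the paper's AULM solver. Shapes and thresholds are placeholders.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(32, 3, 3, 3))  # toy layer: 32 filters of shape 3x3x3

def filter_norms(w):
    """L2 norm of each output filter (one group per filter)."""
    return np.sqrt((w ** 2).reshape(w.shape[0], -1).sum(axis=1))

def group_lasso_penalty(w, lam=1e-3):
    """Sum of per-filter norms; added to the training loss, it drives whole filters toward zero."""
    return lam * filter_norms(w).sum()

def prune_filters(w, threshold=1e-2):
    """Remove filters whose norm the penalty has driven below the threshold."""
    keep = filter_norms(w) >= threshold
    return w[keep], keep

pruned, keep = prune_filters(weights)
# With these random toy weights nothing falls below the threshold yet; after
# training with the penalty, near-zero filters (and their feature maps) are dropped.
print(group_lasso_penalty(weights), pruned.shape, int(keep.sum()))
```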

138 citations


Authors

Showing all 32362 results

Name                      H-index  Papers  Citations
Zhong Lin Wang            245      2529    259003
Jie Zhang                 178      4857    221720
Bin Wang                  126      2226    74364
Huijun Gao                121      685     44399
Hong Wang                 110      1633    51811
Jian Zhang                107      3064    69715
Guozhong Cao              104      694     41625
Lajos Hanzo               101      2040    54380
Witold Pedrycz            101      1766    58203
Lei Liu                   98       2041    51163
Qi Tian                   96       1030    41010
Wei Liu                   96       1538    42459
MengChu Zhou              96       1124    36969
Chunying Chen             94       508     30110
Daniel W. C. Ho           85       360     21429
Network Information
Related Institutions (5)
Beihang University: 73.5K papers, 975.6K citations, 92% related
Southeast University: 79.4K papers, 1.1M citations, 91% related
Harbin Institute of Technology: 109.2K papers, 1.6M citations, 91% related
City University of Hong Kong: 60.1K papers, 1.7M citations, 90% related
Nanyang Technological University: 112.8K papers, 3.2M citations, 90% related

Performance
Metrics
No. of papers from the Institution in previous years
Year  Papers
2023  117
2022  529
2021  3,751
2020  3,817
2019  4,017
2018  3,382