Institution
Xidian University
Education • Xi'an, China
About: Xidian University is an educational organization based in Xi'an, China. It is known for its research contributions in the topics of Antenna (radio) and Synthetic aperture radar. The organization has 32099 authors who have published 38961 publications, receiving 431820 citations. The organization is also known as: University of Electronic Science and Technology at Xi'an & Xīān Diànzǐ Kējì Dàxué.
Papers published on a yearly basis
Papers
TL;DR: An ultrathin metallic structure to produce frequency-selective spoof surface plasmon polaritons (SPPs) in the microwave and terahertz frequencies with good performance of low loss, high transmission, and wide bandwidth in the selective frequency band is proposed.
Title: Broadband Frequency-Selective Spoof Surface Plasmon Polaritons on Ultrathin Metallic Structure
129 citations
TL;DR: An in situ growth route combining hydrophobic core/shell UCNPs with ultrasmall water-soluble CuS, triggered by single 808 nm NIR irradiation, is proposed as a theranostic platform; the resulting platform is uniform and stable.
Abstract: In the theranostic field, a near-infrared (NIR) laser is located in the optical window, and up-conversion nanoparticles (UCNPs) could potentially be utilized as imaging agents with high contrast. Meanwhile, copper sulfide (CuS) has been proposed as a photothermal agent whose temperature increases under a NIR laser. However, until now there has been no direct and effective strategy to integrate the hydrophobic UCNPs with CuS. Herein, we propose an in situ growth route that combines hydrophobic core/shell UCNPs with ultrasmall water-soluble CuS, triggered by single 808 nm NIR irradiation, as the theranostic platform. Hydrophobic NaYF4:Yb,Er@NaYF4,Nd,Yb could be turned hydrophilic, with highly dispersed and biocompatible properties, through conjugation with transferred dopamine. The as-synthesized ultrasmall CuS (3 and 7 nm) served as a stable photothermal agent even after several laser-on/off cycles. Most importantly, compared with the mixing route, the in situ growth route to coat UCNPs with CuS...
129 citations
TL;DR: The experimental results show that the MA using both MatchFmeasure and UIR is effective at simultaneously aligning multiple pairs of ontologies while avoiding the biased improvement caused by MatchFmeasure alone, and the comparison with state-of-the-art ontology matching systems further indicates the effectiveness of the proposed method.
129 citations
18 Jun 2018 • TL;DR: DSRN efficiently constructs strong representations to disentangle highly nonlinear relationships between images and shapes; by incorporating a linear layer of low-rank learning, DSRN effectively encodes correlations of landmarks to improve performance.
Abstract: Face alignment has been extensively studied in computer vision community due to its fundamental role in facial analysis, but it remains an unsolved problem. The major challenges lie in the highly nonlinear relationship between face images and associated facial shapes, which is coupled by underlying correlation of landmarks. Existing methods mainly rely on cascaded regression, suffering from intrinsic shortcomings, e.g., strong dependency on initialization and failure to exploit landmark correlations. In this paper, we propose the direct shape regression network (DSRN) for end-to-end face alignment by jointly handling the aforementioned challenges in a unified framework. Specifically, by deploying doubly convolutional layer and by using the Fourier feature pooling layer proposed in this paper, DSRN efficiently constructs strong representations to disentangle highly nonlinear relationships between images and shapes; by incorporating a linear layer of low-rank learning, DSRN effectively encodes correlations of landmarks to improve performance. DSRN leverages the strengths of kernels for nonlinear feature extraction and neural networks for structured prediction, and provides the first end-to-end learning architecture for direct face alignment. Its effectiveness and generality are validated by extensive experiments on five benchmark datasets, including AFLW, 300W, CelebA, MAFL, and 300VW. All empirical results demonstrate that DSRN consistently produces high performance and in most cases surpasses state-of-the-art.
129 citations
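The abstract's "linear layer of low-rank learning" can be illustrated with a factorized prediction head: the weight matrix is constrained to W = UV with a small rank, so every landmark coordinate is predicted through a shared low-dimensional bottleneck that captures landmark correlations. The sketch below is an illustrative numpy reconstruction of that idea only; the dimensions, names, and initialization are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 256                    # backbone feature dimension (assumed)
n_landmarks = 68           # e.g. the 300W annotation scheme
out_dim = 2 * n_landmarks  # (x, y) coordinates per landmark
rank = 8                   # low rank, far below min(d, out_dim)

# Full-rank head: d * out_dim = 34,816 parameters.
# Low-rank head:  d * rank + rank * out_dim = 3,136 parameters.
U = rng.normal(scale=0.01, size=(d, rank))
V = rng.normal(scale=0.01, size=(rank, out_dim))

def low_rank_head(features):
    """Predict flattened landmark coordinates through W = U @ V.

    All landmarks are forced through an 8-dimensional bottleneck,
    so their predictions share a small set of latent directions."""
    return features @ U @ V

x = rng.normal(size=(4, d))   # a batch of 4 image feature vectors
shapes = low_rank_head(x)
print(shapes.shape)           # (4, 136)
```

In effect the head cannot move landmarks independently: every output is a linear combination of the same 8 latent responses, which is one concrete way a low-rank linear layer encodes correlations among landmarks while also cutting parameters roughly tenfold.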
TL;DR: In this article, a shared predictive deep quantization (SPDQ) approach is proposed that explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities; representations in the shared subspace and the private subspaces are learned simultaneously by embedding them into a reproducing kernel Hilbert space.
Abstract: With explosive growth of data volume and ever-increasing diversity of data modalities, cross-modal similarity search, which conducts nearest neighbor search across different modalities, has been attracting increasing interest. This paper presents a deep compact code learning solution for efficient cross-modal similarity search. Many recent studies have proven that quantization-based approaches perform generally better than hashing-based approaches on single-modal similarity search. In this paper, we propose a deep quantization approach, which is among the early attempts of leveraging deep neural networks into quantization-based cross-modal similarity search. Our approach, dubbed shared predictive deep quantization (SPDQ), explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities, and representations in the shared subspace and the private subspaces are learned simultaneously by embedding them to a reproducing kernel Hilbert space, where the mean embedding of different modality distributions can be explicitly compared. In addition, in the shared subspace, a quantizer is learned to produce the semantics preserving compact codes with the help of label alignment. Thanks to this novel network architecture in cooperation with supervised quantization training, SPDQ can preserve intramodal and intermodal similarities as much as possible and greatly reduce quantization error. Experiments on two popular benchmarks corroborate that our approach outperforms state-of-the-art methods.
129 citations
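The abstract's claim that "the mean embedding of different modality distributions can be explicitly compared" in an RKHS describes the kernel mean embedding idea behind maximum mean discrepancy (MMD). A minimal numpy sketch of that comparison, assuming an RBF kernel and invented stand-in data (this is not the paper's architecture or training code):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma):
    """Biased estimate of squared MMD: the RKHS distance between
    the kernel mean embeddings of the samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
dim = 16
gamma = 1.0 / dim                              # simple bandwidth heuristic
img      = rng.normal(0.0, 1.0, (200, dim))    # stand-in "image modality" codes
txt_same = rng.normal(0.0, 1.0, (200, dim))    # same distribution as img
txt_far  = rng.normal(2.0, 1.0, (200, dim))    # shifted distribution

print(mmd2(img, txt_same, gamma))  # small: distributions match
print(mmd2(img, txt_far, gamma))   # much larger: distributions differ
```

Minimizing such a distance between the image-side and text-side representations in the shared subspace is one standard way to make two modality distributions agree; SPDQ's actual loss and network details are in the paper.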
Authors
Showing all 32362 results
Name | H-index | Papers | Citations |
---|---|---|---|
Zhong Lin Wang | 245 | 2529 | 259003 |
Jie Zhang | 178 | 4857 | 221720 |
Bin Wang | 126 | 2226 | 74364 |
Huijun Gao | 121 | 685 | 44399 |
Hong Wang | 110 | 1633 | 51811 |
Jian Zhang | 107 | 3064 | 69715 |
Guozhong Cao | 104 | 694 | 41625 |
Lajos Hanzo | 101 | 2040 | 54380 |
Witold Pedrycz | 101 | 1766 | 58203 |
Lei Liu | 98 | 2041 | 51163 |
Qi Tian | 96 | 1030 | 41010 |
Wei Liu | 96 | 1538 | 42459 |
MengChu Zhou | 96 | 1124 | 36969 |
Chunying Chen | 94 | 508 | 30110 |
Daniel W. C. Ho | 85 | 360 | 21429 |