Institution
Hefei University of Technology
Education • Hefei, China
About: Hefei University of Technology is an education organization based in Hefei, China. It is known for its research contributions in the topics of Computer science and Microstructure. The organization has 28093 authors who have published 24935 publications receiving 324989 citations.
Papers published on a yearly basis
Papers
01 Oct 2019
TL;DR: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative; results of 81 trackers are presented, many of them state-of-the-art trackers published at major computer vision conferences or in journals in recent years.
Abstract: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) the VOT-LT2019 challenge focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
393 citations
TL;DR: This paper proposes an adaptive hypergraph learning method for transductive image classification that simultaneously learns the labels of unlabeled images and the weights of hyperedges, and can thus automatically modulate the effects of different hyperedges.
Abstract: Recent years have witnessed a surge of interest in graph-based transductive image classification. Existing simple graph-based transductive learning methods only model the pairwise relationship of images, however, and they are sensitive to the radius parameter used in similarity calculation. Hypergraph learning has been investigated to solve both difficulties. It models the high-order relationship of samples by using a hyperedge to link multiple samples. Nevertheless, the existing hypergraph learning methods face two problems, i.e., how to generate hyperedges and how to handle a large set of hyperedges. This paper proposes an adaptive hypergraph learning method for transductive image classification. In our method, we generate hyperedges by linking images and their nearest neighbors. By varying the size of the neighborhood, we are able to generate a set of hyperedges for each image and its visual neighbors. Our method simultaneously learns the labels of unlabeled images and the weights of hyperedges. In this way, we can automatically modulate the effects of different hyperedges. Thorough empirical studies show the effectiveness of our approach when compared with representative baselines.
387 citations
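The hyperedge-generation step described in the abstract above (linking each image to its nearest neighbors, with the neighborhood size varied to produce multiple hyperedges per image) can be sketched roughly as follows. This is a minimal illustration, not the paper's code: `knn_hyperedges` is a hypothetical name, and Euclidean distance over generic feature vectors is assumed.

```python
import numpy as np

def knn_hyperedges(features, neighborhood_sizes=(3, 5)):
    """Build hyperedges by linking each image to its k nearest
    neighbors (Euclidean distance), one set of hyperedges per k.
    Returns the n-by-m binary incidence matrix H used in hypergraph
    learning, where H[v, e] = 1 iff vertex v belongs to hyperedge e."""
    n = len(features)
    # Pairwise squared Euclidean distances between feature vectors.
    diff = features[:, None, :] - features[None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    edges = []
    for k in neighborhood_sizes:
        for i in range(n):
            # The k nearest neighbors of image i, plus i itself
            # (argsort lists i first, since its self-distance is 0).
            members = np.argsort(dist[i])[: k + 1]
            col = np.zeros(n)
            col[members] = 1.0
            edges.append(col)
    # One hyperedge per (image, neighborhood size) pair.
    return np.stack(edges, axis=1)
```

Each hyperedge thus contains k + 1 vertices, and varying k captures visual groupings at several scales, which is the property the learned hyperedge weights can then exploit.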
TL;DR: The proposed approach is based on large-margin structured output learning; visual consistency is integrated with the click features through a hypergraph regularizer term, and a novel algorithm is designed to optimize the objective function.
Abstract: The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are effective in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank models based on visual features and user clicks outperform state-of-the-art algorithms.
382 citations
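Hypergraph regularizer terms of the kind mentioned above are typically built from the normalized hypergraph Laplacian (the standard Zhou et al. construction); the paper's exact formulation is not reproduced here. A sketch under that assumption, given an incidence matrix H and hyperedge weights w:

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian
        L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    the standard construction behind hypergraph regularizer terms.
    H: (n, m) binary incidence matrix; w: (m,) hyperedge weights."""
    W = np.diag(w)
    dv = H @ w                  # vertex degrees
    de = H.sum(axis=0)          # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    Theta = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - Theta

def regularizer(L, f):
    """Smoothness penalty f^T L f: small when the score vector f
    is consistent within each hyperedge."""
    return float(f @ L @ f)
```

In a learning-to-rank objective this penalty is added to the large-margin loss, so that visually similar (co-hyperedge) images are pushed toward similar ranking scores.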
TL;DR: Experimental results demonstrate that the proposed method can obtain more competitive performance in comparison to nine representative medical image fusion methods, leading to state-of-the-art results on both visual quality and objective assessment.
Abstract: As an effective way to integrate the information contained in multiple medical images with different modalities, medical image fusion has emerged as a powerful technique in various clinical applications such as disease diagnosis and treatment planning. In this paper, a new multimodal medical image fusion method in nonsubsampled shearlet transform (NSST) domain is proposed. In the proposed method, the NSST decomposition is first performed on the source images to obtain their multiscale and multidirection representations. The high-frequency bands are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model, in which all the PCNN parameters can be adaptively estimated by the input band. The low-frequency bands are merged by a novel strategy that simultaneously addresses two crucial issues in medical image fusion, namely, energy preservation and detail extraction. Finally, the fused image is reconstructed by performing inverse NSST on the fused high-frequency and low-frequency bands. The effectiveness of the proposed method is verified by four different categories of medical image fusion problems [computed tomography (CT) and magnetic resonance (MR), MR-T1 and MR-T2, MR and positron emission tomography, and MR and single-photon emission CT] with more than 80 pairs of source images in total. Experimental results demonstrate that the proposed method can obtain more competitive performance in comparison to nine representative medical image fusion methods, leading to state-of-the-art results on both visual quality and objective assessment.
381 citations
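The decompose-fuse-reconstruct pipeline described above can be illustrated with deliberately simplified stand-ins: a box filter in place of the NSST decomposition, averaging in place of the paper's energy-preserving low-frequency strategy, and max-absolute selection in place of the PA-PCNN high-frequency rule. None of this is the paper's method; it only shows the pipeline's shape.

```python
import numpy as np

def box_blur(img, k=5):
    """Box filter via an integral image; a crude stand-in for the
    low-pass stage of a real multiscale transform such as NSST."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def fuse(img_a, img_b, k=5):
    """Two-band fusion: low bands merged by averaging (energy-
    preservation stand-in), high bands by max-absolute selection
    (stand-in for the PA-PCNN activity comparison), then recombined."""
    low_a, low_b = box_blur(img_a, k), box_blur(img_b, k)
    high_a, high_b = img_a - low_a, img_b - low_b
    low = 0.5 * (low_a + low_b)
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high
```

A real implementation would replace `box_blur` with a multiscale, multidirection NSST decomposition and drive the high-frequency selection with the adaptively parameterized PCNN described in the abstract.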
TL;DR: It is suggested that the integration of the synergetic effect of suitably sized plasmonic Ag@AgCl and the strong coupling effect between the Ag@AgCl nanoparticles and the exfoliated porous g-C3N4 nanosheets is favorable for visible-light response and fast separation of photogenerated electron-hole pairs, thus significantly improving the photocatalytic efficiency.
Abstract: A novel efficient Ag@AgCl/g-C3N4 plasmonic photocatalyst was synthesized by a rational in situ ion exchange approach between exfoliated g-C3N4 nanosheets with porous 2D morphology and AgNO3. The as-prepared Ag@AgCl-9/g-C3N4 plasmonic photocatalyst exhibited excellent photocatalytic performance under visible light irradiation for rhodamine B degradation with a rate constant of 0.1954 min–1, which is ∼41.6 and ∼16.8 times higher than those of g-C3N4 (∼0.0047 min–1) and Ag/AgCl (∼0.0116 min–1), respectively. The degradation of methylene blue, methyl orange, and colorless phenol further confirmed the broad-spectrum photocatalytic degradation abilities of Ag@AgCl-9/g-C3N4. These results suggested that the integration of the synergetic effect of suitably sized plasmonic Ag@AgCl and the strong coupling effect between the Ag@AgCl nanoparticles and the exfoliated porous g-C3N4 nanosheets was superior for visible-light response and fast separation of photogenerated electron–hole pairs, thus significantly improving ...
377 citations
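The rate-constant ratios quoted above follow directly from the reported values; a quick arithmetic check (numbers taken from the abstract):

```python
# Reported pseudo-first-order rate constants, in min^-1.
k_composite = 0.1954   # Ag@AgCl-9/g-C3N4
k_gcn = 0.0047         # g-C3N4
k_agcl = 0.0116        # Ag/AgCl

ratio_gcn = k_composite / k_gcn    # ≈ 41.6, matching the reported ∼41.6×
ratio_agcl = k_composite / k_agcl  # ≈ 16.8, matching the reported ∼16.8×
```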
Authors
Showing all 28292 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Yi Chen | 217 | 4342 | 293080 |
| Xiang Zhang | 154 | 1733 | 117576 |
| Jun Chen | 136 | 1856 | 77368 |
| Shuicheng Yan | 123 | 810 | 66192 |
| Yang Li | 117 | 1319 | 63111 |
| Jian Liu | 117 | 2090 | 73156 |
| Han-Qing Yu | 105 | 718 | 39735 |
| Jianqiao Ye | 101 | 962 | 42647 |
| Wei Liu | 96 | 1538 | 42459 |
| Wei Zhou | 93 | 1640 | 39772 |
| Panos M. Pardalos | 87 | 1207 | 39512 |
| Zhong Chen | 80 | 1000 | 28171 |
| Yong Zhang | 78 | 665 | 36388 |
| Rong Cao | 76 | 568 | 21747 |
| Qian Zhang | 76 | 891 | 25517 |