Institution

Hefei University of Technology

Education · Hefei, China
About: Hefei University of Technology is an education organization based in Hefei, China. It is known for its research contributions in the topics of Computer science & Microstructure. The organization has 28,093 authors who have published 24,935 publications receiving 324,989 citations.


Papers
Proceedings ArticleDOI
Matej Kristan, Amanda Berg, Linyu Zheng, Litu Rout +176 more · Institutions (43)
01 Oct 2019
TL;DR: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative; results of 81 trackers are presented, many of them state-of-the-art trackers published at major computer vision conferences or in journals in recent years.
Abstract: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) the VOT-LT2019 challenge focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
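The standard VOT short-term methodology scores trackers on accuracy (average overlap with the ground truth while the tracker is tracking) and robustness (number of tracking failures). Below is a minimal sketch of these two measures; the function names are illustrative and not part of the official VOT toolkit, which additionally handles tracker reinitialization after failures.

```python
# Sketch of the two core VOT short-term measures: accuracy and robustness.
# Boxes are (x, y, w, h) tuples; names here are illustrative only.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x, y, w, h) boxes."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    ix = max(0.0, min(xa + wa, xb + wb) - max(xa, xb))
    iy = max(0.0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = ix * iy
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

def evaluate_sequence(predictions, ground_truth):
    """Accuracy = mean IoU over frames where the target is still tracked;
    robustness = number of frames where the overlap drops to zero."""
    overlaps = [iou(p, g) for p, g in zip(predictions, ground_truth)]
    failures = sum(1 for o in overlaps if o == 0.0)
    tracked = [o for o in overlaps if o > 0.0]
    accuracy = sum(tracked) / len(tracked) if tracked else 0.0
    return accuracy, failures
```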

393 citations

Journal ArticleDOI
TL;DR: This paper proposes an adaptive hypergraph learning method for transductive image classification that simultaneously learns the labels of unlabeled images and the weights of hyperedges, and can thus automatically modulate the effects of different hyperedges.
Abstract: Recent years have witnessed a surge of interest in graph-based transductive image classification. However, existing simple graph-based transductive learning methods model only the pairwise relationships between images and are sensitive to the radius parameter used in similarity calculation. Hypergraph learning has been investigated to address both difficulties. It models high-order relationships among samples by using a hyperedge to link multiple samples. Nevertheless, existing hypergraph learning methods face two problems: how to generate hyperedges and how to handle a large set of hyperedges. This paper proposes an adaptive hypergraph learning method for transductive image classification. In our method, we generate hyperedges by linking images and their nearest neighbors. By varying the size of the neighborhood, we are able to generate a set of hyperedges for each image and its visual neighbors. Our method simultaneously learns the labels of unlabeled images and the weights of hyperedges. In this way, we can automatically modulate the effects of different hyperedges. Thorough empirical studies show the effectiveness of our approach when compared with representative baselines.
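The hyperedge-generation step described above can be sketched as follows: each image together with its k nearest neighbors forms one hyperedge, and varying k yields a family of hyperedges per image, collected into an incidence matrix. This is a minimal illustration under assumed Euclidean features; the paper's simultaneous label/weight optimization is not reproduced, and all names are illustrative.

```python
# Sketch of k-NN hyperedge generation with varying neighborhood sizes.
import numpy as np

def build_hyperedges(features, k_values=(5, 10, 20)):
    """Return a binary incidence matrix H of shape (n_samples, n_hyperedges),
    where H[v, e] = 1 iff sample v belongs to hyperedge e."""
    n = features.shape[0]
    # Pairwise squared Euclidean distances between all samples.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    edges = []
    for k in k_values:                       # one neighborhood size per pass
        for i in range(n):                   # one hyperedge per image
            neighbors = np.argsort(d2[i])[: k + 1]  # includes i itself
            col = np.zeros(n)
            col[neighbors] = 1.0
            edges.append(col)
    return np.stack(edges, axis=1)
```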

387 citations

Journal ArticleDOI
TL;DR: The proposed approach is based on large-margin structured output learning; visual consistency is integrated with the click features through a hypergraph regularizer term, and a novel algorithm is designed to optimize the objective function.
Abstract: The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in judging the relevance between a query and clicked images, are adopted in the image ranking model. However, existing ranking models cannot integrate visual features, which are effective in refining click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are utilized simultaneously to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank model based on visual features and user clicks outperforms state-of-the-art algorithms.
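The alternating-linearization idea mentioned above can be illustrated on a generic composite objective F(w) = f(w) + g(w): each half-step keeps one term exact and replaces the other with its linearization at the latest iterate plus a proximal term. A toy sketch, assuming proximal operators are available for hypothetical stand-ins for the margin loss f and the hypergraph regularizer g (this is not the paper's algorithm, only the generic scheme it builds on):

```python
import numpy as np

def alternating_linearization(prox_f, grad_f, prox_g, grad_g, w0,
                              mu=0.1, iters=200):
    """Each half-step solves
        min_x  h(x) + <grad_other(z), x - z> + ||x - z||^2 / (2*mu),
    i.e. a proximal step on one term taken at the other term's
    gradient-shifted point; the two terms swap roles every half-step."""
    w = y = np.asarray(w0, dtype=float)
    for _ in range(iters):
        w = prox_f(y - mu * grad_g(y), mu)  # keep f exact, linearize g at y
        y = prox_g(w - mu * grad_f(w), mu)  # keep g exact, linearize f at w
    return y

# Toy usage: f(w) = ||w - a||^2 / 2, g(w) = ||w||^2 / 2; minimizer is a/2.
a = np.array([2.0, -4.0])
w_star = alternating_linearization(
    prox_f=lambda v, mu: (v + mu * a) / (1 + mu),
    grad_f=lambda w: w - a,
    prox_g=lambda v, mu: v / (1 + mu),
    grad_g=lambda w: w,
    w0=np.zeros(2),
)
print(w_star)  # close to [1.0, -2.0]
```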

382 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method outperforms nine representative medical image fusion methods, achieving state-of-the-art results in both visual quality and objective assessment.
Abstract: As an effective way to integrate the information contained in multiple medical images with different modalities, medical image fusion has emerged as a powerful technique in various clinical applications such as disease diagnosis and treatment planning. In this paper, a new multimodal medical image fusion method in the nonsubsampled shearlet transform (NSST) domain is proposed. In the proposed method, the NSST decomposition is first performed on the source images to obtain their multiscale and multidirection representations. The high-frequency bands are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model, in which all the PCNN parameters can be adaptively estimated from the input band. The low-frequency bands are merged by a novel strategy that simultaneously addresses two crucial issues in medical image fusion, namely, energy preservation and detail extraction. Finally, the fused image is reconstructed by performing the inverse NSST on the fused high-frequency and low-frequency bands. The effectiveness of the proposed method is verified on four different categories of medical image fusion problems [computed tomography (CT) and magnetic resonance (MR), MR-T1 and MR-T2, MR and positron emission tomography, and MR and single-photon emission CT] with more than 80 pairs of source images in total. Experimental results demonstrate that the proposed method outperforms nine representative medical image fusion methods, achieving state-of-the-art results in both visual quality and objective assessment.
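The pipeline structure described above can be sketched as follows. NSST is not available in common Python libraries, so `nsst_decompose` and `nsst_reconstruct` below are hypothetical placeholders; the paper's energy-preservation/detail-extraction low-frequency strategy is simplified to averaging, and the PA-PCNN high-frequency rule is replaced by an absolute-max rule, so this shows only the transform-fuse-reconstruct skeleton.

```python
# Structural sketch of a transform-domain fusion pipeline (assumptions:
# nsst_decompose / nsst_reconstruct are user-supplied placeholders, and
# the paper's PA-PCNN and low-frequency strategies are simplified here).
import numpy as np

def fuse_images(img_a, img_b, nsst_decompose, nsst_reconstruct):
    low_a, highs_a = nsst_decompose(img_a)  # multiscale, multidirection bands
    low_b, highs_b = nsst_decompose(img_b)

    # Low-frequency bands: plain averaging stands in for the paper's
    # energy-preservation / detail-extraction strategy.
    low_f = 0.5 * (low_a + low_b)

    # High-frequency bands: keep the coefficient with larger magnitude
    # (the paper selects coefficients via a PA-PCNN firing map instead).
    highs_f = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
               for ha, hb in zip(highs_a, highs_b)]

    return nsst_reconstruct(low_f, highs_f)
```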

381 citations

Journal ArticleDOI
TL;DR: It is suggested that the integration of the synergetic effect of suitably sized plasmonic Ag@AgCl and the strong coupling effect between the Ag@AgCl nanoparticles and the exfoliated porous g-C3N4 nanosheets enables a strong visible-light response and fast separation of photogenerated electron–hole pairs, thus significantly improving the photocatalytic efficiency.
Abstract: A novel, efficient Ag@AgCl/g-C3N4 plasmonic photocatalyst was synthesized by a rational in situ ion-exchange approach between exfoliated g-C3N4 nanosheets with a porous 2D morphology and AgNO3. The as-prepared Ag@AgCl-9/g-C3N4 plasmonic photocatalyst exhibited excellent photocatalytic performance under visible-light irradiation for rhodamine B degradation, with a rate constant of 0.1954 min⁻¹, which is ∼41.6 and ∼16.8 times higher than those of g-C3N4 (∼0.0047 min⁻¹) and Ag/AgCl (∼0.0116 min⁻¹), respectively. The degradation of methylene blue, methyl orange, and colorless phenol further confirmed the broad-spectrum photocatalytic degradation abilities of Ag@AgCl-9/g-C3N4. These results suggested that the integration of the synergetic effect of suitably sized plasmonic Ag@AgCl and the strong coupling effect between the Ag@AgCl nanoparticles and the exfoliated porous g-C3N4 nanosheets was responsible for the strong visible-light response and fast separation of photogenerated electron–hole pairs, thus significantly improving the photocatalytic efficiency.
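As a quick sanity check, the rate-constant ratios quoted in the abstract do reproduce the stated enhancement factors:

```python
# Rate constants quoted in the abstract (units: min^-1).
k_composite, k_g_c3n4, k_ag_agcl = 0.1954, 0.0047, 0.0116
print(round(k_composite / k_g_c3n4, 1))   # 41.6 (vs. g-C3N4)
print(round(k_composite / k_ag_agcl, 1))  # 16.8 (vs. Ag/AgCl)
```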

377 citations


Authors


Name | H-index | Papers | Citations
Yi Chen | 217 | 4342 | 293080
Xiang Zhang | 154 | 1733 | 117576
Jun Chen | 136 | 1856 | 77368
Shuicheng Yan | 123 | 810 | 66192
Yang Li | 117 | 1319 | 63111
Jian Liu | 117 | 2090 | 73156
Han-Qing Yu | 105 | 718 | 39735
Jianqiao Ye | 101 | 962 | 42647
Wei Liu | 96 | 1538 | 42459
Wei Zhou | 93 | 1640 | 39772
Panos M. Pardalos | 87 | 1207 | 39512
Zhong Chen | 80 | 1000 | 28171
Yong Zhang | 78 | 665 | 36388
Rong Cao | 76 | 568 | 21747
Qian Zhang | 76 | 891 | 25517
Network Information
Related Institutions (5)
South China University of Technology
69.4K papers, 1.2M citations

92% related

Harbin Institute of Technology
109.2K papers, 1.6M citations

91% related

Tsinghua University
200.5K papers, 4.5M citations

91% related

University of Science and Technology of China
101K papers, 2.4M citations

90% related

Tianjin University
79.9K papers, 1.2M citations

90% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2023 | 106
2022 | 490
2021 | 3,120
2020 | 2,931
2019 | 2,666
2018 | 2,151