Institution
Nankai University
Education • Tianjin, China
About: Nankai University is an education organization based in Tianjin, China. It is known for its research contributions in the topics of Catalysis and Enantioselective synthesis. The organization has 42964 authors who have published 51866 publications receiving 1127896 citations. The organization is also known as: Nánkāi Dàxué.
Topics: Catalysis, Enantioselective synthesis, Adsorption, Graphene, Anode
Papers published on a yearly basis
Papers
TL;DR: Res2Net constructs hierarchical residual-like connections within a single residual block to represent multi-scale features at a granular level and to increase the range of receptive fields for each network layer.
Abstract: Representing features at multiple scales is of great importance for numerous vision tasks. Recent advances in backbone convolutional neural networks (CNNs) continually demonstrate stronger multi-scale representation ability, leading to consistent performance gains on a wide range of applications. However, most existing methods represent the multi-scale features in a layer-wise manner. In this paper, we propose a novel building block for CNNs, namely Res2Net, by constructing hierarchical residual-like connections within one single residual block. The Res2Net represents multi-scale features at a granular level and increases the range of receptive fields for each network layer. The proposed Res2Net block can be plugged into the state-of-the-art backbone CNN models, e.g., ResNet, ResNeXt, and DLA. We evaluate the Res2Net block on all these models and demonstrate consistent performance gains over baseline models on widely-used datasets, e.g., CIFAR-100 and ImageNet. Further ablation studies and experimental results on representative computer vision tasks, i.e., object detection, class activation mapping, and salient object detection, further verify the superiority of the Res2Net over the state-of-the-art baseline methods. The source code and trained models are available on https://mmcheng.net/res2net/ .
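The hierarchical connections the abstract describes can be sketched in a few lines: the input channels are split into `scale` groups, the first group passes through untouched, and each subsequent group is transformed after adding the previous group's output, so later groups see progressively larger receptive fields. This is a minimal numpy sketch of the data flow only; `transform` is a hypothetical stand-in for the learned 3x3 convolutions, and `res2net_connections` is a name chosen here, not an API from the paper's released code.

```python
import numpy as np

def res2net_connections(x, scale=4):
    """Sketch of the hierarchical residual-like connections in a Res2Net block.

    x: feature map of shape (C, H, W); C must be divisible by `scale`.
    Each learned 3x3 convolution is replaced by a toy `transform` so that
    the cascade structure, not the weights, is what this illustrates.
    """
    def transform(t):       # stand-in for a learned 3x3 convolution K_i
        return t * 0.5      # any per-group operation works for the sketch

    splits = np.split(x, scale, axis=0)   # x_1 .. x_s along the channel axis
    outs = [splits[0]]                    # y_1 = x_1 passes through untouched
    y = None
    for i in range(1, scale):
        # y_2 = K_2(x_2); y_i = K_i(x_i + y_{i-1}) for i > 2
        inp = splits[i] if y is None else splits[i] + y
        y = transform(inp)
        outs.append(y)
    return np.concatenate(outs, axis=0)   # concatenated multi-scale features
```

Because each group's output feeds into the next group's input, the effective receptive field grows group by group inside one block, which is the "granular level" multi-scale behavior the abstract refers to.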
1,553 citations
TL;DR: The broadband and tunable high-performance microwave absorption properties of an ultralight and highly compressible graphene foam (GF) are investigated, showing that the microwave absorption performance can be tuned simply via physical compression.
Abstract: The broadband and tunable high-performance microwave absorption properties of an ultralight and highly compressible graphene foam (GF) are investigated. Simply via physical compression, the microwave absorption performance can be tuned. A qualified bandwidth coverage of 93.8% (60.5 GHz/64.5 GHz) is achieved for the GF under 90% compressive strain (1.0 mm thickness). This is mainly attributed to the 3D conductive network.
1,533 citations
TL;DR: In this paper, the preparation of poly(vinyl alcohol) (PVA) nanocomposites with graphene oxide (GO) using a simple water solution processing method is reported. Efficient load transfer is found between the nanofiller graphene and the matrix PVA, and the mechanical properties of the graphene-based nanocomposite with molecule-level dispersion are significantly improved.
Abstract: Despite great recent progress with carbon nanotubes and other nanoscale fillers, the development of strong, durable, and cost-efficient multifunctional nanocomposite materials has yet to be achieved. The challenges are to achieve molecule-level dispersion and maximum interfacial interaction between the nanofiller and the matrix at low loading. Here, the preparation of poly(vinyl alcohol) (PVA) nanocomposites with graphene oxide (GO) using a simple water solution processing method is reported. Efficient load transfer is found between the nanofiller graphene and matrix PVA and the mechanical properties of the graphene-based nanocomposite with molecule-level dispersion are significantly improved. A 76% increase in tensile strength and a 62% improvement of Young's modulus are achieved by addition of only 0.7 wt% of GO. The experimentally determined Young's modulus is in excellent agreement with theoretical simulation.
1,508 citations
TL;DR: A framework for adaptive visual object tracking based on structured output prediction is presented that outperforms state-of-the-art trackers on various benchmark videos; additional features and kernels can easily be incorporated into the framework, resulting in increased tracking performance.
Abstract: Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (estimation of object position). In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we avoid the need for an intermediate classification step. Our method uses a kernelised structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow our tracker to run at high frame rates, we (a) introduce a budgeting mechanism that prevents the unbounded growth in the number of support vectors that would otherwise occur during tracking, and (b) show how to implement tracking on the GPU. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased tracking performance.
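The key idea in the abstract is that the tracker scores candidate object positions directly with a learned function F(x, y), instead of running a binary classifier and converting its labels back into a position. A minimal sketch of that prediction step, assuming a toy linear model: `predict_translation`, the feature function, and `weights` are hypothetical stand-ins for the kernelised structured SVM and its learned parameters, not the paper's actual implementation.

```python
import numpy as np

def predict_translation(frame, prev_box, weights, search=2):
    """Structured prediction step: score every candidate translation y of the
    previous box directly and return the argmax, with no intermediate
    classification stage. `weights` and the toy feature function stand in
    for the learned structured SVM."""
    x0, y0, w, h = prev_box

    def features(bx, by):
        # Toy joint feature phi(x, y): mean intensity of the patch plus a bias.
        patch = frame[by:by + h, bx:bx + w]
        return np.array([patch.mean(), 1.0])

    best, best_score = None, -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            s = float(weights @ features(x0 + dx, y0 + dy))  # F(x, y) = <w, phi(x, y)>
            if s > best_score:
                best, best_score = (x0 + dx, y0 + dy, w, h), s
    return best
```

In the actual method the same scored candidates also drive the online update, and a budgeting mechanism caps the number of support vectors so the tracker can run at high frame rates; this sketch shows only the output-space search that replaces the intermediate labelling step.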
1,507 citations
Authors
Showing all 43397 results
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Yi Chen | 217 | 4342 | 293080 |
| Peidong Yang | 183 | 562 | 144351 |
| Jie Zhang | 178 | 4857 | 221720 |
| Yang Yang | 171 | 2644 | 153049 |
| Qiang Zhang | 161 | 1137 | 100950 |
| Bin Liu | 138 | 2181 | 87085 |
| Jun Chen | 136 | 1856 | 77368 |
| Hui Li | 135 | 2982 | 105903 |
| Jie Liu | 131 | 1531 | 68891 |
| Han Zhang | 130 | 970 | 58863 |
| Jian Zhou | 128 | 3007 | 91402 |
| Chao Zhang | 127 | 3119 | 84711 |
| Wei Chen | 122 | 1946 | 89460 |
| Xuan Zhang | 119 | 1530 | 65398 |
| Yang Li | 117 | 1319 | 63111 |