Author

Ke Gong

Bio: Ke Gong is an academic researcher. The author has contributed to research in the topics Feature (computer vision) & Network model, has an h-index of 2, and has co-authored 4 publications receiving 81 citations.

Papers
Journal ArticleDOI
TL;DR: The feature refinement and filter network is proposed to solve these problems from three aspects; by weakening high-response features, it identifies other highly valuable features and extracts the complete features of persons, thereby enhancing the robustness of the model.
Abstract: In the task of person re-identification, the attention mechanism and fine-grained information have been proved to be effective. However, it has been observed that models often focus on the extraction of features with strong discrimination, and neglect other valuable features. The extracted fine-grained information may include redundancies. In addition, current methods lack an effective scheme to remove background interference. Therefore, this paper proposes the feature refinement and filter network to solve the above problems from three aspects: first, by weakening the high response features, we aim to identify highly valuable features and extract the complete features of persons, thereby enhancing the robustness of the model; second, by positioning and intercepting the high response areas of persons, we eliminate the interference arising from background information and strengthen the response of the model to the complete features of persons; finally, valuable fine-grained features are selected using a multi-branch attention network for person re-identification to enhance the performance of the model. Our extensive experiments on the benchmark Market-1501, DukeMTMC-reID, CUHK03 and MSMT17 person re-identification datasets demonstrate that the performance of our method is comparable to that of state-of-the-art approaches.
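As a rough illustration of the first aspect, here is a minimal sketch of "weakening high-response features" on a flattened activation map. The 0.8 threshold and 0.5 damping factor are illustrative assumptions, not values from the paper, which operates on full feature maps inside the network:

```python
# Hedged sketch: damp activations above a fraction of the peak so that
# less dominant (but still valuable) features carry more relative weight.
def weaken_high_responses(feature_map, ratio=0.8, damp=0.5):
    """feature_map: list of non-negative activations.
    Activations above ratio * max are scaled by damp (illustrative choices)."""
    peak = max(feature_map)
    threshold = ratio * peak
    return [v * damp if v > threshold else v for v in feature_map]

fmap = [0.1, 0.9, 0.4, 1.0, 0.2]
print(weaken_high_responses(fmap))  # the peaks 0.9 and 1.0 are halved
```

After damping, the gap between the strongest responses and the rest shrinks, which is the intuition behind forcing the model to extract more complete person features.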

154 citations

Journal ArticleDOI
TL;DR: A model with joint weak saliency and attention awareness is proposed, which obtains more complete global features by weakening salient features and obtains diversified salient features via attention diversity to improve the performance of the model.

99 citations

Journal ArticleDOI
Ke Gong, Xin Ning1, Hanchao Yu1, Liping Zhang1, Linjun Sun1 
01 Mar 2020
TL;DR: This paper designs the Weak Reverse Attention with Context Aware Network (WRCANet) to remove background noise and suppress the loss of local detailed information as the network deepens.
Abstract: Person re-identification is a difficult topic in computer vision. Some studies argue that current deep learning methods are biased toward capturing the most discriminative features and ignore low-level details; more seriously, they pay too much attention to correlations between the background appearances of person images. This can limit their accuracy or make them needlessly expensive for suboptimal performance. In this paper, we carefully design the Weak Reverse Attention with Context Aware Network (WRCANet). Specifically, by merging a weak reverse attention network and a context aware module, the model can not only remove background noise to extract the main information of persons, but also suppress the loss of local detailed information as the network deepens. We experiment on Market-1501, DukeMTMC-reID and CUHK03, and the results show that our method achieves state-of-the-art performance.
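The "weak reverse attention" idea can be caricatured in a few lines. Everything here is a hypothetical stand-in (the sigmoid gate, the `alpha` parameter, the function name), not WRCANet's actual module, which is a learned network component:

```python
import math

# Illustrative only: down-weight the most salient responses so that
# background-correlated peaks dominate less. alpha < 1 keeps the reversal
# "weak"; both the gate and alpha are assumptions, not the paper's design.
def weak_reverse_attention(saliency, alpha=0.5):
    gates = [1 / (1 + math.exp(-s)) for s in saliency]  # sigmoid gate per cell
    return [1 - alpha * g for g in gates]               # higher saliency -> lower weight

weights = weak_reverse_attention([-2.0, 0.0, 4.0])
# a strongly salient cell (4.0) receives a smaller weight than a weak one (-2.0)
```

A full reversal (`alpha = 1`) would suppress salient regions entirely; a weak reversal merely rebalances them against the rest of the map.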

2 citations

Patent
31 Jul 2020
TL;DR: In this paper, an image classification method is proposed: a significance weakening module is set for the convolution modules of a classification model; the training set is divided into groups and input into the network model; a salient feature region is acquired from the feature maps of each group, enhanced and intercepted, and re-input into the network; this step is repeated until the loss function of the network model no longer decreases or its accuracy reaches a preset value.
Abstract: The invention provides an image classification method. The image classification method comprises the steps of: S1, acquiring a training set and a test set of images; S2, setting a significance weakening module for a convolution module in a classification model to construct a network model; S3, dividing the training set into multiple groups and inputting the multiple groups into the network model, acquiring a feature map when each group of training sets is trained, acquiring a salient feature region according to the feature maps, and enhancing and intercepting the salient feature region; S4, inputting the salient feature region into the network model again, and repeatedly executing step S3 until the loss function of the network model no longer decreases or the accuracy rate of the network model reaches a preset value; and S5, inputting the test set into the network model trained in step S4 to classify the images in the test set. The invention further provides an image classification device and electronic equipment.
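The S3-S4 loop above can be sketched as a stopping-rule skeleton. Here `step_fn` is a hypothetical stand-in for one training pass over a group (including the saliency enhancement, interception, and re-input), since the patent does not publish code:

```python
# Hedged sketch of the S3-S4 training loop: repeat until the loss stops
# decreasing or accuracy reaches a preset value. step_fn(group) is assumed
# to return (loss, accuracy) for that group after the saliency-region pass.
def train_until_stop(step_fn, groups, target_acc=0.95, max_rounds=50):
    prev_loss = float("inf")
    for _ in range(max_rounds):
        results = [step_fn(g) for g in groups]          # S3: group by group
        loss = sum(l for l, _ in results)
        acc = sum(a for _, a in results) / len(results)
        if loss >= prev_loss or acc >= target_acc:      # S4: stopping rule
            break
        prev_loss = loss
    return prev_loss
```

The two-part condition mirrors the claim's wording: training halts on either a non-decreasing loss or a preset accuracy, whichever comes first.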

Cited by
Journal ArticleDOI
01 Sep 2021-Displays
TL;DR: A comprehensive review and classification of the latest deep learning methods for multi-view 3D object recognition is presented, which summarizes the results of these methods on mainstream datasets and puts forward future research directions.

101 citations

Journal ArticleDOI
01 Sep 2021-Displays
TL;DR: A voxel-based three-view hybrid parallel network for 3D shape classification is proposed, which first obtains depth projection views of the three-dimensional model from the front, top and side views, and then outputs a predicted probability value for the category of the 3D model.

64 citations

Journal ArticleDOI
01 Dec 2021-Displays
TL;DR: A quadratic-polynomial-guided fuzzy C-means and dual attention mechanism composite network architecture is proposed to address the high complexity and noise of medical images.

51 citations

Journal ArticleDOI
01 Sep 2021-Displays
TL;DR: The current problems of image inpainting are summarized, the different types of deep-learning-based neural network structures are reviewed, and future development trends and research directions are discussed.

50 citations

Journal ArticleDOI
08 Apr 2021-Entropy
TL;DR: In this paper, a new method that is more suitable for farmland vacancy segmentation is proposed, which uses an improved ResNet network as the backbone of signal transmission, and meanwhile uses data augmentation to improve the performance and robustness of the model.
Abstract: In the research of green vegetation coverage in the field of remote sensing image segmentation, crop planting area is often obtained by semantic segmentation of images taken from high altitude. This method can be used to obtain the rate of cultivated land in a region (such as a country), but it does not reflect the real situation of a particular farmland. Therefore, this paper takes low-altitude images of farmland to build a dataset. After comparing several mainstream semantic segmentation algorithms, a new method that is more suitable for farmland vacancy segmentation is proposed. Additionally, the Strip Pooling module (SPM) and the Mixed Pooling module (MPM), with strip pooling as their core, are designed and fused into the semantic segmentation network structure to better extract the vacancy features. Considering the high cost of manual data annotation, this paper uses an improved ResNet network as the backbone of signal transmission, and meanwhile uses data augmentation to improve the performance and robustness of the model. As a result, the accuracy of the proposed method in the test set is 95.6%, mIoU is 77.6%, and the error rate is 7%. Compared to the existing model, the mIoU value is improved by nearly 4%, reaching the level of practical application.
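For intuition, strip pooling can be reduced to averaging along full rows and full columns and fusing the strips back into the map. This pure-Python sketch omits the convolutions and learned fusion of the actual SPM/MPM modules and simply sums the two strip contexts:

```python
# Hedged sketch of the strip-pooling idea: each cell is replaced by the
# mean of its entire row plus the mean of its entire column, giving it
# long-range context along both axes (simplified vs. the real SPM/MPM).
def strip_pool(fmap):
    """fmap: 2D list (H x W) of floats. Returns the fused strip map."""
    h, w = len(fmap), len(fmap[0])
    row_means = [sum(row) / w for row in fmap]                        # H horizontal strips
    col_means = [sum(fmap[i][j] for i in range(h)) / h for j in range(w)]  # W vertical strips
    return [[row_means[i] + col_means[j] for j in range(w)] for i in range(h)]
```

Because each strip spans a full row or column, elongated structures such as field vacancies influence every cell they cross, which is why strip-shaped pooling suits this segmentation task better than square pooling windows.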

49 citations