
Weifeng Ge

Researcher at University of Hong Kong

Publications: 28
Citations: 1403

Weifeng Ge is an academic researcher at the University of Hong Kong. His research focuses on computer science and object detection. He has an h-index of 9, has co-authored 19 publications, and has received 865 citations. His previous affiliations include Tsinghua University and Fudan University.

Papers
Book Chapter

Deep Metric Learning with Hierarchical Triplet Loss

TL;DR: Proposes a hierarchical triplet loss (HTL) that automatically collects informative training samples via a defined hierarchical tree encoding global context information, which allows the model to learn more discriminative features from visually similar classes, leading to faster convergence and better performance.
Proceedings Article

Weakly Supervised Complementary Parts Models for Fine-Grained Image Classification From the Bottom Up

TL;DR: Builds complementary parts models in a weakly supervised manner to retrieve information suppressed by the dominant object parts detected by convolutional neural networks, and uses a bi-directional long short-term memory (LSTM) network to fuse and encode the partial information from these complementary parts into a comprehensive feature for image classification.
Proceedings Article

Multi-evidence Filtering and Fusion for Multi-label Classification, Object Detection and Semantic Segmentation Based on Weakly Supervised Learning

TL;DR: Proposes a weakly supervised curriculum learning pipeline for multi-label object recognition, detection, and semantic segmentation, consisting of four stages: object localization, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training.
Proceedings Article

Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-Tuning

TL;DR: Introduces a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data; it can improve classification accuracy by 2%–10% using a single model.
Posted Content

Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning

TL;DR: Proposes a source-target selective joint fine-tuning scheme to improve the performance of deep learning tasks with insufficient training data, in which a target learning task is carried out simultaneously with a source learning task that has abundant training data.