Hangyu Lin
Researcher at Fudan University
Publications - 10
Citations - 100
Hangyu Lin is an academic researcher from Fudan University. The author has contributed to research in topics: Sketch recognition & Image retrieval. The author has an h-index of 3 and has co-authored 9 publications receiving 27 citations.
Papers
Proceedings ArticleDOI
TC-Net for iSBIR: Triplet Classification Network for Instance-level Sketch Based Image Retrieval
TL;DR: A Triplet Classification Network (TC-Net) for iSBIR is presented, composed of two major components: a triplet Siamese network and an auxiliary classification loss, which together overcome limitations of previous works.
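The joint objective described above can be sketched as a triplet ranking term plus a weighted auxiliary classification term. This is a minimal pure-Python illustration of that loss structure; the margin, the weighting, and the helper names are assumptions for illustration, not the paper's exact formulation.

```python
import math

def l2_dist(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the matching photo closer to the sketch anchor than the
    # non-matching photo, by at least `margin`.
    return max(0.0, l2_dist(anchor, positive) - l2_dist(anchor, negative) + margin)

def cross_entropy(logits, label):
    # Auxiliary classification loss on the shared embedding
    # (numerically stable log-sum-exp).
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

def tc_net_style_loss(anchor, positive, negative, logits, label,
                      margin=0.2, aux_weight=1.0):
    # Joint objective: triplet term + weighted auxiliary classification term.
    return (triplet_loss(anchor, positive, negative, margin)
            + aux_weight * cross_entropy(logits, label))
```

With a well-separated triplet the ranking term vanishes and only the classification term remains, which is what lets the auxiliary loss keep shaping the embedding space even after the margin constraint is satisfied.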
Proceedings ArticleDOI
Sketch-BERT: Learning Sketch Bidirectional Encoder Representation From Transformers by Self-Supervised Learning of Sketch Gestalt
TL;DR: This work presents Sketch-BERT, a model that learns a Sketch Bidirectional Encoder Representation from Transformers, generalizing BERT to the sketch domain with novel components and pre-training algorithms, including newly designed sketch embedding networks and self-supervised learning of sketch gestalt.
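The self-supervised gestalt task is analogous to BERT's masked language modelling: hide part of a sketch and train the model to reconstruct it. A minimal sketch of that masking step, assuming a stroke-sequence encoding of `(dx, dy, pen_state)` triples (a common convention, not necessarily the paper's exact input format):

```python
import random

def mask_sketch(points, mask_ratio=0.15, seed=0):
    # Hide a fraction of a sketch's points; the model would be trained
    # to reconstruct the hidden points (the "sketch gestalt" task).
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, p in enumerate(points):
        if rng.random() < mask_ratio:
            targets[i] = p                # ground truth to reconstruct
            masked.append((0.0, 0.0, 0))  # mask-token placeholder
        else:
            masked.append(p)
    return masked, targets
```

The pre-training loss would then compare the model's predictions at the masked positions against `targets`; unmasked points pass through unchanged.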
Posted Content
Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from Transformers by Self-supervised Learning of Sketch Gestalt.
TL;DR: In this article, Sketch-BERT, a model learning a Sketch Bidirectional Encoder Representation from Transformers, is proposed to improve performance on the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
Proceedings ArticleDOI
Domain-Aware SE Network for Sketch-based Image Retrieval with Multiplicative Euclidean Margin Softmax
TL;DR: A Domain-Aware Squeeze-and-Excitation (DASE) network is proposed, which seamlessly incorporates prior knowledge of whether a sample is a sketch or a photo into the SE module, making the module capable of emphasizing appropriate channels according to the domain signal.
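The mechanism can be illustrated with a toy domain-conditioned SE gate: squeeze each channel to its global average, then re-weight channels with a gate whose parameters depend on the input domain. The per-channel scalar weights below are a deliberate simplification of the paper's excitation sub-network, used only to show how the domain signal selects the gating.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def domain_aware_se(channels, weights, domain):
    # `channels`: list of per-channel feature lists.
    # `weights`: maps each domain ('sketch' / 'photo') to one scalar
    # gate weight per channel -- a stand-in for the excitation MLP.
    squeezed = [sum(c) / len(c) for c in channels]   # global average pool
    gates = [sigmoid(w * s) for w, s in zip(weights[domain], squeezed)]
    # Scale each channel by its domain-conditioned gate.
    return [[v * g for v in c] for c, g in zip(channels, gates)]
```

Feeding the same features through with `domain='sketch'` versus `domain='photo'` yields different channel emphasis, which is the point of making the SE module domain-aware.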
Book ChapterDOI
Self-supervised Learning of Orc-Bert Augmentor for Recognizing Few-Shot Oracle Characters
TL;DR: This paper proposes a novel data augmentation approach for few-shot oracle character recognition, named Orc-Bert Augmentor and pre-trained by self-supervised learning, which leverages a self-supervised BERT model pre-trained on large unlabeled Chinese character datasets to generate sample-wise augmented samples.
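Sample-wise augmentation of this kind produces several variants of each scarce training glyph. In the paper the augmentor masks stroke points and lets the pretrained model predict plausible replacements; the Gaussian point jitter below is a simplified stand-in used only to show the sample-wise shape of the pipeline.

```python
import random

def point_jitter_augment(points, n_aug=3, sigma=0.05, seed=0):
    # Generate `n_aug` variants of one stroke sequence by perturbing
    # point coordinates; pen states are left untouched.
    rng = random.Random(seed)
    augmented = []
    for _ in range(n_aug):
        augmented.append([(x + rng.gauss(0, sigma),
                           y + rng.gauss(0, sigma), s)
                          for (x, y, s) in points])
    return augmented
```

Each augmented variant would be added to the few-shot training set under the original character's label.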