Xintong Yan

Researcher at Hunan University

Publications: 5
Citations: 256

Xintong Yan is an academic researcher from Hunan University. The author has contributed to research on topics including Feature (computer vision) and Computer science, has an h-index of 3, and has co-authored 5 publications receiving 69 citations.

Papers
Journal Article

Efficient skin lesion segmentation using separable-Unet with stochastic weight averaging.

TL;DR: The proposed Separable-Unet framework takes advantage of the separable convolutional block and the U-Net architecture to enhance the pixel-level discriminative representation capability of fully convolutional networks (FCNs).
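The two ingredients named here, separable convolutions inside a U-Net-style segmentation network and stochastic weight averaging (SWA) over late training epochs, can be illustrated with a minimal PyTorch sketch. The block structure, layer sizes, and averaging schedule below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR

class SeparableConvBlock(nn.Module):
    """Depthwise conv followed by a pointwise (1x1) conv, i.e. a separable convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Stochastic weight averaging: average weights collected late in training,
# then use the averaged model for inference.
model = nn.Sequential(SeparableConvBlock(3, 16), nn.Conv2d(16, 1, kernel_size=1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
swa_model = AveragedModel(model)
swa_scheduler = SWALR(optimizer, swa_lr=0.005)

for epoch in range(20):
    # ... run one training epoch over the segmentation data here ...
    if epoch >= 15:                       # start averaging near the end of training
        swa_model.update_parameters(model)
        swa_scheduler.step()
```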
Journal Article

GP-CNN-DTEL: Global-Part CNN Model With Data-Transformed Ensemble Learning for Skin Lesion Classification

TL;DR: A Global-Part Convolutional Neural Network (GP-CNN) model that treats fine-grained local information and global context information with equal importance, combined with a data-transformed ensemble learning strategy that boosts classification performance by integrating the different discriminant information from GP-CNNs trained with original images, color-constancy-transformed images, and feature-saliency-transformed images.
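The ensemble step can be pictured as averaging class probabilities from classifiers trained on the original, color-constancy-transformed, and feature-saliency-transformed inputs. The sketch below is a hypothetical PyTorch illustration of that decision averaging; the model and transform objects are assumed to be supplied by the caller.

```python
import torch

def ensemble_predict(models, transforms, image):
    """Average softmax outputs of classifiers, each paired with its own input transform."""
    probs = []
    for model, transform in zip(models, transforms):
        model.eval()
        with torch.no_grad():
            logits = model(transform(image))
            probs.append(torch.softmax(logits, dim=1))
    return torch.stack(probs).mean(dim=0)   # fused class probabilities
```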
Journal Article

AFLN-DGCL: Adaptive Feature Learning Network with Difficulty-Guided Curriculum Learning for skin lesion segmentation

TL;DR: An ensemble learning method is introduced to build a fusion model, enabling the AFLN model to capture multi-scale information, and a Selecting-The-Biggest-Connected-Region (STBCR) step is proposed to alleviate the over-segmentation problem of the fusion model.
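A Selecting-The-Biggest-Connected-Region step amounts to keeping only the largest connected foreground component of a binary prediction mask. The following is a minimal sketch using scikit-image; the function name is chosen here for illustration and is not taken from the paper.

```python
import numpy as np
from skimage.measure import label

def keep_biggest_region(binary_mask):
    """Keep only the largest connected foreground component of a binary mask."""
    labeled = label(binary_mask)
    if labeled.max() == 0:                # no foreground predicted at all
        return binary_mask
    sizes = np.bincount(labeled.ravel())
    sizes[0] = 0                          # ignore the background label
    return (labeled == sizes.argmax()).astype(binary_mask.dtype)
```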
Journal Article

Multi-proportion channel ensemble model for retinal vessel segmentation.

TL;DR: The segmentation results showed that the proposed algorithm based on the MPC-EM with simple submodels can achieve state-of-the-art accuracy with reduced computational complexity.
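Assuming "multi-proportion channel" means that each submodel sees the RGB channels mixed with a different set of weights, the ensemble can be sketched as below. The proportions, mixing scheme, and simple averaging are illustrative assumptions rather than the paper's actual design.

```python
import torch

# Illustrative channel proportions (red, green, blue) for three submodels.
channel_proportions = [(0.1, 0.8, 0.1), (0.0, 1.0, 0.0), (0.3, 0.4, 0.3)]

def mpc_ensemble_predict(submodels, rgb_batch):
    """Average vessel-probability maps from submodels fed channel-mixed inputs (NCHW)."""
    outputs = []
    for model, (wr, wg, wb) in zip(submodels, channel_proportions):
        mixed = wr * rgb_batch[:, 0:1] + wg * rgb_batch[:, 1:2] + wb * rgb_batch[:, 2:3]
        with torch.no_grad():
            outputs.append(torch.sigmoid(model(mixed)))
    return torch.stack(outputs).mean(dim=0)
```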
Journal Article

FusionM4Net: A multi-stage multi-modal learning algorithm for multi-label skin lesion classification.

TL;DR: FusionM4Net proposes a two-stage multi-modal learning algorithm for multi-label skin disease classification: a fusion network (FusionNet) exploits and integrates the representations of clinical and dermoscopy images at the feature level, and a Fusion Scheme 1 then conducts the information fusion at the decision level.
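The general two-level fusion idea, concatenating per-modality backbone features (feature level) and then combining branch probabilities (decision level), can be sketched as follows. The module names, fusion head, and weighting are hypothetical and are not taken from the FusionM4Net code.

```python
import torch
import torch.nn as nn

class TwoModalFusion(nn.Module):
    """Concatenate per-modality features and classify from the fused vector."""
    def __init__(self, clinical_backbone, dermoscopy_backbone, feat_dim, num_labels):
        super().__init__()
        self.clinical = clinical_backbone       # backbone returning an (N, feat_dim) vector
        self.dermoscopy = dermoscopy_backbone
        self.fused_head = nn.Linear(2 * feat_dim, num_labels)

    def forward(self, clinical_img, dermoscopy_img):
        f_c = self.clinical(clinical_img)
        f_d = self.dermoscopy(dermoscopy_img)
        return self.fused_head(torch.cat([f_c, f_d], dim=1))   # feature-level fusion

def decision_fusion(prob_list, weights=None):
    """Weighted average of per-branch probabilities (decision-level fusion)."""
    probs = torch.stack(prob_list)
    if weights is None:
        return probs.mean(dim=0)
    w = torch.tensor(weights, dtype=probs.dtype).view(-1, *([1] * (probs.dim() - 1)))
    return (w * probs).sum(dim=0) / w.sum()
```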