
Fuhua Yan

Researcher at Shanghai Jiao Tong University

Publications -  5
Citations -  757

Fuhua Yan is an academic researcher from Shanghai Jiao Tong University. The author has contributed to research in the topics of Overfitting and Population, has an h-index of 4, and has co-authored 5 publications receiving 379 citations.

Papers
Journal ArticleDOI

Dual-Sampling Attention Network for Diagnosis of COVID-19 From Community Acquired Pneumonia

TL;DR: Wang et al. developed a dual-sampling attention network to automatically distinguish COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT), and proposed a novel online attention module with a 3D convolutional neural network (CNN) to focus on the infection regions in the lungs when making diagnostic decisions.
Journal ArticleDOI

Diagnosis of Coronavirus Disease 2019 (COVID-19) With Structured Latent Multi-View Representation Learning

TL;DR: In this article, a unified latent representation is learned that fully encodes information from different types of features and is endowed with a promising class structure for separability. Completeness is guaranteed by a group of backward neural networks (one per feature type), while class labels enforce the representation to be compact within the COVID-19 and community-acquired pneumonia (CAP) classes.
Posted Content

Dual-Sampling Attention Network for Diagnosis of COVID-19 from Community Acquired Pneumonia

TL;DR: A dual-sampling attention network to automatically distinguish COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT), with a novel online attention module built on a 3D convolutional neural network (CNN) that focuses on the infection regions in the lungs when making diagnostic decisions.
Journal ArticleDOI

Diagnosis of Coronavirus Disease 2019 (COVID-19) with Structured Latent Multi-View Representation Learning

TL;DR: This study proposes to diagnose COVID-19 using a series of features extracted from CT images, and shows that the proposed method outperforms all comparison methods, with stable performance observed when varying the amount of training data.