Author
Yan Li
Bio: Yan Li is an academic researcher from Shenzhen University, contributing to research on feature learning and deep learning. The author has an h-index of 8 and has co-authored 10 publications receiving 349 citations.
Papers
TL;DR: Experimental results indicate that MM-SDPN is superior to state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.
Abstract: The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial network (DPN) is a recently proposed deep learning algorithm that performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, which consists of two-stage SDPNs, is proposed to fuse and learn feature representations from multimodal neuroimaging data for AD diagnosis. Specifically, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse the multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset for both binary and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior to state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.
315 citations
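The two-stage construction described above, one SDPN per modality followed by a fusion SDPN, can be sketched as a NumPy toy. This is an illustration only: the polynomial layers below use fixed random projections (a real DPN learns its weights), and all dimensions and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dpn_layer(X, out_dim, rng):
    """Toy polynomial-network layer: random linear projections plus
    pairwise products of two projections (degree-2 terms).
    A real DPN learns these projections; this sketch fixes them randomly."""
    d = X.shape[1]
    W1 = rng.standard_normal((d, out_dim))
    W2 = rng.standard_normal((d, out_dim))
    linear = X @ W1
    quadratic = (X @ W1) * (X @ W2)   # degree-2 interaction terms
    return np.tanh(np.hstack([linear, quadratic]))

def stacked_dpn(X, dims, rng):
    for d in dims:
        X = dpn_layer(X, d, rng)
    return X

# Hypothetical multimodal fusion: one SDPN per modality, then a fusion SDPN.
mri = rng.standard_normal((4, 90))    # e.g. 90 ROI features from MRI
pet = rng.standard_normal((4, 90))    # matching PET features
h_mri = stacked_dpn(mri, [32, 16], rng)
h_pet = stacked_dpn(pet, [32, 16], rng)
fused = stacked_dpn(np.hstack([h_mri, h_pet]), [16], rng)
print(fused.shape)   # (4, 32)
```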
TL;DR: The experimental results on three color histopathological image datasets show that the proposed C-RBH-PCANet algorithm is superior to the original PCANet and other conventional unsupervised deep learning algorithms, while the best performance is achieved by the proposed feature learning and classification framework that combines C-RBH-PCANet and the matrix-form classifier.
Abstract: The computer-aided diagnosis for histopathological images has attracted considerable attention. Principal component analysis network (PCANet) is a novel deep learning algorithm for feature learning with the simple network architecture and parameters. In this study, a color pattern random binary hashing-based PCANet (C-RBH-PCANet) algorithm is proposed to learn an effective feature representation from color histopathological images. The color norm pattern and angular pattern are extracted from the principal component images of R, G, and B color channels after cascaded PCA networks. The random binary encoding is then performed on both color norm pattern images and angular pattern images to generate multiple binary images. Moreover, we rearrange the pooled local histogram features by spatial pyramid pooling to a matrix-form for reducing the dimension of feature and preserving spatial information. Therefore, a C-RBH-PCANet and matrix-form classifier-based feature learning and classification framework is proposed for diagnosis of color histopathological images. The experimental results on three color histopathological image datasets show that the proposed C-RBH-PCANet algorithm is superior to the original PCANet and other conventional unsupervised deep learning algorithms, while the best performance is achieved by the proposed feature learning and classification framework that combines C-RBH-PCANet and matrix-form classifier.
69 citations
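The random-binary-hashing step can be illustrated with a minimal NumPy sketch: real-valued pattern maps are thresholded against random hyperplanes, the resulting bits are packed into one integer code per pixel, and the codes are summarized as a normalized histogram. All sizes and scoring details here are hypothetical stand-ins, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_binary_hash(patterns, n_bits, rng):
    """Toy random binary hashing over pattern images (e.g. color norm /
    angular pattern maps): threshold random projections, pack the bits
    into per-pixel codes, and return a normalized code histogram."""
    n_maps, H, W = patterns.shape
    w = rng.standard_normal((n_bits, n_maps))        # one random projection per bit
    b = rng.standard_normal(n_bits)
    responses = np.tensordot(w, patterns, axes=([1], [0])) + b[:, None, None]
    bits = (responses > 0).astype(np.int64)          # (n_bits, H, W) binary images
    codes = np.zeros((H, W), dtype=np.int64)
    for i in range(n_bits):                          # pack bits into decimal codes
        codes += bits[i] << i
    hist, _ = np.histogram(codes, bins=2**n_bits, range=(0, 2**n_bits))
    return hist / hist.sum()                         # normalized histogram feature

patterns = rng.standard_normal((3, 8, 8))            # e.g. maps from R/G/B channels
feat = random_binary_hash(patterns, n_bits=4, rng=rng)
print(feat.shape)                                    # (16,)
```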
TL;DR: The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR.
30 citations
01 Apr 2017
TL;DR: The experimental results demonstrate that the proposed RBM+ works well as an LUPI algorithm for feature learning, and the ensemble LUPI algorithm is superior to the traditional predictive models for MRI-based AD diagnosis using positron emission tomography as the privileged information.
Abstract: In clinical practice, the magnetic resonance imaging (MRI) is a prevalent neuroimaging technique for Alzheimer's disease (AD) diagnosis. As a learning using privileged information (LUPI) algorithm, SVM+ has shown its effectiveness on the classification of brain disorders, with single-modal neuroimaging samples for testing but multimodal neuroimaging samples for training. In this work, we propose to apply the multimodal restricted Boltzmann machines (RBM) as an LUPI algorithm for feature learning so as to form an RBM+ algorithm. Furthermore, an ensemble LUPI algorithm is developed, integrating SVM+ and RBM+ by the multiple kernel boosting based strategy. The experimental results demonstrate that the proposed RBM+ works well as an LUPI algorithm for feature learning, and the ensemble LUPI algorithm is superior to the traditional predictive models for the MRI-based AD diagnosis using the positron emission tomography as the privileged information.
25 citations
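The privileged-information setting itself can be illustrated with a much simpler stand-in than RBM+: learn a mapping from the test-time modality (MRI) to the privileged one (PET) on the training set, then augment test samples with the predicted privileged view. This least-squares sketch only illustrates the LUPI idea, with synthetic data and hypothetical dimensions; it is not the RBM+ or SVM+ algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Privileged-information setting: PET features exist only at training time.
n_train, d_mri, d_pet = 50, 20, 10
mri_tr = rng.standard_normal((n_train, d_mri))
true_map = rng.standard_normal((d_mri, d_pet))       # synthetic ground truth
pet_tr = mri_tr @ true_map + 0.1 * rng.standard_normal((n_train, d_pet))

# Learn a ridge-regression mapping MRI -> PET on the training set only.
lam = 1e-2
A = np.linalg.solve(mri_tr.T @ mri_tr + lam * np.eye(d_mri), mri_tr.T @ pet_tr)

# At test time only MRI is available; augment it with the *predicted* PET view.
mri_te = rng.standard_normal((5, d_mri))
augmented = np.hstack([mri_te, mri_te @ A])
print(augmented.shape)   # (5, 30)
```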
13 Apr 2016
TL;DR: A stacked DPN (S-DPN) algorithm is proposed to further improve feature representation, along with a multi-modality S-DPN (MM-S-DPN) algorithm to fuse multi-modality neuroimaging data and learn a more discriminative and robust feature representation for AD classification.
Abstract: Feature representation is the critical factor for computer-aided Alzheimer's disease (AD) diagnosis. The deep polynomial network (DPN) is a novel deep learning algorithm that can effectively learn feature representations from small samples. In this work, a stacked DPN (S-DPN) algorithm is proposed to further improve feature representation. We then propose a multi-modality S-DPN (MM-S-DPN) algorithm to fuse multi-modality neuroimaging data and learn more discriminative and robust feature representations for AD classification. Experiments are performed on the ADNI dataset with MRI and PET images as multi-modality data. The results indicate that S-DPN is superior to DPN and stacked auto-encoder algorithms. Moreover, MM-S-DPN achieves the best performance compared with single-modality S-DPN and other multi-modality feature-learning-based algorithms.
20 citations
Cited by
TL;DR: This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year, surveying the use of deep learning for image classification, object detection, segmentation, registration, and other tasks.
8,730 citations
TL;DR: This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data and compares the performances of DL techniques when applied to different data sets across various application domains.
Abstract: Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)–machine interfaces. These have generated novel opportunities for the development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promises to revolutionize the future of artificial intelligence. The growth in computational power, accompanied by faster and increased data storage and declining computing costs, has already allowed scientists in various fields to apply these techniques to data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
622 citations
TL;DR: The open-source framework for classification of AD using CNNs and T1-weighted MRI is extended, and it is found that more than half of the surveyed papers may have suffered from data leakage and thus reported biased performance.
346 citations
TL;DR: A hierarchical fully convolutional network (H-FCN) is proposed to automatically identify discriminative local patches and regions in the whole brain sMRI, upon which multi-scale feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis.
Abstract: Structural magnetic resonance imaging (sMRI) has been widely used for computer-aided diagnosis of neurodegenerative disorders, e.g., Alzheimer's disease (AD), due to its sensitivity to morphological changes caused by brain atrophy. Recently, a few deep learning methods (e.g., convolutional neural networks, CNNs) have been proposed to learn task-oriented features from sMRI for AD diagnosis, and have achieved superior performance compared with conventional learning-based methods using hand-crafted features. However, these existing CNN-based methods still require the pre-determination of informative locations in sMRI. That is, the stage of discriminative atrophy localization is isolated from the latter stages of feature extraction and classifier construction. In this paper, we propose a hierarchical fully convolutional network (H-FCN) to automatically identify discriminative local patches and regions in the whole-brain sMRI, upon which multi-scale feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis. Our proposed H-FCN method was evaluated on a large cohort of subjects from two independent datasets (i.e., ADNI-1 and ADNI-2), demonstrating good performance on joint discriminative atrophy localization and brain disease diagnosis.
311 citations
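The patch-to-region-to-image hierarchy can be sketched in miniature with NumPy: score sliding patches, pool patch scores into regions, then pool regions into an image-level score. The scoring function and pooling choices below are placeholders for illustration, not the H-FCN's learned components.

```python
import numpy as np

rng = np.random.default_rng(3)

def patch_scores(image, patch, score_fn):
    """Toy hierarchical scoring: slide non-overlapping patches over the
    image and score each one with score_fn."""
    H, W = image.shape
    return np.array([[score_fn(image[i:i+patch, j:j+patch])
                      for j in range(0, W - patch + 1, patch)]
                     for i in range(0, H - patch + 1, patch)])

img = rng.standard_normal((16, 16))                     # stand-in for an sMRI slice
s_patch = patch_scores(img, patch=4, score_fn=np.std)   # 4x4 grid of patch scores
# Region level: max-pool 2x2 blocks of patch scores; image level: mean of regions.
s_region = s_patch.reshape(2, 2, 2, 2).max(axis=(1, 3))
s_image = s_region.mean()
print(s_patch.shape, s_region.shape)   # (4, 4) (2, 2)
```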
18 Jun 2018
TL;DR: In this article, a unified approach that simultaneously performs disease identification and localization through the same underlying model for all images is presented, which can effectively leverage both class information and limited location annotations.
Abstract: Accurate identification and localization of abnormalities from radiology images play an integral part in clinical diagnosis and treatment planning. Building a highly accurate prediction model for these tasks usually requires a large number of images manually annotated with labels and finding sites of abnormalities. In reality, however, such annotated data are expensive to acquire, especially the ones with location annotations. We need methods that can work well with only a small amount of location annotations. To address this challenge, we present a unified approach that simultaneously performs disease identification and localization through the same underlying model for all images. We demonstrate that our approach can effectively leverage both class information as well as limited location annotation, and significantly outperforms the comparative reference baseline in both classification and localization tasks.
275 citations
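One common way such limited-supervision identification/localization models combine patch-level predictions into an image-level one is noisy-OR pooling over a patch grid. The sketch below is a generic illustration of that pooling rule with made-up probabilities, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def noisy_or(patch_probs):
    """Image-level probability under noisy-OR pooling: the image is
    'diseased' if at least one patch is, assuming patch independence."""
    return 1.0 - np.prod(1.0 - patch_probs)

# Hypothetical 2x2 grid of per-patch disease probabilities,
# with one highly suspicious patch that should dominate the image score.
probs = np.array([[0.01, 0.02],
                  [0.90, 0.05]])
p_image = noisy_or(probs)
print(round(p_image, 4))   # 0.9078
```

The suspicious patch both drives the image-level decision and serves as the localization, which is how class labels and sparse location annotations can supervise the same underlying model.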