Author

Chunfeng Lian

Bio: Chunfeng Lian is an academic researcher from the University of North Carolina at Chapel Hill. The author has contributed to research in topics: Segmentation & Computer science. The author has an h-index of 16 and has co-authored 92 publications receiving 1038 citations. Previous affiliations of Chunfeng Lian include Xi'an Jiaotong University and the Institut national des sciences appliquées de Rouen.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: A hierarchical fully convolutional network (H-FCN) is proposed to automatically identify discriminative local patches and regions in the whole brain sMRI, upon which multi-scale feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis.
Abstract: Structural magnetic resonance imaging (sMRI) has been widely used for computer-aided diagnosis of neurodegenerative disorders, e.g., Alzheimer's disease (AD), due to its sensitivity to morphological changes caused by brain atrophy. Recently, a few deep learning methods (e.g., convolutional neural networks, CNNs) have been proposed to learn task-oriented features from sMRI for AD diagnosis, and have achieved superior performance over conventional learning-based methods that use hand-crafted features. However, these existing CNN-based methods still require the pre-determination of informative locations in sMRI; that is, the stage of discriminative atrophy localization is isolated from the subsequent stages of feature extraction and classifier construction. In this paper, we propose a hierarchical fully convolutional network (H-FCN) to automatically identify discriminative local patches and regions in the whole-brain sMRI, upon which multi-scale feature representations are then jointly learned and fused to construct hierarchical classification models for AD diagnosis. Our proposed H-FCN method was evaluated on a large cohort of subjects from two independent datasets (i.e., ADNI-1 and ADNI-2), demonstrating good performance on joint discriminative atrophy localization and brain disease diagnosis.
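To make the hierarchical patch-to-subject idea concrete, the following is a minimal PyTorch sketch of multi-scale feature fusion for sMRI classification. The layer sizes, patch count, and module names are illustrative assumptions, not the authors' actual H-FCN architecture, and the paper's region-level branch and adaptive patch pruning are omitted.

import torch
import torch.nn as nn

class PatchNet(nn.Module):
    # Small 3D CNN applied to every candidate sMRI patch.
    def __init__(self, feat_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
    def forward(self, x):                # x: (B*P, 1, D, H, W)
        return self.conv(x).flatten(1)   # (B*P, feat_dim)

class HierarchicalClassifier(nn.Module):
    # Patch-level and subject-level predictions from shared patch features.
    def __init__(self, n_patches=27, feat_dim=32, n_classes=2):
        super().__init__()
        self.patch_net = PatchNet(feat_dim)
        self.patch_head = nn.Linear(feat_dim, n_classes)                 # local (patch) scores
        self.subject_head = nn.Linear(n_patches * feat_dim, n_classes)   # fused (subject) scores
    def forward(self, patches):          # patches: (B, P, 1, D, H, W)
        b, p = patches.shape[:2]
        feats = self.patch_net(patches.flatten(0, 1))          # (B*P, feat_dim)
        patch_logits = self.patch_head(feats).view(b, p, -1)   # per-patch predictions
        subject_logits = self.subject_head(feats.view(b, -1))  # subject-level fusion
        return patch_logits, subject_logits

# Toy forward pass: 2 subjects, 27 patches of 24x24x24 voxels each.
model = HierarchicalClassifier()
patch_logits, subject_logits = model(torch.randn(2, 27, 1, 24, 24, 24))
print(patch_logits.shape, subject_logits.shape)   # (2, 27, 2) and (2, 2)

In a full training setup, losses on both output levels would be combined so that patch-level localization and subject-level diagnosis are learned jointly, which is the core idea the abstract describes.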

311 citations

Book ChapterDOI
16 Sep 2018
TL;DR: Experimental results on subjects from ADNI demonstrate that the PET images synthesized with 3D-cGAN are reasonable and that the two-stage deep learning method outperforms state-of-the-art methods in AD diagnosis.
Abstract: Multi-modal neuroimages (e.g., MRI and PET) have been widely used for diagnosis of brain diseases such as Alzheimer's disease (AD) by providing complementary information. However, missing data are unavoidable in practice; for example, PET data are missing for many subjects in the ADNI dataset. A straightforward strategy to tackle this challenge is to simply discard subjects with missing PET, but this significantly reduces the number of training subjects available for learning reliable diagnostic models. On the other hand, since the different modalities (i.e., MRI and PET) were acquired from the same subject, there often exists an underlying relationship between them. Accordingly, we propose a two-stage deep learning framework for AD diagnosis using both MRI and PET data. Specifically, in the first stage, we impute missing PET data from the corresponding MRI data by using 3D Cycle-consistent Generative Adversarial Networks (3D-cGAN) to capture their underlying relationship. In the second stage, with the complete MRI and PET (i.e., after imputation in the case of missing PET), we develop a deep multi-instance neural network for AD diagnosis and for mild cognitive impairment (MCI) conversion prediction. Experimental results on subjects from ADNI demonstrate that our synthesized PET images with 3D-cGAN are reasonable and that our two-stage deep learning method outperforms state-of-the-art methods in AD diagnosis.
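As a rough illustration of stage 1, the sketch below shows the cycle-consistency term that lets an MRI-to-PET generator be trained together with a PET-to-MRI generator. The tiny generators, tensor sizes, and loss weighting are illustrative assumptions, and the adversarial (discriminator) losses of the full 3D-cGAN are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_generator():
    # Stand-in 3D translator network (real 3D-cGAN generators are far deeper).
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 1, 3, padding=1),
    )

G_mri2pet, G_pet2mri = tiny_generator(), tiny_generator()

mri = torch.randn(4, 1, 32, 32, 32)   # toy MRI volumes
pet = torch.randn(4, 1, 32, 32, 32)   # toy PET volumes (available only for complete subjects)

fake_pet = G_mri2pet(mri)
fake_mri = G_pet2mri(pet)

# Cycle consistency: translating to the other modality and back should
# reconstruct the original volume.
cycle_loss = F.l1_loss(G_pet2mri(fake_pet), mri) + F.l1_loss(G_mri2pet(fake_mri), pet)
print(float(cycle_loss))

# At test time, a subject with missing PET would simply use G_mri2pet(mri)
# as the imputed PET volume before entering the stage-2 diagnostic network.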

132 citations

Journal ArticleDOI
TL;DR: This study proposes a novel deep-learning-based CAD system, guided by task-specific prior knowledge, for automated nodule detection and classification in ultrasound images, and demonstrates that the proposed method is effective in the discrimination of thyroid nodules.

115 citations

Journal ArticleDOI
TL;DR: A novel fully convolutional neural network that requires no hand-crafted features or predefined ROIs is proposed for efficient segmentation of PVSs, and the experimental results show its superior performance compared with several state-of-the-art methods.

92 citations

Journal ArticleDOI
TL;DR: A co-clustering algorithm is proposed to concurrently segment 3D tumors in PET-CT images, considering that the two complementary imaging modalities can combine functional and anatomical information to improve segmentation performance.
Abstract: Precise delineation of the target tumor is a key factor in ensuring the effectiveness of radiation therapy. While hybrid positron emission tomography-computed tomography (PET-CT) has become a standard imaging tool in the practice of radiation oncology, many existing automatic/semi-automatic methods still perform tumor segmentation on mono-modal images. In this paper, a co-clustering algorithm is proposed to concurrently segment 3D tumors in PET-CT images, considering that the two complementary imaging modalities can combine functional and anatomical information to improve segmentation performance. The theory of belief functions is adopted in the proposed method to model, fuse, and reason with uncertain and imprecise knowledge from noisy and blurry PET-CT images. To ensure reliable segmentation within each modality, the distance metric used to quantify clustering distortions and spatial smoothness is iteratively adapted during the clustering procedure. To encourage consistent segmentation between the two modalities, a specific context term is included in the clustering objective function. Moreover, during the iterative optimization process, the clustering results for the two distinct modalities are further adjusted via a belief-functions-based information fusion strategy. The proposed method has been evaluated on a data set consisting of 21 paired PET-CT images from non-small cell lung cancer patients. The quantitative and qualitative evaluations show that our proposed method performs well compared with the state-of-the-art methods.
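The sketch below illustrates, in heavily simplified form, the co-clustering idea of coupling the two modalities through a consistency ("context") penalty. It alternates hard cluster assignments over toy paired PET/CT intensity samples; the belief-function modelling, adaptive distance metric, and spatial smoothness terms of the actual method are omitted, and lambda_ctx and the synthetic data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
pet = np.concatenate([rng.normal(1.0, 0.3, n // 2), rng.normal(4.0, 0.5, n // 2)])
ct  = np.concatenate([rng.normal(40., 5.0, n // 2), rng.normal(70., 8.0, n // 2)])

def co_cluster(pet, ct, k=2, lambda_ctx=0.5, iters=20):
    lab_p = (pet > np.median(pet)).astype(int)   # crude initialisation per modality
    lab_c = (ct > np.median(ct)).astype(int)
    for _ in range(iters):
        cen_p = np.array([pet[lab_p == j].mean() for j in range(k)])
        cen_c = np.array([ct[lab_c == j].mean() for j in range(k)])
        # squared distance to each centre, normalised per modality
        d_p = (pet[:, None] - cen_p[None, :]) ** 2 / pet.var()
        d_c = (ct[:, None] - cen_c[None, :]) ** 2 / ct.var()
        # context term: penalise labels that disagree with the other modality
        dis_p = (np.arange(k)[None, :] != lab_c[:, None]).astype(float)
        dis_c = (np.arange(k)[None, :] != lab_p[:, None]).astype(float)
        lab_p = np.argmin(d_p + lambda_ctx * dis_p, axis=1)
        lab_c = np.argmin(d_c + lambda_ctx * dis_c, axis=1)
    return lab_p, lab_c

lab_p, lab_c = co_cluster(pet, ct)
print("label agreement between modalities:", (lab_p == lab_c).mean())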

79 citations


Cited by
Journal ArticleDOI
TL;DR: A review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.

1,053 citations

01 Jan 2009
TL;DR: The aim of the research presented in this thesis is to create new methods for design for manufacturing using several KE approaches, and to identify the beneficial and less beneficial aspects of these methods in comparison with each other and with earlier research.
Abstract: As companies strive to develop artefacts intended for services instead of traditional sell-off, new challenges arise in the product development process to promote continuous improvement and increase market profits. This creates a focus on product life-cycle components, as companies then make life-cycle commitments in which they are responsible for function availability over the extent of the life-cycle, i.e., functional products. One of these life-cycle components is manufacturing; therefore, companies search for new approaches to succeed with manufacturability evaluation already in engineering design. Efforts have been made to support early engineering design, as this phase sets constraints and opportunities for manufacturing. These efforts have turned into design for manufacturing methods and guidelines. A further step to improve the life-cycle focus during early engineering design is to reuse results and experience from earlier projects. However, because results and experiences created during project work are often not documented for reuse, and are remembered only by some people, there is a need for design support. Knowledge engineering (KE) is a methodology for creating knowledge-based systems, i.e., systems that enable reuse of earlier results and make both explicit and tacit corporate knowledge available, enabling the automated generation and evaluation of new engineering design solutions during early product development. There is a variety of KE approaches, such as knowledge-based engineering, case-based reasoning and programming, which have been used in research to develop design for manufacturing methods and applications. There are, however, opportunities for research in which several approaches and their interdependencies are investigated to create a transparent picture of how KE can be used to support engineering design. The aim of the research presented in this thesis is to create new methods for design for manufacturing using several KE approaches, and to find the beneficial and less beneficial aspects of these methods in comparison with each other and with earlier research. This thesis presents methods and applications for design for manufacturing using KE. KE has been employed in several ways, namely rule-based; rule-, programming- and finite element analysis (FEA)-based; and rule- and plan-based, which are tested and compared with each other. Results show that KE can be used to generate information about manufacturing in several ways. The rule-based way is suitable for supporting life-cycle commitments, as engineering design and manufacturing can be integrated with maintenance and performance predictions during early engineering design, though it is limited to the firing of production rules. The rule-, programming- and FEA-based way can be used to integrate computer-aided design tools and virtual manufacturing for non-linear stress and displacement analysis. This way may also bridge the gap between engineering designers and computational experts, even though it requires a larger programming effort than the rule-based way. The rule- and plan-based way can enable design for manufacturing in two fashions: based on earlier manufacturing plans and based on rules. Because earlier manufacturing plans, together with programming algorithms, can handle knowledge that may be more intricate to capture as rules, as opposed to the time-demanding routine work that is often automated by means of rules, several opportunities for designing for manufacturing exist.
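Purely to illustrate what "firing of production rules" means in a design-for-manufacturing context, the toy sketch below forward-chains a couple of rules over a part description. The rules, part attributes, and thresholds are invented for illustration and are not taken from the thesis.

# Each rule pairs a condition on the design with a manufacturing advisory.
design = {"material": "aluminium", "wall_thickness_mm": 1.2, "hole_diameter_mm": 2.5}

rules = [
    ("thin walls need casting review",
     lambda d: d["wall_thickness_mm"] < 1.5,
     "flag: wall below 1.5 mm, review castability"),
    ("small holes need drilling note",
     lambda d: d["hole_diameter_mm"] < 3.0,
     "flag: hole below 3 mm, confirm drill availability"),
]

# Forward chaining: every rule whose condition matches the design "fires"
# and contributes an advisory for the designer.
advisories = [msg for name, cond, msg in rules if cond(design)]
for a in advisories:
    print(a)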

727 citations

Journal ArticleDOI
TL;DR: Radiomics is a rapidly evolving field of research concerned with the extraction of quantitative metrics (the so-called radiomic features) within medical images, as discussed by the authors. These features capture tissue and lesion characteristics such as heterogeneity and shape and may, alone or in combination with demographic, histologic, genomic, or proteomic data, be used for clinical problem solving.
Abstract: Radiomics is a rapidly evolving field of research concerned with the extraction of quantitative metrics (the so-called radiomic features) within medical images. Radiomic features capture tissue and lesion characteristics such as heterogeneity and shape and may, alone or in combination with demographic, histologic, genomic, or proteomic data, be used for clinical problem solving. The goal of this continuing education article is to provide an introduction to the field, covering the basic radiomics workflow: feature calculation and selection, dimensionality reduction, and data processing. Potential clinical applications in nuclear medicine that include PET radiomics-based prediction of treatment response and survival will be discussed. Current limitations of radiomics, such as sensitivity to acquisition parameter variations, and common pitfalls will also be covered.
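As a concrete, deliberately oversimplified illustration of the workflow named above (feature calculation followed by dimensionality reduction), the sketch below computes a few first-order and shape features from toy lesion masks and projects them with PCA. Real radiomics studies would use a dedicated package such as pyradiomics and much richer, standardised feature sets; the feature choices, array sizes, and data here are illustrative assumptions.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

def lesion_features(image, mask):
    # A few hand-computed first-order and shape descriptors inside the lesion mask.
    vals = image[mask]
    return np.array([
        vals.mean(),              # mean intensity
        vals.std(),               # heterogeneity proxy
        np.percentile(vals, 90),  # high-intensity tail
        mask.sum(),               # lesion volume in voxels
    ])

# Toy cohort: 10 lesions of varying size embedded in 32x32x32 volumes.
features = []
for _ in range(10):
    img = rng.normal(0.0, 1.0, (32, 32, 32))
    s = int(rng.integers(6, 14))                 # random cubic lesion size
    msk = np.zeros_like(img, dtype=bool)
    msk[10:10 + s, 10:10 + s, 10:10 + s] = True
    img[msk] += rng.uniform(1.0, 3.0)            # brighter lesion region
    features.append(lesion_features(img, msk))

X = np.vstack(features)                          # (n_lesions, n_features)
X = (X - X.mean(0)) / (X.std(0) + 1e-8)          # standardise before reduction
X_low = PCA(n_components=2).fit_transform(X)     # dimensionality-reduction step
print(X_low.shape)                               # (10, 2)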

440 citations