Author

Guanyu Yang

Bio: Guanyu Yang is an academic researcher from Southeast University. The author has contributed to research on topics including computer science and segmentation. The author has an h-index of 18 and has co-authored 80 publications receiving 1,181 citations. Previous affiliations of Guanyu Yang include Leiden University Medical Center and the Chinese Ministry of Education.


Papers
Journal Article
TL;DR: This work presents the methodologies and evaluation results for the whole heart segmentation (WHS) algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017.

216 citations

Journal Article
TL;DR: A fully automatic centerline extraction algorithm for coronary arteries in CCTA images is presented and validated, showing excellent performance in extraction ability and accuracy.
Abstract: Coronary computed tomographic angiography (CCTA) is a non-invasive imaging modality for the visualization of the heart and coronary arteries. To fully exploit the potential of CCTA datasets and apply them in clinical practice, an automated coronary artery extraction approach is needed. The purpose of this paper is to present and validate a fully automatic centerline extraction algorithm for coronary arteries in CCTA images. The algorithm is based on an improved version of Frangi's vesselness filter, which removes unwanted step-edge responses at the boundaries of the cardiac chambers. Building upon this new vesselness filter, the coronary artery extraction pipeline extracts the centerlines of main branches as well as side branches automatically. The algorithm was first evaluated with the standardized Rotterdam Coronary Artery Algorithm Evaluation Framework used in the MICCAI 2008 Coronary Artery Tracking challenge (CAT08), which includes 128 manually delineated reference centerlines. The average overlap and accuracy measures of our method were 93.7% and 0.30 mm, respectively, ranking 1st and 3rd compared with the five other automatic methods presented in CAT08. Second, in 50 clinical datasets, a total of 100 reference centerlines were generated from lumen contours in the transversal planes, which were manually corrected by an expert from the cardiology department. In this evaluation, the average overlap and accuracy were 96.1% and 0.33 mm, respectively. The entire processing time for one dataset is less than 2 min on a standard desktop computer. In conclusion, our newly developed automatic approach can extract coronary arteries in CCTA images with excellent performance in extraction ability and accuracy.
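As a rough illustration of the vesselness step this pipeline builds on, the sketch below applies the standard multiscale Frangi filter from scikit-image to a CT volume. The paper's modification that suppresses step-edge responses at chamber boundaries is not reproduced here, and the file name, scale range, and threshold are assumptions for illustration only.

```python
# Minimal sketch: multiscale Frangi vesselness filtering of a CT volume
# with scikit-image. The paper uses a modified filter that additionally
# suppresses step-edge responses at cardiac chamber boundaries; that
# modification is not reproduced here. File name and parameters are
# illustrative assumptions.
import numpy as np
from skimage.filters import frangi

# Hypothetical CCTA volume, e.g. converted from DICOM/NIfTI elsewhere.
volume = np.load("ccta_volume.npy").astype(np.float32)

# Vesselness response over a range of scales (in voxels) roughly matching
# coronary artery radii; black_ridges=False because contrast-filled
# arteries appear bright in CCTA.
vesselness = frangi(
    volume,
    sigmas=np.arange(1.0, 4.0, 0.5),
    alpha=0.5,          # sensitivity to plate-like vs. line-like structures
    beta=0.5,           # sensitivity to blob-like structures
    black_ridges=False,
)

# Simple threshold to obtain a candidate vessel mask for centerline tracking.
vessel_mask = vesselness > 0.05
```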

153 citations

Journal Article
TL;DR: A two-step processing scheme called 'artifact suppressed large-scale nonlocal means' for suppressing both noise and artifacts in thoracic LDCT images is described and shown to be effective in improving thoracic LDCT data.
Abstract: X-ray exposure to patients has become a major concern in computed tomography (CT), and minimizing the radiation exposure has been one of the major efforts in the CT field. Because of the many high-attenuation tissues in the human chest, under low-dose scan protocols thoracic low-dose CT (LDCT) images tend to be severely degraded by excessive mottled noise and non-stationary streak artifacts. Their removal is a challenging task because streak artifacts with directional prominence are often hard to discriminate from the attenuation information of normal tissues. This paper describes a two-step processing scheme called 'artifact suppressed large-scale nonlocal means' for suppressing both noise and artifacts in thoracic LDCT images. Specific scale and direction properties were exploited to discriminate the noise and artifacts from image structures. A parallel implementation was introduced to speed up the whole processing by more than 100 times. Phantom and patient CT images were both acquired for evaluation purposes. Comparative qualitative and quantitative analyses were performed, and both support the efficacy of our method in improving thoracic LDCT data.
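For reference, the core building block here is the classical non-local means filter; the sketch below runs plain NLM on a single LDCT slice with scikit-image. The paper's artifact-suppression step and its parallel large-scale variant are not reproduced, and the file name and parameter values are illustrative assumptions.

```python
# Minimal sketch: plain non-local means denoising of a single LDCT slice
# with scikit-image. The paper's method adds an artifact-suppression step
# and a parallel large-scale NLM; neither is reproduced here.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# Hypothetical low-dose CT slice, normalized to [0, 1].
ldct_slice = np.load("ldct_slice.npy").astype(np.float32)

# Estimate the noise standard deviation to set the filtering strength h.
sigma_est = estimate_sigma(ldct_slice)

denoised = denoise_nl_means(
    ldct_slice,
    patch_size=7,        # side length of the patches being compared
    patch_distance=11,   # search window radius around each pixel
    h=1.15 * sigma_est,  # filtering strength, commonly tied to sigma
    fast_mode=True,
)
```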

129 citations

Journal Article
TL;DR: It is found that information gathered while backtracking from reached points can be used to overcome several known limitations of minimal path techniques and improve extraction performance.
Abstract: Minimal path techniques can efficiently extract geometrically curve-like structures by finding the path with minimal accumulated cost between two given endpoints. Although they have found wide practical application (e.g., line identification, crack detection, and vascular centerline extraction), minimal path techniques suffer from some notable problems. The first is that they require setting two endpoints for each line to be extracted (endpoint problem). The second is that the connection might fail when the geodesic distance between the two points is much shorter than the desirable minimal path (shortcut problem). In addition, when connecting two distant points, the minimal path connection might become inefficient, because the accumulated cost increases over the propagation and results in leakage into non-feature regions near the starting point (accumulation problem). To address these problems, this paper proposes an approach termed minimal path propagation with backtracking. We find that information gathered in the process of backtracking from reached points can be well utilized to overcome the above problems and improve extraction performance. The whole algorithm is robust to parameter setting and allows a coarse setting of the starting point. Extensive experiments with both simulated and realistic data are performed to validate the performance of the proposed method.
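To make the baseline concrete, the sketch below extracts a classical two-endpoint minimal path on a cost image with scikit-image, i.e. the setting whose endpoint, shortcut, and accumulation problems the paper addresses. The proposed backtracking propagation itself is not reproduced; the file name, endpoints, and cost construction are assumptions for illustration.

```python
# Minimal sketch: baseline two-endpoint minimal path extraction on a cost
# image with scikit-image. The paper's "minimal path propagation with
# backtracking" is not reproduced here.
import numpy as np
from skimage.graph import route_through_array

# Hypothetical feature (e.g. vesselness) map; higher values = more curve-like.
feature_map = np.load("vesselness_map.npy").astype(np.float32)

# Convert the feature map into a cost image: cheap to travel along the
# curve-like structure, expensive elsewhere.
cost = 1.0 / (feature_map + 1e-3)

start = (10, 15)     # user-supplied endpoints (the "endpoint problem")
end = (200, 180)

path, total_cost = route_through_array(
    cost, start, end, fully_connected=True, geometric=True
)
path = np.asarray(path)  # (N, 2) array of pixel coordinates along the path
```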

119 citations

Journal Article
TL;DR: An evaluation of five (semi)automatic methods within this framework shows that automatic per-patient CVD risk categorization is feasible; CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
Abstract: Purpose: The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Methods: Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of the (semi)automatic methods on the test CSCT scans per lesion, per artery, and per patient. Results: Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions, with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed, and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa for patient CVD risk categorization by the evaluated methods ranged from 0.80 to 1.00. Conclusions: A publicly available standardized framework for the evaluation of (semi)automatic methods for CAC identification in cardiac CT is described. An evaluation of five (semi)automatic methods within this framework shows that automatic per-patient CVD risk categorization is feasible. CAC lesions at ambiguous locations such as the coronary ostia remain challenging, but their detection had limited impact on CVD risk determination.
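The agreement statistic reported above, linearly weighted Cohen's kappa over risk categories, can be computed with scikit-learn as sketched below; the category values are made up purely for illustration and are not data from the study.

```python
# Minimal sketch: linearly weighted Cohen's kappa between reference and
# automatically assigned CVD risk categories, the agreement measure used
# in the evaluation. The labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

# Risk categories 0-3 for a handful of hypothetical test patients.
reference_risk = [0, 1, 2, 3, 2, 1, 0, 3, 2, 1]
automatic_risk = [0, 1, 2, 3, 1, 1, 0, 3, 2, 2]

kappa = cohen_kappa_score(reference_risk, automatic_risk, weights="linear")
print(f"linearly weighted kappa: {kappa:.2f}")
```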

71 citations


Cited by
01 Feb 2009
TL;DR: This Secret History documentary follows experts as they pick through the evidence and reveal why the plague killed on such a scale, and what might be coming next.
Abstract: Secret History: Return of the Black Death Channel 4, 7-8pm In 1348 the Black Death swept through London, killing people within days of the appearance of their first symptoms. Exactly how many died, and why, has long been a mystery. This Secret History documentary follows experts as they pick through the evidence and reveal why the plague killed on such a scale. And they ask, what might be coming next?

5,234 citations

Reference Entry
15 Oct 2004

2,118 citations

Journal Article
TL;DR: In this paper, the authors review the current state of the art as CT transforms from a qualitative diagnostic tool to a quantitative one, including the use of iterative reconstruction strategies suited to specific segmentation tasks and emerging methods that provide more insight than conventional attenuation-based tomography.
Abstract: X-ray computed tomography (CT) is fast becoming an accepted tool within the materials science community for the acquisition of 3D images. Here the authors review the current state of the art as CT transforms from a qualitative diagnostic tool to a quantitative one. Our review considers first the image acquisition process, including the use of iterative reconstruction strategies suited to specific segmentation tasks and emerging methods that provide more insight (e.g. fast and high-resolution imaging, crystallite (grain) imaging) than conventional attenuation-based tomography. Methods and shortcomings of CT are examined for the quantification of 3D volumetric data to extract key topological parameters such as phase fractions, phase contiguity, and damage levels as well as density variations. As a non-destructive technique, CT is an ideal means of following structural development over time via time-lapse sequences of 3D images (sometimes called 3D movies or 4D imaging). This includes information nee...
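One of the quantitative parameters mentioned above, the phase (volume) fraction, follows directly from a segmented volume; a minimal sketch in NumPy is shown below. The file name and the label convention (integer phase labels per voxel) are assumptions for illustration.

```python
# Minimal sketch: phase (volume) fractions from a segmented/labeled CT
# volume, one of the quantitative parameters discussed in the review.
# The label convention (0 = pore, 1..k = solid phases) is an assumption.
import numpy as np

labels = np.load("segmented_volume.npy").astype(np.int64)  # phase label per voxel

counts = np.bincount(labels.ravel())
fractions = counts / labels.size
for phase, frac in enumerate(fractions):
    print(f"phase {phase}: {frac:.3%} of the volume")
```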

1,009 citations

Journal Article
TL;DR: A deep convolutional neural network is used to map low-dose CT images toward their corresponding normal-dose counterparts in a patch-by-patch fashion, demonstrating the great potential of the proposed method for artifact reduction and structure preservation.
Abstract: In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose will significantly degrade the image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without accessing the original projection data. A deep convolutional neural network is used to map low-dose CT images toward their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate the great potential of the proposed method for artifact reduction and structure preservation. In terms of quantitative metrics, the proposed method shows substantial improvements in PSNR, RMSE, and SSIM over competing state-of-the-art methods. Furthermore, the speed of our method is one order of magnitude faster than iterative reconstruction and patch-based image denoising methods.
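To illustrate the patch-by-patch mapping idea, the sketch below defines a small convolutional network trained with an MSE loss to map low-dose patches toward normal-dose patches in PyTorch. The layer widths, patch size, and training step are illustrative assumptions, not the authors' exact architecture or training setup.

```python
# Minimal sketch: a small patch-to-patch convolutional network mapping
# low-dose CT patches toward normal-dose patches with an MSE loss.
# Layer widths, patch size, and training details are illustrative only.
import torch
import torch.nn as nn

class PatchDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

model = PatchDenoiser()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random stand-in patches (real training
# would use paired low-dose / normal-dose patches extracted from CT images).
low_dose = torch.rand(16, 1, 33, 33)
normal_dose = torch.rand(16, 1, 33, 33)

optimizer.zero_grad()
loss = criterion(model(low_dose), normal_dose)
loss.backward()
optimizer.step()
```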

603 citations