Author

Chung-Hsing Li

Other affiliations: National Defense Medical Center
Bio: Chung-Hsing Li is an academic researcher at Tri-Service General Hospital. The author has contributed to research on the topics of cephalometric analysis and dentistry, has an h-index of 2, and has co-authored 2 publications receiving 226 citations. Previous affiliations of Chung-Hsing Li include the National Defense Medical Center.

Papers
Journal ArticleDOI
TL;DR: Based on the quantitative evaluation results, automatic dental radiography analysis remains a challenging and unsolved problem; the datasets and evaluation software are made available to the research community to encourage further developments in this field.

246 citations

Journal ArticleDOI
TL;DR: Evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge provides insights into the performance of different landmark detection approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.
Abstract: Cephalometric analysis is an essential clinical and research tool in orthodontics for analysis and treatment planning. This paper presents the evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge, held at the IEEE International Symposium on Biomedical Imaging 2014 with an on-site competition. The challenge was set up to explore and compare automatic landmark detection methods as applied to cephalometric X-ray images. Methods were evaluated on a common database of cephalograms from 300 patients aged six to 60 years, collected from the Dental Department, Tri-Service General Hospital, Taiwan, with anatomical landmarks manually marked by two experienced medical doctors as the ground-truth data. Quantitative evaluation was performed to compare the results of a representative selection of the methods submitted to the challenge. Experimental results show that three methods achieve detection rates greater than 80% within the 4 mm precision range, but only one method achieves a detection rate greater than 70% within the 2 mm precision range, which is the acceptable precision range in clinical practice. The study provides insights into the performance of different landmark detection approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.
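The detection-rate metric described in the abstract, the fraction of landmarks predicted within a given distance of the ground truth, can be sketched as follows. This is a minimal illustration, not the challenge's evaluation software; the coordinates below are toy values.

```python
import numpy as np

def detection_rate(pred, truth, precision_mm):
    """Fraction of landmarks whose predicted position lies within
    `precision_mm` (Euclidean distance) of the ground-truth marking."""
    dists = np.linalg.norm(pred - truth, axis=-1)  # per-landmark error in mm
    return float(np.mean(dists <= precision_mm))

# Toy example: 4 landmarks with errors of 0.5, 1.5, 3.0, and 5.0 mm
pred = np.array([[0.5, 0.0], [1.5, 0.0], [3.0, 0.0], [5.0, 0.0]])
truth = np.zeros((4, 2))

print(detection_rate(pred, truth, 2.0))  # 0.5 -> within the clinical 2 mm range
print(detection_rate(pred, truth, 4.0))  # 0.75 -> within the 4 mm range
```

A method's reported detection rate at "the 2 mm precision range" is exactly this quantity averaged over all landmarks and test images.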

136 citations

Journal ArticleDOI
TL;DR: A smaller lower gonial angle and a smaller anterior cranial base (SN) predict a favorable outcome for pediatric OSAS treated with a tongue-beaded OA; this will equip practitioners with additional insights when selecting suitable candidates for OA therapy in pediatric patients.
Abstract: We conducted this retrospective study to identify potential clinical, polysomnographic, and cephalometric predictors for the treatment outcomes of a tongue-beaded oral appliance (OA) in children with obstructive sleep apnea syndrome (OSAS). In total, 63 patients—50 boys and 13 girls ranging in age from 4 to 16 years—underwent OA treatment nightly for at least 6 months. A baseline digital lateral cephalometric radiograph was obtained for each patient. Multivariate logistic regression analysis was performed to examine predictors for the treatment outcome based on the clinical and cephalometric measurements. Overall, 28 patients responded to the treatment (post-treatment improvement > 50% or apnea–hypopnea index (AHI) < 1/h), and 35 did not (post-treatment improvement < 50% and AHI ≥ 1/h). Significantly larger cranial base angle (SNBa), smaller lower gonial angle (LGo Angle), and shorter length of anterior cranial base (SN) were found in responders. Smaller lower gonial angle (LGo Angle) and smaller anterior cranial base (SN) predict a favorable outcome for pediatric OSAS using a tongue-beaded OA. This finding will equip practitioners with additional insights when selecting suitable candidates for OA therapy in pediatric patients.
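The multivariate logistic model used to predict treatment response can be sketched as below. This is purely illustrative: the coefficients and intercept are hypothetical values chosen only to match the direction of the reported associations (larger SNBa, smaller LGo angle, and shorter SN favoring response), not the fitted values from the study.

```python
import numpy as np

def responder_probability(sn_ba, lgo_angle, sn_length, coef, intercept):
    """Logistic model: P(responder) = sigmoid(intercept + coef . x),
    with predictors as in the abstract: cranial base angle (SNBa),
    lower gonial angle (LGo Angle), anterior cranial base length (SN)."""
    x = np.array([sn_ba, lgo_angle, sn_length])
    z = intercept + np.dot(coef, x)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical coefficients reflecting the reported signs of association.
coef = np.array([0.10, -0.15, -0.20])
p = responder_probability(sn_ba=130.0, lgo_angle=70.0, sn_length=65.0,
                          coef=coef, intercept=13.0)
print(round(p, 3))
```

In the study, the dichotomous outcome (responder vs. non-responder, using the >50% improvement or AHI < 1/h criterion) plays the role of the label against which such a model is fitted.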

1 citation


Cited by
Journal ArticleDOI
TL;DR: This review covers computer-assisted analysis of images in the field of medical imaging and introduces the fundamentals of deep learning methods and their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on.
Abstract: This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

2,653 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: It is shown that, using a small sample size, denoising autoencoders constructed with convolutional layers can be used for efficient denoising of medical images.
Abstract: Image denoising is an important pre-processing step in medical image analysis. Different algorithms with varying denoising performance have been proposed over the past three decades. More recently, deep learning-based models have shown great promise, having outperformed all conventional methods. These methods are, however, limited by the requirement of large training sample sizes and high computational costs. In this paper we show that, using a small sample size, denoising autoencoders constructed with convolutional layers can be used for efficient denoising of medical images. Heterogeneous images can be combined to boost the sample size for increased denoising performance. The simplest of networks can reconstruct images with corruption levels so high that noise and signal are not differentiable to the human eye.

488 citations

Journal ArticleDOI
TL;DR: This work proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability.
Abstract: Purpose Accurate segmentation of organs-at-risk (OARs) is the key step in efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images, and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability. Methods Convolutional neural networks (CNNs), a concept from the field of deep learning, were used to learn consistent intensity patterns of OARs from training CT images and to segment the OAR in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels that belong to the OAR of interest in training CT images, and negative intensity patches around voxels that belong to the surrounding structures. These patches then passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest of the test image where the corresponding OAR was expected to be located. We then smoothed the obtained classification results using a Markov random field algorithm. Finally, we extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN and performed dilate-erode operations to remove cavities of the component, which resulted in the segmentation of the OAR in the test image.
Results The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results varied from a 37.4% Dice coefficient (DSC) for the chiasm to an 89.5% DSC for the mandible. We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance on segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance on segmentation of the submandibular glands and optic chiasm. Conclusion We concluded that convolutional neural networks can accurately segment most OARs using a representative database of 50 HaN CT images. At the same time, the inclusion of additional information, for example MR images, may be beneficial for some OARs with poorly visible boundaries.
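The Dice similarity coefficient (DSC) used to report these segmentation results can be sketched as follows. This is a minimal illustration with hypothetical toy masks, not the paper's evaluation code.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy masks: 3 voxels overlap out of 4 predicted and 4 true voxels
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
```

A DSC of 89.5% for the mandible versus 37.4% for the chiasm thus reflects how much of the predicted and reference voxel sets overlap relative to their combined size.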

403 citations

Journal ArticleDOI
TL;DR: Based on the quantitative evaluation results, automatic dental radiography analysis remains a challenging and unsolved problem; the datasets and evaluation software are made available to the research community to encourage further developments in this field.

246 citations

Journal ArticleDOI
TL;DR: This study investigated the application of a deep convolutional neural network (DCNN) to classifying tooth types on dental cone-beam computed tomography (CT) images and found that the proposed method is advantageous in obtaining high classification accuracy without the need for precise tooth segmentation.

221 citations