Author

Rafael Wiemker

Other affiliations: University of Hamburg
Bio: Rafael Wiemker is an academic researcher at Philips whose work focuses on rendering (computer graphics) and segmentation. He has an h-index of 24 and has co-authored 161 publications receiving 2,263 citations. His previous affiliations include the University of Hamburg.


Papers
Journal ArticleDOI
TL;DR: A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.
Abstract: This paper describes a framework for establishing a reference airway tree segmentation, which was used to quantitatively evaluate 15 different airway tree extraction algorithms in a standardized manner. Because of the sheer difficulty involved in manually constructing a complete reference standard from scratch, we propose to construct the reference using results from all algorithms that are to be evaluated. We start by subdividing each segmented airway tree into its individual branch segments. Each branch segment is then visually scored by trained observers to determine whether or not it is a correctly segmented part of the airway tree. Finally, the reference airway trees are constructed by taking the union of all correctly extracted branch segments. Fifteen airway tree extraction algorithms from different research groups are evaluated on a diverse set of 20 chest computed tomography (CT) scans of subjects ranging from healthy volunteers to patients with severe pathologies, scanned at different sites, with different CT scanner brands, models, and scanning protocols. Three performance measures covering different aspects of segmentation quality were computed for all participating algorithms. Results from the evaluation showed that no single algorithm could extract more than an average of 74% of the total length of all branches in the reference standard, indicating substantial differences between the algorithms. A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.
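The reference-construction step described above, taking the union of all branch segments that observers scored as correctly segmented, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the boolean voxel-mask representation are assumptions.

```python
import numpy as np

def build_reference(branch_masks, accepted):
    """Union of all branch-segment masks flagged as correctly segmented."""
    reference = np.zeros_like(branch_masks[0], dtype=bool)
    for mask, ok in zip(branch_masks, accepted):
        if ok:
            reference |= mask  # add this branch to the reference tree
    return reference

# toy example: three 2x2 "branch segments", the second one rejected
segments = [np.array([[1, 0], [0, 0]], bool),
            np.array([[0, 1], [0, 0]], bool),
            np.array([[0, 0], [1, 1]], bool)]
ref = build_reference(segments, [True, False, True])
```

Taking the union means a branch found by any single algorithm can enter the reference, which is why the fused reference can exceed every individual algorithm's coverage.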

241 citations

Journal ArticleDOI
TL;DR: It is found that enhancement based on the FWT suffers from one serious drawback-the introduction of visible artifacts when large structures are enhanced strongly, by contrast, the Laplacian Pyramid allows a smooth enhancement of large structures, such that visible artifacts can be avoided.
Abstract: Contrast enhancement of radiographies based on a multiscale decomposition of the images recently has proven to be a far more versatile and efficient method than regular unsharp-masking techniques, while containing these as a subset. In this paper, we compare the performance of two multiscale-methods, namely the Laplacian Pyramid and the fast wavelet transform (FWT). We find that enhancement based on the FWT suffers from one serious drawback-the introduction of visible artifacts when large structures are enhanced strongly. By contrast, the Laplacian Pyramid allows a smooth enhancement of large structures, such that visible artifacts can be avoided. Only for the enhancement of very small details, for denoising applications or compression of images, the FWT may have some advantages over the Laplacian Pyramid.
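The multiscale enhancement idea can be illustrated in a few lines: decompose the image into band-pass detail layers plus a low-pass residual, amplify the detail layers, and recombine. Note this is a hedged sketch, not the paper's method; a true Laplacian pyramid also downsamples between levels, whereas this undecimated variant keeps full resolution for clarity, and the `sigma` and `gain` values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_enhance(img, levels=3, gain=2.0, sigma=2.0):
    """Split the image into band-pass detail layers plus a low-pass
    residual, amplify each detail layer, and recombine."""
    img = img.astype(float)
    details, low = [], img
    for _ in range(levels):
        smoothed = gaussian_filter(low, sigma)
        details.append(low - smoothed)  # band-pass detail layer
        low = smoothed                  # residual for the next level
    out = low
    for d in details:
        out += gain * d                 # boost every detail band
    return out

flat = multiscale_enhance(np.ones((8, 8)))  # no detail, so unchanged
edge = np.zeros((16, 16))
edge[:, 8:] = 1.0
boosted = multiscale_enhance(edge)          # contrast across the edge grows
```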

211 citations

Journal ArticleDOI
Rafael Wiemker, P. Rogalla, T. Blaffert, D. Sifri, O. Hay, E. Shah, R. Truyen, T. Fleiter
TL;DR: Several studies have shown the feasibility and robustness of automated matching of corresponding nodule pairs between follow-up examinations, and tools for fast and accurate three-dimensional volume measurement of detected nodules are needed for diagnostic quality assurance.
Abstract: With the superb spatial resolution of modern multislice CT scanners and their ability to complete a thoracic scan within one breath-hold, software algorithms for computer-aided detection (CAD) of pulmonary nodules are now reaching high sensitivity levels at moderate false positive rates. A number of pilot studies have shown that CAD modules can successfully find overlooked pulmonary nodules and serve as a powerful tool for diagnostic quality assurance. Equally important are tools for fast and accurate three-dimensional volume measurement of detected nodules. These allow monitoring of nodule growth between follow-up examinations for differential diagnosis and response to oncological therapy. Owing to the decreasing partial volume effect, nodule volumetry is more accurate with high-resolution CT data. Several studies have shown the feasibility and robustness of automated matching of corresponding nodule pairs between follow-up examinations. Fast and automated growth rate monitoring with only a few reader interactions also adds to diagnostic quality assurance.
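Growth-rate monitoring between follow-up examinations is conventionally expressed as a volume doubling time under an exponential growth model. The short sketch below shows that standard calculation; it is not claimed to be the authors' exact method, and the function name is illustrative.

```python
import math

def volume_doubling_time(v1_mm3, v2_mm3, interval_days):
    """Doubling time under exponential growth V(t) = V1 * 2**(t / DT),
    which gives DT = interval * ln 2 / ln(V2 / V1)."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# a nodule growing from 100 mm^3 to 200 mm^3 over 90 days doubles in 90 days
dt = volume_doubling_time(100.0, 200.0, 90.0)
```

Because the volume enters only through the ratio V2/V1, accurate volumetry (i.e. a small partial-volume error) matters more than the absolute calibration of either measurement.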

100 citations

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the UV photospheric lines of 29 CMa, a 4.39 day period, double-lined O-type spectroscopic binary, and found that the mass ratio is q = 1.20 +/- 0.16 (secondary more massive) based on three independent arguments.
Abstract: We have analyzed the UV photospheric lines of 29 CMa, a 4.39 day period, double-lined O-type spectroscopic binary. Archival data from the International Ultraviolet Explorer (IUE; 28 spectra well distributed in orbital phase) were analyzed with several techniques. We find that the mass ratio is q = 1.20 +/- 0.16 (secondary more massive) based on three independent arguments. A tomography algorithm was used to produce the separate spectra of the two stars in six UV spectral regions. The MK spectral classifications of the primary and secondary, O7.5-8 Iab and O9.7 Ib, respectively, were estimated through a comparison of UV line ratios with those in spectral standard stars. The flux ratio of the stars in the UV is 0.36 +/- 0.07 (primary brighter). The primary has a strong P Cygni N IV λ1718 feature, indicating a strong stellar wind. We also present tomographic reconstructions of visual spectral data in the range 4300-4950 Å, based on seven observations at differing orbital phases, which confirm the UV classifications and show that the primary is an Of star. From the spectral classifications, we estimate the temperatures of the stars to be 33,750 K and 29,000 K for primary and secondary, respectively. We then fit visual and UV light curves and show that reasonably good fits can be obtained with these temperatures, a semicontact configuration, an inclination of 74 +/- 2 deg, and an intensity ratio r < 0.5.

81 citations

Proceedings ArticleDOI
Thomas Buelow, Rafael Wiemker, Thomas Blaffert, Cristian Lorenz, Steffen Renisch
14 Apr 2005
TL;DR: An automated method is presented for the extraction of the pulmonary vessel tree from multi-slice CT data and a method for the separation of pulmonary arteries from veins is investigated.
Abstract: The purpose of this paper is to present an automated method for the extraction of the pulmonary vessel tree from multi-slice CT data. Furthermore, we investigate a method for the separation of pulmonary arteries from veins. The vessel tree extraction is performed by a seed-point based front-propagation algorithm. This algorithm is based on a methodology similar to the bronchial tree segmentation and coronary artery tree extraction methods presented at earlier SPIE conferences. Our method for artery/vein separation is based upon the fact that the pulmonary artery tree accompanies the bronchial tree. For each extracted vessel segment, we evaluate a measure of "arterialness". This measure combines two components: a method for identifying candidate positions for a bronchus running in the vicinity of a given vessel, and a co-orientation measure for the vessel segment and the bronchus candidates. The latter component rewards vessels running parallel to a nearby bronchus. The spatial orientation of vessel segments and bronchi is estimated by applying the structure tensor to the local gray-value neighbourhood. In our experiments we used multi-slice CT datasets of the lung acquired by Philips IDT 16-slice and Philips Brilliance 40-slice scanners. It can be shown that the proposed measure reduces the number of pulmonary veins falsely included in the arterial tree.
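The two ingredients of the "arterialness" measure, a structure-tensor orientation estimate and a co-orientation score, can be sketched as below. This is a hedged 2D illustration (the paper works in 3D), and the function names, the patch representation, and the smoothing scale are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def local_orientation(patch):
    """Dominant structure orientation from the 2D structure tensor:
    the eigenvector of the smallest eigenvalue points along the
    elongated structure (e.g. a vessel or bronchus)."""
    p = patch.astype(float)
    gx, gy = sobel(p, axis=0), sobel(p, axis=1)
    Jxx = gaussian_filter(gx * gx, 1.0).mean()
    Jxy = gaussian_filter(gx * gy, 1.0).mean()
    Jyy = gaussian_filter(gy * gy, 1.0).mean()
    w, v = np.linalg.eigh(np.array([[Jxx, Jxy], [Jxy, Jyy]]))
    return v[:, 0]  # eigenvalues ascending: column 0 = smallest

def co_orientation(u, v):
    """|cos angle| of two direction vectors: 1 = parallel, 0 = orthogonal."""
    u = np.asarray(u, float) / np.linalg.norm(u)
    v = np.asarray(v, float) / np.linalg.norm(v)
    return abs(float(u @ v))

ramp = np.arange(8.0)[:, None] * np.ones(8)  # varies along axis 0 only
direction = local_orientation(ramp)          # structure runs along axis 1
```

A vessel segment whose direction has a high co-orientation score with a nearby bronchus candidate is rewarded, which is exactly the "runs parallel to a nearby bronchus" criterion in the text.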

79 citations


Cited by
Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences, evaluated twenty state-of-the-art tumor segmentation algorithms on a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
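The fusion step can be illustrated with a plain per-voxel majority vote. This is a simplified stand-in for the paper's hierarchical variant (which votes within nested tumor sub-regions), and the function name and binary-mask representation are assumptions for illustration.

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary candidate segmentations: a voxel is foreground in
    the fused mask when more than half of the inputs mark it."""
    stacked = np.stack([np.asarray(m, bool) for m in masks])
    return stacked.sum(axis=0) > len(masks) / 2

# three candidate labelings of the same three voxels
fused = majority_vote([[1, 1, 0], [1, 0, 0], [0, 1, 1]])
```

Voting suppresses errors that individual algorithms make independently, which is why the fused result can consistently outrank every single input.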

3,699 citations

Journal ArticleDOI
TL;DR: A convolutional neural network performs automated prediction of malignancy risk of pulmonary nodules in chest CT scan volumes and improves accuracy of lung cancer screening.
Abstract: With an estimated 160,000 deaths in 2018, lung cancer is the most common cause of cancer death in the United States. Lung cancer screening using low-dose computed tomography has been shown to reduce mortality by 20-43% and is now included in US screening guidelines. Existing challenges include inter-grader variability and high false-positive and false-negative rates. We propose a deep learning algorithm that uses a patient’s current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases. We conducted two reader studies. When prior computed tomography imaging was not available, our model outperformed all six radiologists with absolute reductions of 11% in false positives and 5% in false negatives. Where prior computed tomography imaging was available, the model performance was on par with the same radiologists. This creates an opportunity to optimize the screening process via computer assistance and automation. While the vast majority of patients remain unscreened, we show the potential for deep learning models to increase the accuracy, consistency and adoption of lung cancer screening worldwide.

1,077 citations

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed enhancement algorithm can not only enhance the details but also preserve the naturalness for non-uniform illumination images.
Abstract: Image enhancement plays an important role in image processing and analysis. Among various enhancement algorithms, Retinex-based algorithms can efficiently enhance details and have been widely adopted. Since Retinex-based algorithms regard illumination removal as a default preference and fail to limit the range of reflectance, the naturalness of non-uniform illumination images cannot be effectively preserved. However, naturalness is essential for image enhancement to achieve pleasing perceptual quality. In order to preserve naturalness while enhancing details, we propose an enhancement algorithm for non-uniform illumination images. In general, this paper makes the following three major contributions. First, a lightness-order-error measure is proposed to assess naturalness preservation objectively. Second, a bright-pass filter is proposed to decompose an image into reflectance and illumination, which, respectively, determine the details and the naturalness of the image. Third, we propose a bi-log transformation, which is utilized to map the illumination to make a balance between details and naturalness. Experimental results demonstrate that the proposed algorithm can not only enhance the details but also preserve the naturalness for non-uniform illumination images.
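The core idea behind a lightness-order-error measure, counting pixel pairs whose relative lightness order flips after enhancement, can be sketched with a brute-force pairwise comparison. This is only an illustration of the concept: the paper's actual measure is computed on downsampled lightness and normalized differently, and the function name below is an assumption.

```python
import numpy as np

def lightness_order_error(lightness, lightness_enh):
    """Fraction of pixel pairs whose relative lightness order flips
    after enhancement (0 = order fully preserved). Brute force over
    O(n^2) pairs, so intended for small or downsampled images."""
    a = np.ravel(lightness).astype(float)
    b = np.ravel(lightness_enh).astype(float)
    order_a = a[:, None] >= a[None, :]   # pairwise order before
    order_b = b[:, None] >= b[None, :]   # pairwise order after
    return float((order_a != order_b).mean())

L = np.array([[1.0, 2.0], [3.0, 4.0]])
preserved = lightness_order_error(L, L * 2.0)  # monotone map keeps order
inverted = lightness_order_error(L, 5.0 - L)   # inversion flips all pairs
```

Any monotone tone mapping scores zero, which matches the intuition that naturalness is about preserving the relative order of lightness, not its absolute values.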

918 citations