Other affiliations: University of Łódź
Bio: Anna Fabijańska is an academic researcher at Lodz University of Technology. She has contributed to research on topics including image segmentation, has an h-index of 14, and has co-authored 100 publications receiving 1050 citations. Her previous affiliations include the University of Łódź.
Papers published on a yearly basis
Author affiliations: University of Copenhagen, Radboud University Nijmegen Medical Centre, University of Iowa, Utrecht University, University College London, Telecom SudParis, University of Antwerp, Technische Universität München, University of Łódź, Graz University of Technology, University of Seville, Philips, Cornell University, Leipzig University, University of Mainz, Nagoya University, Siemens, New York University, Erasmus University Rotterdam, Copenhagen University Hospital
TL;DR: A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.
Abstract: This paper describes a framework for establishing a reference airway tree segmentation, which was used to quantitatively evaluate 15 different airway tree extraction algorithms in a standardized manner. Because of the sheer difficulty involved in manually constructing a complete reference standard from scratch, we propose to construct the reference using results from all algorithms that are to be evaluated. We start by subdividing each segmented airway tree into its individual branch segments. Each branch segment is then visually scored by trained observers to determine whether or not it is a correctly segmented part of the airway tree. Finally, the reference airway trees are constructed by taking the union of all correctly extracted branch segments. Fifteen airway tree extraction algorithms from different research groups are evaluated on a diverse set of 20 chest computed tomography (CT) scans of subjects ranging from healthy volunteers to patients with severe pathologies, scanned at different sites, with different CT scanner brands, models, and scanning protocols. Three performance measures covering different aspects of segmentation quality were computed for all participating algorithms. Results from the evaluation showed that no single algorithm could extract more than an average of 74% of the total length of all branches in the reference standard, indicating substantial differences between the algorithms. A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.
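The reference-construction step described above reduces to a set union over observer-approved branch segments. A minimal sketch, assuming boolean volume masks and illustrative names not taken from the paper:

```python
import numpy as np

def reference_from_scored_branches(branch_masks, accepted):
    """Build a reference segmentation as the union of all branch
    segments that observers scored as correct.

    branch_masks: list of boolean arrays, one per branch segment
    accepted:     list of observer verdicts (True = correctly segmented)
    """
    ref = np.zeros_like(branch_masks[0], dtype=bool)
    for mask, ok in zip(branch_masks, accepted):
        if ok:
            ref |= mask  # union of accepted segments only
    return ref
```

Rejected segments simply never contribute voxels, so leakage flagged by the observers is excluded from the reference by construction.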
Author affiliations: University of Navarra, Radboud University Nijmegen Medical Centre, Arizona State University, Bahçeşehir University, Brigham and Women's Hospital, University of Lyon, University of Los Andes, Polytechnic University of Valencia, Leiden University Medical Center, Hunan University, Lodz University of Technology, Norwegian University of Science and Technology, Shahed University, University of Alberta, Technical University of Madrid, Graz University of Technology, Utrecht University
TL;DR: An annotated reference dataset is presented and a quantitative scoring system is proposed for objective comparison of algorithms and performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
Abstract: The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) a performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
TL;DR: An approach to impulse noise removal is presented: a switching filter that identifies noisy pixels through an analysis of local intensity extrema and then corrects them using a median filter.
Abstract: In this study an approach to impulse noise removal is presented. The introduced algorithm is a switching filter which first identifies the noisy pixels and then corrects them using a median filter. To identify pixels corrupted by noise, an analysis of local intensity extrema is applied. A comprehensive analysis of the algorithm's performance, in terms of peak signal-to-noise ratio (PSNR) and the Structural SIMilarity (SSIM) index, is presented. Results obtained over a wide range of noise corruption (up to 98%) are shown and discussed. Moreover, a comparison with well-established methods for impulse noise removal is provided. The presented results reveal that the proposed algorithm outperforms other approaches to impulse noise removal, and its performance is close to that of the ideal switching median filter. For high noise densities, the method correctly detects up to 100% of noisy pixels.
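The switching idea above can be sketched in a few lines: a pixel that coincides with the local intensity minimum or maximum is treated as a noise candidate and replaced by the local median, while all other pixels pass through untouched. This is an illustrative toy implementation (the function name, window size, and detection rule details are assumptions, not the paper's exact method):

```python
import numpy as np

def switching_median(img, win=3):
    """Switching median filter sketch: detect impulse-noise candidates
    as pixels equal to the local extremum of a win x win window, and
    replace only those with the window median."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + win, x:x + win]
            v = img[y, x]
            # noise candidate: pixel sits at the local min or max
            if v == window.min() or v == window.max():
                out[y, x] = np.median(window)
    return out
```

Because uncorrupted pixels are left unchanged, the filter avoids the blurring that a plain median filter applied everywhere would introduce.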
TL;DR: This paper proposes to perform cell segmentation using a U-Net-based convolutional neural network that is trained to discriminate pixels located at the borders between cells and results in accurate values of the cell morphometric parameters.
Abstract: Diagnostic information regarding the health status of the corneal endothelium may be obtained by analyzing the size and the shape of the endothelial cells in specular microscopy images. Prior to the analysis, the endothelial cells need to be extracted from the image. To date, this has been performed manually or semi-automatically. Several approaches to automatic segmentation of endothelial cells exist; however, none of them is perfect. Therefore, this paper proposes to perform cell segmentation using a U-Net-based convolutional neural network. In particular, the network is trained to discriminate pixels located at the borders between cells. The edge probability map output by the network is next binarized and skeletonized in order to obtain one-pixel-wide edges. The proposed solution was tested on a dataset consisting of 30 corneal endothelial images presenting cells of different sizes, achieving an AUROC of 0.92. The resulting DICE is on average equal to 0.86, which is a good result given the thickness of the compared edges. The corresponding mean absolute percentage error of the cell number is at the level of 4.5%, which confirms the high accuracy of the proposed approach. The resulting cell edges are well aligned with the ground truths and require a limited number of manual corrections. This also results in accurate values of the cell morphometric parameters: the corresponding errors range from 5.2% for endothelial cell density, through 6.2% for cell hexagonality, to 11.93% for the coefficient of variation of the cell size.
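The post-processing chain described above (binarize the network's edge-probability map, then skeletonize to one-pixel-wide borders) can be sketched with scikit-image; the 0.5 threshold is an assumption introduced here, not a value stated in the paper:

```python
import numpy as np
from skimage.morphology import skeletonize

def edges_from_probability(prob_map, threshold=0.5):
    """Turn an edge-probability map into one-pixel-wide cell borders.

    prob_map:  float array of per-pixel border probabilities
    threshold: binarization cut-off (illustrative default)
    """
    binary = prob_map > threshold   # binarize the probability map
    return skeletonize(binary)      # thin thick borders to 1 px width
```

Skeletonization is what makes the cell-counting and morphometry steps possible, since each cell becomes a closed one-pixel contour rather than a thick fuzzy band.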
03 Jul 2012
TL;DR: The segmentation approach proposed in this paper overcomes limitations by incorporating watershed transform and normalized cuts and results are presented, compared with results of the original normalized cut method.
Abstract: In this paper the problem of image segmentation is considered, specifically the normalized graph cut (Ncut) algorithm. In its original form, the Ncut approach is computationally complex and time-consuming, which limits its applicability in practical machine vision systems. The segmentation approach proposed in this paper overcomes these limitations by combining the watershed transform with normalized cuts. Results of the proposed method are presented, compared with those of the original normalized cut method, and discussed. (6 pages)
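The speed-up described above comes from running the normalized cut over a handful of watershed regions instead of over every pixel. As a rough illustration of the Ncut stage alone, here is a toy bipartition of a region-adjacency graph using the standard spectral relaxation; the matrix `W` (pairwise similarity between watershed regions) and all names are assumptions introduced for this sketch, not the paper's code:

```python
import numpy as np

def ncut_bipartition(W):
    """Normalized-cut bipartition of a weighted region-adjacency graph.

    W[i, j] holds the similarity between regions i and j (symmetric,
    every region connected). Solves the relaxed generalized problem
    (D - W) y = lambda * D * y and splits regions on the sign of the
    second-smallest ("Fiedler") eigenvector.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # symmetric normalized Laplacian: D^-1/2 (D - W) D^-1/2
    L = D_inv_sqrt @ (np.diag(d) - W) @ D_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(L)       # ascending eigenvalues
    fiedler = D_inv_sqrt @ eigvecs[:, 1]       # second-smallest eigvec
    return fiedler >= 0                        # boolean region partition
```

With a few hundred watershed regions the eigenproblem is tiny, which is exactly why the combination is tractable where pixel-level Ncut is not.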
Author affiliations: Technische Universität München, ETH Zurich, University of Bern, Harvard University, National Institutes of Health, University of Debrecen, University Hospital Heidelberg, McGill University, University of Pennsylvania, French Institute for Research in Computer Science and Automation, University at Buffalo, Microsoft, University of Cambridge, Stanford University, University of Virginia, Imperial College London, Massachusetts Institute of Technology, Columbia University, Sabancı University, Old Dominion University, RMIT University, Purdue University, General Electric
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) was organized in conjunction with the MICCAI 2012 and 2013 conferences; twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
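The basic ingredient of the fusion described above is a per-voxel majority vote across the candidate segmentations. A minimal sketch (the hierarchy over tumor sub-regions is deliberately omitted, and all names are illustrative):

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse integer label volumes by voxel-wise majority vote.

    segmentations: list of equally shaped integer label arrays,
                   one per algorithm.
    """
    stack = np.stack(segmentations)        # (n_algos, *volume_shape)
    labels = np.unique(stack)
    # count, for each candidate label, how many algorithms voted for it
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels])
    return labels[votes.argmax(axis=0)]    # most-voted label per voxel
```

Because each voxel takes the label most algorithms agree on, idiosyncratic errors of any single method tend to be outvoted, which matches the benchmark's finding that the fused result outranked every individual algorithm.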
01 Jan 1994
TL;DR: The main focus in MUCKE is on cleaning large scale Web image corpora and on proposing image representations which are closer to the human interpretation of images.
Abstract: MUCKE aims to mine a large volume of images, to structure them conceptually, and to use this conceptual structuring in order to improve large-scale image retrieval. The last decade witnessed important progress concerning low-level image representations. However, there are a number of problems which need to be solved in order to unleash the full potential of image mining in applications. The central problem with low-level representations is the mismatch between them and the human interpretation of image content. This problem can be instantiated, for instance, by the inability of existing descriptors to capture spatial relationships between the concepts represented, or by their inability to convey an explanation of why two images are similar in a content-based image retrieval framework. We start by assessing existing local descriptors for image classification and by proposing to use co-occurrence matrices to better capture spatial relationships in images. The main focus in MUCKE is on cleaning large-scale Web image corpora and on proposing image representations which are closer to the human interpretation of images. Consequently, we introduce methods which tackle these two problems and compare results to state-of-the-art methods. Note: some aspects of this deliverable are withheld at this time as they are pending review. Please contact the authors for a preview.
01 Jan 2006
TL;DR: This paper introduces a robust, learning-based brain extraction system (ROBEX), which combines a discriminative and a generative model to achieve the final result and shows that ROBEX provides significantly improved performance measures for almost every method/dataset combination.
Abstract: Automatic whole-brain extraction from magnetic resonance images (MRI), also known as skull stripping, is a key component in most neuroimage pipelines. As the first element in the chain, its robustness is critical for the overall performance of the system. Many skull stripping methods have been proposed, but the problem is not considered to be completely solved yet. Many systems in the literature have good performance on certain datasets (mostly the datasets they were trained/tuned on), but fail to produce satisfactory results when the acquisition conditions or study populations are different. In this paper we introduce a robust, learning-based brain extraction system (ROBEX). The method combines a discriminative and a generative model to achieve the final result. The discriminative model is a Random Forest classifier trained to detect the brain boundary; the generative model is a point distribution model that ensures that the result is plausible. When a new image is presented to the system, the generative model is explored to find the contour with highest likelihood according to the discriminative model. Because the target shape is in general not perfectly represented by the generative model, the contour is refined using graph cuts to obtain the final segmentation. Both models were trained using 92 scans from a proprietary dataset but they achieve a high degree of robustness on a variety of other datasets. ROBEX was compared with six other popular, publicly available methods (BET, BSE, FreeSurfer, AFNI, BridgeBurner, and GCUT) on three publicly available datasets (IBSR, LPBA40, and OASIS, 137 scans in total) that include a wide range of acquisition hardware and a highly variable population (different age groups, healthy/diseased). The results show that ROBEX provides significantly improved performance measures for almost every method/dataset combination.
TL;DR: Medical imaging systems: physical principles and image reconstruction algorithms for magnetic resonance tomography, ultrasound, and computed tomography (CT); applications include image enhancement, image registration, and functional magnetic resonance imaging (fMRI).
Abstract: Medical Image Analysis provides a forum for the dissemination of new research results in the field of medical and biological image analysis, with special emphasis on efforts related to the applications of computer vision, virtual reality and robotics to biomedical imaging problems. A bi-monthly journal, it publishes the highest quality, original papers that contribute to the basic science of processing, analysing and utilizing medical and biological images for these purposes.