Author

Felix Ambellan

Bio: Felix Ambellan is an academic researcher from Zuse Institute Berlin. The author has contributed to research in topics: Segmentation & Statistical shape analysis. The author has an h-index of 7 and has co-authored 19 publications receiving 271 citations.

Papers
Journal ArticleDOI
TL;DR: Combining localized classification via CNNs with statistical anatomical knowledge via SSMs results in a state‐of‐the‐art segmentation method for knee bones and cartilage from MRI data.

238 citations

Journal ArticleDOI
TL;DR: The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations.

112 citations

Book ChapterDOI
TL;DR: This chapter describes how to reconstruct three-dimensional anatomy from medical image data and how to build Statistical 3D Shape Models out of many such reconstructions, yielding a new kind of anatomy that allows not only quantitative analysis of anatomical variation but also visual exploration and educational visualization.
Abstract: In our chapter we describe how to reconstruct three-dimensional anatomy from medical image data and how to build Statistical 3D Shape Models out of many such reconstructions, yielding a new kind of anatomy that allows not only quantitative analysis of anatomical variation but also visual exploration and educational visualization. Future digital anatomy atlases will show not only a static (average) anatomy but also its normal or pathological variation in three or even four dimensions, hence illustrating growth and/or disease progression. Statistical Shape Models (SSMs) are geometric models that describe a collection of semantically similar objects in a very compact way. SSMs represent the average shape of many three-dimensional objects as well as their variation in shape. The creation of SSMs requires a correspondence mapping, which can be achieved e.g. by parameterization with a respective sampling. If a corresponding parameterization can be established over all shapes, the variation between individual shape characteristics can be investigated mathematically. We explain what Statistical Shape Models are and how they are constructed. Extensions of Statistical Shape Models are motivated for articulated coupled structures. In addition to shape, the appearance of objects is also integrated into the concept. Appearance is a visual feature independent of shape that depends on observers or imaging techniques. Typical appearances are, for instance, the color and intensity of the visible surface of an object under particular lighting conditions, or measurements of material properties with computed tomography (CT) or magnetic resonance imaging (MRI). Combining (articulated) Statistical Shape Models with statistical models of appearance leads to articulated Statistical Shape and Appearance Models (a-SSAMs). After giving various examples of SSMs for human organs, skeletal structures, faces, and bodies, we briefly describe clinical applications where such models have been successfully employed. Statistical Shape Models are the foundation for the analysis of anatomical cohort data, where characteristic shapes are correlated to demographic or epidemiologic data. SSMs consisting of several thousands of objects offer, in combination with statistical methods or machine learning techniques, the possibility to identify characteristic clusters, thus being the foundation for advanced diagnostic disease scoring.
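To make the construction sketched above concrete, the following is a minimal point-distribution-model example in Python/NumPy: given shapes that are already in correspondence, it computes the mean shape and the principal modes of variation via PCA. The array layout, function names, and mode count are illustrative assumptions, not the chapter's implementation.

import numpy as np

def build_ssm(shapes, num_modes=5):
    """Minimal statistical shape model (point distribution model) sketch.

    shapes: array of shape (n_samples, n_landmarks, 3) holding landmarks that
            are already in correspondence across samples.
    Returns the mean shape, the first num_modes modes of variation, and the
    variance captured by each mode.
    """
    n_samples, n_landmarks, dim = shapes.shape
    X = shapes.reshape(n_samples, n_landmarks * dim)   # one row per shape
    mean = X.mean(axis=0)                               # average shape vector
    Xc = X - mean                                       # center the data
    # PCA via SVD: rows of Vt are the principal modes of shape variation
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:num_modes]
    variances = (S[:num_modes] ** 2) / (n_samples - 1)
    return mean.reshape(n_landmarks, dim), modes, variances

def synthesize(mean, modes, coefficients):
    """Generate a new shape as the mean plus a weighted sum of modes."""
    flat = mean.ravel() + coefficients @ modes
    return flat.reshape(mean.shape)

Sampling the coefficients within a few standard deviations of zero (using the returned variances) generates plausible new shapes, which is the basis for the kind of atlas-style exploration of anatomical variation described above.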

46 citations

Journal ArticleDOI
TL;DR: A novel Riemannian framework for statistical shape analysis that accounts for the nonlinearity in shape variation is proposed, and a statistical shape descriptor is derived that outperforms the standard Euclidean approach in shape-based classification of morphological disorders.

39 citations

Posted Content
TL;DR: A detailed performance analysis of the eleven fully automated algorithms submitted by participating teams for benchmarking on the VerSe data, with the best performing algorithm achieving a vertebrae identification rate of 95% and a Dice coefficient of 90%.
Abstract: This work is a technical report concerning the Large Scale Vertebrae Segmentation Challenge (VerSe) organised in conjunction with MICCAI 2019. The challenge set-up, consisting of two tasks, vertebrae labelling and vertebrae segmentation, is detailed. A total of 160 multidetector CT scans closely resembling a typical spine-centred clinical setting were prepared and annotated at voxel level by a human-machine hybrid algorithm. Both the annotation protocol and the algorithm that aided the medical experts in this annotation process are presented. More importantly, eleven fully automated algorithms of the participating teams were submitted to be benchmarked on the VerSe data. This work presents a detailed performance analysis of these algorithms, with the best performing algorithm achieving a vertebrae identification rate of 95% and a Dice coefficient of 90%. VerSe'19 is an open-call challenge, and its image data along with the annotations and evaluation tools will continue to be publicly accessible through its online portal.
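For reference, the Dice coefficient reported above is the standard volumetric overlap score between a predicted and a reference segmentation mask. The short Python/NumPy function below is a generic sketch of that metric; the argument names and smoothing term are assumptions for illustration and do not reproduce the VerSe evaluation code.

import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice overlap between two binary segmentation masks of identical shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # 1.0 means perfect overlap; eps avoids division by zero for empty masks
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)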

24 citations


Cited by
Journal ArticleDOI
07 Sep 2020 - Sensors
TL;DR: The history of how the 3D CNN developed from its machine learning roots is traced, a brief mathematical description of the 3D CNN is given, and the preprocessing steps required for medical images before feeding them to 3D CNNs are provided.
Abstract: The rapid advancements in machine learning, graphics processing technologies and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This was exacerbated by the rapid advancements in convolutional neural network (CNN) based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, we provide a brief mathematical description of 3D CNN and provide the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical imaging analysis using 3D CNNs (and its variants) in different medical areas such as classification, segmentation, detection and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
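To illustrate the volumetric convolutions the review covers, here is a tiny 3D CNN classifier sketched in PyTorch. The layer widths, input patch size, and class count are arbitrary assumptions for demonstration and are not an architecture proposed in the paper.

import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Minimal 3D CNN for classifying volumetric (e.g. CT/MRI) patches."""
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1),  # convolve over depth, height, width
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                                      # halve every spatial dimension
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                              # global average pooling to 16 features
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        # x has shape (batch, channels, depth, height, width)
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Example: classify a single-channel 32x32x32 patch
logits = Tiny3DCNN()(torch.randn(1, 1, 32, 32, 32))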

238 citations

Journal ArticleDOI
TL;DR: Combining localized classification via CNNs with statistical anatomical knowledge via SSMs results in a state‐of‐the‐art segmentation method for knee bones and cartilage from MRI data.

238 citations

Journal ArticleDOI
TL;DR: This article surveys recent developments in medical image analysis with deep learning and provides a critical review of the related major aspects; it does not assume prior knowledge of deep learning and makes a significant contribution in explaining the core deep learning concepts to non-experts in the medical community.
Abstract: Medical image analysis is currently experiencing a paradigm shift due to deep learning. This technology has recently attracted so much interest of the Medical Imaging Community that it led to a specialized conference in “Medical Imaging with Deep Learning” in the year 2018. This paper surveys the recent developments in this direction and provides a critical review of the related major aspects. We organize the reviewed literature according to the underlying pattern recognition tasks and further sub-categorize it following a taxonomy based on human anatomy. This paper does not assume prior knowledge of deep learning and makes a significant contribution in explaining the core deep learning concepts to the non-experts in the Medical Community. This paper provides a unique computer vision/machine learning perspective taken on the advances of deep learning in medical imaging. This enables us to single out “lack of appropriately annotated large-scale data sets” as the core challenge (among other challenges) in this research direction. We draw on the insights from the sister research fields of computer vision, pattern recognition, and machine learning, where the techniques of dealing with such challenges have already matured, to provide promising directions for the Medical Imaging Community to fully harness deep learning in the future.

148 citations

Journal ArticleDOI
TL;DR: The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations.

112 citations

Journal ArticleDOI
29 Jul 2020
TL;DR: A publicly available dataset providing vertebral segmentation masks for spine CT images together with annotations of vertebral fractures or abnormalities at each vertebral level.
Abstract: This dataset provides vertebral segmentation masks for spine CT images and annotations of vertebral fractures or abnormalities per vertebral level; it is available from https://osf.io/nqjyw/ and is...

84 citations