Author

Marco Feuerstein

Other affiliations: Nagoya University
Bio: Marco Feuerstein is an academic researcher from Technische Universität München. The author has contributed to research in the topics of Augmented reality and Image registration, has an h-index of 21, and has co-authored 44 publications receiving 1,712 citations. Previous affiliations of Marco Feuerstein include Nagoya University.

Papers
Journal ArticleDOI
TL;DR: This paper not only reviews the related literature but also establishes the relationship between subsets of this body of work in medical augmented reality and discusses the remaining challenges for this young and active multidisciplinary research community.
Abstract: The impressive development of medical imaging technology during the last decades has provided physicians with an increasing amount of patient-specific anatomical and functional data. In addition, the increasing use of non-ionizing real-time imaging, in particular ultrasound and optical imaging, during surgical procedures created the need for the design and development of new visualization and display technology, allowing physicians to take full advantage of rich sources of heterogeneous preoperative and intraoperative data. During the 1990s, medical augmented reality was proposed as a paradigm bringing new visualization and interaction solutions into perspective. This paper not only reviews the related literature but also establishes the relationship between subsets of this body of work in medical augmented reality. It finally discusses the remaining challenges for this young and active multidisciplinary research community.

301 citations

Journal ArticleDOI
TL;DR: A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.
Abstract: This paper describes a framework for establishing a reference airway tree segmentation, which was used to quantitatively evaluate 15 different airway tree extraction algorithms in a standardized manner. Because of the sheer difficulty involved in manually constructing a complete reference standard from scratch, we propose to construct the reference using results from all algorithms that are to be evaluated. We start by subdividing each segmented airway tree into its individual branch segments. Each branch segment is then visually scored by trained observers to determine whether or not it is a correctly segmented part of the airway tree. Finally, the reference airway trees are constructed by taking the union of all correctly extracted branch segments. Fifteen airway tree extraction algorithms from different research groups are evaluated on a diverse set of 20 chest computed tomography (CT) scans of subjects ranging from healthy volunteers to patients with severe pathologies, scanned at different sites, with different CT scanner brands, models, and scanning protocols. Three performance measures covering different aspects of segmentation quality were computed for all participating algorithms. Results from the evaluation showed that no single algorithm could extract more than an average of 74% of the total length of all branches in the reference standard, indicating substantial differences between the algorithms. A fusion scheme that obtained superior results is presented, demonstrating that there is complementary information provided by the different algorithms and there is still room for further improvements in airway segmentation algorithms.
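
The reference construction described above is essentially a union of all branch segments that trained observers accepted as correct, against which each individual algorithm can then be measured. Below is a minimal Python sketch of that idea; the data structures and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of building a reference airway segmentation as the union of branch
# segments scored as correct, plus one simple performance measure.
# Data structures are illustrative assumptions, not the paper's implementation.
import numpy as np

def build_reference(branch_masks, is_correct):
    """branch_masks: list of boolean 3-D arrays, one per branch segment
    (pooled over all participating algorithms); is_correct: list of bools
    from the visual scoring step. Returns the reference as a boolean volume."""
    reference = np.zeros_like(branch_masks[0], dtype=bool)
    for mask, ok in zip(branch_masks, is_correct):
        if ok:
            reference |= mask  # union of all correctly extracted branches
    return reference

def detected_fraction(segmentation, reference):
    """Fraction of reference voxels covered by one algorithm's segmentation."""
    return np.logical_and(segmentation, reference).sum() / reference.sum()
```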

241 citations

Journal ArticleDOI
TL;DR: A balancing of the key challenges and possible benefits of endoscopic navigation refines the perspectives of this increasingly important discipline of computer-aided medical procedures.
Abstract: Despite rapid developments in the research areas of medical imaging, medical image processing, and robotics, the use of computer assistance in surgical routine is still limited to diagnostics, surgical planning, and interventions on mostly rigid structures. In order to establish a computer-aided workflow from diagnosis to surgical treatment and follow-up, several proposals for computer-assisted soft tissue interventions have been made in recent years. By means of different pre- and intraoperative information sources, such as surgical planning, intraoperative imaging, and tracking devices, surgical navigation systems aim to support surgeons in localizing anatomical targets, observing critical structures, and sparing healthy tissue. Current research in particular addresses the problem of organ shift and tissue deformation, and obstacles in communication between navigation system and surgeon. In this paper, we review computer-assisted navigation systems for soft tissue surgery. We concentrate on approaches that can be applied in endoscopic thoracic and abdominal surgery, because endoscopic surgery has special needs for image guidance due to limitations in perception. Furthermore, this paper informs the reader about new trends and technologies in the area of computer-assisted surgery. Finally, a balancing of the key challenges and possible benefits of endoscopic navigation refines the perspectives of this increasingly important discipline of computer-aided medical procedures.

185 citations

Journal ArticleDOI
TL;DR: An optically tracked mobile C-arm with intraoperative cone-beam CT imaging capability is proposed, giving the surgeon advanced visual aid for the localization of veins, arteries, and bile ducts to be divided or sealed.
Abstract: In recent years, an increasing number of liver tumor indications were treated by minimally invasive laparoscopic resection. Besides the restricted view, two major intraoperative issues in laparoscopic liver resection are the optimal planning of ports as well as the enhanced visualization of (hidden) vessels, which supply the tumorous liver segment and thus need to be divided (e.g., clipped) prior to the resection. We propose an intuitive and precise method to plan the placement of ports. Preoperatively, self-adhesive fiducials are affixed to the patient's skin and a computed tomography (CT) data set is acquired while contrasting the liver vessels. Immediately prior to the intervention, the laparoscope is moved around these fiducials, which are automatically reconstructed to register the patient to the preoperative imaging data set. This enables the simulation of a camera flight through the patient's interior along the laparoscope's or instruments' axes to easily validate potential ports. Intraoperatively, surgeons need to update their surgical planning based on actual patient data after organ deformations mainly caused by the application of carbon dioxide pneumoperitoneum. Therefore, preoperative imaging data can hardly be used. Instead, we propose to use an optically tracked mobile C-arm providing cone-beam CT imaging capability intraoperatively. After patient positioning, port placement, and carbon dioxide insufflation, the liver vessels are contrasted and a 3-D volume is reconstructed during patient exhalation. Without any further need for patient registration, the reconstructed volume can be directly augmented on the live laparoscope video, since prior calibration enables both the volume and the laparoscope to be positioned and oriented in the tracking coordinate frame. The augmentation provides the surgeon with advanced visual aid for the localization of veins, arteries, and bile ducts to be divided or sealed.
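
The fiducial-based patient registration step described above amounts to a point-based rigid alignment between the fiducial positions reconstructed from the laparoscope images and the same fiducials in the CT data set. A minimal sketch of the standard SVD (Kabsch) solution follows; variable names are illustrative, and the point correspondences are assumed to be already matched.

```python
# Sketch of point-based rigid registration (Kabsch solution, no scaling)
# between fiducial positions in CT coordinates and in the tracking frame.
# Variable names are illustrative assumptions.
import numpy as np

def rigid_register(ct_points, tracked_points):
    """Both inputs are (N, 3) arrays of corresponding fiducial positions.
    Returns rotation R and translation t such that R @ ct + t ~ tracked."""
    ct_centroid = ct_points.mean(axis=0)
    tr_centroid = tracked_points.mean(axis=0)
    H = (ct_points - ct_centroid).T @ (tracked_points - tr_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tr_centroid - R @ ct_centroid
    return R, t
```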

137 citations

Journal ArticleDOI
TL;DR: The concept of a tangible/controllable virtual mirror for medical AR applications intuitively augments the direct view of the surgeon with all desired views on volumetric medical imaging data registered with the operation site without moving around the operating table or displacing the patient.
Abstract: Medical augmented reality (AR) has been widely discussed within the medical imaging as well as computer-aided surgery communities. Different systems for exemplary medical applications have been proposed, some of which produced promising results. One major issue still hindering AR technology from being regularly used in medical applications is the interaction between the physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance keyboard and mouse, are not adequate for interacting with visualized medical 3-D imaging data in an AR environment. This paper introduces the concept of a tangible/controllable virtual mirror for medical AR applications. This concept intuitively augments the direct view of the surgeon with all desired views on volumetric medical imaging data registered with the operation site, without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potential of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer-aided interventions.
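
Geometrically, a virtual mirror can be rendered by reflecting the virtual scene (or a virtual camera) about the mirror plane. The sketch below shows the standard homogeneous reflection matrix for a plane with unit normal n and offset d; it only illustrates the underlying transform and is not the authors' implementation.

```python
# Sketch: homogeneous reflection matrix about a mirror plane n . x = d
# (n a unit normal). Applying it to the virtual scene or camera yields the
# mirrored view; an illustrative assumption, not the authors' rendering code.
import numpy as np

def mirror_matrix(n, d):
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # reflect directions
    M[:3, 3] = 2.0 * d * n                        # keep points on the plane fixed
    return M

# Example: mirror about the plane z = 0.1 (normal along +z)
M = mirror_matrix([0.0, 0.0, 1.0], 0.1)
```

Note that a reflection flips handedness, so a renderer applying this matrix to the camera typically also has to invert its face-culling winding order.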

102 citations


Cited by
Journal ArticleDOI
TL;DR: Two specific computer-aided detection problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification, are studied, achieving state-of-the-art performance on mediastinal LN detection and reporting the first five-fold cross-validation classification results.
Abstract: Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs for medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on a natural image dataset for medical image tasks. In this paper, we exploit three important, but previously understudied, factors of applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain from 5 thousand to 160 million parameters and vary in their numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis, and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
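
The transfer-learning route discussed in the abstract, fine-tuning an ImageNet-pretrained CNN on a medical classification task, commonly takes a form like the following PyTorch sketch. The backbone choice, class count, and hyperparameters are placeholders, not the paper's actual configuration.

```python
# Sketch of fine-tuning an ImageNet-pretrained CNN for a medical classification
# task, in the spirit of the transfer-learning approach discussed above.
# Backbone, class count, and hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5                                   # e.g., ILD categories (placeholder)
model = models.resnet18(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task-specific head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch of (N, 3, H, W) image slices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```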

4,249 citations

Journal ArticleDOI
TL;DR: The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) as mentioned in this paper was organized in conjunction with the MICCAI 2012 and 2013 conferences, and twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low and high grade glioma patients.
Abstract: In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients (manually annotated by up to four raters) and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
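
The fusion result mentioned above comes from a hierarchical majority vote over the individual algorithms' segmentations. As a simplified illustration of the fusion idea only (not the paper's hierarchical scheme), a plain per-voxel majority vote over integer label maps can be sketched as follows.

```python
# Sketch of per-voxel majority-vote fusion of several tumor segmentations.
# A simplification of the paper's hierarchical majority vote, shown only to
# illustrate the fusion idea; inputs are integer label volumes of equal shape.
import numpy as np

def majority_vote(label_maps):
    """label_maps: list of K integer arrays with identical shape.
    Returns the per-voxel most frequent label."""
    stacked = np.stack(label_maps, axis=0)                       # (K, ...)
    n_labels = int(stacked.max()) + 1
    counts = np.stack([(stacked == lab).sum(axis=0)              # votes per label
                       for lab in range(n_labels)], axis=0)
    return counts.argmax(axis=0)                                 # fused label map
```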

3,699 citations

01 Jan 2004
TL;DR: Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance and describes numerous important application areas such as image based rendering and digital libraries.
Abstract: From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and also offers sufficient detail to be able to build useful applications. Users learn techniques that have proven to be useful through first-hand experience and a wide range of mathematical methods. A CD-ROM with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image-based rendering and digital libraries. Many important algorithms are broken down and illustrated in pseudocode. The book is appropriate for use by engineers as a comprehensive reference to the computer vision enterprise.

3,627 citations

Proceedings Article
01 Jan 1989
TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Abstract: A scheme is developed for classifying the types of motion perceived by a humanlike robot. It is assumed that the robot receives visual images of the scene using a perspective system model. Equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.

2,000 citations