
Showing papers by "Bernd Fischer published in 2009"


Journal ArticleDOI
01 Jan 2009
TL;DR: A method of combining anatomical landmark information with a fast non-parametric intensity registration approach that improves both the mean point distance and the percentage of point distances above 3 mm compared to rigid and thin-plate spline registration based only on landmarks.
Abstract: An important issue in computer-assisted surgery of the liver is a fast and reliable transfer of preoperative resection plans to the intraoperative situation. One problem is to match the planning data, derived from preoperative CT or MR images, with 3D ultrasound images of the liver acquired during surgery. As the liver deforms significantly in the intraoperative situation, non-rigid registration is necessary. This is a particularly challenging task because pre- and intraoperative image data stem from different modalities and ultrasound images are generally very noisy. One way to overcome these problems is to incorporate prior knowledge into the registration process. We propose a method of combining anatomical landmark information with a fast non-parametric intensity registration approach. Mathematically, this leads to a constrained optimization problem. As distance measure we use the normalized gradient field, which allows for multimodal image registration. A qualitative and quantitative validation on clinical liver data sets of three different patients has been performed. We used the distance of dense corresponding points on vessel center lines for quantitative validation. The combined landmark and intensity approach improves both the mean point distance and the percentage of point distances above 3 mm compared to rigid and thin-plate spline registration based only on landmarks. The proposed algorithm offers the possibility to incorporate additional a priori knowledge, in terms of a few landmarks, provided by a human expert into a non-rigid registration process.
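As a loose illustration of the normalized gradient field distance named in the abstract, the sketch below computes an NGF-style measure for two 2D NumPy arrays. It is not the authors' implementation; the function name ngf_distance and the edge parameter eps are assumptions for this example.

```python
# Minimal sketch (not the authors' code) of a normalized gradient field (NGF)
# distance for two 2D images; `eps` regularizes the gradient normalization.
import numpy as np

def ngf_distance(reference, template, eps=1e-2):
    gr = np.array(np.gradient(reference.astype(float)))   # shape (2, H, W)
    gt = np.array(np.gradient(template.astype(float)))
    nr = gr / np.sqrt(np.sum(gr**2, axis=0) + eps**2)      # regularized unit gradients
    nt = gt / np.sqrt(np.sum(gt**2, axis=0) + eps**2)
    alignment = np.sum(nr * nt, axis=0)                    # cosine of the gradient angle
    return np.mean(1.0 - alignment**2)                     # small where gradients align

# identical images give a (near) zero distance, unrelated ones a larger value
img = np.random.default_rng(0).random((64, 64))
print(ngf_distance(img, img), ngf_distance(img, img.T))
```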

140 citations


Journal ArticleDOI
TL;DR: A novel model for the HPA system is derived, based on three simple rules that constitute a principle of homeostasis and include only the most substantive physiological elements; it is capable of simulating clinical trials and leads to insights about diseases like depression, obesity, or diabetes.
Abstract: The hypothalamus-pituitary-adrenal (HPA) system is closely related to stress and the restoration of homeostasis. This system is stimulated in the second half of the night, decreases its activity in the daytime, and reaches the homeostatic level during the late evening. In this paper, we derive and discuss a novel model for the HPA system. It is based on three simple rules that constitute a principle of homeostasis and include only the most substantive physiological elements. In contrast to other models, its main components include, apart from the conventional negative feedback ingredient, a positive feedback loop. To validate the model, we present a parameter estimation procedure that enables one to adapt the model to clinical observations. Using this methodology, we are able to show that the novel model is capable of simulating clinical trials. Furthermore, the stationary state of the system is investigated. We show that, under mild conditions, the system always has a well-defined set-point, which reflects the clinical situation to be modeled. Finally, the computed parameters may be interpreted from a physiological point of view, thereby leading to insights about diseases like depression, obesity, or diabetes.
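The abstract does not reproduce the model equations, so the sketch below only illustrates the general idea of an HPA-type feedback system settling to a set-point. The variables, rate constants, and the purely negative feedback structure are invented for demonstration; the paper's model additionally contains a positive feedback loop, which is omitted here.

```python
# Illustrative toy only: a generic CRH -> ACTH -> cortisol cascade with
# negative feedback of cortisol on the upper levels, integrated with SciPy.
# All rate constants are made up and are not the paper's parameters.
import numpy as np
from scipy.integrate import solve_ivp

def hpa_rhs(t, y, a=1.0, b=1.0, c=1.0, k=2.0):
    crh, acth, cort = y
    dcrh = a / (1.0 + k * cort) - crh          # cortisol inhibits CRH release
    dacth = b * crh / (1.0 + k * cort) - acth  # and ACTH release
    dcort = c * acth - cort                    # ACTH drives cortisol production
    return [dcrh, dacth, dcort]

sol = solve_ivp(hpa_rhs, (0.0, 48.0), [0.5, 0.5, 0.5], max_step=0.1)
print(sol.y[:, -1])                            # approach to a steady state (set-point)
```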

51 citations


Journal ArticleDOI
TL;DR: In this article, a new approach for motion correction is introduced that is based purely on the measured SPECT data and therefore belongs to the class of data-driven motion correction algorithms, yet overcomes some of the shortcomings of conventional methods.
Abstract: Due to the long imaging times in SPECT, patient motion is inevitable and constitutes a serious problem for any reconstruction algorithm. The measured inconsistent projection data lead to reconstruction artifacts which can significantly affect the diagnostic accuracy of SPECT if not corrected. To address this problem a new approach for motion correction is introduced. It is purely based on the measured SPECT data and therefore belongs to the data-driven motion correction algorithm class. However, it does overcome some of the shortcomings of conventional methods. This is mainly due to the innovative idea to combine reconstruction and motion correction in one optimization problem. The scheme allows for the correction of abrupt and gradual patient motion. To demonstrate the performance of the proposed scheme extensive 3D tests with numerical phantoms for 3D rigid motion are presented. In addition, a test with real patient data is shown. Each test shows an impressive improvement of the quality of the reconstructed image. In this note, only rigid movements are considered. The extension to non-linear motion, as for example breathing or cardiac motion, is straightforward and will be investigated in a forthcoming paper.
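The following toy sketch illustrates the core idea of combining reconstruction and motion estimation in one objective, using 1D frames with unknown integer shifts and simple alternating minimization. It is only an analogy, not the paper's SPECT reconstruction scheme; the signal, shifts, and search range are invented.

```python
# Toy illustration: jointly estimate a 1D "reconstruction" f and per-frame
# shifts s_k by alternating minimization of sum_k || roll(f, s_k) - p_k ||^2.
import numpy as np

rng = np.random.default_rng(0)
truth = np.exp(-0.5 * ((np.arange(64) - 32) / 5.0) ** 2)
true_shifts = [0, 3, -2, 5]
frames = [np.roll(truth, s) + 0.02 * rng.standard_normal(64) for s in true_shifts]

offsets = list(range(-8, 9))
f = np.mean(frames, axis=0)                 # initial reconstruction, ignoring motion
shifts = [0] * len(frames)
for _ in range(5):                          # alternate motion and reconstruction updates
    for k, p in enumerate(frames):          # pick the best integer shift per frame
        errs = [np.sum((np.roll(f, s) - p) ** 2) for s in offsets]
        shifts[k] = offsets[int(np.argmin(errs))]
    f = np.mean([np.roll(p, -s) for p, s in zip(frames, shifts)], axis=0)

print(shifts)                               # expected to recover [0, 3, -2, 5]
```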

38 citations


Book ChapterDOI
01 Jan 2009
TL;DR: This work introduces inequality constraints that allow for a sphere-like tolerance around each landmark, offering the possibility to control the quality of the matching of each landmark pair individually within a variational non-parametric registration framework.
Abstract: The incorporation of additional user knowledge into a nonrigid registration process is a promising topic in modern registration schemes. The combination of intensity based registration and some interactively chosen landmark pairs is a major approach in this direction. There exist different possibilities to incorporate landmark pairs into a variational non-parametric registration framework. As the interactive localization of point landmarks is always prone to errors, a demand for precise landmark matching is bound to fail. Here, the treatment of the distances of corresponding landmarks as penalties within a constrained optimization problem offers the possibility to control the quality of the matching of each landmark pair individually. More precisely, we introduce inequality constraints, which allow for a sphere-like tolerance around each landmark. We illustrate the performance of this new approach for artificial 2D images as well as for the challenging registration of preoperative CT data to intra-operative 3D ultrasound data of the liver.
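A minimal sketch of the sphere-like tolerance idea, assuming a toy 2D affine transform, invented landmark coordinates, and a quadratic surrogate in place of the intensity term: each inequality constraint keeps a mapped landmark within radius r of its counterpart, here handled by SciPy's SLSQP solver.

```python
# Sketch of tolerance-sphere landmark constraints around a toy objective.
import numpy as np
from scipy.optimize import minimize

p = np.array([[10.0, 10.0], [30.0, 20.0], [15.0, 40.0]])   # template landmarks (invented)
q = np.array([[12.0, 11.0], [31.0, 23.0], [14.0, 42.0]])   # reference landmarks (invented)
r = 2.0                                                     # tolerance radius

def transform(x, pts):                      # x = [a11, a12, a21, a22, t1, t2]
    A = x[:4].reshape(2, 2)
    return pts @ A.T + x[4:]

def intensity_surrogate(x):                 # stands in for the image distance term
    return np.sum((x[:4] - np.eye(2).ravel()) ** 2) + 1e-3 * np.sum(x[4:] ** 2)

cons = [{"type": "ineq",
         "fun": lambda x, j=j: r**2 - np.sum((transform(x, p)[j] - q[j]) ** 2)}
        for j in range(len(p))]

x0 = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])               # identity transform
res = minimize(intensity_surrogate, x0, constraints=cons, method="SLSQP")
print(np.linalg.norm(transform(res.x, p) - q, axis=1))       # all distances <= r
```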

14 citations


Proceedings ArticleDOI
TL;DR: A novel method is derived, based on variational image registration and additionally given anatomical landmarks, that embeds the landmark information as inequality hard constraints and thereby allows for inaccurately placed landmarks in navigated liver surgery.
Abstract: In navigated liver surgery the key challenge is the registration of pre-operative planning and intra-operative navigation data. Due to the patient's individual anatomy, the planning is based on segmented pre-operative CT scans, whereas ultrasound captures the actual intra-operative situation. In this paper we derive a novel method based on variational image registration and additionally given anatomical landmarks. For the first time we embed the landmark information as inequality hard constraints, thereby allowing for inaccurately placed landmarks. The resulting optimization problem makes it possible to ensure the accuracy of the landmark fit while simultaneously performing intensity-based image registration. Following the discretize-then-optimize approach, the overall problem is solved by a generalized Gauss-Newton method. The arising linear system is solved with the MinRes solver. We demonstrate the applicability of the new approach on clinical data, with convincing results.
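As a sketch of the discretize-then-optimize machinery named in the abstract, the following solves a toy nonlinear least-squares problem with Gauss-Newton steps whose symmetric linear systems are handled by SciPy's MinRes; the residual and Jacobian are placeholders, not the registration functional.

```python
# Gauss-Newton iteration with a MinRes inner solve, on a toy residual.
import numpy as np
from scipy.sparse.linalg import minres

def residual(x):                # placeholder residual r(x)
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def jacobian(x):
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

x = np.array([1.0, 1.0])
for _ in range(10):
    J, r = jacobian(x), residual(x)
    H = J.T @ J + 1e-8 * np.eye(2)      # Gauss-Newton approximation of the Hessian
    step, _ = minres(H, -J.T @ r)       # symmetric system, so MinRes applies
    x = x + step
print(x, residual(x))                   # converges to a root, residual near zero
```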

13 citations


Book ChapterDOI
01 Jan 2009
TL;DR: An approach using tensor grids is presented, which provide a sparser image representation and thereby allow the highest image resolution to be used locally; it shows that one may considerably save on time and memory while preserving the registration quality in the regions of interest.
Abstract: In non-parametric image registration it is often not possible to work with the original resolution of the images due to high processing times and lack of memory. However, for some medical applications the information contained in the original resolution is crucial in certain regions of the image while being negligible in others. To adapt to this problem we will present an approach using tensor grids, which provide a sparser image representation and thereby allow the use of the highest image resolution locally. Applying the presented scheme to a lung ventilation estimation shows that one may considerably save on time and memory while preserving the registration quality in the regions of interest.
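A minimal sketch of a tensor grid, assuming a synthetic image and a hand-picked region of interest: the grid remains a tensor product of two 1D axes with fine spacing inside the ROI and coarse spacing outside, so far fewer samples are stored than for the full-resolution image.

```python
# Tensor-product grid with locally refined spacing, sampled from a toy image.
import numpy as np
from scipy.ndimage import map_coordinates

image = np.random.default_rng(0).random((256, 256))   # stand-in for a CT slice

def tensor_axis(n, roi, fine=1.0, coarse=8.0):
    # 1D sample positions: step `fine` inside [roi[0], roi[1]), `coarse` outside
    return np.concatenate([np.arange(0, roi[0], coarse),
                           np.arange(roi[0], roi[1], fine),
                           np.arange(roi[1], n, coarse)])

xs = tensor_axis(256, (100, 160))                      # ROI chosen for illustration
ys = tensor_axis(256, (80, 200))
X, Y = np.meshgrid(xs, ys, indexing="ij")
samples = map_coordinates(image, [X, Y], order=1)      # image values on the tensor grid
print(image.size, samples.size)                        # full grid vs. much sparser grid
```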

3 citations


Book ChapterDOI
01 Jan 2009
TL;DR: The contribution of this work is a consistent modeling of a combined intensity and landmark registration approach as an inequality constrained optimization problem that guarantees that each reference landmark lies within an error ellipsoid around the corresponding template landmark at the end of the registration process.
Abstract: The registration of medical images containing soft tissue like inner organs, muscles, fat, etc., is challenging due to complex deformations between different image acquisitions. Despite different approaches to obtain smooth transformations, the number of feasible transformations is still huge, and ambiguous local image content may lead to unwanted results. The incorporation of additional user knowledge is a promising way to restrict the number of possible non-rigid transformations and to increase the probability of finding a clinically reasonable solution. A small number of pre-operatively and interactively defined landmarks is a straightforward example of such expert knowledge. Typically, when vessels appear in the image data, a natural choice is to place landmarks at vessel branchings. Here, we present a generalization that also allows the use of corresponding vessel segments. To this end, we introduce a registration scheme that can handle anisotropic localization uncertainties. The contribution of this work is a consistent modeling of a combined intensity and landmark registration approach as an inequality constrained optimization problem. This guarantees that each reference landmark lies within an error ellipsoid around the corresponding template landmark at the end of the registration process. First results are presented for the registration of preoperative CT images to intra-operative 3D ultrasound data of the liver, an important issue in an intra-operative navigation system.
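To illustrate the anisotropic (error ellipsoid) constraint described above, the sketch below encodes a larger tolerance along an assumed vessel direction than across it via a Mahalanobis distance; the direction and the 5 mm / 1 mm tolerances are invented for this example.

```python
# Ellipsoidal landmark tolerance as a Mahalanobis-distance constraint.
import numpy as np

direction = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)     # assumed local vessel direction
across1 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
across2 = np.array([0.0, 0.0, 1.0])
# covariance: 5 mm tolerance along the vessel, 1 mm across it (illustrative values)
Sigma = (5.0**2 * np.outer(direction, direction)
         + 1.0**2 * np.outer(across1, across1)
         + 1.0**2 * np.outer(across2, across2))

def inside_ellipsoid(mapped_landmark, reference_landmark):
    d = mapped_landmark - reference_landmark
    return d @ np.linalg.solve(Sigma, d) <= 1.0          # Mahalanobis constraint

print(inside_ellipsoid(np.array([3.0, 3.0, 0.2]), np.zeros(3)))  # along the vessel: allowed
print(inside_ellipsoid(np.array([0.0, 0.0, 2.0]), np.zeros(3)))  # across the vessel: rejected
```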

2 citations


Proceedings ArticleDOI
TL;DR: An approach to generate a synthetic ground truth database for the validation of image registration and segmentation is proposed, and its application is illustrated using the example of the validation of a registration procedure, using 50 magnetic resonance images from different patients and two atlases.
Abstract: Image registration and segmentation are two important tasks in medical image analysis. However, the validation of algorithms for non-linear registration in particular often poses significant challenges [1, 2]: anatomical labeling based on scans for the validation of segmentation algorithms is often not available and is tedious to obtain. One possibility to obtain suitable ground truth is to use anatomically labelled atlas images. Such atlas images are, however, generally limited to single subjects, and the displacement field of the registration between the template and an arbitrary data set is unknown. Therefore, the precise registration error cannot be determined, and approximations of a performance measure like the consistency error must be adapted. Thus, validation requires that some form of ground truth is available. In this work, an approach to generate a synthetic ground truth database for the validation of image registration and segmentation is proposed. Its application is illustrated using the example of the validation of a registration procedure, using 50 magnetic resonance images from different patients and two atlases. Three different non-linear image registration methods were tested to obtain a synthetic validation database consisting of 50 anatomically labelled brain scans.
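A minimal sketch of the label propagation step underlying such a synthetic ground truth, assuming a toy atlas label image and a hypothetical displacement field (here a constant shift): labels are warped with nearest-neighbour interpolation so they remain integer valued.

```python
# Warp an atlas label image with a (hypothetical) displacement field.
import numpy as np
from scipy.ndimage import map_coordinates

atlas_labels = np.zeros((64, 64), dtype=np.int32)
atlas_labels[20:40, 20:40] = 1                        # one toy anatomical label

# hypothetical displacement field u: constant 3-voxel shift along the first axis
u = np.zeros((2, 64, 64))
u[0] += 3.0

grid = np.mgrid[0:64, 0:64].astype(float)
warped_labels = map_coordinates(atlas_labels, grid + u, order=0)   # nearest neighbour
print(np.unique(warped_labels))                        # labels preserved: [0 1]
```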

1 citation


Book ChapterDOI
01 Jan 2009
TL;DR: This work presents the development of a method by which the symmetry of brain scans along the sagittal plane can be improved.
Abstract: Determining and modifying the local symmetry of brain scans along the sagittal plane is of interest for a number of neurological applications. For example, a voxelwise comparison of the right and left brain hemispheres can only provide information about the localization of lesions if, after transformation, a brain scan exhibits the highest possible symmetry. Another field of application is the visualization of medial brain sections, for which the separating surface between the two hemispheres should be as planar as possible. This work presents the development of a method by which the symmetry of brain scans along the sagittal plane can be improved. This is achieved using active contours, which are driven by a novel cost function. Experiments with structural MR images at the end of the paper demonstrate the capability of the method.
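The sketch below is not the active contour method of the paper; it only illustrates the kind of voxelwise left/right symmetry measure such a method aims to improve, by mirroring a toy axial slice across the midsagittal line.

```python
# Voxelwise left/right asymmetry of a toy slice mirrored across the midline.
import numpy as np

slice_ = np.random.default_rng(0).random((128, 128))   # stand-in for an MR slice
mirrored = slice_[:, ::-1]                              # flip across the midsagittal line
asymmetry = np.mean(np.abs(slice_ - mirrored))          # 0 for a perfectly symmetric slice
print(asymmetry)
```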

Proceedings ArticleDOI
TL;DR: In this article, the authors make use of typical vector analysis operators like the divergence and curl operator to identify meaningful portions of the displacement field to be used in a follow-up run.
Abstract: Image registration is an important and active area of medical image processing. Given two images, the idea is to compute a reasonable displacement field which deforms one image such that it becomes similar to the other image. The design of an automatic registration scheme is a tricky task, and often the computed displacement field has to be discarded when the outcome is not satisfactory. On the other hand, however, any displacement field does contain useful information on the underlying images. It is the idea of this note to utilize this information and to benefit from even an unsuccessful attempt for the subsequent treatment of the images. Here, we make use of typical vector analysis operators like the divergence and curl operator to identify meaningful portions of the displacement field to be used in a follow-up run. The idea is illustrated with the help of an academic as well as a real-life medical example. It is demonstrated how the novel methodology may be used to substantially improve a registration result and to solve a difficult segmentation problem.
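As a small sketch of the vector-analysis operators mentioned above, the following computes the divergence and curl of a toy 2D displacement field on a regular grid with NumPy; the expanding field is invented for illustration.

```python
# Divergence and curl of a toy 2D displacement field u = (u_x, u_y).
import numpy as np

y, x = np.mgrid[0:64, 0:64].astype(float)
ux = 0.01 * (x - 32.0)          # toy field: uniform expansion around the image centre
uy = 0.01 * (y - 32.0)

dux_dy, dux_dx = np.gradient(ux)   # np.gradient returns derivatives along (y, x)
duy_dy, duy_dx = np.gradient(uy)

divergence = dux_dx + duy_dy    # highlights local expansion / contraction
curl = duy_dx - dux_dy          # highlights local rotation
print(divergence.mean(), np.abs(curl).max())   # about 0.02 and 0 for this field
```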