
Showing papers by "Derek L. G. Hill published in 1999"


Journal ArticleDOI
TL;DR: The results clearly indicate that the proposed nonrigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.
Abstract: In this paper the authors present a new approach for the nonrigid registration of contrast-enhanced breast MRI. A hierarchical transformation model of the motion of the breast has been developed. The global motion of the breast is modeled by an affine transformation while the local breast motion is described by a free-form deformation (FFD) based on B-splines. Normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes as a result of the contrast enhancement. Registration is achieved by minimizing a cost function, which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity. The algorithm has been applied to the fully automated registration of three-dimensional (3-D) breast MRI in volunteers and patients. In particular, the authors have compared the results of the proposed nonrigid registration algorithm to those obtained using rigid and affine registration techniques. The results clearly indicate that the nonrigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.
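
As a reading aid only, here is a minimal sketch of the kind of cost described in the abstract, assuming a generic smoothness penalty; the names (nmi, registration_cost, lambda_smooth) are illustrative and not taken from the paper, and the normalised mutual information is written in its usual (H(A) + H(B)) / H(A, B) form.

import numpy as np

def nmi(a, b, bins=64):
    """Normalised mutual information, (H(A) + H(B)) / H(A, B), of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(p_a) + entropy(p_b)) / entropy(p_ab)

def registration_cost(reference, transformed, smoothness_penalty, lambda_smooth=0.01):
    """Combined cost to be minimised: -similarity + lambda * smoothness."""
    return -nmi(reference, transformed) + lambda_smooth * smoothness_penalty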

5,490 citations


Journal ArticleDOI
TL;DR: Results indicate that the normalised entropy measure provides significantly improved behaviour over a range of imaged fields of view.

2,364 citations


Book ChapterDOI
28 Jun 1999
TL;DR: The authors have evaluated eight different similarity measures used for rigid body registration of serial magnetic resonance (MR) brain scans and shown that of the eight measures tested, the ones based on joint entropy produced the best consistency.
Abstract: We investigated eight different similarity measures for rigid body registration of serial MR brain scans. To assess their accuracy we used a set of 33 clinical 3D serial MR images, manually segmented by a radiologist to remove deformable extra-dural tissue, and also simulated brain model data. For each measure we determined the consistency of registration transformations for both sets of segmented and unsegmented data. The difference images produced by registration with and without segmentation were visually inspected by two radiologists in a blinded study. We have shown that of the measures tested, those based on joint entropy produced the best consistency and seemed least sensitive to the presence of extra-dural tissue. For this data the difference in accuracy of these joint entropy measures, with or without brain segmentation, was within the threshold of visually detectable change in the difference images.
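
For illustration, one common way to check the consistency of registration transformations (an assumption here, not necessarily the protocol used in the paper) is to compose rigid-body transforms around a loop of scans and measure how far the result is from the identity.

import numpy as np

def loop_inconsistency(t_ab, t_bc, t_ca, points):
    """Mean displacement of test points under the composed loop A->B->C->A.

    Each transform is a 4x4 homogeneous matrix; points is an (N, 3) array.
    A perfectly consistent set of registrations would leave the points fixed.
    """
    loop = t_ca @ t_bc @ t_ab
    homogeneous = np.c_[points, np.ones(len(points))]
    moved = (loop @ homogeneous.T).T[:, :3]
    return np.mean(np.linalg.norm(moved - points, axis=1))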

180 citations


Journal ArticleDOI
TL;DR: A new post-processing strategy is presented that can reduce artifacts due to in-plane, rigid-body motion in a time comparable to that required to re-scan a patient.
Abstract: Patient motion during the acquisition of a magnetic resonance image can cause blurring and ghosting artifacts in the image. This paper presents a new post-processing strategy that can reduce artifacts due to in-plane, rigid-body motion in a time comparable to that required to re-scan a patient. The algorithm iteratively determines unknown patient motion such that corrections for this motion provide the best image quality, as measured by an entropy-related focus criterion. The new optimization strategy features a multi-resolution approach in the phase-encode direction, separate successive one-dimensional searches for rotations and translations, and a novel method requiring only one re-gridding calculation for each rotation angle considered. Applicability to general rigid-body in-plane rotational and translational motion and to a range of differently weighted images and k-space trajectories is demonstrated. Motion artifact reduction is observed for data from a phantom, volunteers, and patients.
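
As a sketch of what an entropy-related focus criterion can look like (the paper states only that the criterion is entropy-related, so the exact form below is an assumption): ghosting and blurring spread signal over many pixels, which raises the entropy of the normalised magnitude image, so a lower entropy indicates a better motion correction.

import numpy as np

def entropy_focus(image):
    """Entropy of the normalised pixel magnitudes; lower values = sharper image."""
    magnitude = np.abs(image).ravel()
    p = magnitude / magnitude.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))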

132 citations


Journal ArticleDOI
TL;DR: Nonrigid registration significantly reduces the effects of movement artifact in subtracted contrast-enhanced breast MRI, which may enable better visualization of small tumors and those within a glandular breast.
Abstract: PURPOSE: A new nonrigid registration method, designed to reduce the effect of movement artifact in subtraction images from breast MR, is compared with existing rigid and affine registration methods. METHOD: Nonrigid registration was compared with rigid and affine registration methods and unregistered images using 54 gadolinium-enhanced 3D breast MR data sets. Twenty-seven data sets had been previously reported normal, and 27 contained a histologically proven carcinoma. The comparison was based on visual assessment and ranking by two radiologists. RESULTS: When analyzed by two radiologists independently, all three registration methods gave better-quality subtraction images than unregistered images (p < 0.01), but nonrigid registration gave significantly better results than the rigid and affine registration methods (p < 0.01). There was no significant difference between rigid and affine registration methods. CONCLUSION: Nonrigid registration significantly reduces the effects of movement artifact in subtracted contrast-enhanced breast MRI. This may enable better visualization of small tumors and those within a glandular breast.

132 citations


Book ChapterDOI
19 Sep 1999
TL;DR: The authors have introduced bone-implanted markers for registration, incorporated a locking acrylic dental stent (LADS) for patient tracking, and improved the graphical representation of the stereo overlays, providing three-dimensional surgical navigation for microscope-assisted guided interventions (MAGI).
Abstract: The problem of providing surgical navigation using image overlays on the operative scene can be split into four main tasks: calibration of the optical system; registration of preoperative images to the patient; tracking of the display system and patient; and display using a suitable visualisation scheme.

105 citations


Journal Article
TL;DR: In this article, an augmented reality system that allows surgeons to view features from preoperative radiological images accurately overlaid in stereo in the optical path of a surgical microscope is presented.
Abstract: We present an augmented reality system that allows surgeons to view features from preoperative radiological images accurately overlaid in stereo in the optical path of a surgical microscope. The purpose of the system is to show the surgeon structures beneath the viewed surface in the correct 3-D position. The technical challenges are registration, tracking, calibration and visualisation. For patient registration, or alignment to preoperative images, we use bone-implanted markers and a dental splint is used for patient tracking. Both microscope and patient are tracked by an optical localiser. Calibration uses an accurately manufactured object with high contrast circular markers which are identified automatically. All ten camera parameters are modelled as a bivariate polynomial function of zoom and focus. The overall system has a theoretical overlay accuracy of better than 1 mm. Implementations of the system have been tested on seven patients. Recent measurements in the operating room conformed to our accuracy predictions. For visualisation the system has been implemented on a graphics workstation to enable high frame rates with a variety of rendering schemes. Several issues of 3-D depth perception remain unsolved, but early results suggest that perception of structures in the correct 3-D position beneath the viewed surface is possible.
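
A minimal sketch of the calibration model mentioned above, fitting each camera parameter as a bivariate polynomial in zoom and focus by linear least squares; the polynomial degree and function names are assumptions, not details taken from the paper.

import numpy as np

def _design_matrix(zoom, focus, degree):
    """Monomials zoom^i * focus^j with i + j <= degree, as columns."""
    return np.column_stack([zoom**i * focus**j
                            for i in range(degree + 1)
                            for j in range(degree + 1 - i)])

def fit_parameter_model(zoom, focus, values, degree=2):
    """Least-squares fit of one camera parameter over calibration settings."""
    coefficients, *_ = np.linalg.lstsq(_design_matrix(zoom, focus, degree),
                                       values, rcond=None)
    return coefficients

def predict_parameter(coefficients, zoom, focus, degree=2):
    """Evaluate the fitted model at a given zoom and focus setting."""
    return _design_matrix(np.atleast_1d(zoom), np.atleast_1d(focus), degree) @ coefficients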

79 citations


Journal ArticleDOI
TL;DR: This augmented reality system for surgical navigation using stereo overlays in the operating microscope aligned to the operative scene provides 3D information about nearby structures and offers a significant advancement over pointer-based guidance, which provides only the location of one point and requires the surgeon to look away from the operative scene.
Abstract: We present a system for surgical navigation using stereo overlays in the operating microscope aligned to the operative scene. This augmented reality system provides 3D information about nearby structures and offers a significant advancement over pointer-based guidance, which provides only the location of one point and requires the surgeon to look away from the operative scene. With a previous version of this system, we demonstrated feasibility, but it became clear that to achieve convincing guidance through the magnified microscope view, a very high alignment accuracy was required. We have made progress with several aspects of the system, including automated calibration, error simulation, bone-implanted fiducials and a dental attachment for tracking. We have performed experiments to establish the visual display parameters required to perceive overlaid structures beneath the operative surface. Easy perception of real and virtual structures with the correct transparency has been demonstrated in a laboratory and through the microscope. The result is a system with a predicted accuracy of 0.9 mm and phantom errors of 0.5 mm. In clinical practice errors are 0.5-1.5 mm, rising to 2-4 mm when brain deformation occurs.

73 citations


Book ChapterDOI
19 Sep 1999
TL;DR: The preliminary results suggest that accurate, noninvasive, image-to-physical registration of head images may be possible using an A-mode ultrasound-based system.
Abstract: In this paper, we describe a system for noninvasively determining bone surface points using an optically tracked A-mode ultrasound transducer. We develop and validate a calibration method; acquire cranial surface points for a skull phantom, three volunteers, and one patient; and register these points to surfaces extracted from CT images of the phantom and patient. Our results suggest that the bone surface point localization error of this system is less than 0.5 mm. The target registration error (TRE) of the cranial surface-based registration for the skull phantom was computed by using as a reference gold standard the point-based registration obtained with eight bone-implanted markers. The mean TRE for a 150-surface-point registration is 1.0 mm, and ranges between 1.0 and 1.7 mm for six 25-surface-point registrations. Our preliminary results suggest that accurate, noninvasive, image-to-physical registration of head images may be possible using an A-mode ultrasound-based system.
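
The target registration error computation described above can be sketched as follows (an illustration; the transform and variable names are ours): map the same target points through the surface-based transform and through the marker-based gold-standard transform, and report the distance between the two mappings.

import numpy as np

def target_registration_error(t_surface, t_gold_standard, targets):
    """Mean distance (mm) between targets mapped by two 4x4 homogeneous transforms."""
    homogeneous = np.c_[targets, np.ones(len(targets))]
    mapped_surface = (t_surface @ homogeneous.T).T[:, :3]
    mapped_gold = (t_gold_standard @ homogeneous.T).T[:, :3]
    return np.mean(np.linalg.norm(mapped_surface - mapped_gold, axis=1))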

62 citations


Proceedings ArticleDOI
21 May 1999
TL;DR: The results clearly indicate that the non-rigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.
Abstract: In this paper we present a new approach for the non-rigid registration of contrast-enhanced breast MRI. A hierarchical transformation model of the motion of the breast has been developed. The global motion of the breast is modelled by an affine transformation while the local breast motion is described by a free-form deformation based on B-splines. Normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes as the result of the contrast enhancement. Registration is achieved by minimizing a cost function which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity. The algorithm has been applied to the fully automated registration of 3D breast MRI in volunteers and patients. In particular, we have compared the results of the proposed non-rigid registration algorithm to those obtained using rigid and affine registration techniques. The results clearly indicate that the non-rigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.

49 citations


Book ChapterDOI
19 Sep 1999
TL;DR: The variability of deformation between subjects was considerable, suggesting the automatic correction of intraoperative deformation without use of interventional images may be difficult to achieve.
Abstract: We study brain deformation for a series of 8 resection cases carried out in the interventional MR suite at the University of Minnesota. The pattern of deformation is described qualitatively. We also quantify deformation by identifying anatomical landmarks spread over the brain in pre- and post-resection images, and show that these values agree well with the results obtained from an automatic non-rigid registration algorithm. For all but one patient, the deformation was significantly greater ipsilateral to the lesion than contralateral, with the contralateral deformation being of the same order as the precision of the measurements. For the remaining patient, there was bilateral deformation of several millimetres. Example deformation fields are shown illustrating the distribution of deformation over the brain. The variability of deformation between subjects was considerable, suggesting that the automatic correction of intraoperative deformation without the use of interventional images may be difficult to achieve.
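
As an illustration of the landmark-based quantification described above (array and variable names are assumptions), deformation can be summarised as the displacement of corresponding landmarks between the pre- and post-resection images, split by hemisphere.

import numpy as np

def landmark_displacements(pre_landmarks, post_landmarks):
    """Per-landmark displacement magnitudes (mm) for matching (N, 3) arrays."""
    return np.linalg.norm(post_landmarks - pre_landmarks, axis=1)

def summarise_by_side(pre_landmarks, post_landmarks, ipsilateral_mask):
    """Mean displacement ipsilateral and contralateral to the lesion."""
    d = landmark_displacements(pre_landmarks, post_landmarks)
    return {"ipsilateral_mean_mm": float(d[ipsilateral_mask].mean()),
            "contralateral_mean_mm": float(d[~ipsilateral_mask].mean())}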

Book ChapterDOI
19 Sep 1999
TL;DR: A novel tracking method to update the pose of stereo video cameras with respect to a surface model derived from a 3D tomographic image has a number of applications in image guided interventions and therapy.
Abstract: In this paper we propose a novel tracking method to update the pose of stereo video cameras with respect to a surface model derived from a 3D tomographic image. This has a number of applications in image guided interventions and therapy. Registration of 2D video images to the pre-operative 3D image provides a mapping between image and physical space and enables a perspective projection of the pre-operative data to be overlaid onto the video image. Assuming an initial registration can be achieved, we propose a method for updating the registration, which is based on image intensity and texture mapping. We performed five experiments on simulated, phantom and volunteer data and validated the algorithm against an accurate gold standard in all three cases. We measured the mean 3D error of our tracking algorithm to be 1.05 mm for the simulation and 1.89 mm for the volunteer data. Visually this corresponds to a good registration.

Proceedings ArticleDOI
21 May 1999
TL;DR: The results show a significant improvement in the detection of structural change and inter-observer agreement when aligned and subtracted images were used instead of unregistered ones.
Abstract: Spoiled gradient echo volume MR scans were obtained from 5 growth hormone (GH) patients and 6 normal controls. The patients were scanned before treatment and after 3 and 6 months of GH therapy. The controls were scanned at similar intervals. A calibration phantom was scanned on the same day as each subject. The phantom images were registered with a 9 degree of freedom algorithm to measure scaling errors due to changes in scanner calibration. The second and third images were each registered with a 6 degree of freedom algorithm to the first (baseline) image by maximizing normalized mutual information, and transformed, with and without scaling error correction, using sinc interpolation. Each registered and transformed image had the baseline image subtracted to generate a difference image. Two neuro-radiologists were trained to detect structural change with difference images containing synthetic misregistration and scale changes. They carried out a blinded assessment of anatomical change for the unregistered; aligned and subtracted; and scale-corrected, aligned and subtracted images. The results show a significant improvement in the detection of structural change and inter-observer agreement when aligned and subtracted images were used instead of unregistered ones. The structural change corresponded to an increase in brain:CSF ratio.
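
A minimal sketch of the subtraction pipeline described above (the registration and sinc interpolation stages are represented only by their outputs; matrix and function names are ours, not the authors'): the scale-corrected transform composes the phantom-derived scaling correction with the 6 degree of freedom rigid transform to baseline, and the difference image is the registered follow-up minus the baseline.

import numpy as np

def scale_corrected_transform(rigid_to_baseline, phantom_scale_correction):
    """Compose 4x4 homogeneous matrices: scaling correction, then rigid alignment."""
    return rigid_to_baseline @ phantom_scale_correction

def difference_image(baseline, registered_followup):
    """Voxelwise difference once the follow-up has been resampled to baseline."""
    return registered_followup.astype(np.float32) - baseline.astype(np.float32)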

Proceedings ArticleDOI
21 May 1999
TL;DR: In this article, the authors used a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data, and then registered five video views simultaneously to the 3D model.
Abstract: In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 × 80 mm. We have extended an existing information theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
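
A schematic sketch of the multi-view objective described above; render_from_ct and mutual_information are hypothetical placeholders (a real implementation needs a renderer and an MI estimator), and the use of a generic Powell optimiser is our assumption rather than the paper's own iterative scheme.

import numpy as np
from scipy.optimize import minimize

def multi_view_cost(pose, video_images, cameras, ct_volume,
                    render_from_ct, mutual_information):
    """Negative summed MI between each video image and its rendering of the CT."""
    cost = 0.0
    for image, camera in zip(video_images, cameras):
        rendering = render_from_ct(ct_volume, pose, camera)
        cost -= mutual_information(image, rendering)
    return cost

def register_views(initial_pose, video_images, cameras, ct_volume,
                   render_from_ct, mutual_information):
    """Find the pose that maximises MI over all views simultaneously."""
    result = minimize(multi_view_cost, np.asarray(initial_pose, dtype=float),
                      args=(video_images, cameras, ct_volume,
                            render_from_ct, mutual_information),
                      method="Powell")
    return result.x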


Book ChapterDOI
26 Sep 1999
TL;DR: In this article, an extension of the scale space idea to surfaces, with the aim of extending ideas like Gaussian derivatives to functions on curved spaces, is presented by using the fact that among the continuous range of scales at which one can look at an image, or surface, there is an infinite discrete subset which has a natural geometric interpretation.
Abstract: We present an extension of the scale space idea to surfaces, with the aim of extending ideas like Gaussian derivatives to functions on curved spaces. This is done by using the fact, also valid for normal images, that among the continuous range of scales at which one can look at an image, or surface, there is an infinite discrete subset which has a natural geometric interpretation. We call them “proper scales”, as they are defined by eigenvalues of an elliptic partial differential operator associated with the image, or shape. The computations are performed using the Finite Element technique.
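
A rough sketch of the proper-scales idea under strong simplifying assumptions: the elliptic operator is stood in for by an unweighted graph Laplacian of the surface mesh (the paper uses a finite element discretisation), and its smallest nonzero eigenvalues play the role of the proper scales.

import numpy as np

def proper_scales(num_vertices, edges, k=10):
    """Smallest k nonzero eigenvalues of a graph Laplacian built from an
    (E, 2) integer array of mesh edges (dense linear algebra, small meshes only)."""
    adjacency = np.zeros((num_vertices, num_vertices))
    adjacency[edges[:, 0], edges[:, 1]] = 1.0
    adjacency[edges[:, 1], edges[:, 0]] = 1.0
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigenvalues = np.linalg.eigvalsh(laplacian)
    return eigenvalues[eigenvalues > 1e-10][:k]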
