
Showing papers in "IEEE Transactions on Medical Imaging in 1987"


Journal ArticleDOI
TL;DR: The expectation maximization method is applied to find the a posteriori probability maximizer and is shown to be superior to pure likelihood maximization, in that the penalty function prevents the irregular high-amplitude patterns that otherwise appear after a large number of iterations.
Abstract: The expectation maximization method for maximum likelihood image reconstruction in emission tomography, based on the Poisson distribution of the statistically independent components of the image and measurement vectors, is extended to a maximum a posteriori image reconstruction using a multivariate Gaussian a priori probability distribution of the image vector. The approach is equivalent to a penalized maximum likelihood estimation with a special choice of the penalty function. The expectation maximization method is applied to find the a posteriori probability maximizer. A simple iterative formula is derived for a penalty function that is a weighted sum of the squared deviations of image vector components from their a priori mean values. The method is demonstrated to be superior to pure likelihood maximization, in that the penalty function prevents the occurrence of irregular high-amplitude patterns in the image with a large number of iterations (the so-called "checkerboard effect" or "noise artifact").
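The MAP extension above builds on the standard multiplicative EM (MLEM) update for Poisson data. A minimal NumPy sketch of that building block, with a hypothetical system matrix `A` and count vector `y` (the paper's quadratic Gaussian-prior penalty would modify this update, which is not shown here):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Plain MLEM update for emission tomography.

    A: (n_rays, n_pixels) nonnegative system matrix; y: measured counts.
    Each iteration multiplies the image by the backprojected ratio of
    measured to forward-projected data, preserving nonnegativity.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # per-pixel sensitivity
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

On noiseless, consistent data this iteration converges to the true activity; with real Poisson counts it eventually produces the checkerboard artifact the penalty is designed to suppress.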

442 citations


Journal ArticleDOI
TL;DR: This paper develops a mathematical approach for suppressing both the noise and edge artifacts by modifying the maximum-likelihood approach to include constraints which the estimate must satisfy.
Abstract: Images produced in emission tomography with the expectation-maximization algorithm have been observed to become more noisy and to have large distortions near edges as iterations proceed and the images converge towards the maximum-likelihood estimate. It is our conclusion that these artifacts are fundamental to reconstructions based on maximum-likelihood estimation as it has usually been applied; they are not due to the use of the expectation-maximization algorithm, which is but one numerical approach for finding the maximum-likelihood estimate. In this paper, we develop a mathematical approach for suppressing both the noise and edge artifacts by modifying the maximum-likelihood approach to include constraints which the estimate must satisfy.

420 citations


Journal ArticleDOI
TL;DR: The Bayesian versions of the EM algorithms are shown to have superior convergence properties in a vicinity of the maximum, and it is anticipated that some of the other algorithms will also converge faster than the EM algorithms.
Abstract: This paper has the dual purpose of introducing some new algorithms for emission and transmission tomography and proving mathematically that these algorithms and related antecedent algorithms converge. Like the EM algorithms for positron, single-photon, and transmission tomography, the algorithms provide maximum likelihood estimates of pixel concentration or linear attenuation parameters. One particular innovation we discuss is a computationally practical scheme for modifying the EM algorithms to include a Bayesian prior. The Bayesian versions of the EM algorithms are shown to have superior convergence properties in a vicinity of the maximum. We anticipate that some of the other algorithms will also converge faster than the EM algorithms.

344 citations


Journal ArticleDOI
Linda Kaufman1
TL;DR: The data structures one might use and ways of taking advantage of the geometry of the physical system are discussed and the numerical aspects of the EM (expectation maximization) algorithm are treated.
Abstract: Since the publication of Shepp and Vardi's [14] maximum likelihood reconstruction algorithm for emission tomography (ET), many medical research centers engaged in ET have made an effort to change their reconstruction algorithms to this new approach. Some have succeeded, while others claim they could not adopt this new approach primarily because of limited computing power. In this paper, we discuss techniques for reducing the computational requirements of the reconstruction algorithm. Specifically, the paper discusses the data structures one might use and ways of taking advantage of the geometry of the physical system. The paper also treats some of the numerical aspects of the EM (expectation maximization) algorithm, and ways of speeding up the numerical algorithm using some of the traditional techniques of numerical analysis.

312 citations


Journal ArticleDOI
TL;DR: This work proposes a quantitative criterion with a simple probabilistic interpretation that allows the user to stop the MLE algorithm just before this effect begins, by testing after each iteration the statistical hypothesis that the projection data could have been generated by the image produced so far.
Abstract: It is known that when the maximum likelihood estimator (MLE) algorithm passes a certain point, it produces images that begin to deteriorate. We propose a quantitative criterion with a simple probabilistic interpretation that allows the user to stop the algorithm just before this effect begins. The MLE algorithm searches for the image that has the maximum probability to generate the projection data. The underlying assumption of the algorithm is a Poisson distribution of the data. Therefore, the best image, according to the MLE algorithm, is the one that results in projection means which are as close to the data as possible. It is shown that this goal conflicts with the assumption that the data are Poisson-distributed. We test a statistical hypothesis whereby the projection data could have been generated by the image produced after each iteration. The acceptance or rejection of the hypothesis is based on a parameter that decreases as the images improve and increases as they deteriorate. We show that the best MLE images, which pass the test, result in somewhat lower noise in regions of high activity than the filtered back-projection results and much improved images in low activity regions. The applicability of the proposed stopping rule to other iterative schemes is discussed.
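The stopping rule hinges on checking whether the current forward projections are statistically consistent with Poisson data. A minimal sketch of that idea using a Pearson-type statistic (an illustration of the principle, not the paper's exact test):

```python
import numpy as np

def poisson_consistency(y, proj):
    """Mean Pearson statistic over detector bins.

    If y ~ Poisson(proj), each term (y - proj)^2 / proj has expectation 1,
    so the mean hovers near 1.  As MLE iterates overfit the noise, the
    projections approach the data and the statistic falls below 1.
    """
    proj = np.maximum(np.asarray(proj, float), 1e-12)
    return np.mean((y - proj) ** 2 / proj)
```

A stopping rule in this spirit halts the iteration once the statistic drops to the value expected under the Poisson hypothesis.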

262 citations


Journal ArticleDOI
TL;DR: The well-known technique of histogram equalization is implemented, the problems encountered when it is adapted to chest images are noted, and these problems are successfully solved with the regionally adaptive histogram equalization method.
Abstract: Advances in the area of digital chest radiography have resulted in the acquisition of high-quality images of the human chest. With these advances, there arises a genuine need for image processing algorithms specific to the chest, in order to fully exploit this digital technology. We have implemented the well-known technique of histogram equalization, noting the problems encountered when it is adapted to chest images. These problems have been successfully solved with our regionally adaptive histogram equalization method. With this technique, histograms are calculated locally and then modified according to both the mean pixel value of that region and certain characteristics of the cumulative distribution function. This process, which has allowed certain regions of the chest radiograph to be enhanced differentially, may also have broader implications for other image processing tasks.
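The global building block that the regionally adaptive method modifies can be sketched in a few lines of NumPy (the paper's variant computes such mappings per region and adjusts them by the regional mean and CDF shape, which is not reproduced here):

```python
import numpy as np

def equalize(img, levels=256):
    """Global histogram equalization for a uint8 image.

    Maps each gray level through the normalized cumulative histogram,
    spreading the output values over the full dynamic range.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]
```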

182 citations


Journal ArticleDOI
TL;DR: The modified version of rapid imaging, where the phase rotation due to the phase encoding process is compensated for in each time interval, can have sensitivity superior to the original version where the phase rotation is not compensated for.
Abstract: The steady-state magnetizations in three versions of rapid NMR imaging using small flip angles and short repetition intervals are studied. It is shown that in the original version, the estimation using (1 - E1) sin α/(1 - E1 cos α) contains errors that depend on the increment of the phase rotation angle arising from the phase encoding process. The modified version of rapid imaging, where the phase rotation due to the phase encoding process is compensated for in each time interval, can have sensitivity superior to the original version where the phase rotation is not compensated for. Here, flip angles larger than the Ernst angle must be used. In the third version, the steady-state magnetization is obtained by a rapid imaging sequence in which the phase rotations arising not only from the application of the phase encoding gradient but also from the applications of other gradients are compensated for. Analysis of this version showed a remarkable increase in sensitivity, although it required the use of an extremely uniform field. It is estimated that this increase reaches 80 percent with a repetition interval of 10 ms, although a field uniformity of better than 1 μT is necessary.
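The steady-state expression and the Ernst angle it implies can be checked numerically. A sketch using the standard spoiled steady-state signal (with α written for the flip angle that appears garbled as "?" above; T2 effects are ignored):

```python
import numpy as np

def steady_state_signal(alpha, TR, T1):
    """Steady-state signal (1 - E1) sin(alpha) / (1 - E1 cos(alpha))."""
    E1 = np.exp(-TR / T1)
    return (1 - E1) * np.sin(alpha) / (1 - E1 * np.cos(alpha))

def ernst_angle(TR, T1):
    """Flip angle maximizing the signal: cos(alpha) = E1 = exp(-TR/T1)."""
    return np.arccos(np.exp(-TR / T1))
```

For the 10 ms repetition interval discussed above and a T1 of a few hundred milliseconds, the optimal flip angle is only on the order of 10 degrees, which is why these sequences use small flip angles.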

166 citations


Journal ArticleDOI
TL;DR: A theorem is proved expressing the backprojection operator in terms of the Radon transform and simple changes of variables and a novel image reconstruction method based on the theorem is presented.
Abstract: The notion of a linogram is introduced. It corresponds to the notion of a sinogram in the conventional representation of projection data in image reconstruction. In the sinogram, points which correspond to rays which go through a fixed point in the cross section to be reconstructed all fall on a sinusoidal curve. In the linogram, however, these points fall on a straight line. Thus, backprojection corresponds to integration along straight lines in the linogram. A theorem is proved expressing the backprojection operator in terms of the Radon transform and simple changes of variables. Consequences of this theorem are discussed. A novel image reconstruction method based on the theorem is presented.

137 citations


Journal ArticleDOI
Chang-Beom Ahn1, Zang-Hee Cho1
TL;DR: In this paper, a new statistical approach to phase correction in NMR imaging is proposed, consisting of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error.
Abstract: A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated by the phase of the autocorrelation calculated from the complex-valued phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order-corrected image. Since all the correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-sensitive NMR imaging techniques, including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
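The two estimation steps can be sketched in one dimension. The first-order slope comes from the phase of the lag-1 autocorrelation, exactly as described; the zero-order step below uses the phase of the mean of the detrended row as a simple stand-in for the paper's histogram-based extraction:

```python
import numpy as np

def estimate_phase(row):
    """Estimate linear (slope) and constant (offset) phase of a complex row.

    Slope: phase of the lag-1 autocorrelation of the distorted row.
    Offset: phase of the mean after removing the slope (a simplified
    stand-in for the histogram-of-phases step in the paper).
    """
    slope = np.angle(np.sum(row[1:] * np.conj(row[:-1])))
    detrended = row * np.exp(-1j * slope * np.arange(len(row)))
    offset = np.angle(detrended.sum())
    return slope, offset
```

Correction is then the inverse multiplication: `row * np.exp(-1j * (slope * n + offset))`.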

134 citations


Journal ArticleDOI
TL;DR: The algorithm is basically an enhanced EM (expectation maximization) algorithm with improved frequency response, and it promises significant savings in computation compared to the standard EM algorithm.
Abstract: An efficient iterative reconstruction method for positron emission tomography (PET) is presented. The algorithm is basically an enhanced EM (expectation maximization) algorithm with improved frequency response. High-frequency components of the ratio of measured to calculated projections are extracted and are taken into account for the iterative correction of image density in such a way that the correction is performed with a uniform efficiency over the image plane and with a flat frequency response. As a result, the convergence speed is not as sensitive to the image pattern or matrix size as in the standard EM algorithm, and nonuniformity of the spatial resolution is significantly improved. Nonnegativity of the reconstructed image is preserved. Simulation studies have been made assuming two PET systems: a scanning PET with ideal sampling and a stationary PET with sparse sampling. In the latter, a "bank array" of detectors is employed to improve the sampling in the object plane. The new algorithm provides satisfactory images by two or three iterations starting from a flat image in either case. The behavior of convergence is monitored by evaluating the root mean square of C(b) − 1, where C(b) is the correction factor for pixel b in the EM algorithm. The value decreases rapidly and monotonically with iteration number. Although the theory is not accurate enough to assure the stability of convergence, the algorithm promises significant savings in computation compared to the standard EM algorithm.

86 citations


Journal ArticleDOI
TL;DR: An improved procedure is proposed here for automatic myocardial border tracking (AMBT) of the endocardial and epicardial edges in a sequence of video images that includes nonlinear filtering of whole images, debiasing of gray levels, and location-dependent contrast stretching.
Abstract: Two-dimensional ultrasound sector scans of the left ventricle (LV) are commonly used to diagnose cardiac mechanical function. Present quantification procedures of wall motion by this technique entail inaccuracies, mainly due to relatively poor image quality and the absence of a definition of the relative position of the probe and the heart. The poor quality dictates subjective determination of the myocardial edges, while the absence of a position vector increases the errors in the calculations of wall displacement, LV blood volume, and ejection fraction. An improved procedure is proposed here for automatic myocardial border tracking (AMBT) of the endocardial and epicardial edges in a sequence of video images. The procedure includes nonlinear filtering of whole images, debiasing of gray levels, and location-dependent contrast stretching. The AMBT algorithm is based on tracking the movement of a small set of predefined points, manually marked on the two myocardial borders. Information from one image is used, together with predetermined statistical criteria, to iteratively search for and detect the border points in the next. Border contours are reconstructed by spline interpolation of the border points. The AMBT procedure is tested by comparing processed sequences of cine echocardiographic scan images to manual tracings by an objective observer and to results from previously published data.

Journal ArticleDOI
TL;DR: The conclusion is that the spiral scan is more sensitive to this type of static field inhomogeneity than a Cartesian scan echo-planar method subjected to the same inhomogeneities.
Abstract: A study of a spiral scan echo planar method in the presence of static field inhomogeneities is presented. The approach consists of obtaining the samples over the Fourier plane trajectory of the spiral scan, as modified by the inhomogeneities. The resulting images are compared to a Cartesian scan echo-planar method also subjected to inhomogeneities in the static field. The conclusion is that the spiral scan is more sensitive to this type of inhomogeneity. The possibility of compensating for the inhomogeneities during the reconstruction procedure is suggested by a preliminary experimental result.

Journal ArticleDOI
TL;DR: In this paper, the authors modified a faster algorithm, sequential similarity detection (SSD), to use only the portion of the template that contains retinal vessels, which improved the reliability of detection for a variety of retinal imaging modalities.
Abstract: Registration of retinal images taken at different times frequently is required to measure changes caused by disease or to document retinal location of visual stimuli. Cross-correlation has been used previously for such registration, but it is computationally intensive. We have modified a faster algorithm, sequential similarity detection (SSD), to use only the portion of the template that contains retinal vessels. When compared to standard SSD and cross-correlation, this modification improves the reliability of detection for a variety of retinal imaging modalities. The improved reliability enables implementation of a two-stage registration strategy that further decreases the amount of computation and increases the speed of registration.
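The modified matching step can be sketched as an exhaustive translation search whose error is summed only over a vessel mask. All names below are hypothetical, and the paper's sequential similarity detection also uses early termination of the inner sum, omitted here for brevity:

```python
import numpy as np

def masked_ssd_match(ref, template, mask):
    """Translation search minimizing the sum of absolute differences,
    evaluated only where `mask` (e.g. a retinal-vessel map) is True.
    Returns the (row, col) offset of the best match in `ref`.
    """
    h, w = template.shape
    best, best_shift = np.inf, None
    for dy in range(ref.shape[0] - h + 1):
        for dx in range(ref.shape[1] - w + 1):
            patch = ref[dy:dy + h, dx:dx + w]
            err = np.abs(patch - template)[mask].sum()
            if err < best:
                best, best_shift = err, (dy, dx)
    return best_shift
```

Restricting the sum to vessel pixels both reduces computation and discounts the uniform background that would otherwise dilute the match.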

Journal ArticleDOI
TL;DR: The ISRA of [1] is shown to be an iterative algorithm that aims to converge to the least-squares estimates of emission densities and it is pointed out that the resulting estimators are inferior to the maximum likelihood estimators, for which the EM algorithm is a computational procedure.
Abstract: The ISRA of [1] is shown to be an iterative algorithm that aims to converge to the least-squares estimates of emission densities. Convergence is established in the case where a unique least-squares estimate exists that is elementwise strictly positive. It is pointed out that, in terms of asymptotic theory, the resulting estimators are inferior to the maximum likelihood estimators, for which the EM algorithm is a computational procedure. Potential difficulties with the behavior of the ISRA are illustrated using very simple examples.
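The ISRA iteration itself is compact. A sketch with a hypothetical nonnegative system matrix `A` and data `y`; its nonnegative fixed points satisfy the least-squares normal equations, consistent with the convergence result discussed above:

```python
import numpy as np

def isra(A, y, n_iter=500):
    """Image Space Reconstruction Algorithm.

    Multiplicative update x <- x * (A^T y) / (A^T A x); nonnegative
    fixed points satisfy the normal equations A^T A x = A^T y.
    """
    x = np.ones(A.shape[1])
    Aty = A.T @ y
    for _ in range(n_iter):
        x = x * Aty / np.maximum(A.T @ (A @ x), 1e-12)
    return x
```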

Journal ArticleDOI
TL;DR: A new algorithm for reconstructing a density f in the plane from its projections along those lines making an angle greater than a fixed angle θ > 0 with the x axis appears to give a practical and simple solution to the problem whenever one exists.
Abstract: We give a new algorithm for reconstructing a density f in the plane from its projections along those lines making an angle greater than a fixed angle θ > 0 with the x axis. Of course, the performance of the algorithm depends on θ and on the smoothness of f, but it appears to give a practical and simple solution to the problem whenever one exists. The basic idea, which seems to be new, is to make an affine (squashing) scale change of f to g for which the projections are then known at n equally spaced angles, so that we know how to find g, and then we obtain f from g by inverting the scale change.

Journal ArticleDOI
TL;DR: A new algorithm is proposed which is designed to fully utilize all emitted gamma rays which can be detected in a truncated spherical detector, and should produce images of better statistical accuracy than could be produced by previously known algorithms.
Abstract: A review is made of selected recent publications on three-dimensional image reconstruction for PET. A new algorithm is proposed which is designed to fully utilize all emitted gamma rays which can be detected in a truncated spherical detector. By such full utilization of emitted rays the new algorithm should produce images of better statistical accuracy than could be produced by previously known algorithms.

Journal ArticleDOI
TL;DR: The concepts have been implemented on a high-speed image analyzing system, which measures blood vessel diameters with advanced automation, and the performance of the system was evaluated with blood vessel phantoms.
Abstract: Statistical considerations on the precision in the determination of blood vessel dimensions from digitized cine angiographic images are described. The resolution requirements related to "point measurements" and segmental diameter curve evaluations are discussed. The error associated with inaccurate determination of the vessel's centerline is analyzed. The concepts have been implemented on a high-speed image analyzing system, which measures blood vessel diameters with advanced automation. The performance of the system was evaluated with blood vessel phantoms, ranging in diameter from 0.88 to 6.26 mm. For these phantoms the minimum measurable change in vessel dimension over 20-pixel (≈1.1 mm) long segments ranged from 3.4 to 0.2 percent, respectively.

Journal ArticleDOI
TL;DR: A key result of this paper is that high-quality imagery can be reconstructed from fan-beam data using the DFM in O(N² log N) operations.
Abstract: We consider the problem of reconstructing tomographic imagery from fan-beam projections using the direct Fourier method (DFM). Previous DFM reconstructions from parallel-beam projections produced images of quality comparable to that of filtered convolution back-projection. Moreover, the number of operations using DFM in the parallel-beam case is proportional to N² log N versus N³ for back projection [3]. The fan-beam case is more complicated because additional interpolation of the nonuniformly spaced rebinned data is required. We derive bounds on the detector spacing in fan-beam CT that enable direct Fourier reconstruction and describe the full algorithm necessary for processing the fan-beam data. The feasibility of the method is demonstrated with an example. A key result of this paper is that high-quality imagery can be reconstructed from fan-beam data using the DFM in O(N² log N) operations.
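The direct Fourier method rests on the projection-slice theorem: the 1-D FFT of a parallel projection equals a central slice of the object's 2-D FFT. A quick numerical check of that identity on a rectangular phantom (the fan-beam rebinning and interpolation discussed in the paper are not reproduced here):

```python
import numpy as np

# Projection-slice theorem check: 1-D FFT of a projection along rows
# equals the ky = 0 line of the object's 2-D FFT.
img = np.zeros((64, 64))
img[24:40, 20:44] = 1.0
proj = img.sum(axis=0)               # parallel projection along rows
central_slice = np.fft.fft2(img)[0]  # the ky = 0 line of the 2-D FFT
assert np.allclose(np.fft.fft(proj), central_slice)
```

Filling Fourier space slice by slice and inverting with a 2-D FFT is what yields the O(N² log N) operation count.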

Journal ArticleDOI
TL;DR: In this article, explicit formulas for a cone-beam convolution and back-projection reconstruction algorithm are given in a form which can be easily coded for a computer, justified by analyzing tomographic reconstructions of a uniformly attenuating sphere from simulated noisy projection data.
Abstract: Direct reconstruction in three dimensions for two-dimensional projection data has been achieved by cone-beam reconstruction techniques. In this paper explicit formulas for a cone-beam convolution and back-projection reconstruction algorithm are given in a form which can be easily coded for a computer. The algorithm is justified by analyzing tomographic reconstructions of a uniformly attenuating sphere from simulated noisy projection data. A particular feature of this algorithm is the use of a one-dimensional rather than two-dimensional convolution function, greatly speeding up the reconstruction. The technique is applicable however large the cone angle of data capture and correctly reduces to the pure fan-beam reconstruction technique in the central section of the cone. The method has been applied to data captured on a cone-beam CT scanner designed for bone mineral densitometry.

Journal ArticleDOI
TL;DR: A matrix operator for obtaining the encoding gradient for any kind of phase encoding is derived and specific examples illustrating how to obtain "pure" spatial, velocity, or acceleration encoding gradients for moving spins are presented.
Abstract: A general mathematical formalism for generating multiparametric NMR image encoding gradients is introduced. The new schematic approach enables one to construct any desired encoding gradient which may be used in an imaging sequence. Basic gradient waveforms which can be used as building blocks of the desired encoding gradients are presented. A matrix operator for obtaining the encoding gradient for any kind of phase encoding is derived. Specific examples illustrating how to obtain "pure" spatial, velocity, or acceleration encoding gradients for moving spins are presented.
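The encoding behavior of a gradient waveform is governed by its moments: the accumulated phase is φ = γ(m0·x + m1·v + …), where m_k is the k-th temporal moment. A sketch with illustrative numbers only (10 mT/m lobes of 1 ms each, a hypothetical bipolar building block of the kind described above), showing that a bipolar lobe nulls the position-encoding moment while keeping the velocity-encoding one:

```python
import numpy as np

dt = 1e-6                                              # time step, s
t = np.arange(2000) * dt                               # 2 ms window
G = np.concatenate([np.full(1000, 10e-3),              # +10 mT/m lobe
                    np.full(1000, -10e-3)])            # -10 mT/m lobe
m0 = np.sum(G) * dt       # zeroth moment -> position encoding
m1 = np.sum(G * t) * dt   # first moment  -> velocity encoding
```

Stacking such blocks and solving for lobe amplitudes is what the paper's matrix operator formalizes, yielding "pure" spatial, velocity, or acceleration encoding.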

Journal ArticleDOI
TL;DR: A Bayesian image processing formalism which incorporates a priori amplitude and spatial probability density information was applied to two-dimensional source fields, and strikingly improved results for ideal and experimental radioisotope phantom imaging data were obtained.
Abstract: A Bayesian image processing (BIP) formalism which incorporates a priori amplitude and spatial probability density information was applied to two-dimensional source fields. For valid, moderately restrictive a priori information, strikingly improved results for ideal and experimental radioisotope phantom imaging data, compared to a standard non-Bayesian formalism (maximum likelihood, ML), were obtained. The applicability of a fast Fourier transform technique for "convolution" calculations, a reduced-region restriction for the initial "deconvolution" calculations, and a relaxation parameter for accelerating convergence are considered.
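The FFT technique mentioned for the "convolution" calculations exploits the fact that multiplying spectra implements circular convolution in O(N log N) rather than O(N²). A minimal one-dimensional check of that equivalence (illustrative only; the paper applies the idea to 2-D source fields):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(64)
k = rng.random(64)
# FFT route: pointwise product of spectra, then inverse transform.
fft_conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)).real
# Direct route: circular convolution sum_m a[m] * k[(n - m) mod N].
direct = np.array([np.sum(a * np.roll(k[::-1], n + 1)) for n in range(64)])
assert np.allclose(fft_conv, direct)
```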

Journal ArticleDOI
TL;DR: A method is presented that will automatically find the left ventricular (LV) boundary in any of several equilibrium radionuclide angiocardiographic (ERNA) image views using a knowledge-based strategy that incorporates appropriate local and global information in heuristic cost functions to find complete boundaries.
Abstract: A method is presented that will automatically find the left ventricular (LV) boundary in any of several equilibrium radionuclide angiocardiographic (ERNA) image views. The lower level image analysis involves the use of directional gradient edge operators and an edge point linking scheme. The higher level portion of the algorithm uses a knowledge-based strategy that incorporates appropriate local and global information in heuristic cost functions to find complete boundaries. A detailed example of LV boundary delineation in the end diastolic frame from a left lateral view, ERNA image sequence is tracked through each stage of the proposed methodology. The use of this approach for following the LV boundary through entire temporal sequences is also illustrated on 16 frames of both a left lateral view and a left anterior oblique view ERNA study.

Journal ArticleDOI
TL;DR: An approach to magnetic resonance imaging employing a magnetic field gradient that rotates 180 degrees in the image plane while the gradient magnitude oscillates rapidly during the rotation offers the potential of great speed, which is limited only by the gradient modulation frequency.
Abstract: We present an approach to magnetic resonance imaging employing a magnetic field gradient that rotates 180 degrees in the image plane while the gradient magnitude oscillates rapidly during the rotation. A single free induction decay recorded during this rotation contains all the information needed to reconstruct a two-dimensional image. In effect, each sinusoidal oscillation of the gradient provides information corresponding to one projection in more conventional Fourier-projection approaches. Since the data acquisition can be achieved in a period less than T2, the method offers the potential of great speed, which is limited only by the gradient modulation frequency. An explicit image reconstruction formula is derived that gives, when evaluated, a reconstruction of the magnetization equal to the true magnetization convolved with a space-invariant point spread function. This point spread function is derived and characterizes the resolving power and sidelobe response of the technique. Moreover, we derive a similar reconstruction formula which is valid when known inhomogeneities in the static field H0 and T2 are present. Finally, we show how the general approach can be extended to three dimensions.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a velocity phase-encoding strategy for increasing the sensitivity of magnetic resonance imaging (MRI) measurements to the signal from moving material, which is formulated within the framework of a velocity-selective excitation composite which leaves only magnetization of moving material in the transverse plane.
Abstract: We propose a technique aimed at increasing the sensitivity of magnetic resonance imaging (MRI) measurements to the signal from moving material. The technique is formulated within the framework of a velocity phase-encoding strategy. The salient new feature of the protocol is the application of an excitation pulse, following the conventional 90° excitation and bipolar phase gradient modulation pulses, which rotates the magnetization through -90°. Together these pulses comprise a velocity-selective excitation composite which leaves only magnetization of moving material in the transverse plane.

Journal ArticleDOI
TL;DR: The development of suitable automated techniques for the objective determination of endocardial and epicardial borders in two-dimensional echocardiographic images suggests the potential to extract more information concerning left ventricular function than is available with current techniques.
Abstract: Although two-dimensional echocardiography (2-D echo) is a useful technique for evaluation of global and regional left ventricular function, the main limitation is the inability to easily extract reliable and accurate quantitative information throughout all phases of the cardiac cycle. We sought to develop suitable automated techniques for the objective determination of endocardial and epicardial borders in two-dimensional echocardiographic images. To test algorithms for the automatic detection of myocardial borders we constructed a cardiac ultrasound phantom consisting of 16 echogenic annuli of known dimensions embedded in a material of low echogenicity which allowed imaging without partial volume effects. An algorithm based on Gaussian filtering followed by a difference gradient operator was applied to detect edges in the 2-D echo images of these annuli. The radii of the automatically determined inner borders were within 0.44 mm root-mean-squared error over a range of 15-25 mm true radius. This lower bound on the error of our approach to automatic placement of myocardial borders in 2-D echocardiograms suggests the potential to extract more information concerning left ventricular function than is available with current techniques.
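The detector described, Gaussian filtering followed by a difference gradient operator, can be sketched in one dimension; the border is the location of the gradient peak of the smoothed intensity profile (the paper applies the 2-D analogue to echo images of the annuli):

```python
import numpy as np

def gaussian_gradient_edge(profile, sigma=2.0):
    """Locate an edge in a 1-D intensity profile.

    Smooth with a normalized Gaussian kernel, differentiate, and return
    the index of the largest gradient magnitude.
    """
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    smooth = np.convolve(profile, g, mode="same")
    return int(np.argmax(np.abs(np.gradient(smooth))))
```

Smoothing before differentiation is what makes the gradient operator usable on speckle-laden echo data.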

Journal ArticleDOI
TL;DR: The effects of phenomena other than tissue acoustic properties upon estimates of statistical parameters are investigated, including system characteristics, sample volume dimensions, and tissue velocity in myocardium.
Abstract: Several investigators have characterized various forms of heart disease from the statistical properties of envelopes of ultrasonic echos from myocardium. In particular, the mean-to-standard deviation ratio (MSR), skewness, and kurtosis of the envelope probability density function have been used for the detection of myocardial ischemia, infarction, reperfusion, and hypertrophy. In this paper, the effects of phenomena other than tissue acoustic properties upon estimates of statistical parameters are investigated. These include system characteristics (center frequency, bandwidth, beam width, etc.), sample volume dimensions, and tissue velocity. In myocardium, relatively small amounts of tissue are available for interrogation. It is shown that, under these limited data acquisition conditions, substantial systematic biases in the estimates of statistical parameters may occur. Analytic forms for errors in the envelope variance estimate are derived. Estimation of the envelope mean, variance, MSR, skewness, and kurtosis is investigated experimentally, using a commercial medical ultrasound scanner and a tissue-mimicking phantom.
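For fully developed speckle the envelope is Rayleigh distributed, so the MSR takes the fixed value sqrt(π/(4 − π)) ≈ 1.91; departures from that value, and the biases that arise when only small amounts of tissue are available, are what such tests probe. A quick simulation of the baseline (a standard speckle model, not the paper's specific system):

```python
import numpy as np

# Envelope of complex Gaussian backscatter: Rayleigh distributed.
rng = np.random.default_rng(0)
n = 200_000
env = np.abs(rng.normal(size=n) + 1j * rng.normal(size=n))
msr = env.mean() / env.std()          # should approach sqrt(pi/(4-pi))
```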

Journal ArticleDOI
TL;DR: It is shown that the variance of the edge-point-based estimates of the axis lengths increases when the location error of the center of the supposed ellipse or its orientation error increases, and local search algorithms can be applied to find the maximum likelihood estimate of the parameters of the ellipse.
Abstract: To delineate the myocardium in planar thallium-201 scintigrams of the left ventricle, a method, based on the Hough transformation, is presented. The method maps feature points (X, Y, Y')-where Y' reflects the direction of the tangent in edge point (X,Y)-into the two-dimensional space of the axis lengths of the ellipse. Within this space, a probability density function (pdf) can be estimated. When the center of the ellipse or its orientation are unknown, the 2-D pdf of the lengths of the axes is extended to a 5-D pdf of all parameters of the ellipse (lengths of the axes, coordinates of the center, and the orientation). It is shown that the variance of the edge-point-based estimates of the axis lengths increases when the location error of the center of the supposed ellipse or its orientation error increases. The likelihood of the estimates is expected to decrease with increasing variance. Therefore, local search algorithms can be applied to find the maximum likelihood estimate of the parameters of the ellipse. Curves describing the convergence of the algorithm are presented, as well as an example of the application of the algorithm to real scintigrams. The method is able to detect contours even if they are only partly visualized, as in thallium scintigrams of the myocardium of patients with ischemic heart disease. As long as the number of parameters describing the contour is relatively low, such an algorithm is also suitable for application to differently curved contours.
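The Hough voting idea is easiest to see in the circle special case: each edge point votes along a circle of candidate centers, and the accumulator peak is the center estimate. A minimal sketch for a known radius (the paper's 5-D ellipse version adds axis lengths and orientation as further parameters):

```python
import numpy as np

def hough_circle_center(points, radius, size):
    """Estimate a circle center from edge points via Hough voting.

    Each (x, y) edge point deposits votes at all candidate centers a
    fixed `radius` away; the accumulator maximum is the estimate.
    """
    acc = np.zeros((size, size))
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for x, y in points:
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < size) & (cy >= 0) & (cy < size)
        np.add.at(acc, (cy[ok], cx[ok]), 1)     # unbuffered accumulation
    cy0, cx0 = np.unravel_index(acc.argmax(), acc.shape)
    return cx0, cy0
```

Because every edge point votes independently, the peak survives even when only an arc of the contour is visualized, which is the property exploited for partly visible myocardial borders.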

Journal ArticleDOI
TL;DR: This paper reports the results of a preliminary set of registration experiments, carried out on low quality photographs of the ocular fundus obtained without contrast medium, which confirm the robustness of the chosen approach, phase correlation.
Abstract: Involuntary eye movements make the observation of the ocular fundus on a TV monitor fatiguing for the physician, not to mention the practical impossibility of measuring dynamic phenomena such as the venous pulse observed in an appreciable percentage of patients. It is therefore necessary to measure and compensate, for display and analysis, the displacement between each image in the video sequence and a reference one. This paper reports the results of a preliminary set of registration experiments, carried out on low-quality photographs of the ocular fundus obtained without contrast medium. The results confirm the robustness of the chosen approach, phase correlation. The effects of choices such as computer word length or raw-data windowing on system performance are also analyzed.
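A minimal sketch of phase correlation on synthetic data, assuming purely integer translations (the paper works on digitized fundus frames and also studies word-length and windowing effects, which are omitted here):

```python
import numpy as np

def phase_correlation(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref.
    The normalized cross-power spectrum has unit magnitude and a phase
    ramp proportional to the shift, so its inverse FFT is a sharp peak
    at the displacement."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12               # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape                             # map peak to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Hypothetical "fundus frame": random texture shifted circularly by (5, -3).
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(5, -3), axis=(0, 1))
dy, dx = phase_correlation(ref, img)
```

The normalization step is what gives the method its robustness to illumination changes between frames: only the phase difference, not the image spectra's magnitudes, determines the correlation peak.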

Journal ArticleDOI
TL;DR: It can be concluded that this system will provide valuable diagnostic and physiologic information, offering added insight into normal and abnormal structure and its relationship to function.
Abstract: A system has been developed to facilitate three-dimensional visualizations of tomographic image data. Tomographic techniques yield parallel planes of data at discrete locations; thus, a series of images comprises a three-dimensional database. From this database, a system has been developed to perform three-dimensional calculations, measurements, and display. The system consists of a conventional two-dimensional video monitor, a digitizing tablet for user interaction and region-of-interest (ROI) definition, application-oriented computational software, and an image display system for true three-dimensional database visualization. The three-dimensional display makes use of a varifocal mirror system with vector graphics capability. Through the use of specialized contouring software, we illustrate the utility of this system in the specific examples of displays prepared from magnetic resonance (MR) images of the brain and carotid arteries. It can be concluded that this system will provide valuable diagnostic and physiologic information, offering added insight into normal and abnormal structure and its relationship to function.

Journal ArticleDOI
TL;DR: The theorem and its practical implementations suggest the possibility of direct Fourier reconstruction in linear spiral-scan NMR imaging, using exact interpolation from spiral samples to a Cartesian lattice.
Abstract: An interpolation method useful for reconstructing an image from its Fourier plane samples on a linear spiral scan trajectory is presented. This kind of sampling arises in NMR imaging. We first present a theorem that enables exact interpolation from spiral samples to a Cartesian lattice. We then investigate two practical implementations of the theorem in which a finite number of interpolating points are used to calculate the value at a new point. Our experimental results confirm the theorem's validity and also demonstrate that both practical implementations yield very good reconstructions. Thus, the theorem and its practical implementations suggest the feasibility of direct Fourier reconstruction in linear spiral-scan NMR imaging.
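The exact interpolation theorem is this paper's contribution and is not reproduced here; as a hedged stand-in, the sketch below illustrates the resampling problem itself, taking spiral-plane samples of a known smooth spectrum and mapping them to a Cartesian lattice with generic scattered-data linear interpolation (SciPy's griddata). The spiral parameters and test spectrum are all hypothetical.

```python
import numpy as np
from scipy.interpolate import griddata

# Linear spiral trajectory in the Fourier plane
# (all parameters here are hypothetical, not the paper's).
t = np.linspace(0.01, 1.0, 4000)
kx = 5.0 * t * np.cos(40.0 * t)
ky = 5.0 * t * np.sin(40.0 * t)

# A known, smooth spectrum sampled along the spiral.
F = lambda u, v: np.exp(-(u ** 2 + v ** 2) / 4.0)
samples = F(kx, ky)

# Cartesian lattice kept well inside the disc covered by the spiral.
grid = np.linspace(-3.0, 3.0, 31)
KX, KY = np.meshgrid(grid, grid)

# Generic scattered-data linear interpolation onto the lattice;
# the resulting Cartesian array could then feed an inverse FFT.
cart = griddata((kx, ky), samples, (KX, KY), method="linear")
mask = KX ** 2 + KY ** 2 <= 9.0
err = np.max(np.abs(cart[mask] - F(KX, KY)[mask]))
```

Generic linear interpolation only approximates the lattice values; the paper's point is that, for spiral sampling, an exact interpolation formula exists, with the two finite-support implementations trading accuracy for computation.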