Showing papers in "IEEE Transactions on Medical Imaging in 1994"


Journal ArticleDOI
TL;DR: Ordered subsets EM (OS-EM) provides restorations that impose a natural positivity condition and have close links to the EM algorithm, and is applicable in both single photon (SPECT) and positron emission tomography (PET).
Abstract: The authors define ordered subset processing for standard algorithms (such as expectation maximization, EM) for image restoration from projections. Ordered subsets methods group projection data into an ordered sequence of subsets (or blocks). An iteration of ordered subsets EM is defined as a single pass through all the subsets, in each subset using the current estimate to initialize application of EM with that data subset. This approach is similar in concept to block-Kaczmarz methods introduced by Eggermont et al. (1981) for iterative reconstruction. Simultaneous iterative reconstruction (SIRT) and multiplicative algebraic reconstruction (MART) techniques are well known special cases. Ordered subsets EM (OS-EM) provides a restoration imposing a natural positivity condition and with close links to the EM algorithm. OS-EM is applicable in both single photon (SPECT) and positron emission tomography (PET). In simulation studies in SPECT, the OS-EM algorithm provides an order-of-magnitude acceleration over EM, with restoration quality maintained. >
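
A compact way to see the update described above: each subset pass rescales the current image by the backprojected ratio of measured to predicted counts over that subset only. The NumPy sketch below is a minimal illustration under simplifying assumptions (a small dense system matrix, no attenuation or scatter modeling), not the authors' implementation.

```python
import numpy as np

def os_em(A, y, subsets, n_iter=5, eps=1e-12):
    """One OS-EM 'iteration' is a single pass through all ordered subsets."""
    x = np.ones(A.shape[1])                       # nonnegative initial estimate
    for _ in range(n_iter):
        for rows in subsets:                      # ordered sequence of subsets (blocks)
            As, ys = A[rows], y[rows]
            sens = As.sum(axis=0)                 # subset sensitivity (backprojection of ones)
            ratio = ys / np.maximum(As @ x, eps)  # measured / predicted counts in this subset
            x = x * (As.T @ ratio) / np.maximum(sens, eps)   # multiplicative EM-style update
    return x

# Toy usage: 8 projection rows over a 16-pixel image, split into 4 ordered subsets.
rng = np.random.default_rng(0)
A = rng.random((8, 16))
x_true = rng.random(16)
y = rng.poisson(A @ x_true * 100)
x_hat = os_em(A, y, subsets=[[0, 4], [1, 5], [2, 6], [3, 7]])
```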

3,740 citations


Journal ArticleDOI
TL;DR: The results indicate that, compared to a manual method, the use of the semiautomatic technique not only facilitates the analysis of the images, but also has similar or lower intra- and interrater variabilities.
Abstract: The analysis of MR images is evolving from qualitative to quantitative. More and more, the question asked by clinicians is how much and where, rather than a simple statement on the presence or absence of abnormalities. The authors present a study in which the results obtained with a semiautomatic, multispectral segmentation technique are quantitatively compared to manually delineated regions. The core of the semiautomatic image analysis system is a supervised artificial neural network classifier augmented with dedicated pre- and postprocessing algorithms, including anisotropic noise filtering and a surface-fitting method for the correction of spatial intensity variations. The study was focused on the quantitation of white matter lesions in the human brain. A total of 36 images from six brain volumes was analyzed twice by each of two operators, under supervision of a neuroradiologist. Both the intra- and interrater variability of the methods were studied in terms of the average tissue area detected per slice, the correlation coefficients between area measurements, and a measure of similarity derived from the kappa statistic. The results indicate that, compared to a manual method, the use of the semiautomatic technique not only facilitates the analysis of the images, but also has similar or lower intra- and interrater variabilities.

1,287 citations


Journal ArticleDOI
TL;DR: Qualitative results suggest that the streak artifacts common to the FBP method are nearly eliminated by the PWLS+SOR method, and indicate that the proposed method for weighting the measurements is a significant factor in the improvement over FBP.
Abstract: Presents an image reconstruction method for positron-emission tomography (PET) based on a penalized, weighted least-squares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, the author argues statistically that a least-squares objective function is as appropriate, if not more so, than the popular Poisson likelihood objective. The author proposes a simple data-based method for determining the weights that accounts for attenuation and detector efficiency. A nonnegative successive over-relaxation (+SOR) algorithm converges rapidly to the global minimum of the PWLS objective. Quantitative simulation results demonstrate that the bias/variance tradeoff of the PWLS+SOR method is comparable to the maximum-likelihood expectation-maximization (ML-EM) method (but with fewer iterations), and is improved relative to the conventional filtered backprojection (FBP) method. Qualitative results suggest that the streak artifacts common to the FBP method are nearly eliminated by the PWLS+SOR method, and indicate that the proposed method for weighting the measurements is a significant factor in the improvement over FBP. >
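
As a rough illustration of the reconstruction idea (not the paper's implementation), the sketch below minimizes a quadratic PWLS objective (a weighted data-fit term plus a quadratic roughness penalty scaled by beta) with nonnegativity-clamped successive over-relaxation sweeps. The diagonal weights w stand in for the data-based weights the author describes (accounting for attenuation and detector efficiency), and R is any symmetric penalty matrix; all parameter values here are assumptions.

```python
import numpy as np

def pwls_sor(A, y, w, R, beta=0.1, omega=1.0, n_sweeps=50):
    """Nonnegative SOR sweeps on the normal equations of the PWLS objective."""
    H = A.T @ (w[:, None] * A) + beta * R        # Hessian of the quadratic objective
    b = A.T @ (w * y)
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for j in range(x.size):
            r = b[j] - H[j] @ x + H[j, j] * x[j]                       # residual excluding pixel j
            x[j] = max(0.0, (1 - omega) * x[j] + omega * r / H[j, j])  # clamp enforces nonnegativity
    return x
```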

673 citations


Journal ArticleDOI
TL;DR: It is demonstrated that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement, and that by improving the visualization of breast pathology one can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.
Abstract: Introduces a novel approach for accomplishing mammographic feature analysis by overcomplete multiresolution representations. The authors show that efficient representations may be identified within a continuum of scale-space and used to enhance features of importance to mammography. Methods of contrast enhancement are described based on three overcomplete multiscale representations: 1) the dyadic wavelet transform (separable), 2) the φ-transform (nonseparable, nonorthogonal), and 3) the hexagonal wavelet transform (nonseparable). Multiscale edges identified within distinct levels of transform space provide local support for image enhancement. Mammograms are reconstructed from wavelet coefficients modified at one or more levels by local and global nonlinear operators. In each case, edges and gain parameters are identified adaptively by a measure of energy within each level of scale-space. The authors show quantitatively that transform coefficients, modified by adaptive nonlinear operators, can make more obvious unseen or barely seen features of mammography without requiring additional radiation. The authors' results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. They demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. By improving the visualization of breast pathology, one can improve chances of early detection while requiring less time to evaluate mammograms for most patients.
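
The enhancement principle (amplify detail coefficients of an overcomplete multiscale decomposition with a nonlinear, level-wise gain, then reconstruct) can be mimicked with a much simpler stand-in. The sketch below uses an overcomplete Gaussian band-pass decomposition rather than the paper's dyadic wavelet, φ-transform, or hexagonal wavelet, and a hard two-level gain in place of the adaptive operators; the scales, gain, and threshold are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_enhance(image, sigmas=(1, 2, 4, 8), gain=3.0, frac=0.05):
    img = image.astype(float)
    smooth_prev, details = img, []
    for s in sigmas:
        smooth = gaussian_filter(img, s)
        details.append(smooth_prev - smooth)       # band-pass "detail" level
        smooth_prev = smooth
    out = smooth_prev                              # coarsest residual
    for d in details[::-1]:
        mag = np.abs(d)
        boost = np.where(mag < frac * mag.max(), gain, 1.0)   # amplify faint detail only
        out = out + boost * d                      # with gain = 1 this exactly reproduces the input
    return out
```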

382 citations


Journal ArticleDOI
TL;DR: An algorithm is presented for the analysis and quantification of the vascular structures of the human retina that relies on a matched filtering approach coupled with a priori knowledge about retinal vessel properties to automatically detect the vessel boundaries, track the midline of the vessel, and extract useful parameters of clinical interest.
Abstract: An algorithm is presented for the analysis and quantification of the vascular structures of the human retina. Information about retinal blood vessel morphology is used in grading the severity and progression of a number of diseases. These disease processes are typically followed over relatively long time courses, and subjective analysis of the sequential images dictates the appropriate therapy for these patients. In this research, retinal fluorescein angiograms are acquired digitally in a 1024 × 1024, 16-bit image format and are processed using an automated vessel tracking program to identify and quantitate stenotic and/or tortuous vessel segments. The algorithm relies on a matched filtering approach coupled with a priori knowledge about retinal vessel properties to automatically detect the vessel boundaries, track the midline of the vessel, and extract useful parameters of clinical interest. By modeling the vessel profile using Gaussian functions, improved estimates of vessel diameters are obtained over previous algorithms. An adaptive densitometric tracking technique based on local neighborhood information is also used to improve computational performance in regions where the vessel is relatively straight.
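
Only the matched-filtering idea is sketched below: a zero-mean Gaussian cross-sectional kernel is correlated with the image at several orientations and the maximum response is kept. The paper's boundary detection, midline tracking, and diameter estimation are not shown, and the kernel size, vessel polarity, and number of orientations are assumptions.

```python
import numpy as np
from scipy.ndimage import correlate, rotate

def vessel_kernel(sigma=2.0, length=15):
    """Zero-mean Gaussian cross-section, extended along the assumed vessel axis."""
    x = np.arange(-3 * int(np.ceil(sigma)), 3 * int(np.ceil(sigma)) + 1)
    profile = -np.exp(-x**2 / (2.0 * sigma**2))    # vessels assumed darker than background
    profile -= profile.mean()                      # flat regions then give zero response
    return np.tile(profile, (length, 1))

def matched_filter_response(image, n_angles=12, sigma=2.0):
    k0 = vessel_kernel(sigma)
    resp = [correlate(image.astype(float), rotate(k0, angle, reshape=True))
            for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False)]
    return np.max(resp, axis=0)                    # best response over orientations
```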

314 citations


Journal ArticleDOI
TL;DR: Results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors.
Abstract: The authors describe conjugate gradient algorithms for reconstruction of transmission and emission PET images. The reconstructions are based on a Bayesian formulation, where the data are modeled as a collection of independent Poisson random variables and the image is modeled using a Markov random field. A conjugate gradient algorithm is used to compute a maximum a posteriori (MAP) estimate of the image by maximizing over the posterior density. To ensure nonnegativity of the solution, a penalty function is used to convert the problem to one of unconstrained optimization. Preconditioners are used to enhance convergence rates. These methods generally achieve effective convergence in 15-25 iterations. Reconstructions are presented of an ¹⁸FDG whole body scan from data collected using a Siemens/CTI ECAT931 whole body system. These results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors.
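
A hedged sketch of the optimization structure only: a Poisson negative log-likelihood plus a quadratic MRF-style roughness prior, with nonnegativity handled by an extra quadratic penalty so the problem becomes unconstrained, solved here by unpreconditioned conjugate gradients via SciPy. The 1D neighborhood, penalty weights, and starting image are assumptions; this is not the authors' preconditioned algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def map_cg(A, y, beta=0.1, mu=10.0, eps=1e-9):
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # simple first-difference "MRF" roughness operator

    def objective(x):
        ax = np.maximum(A @ x, eps)
        nll = np.sum(ax - y * np.log(ax))          # Poisson negative log-likelihood
        prior = 0.5 * beta * np.sum((D @ x) ** 2)
        neg_pen = 0.5 * mu * np.sum(np.minimum(x, 0.0) ** 2)   # penalty discouraging negative pixels
        return nll + prior + neg_pen

    def gradient(x):
        ax = np.maximum(A @ x, eps)
        return A.T @ (1.0 - y / ax) + beta * (D.T @ (D @ x)) + mu * np.minimum(x, 0.0)

    x0 = np.full(n, y.mean() / max(A.sum(axis=1).mean(), eps))
    res = minimize(objective, x0, jac=gradient, method='CG', options={'maxiter': 200})
    return res.x
```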

302 citations


Journal ArticleDOI
TL;DR: A set of shape factors is developed to measure the roughness of contours of calcifications in mammograms, for use in their classification as malignant or benign.
Abstract: The authors have developed a set of shape factors to measure the roughness of contours of calcifications in mammograms and for use in their classification as malignant or benign. The analysis of mammograms is performed in three stages. First, a region growing technique is used to obtain the contours of calcifications. Then, three measures of shape features, including compactness, moments, and Fourier descriptors are computed for each region. Finally, their applicability for classification is studied by using the three shape measures to form feature vectors. Classification of 143 calcifications from 18 biopsy-proven cases as benign or malignant using the three measures with the nearest-neighbor method was 100% accurate.
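
Two of the three shape measure families can be sketched directly from a binary calcification mask: compactness from region perimeter and area, and contour Fourier descriptors from the complex boundary signal. Moment-based features and the nearest-neighbor classifier are omitted, scikit-image is assumed for the region and contour extraction, and a single connected region per mask is assumed.

```python
import numpy as np
from skimage import measure

def shape_features(mask):
    props = measure.regionprops(mask.astype(int))[0]
    compactness = props.perimeter ** 2 / (4 * np.pi * props.area)   # = 1 for a perfect circle

    contour = measure.find_contours(mask.astype(float), 0.5)[0]     # (row, col) boundary points
    z = contour[:, 1] + 1j * contour[:, 0]                          # complex boundary signal
    fd = np.abs(np.fft.fft(z - z.mean()))
    fd = fd[1:] / (fd[1] + 1e-12)                                   # translation/scale-invariant descriptors
    return compactness, fd[:10]
```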

292 citations


Journal ArticleDOI
TL;DR: A hierarchy of image processing steps is described which rapidly detects both the contours of the myocardial boundaries of the left ventricle and the tags within the myocardium; the method is currently being used in the analysis of cardiac strain and as a basis for the analysis of alternate tag geometries.
Abstract: Tracking magnetic resonance tags in myocardial tissue promises to be an effective tool for the assessment of myocardial motion. The authors describe a hierarchy of image processing steps which rapidly detects both the contours of the myocardial boundaries of the left ventricle and the tags within the myocardium. The method works on both short axis and long axis images containing radial and parallel tag patterns, respectively. Left ventricular boundaries are detected by first removing the tags using morphological closing and then selecting candidate edge points. The best inner and outer boundaries are found using a dynamic program that minimizes a nonlinear combination of several local cost functions. Tags are tracked by matching a template of their expected profile using a least squares estimate. Since blood pooling, contiguous and adjacent tissue, and motion artifacts sometimes cause detection errors, a graphical user interface was developed to allow user correction of anomalous points. The authors present results on several tagged images of a human. A fully automated run generally finds the endocardial boundary and the tag lines extremely well, requiring very little manual correction. The epicardial boundary sometimes requires more intervention to obtain an acceptable result. These methods are currently being used in the analysis of cardiac strain and as a basis for the analysis of alternate tag geometries.
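
Only the first two steps of the hierarchy are sketched below: grey-scale morphological closing to suppress the dark tag lines, followed by a simple gradient threshold to pick candidate edge points. The dynamic-programming boundary search and the template-based tag tracking are not shown; the structuring-element size and edge threshold are assumptions tied to the (assumed) tag spacing.

```python
import numpy as np
from scipy.ndimage import grey_closing, sobel

def suppress_tags(image, tag_period_px=8):
    """Closing with an element wider than one tag period fills in the dark tag lines."""
    size = int(tag_period_px) + 1
    return grey_closing(image, size=(size, size))

def candidate_edges(image, tag_period_px=8, frac=0.2):
    """Candidate boundary points from the tag-suppressed image (threshold is arbitrary)."""
    closed = suppress_tags(image, tag_period_px)
    grad = np.hypot(sobel(closed, axis=0), sobel(closed, axis=1))
    return grad > frac * grad.max()
```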

290 citations


Journal ArticleDOI
TL;DR: A novel adaptive algorithm is presented that tailors the required amount of contrast enhancement based on the local contrast of the image and the observer's Just-Noticeable-Difference (JND) and offers considerable benefits in digital radiography applications where the objective is to increase the diagnostic utility of images.
Abstract: Existing methods for image contrast enhancement focus mainly on the properties of the image to be processed while excluding any consideration of the observer characteristics. In several applications, particularly in the medical imaging area, effective contrast enhancement for diagnostic purposes can be achieved by including certain basic human visual properties. Here the authors present a novel adaptive algorithm that tailors the required amount of contrast enhancement based on the local contrast of the image and the observer's Just-Noticeable-Difference (JND). This algorithm always produces adequate contrast in the output image, and results in almost no ringing artifacts even around sharp transition regions, which are often seen in images processed by conventional contrast enhancement techniques. By separating smooth and detail areas of an image and considering the dependence of noise visibility on the spatial activity of the image, the algorithm treats them differently and thus avoids excessive enhancement of noise, which is another common problem for many existing contrast enhancement techniques. The present JND-Guided Adaptive Contrast Enhancement (JGACE) technique is very general and can be applied to a variety of images. In particular, it offers considerable benefits in digital radiography applications where the objective is to increase the diagnostic utility of images. A detailed performance evaluation together with a comparison with the existing techniques is given to demonstrate the strong features of JGACE.

256 citations


Journal ArticleDOI
TL;DR: A statistical method is developed to classify tissue types and to segment the corresponding tissue regions from relaxation time T(1), T(2), and proton density P(D) weighted magnetic resonance images.
Abstract: A statistical method is developed to classify tissue types and to segment the corresponding tissue regions from relaxation time T1, T2, and proton density PD weighted magnetic resonance images. The method assumes that the distribution of image intensities associated with each tissue type can be expressed as a multivariate likelihood function of three weighted signal intensity values (T1, T2, PD) at each location within that tissue region. The method further assumes that the underlying tissue regions are piecewise contiguous and can be characterized by a Markov random field prior. In classifying the tissue types, the method models the likelihood of realizing the images as a finite multivariate-mixture function. The class parameters associated with the tissue types (i.e., the weighted intensity means, variances, and correlation coefficients of the multivariate function, as well as the number of voxels within regions of the tissue types) are estimated by maximum likelihood. The estimation fits the class parameters to the image data via the expectation-maximization algorithm. The number of classes associated with the tissue types is determined by the information criterion of minimum description length. The method segments the tissue regions, given the estimated class parameters, by maximum a posteriori probability. The prior is constructed by the tissue-region membership of the first- and second-order neighborhood. The method is tested by a few sets of T1, T2, and PD weighted images of the brain acquired with a 1.5 Tesla whole body scanner. The number of classes and the associated class parameters are automatically estimated. The regions of different brain tissues are satisfactorily segmented.
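
A stand-in for the intensity-classification stage only: EM fitting of multivariate Gaussian mixtures to (T1, T2, PD) intensity vectors, with the class count chosen by an information criterion (BIC is used here as a proxy for the paper's minimum description length). The Markov random field prior and the MAP segmentation step are not reproduced, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_voxels(t1, t2, pd, max_classes=6):
    X = np.stack([t1.ravel(), t2.ravel(), pd.ravel()], axis=1).astype(float)
    models = [GaussianMixture(k, covariance_type='full', random_state=0).fit(X)
              for k in range(2, max_classes + 1)]
    best = min(models, key=lambda m: m.bic(X))        # smallest criterion value wins
    labels = best.predict(X).reshape(t1.shape)        # per-voxel tissue label (no spatial prior)
    return labels, best
```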

238 citations


Journal ArticleDOI
TL;DR: An exact inversion formula written in the form of shift-variant filtered-backprojection (FBP) is given for reconstruction from cone-beam data taken from any orbit satisfying H.K. Tuy's (1983) sufficiency conditions.
Abstract: An exact inversion formula written in the form of shift-variant filtered-backprojection (FBP) is given for reconstruction from cone-beam data taken from any orbit satisfying H.K. Tuy's (1983) sufficiency conditions. The method is based on a result of P. Grangeat (1987), involving the derivative of the three-dimensional (3D) Radon transform, but unlike Grangeat's algorithm, no 3D rebinning step is required. Data redundancy, which occurs when several cone-beam projections supply the same values in the Radon domain, is handled using an elegant weighting function and without discarding data. The algorithm is expressed in a convenient cone-beam detector reference frame, and a specific example for the case of a dual orthogonal circular orbit is presented. When the method is applied to a single circular orbit (even though Tuy's condition is not satisfied), it is shown to be equivalent to the well-known algorithm of L.A. Feldkamp et al. (1984). >

Journal ArticleDOI
TL;DR: This technique makes use of the fact that, in most time-sequential imaging problems, the high-resolution image morphology does not change from one image to another, and it improves imaging efficiency over the conventional Fourier imaging methods by eliminating the repeated encodings of this stationary information.
Abstract: Many magnetic resonance imaging applications require the acquisition of a time series of images. In conventional Fourier transform based imaging methods, each of these images is acquired independently so that the temporal resolution possible is limited by the number of spatial encodings (or data points in the Fourier space) collected, or one has to sacrifice spatial resolution for temporal resolution. In this paper, a generalized series based imaging technique is proposed to address this problem. This technique makes use of the fact that, in most time-sequential imaging problems, the high-resolution image morphology does not change from one image to another, and it improves imaging efficiency (and temporal resolution) over the conventional Fourier imaging methods by eliminating the repeated encodings of this stationary information. Additional advantages of the proposed imaging technique include a reduced number of radio-frequency (RF) pulses for data collection, and thus lower RF power deposition. This method should prove useful for a variety of dynamic imaging applications, including dynamic studies of contrast agents and functional brain imaging.
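
The efficiency argument can be illustrated with a simpler relative of the generalized-series approach, a "keyhole"-style update: high spatial frequencies are taken from a fully encoded reference frame and only a small block of central phase encodings is re-acquired for each dynamic frame. This is a stand-in for the idea of not re-encoding stationary high-resolution information, not the authors' generalized-series model.

```python
import numpy as np

def keyhole_frame(kspace_ref, center_lines_new):
    """kspace_ref: centered reference k-space (ny, nx);
    center_lines_new: (n_center, nx) newly acquired central phase encodings."""
    ny = kspace_ref.shape[0]
    n_center = center_lines_new.shape[0]
    lo = ny // 2 - n_center // 2
    k = kspace_ref.copy()
    k[lo:lo + n_center] = center_lines_new         # replace only the low spatial frequencies
    return np.fft.ifft2(np.fft.ifftshift(k))       # image estimate for this dynamic frame
```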

Journal ArticleDOI
TL;DR: Experiments involving several MR and US images show that the entropy-coded DPCM method can provide compression in the range from 4 to 10 with a peak SNR of about 50 dB for 8-bit medical images, and a comparison with the JPEG standard reveals that it can provide about 7 to 8 dB higher SNR for the same compression performance.
Abstract: The near-lossless, i.e., lossy but high-fidelity, compression of medical images using the entropy-coded DPCM method is investigated. A source model with multiple contexts and arithmetic coding are used to enhance the compression performance of the method. In implementing the method, two different quantizers each with a large number of quantization levels are considered. Experiments involving several MR (magnetic resonance) and US (ultrasound) images show that the entropy-coded DPCM method can provide compression in the range from 4 to 10 with a peak SNR of about 50 dB for 8-bit medical images. The use of multiple contexts is found to improve the compression performance by about 25% to 30% for MR images and 30% to 35% for US images. A comparison with the JPEG standard reveals that the entropy-coded DPCM method can provide about 7 to 8 dB higher SNR for the same compression performance.
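
The quantization stage of a near-lossless DPCM coder can be sketched in a few lines: previous-pixel prediction and a uniform residual quantizer whose step size bounds the per-pixel reconstruction error by ±delta. The paper's context modeling and arithmetic coding of the residuals are not shown, and this generic previous-pixel predictor is an assumption, not the authors' two quantizer designs.

```python
import numpy as np

def dpcm_encode_row(row, delta=2):
    step = 2 * delta + 1
    pred, codes, recon = 0, [], []
    for p in row.astype(int):
        e = p - pred
        q = int(np.round(e / step))          # quantized prediction residual (to be entropy coded)
        r = pred + q * step                  # decoder-side reconstruction, |p - r| <= delta
        codes.append(q)
        recon.append(r)
        pred = r                             # predict from the reconstructed value, as the decoder would
    return codes, np.array(recon)
```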

Journal ArticleDOI
TL;DR: Results of computer simulations are presented which demonstrate the ability of the algorithms to achieve useful reconstructions in the absence of measurement uncertainties (other than those caused by quantization).
Abstract: The Compton scattering camera (sometimes called the electronically collimated camera) has been shown by others to have the potential to better the photon counting statistics and the energy resolution of the Anger camera for imaging in SPECT. By using coincident detection of Compton scattering events on two detecting planes, a photon can be localized to having been sourced on the surface of a cone. New algorithms are needed to achieve fully three-dimensional reconstruction of the source distribution from such a camera. If a complete set of cone-surface projections are collected over an infinitely extending plane, it is shown that the reconstruction problem is not only analytically solvable, but also overspecified in the absence of measurement uncertainties. Two approaches to direct reconstruction are proposed, both based on the photons which travel perpendicularly between the detector planes. Results of computer simulations are presented which demonstrate the ability of the algorithms to achieve useful reconstructions in the absence of measurement uncertainties (other than those caused by quantization). The modifications likely to be required in the presence of realistic measurement uncertainties are discussed. >

Journal ArticleDOI
TL;DR: A cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used and is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm.
Abstract: B.D. Smith (ibid., vol.MI-4, p.15-25, 1985; Opt. Eng., vol.29, p.524-34, 1990) and P. Grangeat (Thèse de doctorat, 1987; Lecture Notes in Mathematics 1497, p.66-97, 1991) each derived a cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, the two have similar overall structures. The contribution of the present paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's inversion formula and Smith's can be conveniently described using a single formula (the Smith-Grangeat inversion formula) that is in the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in direct implementation of the new algorithm. As for exactness of the new algorithm, the following fact can be stated. The algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered as an approximate inverse excepting the special case where almost every plane in 3D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies.

Journal ArticleDOI
TL;DR: The authors propose a new method called the "weighted line site" method, in which a prior structural image is employed in a modified updating scheme for the boundary variable used in the iterative Bayesian reconstruction of positron emission tomography images.
Abstract: Reports on a new method in which spatially correlated magnetic resonance (MR) or X-ray computed tomography (CT) images are employed as a source of prior information in the Bayesian reconstruction of positron emission tomography (PET) images. This new method incorporates the correlated structural images as anatomic templates which can be used for extracting information about boundaries that separate regions exhibiting different tissue characteristics. In order to avoid the possible introduction of artifacts caused by discrepancies between functional and anatomic boundaries, the authors propose a new method called the "weighted line site" method, in which a prior structural image is employed in a modified updating scheme for the boundary variable used in the iterative Bayesian reconstruction. This modified scheme is based on the joint probability of structural and functional boundaries. As to the structural information provided by CT or MR images, only those which have high joint probability with the corresponding PET data are used, whereas other boundary information that is not supported by the PET image is suppressed. The new method has been validated by computer simulation and phantom studies. The results of these validation studies indicate that this new method offers significant improvements in image quality when compared to other reconstruction algorithms, including the filtered backprojection method and the maximum likelihood approach, as well as the Bayesian method without the use of the prior boundary information.

Journal ArticleDOI
TL;DR: It was observed that the tomographic point response, after distance-dependent filtering with the FDP, was approximately isotropic and varied substantially less with position than that obtained with other correction methods.
Abstract: A filtering approach is described, which accurately compensates for the 2D distance-dependent detector response, as well as for photon attenuation in a uniform attenuating medium. The filtering method is based on the frequency distance principle (FDP) which states that points in the object at a specific source-to-detector distance provide the most significant contribution to specified frequency regions in the discrete Fourier transform (DFT) of the sinogram. By modeling the detector point spread function as a 2D Gaussian function whose width is dependent on the source-to-detector distance, a spatially variant inverse filter can be computed and applied to the 3D DFT of the set of all sinogram slices. To minimize noise amplification the inverse filter is rolled off at high frequencies by using a previously published Wiener filter strategy. Attenuation compensation is performed with Bellini's method. It was observed that the tomographic point response, after distance-dependent filtering with the FDP, was approximately isotropic and varied substantially less with position than that obtained with other correction methods. Furthermore, it was shown that processing with this filtering technique provides reconstructions with minimal degradation in image fidelity. >

Journal ArticleDOI
TL;DR: The authors present a minimizing strategy suitable for tracking the SPAMM grid using snakes (active contour models); by continuously minimizing their energy functionals, the snakes lock on to and follow the in-slice motion and deformation of the grid.
Abstract: Presents a new approach for the automatic tracking of the SPAMM (Spatial Modulation of Magnetization) grid in cardiac MR images and consequent estimation of deformation parameters. The tracking is utilized to extract grid points from MR images and to establish correspondences between grid points in images taken at consecutive frames. These correspondences are used with a thin plate spline model to establish a mapping from one image to the next. This mapping is then used for motion and deformation estimation. Spatio-temporal tracking of the SPAMM grid is achieved by using snakes (active contour models) with an associated energy functional. The authors present a minimizing strategy which is suitable for tracking the SPAMM grid. By continuously minimizing their energy functionals, the snakes lock on to and follow the in-slice motion and deformation of the SPAMM grid. The proposed algorithm was tested with excellent results on 123 images (three data sets, each a multiple slice 2D, 16-phase Cine study; three data sets, each a multiple slice 2D, 13-phase Cine study; and three data sets, each a multiple slice 2D, 12-phase Cine study).

Journal ArticleDOI
TL;DR: In all applications, the proposed filter suggested better detail preservation, noise suppression, and edge detection than all other approaches and it may prove to be a useful tool for computer-assisted diagnosis in digital mammography.
Abstract: A new class of nonlinear filters with more robust characteristics for noise suppression and detail preservation is proposed for processing digital mammographic images. The new algorithm consists of two major filtering blocks: (a) a multistage tree-structured filter for image enhancement that uses central weighted median filters as basic sub-filtering blocks and (b) a dispersion edge detector. The design of the algorithm also included the use of linear and curved windows to determine whether variable shape windowing could improve detail preservation. First, the noise-suppressing properties of the tree-structured filter were compared to single filters, namely the median and the central weighted median with conventional square and variable shape adaptive windows; simulated images were used for this purpose. Second, the edge detection properties of the tree-structured filter cascaded with the dispersion edge detector were compared to the performance of the dispersion edge detector alone, the Sobel operator, and the single median filter cascaded with the dispersion edge detector. Selected mammographic images with representative biopsy-proven malignancies were processed with all methods and the results were visually evaluated by an expert mammographer. In all applications, the proposed filter suggested better detail preservation, noise suppression, and edge detection than all other approaches and it may prove to be a useful tool for computer-assisted diagnosis in digital mammography. >
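
The basic sub-filtering block, a center-weighted median, can be written directly: the window's center sample is replicated a fixed number of times before taking the median, which preserves fine detail better than a plain median. The multistage tree structure, variable-shape windows, and dispersion edge detector are not reproduced here, and the window size and center weight are arbitrary assumptions.

```python
import numpy as np

def center_weighted_median(image, window=3, center_weight=3):
    pad = window // 2
    padded = np.pad(image, pad, mode='reflect')
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            block = padded[i:i + window, j:j + window].ravel().tolist()
            block += [padded[i + pad, j + pad]] * (center_weight - 1)   # replicate the center sample
            out[i, j] = np.median(block)
    return out
```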

Journal ArticleDOI
TL;DR: An interactive system exploits recent computer graphics and computer vision techniques to significantly reduce the time required to build 3D nerve cell models from serial microscopy.
Abstract: Neuroscientists have studied the relationship between nerve cell morphology and function for over a century. To pursue these studies, they need accurate three-dimensional models of nerve cells that facilitate detailed anatomical measurement and the identification of internal structures. Although serial transmission electron microscopy has been a source of such models since the mid 1960s, model reconstruction and analysis remain very time consuming. The authors have developed a new approach to reconstructing and visualizing 3D nerve cell models from serial microscopy. An interactive system exploits recent computer graphics and computer vision techniques to significantly reduce the time required to build such models. The key ingredients of the system are a digital "blink comparator" for section registration, "snakes," or active deformable contours, for semiautomated cell segmentation, and voxel-based techniques for 3D reconstruction and visualization of complex cell volumes with internal structures. >

Journal ArticleDOI
TL;DR: It is concluded that multispectral analysis of magnetic resonance images is a valuable tool to recognize the most common normal tissue types in the brain and surrounding structures.
Abstract: The authors demonstrate an improved differentiation of the most common tissue types in the human brain and surrounding structures by quantitative validation using multispectral analysis of magnetic resonance images. This is made possible by a combination of a special training technique and an increase in the number of magnetic resonance channel images with different pulse acquisition parameters. The authors give a description of the tissue-specific multivariate statistical distributions of the pixel intensity values and discuss how their properties may be explored to improve the statistical modeling further. A statistical method to estimate the tissue-specific longitudinal and transverse relaxation times is also given. It is concluded that multispectral analysis of magnetic resonance images is a valuable tool to recognize the most common normal tissue types in the brain and surrounding structures. >

Journal ArticleDOI
TL;DR: A three-dimensional (3D) reconstruction of the vessel lumen from two angiographic views, based on the reconstruction of a series of cross-sections, is proposed, which performs well both on single vessels and on branching vessels possessing an additional inherent ambiguity when viewed at oblique angles.
Abstract: A three-dimensional (3D) reconstruction of the vessel lumen from two angiographic views, based on the reconstruction of a series of cross-sections, is proposed. Assuming uniform mixing of contrast medium and background subtraction, the cross-section of each vessel is reconstructed through a binary representation. A priori information about both the slice to be reconstructed and the relationships between adjacent slices is incorporated to lessen ambiguities in the reconstruction. Taking into account the knowledge of normal vessel geometry, an initial solution of each slice is created using an elliptic model-based method. This initial solution is then deformed to be made consistent with projection data while being constrained into a connected realistic shape. For that purpose, properties of the expected optimal solution are described through a Markov random field. To find an optimal solution, a specific optimization algorithm based on simulated annealing is used. The method performs well both on single vessels and on branching vessels possessing an additional inherent ambiguity when viewed at oblique angles. Results on 2D slice-independent reconstruction and 3D reconstruction of a stack of spatially continuous 2D slices are presented for single vessels and bifurcations.

Journal ArticleDOI
TL;DR: A new automatic target recognition algorithm has been developed to extract craniofacial landmarks from lateral skull X-rays (cephalograms) and showed an 85% recognition rate on average.
Abstract: A new automatic target recognition algorithm has been developed to extract craniofacial landmarks from lateral skull X-rays (cephalograms). The locations of these landmarks are used by orthodontists in what is referred to as a cephalometric evaluation. The evaluation assists in the diagnosis of anomalies and in the monitoring of treatments. The algorithm is based on gray-scale mathematical morphology. A statistical approach to training was used to overcome subtle differences in skeletal topographies. Decomposition was used to desensitize the algorithm to size differences. A system was trained to locate 20 landmarks. Tests on 40 X-rays showed an 85% recognition rate on average. >

Journal ArticleDOI
TL;DR: A reconstruction algorithm using neural network techniques which calculates a linear approximation of the inverse problem directly from finite element simulations of the forward problem, adapted to the geometry of the medium and the signal-to-noise ratio (SNR) used during network training.
Abstract: Reconstruction of images in electrical impedance tomography requires the solution of a nonlinear inverse problem on noisy data. This problem is typically ill-conditioned and requires either simplifying assumptions or regularization based on a priori knowledge. The authors present a reconstruction algorithm using neural network techniques which calculates a linear approximation of the inverse problem directly from finite element simulations of the forward problem. This inverse is adapted to the geometry of the medium and the signal-to-noise ratio (SNR) used during network training. Results show good conductivity reconstruction where measurement SNR is similar to the training conditions. The advantages of this method are its conceptual simplicity and ease of implementation, and the ability to control the compromise between the noise performance and resolution of the image reconstruction. >
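
The core idea, learning a linear approximate inverse from simulated forward-model data at a chosen noise level, can be emulated with ridge-regularized least squares (a single linear layer trained with squared error converges to the same solution). This is a stand-in sketch, not the authors' network or finite element code; the noise level and regularization strength are assumptions.

```python
import numpy as np

def learn_linear_inverse(V_sim, sigma_sim, noise_std=0.01, ridge=1e-3):
    """V_sim: (n_examples, n_measurements) simulated boundary voltages;
    sigma_sim: (n_examples, n_elements) corresponding conductivity images."""
    rng = np.random.default_rng(0)
    Vn = V_sim + noise_std * rng.standard_normal(V_sim.shape)   # train at the expected SNR
    # Ridge-regularized least squares: W maps measurements -> conductivity image.
    W = np.linalg.solve(Vn.T @ Vn + ridge * np.eye(Vn.shape[1]), Vn.T @ sigma_sim)
    return W   # reconstruction for new data: sigma_hat = V_meas @ W
```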

Journal ArticleDOI
TL;DR: Evaluations using both Monte Carlo simulations and phantom studies on the Siemens 953B scanner suggest that the expectation-maximization algorithm yields unbiased images with significantly lower variances than filtered-backprojection when the images are reconstructed to the intrinsic resolution.
Abstract: The expectation-maximization (EM) algorithm for computing maximum-likelihood estimates of transmission images in positron-emission tomography (PET) (see K. Lange and R. Carson, J. Comput. Assist. Tomogr., vol.8, no.2, p.306-16, 1984) is extended to include measurement error, accidental coincidences and Compton scatter. A method for accomplishing the maximization step using one step of Newton's method is proposed. The algorithm is regularized with the method of sieves. Evaluations using both Monte Carlo simulations and phantom studies on the Siemens 953B scanner suggest that the algorithm yields unbiased images with significantly lower variances than filtered-backprojection when the images are reconstructed to the intrinsic resolution. Large features in the images converge in under 200 iterations while the smallest features required up to 2,000 iterations. All but the smallest features in typical transmission scans converge in approximately 250 iterations. The initial implementation of the algorithm requires 50 sec per iteration on a DECStation 5000. >

Journal ArticleDOI
TL;DR: The authors found that some of these methods may give clinically useful information which is very difficult to get from ordinary 2D ultrasonic images, and in some cases renderings with very fine structural details.
Abstract: The authors explore the application of volume rendering in medical ultrasonic imaging. Several volume rendering methods have been developed for X-ray computed tomography (X-CT), magnetic resonance imaging (MRI) and positron emission tomography (PET). Limited research has been done on applications of volume rendering techniques in medical ultrasound imaging because of a general lack of adequate equipment for 3D acquisitions. Severe noise sources and other limitations in the imaging system make volume rendering of ultrasonic data a challenge compared to rendering of MRI and X-CT data. Rendering algorithms that rely on an initial classification of the data into different tissue categories have been developed for high quality X-CT and MR-data. So far, there is a lack of general and reliable methods for tissue classification in ultrasonic imaging. The authors focus on volume rendering methods which are not dependent on any classification into different tissue categories. Instead, features are extracted from the original 3D data-set, and projected onto the view plane. The authors found that some of these methods may give clinically useful information which is very difficult to get from ordinary 2D ultrasonic images, and in some cases renderings with very fine structural details. The authors have applied the methods to 3D ultrasound images from fetal examinations. The methods are now in use as clinical tools at the National Center of Fetal Medicine in Trondheim, Norway. >

Journal ArticleDOI
TL;DR: Foreshortening of vessel segments in angiographic (biplane) projection images may cause misinterpretation of the extent and degree of coronary artery disease; the authors present a complete approach to obtaining optimal biplane views with computer-assisted techniques.
Abstract: Foreshortening of vessel segments in angiographic (biplane) projection images may cause misinterpretation of the extent and degree of coronary artery disease. The views in which the object of interest is visualized with minimum foreshortening are called optimal views. The authors present a complete approach to obtain such views with computer-assisted techniques. The object of interest is first visualized in two arbitrary views. Two landmarks of the object are manually defined in the two projection images. With complete information of the projection geometry, the vector representation of the object in three-dimensional space is computed. This vector is perpendicular to a plane in which the views are called optimal. The user has one degree of freedom to define a set of optimal biplane views. The angle between the central beams of the imaging systems can be chosen freely. The computation of the orientation of the object and of corresponding optimal biplane views has been evaluated with a simple hardware phantom. The mean and the standard deviation of the overall errors in the calculation of the optimal angulation angles were 1.8° and 1.3°, respectively, when the user defined a rotation angle.
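
Only the last step is sketched here: given the already-reconstructed 3D direction vector of the segment of interest, every viewing direction perpendicular to it is free of foreshortening, and any two of them (at a freely chosen mutual angle) form an optimal biplane pair. The landmark-based 3D reconstruction from the two calibrated views is not shown.

```python
import numpy as np

def optimal_view_directions(v, n=36):
    """Return n unit view directions spanning the plane perpendicular to the object vector v."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    a = np.array([1.0, 0.0, 0.0]) if abs(v[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(v, a); u /= np.linalg.norm(u)      # one direction perpendicular to v
    w = np.cross(v, u)                               # completes the orthonormal basis
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.array([np.cos(t) * u + np.sin(t) * w for t in angles])   # foreshortening-free view axes
```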

Journal ArticleDOI
TL;DR: The results indicate that the TEW technique works well for a wide range of activity distributions and object sizes, and comparisons between the TEW and dual window techniques show better quantitative accuracy for the TEW.
Abstract: A practical triple energy window (TEW) technique is proposed, which is based on using the information in two lower energy windows and one single calibration to estimate the scatter within the photopeak window. The technique is basically a conventional dual-window technique plus a modification factor, which can partially compensate for object-distribution-dependent scatter. The modification factor is a function of the two lower scatter windows of both the calibration phantom and the actual object. In order to evaluate the technique, a Monte Carlo simulation program, which simulates the PENN-PET scanner geometry, was used. Different phantom activity distributions and phantom sizes were tested to simulate brain studies, including uniform and nonuniform distributions. The results indicate that the TEW technique works well for a wide range of activity distributions and object sizes. The comparisons between the TEW and dual window techniques show better quantitative accuracy for the TEW, especially for different phantom sizes. The technique is also applied to experimental data from a PENN-PET scanner to test its practicality.

Journal ArticleDOI
TL;DR: The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT using a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images.
Abstract: The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy. >

Journal ArticleDOI
TL;DR: The authors show that with a least squares estimation algorithm, it is possible to define the position of the tag lines with a precision on the order of a tenth of a pixel with the help of the Cramer-Rao bound.
Abstract: Using magnetic resonance (MR) tagging, it is possible to track tissue motion by accurate detection of tag line positions. The authors show that with a least squares estimation algorithm, it is possible to define the position of the tag lines with a precision on the order of a tenth of a pixel. They calculate the Cramer-Rao bound for the tag position estimation error as a function of the tag thickness, the shape of the tag profile and line spread function of the MR imaging system. The tag thickness that minimizes tag position estimation error is between 0.8 to 1.5 pixels depending on the shape of the tag and the line spread function. In addition, tag position estimation error is inversely proportional to contrast-to-noise ratio between the tag and the background tissue. The theoretical results obtained in this study were verified by experiments performed on a whole-body 1.5 T MR imager and Monte-Carlo simulation studies. >
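
The least-squares localization idea can be sketched for a single 1D intensity profile taken across a tag line: fit a parametric dark-tag model and take the fitted center as the subpixel tag position. A Gaussian dip is assumed here for illustration; the paper analyzes the actual tag profile together with the imaging system's line spread function.

```python
import numpy as np
from scipy.optimize import curve_fit

def tag_profile(x, center, depth, width, background):
    """Assumed model: a Gaussian-shaped dark tag on a brighter tissue background."""
    return background - depth * np.exp(-(x - center) ** 2 / (2.0 * width ** 2))

def locate_tag(samples):
    x = np.arange(len(samples), dtype=float)
    p0 = [float(np.argmin(samples)),                      # initial center at the darkest sample
          float(samples.max() - samples.min()), 1.0, float(samples.max())]
    popt, _ = curve_fit(tag_profile, x, samples.astype(float), p0=p0)
    return popt[0]                                        # estimated tag center, in pixels
```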