
Showing papers on "Image quality published in 2003"


Proceedings ArticleDOI
09 Nov 2003
TL;DR: This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions, and develops an image synthesis method to calibrate the parameters that define the relative importance of different scales.
Abstract: The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multiscale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.
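A minimal sketch of the multiscale computation described above, assuming a uniform 8×8 local window in place of the Gaussian window used in practice, simple spline downsampling, and the commonly quoted per-scale weights; intensities are assumed to lie in [0, 255].

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def ssim_terms(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2, win=8):
    # Local means, variances and covariance over a uniform window.
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my
    luminance = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    cs = (2 * cxy + C2) / (vx + vy + C2)          # contrast-structure term
    return luminance.mean(), cs.mean()

def ms_ssim(x, y, weights=(0.0448, 0.2856, 0.3001, 0.2363, 0.1333)):
    x, y = x.astype(float), y.astype(float)
    score = 1.0
    for j, w in enumerate(weights):
        l, cs = ssim_terms(x, y)
        score *= max(cs, 1e-6) ** w               # contrast/structure at every scale
        if j == len(weights) - 1:
            score *= max(l, 1e-6) ** w            # luminance term only at the coarsest scale
        else:
            x, y = zoom(x, 0.5), zoom(y, 0.5)     # dyadic downsampling (the paper low-pass filters first)
    return score

# score = ms_ssim(reference, distorted)  # 2-D grayscale arrays, same shape, at least ~128 px per side
```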

4,333 citations


Journal ArticleDOI
TL;DR: This work presents an improvement on the spin‐echo (SE) diffusion sequence that displays less distortion and consequently improves image quality, and allows more flexible diffusion gradient timing.
Abstract: Image distortion due to field gradient eddy currents can create image artifacts in diffusion-weighted MR images. These images, acquired by measuring the attenuation of NMR signal due to directionally dependent diffusion, have recently been shown to be useful in the diagnosis and assessment of acute stroke and in mapping of tissue structure. This work presents an improvement on the spin-echo (SE) diffusion sequence that displays less distortion and consequently improves image quality. Adding a second refocusing pulse provides better image quality with less distortion at no cost in scanning efficiency or effectiveness, and allows more flexible diffusion gradient timing. By adjusting the timing of the diffusion gradients, eddy currents with a single exponential decay constant can be nulled, and eddy currents with similar decay constants can be greatly reduced. This new sequence is demonstrated in phantom measurements and in diffusion anisotropy images of normal human brain.
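A hedged numerical sketch of the nulling idea: each rectangular gradient lobe drives an eddy current through its rising and falling edges, and with a single exponential decay constant the residual field at the readout can be driven to zero by how the total diffusion-gradient time is split among the lobes. The four-lobe layout, polarities, and numbers below are illustrative assumptions, not the published sequence timing.

```python
import numpy as np

def residual_eddy(lobes, t_read, tau):
    # Eddy field at t_read from rectangular lobes (amplitude, t_start, t_end),
    # assuming a single exponential decay constant tau (same time units throughout).
    return sum(a * (np.exp(-(t_read - t2) / tau) - np.exp(-(t_read - t1) / tau))
               for a, t1, t2 in lobes)

tau = 70.0     # eddy-current decay constant (ms), illustrative
gap = 8.0      # time reserved for each refocusing pulse (ms), illustrative
total = 60.0   # total diffusion-gradient duration to distribute (ms), illustrative
t_read = total + 2 * gap + 5.0

best_d1, best_e = None, np.inf
for d1 in np.linspace(1.0, total / 2.0 - 1.0, 500):
    d2 = total / 2.0 - d1            # lobes around the first refocusing pulse
    d3, d4 = d2, d1                  # mirrored lobes around the second pulse (assumed layout)
    t, lobes = 0.0, []
    for amp, dur, pause in [(+1, d1, gap), (-1, d2, 0.0), (+1, d3, gap), (-1, d4, 0.0)]:
        lobes.append((amp, t, t + dur))
        t += dur + pause
    e = abs(residual_eddy(lobes, t_read, tau))
    if e < best_e:
        best_d1, best_e = d1, e
print(f"best split d1 = {best_d1:.2f} ms, residual eddy = {best_e:.3e} (arbitrary units)")
```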

1,283 citations


Proceedings Article
01 Dec 2003
TL;DR: This paper proposes a multi-scale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions, and develops an image synthesis method to calibrate the parameters that define the relative importance of different scales.
Abstract: The structural similarity image quality paradigm is based on the assumption that the human visual system is highly adapted for extracting structural information from the scene, and therefore a measure of structural similarity can provide a good approximation to perceived image quality. This paper proposes a multi-scale structural similarity method, which supplies more flexibility than previous single-scale methods in incorporating the variations of viewing conditions. We develop an image synthesis method to calibrate the parameters that define the relative importance of different scales. Experimental comparisons demonstrate the effectiveness of the proposed method.

1,205 citations


Proceedings ArticleDOI
24 Nov 2003
TL;DR: Three variants of a new quality metric for image fusion based on an image quality index recently introduced by Wang and Bovik are presented, which are compliant with subjective evaluations and can therefore be used to compare different image fusion methods or to find the best parameters for a given fusion algorithm.
Abstract: We present three variants of a new quality metric for image fusion. The interest of our metrics, which are based on an image quality index recently introduced by Wang and Bovik in [Z. Wang et al., March 2002], lies in the fact that they do not require a ground-truth or reference image. We perform several simulations which show that our metrics are compliant with subjective evaluations and can therefore be used to compare different image fusion methods or to find the best parameters for a given fusion algorithm.
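A hedged sketch in the spirit of such a metric: the Wang-Bovik universal image quality index Q is computed locally between each source image and the fused image, and the two local scores are blended with weights derived from local variance, used here as a simple salience measure. The window size and the salience definition are assumptions for illustration, not the paper's exact variants.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_q(x, y, win=8, eps=1e-12):
    # Wang-Bovik index Q = 4*cov*mx*my / ((vx+vy)*(mx^2+my^2)), computed per window.
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my
    return (4 * cxy * mx * my + eps) / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def fusion_quality(a, b, fused, win=8):
    a, b, fused = (im.astype(float) for im in (a, b, fused))
    qa, qb = local_q(a, fused, win), local_q(b, fused, win)
    sa = uniform_filter(a * a, win) - uniform_filter(a, win) ** 2   # local variance of a (salience)
    sb = uniform_filter(b * b, win) - uniform_filter(b, win) ** 2
    lam = sa / (sa + sb + 1e-12)                                    # weight toward the more salient source
    return float(np.mean(lam * qa + (1 - lam) * qb))
```

Note that no reference image is needed: only the two source images and the fused result enter the score.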

782 citations


Journal ArticleDOI
TL;DR: Simulation results with the chosen feature set and well-known watermarking and steganographic techniques indicate that the proposed approach is able with reasonable accuracy to distinguish between cover and stego images.
Abstract: We present techniques for steganalysis of images that have been potentially subjected to steganographic algorithms, both within the passive warden and active warden frameworks. Our hypothesis is that steganographic schemes leave statistical evidence that can be exploited for detection with the aid of image quality features and multivariate regression analysis. To this effect image quality metrics have been identified based on the analysis of variance (ANOVA) technique as feature sets to distinguish between cover-images and stego-images. The classifier between cover and stego-images is built using multivariate regression on the selected quality metrics and is trained based on an estimate of the original image. Simulation results with the chosen feature set and well-known watermarking and steganographic techniques indicate that our approach is able with reasonable accuracy to distinguish between cover and stego images.
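A hedged sketch of the pipeline described above: image-quality-metric features are computed between each image and a denoised (low-pass) estimate of its cover, and a multivariate linear regression is trained to separate cover from stego images. The three features below are illustrative stand-ins, not the ANOVA-selected set of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import LinearRegression

def iqm_features(img, sigma=1.0):
    img = img.astype(float)
    est = gaussian_filter(img, sigma)            # crude estimate of the un-embedded image
    diff = img - est
    mse = np.mean(diff ** 2)                     # mean squared error
    mad = np.mean(np.abs(diff))                  # mean absolute difference
    corr = np.corrcoef(img.ravel(), est.ravel())[0, 1]
    return np.array([mse, mad, corr])

def train_detector(cover_imgs, stego_imgs):
    X = np.array([iqm_features(im) for im in cover_imgs + stego_imgs])
    y = np.array([0.0] * len(cover_imgs) + [1.0] * len(stego_imgs))  # regression targets
    return LinearRegression().fit(X, y)

def is_stego(model, img, threshold=0.5):
    return model.predict(iqm_features(img)[None, :])[0] > threshold
```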

610 citations


Proceedings ArticleDOI
17 Jun 2003
TL;DR: This paper quantifies the colourfulness of natural images in order to perceptually qualify the effect that processing or coding has on colour; a metric fitted to psychophysical results achieves a correlation of over 90% with the experimental data.
Abstract: We want to integrate colourfulness in an image quality evaluation framework. This quality framework is meant to evaluate the perceptual impact of a compression algorithm or an error-prone communication channel on the quality of an image. The image might go through various enhancement or compression algorithms, resulting in a different -- but not necessarily worse -- image. In other words, we measure quality but not fidelity to the original picture. While modern colour appearance models are able to predict the perception of colourfulness of simple patches on uniform backgrounds, there is no agreement on how to measure the overall colourfulness of a picture of a natural scene. We try to quantify the colourfulness of natural images to perceptually qualify the effect that processing or coding has on colour. We set up a psychophysical category scaling experiment and asked people to rate images using 7 categories of colourfulness. We then fit a metric to the results and obtain a correlation of over 90% with the experimental data. The metric is meant to be used in real time on video streams. Issues related to hue are not addressed in this paper.
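A short sketch of the colourfulness metric associated with this line of work, assuming the commonly quoted form: opponent components rg = R - G and yb = 0.5(R + G) - B, combined as the spread plus 0.3 times the mean magnitude.

```python
import numpy as np

def colourfulness(rgb):
    """rgb: H x W x 3 array, channels in a consistent range (e.g. 0-255)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    sigma = np.sqrt(rg.std() ** 2 + yb.std() ** 2)   # joint spread of the opponent components
    mu = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)    # distance of the mean from neutral
    return sigma + 0.3 * mu
```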

511 citations


Proceedings ArticleDOI
18 Jun 2003
TL;DR: This work is inspired by recent progress on natural image statistics that the priors of image primitives can be well represented by examples and proposes a Bayesian approach to image hallucination, where primal sketch priors are constructed and used to enhance the quality of the hallucinated high resolution image.
Abstract: We propose a Bayesian approach to image hallucination. Given a generic low resolution image, we hallucinate a high resolution image using a set of training images. Our work is inspired by recent progress on natural image statistics that the priors of image primitives can be well represented by examples. Specifically, primal sketch priors (e.g., edges, ridges and corners) are constructed and used to enhance the quality of the hallucinated high resolution image. Moreover, a contour smoothness constraint enforces consistency of primitives in the hallucinated image by a Markov-chain based inference algorithm. A reconstruction constraint is also applied to further improve the quality of the hallucinated image. Experiments demonstrate that our approach can hallucinate high quality super-resolution images.
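The primal-sketch prior and contour-smoothness inference are beyond a short sketch; below is only the generic reconstruction constraint mentioned at the end of the abstract, under an assumed Gaussian-blur-plus-decimation imaging model: the high-resolution estimate is repeatedly corrected by back-projecting the residual against the observed low-resolution image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(hr, factor=4, sigma=1.5):
    # Assumed imaging model: blur, then decimate.
    return gaussian_filter(hr, sigma)[::factor, ::factor]

def enforce_reconstruction(hr_init, lr_obs, factor=4, sigma=1.5, iters=20, step=1.0):
    """hr_init dimensions are assumed divisible by factor; step size is illustrative."""
    hr = hr_init.astype(float).copy()
    for _ in range(iters):
        err = lr_obs - degrade(hr, factor, sigma)                        # residual in the low-res domain
        hr += step * gaussian_filter(zoom(err, factor, order=1), sigma)  # back-project the residual
    return hr
```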

444 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed image processing algorithms for measuring two-dimensional distributions of linear birefringence using a pair of variable retarders and obtained the lowest noise level of 0.036 nm.
Abstract: We propose image processing algorithms for measuring two-dimensional distributions of linear birefringence using a pair of variable retarders. Several algorithms that use between two and five recorded frames allow us to optimize measurements for speed, sensitivity, and accuracy. We show images of asters, which consist of radial arrays of microtubule polymers recorded with a polarized light microscope equipped with a universal compensator. Our experimental results confirm our theoretical expectations. The lowest noise level of 0.036 nm was obtained when we used the five-frame technique and four-frame algorithm without extinction setting. The two-frame technique allows us to increase the speed of measurement with acceptable image quality.

398 citations


Journal ArticleDOI
TL;DR: A method is described for using a limited number of low-dose radiographs to reconstruct the three-dimensional distribution of x-ray attenuation in the breast, using x-ray cone-beam imaging, an electronic digital detector, and constrained nonlinear iterative computational techniques.
Abstract: A method is described for using a limited number (typically 10–50) of low-dose radiographs to reconstruct the three-dimensional (3D) distribution of x-ray attenuation in the breast. The method uses x-ray cone-beam imaging, an electronic digital detector, and constrained nonlinear iterative computational techniques. Images are reconstructed with high resolution in two dimensions and lower resolution in the third dimension. The 3D distribution of attenuation that is projected into one image in conventional mammography can be separated into many layers (typically 30–80 1-mm-thick layers, depending on breast thickness), increasing the conspicuity of features that are often obscured by overlapping structure in a single-projection view. Schemes that record breast images at nonuniform angular increments, nonuniform image exposure, and nonuniform detector resolution are investigated in order to reduce the total x-ray exposure necessary to obtain diagnostically useful 3D reconstructions, and to improve the quality of the reconstructed images for a given exposure. The total patient radiation dose can be comparable to that used for a standard two-view mammogram. The method is illustrated with images from mastectomy specimens, a phantom, and human volunteers. The results show how image quality is affected by various data-collection protocols.

392 citations


Journal ArticleDOI
TL;DR: A new adaptive imaging technique uses the generalized coherence factor (GCF) to reduce focusing errors resulting from sound-velocity inhomogeneities; the image quality it achieves rivals that of the correlation-based technique and the parallel adaptive receive compensation algorithm.
Abstract: Sound-velocity inhomogeneities degrade both spatial and contrast resolutions. This paper proposes a new adaptive imaging technique that uses the generalized coherence factor (GCF) to reduce the focusing errors resulting from the sound-velocity inhomogeneities. The GCF is derived from the spatial spectrum of the received aperture data after proper receive delays have been applied. It is defined as the ratio of the spectral energy within a prespecified low-frequency range to the total energy. It is demonstrated that the low-frequency component of the spectrum corresponds to the coherent portion of the received data, and that the high-frequency component corresponds to the incoherent portion. Hence, the GCF reduces to the coherence factor defined in the literature if the prespecified low-frequency range is restricted to DC only. In addition, the GCF is also an index of the focusing quality and can be used as a weighting factor for the reconstructed image. The efficacy of the GCF technique is demonstrated for focusing errors resulting from the sound-velocity inhomogeneities. Simulations and real ultrasound data are used to evaluate the efficacy of the proposed GCF technique. The characteristics of the GCF, including the effects of the signal-to-noise ratio and the number of channels, are also discussed. The GCF technique also is compared with the correlation-based technique and the parallel adaptive receive compensation algorithm; the improvement in image quality obtained with the proposed technique rivals that of the latter technique. In the presence of a displaced phase screen, this proposed technique also outperforms the correlation-based technique. Computational complexity and implementation issues also are addressed.
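A hedged sketch of the GCF for one image sample: the receive-delayed aperture data (one complex value per channel) is Fourier transformed across the aperture, and the GCF is the fraction of spectral energy within a prespecified low-frequency range around DC, then used as a per-pixel weight on the delay-and-sum output. The cutoff M0 = 1 and the array sizes are illustrative; with M0 = 0 the expression reduces to the conventional coherence factor, as the abstract notes.

```python
import numpy as np

def gcf_weight(channel_data, M0=1):
    """channel_data: 1-D complex array of receive-delayed per-channel samples for one pixel."""
    spectrum = np.fft.fft(channel_data)
    energy = np.abs(spectrum) ** 2
    low = energy[:M0 + 1].sum() + energy[-M0:].sum() if M0 > 0 else energy[0]
    return low / (energy.sum() + 1e-20)

def gcf_beamform(delayed, M0=1):
    """delayed: (num_pixels, num_channels) receive-delayed data; returns GCF-weighted pixel values."""
    coherent = delayed.sum(axis=1)                       # conventional delay-and-sum
    weights = np.array([gcf_weight(row, M0) for row in delayed])
    return weights * coherent
```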

381 citations


Journal ArticleDOI
Zhigang Fan, R. L. de Queiroz
TL;DR: A fast and efficient method is provided to determine whether an image has been previously JPEG compressed, and a method for the maximum likelihood estimation of JPEG quantization steps is developed.
Abstract: Sometimes image processing units inherit images in raster bitmap format only, so that processing is to be carried out without knowledge of past operations that may compromise image quality (e.g., compression). To carry out further processing, it is useful to know not only whether the image has been previously JPEG compressed, but also what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or re-compress the image as JPEG. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate the compression parameters. Specifically, we develop a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust: only sporadically is an estimated quantizer step size off, and when it is, it is off by only one value.
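A hedged sketch of the underlying idea for one DCT frequency: blockwise 8×8 DCT coefficients of a previously compressed, then decompressed image cluster near multiples of the original quantization step, so candidate steps can be scored by how well they explain the observed coefficients. This is a simplified periodicity test, not the paper's maximum-likelihood estimator; the tolerance is an assumption.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_coeffs(gray, u, v):
    """Collect the (u, v) DCT coefficient from every 8x8 block of a grayscale image."""
    h, w = (s - s % 8 for s in gray.shape)
    g = gray[:h, :w].astype(float) - 128.0
    blocks = g.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    coeffs = dct(dct(blocks, axis=2, norm='ortho'), axis=3, norm='ortho')
    return coeffs[:, :, u, v].ravel()

def estimate_q_step(coeffs, q_max=64, tol=0.3):
    """Return the largest candidate step that explains the coefficients, or 1 if none does."""
    best_q = 1
    for q in range(2, q_max + 1):
        resid = np.abs(coeffs - q * np.round(coeffs / q)).mean()
        # Unquantized coefficients give a mean residual of about q/4; previously
        # quantized ones give roughly the decoder's rounding error instead.
        if resid < tol * q / 4.0:
            best_q = q
    return best_q

# q_hat = estimate_q_step(block_dct_coeffs(gray_image, u=2, v=1))
# q_hat > 1 suggests prior JPEG compression at that frequency and approximates its step.
```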

Journal ArticleDOI
TL;DR: This paper uses a dynamic programming strategy to obtain the optimal solution; experimental results show that the method consumes less computation time while still reaching the optimal solution.

Journal ArticleDOI
TL;DR: This paper proposes an effective color filter array (CFA) interpolation method for digital still cameras (DSCs) using a simple image model that correlates the R, G, B channels, and shows that the frequency response of the proposed method is better than that of conventional methods.
Abstract: We propose an effective color filter array (CFA) interpolation method for digital still cameras (DSCs) using a simple image model that correlates the R, G, B channels. In this model, we define the constants K_R as green minus red and K_B as green minus blue. For real-world images, the contrasts of K_R and K_B are quite flat over a small region, and this property is suitable for interpolation. The main contribution of this paper is a low-complexity interpolation method that improves image quality. We show that the frequency response of the proposed method is better than that of conventional methods. Simulation results also verify that the proposed method obtains superior image quality on typical images. In the luminance channel, the proposed method outperforms bilinear interpolation by 6.34 dB in peak SNR, and the chrominance channels show a 7.69 dB peak signal-to-noise ratio improvement on average. Furthermore, the complexity of the proposed method is comparable to conventional bilinear interpolation; it requires only add and shift operations to implement.
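A hedged sketch of colour-difference CFA interpolation on an RGGB Bayer mosaic: green is interpolated first, then the differences K_R = G - R and K_B = G - B are interpolated (they are assumed smooth, as in the image model above) and subtracted back out. Bilinear kernels and normalized convolution are used throughout; the paper's exact filters and edge handling are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(cfa):
    cfa = cfa.astype(float)
    h, w = cfa.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # bilinear kernel for the green lattice
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]])        # kernel for the sparse R/B lattices

    G = convolve(cfa * g_mask, k_g, mode='mirror')            # green everywhere

    def interp_diff(mask):
        # Colour differences are known only at R (resp. B) sites; spread them with
        # normalized convolution so the sparse sampling pattern is handled correctly.
        diff = (G - cfa) * mask
        weight = convolve(mask, k_rb, mode='mirror')
        return convolve(diff, k_rb, mode='mirror') / np.maximum(weight, 1e-12)

    K_R, K_B = interp_diff(r_mask), interp_diff(b_mask)
    R, B = G - K_R, G - K_B
    return np.stack([R, G, B], axis=-1)
```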

Journal ArticleDOI
TL;DR: This work proposes to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space, and shows that face-space super- Resolution is more robust to registration errors and noise than pixel-domain super- resolution because of the addition of model-based constraints.
Abstract: Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.

Journal ArticleDOI
05 Oct 2003
TL;DR: This paper compares chirp and Golay code performance with respect to image quality and system requirements, then shows clinical images that illustrate the current applications of coded excitation in B-mode, harmonic, and flow imaging.
Abstract: Resolution and penetration are primary criteria for clinical image quality. Conventionally, high bandwidth for resolution was achieved with a short pulse, which results in a tradeoff between resolution and penetration. Coded excitation extends the bounds of this tradeoff by increasing signal-to-noise ratio (SNR) through appropriate coding on transmit and decoding on receive. Although used for about 50 years in radar, coded excitation was successfully introduced into commercial ultrasound scanners only within the last 5 years. This delay is at least partly due to practical implementation issues particular to diagnostic ultrasound, which are the focus of this paper. After reviewing the basics of biphase and chirp coding, we present simulation results to quantify tradeoffs between penetration and resolution under frequency-dependent attenuation, dynamic focusing, and nonlinear propagation. Next, we compare chirp and Golay code performance with respect to image quality and system requirements, then we show clinical images that illustrate the current applications of coded excitation in B-mode, harmonic, and flow imaging.
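A hedged toy illustration of the Golay (biphase) coding idea referenced above: a complementary pair of codes is transmitted on successive firings, each echo is compressed with the matched filter for its own code, and summing the two compressed outputs cancels the range sidelobes exactly for a static target and linear system. The code length and echo model are illustrative; transducer response, attenuation, and motion are ignored.

```python
import numpy as np

def golay_pair(n_bits):
    # Recursive construction of a complementary pair of length 2**n_bits.
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_bits):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)                               # 32-chip complementary pair
target = np.zeros(200)
target[80], target[120] = 1.0, 0.3                 # two point reflectors

echo_a = np.convolve(target, a)                    # echoes from the two firings
echo_b = np.convolve(target, b)
comp_a = np.correlate(echo_a, a, mode='full')      # matched filtering (pulse compression)
comp_b = np.correlate(echo_b, b, mode='full')
compressed = comp_a + comp_b                       # complementary sidelobes cancel;
                                                   # only scaled peaks at the reflectors remain
```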

Journal ArticleDOI
TL;DR: Computationally efficient, closed-form expressions for the gradient make possible efficient search algorithms to maximize sharpness.
Abstract: The technique of maximizing sharpness metrics has been used to estimate and compensate for aberrations with adaptive optics, to correct phase errors in synthetic-aperture radar, and to restore images. The largest class of sharpness metrics is the sum over a nonlinear point transformation of the image intensity. How the second derivative of the point nonlinearity varies with image intensity determines the effects of various metrics on the imagery. Some metrics emphasize making shadows darker, and others emphasize making bright points brighter. One can determine the image content needed to pick the best metric by computing the statistics of the image autocorrelation or of the Fourier magnitude, either of which is independent of the phase error. Computationally efficient, closed-form expressions for the gradient make possible efficient search algorithms to maximize sharpness.
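A hedged 1-D illustration of sharpness maximization for phase-error correction: the image is formed from corrupted phase-history data, the metric is the sum of a point nonlinearity of intensity (here g(I) = I^2), and a closed-form gradient with respect to the per-sample phase drives an efficient search. The scene, phase error, and optimizer settings are illustrative; the gradient expression is derived for this particular g and FFT convention, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

n = 128
scene = np.zeros(n)
scene[[20, 64, 90]] = [1.0, 0.6, 0.8]                        # sparse point targets
data = np.fft.fft(scene) * np.exp(1j * 2.0 * np.sin(6 * np.pi * np.arange(n) / n))  # corrupted data

def metric_and_grad(phi):
    c = data * np.exp(-1j * phi)
    f = np.fft.ifft(c)
    I = np.abs(f) ** 2
    S = np.sum(I ** 2)                                       # sharpness with g(I) = I^2
    # dS/dphi_k = 2 Im{ c_k * IFFT[ g'(I) * conj(f) ]_k },   g'(I) = 2 I
    grad = 2.0 * np.imag(c * np.fft.ifft(2.0 * I * np.conj(f)))
    return -S, -grad                                         # minimize the negative metric

res = minimize(metric_and_grad, np.zeros(n), jac=True, method='L-BFGS-B')
restored = np.abs(np.fft.ifft(data * np.exp(-1j * res.x))) ** 2
```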

Patent
13 Feb 2003
TL;DR: In this article, the image processing method and apparatus acquire input image data from the image recorded optically with a taking lens, acquire the information about the lens used to record the image, and perform image processing schemes on the input data using the acquired lens information.
Abstract: The image processing method and apparatus acquire input image data from an image recorded optically with a taking lens, acquire information about the lens used to record the image, and perform image processing on the input image data using the acquired lens information. The type of lens used is identified from the acquired lens information, and the intensity of sharpness enhancement of the corresponding image is altered in accordance with the identified lens type. The characteristics of the taking lens may also be acquired from the lens information; using the obtained lens characteristics together with position information for the recorded image, the input image data is subjected to aberration correction to compensate for the deterioration in image quality due to the lens characteristics. Information about the focal length of the taking lens at the time of image recording may additionally be used, image processing may be performed in two crossed directions of the recorded image, or parameters for correcting aberrations in the imaging plane of the taking lens may be scaled so that they relate to the output image data on a pixel basis. High-quality prints reproducing high-quality images can thus be obtained from original images recorded with compact cameras, digital cameras, and other conventional inexpensive cameras using low-performance lenses.

Journal ArticleDOI
TL;DR: A high-speed camera, dubbed Brandaris 128, was constructed that combines a customized rotating-mirror camera frame with charge-coupled device (CCD) image detectors and is operated almost entirely under computer control.
Abstract: A high-speed camera was constructed that combines a customized rotating mirror camera frame with charge coupled device (CCD) image detectors and is operated almost entirely under computer control. High-sensitivity CCDs are used so that image intensifiers, which would degrade image quality, are not necessary. Customized electronics and instruments were used to improve flexibility and precisely control the image acquisition process. A full sequence of 128 consecutive image frames with 500×292 pixels each can be acquired at a maximum frame rate of 25 million frames/s. Full sequences can be repeated every 20 ms, and six full sequences can be stored in the in-camera memory buffer. A high-speed communication link to a computer allows each full sequence of about 20 Mbytes to be stored on a hard disk in less than 1 s. The camera has an equivalent ISO (International Standards Organization) sensitivity of 2500. Resolution was measured to be 36 lp/mm on the detector plane of the camera, while under a microscope a bar pattern with 400 nm line-pair spacing could be resolved. Some high-speed events recorded with this camera, dubbed Brandaris 128, are presented.

Journal ArticleDOI
TL;DR: The MAR algorithm led to a robust reduction of metal artifacts in computed tomography and may improve image quality in patients with metallic implants.
Abstract: Rationale and Objectives: To evaluate a newly developed algorithm for metal artifact reduction (MAR) in computed tomography (CT). Methods: A projection interpolation algorithm for MAR with threshold-based metal segmentation was developed. First, the algorithm was tested with a simulated hip phantom; on demand, the presence of metallic inserts was simulated, representing total hip endoprostheses. Second, CT data of 20 patients with total hip endoprostheses were reconstructed with and without application of the MAR algorithm. Image quality was independently assessed by 2 experienced radiologists using a qualitative score. The results of the in vitro study were evaluated with Student's t test; results of the in vivo study were analyzed using a repeated-measures analysis of variance. Results: With the MAR algorithm applied, the phantom study showed no significant difference between images with and without simulated metal contributions. The patient study revealed improved image quality using the MAR algorithm. Results were statistically significant for fat (P = 0.0097), vessels (P = 0.0091), and bone (P = 0.0005). Improvement of the image quality for muscle was not statistically significant (P = 0.0287). Conclusions: A new algorithm for metal artifact reduction was successfully introduced into clinical routine. The algorithm led to a robust reduction of metal artifacts and may improve image quality in patients with metallic implants.
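A hedged sketch of projection-interpolation MAR in a simplified parallel-beam setting: metal is segmented from an initial reconstruction by thresholding, its trace in the sinogram is found by forward projection, the affected detector bins are bridged by linear interpolation within each view, and the image is reconstructed again. The thresholds and geometry are illustrative, not the clinical implementation.

```python
import numpy as np
from skimage.transform import radon, iradon

def mar_reconstruct(sinogram, theta, metal_threshold=2.0):
    """sinogram: (detector_bins, n_views) parallel-beam data; theta: view angles in degrees."""
    initial = iradon(sinogram, theta=theta)                   # uncorrected FBP image
    metal_mask = initial > metal_threshold                    # threshold-based metal segmentation
    metal_trace = radon(metal_mask.astype(float), theta=theta) > 0.5
    corrected = sinogram.copy()
    for j in range(sinogram.shape[1]):                        # interpolate within each view
        bad = metal_trace[:, j]
        if bad.any() and not bad.all():
            idx = np.arange(sinogram.shape[0])
            corrected[bad, j] = np.interp(idx[bad], idx[~bad], sinogram[~bad, j])
    image = iradon(corrected, theta=theta)
    image[metal_mask] = initial[metal_mask]                   # optionally paste the metal back in
    return image
```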

Journal ArticleDOI
TL;DR: A foveation scalable video coding (FSVC) algorithm which supplies good quality-compression performance as well as effective rate scalability, and is adaptable to different applications, such as knowledge-based video coding and video communications over time-varying, multiuser and interactive networks.
Abstract: Image and video coding is an optimization problem. A successful image and video coding algorithm delivers a good tradeoff between visual quality and other coding performance measures, such as compression, complexity, scalability, robustness, and security. In this paper, we follow two recent trends in image and video coding research. One is to incorporate human visual system (HVS) models to improve the current state-of-the-art of image and video coding algorithms by better exploiting the properties of the intended receiver. The other is to design rate scalable image and video codecs, which allow the extraction of coded visual information at continuously varying bit rates from a single compressed bitstream. Specifically, we propose a foveation scalable video coding (FSVC) algorithm which supplies good quality-compression performance as well as effective rate scalability. The key idea is to organize the encoded bitstream to provide the best decoded video at an arbitrary bit rate in terms of foveated visual quality measurement. A foveation-based HVS model plays an important role in the algorithm. The algorithm is adaptable to different applications, such as knowledge-based video coding and video communications over time-varying, multiuser and interactive networks.

Journal ArticleDOI
TL;DR: The method demonstrates improved image quality in all cases when compared to the conventional FBP and EM methods presently used for clinical data (which do not include resolution modeling).
Abstract: Methodology for PET system modeling using image-space techniques in the expectation maximization (EM) algorithm is presented. The approach, applicable to both list-mode data and projection data, is of particular significance to EM algorithm implementations which otherwise only use basic system models (such as those which calculate the system matrix elements on the fly). A basic version of the proposed technique can be implemented using image-space convolution, in order to include resolution effects into the system matrix, so that the EM algorithm gradually recovers the modeled resolution with each update. The improved system modeling (achieved by inclusion of two convolutions per iteration) results in both enhanced resolution and lower noise, and there is often no need for regularization, other than to limit the number of iterations. Tests have been performed with simulated list-mode data and also with measured projection data from a GE Advance PET scanner, for both [18F]-FDG and [124I]-NaI. The method demonstrates improved image quality in all cases when compared to the conventional FBP and EM methods presently used for clinical data (which do not include resolution modeling). The benefits of this approach for 124I (which has a low positron yield and a large positron range, usually resulting in noisier and poorer resolution images) are particularly noticeable.
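A hedged sketch of MLEM with image-space resolution modelling: the system model is taken as geometric projection composed with an image-space Gaussian convolution H, so each update applies H before forward projection and again after backprojection (H is its own transpose for a symmetric kernel). The scikit-image parallel-beam projector is only an approximate adjoint pair and a stand-in for a real PET system model; the kernel width and iteration count are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import radon, iradon

def mlem_resolution_model(sinogram, theta, n_iter=30, psf_sigma=1.5):
    """sinogram: (detector_bins, n_views) from radon() of an n x n image with circle=True."""
    n = sinogram.shape[0]
    img = np.ones((n, n))                                             # uniform initial estimate
    H = lambda x: gaussian_filter(x, psf_sigma)                       # image-space resolution kernel
    fwd = lambda x: radon(H(x), theta=theta, circle=True)             # A = P . H
    back = lambda s: H(iradon(s, theta=theta, filter_name=None,       # A^T ~ H . P^T (unfiltered BP)
                              circle=True, output_size=n))
    sens = back(np.ones_like(sinogram)) + 1e-12                       # sensitivity image A^T 1
    for _ in range(n_iter):
        expected = fwd(img) + 1e-12
        img *= back(sinogram / expected) / sens                       # multiplicative EM update
    return img
```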

Journal ArticleDOI
TL;DR: Color-imaging methods with an integrated compound imaging system called TOMBO (Thin observation module by bound optics) are presented and two configurations for color imaging are described.
Abstract: Color-imaging methods with an integrated compound imaging system called TOMBO (thin observation module by bound optics) are presented. TOMBO is a compact optoelectronic imaging system for image capture based on compound-eye imaging and digital post-processing. First, a general description of the TOMBO system is given, and then two configurations for color imaging are described. An experimental comparison of these configurations is made using an experimental TOMBO system. The characteristics and performance of the proposed methods are briefly discussed.

Journal ArticleDOI
TL;DR: A new approach to dealing with the noise inherent in microarray image processing is presented: the images are denoised with the stationary wavelet transform (SWT), whose shift invariance is particularly useful for image denoising, before further processing.
Abstract: Microarray imaging is considered an important tool for large-scale analysis of gene expression. The accuracy of the measured gene expression depends on the experiment itself and on further image processing. It is well known that noise introduced during the experiment greatly affects the accuracy of the gene expression measurements, and eliminating the effect of this noise is a challenging problem in microarray analysis. Traditionally, statistical methods are used to estimate the noise while the microarray images are being processed. In this paper, we present a new approach to dealing with the noise inherent in the microarray image processing procedure: the images are denoised using the stationary wavelet transform (SWT) before further image processing. The shift-invariant characteristic of the SWT is particularly useful in image denoising. Testing on sample microarray images shows enhanced image quality. The results also show superior performance compared with the conventional discrete wavelet transform and the widely used adaptive Wiener filter.
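A hedged sketch of the SWT denoising step: the image is decomposed with an undecimated (stationary) 2-D wavelet transform, the detail coefficients are soft-thresholded using a noise estimate from a single-level diagonal subband, and the image is reconstructed. The wavelet choice, decomposition level, and universal threshold are assumptions; image sides must be divisible by 2**level for the SWT.

```python
import numpy as np
import pywt

def swt_denoise(img, wavelet='db4', level=2):
    img = img.astype(float)
    coeffs = pywt.swt2(img, wavelet, level=level)
    # Noise sigma from the diagonal detail of a single-level DWT (MAD estimator).
    _, (_, _, cD) = pywt.dwt2(img, wavelet)
    sigma = np.median(np.abs(cD)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))              # universal threshold
    denoised = [(cA, tuple(pywt.threshold(d, thr, mode='soft') for d in details))
                for cA, details in coeffs]
    return pywt.iswt2(denoised, wavelet)
```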

Journal ArticleDOI
TL;DR: Based on log-polar mapping (LPM) and phase correlation, the paper presents a novel digital image watermarking scheme that is invariant to rotation, scaling, and translation (RST).
Abstract: Based on log-polar mapping (LPM) and phase correlation, the paper presents a novel digital image watermarking scheme that is invariant to rotation, scaling, and translation (RST). We embed a watermark in the LPMs of the Fourier magnitude spectrum of an original image, and use the phase correlation between the LPM of the original image and the LPM of the watermarked image to calculate the displacement of watermark positions in the LPM domain. The scheme preserves the image quality by avoiding computing the inverse log-polar mapping (ILPM), and produces smaller correlation coefficients for unwatermarked images by using phase correlation to avoid exhaustive search. The evaluations demonstrate that the scheme is invariant to rotation and translation, invariant to scaling when the scale is in a reasonable range, and very robust to JPEG compression.
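A hedged sketch of the geometric-synchronization step described above: the log-polar map (LPM) of the Fourier magnitude turns rotation and scaling into translations, and phase correlation between the LPMs of the original and the test image recovers that displacement without an exhaustive search. Watermark embedding and decoding are not shown; the OpenCV parameters are illustrative.

```python
import cv2
import numpy as np

def logpolar_of_spectrum(gray, out_size=(256, 256)):
    f = np.fft.fftshift(np.fft.fft2(gray.astype(np.float32)))
    mag = np.log1p(np.abs(f)).astype(np.float32)             # log magnitude spectrum
    center = (mag.shape[1] / 2.0, mag.shape[0] / 2.0)
    max_radius = min(center)
    return cv2.warpPolar(mag, out_size, center, max_radius,
                         cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR)

def rst_displacement(original_gray, test_gray):
    lpm_o = logpolar_of_spectrum(original_gray).astype(np.float64)
    lpm_t = logpolar_of_spectrum(test_gray).astype(np.float64)
    (dx, dy), response = cv2.phaseCorrelate(lpm_o, lpm_t)
    # With warpPolar's convention, dx relates to log-scale and dy to rotation.
    return dx, dy, response
```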

Journal ArticleDOI
TL;DR: The authors develop an algorithm for scan data completion that extrapolates truncated data of detector (B) using data of detector (A), and show that the smoothness of the transition between image stacks acquired in different cardiac cycles can be efficiently controlled with the proposed approach for ECG-synchronized image reconstruction.
Abstract: The authors present and evaluate concepts for image reconstruction in dual source CT (DSCT). They describe both standard spiral (helical) DSCT image reconstruction and electrocardiogram (ECG)-synchronized image reconstruction. For a compact mechanical design of the DSCT, one detector (A) can cover the full scan field of view, while the other detector (B) has to be restricted to a smaller, central field of view. The authors develop an algorithm for scan data completion, extrapolating truncated data of detector (B) by using data of detector (A). They propose a unified framework for convolution and simultaneous 3D backprojection of both (A) and (B) data, with similar treatment of standard spiral, ECG-gated spiral, and sequential (axial) scan data. In ECG-synchronized image reconstruction, a flexible scan data range per measurement system can be used to trade off temporal resolution for reduced image noise. Both data extrapolation and image reconstruction are evaluated by means of computer simulated data of anthropomorphic phantoms, by phantom measurements and patient studies. The authors show that a consistent filter direction along the spiral tangent on both detectors is essential to reduce cone-beam artifacts, requiring truncation of the extrapolated (B) data after convolution in standard spiral scans. Reconstructions of an anthropomorphic thorax phantom demonstrate good image quality and dose accumulation as theoretically expected for simultaneous 3D backprojection of the filtered (A) data and the truncated filtered (B) data into the same 3D image volume. In ECG-gated spiral modes, spiral slice sensitivity profiles (SSPs) show only minor dependence on the patient's heart rate if the spiral pitch is properly adapted. Measurements with a thin gold plate phantom result in effective slice widths (full width at half maximum of the SSP) of 0.63-0.69 mm for the nominal 0.6 mm slice and 0.82-0.87 mm for the nominal 0.75 mm slice. The visually determined through-plane (z axis) spatial resolution in a bar pattern phantom is 0.33-0.36 mm for the nominal 0.6 mm slice and 0.45 mm for the nominal 0.75 mm slice, again almost independent of the patient's heart rate. The authors verify the theoretically expected temporal resolution of 83 ms at 330 ms gantry rotation time by blur free images of a moving coronary artery phantom with 90 ms rest phase and demonstrate image noise reduction as predicted for increased reconstruction data ranges per measurement system. Finally, they show that the smoothness of the transition between image stacks acquired in different cardiac cycles can be efficiently controlled with the proposed approach for ECG-synchronized image reconstruction.

Proceedings ArticleDOI
18 Jun 2003
TL;DR: The fundamental tradeoff between spatial resolution and temporal resolution is exploited to construct a hybrid camera that can measure its own motion during image integration and show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem.
Abstract: Motion blur due to camera motion can significantly degrade the quality of an image. Since the path of the camera motion can be arbitrary, deblurring of motion blurred images is a hard problem. Previous methods to deal with this problem have included blind restoration of motion blurred images, optical correction using stabilized lenses, and special CMOS sensors that limit the exposure time in the presence of motion. In this paper, we exploit the fundamental tradeoff between spatial resolution and temporal resolution to construct a hybrid camera that can measure its own motion during image integration. The acquired motion information is used to compute a point spread function (PSF) that represents the path of the camera during integration. This PSF is then used to deblur the image. To verify the feasibility of hybrid imaging for motion deblurring, we have implemented a prototype hybrid camera. This prototype system was evaluated in different indoor and outdoor scenes using long exposures and complex camera motion paths. The results show that, with minimal resources, hybrid imaging outperforms previous approaches to the motion blur problem.
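A hedged sketch of the deblurring step: the camera path measured during the exposure is rasterized into a point spread function, which is then used for non-blind deconvolution. Richardson-Lucy from scikit-image stands in for whichever deconvolution the authors used; the trajectory units (pixels of the high-resolution image) and kernel size are assumptions.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def psf_from_trajectory(traj_xy, size=31):
    """traj_xy: (T, 2) camera path samples during the exposure, in pixels."""
    psf = np.zeros((size, size))
    xy = traj_xy - traj_xy.mean(axis=0) + size // 2          # centre the path in the kernel
    for x, y in xy:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < size and 0 <= yi < size:
            psf[yi, xi] += 1.0                               # equal exposure per time sample
    return psf / psf.sum()

def deblur(blurred, traj_xy, iterations=30):
    """blurred is assumed to be a float image scaled to [0, 1]."""
    psf = psf_from_trajectory(traj_xy)
    return richardson_lucy(blurred, psf, iterations)
```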

Journal ArticleDOI
S. Friedel
TL;DR: In this paper, a low-contrast inversion scheme for electrical resistivity tomography was proposed to support the reconstructed image with estimates of model resolution, model covariance and data importance.
Abstract: SUMMARY Inconsistencies between an object and its image delivered by tomographical methods are inevitable. Loss of information occurs during the survey through incomplete and inaccurate data sampling and may also be introduced during the inverse procedure by smoothness constraints inadequate to the resolving power of the experimental setup. A quantitative appraisal of image quality (spatial resolution and image noise) is therefore not only required for successful interpretation of images but can be used together with measures of efficiency of the experimental design to optimize survey and inverse procedures. This paper introduces a low-contrast inversion scheme for electrical resistivity tomography that supports the reconstructed image with estimates of model resolution, model covariance and data importance. The algorithm uses a truncated pseudo-inverse and a line search approach to determine the maximum number of degrees of freedom necessary to fit the data to a prescribed target misfit. Though computationally expensive, the virtue of the method is that it reduces subjectivity by avoiding any empirically motivated model smoothness constraints. The method can be incorporated into a full non-linear inversion scheme for which a posteriori quality estimates can be calculated. In a numerical 2-D example the algorithm yielded reasonable agreement between object and image even for moderate resistivity contrasts of 10:100:1000. On the other hand, the resolving power of an exemplary four-electrode data set containing classical dipole–dipole and non-conventional configurations was shown to be severely affected by data inaccuracy. Insight into the resolving power as a function of space and data accuracy can be used as a guideline to designing optimized data sets, smoothness constraints and model parametrization.

Journal ArticleDOI
TL;DR: In this paper, a reconstruction algorithm that had previously modelled the head as a homogeneous sphere was modified to incorporate realistic geometry and conductivity distributions using the finite element method, which significantly improved the quality of EIT images.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed bit-plane-wise unequal error protection algorithm is simple, fast and robust in hostile network conditions and, therefore, can provide reasonable picture quality for video applications under varying network conditions.
Abstract: This paper presents a new bit-plane-wise unequal error protection algorithm for progressive bitstreams transmitted over lossy networks. The proposed algorithm protects a compressed embedded bitstream generated by a 3-D SPIHT algorithm by assigning an unequal amount of forward error correction (FEC) to each bit-plane. The proposed algorithm reduces the amount of side information needed to send the size of each code to the decoder by limiting the number of quality levels to the number of bit-planes to be sent while providing a graceful degradation of picture quality as packet losses increase. We also apply our proposed algorithm to transmission of JPEG 2000 coded images over the Internet. To get additional error-resilience at high packet loss rates, we extend our algorithm to multiple-substream unequal error protection. Simulation results show that the proposed algorithm is simple, fast and robust in hostile network conditions and, therefore, can provide reasonable picture quality for video applications under varying network conditions.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a novel method to detect copied versions of digital images by reducing the original image to an 8×8 sub-image by intensity averaging, and the AC coefficients of its discrete cosine transform (DCT) are used to compute distance from those generated from the query image, of which a user wants to find copies.
Abstract: This paper proposes a novel method to detect copied versions of digital images. The proposed copy detection scheme can be used as either an alternative or a complement to watermarking. A test image is reduced to an 8×8 sub-image by intensity averaging, and the AC coefficients of its discrete cosine transform (DCT) are used to compute a distance from those generated from the query image, of which a user wants to find copies. A challenge is that the replicated image may have been processed to elude copy detection or to enhance image quality. We show that an ordinal measure of DCT coefficients, based on the relative ordering of the AC magnitude values and a distance metric between rank permutations, is robust to various modifications of the original image. An optimal threshold selection scheme using the maximum a posteriori criterion is also described. The efficacy of the proposed method is extensively tested with both cluster-free and cluster-based detection schemes.
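A hedged sketch of the copy-detection signature: the image is reduced to an 8×8 sub-image by block intensity averaging, its 2-D DCT is taken, and the rank ordering of the AC coefficient magnitudes forms the signature; two images are compared by a distance between rank permutations. The specific rank distance (L1 between permutations) and the decision threshold are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.fftpack import dct
from scipy.stats import rankdata

def ordinal_signature(gray):
    g = gray.astype(float)
    h, w = g.shape
    # Average-intensity reduction to an 8x8 sub-image (crop to multiples of 8 first).
    sub = g[:h - h % 8, :w - w % 8].reshape(8, (h - h % 8) // 8, 8, (w - w % 8) // 8).mean(axis=(1, 3))
    coeffs = dct(dct(sub, axis=0, norm='ortho'), axis=1, norm='ortho')
    ac = np.abs(coeffs.ravel()[1:])                 # drop the DC term, keep the 63 AC magnitudes
    return rankdata(ac)                             # ordinal measure: rank permutation

def ordinal_distance(sig_a, sig_b):
    return np.abs(sig_a - sig_b).sum()              # L1 distance between rank permutations

def is_copy(query_gray, test_gray, threshold=300.0):
    return ordinal_distance(ordinal_signature(query_gray), ordinal_signature(test_gray)) < threshold
```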