
Showing papers on "Image quality published in 2007"


Journal ArticleDOI
TL;DR: An algorithm based on an enhanced sparse representation in the transform domain, refined by a specially developed collaborative Wiener filtering step, achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
Abstract: We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3D groups. We realize it in three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
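The grouping and collaborative-filtering pipeline is easy to see in miniature. Below is a minimal sketch, not the authors' implementation: one reference patch is matched against a local search window, the matched blocks are stacked into a 3D group, a 3D DCT stands in for the paper's separable transform, and hard thresholding plays the role of spectrum shrinkage. Patch size, search radius, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def bm3d_group_filter(img, ref=(0, 0), bsize=8, search=16,
                      n_match=16, thr=50.0):
    """Toy sketch of one BM3D group: match blocks similar to the
    reference patch, stack them into a 3D array, apply a 3D DCT,
    hard-threshold the spectrum, and invert the transform."""
    img = np.asarray(img, dtype=float)
    ry, rx = ref
    ref_blk = img[ry:ry + bsize, rx:rx + bsize]
    cands = []
    for y in range(max(0, ry - search), min(img.shape[0] - bsize, ry + search)):
        for x in range(max(0, rx - search), min(img.shape[1] - bsize, rx + search)):
            blk = img[y:y + bsize, x:x + bsize]
            cands.append((np.sum((blk - ref_blk) ** 2), blk))
    cands.sort(key=lambda t: t[0])                 # most similar first
    group = np.stack([b for _, b in cands[:n_match]])
    spec = dctn(group, norm='ortho')               # 3D transform of the group
    spec[np.abs(spec) < thr] = 0.0                 # shrinkage of the spectrum
    return idctn(spec, norm='ortho')               # jointly filtered blocks
```

In the full algorithm the filtered blocks are returned to their positions and aggregated by weighted averaging, and a second pass replaces the hard thresholding with the collaborative Wiener filter.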

7,912 citations


Journal ArticleDOI
TL;DR: Enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies; clinical results illustrate the capabilities of the algorithm on real patient data.
Abstract: Multislice helical computed tomography scanning offers the advantages of faster acquisition and wide organ coverage for routine clinical diagnostic purposes. However, image reconstruction is faced with the challenges of three-dimensional cone-beam geometry, data completeness issues, and low dosage. Of all available reconstruction methods, statistical iterative reconstruction (IR) techniques appear particularly promising since they provide the flexibility of accurate physical noise modeling and geometric system description. In this paper, we present the application of Bayesian iterative algorithms to real 3D multislice helical data to demonstrate significant image quality improvement over conventional techniques. We also introduce a novel prior distribution designed to provide flexibility in its parameters to fine-tune image quality. Specifically, enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies. Clinical results also illustrate the capabilities of the algorithm on real patient data. Although computational load remains a significant challenge for practical development, superior image quality combined with advancements in computing technology make IR techniques a legitimate candidate for future clinical applications.
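As a hedged illustration of the statistical IR idea (not the authors' algorithm or their novel prior), the sketch below runs gradient descent on a penalized weighted least-squares objective, with a generic quadratic roughness penalty standing in for the paper's prior; A, y, and w are assumed to encode the system geometry, the measured sinogram, and the physical noise model.

```python
import numpy as np

def pwls_reconstruct(A, y, w, beta=0.1, step=1e-4, n_iter=500):
    """Gradient descent on 0.5*||W^(1/2)(Ax - y)||^2 + 0.5*beta*||Dx||^2:
    a noise-weighted data term plus a smoothness prior."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)   # finite-difference roughness operator
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))
        x -= step * grad
    return x
```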

987 citations


Proceedings ArticleDOI
29 Jul 2007
TL;DR: By combining information extracted from both blurred and noisy images, this paper shows how to produce a high-quality image that cannot be obtained by simply denoising the noisy image or deblurring the blurred image alone.
Abstract: Taking satisfactory photos under dim lighting conditions using a hand-held camera is challenging. If the camera is set to a long exposure time, the image is blurred due to camera shake. On the other hand, the image is dark and noisy if it is taken with a short exposure time but with a high camera gain. By combining information extracted from both blurred and noisy images, however, we show in this paper how to produce a high quality image that cannot be obtained by simply denoising the noisy image, or deblurring the blurred image alone. Our approach is image deblurring with the help of the noisy image. First, both images are used to estimate an accurate blur kernel, which otherwise is difficult to obtain from a single blurred image. Second, and again using both images, a residual deconvolution is proposed to significantly reduce ringing artifacts inherent to image deconvolution. Third, the remaining ringing artifacts in smooth image regions are further suppressed by a gain-controlled deconvolution process. We demonstrate the effectiveness of our approach using a number of indoor and outdoor images taken by off-the-shelf hand-held cameras in poor lighting environments.
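The residual deconvolution step lends itself to a short sketch. Assuming a denoised version of the noisy frame and an estimated kernel are already available, the residual between the blurred image and the re-blurred denoised image is deconvolved and added back; the offset keeps the residual nonnegative for Richardson-Lucy, loosely following the paper's formulation.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def residual_deconvolution(blurred, denoised, kernel, offset=1.0, iters=20):
    """Deconvolve only the small-amplitude residual between the blurred
    image and the re-blurred denoised image, which suppresses much of
    the ringing of deconvolving the blurred image directly."""
    reblur = fftconvolve(denoised, kernel, mode='same')
    resid = blurred - reblur + offset      # shift into the nonnegative range
    d_resid = richardson_lucy(resid, kernel, iters, clip=False)
    return denoised + d_resid - offset
```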

929 citations


Journal ArticleDOI
TL;DR: An iterative reconstruction method for undersampled radial MRI which is based on a nonlinear optimization, allows for the incorporation of prior knowledge with use of penalty functions, and deals with data from multiple coils is developed.
Abstract: The reconstruction of artifact-free images from radially encoded MRI acquisitions poses a difficult task for undersampled data sets, that is, for a much lower number of spokes in k-space than data samples per spoke. Here, we developed an iterative reconstruction method for undersampled radial MRI which (i) is based on a nonlinear optimization, (ii) allows for the incorporation of prior knowledge with use of penalty functions, and (iii) deals with data from multiple coils. The procedure arises as a two-step mechanism which first estimates the coil profiles and then renders a final image that complies with the actual observations. Prior knowledge is introduced by penalizing edges in coil profiles and by a total variation constraint for the final image. The latter condition leads to an effective suppression of undersampling (streaking) artifacts and further adds a certain degree of denoising. Apart from simulations, experimental results for a radial spin-echo MRI sequence are presented for phantoms and human brain in vivo at 2.9 T using 24, 48, and 96 spokes with 256 data samples. In comparison to conventional reconstructions (regridding), the proposed method yielded visually improved image quality in all cases. Magn Reson Med 57:1086–1098, 2007. © 2007 Wiley-Liss, Inc.
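A compact way to see the role of the total-variation constraint is a generic gradient scheme on a TV-penalized data fit. This is a simplification, not the paper's two-step coil-profile procedure: A and AT below are assumed callables for the radial sampling operator and its adjoint, and the divergence's boundary handling is kept crude.

```python
import numpy as np

def tv_grad(x, eps=1e-6):
    # Gradient of a smoothed isotropic TV penalty: -div(grad(x)/|grad(x)|).
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    div_x = np.diff(dx / mag, axis=1, prepend=(dx / mag)[:, :1])
    div_y = np.diff(dy / mag, axis=0, prepend=(dy / mag)[:, :1])
    return -(div_x + div_y)

def tv_reconstruct(A, AT, y, shape, lam=0.01, step=0.5, n_iter=100):
    """Gradient descent on ||A(x) - y||^2 + lam*TV(x); the TV term
    suppresses streaking from undersampling and adds mild denoising."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        x = x - step * (AT(A(x) - y) + lam * tv_grad(x))
    return x
```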

794 citations


Journal ArticleDOI
TL;DR: A new decision-based algorithm is proposed for restoration of images that are highly corrupted by impulse noise; it removes the noise effectively even at a noise level as high as 90% and preserves the edges without any loss up to an 80% noise level.
Abstract: A new decision-based algorithm is proposed for restoration of images that are highly corrupted by impulse noise. The new algorithm shows significantly better image quality than a standard median filter (SMF), adaptive median filters (AMF), a threshold decomposition filter (TDF), and cascade and recursive nonlinear filters. The proposed method, unlike other nonlinear filters, replaces only corrupted pixels, using the median value or a neighboring pixel value. As a result, the proposed method removes the noise effectively even at a noise level as high as 90% and preserves the edges without any loss up to an 80% noise level. The proposed algorithm (PA) is tested on different images and is found to produce better results in terms of qualitative and quantitative measures of the image.
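A minimal sketch of the decision-based idea follows (illustrative window size and fallback, not the published filter): pixels at the extreme gray levels are flagged as impulse-corrupted and replaced by the median of the uncorrupted neighbours, falling back to the previously processed pixel when the whole window is corrupted.

```python
import numpy as np

def decision_based_median(img, lo=0, hi=255):
    """Replace only suspected impulse pixels (values lo or hi) with the
    median of uncorrupted 3x3 neighbours, or with the previous pixel
    when every neighbour is corrupted; clean pixels are left untouched."""
    out = img.astype(float).copy()
    pad = np.pad(out, 1, mode='edge')
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] == lo or img[i, j] == hi:        # detection step
                win = pad[i:i + 3, j:j + 3].ravel()
                good = win[(win != lo) & (win != hi)]
                # crude fallback: at row starts, j-1 wraps to the row end
                out[i, j] = np.median(good) if good.size else out[i, j - 1]
    return out
```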

679 citations


Journal ArticleDOI
TL;DR: This paper studies face hallucination, or synthesizing a high-resolution face image from an input low-resolution image, with the help of a large collection of other high-resolution face images, to generate photorealistic face images.
Abstract: In this paper, we study face hallucination, or synthesizing a high-resolution face image from an input low-resolution image, with the help of a large collection of other high-resolution face images. Our theoretical contribution is a two-step statistical modeling approach that integrates both a global parametric model and a local nonparametric model. At the first step, we derive a global linear model to learn the relationship between the high-resolution face images and their smoothed and down-sampled lower resolution ones. At the second step, we model the residue between an original high-resolution image and the reconstructed high-resolution image after applying the learned linear model by a patch-based non-parametric Markov network to capture the high-frequency content. By integrating both global and local models, we can generate photorealistic face images. A practical contribution is a robust warping algorithm to align the low-resolution face images to obtain good hallucination results. The effectiveness of our approach is demonstrated by extensive experiments generating high-quality hallucinated face images from low-resolution input with no manual alignment.
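The global parametric step can be caricatured with ridge regression, a hedged stand-in for the paper's learned global linear model (the real method also models the residue with a patch-based Markov network, omitted here); training faces are assumed pre-aligned and vectorized.

```python
import numpy as np

def fit_global_model(lr_faces, hr_faces, lam=1e-3):
    """Learn a linear map from low-resolution face vectors to
    high-resolution ones from aligned training pairs (ridge regression)."""
    L = np.stack([f.ravel() for f in lr_faces])   # (n_samples, d_lr)
    H = np.stack([f.ravel() for f in hr_faces])   # (n_samples, d_hr)
    W = np.linalg.solve(L.T @ L + lam * np.eye(L.shape[1]), L.T @ H)
    return W

# hallucinated_hr = lr_input.ravel() @ W, reshaped to the HR size;
# the local nonparametric step would then add high-frequency detail.
```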

450 citations


Proceedings ArticleDOI
29 Oct 2007
TL;DR: This work proposes a technique for fusing a bracketed exposure sequence into a high quality image, without converting to HDR first, which avoids camera response curve calibration and is computationally efficient.
Abstract: We propose a technique for fusing a bracketed exposure sequence into a high quality image, without converting to HDR first. Skipping the physically-based HDR assembly step simplifies the acquisition pipeline. This avoids camera response curve calibration and is computationally efficient. It also allows for including flash images in the sequence. Our technique blends multiple exposures, guided by simple quality measures like saturation and contrast. This is done in a multiresolution fashion to account for the brightness variation in the sequence. The resulting image quality is comparable to existing tone mapping operators.
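The quality-measure weighting is simple enough to sketch directly. The measures below (local contrast, saturation, well-exposedness) follow the recipe described in the abstract; parameter values are illustrative, and the naive per-pixel blend in the closing comment skips the multiresolution (pyramid) stage the paper uses to avoid seams.

```python
import numpy as np
from scipy.ndimage import laplace

def fusion_weights(stack, eps=1e-12):
    """Per-pixel quality weights for an exposure stack of RGB images
    scaled to [0, 1]: contrast (Laplacian magnitude of the luminance),
    saturation (std over channels), and well-exposedness (near mid-grey)."""
    weights = []
    for img in stack:
        gray = img.mean(axis=2)
        contrast = np.abs(laplace(gray))
        saturation = img.std(axis=2)
        well_exposed = np.exp(-0.5 * ((img - 0.5) / 0.2) ** 2).prod(axis=2)
        weights.append(contrast * saturation * well_exposed + eps)
    w = np.stack(weights)
    return w / w.sum(axis=0)            # normalise across the exposures

# naive single-scale fusion (the paper blends over a Laplacian pyramid):
# fused = (fusion_weights(stack)[..., None] * np.stack(stack)).sum(axis=0)
```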

447 citations


Journal ArticleDOI
TL;DR: An improved image acquisition and data‐processing strategy for assessing aortic vascular geometry and 3D blood flow at 3T is evaluated.
Abstract: Purpose To evaluate an improved image acquisition and data-processing strategy for assessing aortic vascular geometry and 3D blood flow at 3T. Materials and Methods In a study with five normal volunteers and seven patients with known aortic pathology, prospectively ECG-gated cine three-dimensional (3D) MR velocity mapping with improved navigator gating, real-time adaptive k-space ordering and dynamic adjustment of the navigator acceptance criteria was performed. In addition to morphological information and three-directional blood flow velocities, phase-contrast (PC)-MRA images were derived from the same data set, which permitted 3D isosurface rendering of vascular boundaries in combination with visualization of blood-flow patterns. Results Analysis of navigator performance and image quality revealed improved scan efficiencies of 63.6% ± 10.5% and temporal resolution (<50 msec) compared to previous implementations. Semiquantitative evaluation of image quality by three independent observers demonstrated excellent general image appearance with moderate blurring and minor ghosting artifacts. Results from volunteer and patient examinations illustrate the potential of the improved image acquisition and data-processing strategy for identifying normal and pathological blood-flow characteristics. Conclusion Navigator-gated time-resolved 3D MR velocity mapping at 3T in combination with advanced data processing is a powerful tool for performing detailed assessments of global and local blood-flow characteristics in the aorta to describe or exclude vascular alterations. J. Magn. Reson. Imaging 2007. © 2007 Wiley-Liss, Inc.

380 citations


Journal ArticleDOI
L. Goldman
TL;DR: The basic principles of CT are reviewed within the context of the evolution of CT, a natural progression of improvements and innovations in response to both engineering problems and clinical requirements.
Abstract: This article provides a review of the basic principles of CT within the context of the evolution of CT. Modern CT technology can be understood as a natural progression of improvements and innovations in response to both engineering problems and clinical requirements. Detailed discussions of multislice CT, CT image quality evaluation, and radiation doses in CT will be presented in upcoming articles in this series.

366 citations


Journal ArticleDOI
L. Goldman
TL;DR: The most commonly used dose descriptor is CT dose index, which represents the dose to a location in a scanned volume from a complete series of slices, and this value is often displayed on the operator's console.
Abstract: This article discusses CT radiation dose, the measurement of CT dose, and CT image quality. The most commonly used dose descriptor is CT dose index, which represents the dose to a location (e.g., depth) in a scanned volume from a complete series of slices. A weighted average of the CT dose index measured at the center and periphery of dose phantoms provides a convenient single-number estimate of patient dose for a procedure, and this value (or a related indicator that includes the scanned length) is often displayed on the operator's console. CT image quality, as in most imaging, is described in terms of contrast, spatial resolution, image noise, and artifacts. A strength of CT is its ability to visualize structures of low contrast in a subject, a task that is limited primarily by noise and is therefore closely associated with radiation dose: The higher the dose contributing to the image, the less apparent is image noise and the easier it is to perceive low-contrast structures. Spatial resolution is ultimately limited by sampling, but both image noise and resolution are strongly affected by the reconstruction filter. As a result, diagnostically acceptable image quality at acceptable doses of radiation requires appropriately designed clinical protocols, including appropriate kilovolt peaks, amperages, slice thicknesses, and reconstruction filters.
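The weighted dose index described above reduces to a one-line formula: one-third of the centre measurement plus two-thirds of the periphery measurement. The helper below also shows the related dose-length product; the numbers in the comment are illustrative, not from the article.

```python
def ctdi_w(ctdi_center, ctdi_periphery):
    # Weighted CT dose index (mGy): 1/3 centre + 2/3 periphery of the phantom.
    return ctdi_center / 3.0 + 2.0 * ctdi_periphery / 3.0

def dose_length_product(ctdi_vol, scan_length_cm):
    # DLP (mGy*cm): volume CTDI scaled by the scanned length.
    return ctdi_vol * scan_length_cm

# e.g. ctdi_w(10.0, 20.0) -> about 16.7 mGy (hypothetical phantom readings)
```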

344 citations


Journal ArticleDOI
TL;DR: A lossless and reversible steganography scheme for hiding secret data in each block of quantized discrete cosine transform (DCT) coefficients in JPEG images that can provide acceptable image quality of stego-images and successfully achieve reversibility.

Journal ArticleDOI
TL;DR: Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images, and it is shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio.
Abstract: We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Renyi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio.
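A strongly simplified sketch of the anisotropy idea follows: windowed 1-D power spectra along each orientation stand in for the oriented pseudo-Wigner distribution, a Renyi entropy is averaged per direction, and the variance across directions is the anisotropy index. Window size, angles, and entropy order are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def directional_entropy(img, angle, win=8, alpha=3):
    """Mean Renyi entropy of windowed 1-D spectra taken along one
    direction (a crude stand-in for the oriented 1-D pseudo-Wigner
    distribution used in the paper)."""
    rot = rotate(np.asarray(img, float), angle, reshape=False, mode='reflect')
    ents = []
    for row in rot[::win]:
        for c in range(0, len(row) - win, win):
            spec = np.abs(np.fft.fft(row[c:c + win])) ** 2
            p = spec / (spec.sum() + 1e-12)
            ents.append(np.log((p ** alpha).sum() + 1e-12) / (1 - alpha))
    return np.mean(ents)

def anisotropy_index(img, angles=(0, 30, 60, 90, 120, 150)):
    # Variance of expected entropy across directions; in-focus,
    # noise-free images should score highest.
    return np.var([directional_entropy(img, a) for a in angles])
```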

Patent
25 Jun 2007
TL;DR: In this article, the authors present methods of digitally converting 2D motion pictures or any other 2D image sequences to stereoscopic 3D image data for 3D exhibition, which can be implemented within a highly efficient system comprising both software and computing hardware.
Abstract: The present invention discloses methods of digitally converting 2D motion pictures or any other 2D image sequences to stereoscopic 3D image data for 3D exhibition. In one embodiment, various types of image data cues can be collected from 2D source images by various methods and then used for producing two distinct stereoscopic 3D views. Embodiments of the disclosed methods can be implemented within a highly efficient system comprising both software and computing hardware. The architectural model of some embodiments of the system is equally applicable to a wide range of conversion, re-mastering and visual enhancement applications for motion pictures and other image sequences, including converting a 2D motion picture or a 2D image sequence to 3D, re-mastering a motion picture or a video sequence to a different frame rate, enhancing the quality of a motion picture or other image sequences, or other conversions that facilitate further improvement in visual image quality within a projector to produce the enhanced images.

Journal ArticleDOI
TL;DR: This paper presents the first detailed simulation approach to evaluate the proposed imaging method called 'magnetic particle imaging' with respect to resolution and sensitivity with good resolution, fast image acquisition and high sensitivity.
Abstract: This paper presents the first detailed simulation approach to evaluate the proposed imaging method called 'magnetic particle imaging' with respect to resolution and sensitivity. The simulated scanner is large enough to accept human bodies. Together with the choice of field strength and noise the setup is representative for clinical applications. Good resolution, fast image acquisition and high sensitivity are demonstrated for various tracer concentrations, acquisition times, tracer properties and fields of view. Scaling laws for the simple prediction of image quality under the variation of these parameters are derived.

Journal ArticleDOI
TL;DR: The area under the VGC curve is proposed as a single measure of the difference in image quality between two compared modalities and it is described how VGC analysis can be applied to data from an absolute visual grading analysis study.
Abstract: Visual grading of the reproduction of important anatomical structures is often used to determine clinical image quality in radiography. However, many visual grading methods incorrectly use statistical methods that require data belonging to an interval scale. The rating data from the observers in a visual grading study with multiple ratings is ordinal, meaning that non-parametric rank-invariant statistical methods are required. This paper describes such a method for determining the difference in image quality between two modalities called visual grading characteristics (VGC) analysis. In a VGC study, the task of the observer is to rate his confidence about the fulfilment of image quality criteria. The rating data for the two modalities are then analysed in a manner similar to that used in receiver operating characteristics (ROC) analysis. The resulting measure of image quality is the VGC curve, which--for all possible thresholds of the observer for a fulfilled criterion--describes the relationship between the proportions of fulfilled image criteria for the two compared modalities. The area under the VGC curve is proposed as a single measure of the difference in image quality between two compared modalities. It is also described how VGC analysis can be applied to data from an absolute visual grading analysis study.
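Because VGC analysis mirrors ROC analysis on ordinal rating data, the area under the VGC curve can be estimated nonparametrically with the Mann-Whitney statistic; the sketch below makes that identification explicit (a reasonable reading of the method, not code from the paper).

```python
from scipy.stats import mannwhitneyu

def vgc_auc(ratings_a, ratings_b):
    """Estimate the area under the VGC curve from ordinal ratings of
    two modalities: the probability that a random rating from modality
    B exceeds one from modality A (ties count half). A value of 0.5
    means equal image quality; above 0.5 favours modality B."""
    u, _ = mannwhitneyu(ratings_b, ratings_a, alternative='two-sided')
    return u / (len(ratings_a) * len(ratings_b))

# e.g. vgc_auc([2, 3, 3, 4], [3, 4, 4, 5]) -> about 0.81
```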

Journal ArticleDOI
TL;DR: This paper presents a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects, built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution together.
Abstract: Super-resolution image reconstruction allows the recovery of a high-resolution (HR) image from several low-resolution images that are noisy, blurred, and downsampled. In this paper, we present a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects. This formulation is built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution together. A cyclic coordinate descent optimization procedure is used to solve the MAP formulation, in which the motion fields, segmentation fields, and HR images are found alternately, each given the other two. Specifically, gradient-based methods are employed to solve for the HR image and motion fields, and an iterated conditional mode optimization method is used to obtain the segmentation fields. The proposed algorithm has been tested using a synthetic image sequence, the "Mobile and Calendar" sequence, and the original "Motorcycle and Car" sequence. The experimental results and error analyses verify the efficacy of this algorithm.

Patent
26 Oct 2007
TL;DR: An OCT imaging system user interface for efficiently providing relevant image displays to the user is presented in this patent. These displays are used during image acquisition to align patients and verify acquisition image quality, and during image analysis, these displays indicate positional relationships between displayed data images, automatically display suspicious analysis, automatically display diagnostic data, simultaneously display similar data from multiple visits, improve access to archived data, and provide other improvements for efficient data presentation of relevant information.
Abstract: The present invention is an OCT imaging system user interface for efficiently providing relevant image displays to the user. These displays are used during image acquisition to align patients and verify acquisition image quality. During image analysis, these displays indicate positional relationships between displayed data images, automatically display suspicious analysis, automatically display diagnostic data, simultaneously display similar data from multiple visits, improve access to archived data, and provide other improvements for efficient data presentation of relevant information.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.
Abstract: We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 2.0 dB on images with rich orientation features.
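For readers unfamiliar with lifting, the sketch below shows one level of the conventional (non-directional) 5/3 lifting wavelet on a 1-D signal; ADL generalizes the predict step so that it follows the local direction of highest pixel correlation rather than the horizontal/vertical axes. Periodic extension at the boundary is an assumption made for brevity.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of LeGall 5/3 lifting on an even-length 1-D signal:
    split into even/odd samples, predict the odds from even neighbours,
    then update the evens to preserve the running average."""
    s = x[0::2].astype(float)                 # even samples (approximation)
    d = x[1::2].astype(float)                 # odd samples (detail)
    d -= 0.5 * (s + np.roll(s, -1))           # predict step (wraps at the end)
    s += 0.25 * (d + np.roll(d, 1))           # update step
    return s, d
```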

Journal ArticleDOI
TL;DR: Improved video quality assessment algorithms are obtained by incorporating a recent model of human visual speed perception as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty in video signals.
Abstract: Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Nat. Neurosci. 9, 578 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the video quality experts group Phase I test data set.

Journal ArticleDOI
TL;DR: In this article, a new quantitative metric called the ratio of spatial frequency error (rSFe) is proposed to objectively evaluate the quality of fused imagery, where the measured value of the proposed metric is used as feedback to a fusion algorithm such that the image quality of the fused image can potentially be improved.
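Spatial frequency here is the RMS of horizontal and vertical first differences, and the rSFe compares the fused image's activity against a reference activity. In the sketch below, a reference image stands in for the 'ideal' spatial frequency that the authors derive from the input imagery, so treat it as a hedged simplification.

```python
import numpy as np

def spatial_frequency(img):
    # SF = sqrt(RF^2 + CF^2): RMS of row- and column-wise first differences.
    g = np.asarray(img, float)
    rf = np.sqrt(np.mean(np.diff(g, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(g, axis=0) ** 2))
    return np.hypot(rf, cf)

def rsfe(fused, reference):
    # Ratio of spatial-frequency error: negative when the fusion loses
    # activity, positive when it adds spurious detail; nearer zero is better.
    sf_ref = spatial_frequency(reference)
    return (spatial_frequency(fused) - sf_ref) / sf_ref
```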

Journal ArticleDOI
TL;DR: A non-blind, shift-invariant image processing technique that fuses multi-view three-dimensional image data sets into a single, high quality three-dimensional image is presented, effective for improving the resolution and isotropy in images of transparent specimens, and improving the uniformity of the image quality of partially opaque samples.
Abstract: A non-blind, shift-invariant image processing technique that fuses multi-view three-dimensional image data sets into a single, high quality three-dimensional image is presented. It is effective for 1) improving the resolution and isotropy in images of transparent specimens, and 2) improving the uniformity of the image quality of partially opaque samples. This is demonstrated with fluorescent samples such as Drosophila melanogaster and Medaka embryos and pollen grains imaged by Selective Plane Illumination Microscopy (SPIM). The application of the algorithm to SPIM data yields high-resolution images of organ structure and gene expression, in some cases at a sub-cellular level, throughout specimens ranging from several microns up to a millimeter in size.

Journal ArticleDOI
TL;DR: A promising method to incorporate tissue structural information into the reconstruction of diffusion-based fluorescence imaging is introduced, which regularizes the inversion problem with a Laplacian-type matrix, which inherently smoothes pre-defined tissue, but allows discontinuities between adjacent regions.
Abstract: A promising method to incorporate tissue structural information into the reconstruction of diffusion-based fluorescence imaging is introduced. The method regularizes the inversion problem with a Laplacian-type matrix, which inherently smoothes pre-defined tissue, but allows discontinuities between adjacent regions. The technique is most appropriately used when fluorescence tomography is combined with structural imaging systems. Phantom and simulation studies were used to illustrate significant improvements in quantitative imaging and linearity of response with the new algorithm. Images of an inclusion containing the fluorophore Lutetium Texaphyrin (Lutex) embedded in a cylindrical phantom are more accurate than in situations where no structural information is available, and edge artifacts which are normally prevalent were almost entirely suppressed. Most importantly, spatial priors provided a higher degree of sensitivity and accuracy to fluorophore concentration, though both techniques suffer from image bias caused by excitation signal leakage. The use of spatial priors becomes essential for accurate recovery of fluorophore distributions in complex tissue volumes. Simulation studies revealed an inability of the “no-priors” imaging algorithm to recover Lutex fluorescence yield in domains derived from T1 weighted images of a human breast. The same domains were reconstructed accurately to within 75% of the true values using prior knowledge of the internal tissue structure. This algorithmic approach will be implemented in an MR-coupled fluorescence spectroscopic tomography system, using the MR images for the structural template and the fluorescence data for region quantification.

Journal ArticleDOI
TL;DR: This paper proposes an image compression framework towards visual quality rather than pixel-wise fidelity, and constructs a practical system to verify the effectiveness of the compression approach in which edge map serves as assistant information and the edge extraction and region removal approaches are developed accordingly.
Abstract: In this paper, image compression utilizing visual redundancy is investigated. Inspired by recent advancements in image inpainting techniques, we propose an image compression framework towards visual quality rather than pixel-wise fidelity. In this framework, an original image is analyzed at the encoder side so that portions of the image are intentionally and automatically skipped. Instead, some information is extracted from these skipped regions and delivered to the decoder as assistant information in the compressed fashion. The delivered assistant information plays a key role in the proposed framework because it guides image inpainting to accurately restore these regions at the decoder side. Moreover, to fully take advantage of the assistant information, a compression-oriented edge-based inpainting algorithm is proposed for image restoration, integrating pixel-wise structure propagation and patch-wise texture synthesis. We also construct a practical system to verify the effectiveness of the compression approach in which edge map serves as assistant information and the edge extraction and region removal approaches are developed accordingly. Evaluations have been made in comparison with baseline JPEG and standard MPEG-4 AVC/H.264 intra-picture coding. Experimental results show that our system achieves up to 44% and 33% bits-savings, respectively, at similar visual quality levels. Our proposed framework is a promising exploration towards future image and video compression.

Journal ArticleDOI
TL;DR: The numerical values of the image quality metrics, along with the qualitative analysis results, indicated the good feature preservation performance of the complex diffusion process, as desired for better diagnosis in medical image processing.
Abstract: A comparison between two nonlinear diffusion methods for denoising OCT images is performed. Specifically, we compare and contrast the performance of the traditional nonlinear Perona-Malik filter with a complex diffusion filter that has been recently introduced by Gilboa et al. The complex diffusion approach, based on generalizing the nonlinear scale space to the complex domain by combining the diffusion equation and the free Schrödinger equation, is evaluated on synthetic images and also on representative OCT images at various noise levels. The performance improvement over the traditional nonlinear Perona-Malik filter is quantified in terms of noise suppression, image structural preservation and visual quality. An average signal-to-noise ratio (SNR) improvement of about 2.5 times and an average contrast-to-noise ratio (CNR) improvement of 49% were obtained, while the mean structural similarity (MSSIM) was practically not degraded after denoising. The nonlinear complex diffusion filtering can be applied with success to many OCT imaging applications. In summary, the numerical values of the image quality metrics along with the qualitative analysis results indicated the good feature preservation performance of the complex diffusion process, as desired for better diagnosis in medical image processing.
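The baseline in this comparison, the Perona-Malik filter, is short enough to show in full; the step size, edge-stopping function, and conductance below are common textbook choices, and the complex diffusion variant evaluated in the paper replaces this real-valued diffusivity with one driven by the imaginary part of a complex scale space.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
    """Classic Perona-Malik anisotropic diffusion: the diffusivity
    decays with local gradient magnitude, so edges are smoothed far
    less than homogeneous (noisy) regions."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # differences to the 4 neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```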

Proceedings ArticleDOI
15 Apr 2007
TL;DR: A novel method for the detection of image tampering operations in JPEG images by exploiting the blocking artifact characteristics matrix (BACM) to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or it has been cropped from another JPEG image and re-saved as a JPEG image.
Abstract: One of the most common practices in image tampering involves cropping a patch from a source and pasting it onto a target. In this paper, we present a novel method for the detection of such tampering operations in JPEG images. The lossy JPEG compression introduces inherent blocking artifacts into the image and our method exploits such artifacts to serve as a 'watermark' for the detection of image tampering. We develop the blocking artifact characteristics matrix (BACM) and show that, for the original JPEG images, the BACM exhibits regular symmetrical shape; for images that are cropped from another JPEG image and re-saved as JPEG images, the regular symmetrical property of the BACM is destroyed. We fully exploit this property of the BACM and derive representation features from the BACM to train a support vector machine (SVM) classifier for recognizing whether an image is an original JPEG image or it has been cropped from another JPEG image and re-saved as a JPEG image. We present experiment results to show the efficacy of our method.
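A much-reduced cousin of the BACM can convey the intuition: average the neighbour differences as a function of position modulo the 8-pixel JPEG grid. For an untouched JPEG the profile peaks at the block boundary, whereas cropping shifts or flattens that symmetry. The full method builds a 2-D matrix and feeds derived features to an SVM, both omitted here.

```python
import numpy as np

def blockiness_profile(img, period=8):
    """Mean absolute horizontal-neighbour difference, binned by column
    position modulo the JPEG block period; a 1-D simplification of the
    blocking artifact characteristics matrix (BACM)."""
    g = np.asarray(img, float)
    dif = np.abs(np.diff(g, axis=1))
    return np.array([dif[:, j::period].mean() for j in range(period)])
```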

Journal ArticleDOI
TL;DR: It is shown, by representing aberrations as an expansion in Lukosz modes, that the effects of different modes can be separated and the optimisation of each mode becomes independent and can be performed as the maximization of a quadratic function, requiring only three image measurements per mode.
Abstract: We present a wavefront sensorless adaptive optics scheme for an incoherent imaging system. Aberration correction is performed through the optimisation of an image quality metric based upon the low spatial frequency content of the image. A sequence of images is acquired, each with a different aberration bias applied, and the correction aberration is estimated from the information in this image sequence. It is shown, by representing aberrations as an expansion in Lukosz modes, that the effects of different modes can be separated. The optimisation of each mode becomes independent and can be performed as the maximisation of a quadratic function, requiring only three image measurements per mode. This efficient correction scheme is demonstrated experimentally in an incoherent transmission microscope. We show that the sensitivity to different aberration magnitudes can be tuned by changing the range of spatial frequencies used in the metric. We also explain how the optimisation scheme is related to other methods that use image sharpness metrics.
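The three-measurements-per-mode claim is just parabola fitting: with the metric measured at biases -b, 0, and +b of one Lukosz mode, the vertex of the interpolating quadratic gives the estimated correction amplitude. The helper below is a generic sketch of that step, not the authors' code.

```python
def quadratic_peak(b, m_minus, m_zero, m_plus):
    """Vertex of the parabola through metric values measured at mode
    biases -b, 0, +b; the denominator should be negative near a maximum."""
    denom = m_plus - 2.0 * m_zero + m_minus
    return 0.5 * b * (m_minus - m_plus) / denom

# e.g. quadratic_peak(1.0, 0.7, 1.0, 0.8) -> 0.1 (peak slightly toward +b)
```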

Proceedings ArticleDOI
12 Nov 2007
TL;DR: The results show that applying visual attention to image quality assessment is not trivial, even with ground truth, and that an artefact is likely more annoying in a salient region than in other areas.
Abstract: The aim of objective image quality assessment is to find an automatic algorithm that evaluates the quality of pictures or video as a human observer would. To reach this goal, researchers try to simulate the Human Visual System (HVS). Visual attention is a main feature of the HVS, but few studies have examined its use in image quality assessment. In this work, we investigate the use of visual attention information in the final pooling step of objective metrics. The rationale of this choice is that an artefact is likely more annoying in a salient region than in other areas. To shed light on this point, a quality assessment campaign was conducted during which eye movements were recorded. The results show that applying visual attention to image quality assessment is not trivial, even with the ground truth.
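The pooling idea under test can be expressed in a few lines: weight a local distortion map by a (ground-truth or computed) saliency map before averaging. This is a generic sketch of saliency-weighted pooling, not the campaign's exact protocol.

```python
import numpy as np

def saliency_weighted_score(distortion_map, saliency_map, eps=1e-12):
    """Pool local distortions with saliency as the weight, so an
    artefact in a salient region costs more than one in the background."""
    w = saliency_map / (saliency_map.sum() + eps)
    return float((w * distortion_map).sum())
```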

Journal ArticleDOI
TL;DR: Introductory Digital Image Processing: A Remote Sensing Perspective (3rd ed.), by John R. Jensen, is a complete and thorough introduction to and reference for remote sensing technologies and digital image processing techniques.
Abstract: Introductory Digital Image Processing: A Remote Sensing Perspective (3rd ed.), by John R. Jensen, is a complete and thorough introduction to and reference for remote sensing technologies and digital image processing techniques. The author writes a detailed account of a wide spectrum of remote sensing and digital image processing needs, including data collection, processing hardware and software, image quality and statistical evaluation, visualization, principles of electromagnetic radiation, radiometric and geometric correction of remotely sensed data, image enhancement, information extraction, change detection analyses, and accuracy assessments.

Journal ArticleDOI
TL;DR: A novel steganographic method based on JPEG and the Particle Swarm Optimization (PSO) algorithm is proposed; it has a larger message capacity and better image quality than Chang et al.'s method, and provides a high level of security.

Journal ArticleDOI
TL;DR: Methods and results are presented for predicting, measuring and correcting geometric distortions in a 3 T clinical magnetic resonance (MR) scanner for the purpose of image guidance in radiation treatment planning; because the machine-related distortion sources are individually characterized, distortion can be predicted for other imaging sequences, negating the need for individual distortion calculation for each sequence.
Abstract: The work presented herein describes our methods and results for predicting, measuring and correcting geometric distortions in a 3 T clinical magnetic resonance (MR) scanner for the purpose of image guidance in radiation treatment planning. Geometric inaccuracies due to both inhomogeneities in the background field and nonlinearities in the applied gradients were easily visualized on the MR images of a regularly structured three-dimensional (3D) grid phantom. From a computed tomography scan, the locations of just under 10 000 control points within the phantom were accurately determined in three dimensions using a MATLAB-based computer program. MR distortion was then determined by measuring the corresponding locations of the control points when the phantom was imaged using the MR scanner. Using a reversed gradient method, distortions due to gradient nonlinearities were separated from distortions due to inhomogeneities in the background B0 field. Because the various sources of machine-related distortions can be individually characterized, distortions present in other imaging sequences (for which 3D distortion cannot accurately be measured using phantom methods) can be predicted negating the need for individual distortion calculation for a variety of other imaging sequences. Distortions were found to be primarily caused by gradient nonlinearities and maximum image distortions were reported to be less than those previously found by other researchers at 1.5 T. Finally, the image slices were corrected for distortion in order to provide geometrically accurate phantom images.
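The reversed gradient separation used here rests on a sign argument: the shift caused by B0 inhomogeneity flips sign when the readout gradient is reversed, while the shift from gradient nonlinearity does not. Given matched control-point positions from forward- and reverse-gradient scans plus the CT-derived true positions, the two components fall out directly; the sketch below assumes all positions are expressed in the same coordinates.

```python
import numpy as np

def separate_distortion(x_forward, x_reverse, x_true):
    """Split measured control-point shifts into a B0-inhomogeneity term
    (sign flips with gradient reversal) and a gradient-nonlinearity
    term (sign does not flip)."""
    x_forward, x_reverse = np.asarray(x_forward), np.asarray(x_reverse)
    b0_term = 0.5 * (x_forward - x_reverse)
    gradient_term = 0.5 * (x_forward + x_reverse) - np.asarray(x_true)
    return b0_term, gradient_term
```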