
Showing papers on "Image quality published in 2008"


Journal ArticleDOI
TL;DR: Experimental data are presented that clearly demonstrate the scope of application of peak signal-to-noise ratio (PSNR) as a video quality metric, showing that as long as the video content and the codec type are not changed, PSNR is a valid quality measure.
Abstract: Experimental data are presented that clearly demonstrate the scope of application of peak signal-to-noise ratio (PSNR) as a video quality metric. It is shown that as long as the video content and the codec type are not changed, PSNR is a valid quality measure. However, when the content is changed, correlation between subjective quality and PSNR is highly reduced. Hence PSNR cannot be a reliable method for assessing the video quality across different video contents.
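
As a quick reference, PSNR is computed from the mean squared error between a reference and a distorted frame. Below is a minimal sketch in Python/NumPy, assuming 8-bit images; the function name is illustrative. Per-frame PSNR values are typically averaged over a sequence to obtain a single video score.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```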

1,899 citations


Journal ArticleDOI
TL;DR: Three new algorithms for 2D translation image registration to within a small fraction of a pixel, based on nonlinear optimization and matrix-multiply discrete Fourier transforms, are compared in terms of accuracy and computation time in the context of evaluating a translation-invariant error metric.
Abstract: Three new algorithms for 2D translation image registration to within a small fraction of a pixel that use nonlinear optimization and matrix-multiply discrete Fourier transforms are compared. These algorithms can achieve registration with an accuracy equivalent to that of the conventional fast Fourier transform upsampling approach in a small fraction of the computation time and with greatly reduced memory requirements. Their accuracy and computation time are compared for the purpose of evaluating a translation-invariant error metric.
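
The paper's contribution is subpixel refinement via nonlinear optimization and matrix-multiply DFTs; the sketch below shows only the standard coarse step such methods build on, integer-pixel registration by phase correlation, and is an illustrative simplification rather than the authors' algorithm.

```python
import numpy as np

def integer_shift(ref: np.ndarray, moving: np.ndarray) -> tuple:
    """Estimate the integer (row, col) translation aligning `moving` to `ref`
    from the peak of the phase cross-correlation surface."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the image size to negative values
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, correlation.shape))
```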

1,715 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: It is shown that a small set of randomly chosen raw patches from training images of similar statistical nature to the input image generally serve as a good dictionary, in the sense that the computed representation is sparse and the recovered high-resolution image is competitive or even superior in quality to images produced by other SR methods.
Abstract: This paper addresses the problem of generating a super-resolution (SR) image from a single low-resolution input image. We approach this problem from the perspective of compressed sensing. The low-resolution image is viewed as downsampled version of a high-resolution image, whose patches are assumed to have a sparse representation with respect to an over-complete dictionary of prototype signal-atoms. The principle of compressed sensing ensures that under mild conditions, the sparse representation can be correctly recovered from the downsampled signal. We will demonstrate the effectiveness of sparsity as a prior for regularizing the otherwise ill-posed super-resolution problem. We further show that a small set of randomly chosen raw patches from training images of similar statistical nature to the input image generally serve as a good dictionary, in the sense that the computed representation is sparse and the recovered high-resolution image is competitive or even superior in quality to images produced by other SR methods.
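
To make the sparse-representation idea concrete, the sketch below recovers a sparse coefficient vector for a patch over a given dictionary using plain orthogonal matching pursuit. This is a generic stand-in for illustration only; the paper's l1 formulation, dictionary preparation, and high-resolution reconstruction steps are not reproduced, and the sparsity level is an assumed parameter.

```python
import numpy as np

def sparse_code_omp(dictionary: np.ndarray, patch: np.ndarray, n_nonzero: int = 3) -> np.ndarray:
    """Greedy orthogonal matching pursuit: approximate `patch` (d,) with at most
    `n_nonzero` atoms of `dictionary` (d x K, columns assumed unit-norm)."""
    patch = patch.astype(np.float64)
    residual = patch.copy()
    support = []
    coeffs = np.zeros(dictionary.shape[1])
    sol = np.zeros(0)
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(dictionary.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected atoms, then update the residual
        sub = dictionary[:, support]
        sol, *_ = np.linalg.lstsq(sub, patch, rcond=None)
        residual = patch - sub @ sol
    coeffs[support] = sol
    return coeffs
```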

1,546 citations


Journal ArticleDOI
TL;DR: The results show that the optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods (anisotropic diffusion and total variation minimization), in terms of accuracy with low computation time.
Abstract: A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3-D optimized blockwise version of the nonlocal (NL)-means filter (Buades et al., 2005). The NL-means filter uses the redundancy of information in the image under study to remove the noise. The performance of the NL-means filter has already been demonstrated for 2-D images, but reducing the computational burden is a critical aspect of extending the method to 3-D images. To overcome this problem, we propose improvements to reduce the computational complexity. These improvements drastically reduce the computation time while preserving the performance of the NL-means filter. A fully automated and optimized version of the NL-means filter is then presented. Our contributions to the NL-means filter are: 1) an automatic tuning of the smoothing parameter; 2) a selection of the most relevant voxels; 3) a blockwise implementation; and 4) a parallelized computation. Quantitative validation was carried out on synthetic datasets generated with BrainWeb (Collins et al., 1998). The results show that our optimized NL-means filter outperforms the classical implementation of the NL-means filter, as well as two other classical denoising methods, anisotropic diffusion (Perona and Malik, 1990) and total variation minimization (Rudin et al., 1992), in terms of accuracy (measured by the peak signal-to-noise ratio) with low computation time. Finally, qualitative results on real data are presented.
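
For orientation, the core weighting rule of the NL-means filter can be sketched as a naive 2-D, pixelwise loop, shown below; this toy version deliberately omits the paper's contributions (3-D blockwise processing, voxel preselection, automatic tuning of the smoothing parameter, parallelization), and the patch, search-window, and h values are illustrative.

```python
import numpy as np

def nl_means_2d(image: np.ndarray, patch: int = 3, search: int = 7, h: float = 10.0) -> np.ndarray:
    """Naive pixelwise NL-means: each pixel becomes a weighted average of pixels in
    a search window, weighted by patch similarity exp(-||P_i - P_j||^2 / h^2)."""
    img = image.astype(np.float64)
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pr + sr, j + pr + sr                  # centre in padded coords
            p_ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    q = padded[ci + di - pr:ci + di + pr + 1,
                               cj + dj - pr:cj + dj + pr + 1]
                    d2 = np.mean((p_ref - q) ** 2)             # patch distance
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ci + di, cj + dj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```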

1,113 citations


Journal ArticleDOI
TL;DR: An overview of the fundamental principles of operation of this technology and the influence of geometric and software parameters on image quality and patient radiation dose are provided.

919 citations


Journal ArticleDOI
TL;DR: The main standardization activities, such as the work of the Video Quality Experts Group (VQEG), are summarized, and emerging trends in quality measurement, including image preference, visual attention, and audiovisual quality, are examined.
Abstract: This paper reviews the evolution of video quality measurement techniques and their current state of the art. We start with subjective experiments and then discuss the various types of objective metrics and their uses. We also introduce V-Factor, a "hybrid" metric using both transport and bitstream information. Finally, we summarize the main standardization activities, such as the work of the Video Quality Experts Group (VQEG), and we take a look at emerging trends in quality measurement, including image preference, visual attention, and audiovisual quality.

635 citations


Journal ArticleDOI
TL;DR: In this paper, Wang and Bovik's image quality index (QI) is used to evaluate the quality of pansharpened multispectral (MS) images without resorting to reference originals.
Abstract: This paper introduces a novel approach for evaluating the quality of pansharpened multispectral (MS) imagery without resorting to reference originals. Hence, evaluations are feasible at the highest spatial resolution of the panchromatic (PAN) sensor. Wang and Bovik’s image quality index (QI) provides a statistical similarity measurement between two monochrome images. The QI values between any couple of MS bands are calculated before and after fusion and used to define a measurement of spectral distortion. Analogously, QI values between each MS band and the PAN image are calculated before and after fusion to yield a measurement of spatial distortion. The rationale is that such QI values should be unchanged after fusion, i.e., when the spectral information is translated from the coarse scale of the MS data to the fine scale of the PAN image. Experimental results, carried out on very high-resolution Ikonos data and simulated Pleiades data, demonstrate that the results provided by the proposed approach are consistent and in trend with analysis performed on spatially degraded data. However, the proposed method requires no reference originals and is therefore usable in all practical cases.
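
The building block of both distortion measures is Wang and Bovik's quality index Q between two monochrome images. A global (single-window) version is sketched below; in practice Q is usually computed over sliding windows and averaged, and the index between MS bands (spectral) or between each MS band and PAN (spatial) is compared before and after fusion.

```python
import numpy as np

def quality_index(x: np.ndarray, y: np.ndarray) -> float:
    """Wang-Bovik universal image quality index, computed over the whole image:
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2))."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return float(4.0 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2)))
```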

630 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: A novel local image descriptor designed for dense wide-baseline matching is introduced; it is inspired by earlier descriptors such as SIFT and GLOH but can be computed much faster for this purpose, and it does not introduce artifacts that degrade matching performance.
Abstract: We introduce a novel local image descriptor designed for dense wide-baseline matching purposes. We feed our descriptors to a graph-cuts-based dense depth map estimation algorithm, and this yields better wide-baseline performance than the commonly used correlation windows, for which the size is hard to tune. As a result, unlike competing techniques that require many high-resolution images to produce good reconstructions, our descriptor can compute them from pairs of low-quality images such as the ones captured by video streams. Our descriptor is inspired by earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance. Our approach was tested with ground-truth laser-scanned depth maps as well as on a wide variety of image pairs of different resolutions, and we show that good reconstructions are achieved even with only two low-quality images.

575 citations


Journal ArticleDOI
TL;DR: A quality metric for the assessment of stereopairs, based on the fusion of 2D quality metrics and depth information, is proposed and evaluated using the SAMVIQ methodology for subjective assessment.
Abstract: Several metrics have been proposed in literature to assess the perceptual quality of two-dimensional images. However, no similar effort has been devoted to quality assessment of stereoscopic images. Therefore, in this paper, we review the different issues related to 3D visualization, and we propose a quality metric for the assessment of stereopairs using the fusion of 2D quality metrics and of the depth information. The proposed metric is evaluated using the SAMVIQ methodology for subjective assessment. Specifically, distortions deriving from coding are taken into account and the quality degradation of the stereopair is estimated by means of subjective tests.

391 citations


Journal ArticleDOI
TL;DR: Image fusion procedures for merging multi-spectral ASTER data with a RadarSAT-1 SAR scene are explored to determine which procedure merges the largest amount of SAR texture into the ASTER scenes while also preserving the spectral content.
Abstract: The use of disparate data sources within a pixel-level image fusion procedure has been well documented for pan-sharpening studies. The present paper explores various image fusion procedures for the fusion of multi-spectral ASTER data and a RadarSAT-1 SAR scene. The research sought to determine which fusion procedure merged the largest amount of SAR texture into the ASTER scenes while also preserving the spectral content. An additional application-based maximum likelihood classification assessment was also undertaken. Three SAR scenes were tested, namely one backscatter scene and two textural measures calculated using grey-level co-occurrence matrices (GLCM). Each of these was fused with the ASTER data using the following established approaches: Brovey transformation, intensity-hue-saturation (IHS), principal component substitution, discrete wavelet transformation, and a modified discrete wavelet transformation using the IHS approach. The resulting data sets were assessed using qualitative and quantitative (entropy, universal image quality index, maximum likelihood classification) approaches. Results from the study indicated that while all post-fusion data sets contained more information (entropy analysis), only the frequency-based fusion approaches managed to preserve the spectral quality of the original imagery. Furthermore, the results also indicated that the textural (mean, contrast) SAR scenes did not add any significant amount of information to the post-fusion imagery. Classification accuracy was not improved when comparing ASTER optical data and pseudo-optical bands generated from the fusion analysis. Accuracies ranged from 68.4% for the ASTER data to well below 50% for the component substitution methods. Frequency-based approaches also returned lower accuracies when compared to the unfused optical data. The present study essentially replicated pan-sharpening studies, using the high-resolution SAR scene as a pseudo-panchromatic band.
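
One of the quantitative checks mentioned above, the entropy of a fused band, is easy to reproduce; a minimal sketch for an 8-bit band follows, assuming the usual Shannon definition over the grey-level histogram.

```python
import numpy as np

def shannon_entropy(band: np.ndarray) -> float:
    """Shannon entropy (bits/pixel) of an 8-bit image band from its histogram."""
    hist, _ = np.histogram(band, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                      # skip empty bins (0 * log 0 is taken as 0)
    return float(-np.sum(p * np.log2(p)))
```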

318 citations


Journal ArticleDOI
TL;DR: This review provides a brief summary of the materials, methods, and results involved in multiple investigations of the correction for respiratory motion in PET/CT imaging of the thorax, with the goal of improving image quality and quantitation.

Journal ArticleDOI
01 Aug 2008
TL;DR: This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition.
Abstract: This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

Journal ArticleDOI
TL;DR: Experimental results show that the amount of TI is closely related to both image noise and image blurring, which demonstrates the usefulness of the proposed method for evaluation of physical image quality in medical imaging.
Abstract: This paper presents a simple and straightforward method for synthetically evaluating digital radiographic images by a single parameter in terms of transmitted information (TI). The features of our proposed method are (1) simplicity of computation, (2) simplicity of experimentation, and (3) combined assessment of image noise and resolution (blur). Two acrylic step wedges with 0–1–2–3–4–5 and 0–2–4–6–8–10 mm in thickness were used as phantoms for experiments. In the present study, three experiments were conducted. First, to investigate the relation between the value of TI and image noise, various radiation doses by changing exposure time were employed. Second, we examined the relation between the value of TI and image blurring by shifting the phantoms away from the center of the X-ray beam area toward the cathode end when imaging was performed. Third, we analyzed the combined effect of deteriorated blur and noise on the images by employing three smoothing filters. Experimental results show that the amount of TI is closely related to both image noise and image blurring. The results demonstrate the usefulness of our method for evaluation of physical image quality in medical imaging.
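
Transmitted information here is the mutual information between the known wedge step (input) and the observed pixel value (output). A hedged sketch of that estimate from paired samples is given below; the binning and the way samples are collected are illustrative, not the authors' exact procedure.

```python
import numpy as np

def transmitted_information(step_labels: np.ndarray, pixel_values: np.ndarray,
                            n_bins: int = 64) -> float:
    """Estimate I(step; pixel) in bits from a joint histogram of wedge-step labels
    and the pixel values measured within each step."""
    joint, _, _ = np.histogram2d(step_labels, pixel_values,
                                 bins=(len(np.unique(step_labels)), n_bins))
    p_joint = joint / joint.sum()
    p_step = p_joint.sum(axis=1, keepdims=True)   # marginal over pixel values
    p_pix = p_joint.sum(axis=0, keepdims=True)    # marginal over steps
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (p_step @ p_pix)[mask])))
```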

Journal ArticleDOI
TL;DR: A novel algorithm is proposed for segmenting an image into multiple levels using its mean and variance, making use of the fact that a number of distributions tend towards the Dirac delta function, peaking at the mean, in the limiting condition of vanishing variance.

Journal ArticleDOI
01 Aug 2008
TL;DR: A new metric for image quality assessment is proposed, based on a model of the human visual system that detects and classifies visible changes in the image structure, allowing the comparison of images with radically different dynamic ranges.
Abstract: The diversity of display technologies and introduction of high dynamic range imagery introduces the necessity of comparing images of radically different dynamic ranges. Current quality assessment metrics are not suitable for this task, as they assume that both reference and test images have the same dynamic range. Image fidelity measures employed by a majority of current metrics, based on the difference of pixel intensity or contrast values between test and reference images, result in meaningless predictions if this assumption does not hold. We present a novel image quality metric capable of operating on an image pair where both images have arbitrary dynamic ranges. Our metric utilizes a model of the human visual system, and its central idea is a new definition of visible distortion based on the detection and classification of visible changes in the image structure. Our metric is carefully calibrated and its performance is validated through perceptual experiments. We demonstrate possible applications of our metric to the evaluation of direct and inverse tone mapping operators as well as the analysis of the image appearance on displays with various characteristics.

Journal ArticleDOI
TL;DR: The high-resolution velocity estimates used for restoring the image are obtained by global motion estimation, Bezier curve fitting, and local motion estimation without resort to correspondence identification.
Abstract: Due to the sequential-readout structure of complementary metal-oxide semiconductor image sensor array, each scanline of the acquired image is exposed at a different time, resulting in the so-called electronic rolling shutter that induces geometric image distortion when the object or the video camera moves during image capture. In this paper, we propose an image processing technique using a planar motion model to address the problem. Unlike previous methods that involve complex 3-D feature correspondences, a simple approach to the analysis of inter- and intraframe distortions is presented. The high-resolution velocity estimates used for restoring the image are obtained by global motion estimation, Bezier curve fitting, and local motion estimation without resort to correspondence identification. Experimental results demonstrate the effectiveness of the algorithm.
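
As a much-simplified illustration of the underlying idea (not the paper's planar-motion model or its velocity estimation), once a horizontal velocity per scanline has been estimated, each row can be resampled back by the displacement accumulated since the first row. The constant velocity and grayscale input are assumptions of this sketch.

```python
import numpy as np

def unwarp_rows(frame: np.ndarray, vx_pixels_per_row: float) -> np.ndarray:
    """Undo a horizontal rolling-shutter skew in a grayscale frame by shifting
    row r by vx_pixels_per_row * r (linear interpolation along the row)."""
    h, w = frame.shape
    cols = np.arange(w)
    out = np.empty((h, w), dtype=np.float64)
    for r in range(h):
        shift = vx_pixels_per_row * r              # displacement of this scanline
        out[r] = np.interp(cols + shift, cols, frame[r].astype(np.float64))
    return out
```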

Journal ArticleDOI
TL;DR: Early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
Abstract: Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques put into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.

Proceedings ArticleDOI
05 Nov 2008
TL;DR: A new image database for testing full-reference image quality assessment metrics is presented, based on 1700 test images, which can be used for evaluating the performances of visual quality metrics as well as for comparison and for the design of new metrics.
Abstract: In this contribution, a new image database for testing full-reference image quality assessment metrics is presented. It is based on 1700 test images (25 reference images, 17 types of distortions for each reference image, 4 levels for each type of distortion). Using this image database, 654 observers from three different countries (Finland, Italy, and Ukraine) have carried out about 400000 individual human quality judgments (more than 200 judgments for each distorted image). The obtained mean opinion scores for the considered images can be used for evaluating the performances of visual quality metrics as well as for comparison and for the design of new metrics. The database, with testing results, is freely available.
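
A typical use of such a database is to compare a metric's scores against the published mean opinion scores with rank and linear correlations. A minimal sketch, assuming per-image metric scores and MOS values are already loaded as arrays:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate_metric(metric_scores: np.ndarray, mos: np.ndarray) -> dict:
    """Spearman (SROCC) and Pearson (PLCC) correlation of a quality metric with MOS."""
    srocc, _ = spearmanr(metric_scores, mos)
    plcc, _ = pearsonr(metric_scores, mos)
    return {"SROCC": float(srocc), "PLCC": float(plcc)}
```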

Journal ArticleDOI
TL;DR: A simulation study on the use of a field free line in magnetic particle imaging is presented; a major improvement in image quality is demonstrated, the reason for this improvement is discussed, and routes for the technical implementation of the effect are sketched.
Abstract: This paper presents a simulation study on the use of a field free line in magnetic particle imaging. A major improvement in the image quality is demonstrated. The reason for this image quality improvement is discussed, and routes for the technical implementation of the effect are sketched.

Journal ArticleDOI
TL;DR: A dual‐echo trajectory is proposed with a novel trajectory calibration from prescan data coupled with a multi‐frequency reconstruction to correct for off‐resonance and trajectory errors, allowing for highly accelerated PC acquisitions without sacrifice in image quality.
Abstract: Phase contrast (PC) magnetic resonance imaging with a three-dimensional, radially undersampled acquisition allows for the acquisition of high resolution angiograms and velocimetry in dramatically reduced scan times. However, such an acquisition is sensitive to blurring and artifacts from off-resonance and trajectory errors. A dual-echo trajectory is proposed with a novel trajectory calibration from prescan data coupled with a multi-frequency reconstruction to correct for these errors. Comparisons of phantom data and in vivo results from volunteers and patients with arteriovenous malformations are presented with and without these corrections and show significant improvement of image quality when both corrections are applied. The results demonstrate significantly improved visualization of vessels, allowing for highly accelerated PC acquisitions without sacrifice in image quality. Magn Reson Med 60:1329–1336, 2008. © 2008 Wiley-Liss, Inc.

Journal ArticleDOI
TL;DR: Regardless of patient size, shape, anatomical site, and field of view, the bowtie filter results in an overall improvement in CT number accuracy, image uniformity, low-contrast detectability, and imaging dose.
Abstract: The large variation of x-ray fluence at the detector in cone-beam CT (CBCT) poses a significant challenge to detectors' limited dynamic range, resulting in the loss of skinline as well as reduction of CT number accuracy, contrast-to-noise ratio, and image uniformity. The authors investigate the performance of a bowtie filter implemented in a system for image-guided radiation therapy (Elekta oncology system, XVI) as a compensator for improved image quality through fluence modulation, reduction in x-ray scatter, and reduction in patient dose. Dose measurements with and without the bowtie filter were performed on a CTDI Dose phantom and an empirical fit was made to calculate dose for any radial distance from the central axis of the phantom. Regardless of patient size, shape, anatomical site, and field of view, the bowtie filter results in an overall improvement in CT number accuracy, image uniformity, low-contrast detectability, and imaging dose. The implemented bowtie filter offers a significant improvement in imaging performance and is compatible with the current clinical system for image-guided radiation therapy.

Journal ArticleDOI
TL;DR: The results show that VQM quality measures of individual left and right views can be effectively used to predict the overall image quality, and that statistical measures like PSNR and SSIM of the left and right views correlate well with the depth perception of 3D video.
Abstract: 3D (3-dimensional) video technologies are emerging to provide more immersive media content than conventional 2D (2-dimensional) video applications. Most often, 3D video quality is measured using rigorous and time-consuming subjective evaluation test campaigns. This is because 3D video quality can be described as a combination of several perceptual attributes, such as overall image quality, perceived depth, presence, naturalness, and eye strain. Hence, this paper investigates the relationship between subjective quality measures and several objective quality measures, such as PSNR, SSIM, and VQM, for 3D video content. 3D video content captured using both a stereo camera pair (two cameras for left and right views) and colour-and-depth special range cameras is considered in this study. The results show that VQM quality measures of the individual left and right views (rendered left and right views for colour-and-depth sequences) can be effectively used to predict the overall image quality, and that statistical measures like PSNR and SSIM of the left and right views show good correlations with the depth perception of 3D video.

Journal ArticleDOI
TL;DR: This analysis shows that a curved image surface provides a way to lower the number of optical elements, reduce aberrations including astigmatism and coma, and increase off-axis brightness and sharpness.
Abstract: The design of optical systems for digital cameras is complicated by the requirement that the image surface be planar, which results in complex and expensive optics. We analyze a compact optical system with a curved image surface and compare its performance to systems with planar image surfaces via optics analysis and image system simulation. Our analysis shows that a curved image surface provides a way to lower the number of optical elements, reduce aberrations including astigmatism and coma, and increase off-axis brightness and sharpness. A method to fabricate curved image focal plane arrays using monolithic silicon is demonstrated.

Proceedings ArticleDOI
TL;DR: The design and implementation of a new stereoscopic image quality metric is described; results suggest that it is a better predictor of human image quality preference than PSNR and that it could be used to predict a threshold compression level for stereoscopic image pairs.
Abstract: We are interested in metrics for automatically predicting the compression settings for stereoscopic images so that we can minimize file size, but still maintain an acceptable level of image quality. Initially we investigate how Peak Signal to Noise Ratio (PSNR) measures the quality of varyingly coded stereoscopic image pairs. Our results suggest that symmetric, as opposed to asymmetric, stereo image compression will produce significantly better results. However, PSNR measures of image quality are widely criticized for correlating poorly with perceived visual quality. We therefore consider computational models of the Human Visual System (HVS) and describe the design and implementation of a new stereoscopic image quality metric. This metric point-matches regions of high spatial frequency between the left and right views of the stereo pair and accounts for HVS sensitivity to contrast and luminance changes at regions of high spatial frequency, using Michelson's formula and Peli's band-limited contrast algorithm. To establish a baseline for comparing our new metric with PSNR we ran a trial measuring stereoscopic image encoding quality with human subjects, using the Double Stimulus Continuous Quality Scale (DSCQS) from the ITU-R BT.500-11 recommendation. The results suggest that our new metric is a better predictor of human image quality preference than PSNR and could be used to predict a threshold compression level for stereoscopic image pairs.
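
The contrast term referred to above, Michelson's formula, is simply (Lmax - Lmin) / (Lmax + Lmin) over a luminance region; a small sketch (the region size is whatever local window the metric uses):

```python
import numpy as np

def michelson_contrast(region: np.ndarray) -> float:
    """Michelson contrast of a luminance patch: (Lmax - Lmin) / (Lmax + Lmin)."""
    lum = region.astype(np.float64)
    l_max, l_min = lum.max(), lum.min()
    if l_max + l_min == 0:
        return 0.0                      # avoid division by zero on a black patch
    return float((l_max - l_min) / (l_max + l_min))
```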

Journal ArticleDOI
TL;DR: A new approach for image matching and the related software package is developed and used in 3D tree modelling, showing results from analogue and digital aerial images and high‐resolution satellite images (IKONOS).
Abstract: Image matching is a key procedure in the process of generation of Digital Surface Models (DSM). We have developed a new approach for image matching and the related software package. This technique has proved its good performance in many applications. Here, we demonstrate its use in 3D tree modelling. After a brief description of our image matching technique, we show results from analogue and digital aerial images and high-resolution satellite images (IKONOS). In some cases, comparisons with manual measurements and/or airborne laser data have been performed. The evaluation of the results, qualitative and quantitative, indicate the very good performance of our matcher. Depending on the data acquisition parameters, the photogrammetric DSM can be denser than a DSM generated by laser, and its accuracy may be better than that from laser, as in these investigations. The tree canopy is well modelled, without smoothing of small details and avoiding the canopy penetration occurring with laser. Depending on the image scale, not only dense forest areas but also individual trees can be modelled.

Journal ArticleDOI
TL;DR: This review summarizes the most recent technical developments with regard to new detector techniques, options for dose reduction and optimized image processing, and explains the meaning of the exposure indicator or the dose reference level as tools for the radiologist to control the dose.
Abstract: The introduction of digital radiography has not only revolutionized communication between radiologists and clinicians, but also improved image quality and allowed for further reduction of patient exposure. However, digital radiography also poses risks, such as unnoticed increases in patient dose and suboptimal image processing that may lead to suppression of diagnostic information. Advanced processing techniques, such as temporal subtraction, dual-energy subtraction and computer-aided detection (CAD), will play an increasing role in the future and all aim to decrease the influence of distracting anatomic background structures and to ease the detection of focal and subtle lesions. This review summarizes the most recent technical developments with regard to new detector techniques, options for dose reduction and optimized image processing. It explains the meaning of the exposure indicator or the dose reference level as tools for the radiologist to control the dose. It also provides an overview of the multitude of studies conducted in recent years to evaluate the potential of these new developments to realize the ALARA principle. The focus of the review is on adult applications, the relationship between dose and image quality, and the differences between the various detector systems.

Proceedings ArticleDOI
23 Apr 2008
TL;DR: A preliminary process is introduced to enhance image quality degraded by lighting effects and noise from the web camera; the vein pattern is then segmented using an adaptive threshold method and matched using improved template matching, achieving up to 100% identification accuracy.
Abstract: Finger vein authentication can be a leading biometric technology in terms of security and convenience, since it uses features inside the human body. An image of a finger captured by a web camera under IR light transmission contains not only the vein pattern itself, but also shading produced by the varying thickness of the finger muscles, bones, and tissue networks surrounding the veins. In this paper, we introduce a preliminary process to enhance image quality degraded by lighting effects and noise produced by the web camera, then segment the vein pattern using an adaptive threshold method and match it using improved template matching. The experimental results show that even when the image quality is not good, as long as the veins are clear and appropriate processing is applied, the images can still be used for personal identification, achieving up to 100% identification accuracy.
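
A rough sketch of a local-mean adaptive thresholding step of the kind described above, assuming a grayscale near-infrared finger image in which veins appear darker than their surroundings; the window size and offset are illustrative, and the paper's enhancement and template-matching stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(image: np.ndarray, window: int = 15, offset: float = 5.0) -> np.ndarray:
    """Mark pixels darker than (local mean - offset) as vein candidates (1), else 0."""
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, size=window, mode="reflect")
    return (img < local_mean - offset).astype(np.uint8)
```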

Journal ArticleDOI
TL;DR: This work evaluates SSIM metrics and proposes a perceptually weighted multiscale variant of SSIM, which introduces a viewing distance dependence and provides a natural way to unify the structural similarity approach with the traditional JND-based perceptual approaches.
Abstract: Perceptual image quality metrics have explicitly accounted for human visual system (HVS) sensitivity to subband noise by estimating just noticeable distortion (JND) thresholds. A recently proposed class of quality metrics, known as structural similarity metrics (SSIM), models perception implicitly by taking into account the fact that the HVS is adapted for extracting structural information from images. We evaluate SSIM metrics and compare their performance to traditional approaches in the context of realistic distortions that arise from compression and error concealment in video compression/transmission applications. In order to better explore this space of distortions, we propose models for simulating typical distortions encountered in such applications. We compare specific SSIM implementations both in the image space and the wavelet domain; these include the complex wavelet SSIM (CWSSIM), a translation-insensitive SSIM implementation. We also propose a perceptually weighted multiscale variant of CWSSIM, which introduces a viewing distance dependence and provides a natural way to unify the structural similarity approach with the traditional JND-based perceptual approaches.
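
For reference, the single-scale SSIM index that these variants extend can be sketched in a few lines; the version below uses global statistics instead of the usual sliding Gaussian window, with the common default constants for 8-bit images.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window SSIM from global means, variances and covariance
    (a simplification of the usual locally windowed SSIM map)."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return float((2 * mx * my + c1) * (2 * cxy + c2) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```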

Journal ArticleDOI
TL;DR: A set of aberration modes ideally suited to this application is derived and used as the basis for an efficient aberration correction scheme.
Abstract: We implement wave front sensor-less adaptive optics in a structured illumination microscope. We investigate how the image formation process in this type of microscope is affected by aberrations. It is found that aberrations can be classified into two groups, those that affect imaging of the illumination pattern and those that have no influence on this pattern. We derive a set of aberration modes ideally suited to this application and use these modes as the basis for an efficient aberration correction scheme. Each mode is corrected independently through the sequential optimisation of an image quality metric. Aberration corrected imaging is demonstrated using fixed fluorescent specimens. Images are further improved using differential aberration imaging for reduction of background fluorescence.

Journal ArticleDOI
TL;DR: Results show that the quality scores that result from the proposed algorithm are well correlated with the human perception of quality, as those resulting from JPEG or MPEG encoding.