
Showing papers on "Peak signal-to-noise ratio published in 2009"


Proceedings ArticleDOI
19 Aug 2009
TL;DR: A design methodology for algorithm/architecture co-design of a voltage-scalable, process variation aware motion estimator based on significance driven computation and an adaptive quality compensation block which "tunes" the algorithm and architecture depending on the magnitude of voltage over-scaling and severity of process variations.
Abstract: In this paper we present a design methodology for algorithm/architecture co-design of a voltage-scalable, process variation aware motion estimator based on significance driven computation. The fundamental premise of our approach lies in the fact that not all computations are equally significant in shaping the output response of video systems. We use a statistical technique to intelligently identify these significant/not-so-significant computations at the algorithmic level and subsequently change the underlying architecture such that the significant computations are computed in an error free manner under voltage over-scaling. Furthermore, our design includes an adaptive quality compensation (AQC) block which "tunes" the algorithm and architecture depending on the magnitude of voltage over-scaling and the severity of process variations. Simulation results show average power savings of ~33% for the proposed architecture compared to a conventional implementation in 90 nm CMOS technology. The maximum output quality loss in terms of Peak Signal to Noise Ratio (PSNR) was ~1 dB, without incurring any throughput penalty.

101 citations
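Every entry on this page reports PSNR; for reference, the metric itself is only a few lines. A generic sketch (the function name is ours, and an 8-bit peak of 255 is assumed):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak Signal to Noise Ratio in dB between two equally sized images."""
    ref = np.asarray(reference, dtype=np.float64)
    dist = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((ref - dist) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

PSNR = 10·log10(peak²/MSE), so halving the MSE gains about 3 dB; identical images give infinite PSNR.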


Journal ArticleDOI
TL;DR: A new singular value decomposition (SVD) and discrete wavelet transformation (DWT) based technique is proposed for hiding a watermark in the full frequency band of color images (DSFW), and it is observed that the quality of the watermarked image is maintained at 36 dB.
Abstract: Due to advances in computer technology and readily available tools, it is very easy for unknown users to produce illegal copies of the multimedia data floating across the Internet. Many techniques are available to protect such multimedia data on the Internet, including various encryption, steganography, watermarking and information hiding techniques. Digital watermarking is a technique in which a piece of digital information is embedded into an image and extracted later for ownership verification. Secret digital data can be embedded either in the spatial domain or in the frequency domain of the cover data. In this paper, a new singular value decomposition (SVD) and discrete wavelet transformation (DWT) based technique is proposed for hiding a watermark in the full frequency band of color images (DSFW). The quality of the watermarked image and of the extracted watermark is measured using peak signal to noise ratio (PSNR) and normalized correlation (NC), respectively. It is observed that the quality of the watermarked image is maintained at 36 dB. The robustness of the proposed algorithm is tested against various attacks, including salt-and-pepper noise, Gaussian noise, cropping and JPEG compression.

67 citations
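The singular-value embedding step of such SVD-based schemes can be sketched with NumPy alone. This is a simplified single-band illustration in the style of Liu-Tan SVD watermarking (function names and the strength `alpha` are our choices; the DWT stage of the DSFW scheme is omitted):

```python
import numpy as np

def svd_embed(cover, watermark, alpha=0.1):
    """Embed `watermark` into the singular values of `cover`.
    Returns the watermarked image plus the side information
    (U_w, V_w, original S) needed for extraction."""
    U, S, Vt = np.linalg.svd(cover, full_matrices=False)
    D = np.diag(S) + alpha * watermark           # perturb the singular-value matrix
    Uw, Sw, Vwt = np.linalg.svd(D, full_matrices=False)
    watermarked = U @ np.diag(Sw) @ Vt
    return watermarked, (Uw, Vwt, S)

def svd_extract(received, side, alpha=0.1):
    """Recover the watermark from a (possibly attacked) image."""
    Uw, Vwt, S = side
    _, S_recv, _ = np.linalg.svd(received, full_matrices=False)
    D_est = Uw @ np.diag(S_recv) @ Vwt           # rebuild the perturbed matrix
    return (D_est - np.diag(S)) / alpha
```

Because only the singular values are modified, the pixel-domain distortion is bounded by the spectral norm of `alpha * watermark`, which keeps the PSNR of the watermarked image high.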


Journal ArticleDOI
TL;DR: Three different wavelet shrinkage methods, namely NeighShrink, NeighSure and NeighLevel, are presented, which give comparatively higher peak signal to noise ratio (PSNR), are much more efficient and have less visual artifacts compared to other methods.
Abstract: Since Donoho et al. proposed the wavelet thresholding method for signal denoising, many different denoising approaches have been suggested. In this paper, we present three different wavelet shrinkage methods, namely NeighShrink, NeighSure and NeighLevel. NeighShrink thresholds the wavelet coefficients based on Donoho's universal threshold and the sum of the squares of all the wavelet coefficients within a neighborhood window. NeighSure adopts Stein's unbiased risk estimator (SURE) instead of the universal threshold of NeighShrink so as to obtain the optimal threshold with minimum risk for each subband. NeighLevel uses parent coefficients in a coarser level as well as neighbors in the same subband. We also apply a multiplying factor for the optimal universal threshold in order to get better denoising results. We found that the value of the constant is about the same for different kinds and sizes of images. Experimental results show that our methods give comparatively higher peak signal to noise ratio (PSNR), are much more efficient and have less visual artifacts compared to other methods.

63 citations
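The NeighShrink rule itself is compact: each coefficient is scaled by β = max(0, 1 − λ²/S²), where S² is the sum of squared coefficients in a neighbourhood window and λ is the universal threshold. A direct sketch of that rule (names are ours; a real implementation would apply it per wavelet subband after a DWT):

```python
import numpy as np

def neighshrink(coeffs, lam, window=3):
    """Shrink each coefficient by beta = max(0, 1 - lam^2 / S^2),
    where S^2 is the energy of its window x window neighbourhood."""
    pad = window // 2
    padded = np.pad(coeffs, pad, mode="constant")
    out = np.zeros_like(coeffs, dtype=np.float64)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            s2 = np.sum(padded[i:i + window, j:j + window] ** 2)
            beta = max(0.0, 1.0 - lam ** 2 / s2) if s2 > 0 else 0.0
            out[i, j] = beta * coeffs[i, j]
    return out
```

An isolated small coefficient (likely noise) has low neighbourhood energy and is zeroed, while a coefficient inside an energetic neighbourhood (likely an edge) is kept almost intact.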


Journal ArticleDOI
TL;DR: A new approach to dynamic gray-level error control for global dimming of liquid crystal display (LCD) devices that dynamically chooses the maximum luminance for a given image, based on final image integrity, has an advantage of preserving the worst image quality at any desired level, while reducing power consumption as much as possible.
Abstract: In this paper, we present a new approach to dynamic gray-level error control for global dimming of liquid crystal display (LCD) devices. In the LCD devices, global dimming is used to reduce power consumption by limiting the maximum luminance and lowering the brightness of backlight. The existing approaches, based on the fixed rate of clipped pixels, deteriorate the image quality seriously for some images even after gray level compensation. The proposed approach, on the other hand, dynamically chooses the maximum luminance for a given image, based on final image integrity. Thus, it has an advantage of preserving the worst image quality at any desired level, while reducing power consumption as much as possible. In the experiments, the proposed approach successfully maintained the minimum target peak signal to noise ratio (PSNR) for test sequences.

49 citations
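Under ideal gray-level compensation, dimming the backlight to a maximum luminance L effectively clips pixel values above L, so the paper's dynamic choice can be sketched as a search for the lowest luminance that still meets a target PSNR. A simplified model (the function name and the linear clipping assumption are ours):

```python
import numpy as np

def choose_backlight(img, target_psnr):
    """Return the lowest maximum luminance level whose clipped result
    still meets `target_psnr` (dB) against the original 8-bit image."""
    img = np.asarray(img, dtype=np.float64)
    for level in range(1, 256):
        clipped = np.minimum(img, float(level))   # idealized dimming + compensation
        mse = np.mean((img - clipped) ** 2)
        p = float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
        if p >= target_psnr:
            return level
    return 255
```

Because the clipping error is monotone in the luminance level, the first level that meets the target is also the most power-saving one, which is exactly the "worst image quality preserved at any desired level" property claimed above.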


Journal ArticleDOI
01 Feb 2009
TL;DR: A structural information-based image quality assessment algorithm, in which LU factorization is used for representation of the structural information of an image, which effectively replaces the peak signal to noise ratio or the mean square error.
Abstract: The goal of objective image quality assessment is to quantitatively measure the quality of an arbitrary image. An objective image quality measure is desirable if it is close to a subjective image quality assessment such as the mean opinion score. Image quality assessment algorithms are generally classified into two methodologies: perceptual and structural information-based. This paper proposes a structural information-based image quality assessment algorithm in which LU factorization is used to represent the structural information of an image. The proposed algorithm performs LU factorization of each of the reference and distorted images, from which a distortion map is computed for measuring the quality of the distorted image. Finally, the proposed image quality metric is computed from the two-dimensional distortion map. Experimental results with the Laboratory for Image and Video Engineering database images show the efficiency of the proposed method, calibrated by linear and logistic regressions, in terms of the Pearson correlation coefficient and root mean square error. In commercial systems, the proposed algorithm can be used for quality assessment of mobile contents and video coding, effectively replacing the peak signal to noise ratio or the mean square error.

45 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: The proposed quality metric (MOSp) predicts perceptual quality of compressed video using sequence characteristics and the mean squared error between the original and compressed video sequences, and has better correlation with subjective quality compared to popular metrics such as PSNR, SSIM and PSNRplus.
Abstract: This paper presents a new video quality metric for automatically estimating the perceptual quality of compressed video sequences. Distortion measures such as the mean squared error (MSE) and the peak signal to noise ratio (PSNR) have been found to correlate poorly with visual quality at lower bit-rates. The proposed quality metric (MOSp) predicts the perceptual quality of compressed video using sequence characteristics and the mean squared error between the original and compressed video sequences. The metric has been tested on various video sequences compressed using the H.264 video compression standard at different bit-rates. Results show that the proposed metric has better correlation with subjective quality than popular metrics such as PSNR, SSIM and PSNRplus. The new metric is simple to compute and hence suitable for incorporation into real-time applications such as standard video compression codecs in order to improve the visual quality of compressed video sequences.

42 citations


Journal ArticleDOI
TL;DR: A new coding/decoding scheme based on the properties and operations of rough fuzzy sets, called rough fuzzy vector quantization (RFVQ), relies on the representation capabilities of the vector to be quantized and not on the quantization algorithm, to determine optimal codevectors.

31 citations


Proceedings ArticleDOI
16 Mar 2009
TL;DR: Two algorithms for finding additional parameter settings over the previous algorithm are proposed and shown to improve the PSNR by up to 0.71 dB and 0.43 dB, respectively.
Abstract: The H.264 encoder has input parameters that determine the bit rate and distortion of the compressed video and the encoding complexity. A set of encoder parameters is referred to as a parameter setting. We previously proposed two offline algorithms for choosing H.264 encoder parameter settings that have distortion-complexity performance close to the parameter settings obtained from an exhaustive search, but take significantly fewer encodings. However, they generate only a few parameter settings. If there is no available parameter setting for a given encode time, the encoder will need to use a lower complexity parameter setting, resulting in a decrease in peak signal-to-noise ratio (PSNR). In this paper, we propose two algorithms for finding additional parameter settings over our previous algorithm and show that they improve the PSNR by up to 0.71 dB and 0.43 dB, respectively. We test both our algorithms on Linux and PocketPC platforms.

27 citations


Proceedings ArticleDOI
09 Dec 2009
TL;DR: A novel tri-layer random stego technique has been proposed for enhanced security against any type of brute force and experimental results show that the proposed multi stage process remarkably improves the security level.
Abstract: The war between good and evil on the Internet battleground grows ever tougher as more and more refined evolution takes place in the digital arms and ammunition. The intrusion of digital terrorists into the territory of an original digital source is always very hard to combat. In this sensitive battle, many patriotic weapons such as cryptography are defeated by the interlopers. As a consequence of this defeat, a black commando, namely steganography, has declared an effective encounter by hidden assault on the strangers. In this paper a novel tri-layer random stego technique is proposed for enhanced security against any type of brute force attack. The proposed method consists of three stages. The first stage embeds data in a binary image with a pixel statistics conservation method. In the second stage, the binary stego image is embedded into a gray image along a Moore space filling curve using the LSB technique. Finally, the gray stego image is infixed into a color image along a Hilbert space filling curve to produce the final color stego image. Experimental results show that the proposed multi-stage process remarkably improves the security level. The effectiveness of the proposed stego system has been estimated by computing the bit error rate (BER), mean square error (MSE), peak signal to noise ratio (PSNR) and mean structural similarity index (MSSIM). This paper also illustrates how security has been enhanced using this algorithm.

26 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a new approach to enhance the peak signal to noise ratio of highly corrupted images affected by impulse noise, using an adaptive median filter and the bacterial foraging optimization (BFO) technique.
Abstract: This paper proposes a new approach to enhance the peak signal to noise ratio of highly corrupted images affected by impulse noise. The proposed technique is implemented using an adaptive median filter and the bacterial foraging optimization (BFO) technique. The adaptive median filter is used to identify pixels affected by noise and replace them with the median value to keep the information uncorrupted. The BFO technique minimizes the error between the adaptive median filter output image and the noisy image to maintain an error percentage of 0.0001. It has been observed that the results of the proposed method are superior to conventional methods in terms of perceptual image quality as well as clarity and smoothness in edge regions of the resultant image. This technique can remove the salt-and-pepper noise of highly corrupted images with noise density as high as 90%.

26 citations
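The adaptive median filter used as the detection stage above has a standard textbook formulation: the window grows until its median is not itself an impulse, and only pixels judged to be impulses are replaced. A sketch of that standard algorithm (not the paper's exact code, and without the BFO refinement stage):

```python
import numpy as np

def adaptive_median(img, smax=7):
    """Adaptive median filter for salt-and-pepper noise on a 2-D image."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    pad = smax // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    for i in range(h):
        for j in range(w):
            for s in range(3, smax + 1, 2):      # grow the window: 3, 5, 7...
                r = s // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:           # median is not an impulse
                    if not (zmin < img[i, j] < zmax):
                        out[i, j] = zmed         # pixel is an impulse: replace
                    break                        # otherwise keep the pixel
            else:
                out[i, j] = zmed                 # window maxed out: use median
    return out
```

Uncorrupted pixels pass through unchanged, which is why this detector "keeps the information uncorrupted" as the abstract puts it.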


Proceedings ArticleDOI
01 Dec 2009
TL;DR: In this paper, the authors proposed an approach of removing random valued impulsive noise from images, which works in two phases, the first phase detects contaminated pixels and the second phase filters only those pixels keeping others intact, the detection scheme utilizes second order difference of pixels in a test window and the filtering scheme is a variation median filter based on the edge information.
Abstract: The proposed approach to removal of random valued impulsive noise from images works in two phases. The first phase detects contaminated pixels and the second phase filters only those pixels, keeping the others intact. The detection scheme utilizes the second order difference of pixels in a test window, and the filtering scheme is a variant of the median filter based on edge information. The proposed scheme is simulated extensively on standard images, and comparison with existing schemes reveals that our scheme outperforms them in terms of Peak Signal to Noise Ratio (PSNR) and the numbers of false detections and missed detections. The proposed scheme is also good at preserving finer details. Further, the computational complexity and the number of iterations needed by the proposed scheme are less than those of its existing counterparts.

Posted Content
TL;DR: This paper compares the robustness of three different watermarking schemes against brightness and rotation attacks and verification on the parameters of PSNR, RMSE and MAE proves the watermarked images to be robust against these attacks.
Abstract: Recent advances in the field of multimedia have provided many facilities for the transport, transmission and manipulation of data. Along with these facilities come greater threats to the authentication of data, its licensed use and its protection against illegal use. Many digital image watermarking techniques have been designed and implemented to stop the illegal use of digital multimedia images. This paper compares the robustness of three different watermarking schemes against brightness and rotation attacks. The robustness of the watermarked images has been verified on the parameters of PSNR (Peak Signal to Noise Ratio), RMSE (Root Mean Square Error) and MAE (Mean Absolute Error).

Journal ArticleDOI
TL;DR: A mixture-based estimator and a least squares approach for solving the spatio-temporal error concealment problem and the proposed method outperforms the GMM-based scheme in terms of computation-performance tradeoff.
Abstract: A Gaussian mixture model (GMM)-based spatio-temporal error concealment approach has recently been proposed for packet video. The method improves peak signal-to-noise ratio (PSNR) compared to several famous error concealment methods, and it is asymptotically optimal when the number of mixture components goes to infinity. There are also drawbacks, however. The estimator has high online computational complexity, which implies that fewer surrounding pixels to the lost area than desired are used for error concealment. Moreover, GMM parameters are estimated without considering maximization of the error concealment PSNR. In this paper, we propose a mixture-based estimator and a least squares approach for solving the spatio-temporal error concealment problem. Compared to the GMM scheme, the new method may base error concealment on more surrounding pixels to the loss, while maintaining low computational complexity, and model parameters are found by an algorithm that increases PSNR in each iteration. The proposed method outperforms the GMM-based scheme in terms of computation-performance tradeoff.

Proceedings ArticleDOI
13 May 2009
TL;DR: An evaluation platform combining NS-2 and real video coding has been designed in order to evaluate the performance of H.264/SVC over 802.16e networks in railway environments, finding that it outperforms other existing standards in terms of the considered metrics when used in railway communications.
Abstract: This work considers the transmission of multimedia streams over broadband networks using the latest video coding standard, H.264/SVC. Scalable Video Coding for heterogeneous media delivery allows temporal, spatial and SNR scalability in video flows, adapting and optimizing the quality of the received media according to the wireless channel conditions. In this paper an evaluation platform combining NS-2 and real video coding has been designed in order to evaluate the performance of H.264/SVC over 802.16e networks in railway environments. Extensive computer simulations have been carried out to compare the quality of service (QoS) of our scheme with that of other video coding techniques in terms of jitter, delay and Peak Signal to Noise Ratio (PSNR). The obtained simulation results show that H.264/SVC outperforms other existing standards (i.e. MPEG-4, H.263 and H.264) in terms of the considered metrics when used in railway communications.

Proceedings ArticleDOI
28 Jun 2009
TL;DR: The proposed spatially adaptive TV model has been applied to partially parallel MRI (PP-MRI) image reconstructed using GRAPPA and SENSE and verifies that the proposed model provides higher peak signal to noise ratio (PSNR) and results closer to ground truth.
Abstract: The widely adopted total variation (TV) filter is not optimal for MRI images with spatially varying noise levels, let alone those that also contain artifacts. To better preserve edges and fine structures while sufficiently removing noise and artifacts, we first use local mutual information together with k-means segmentation to automatically locate most of the reliable edges from the noisy input; the noise and artifact distribution in other regions is then studied using local variance; all of the obtained transparent information in turn guides fully automatic local adjustment of the TV filter. The proposed spatially adaptive TV model has been applied to partially parallel MRI (PP-MRI) images reconstructed using GRAPPA and SENSE. Comparison with Perona-Malik anisotropic diffusion and another adaptive TV method verifies that the proposed model provides a higher peak signal to noise ratio (PSNR) and results closer to the ground truth. Numerical results on many in vivo clinical data sets demonstrate the robustness and viability of the unsupervised method.

Journal ArticleDOI
TL;DR: A novel oblivious and robust multiple image watermarking scheme using Multiple Descriptions (MD) and Quantization Index Modulation (QIM) of the host image to achieve robustness to both local and global attacks.
Abstract: A novel oblivious and robust multiple image watermarking scheme using Multiple Descriptions (MD) and Quantization Index Modulation (QIM) of the host image is presented in this paper. Watermark embedding is done at two stages. In the first stage, Discrete Cosine Transform (DCT) of odd description of the host image is computed. The watermark image is embedded in the resulting DC coefficients. In the second stage, a copy of the watermark image is embedded in the watermarked image generated at the first stage. This enables us to achieve robustness to both local and global attacks. This algorithm is highly robust for different attacks on the watermarked image and superior in terms of Peak Signal to Noise Ratio (PSNR) and Normalized Cross correlation (NC).
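Quantization Index Modulation, the embedding primitive named above, is easy to illustrate: a coefficient is quantized with one of two interleaved quantizers selected by the message bit, and extraction picks the quantizer that reconstructs the value with less error. A scalar sketch (the step size `delta` and the dither offsets of ±delta/4 are illustrative choices):

```python
import numpy as np

def qim_embed(x, bit, delta=8.0):
    """Quantize x with the quantizer lattice selected by the message bit."""
    d = -delta / 4.0 if bit == 0 else delta / 4.0
    return delta * np.round((x - d) / delta) + d

def qim_extract(y, delta=8.0):
    """Decode the bit whose quantizer lies nearer to the received value."""
    e0 = np.abs(y - qim_embed(y, 0, delta))
    e1 = np.abs(y - qim_embed(y, 1, delta))
    return int(e1 < e0)
```

The two lattices are delta/2 apart, so extraction survives any perturbation smaller than delta/4, which is the source of the robustness to attacks claimed above; a larger delta trades PSNR of the watermarked image for robustness.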

Journal ArticleDOI
TL;DR: Three novel techniques are proposed to effectively speed up the ME process: a smart prediction technique for deciding an initial search center (ISC), a zero motion prejudgment technique to accurately decide whether the pre-estimated ISC can be considered a best-match motion vector (MV), thereby saving the computations required for the MV refinement process, and a variable padding pixels technique.
Abstract: Motion estimation (ME) plays an important role in modern video coders since it consumes approximately 60-80% of the entire encoder's computations. In this paper, three novel techniques are proposed to effectively speed up the ME process. First, a smart prediction technique for effectively deciding an initial search center (ISC) is proposed. Second, a zero motion prejudgment technique is proposed to accurately decide whether the pre-estimated ISC can be considered a best-match motion vector (MV) and consequently save the computations required for the MV refinement process. Finally, a variable padding pixels ME technique is proposed to adaptively determine the number of padding pixels required for the search window, for more computational cost savings. The three techniques are combined and applied to block-based ME for superior computational complexity savings in the ME process. The performance of the proposed techniques is tested in both pixel domain ME and frequency domain ME in terms of quantitative visual quality (peak signal-to-noise ratio, PSNR), computational complexity and bit rate. Experimental results demonstrate that the proposed fast ME technique is able to achieve approximately a 99.4% reduction in ME time compared to the conventional full search block-based ME (FSBB-ME), with negligible degradation in both the PSNR and the bit rate. Additionally, the experimental results prove the effectiveness of the proposed techniques when combined with any block-based ME technique, such as the fast extended diamond enhanced predictive zonal search. Experimental results also demonstrate at least an additional 72% savings in ME time using the conventional discrete cosine transform phase correlation ME (DCT-PC-ME) in the frequency domain compared to the conventional FSBB-ME technique in the pixel domain.
Compared to the conventional DCT-PC-ME, applying the proposed novel techniques to the DCT-PC-ME saves up to 89% in ME time.

Proceedings ArticleDOI
14 Mar 2009
TL;DR: A comparative study of JPEG and SPIHT compression algorithms is presented and it is shown that SPIHT based compression achieves better results as compared to JPEG for all compressions.
Abstract: In this paper, a comparative study of the JPEG and SPIHT compression algorithms is presented. A set of objective picture quality measures, namely Peak Signal to Noise Ratio (PSNR), Maximum Difference (MD), Least Mean Square Error (LMSE), Structural Similarity Index (SSIM) and Picture Quality Scale (PQS), is used to measure picture quality, and the comparison is based upon the results of these quality measures. Different kinds of standard test images are assessed at different compression ratios. SPIHT-based compression achieves better results than JPEG at all compression ratios.

07 Nov 2009
TL;DR: In this article, five different image filtering algorithms are compared based on their ability to reconstruct the images affected by noise and the purpose of these algorithms is to remove different type of noise that might occur during transmission of image or capturing an image.
Abstract: In this paper five different image filtering algorithms are compared based on their ability to reconstruct images affected by noise. The purpose of these algorithms is to remove the different types of noise that might occur while an image is transmitted or captured. The Spatial Median Filter is compared with current image smoothing techniques. Experimental results demonstrate that the Spatial Median Filter gives desirable results compared to the other filters. A modification to this algorithm, introduced by Church, J.C. and Yixin Chen, achieves more accurate reconstructions of underwater images, suppressing Gaussian noise better than other popular techniques; the results are compared using two evaluation parameters, Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE).

02 Feb 2009
TL;DR: This PSNR calculation method has the advantage of automatically determining the highest possible PSNR value for a given video sequence over the range of spatial and temporal shifts.
Abstract: Peak Signal to Noise Ratio (PSNR) has been used as a benchmark to evaluate new objective perceptual video quality metrics. For example, PSNR has been used as a benchmark for both the Multimedia (MM) and Reduced Reference Television (RRTV) test programs recently completed by the Video Quality Experts Group (VQEG). However, there is not currently an international Recommendation specifying exactly how to perform this critical measurement. Since the calculation of PSNR is highly dependent upon proper calculation of spatial alignment, temporal alignment, gain, and level offset between the processed video sequence and the original video sequence, one must also specify the method of performing these calibration procedures. The past two validation tests (MM and RRTV) performed by VQEG utilized the exhaustive search PSNR algorithm that is the subject of this contribution. Members of VQEG agreed to use this PSNR method as a benchmark for assessing the effectiveness of perceptual video quality metrics after extensive discussions. This PSNR calculation method has the advantage of automatically determining the highest possible PSNR value for a given video sequence over the range of spatial and temporal shifts. Only one temporal shift is allowed for all frames in the entire processed video sequence (i.e., constant delay).
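The exhaustive-search idea can be sketched as a maximization of PSNR over integer spatial shifts. The actual VQEG procedure also searches a single temporal shift for the whole sequence and corrects gain and level offset; those steps are omitted in this illustrative sketch, and the function name is ours:

```python
import numpy as np

def psnr_db(a, b, peak=255.0):
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def exhaustive_psnr(original, processed, max_shift=3):
    """Highest PSNR over all integer spatial shifts within +/-max_shift,
    computed on the central overlapping region of one frame."""
    orig = np.asarray(original, dtype=np.float64)
    proc = np.asarray(processed, dtype=np.float64)
    h, w = orig.shape
    m = max_shift
    best, best_shift = -float("inf"), (0, 0)
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            o = orig[m:h - m, m:w - m]
            p = proc[m + dy:h - m + dy, m + dx:w - m + dx]
            val = psnr_db(o, p)
            if val > best:
                best, best_shift = val, (dy, dx)
    return best, best_shift
```

Reporting the maximum over shifts is what makes the benchmark insensitive to the spatial misalignment that video systems commonly introduce.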

Proceedings ArticleDOI
28 Dec 2009
TL;DR: A fast Inter mode decision scheme for the Skip mode and the Inter sub modes is proposed based on the Rate- Distortion cost correlation between neighboring views, and the RD cost of different textural segmentation regions to reduce other modes' estimation.
Abstract: Multiview video coding (MVC) plays a critical role in reducing the ultra-high data bandwidth of 3-D video, and it has attracted great attention from industry and research institutes. However, with an increasing number of views, the complexity of MVC increases greatly, which hampers its practical application. In this paper, a fast Inter mode decision scheme for the Skip mode and the Inter sub modes is proposed. Based on the Rate-Distortion (RD) cost correlation between neighboring views and the RD cost of different textural segmentation regions, a pre-decision of the Skip mode is introduced to reduce the estimation of other modes. In addition, the estimated direction of the Inter sub modes is predicted based on the optimal direction of the Inter16×16 mode. Experimental results show an average 55% reduction of the total computation time, with a Peak Signal to Noise Ratio degradation of less than 0.01 dB, compared to the MVC reference software.

Book ChapterDOI
29 Aug 2009
TL;DR: A discrete technique for image magnification is presented, which produces the resulting image in one scan of the input image and does not require any threshold.
Abstract: A discrete technique for image magnification is presented, which produces the resulting image in one scan of the input image and does not require any threshold. The technique allows the user to magnify an image by any integer zooming factor. The performance of the algorithm is evaluated using the standard criterion based on the Peak Signal to Noise Ratio (PSNR). The obtained results are visually good, since artifacts do not significantly affect the magnified images.

Proceedings ArticleDOI
24 May 2009
TL;DR: A set of computationally efficient algorithms that can be applied to any block matching algorithm and is applied to the DS as a study case achieves higher complexity reduction than DS algorithm without further relative PSNR degradation compared to Full Search.
Abstract: In this paper, a Modified Diamond Search (MDS) algorithm is proposed for fast motion estimation, based on the well known Diamond Search (DS) algorithm. A set of computationally efficient algorithms, applicable to any block matching algorithm and applied here to DS as a study case, achieves higher complexity reduction than the DS algorithm without further relative PSNR (peak signal to noise ratio) degradation compared to Full Search (FS). First, a Dynamic Internal Stop Search (DISS) algorithm is used to reduce the internal redundant SAD (Sum of Absolute Differences) operations between the current and candidate blocks using an accurate dynamic threshold. Second, a Dynamic External Stop Search (DESS) greatly reduces unnecessary operations by skipping all the irrelevant blocks in the search area. In addition, early search termination and adaptive pattern selection techniques are applied to the proposed MDS as initialization steps to achieve even higher complexity reduction. The accuracy of the proposed model's threshold equations guarantees that the search does not fall into a local minimum. Experiments show that the proposed MDS algorithm reduces the computations by up to 99% and 20% compared with the conventional FS and DS algorithms respectively, with no significant degradation in either the PSNR or the bit-rate.
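The baseline Diamond Search that MDS builds on can be sketched as follows: a large diamond pattern is repeated until its centre wins, then one small diamond refines the result. This is a textbook formulation on a single block, not the authors' code:

```python
import numpy as np

LDSP = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]          # large diamond pattern
SDSP = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]    # small diamond pattern

def sad(block, ref, y, x, n):
    """Sum of absolute differences; infinite cost outside the frame."""
    if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
        return float("inf")
    return float(np.sum(np.abs(block - ref[y:y + n, x:x + n])))

def diamond_search(cur, ref, by, bx, n=8):
    """Motion vector (dy, dx) of the n-by-n block at (by, bx) in `cur`
    relative to the reference frame `ref`."""
    block = cur[by:by + n, bx:bx + n].astype(np.float64)
    cy, cx = by, bx
    while True:
        cost, dy, dx = min((sad(block, ref, cy + d0, cx + d1, n), d0, d1)
                           for d0, d1 in LDSP)
        if (dy, dx) == (0, 0):                       # centre won: stop coarse stage
            break
        cy, cx = cy + dy, cx + dx
    cost, dy, dx = min((sad(block, ref, cy + d0, cx + d1, n), d0, d1)
                       for d0, d1 in SDSP)
    return cy + dy - by, cx + dx - bx
```

The DISS/DESS ideas of the paper would slot into `sad` (early exit once a partial sum exceeds a dynamic threshold) and into the candidate loop (skipping irrelevant blocks), respectively.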

01 Jan 2009
TL;DR: Experimental results for the proposed scheme's imperceptibility, undetectability and robustness against large blurring attacks, measured by peak signal to noise ratio, normalized cross correlation and similarity function values, showed a significant improvement with respect to previous works.
Abstract: Digital watermarking has been considered as a solution to the problem of copy protection for multimedia objects, and many algorithms have been proposed. One of the problems in digital watermarking is that the three requirements of imperceptibility, capacity and robustness must all be satisfied, yet they almost always conflict with each other. In this paper we propose a new digital watermarking technique in the spatial domain capable of embedding a watermark in the original image that is totally indistinguishable to the human eye. In addition, applying a falling-off-boundary in the corner board of the cover image, with random pixel manipulation of the most significant bit-6 (MSB6), leads to undetectability and imperceptibility and increases robustness. Experimental results for the proposed scheme's imperceptibility, undetectability and robustness against large blurring attacks, measured by peak signal to noise ratio, normalized cross correlation and similarity function values, showed a significant improvement with respect to previous works.

Journal ArticleDOI
TL;DR: An efficient rate control algorithm based on the content-adaptive initial quantisation parameter (QP) setting scheme and the peak signal-to-noise ratio (PSNR) variation-limited bit-allocation strategy for low-complexity mobile applications is presented.
Abstract: An efficient rate control algorithm based on the content-adaptive initial quantisation parameter (QP) setting scheme and the peak signal-to-noise ratio (PSNR) variation-limited bit-allocation strategy for low-complexity mobile applications is presented. This algorithm can efficiently measure the residual complexity of intra-pictures without performing the computation-intensive intra-prediction and mode decision in H.264/AVC, based on the structural and statistical features of local textures. This can adaptively set proper initial QP values for versatile video contents. In addition, this bit-allocation strategy can effectively distribute bit-rate budgets based on the monotonic property to enhance overall coding efficiency while maintaining the consistency of visual quality by limiting the variation of quantisation distortion. The experimental results reveal that the proposed algorithm surpasses the conventional rate control approaches in terms of the average PSNR from 0.34 to 0.95 dB. Moreover, this algorithm provides more impressive visual quality and more robust buffer controllability when compared with other algorithms.

Proceedings ArticleDOI
15 May 2009
TL;DR: A suitable method for finding correlation between PSNR and Structural Similarity index objective image quality parameters with subjective MOS for SPIHT compressed medical images based on six independent observers is presented and can be potentially used for deciding upper compression thresholds for medical images.
Abstract: Correlating objective and subjective quality assessment parameters of compressed digital medical images has been an open and challenging problem in tele-radiology. Establishing this correlation is crucial in determining the upper limit of the image compression threshold for preserving diagnostically relevant information based on mean opinion score (MOS). This paper presents a suitable method for finding the correlation of the PSNR and Structural Similarity (SSIM) index objective image quality parameters with subjective MOS for SPIHT [4] compressed medical images, based on six independent observers. The suggested method can potentially be used for deciding upper compression thresholds for medical images. It is found that the correlation coefficients (CC) between PSNR and MOS for CT scan and MRI images are 0.979 and 0.960 respectively, whereas the corresponding values are 0.868 and 0.955 for SSIM. Further, MOS prediction models based on PSNR and SSIM have been proposed which closely match the subjective MOS.
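The correlation coefficient used in this kind of comparison is the standard Pearson CC, and an MOS prediction model is typically a fitted linear mapping from the objective metric. A minimal sketch of both follows; the score values in the usage example are illustrative, not the paper's data.

```python
import numpy as np

def pearson_cc(objective, mos):
    """Pearson correlation coefficient between an objective metric and MOS."""
    x = np.asarray(objective, dtype=np.float64)
    y = np.asarray(mos, dtype=np.float64)
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def fit_mos_model(objective, mos):
    """Least-squares linear MOS predictor: MOS_hat = a * metric + b."""
    a, b = np.polyfit(objective, mos, 1)
    return a, b
```

For example, PSNR values [30, 34, 38, 42] with MOS [2, 3, 4, 5] give CC = 1.0 and the predictor MOS_hat = 0.25 * PSNR - 5.5.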

Journal Article
TL;DR: An optimization technique using the genetic algorithms to search for optimal quantization steps to improve the quality of watermarked image and robustness of the watermark and analyze the performance of the proposed algorithm in terms of peak signal to noise ratio and normalized correlation.
Abstract: In this paper, we propose a digital image watermarking algorithm in the multiwavelet transform domain. The embedding technique is based on quantization index modulation and does not require the original image for watermark extraction. We have developed an optimization technique using genetic algorithms to search for the optimal quantization steps that improve the quality of the watermarked image and the robustness of the watermark. In addition, we analyze the performance of the proposed algorithm in terms of peak signal to noise ratio and normalized correlation. The experimental results show that our proposed method improves the quality of the watermarked image and makes the watermark more robust compared to previous works.
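Quantization index modulation, the embedding primitive this abstract builds on, can be sketched in its scalar form as follows. This is a toy version operating on a single transform coefficient, not the paper's multiwavelet scheme; the step size is the quantity the genetic algorithm would optimize, and the numbers below are illustrative.

```python
def qim_embed(coeff, bit, step):
    """Embed one bit by quantizing the coefficient onto one of two lattices:
    multiples of `step` for bit 0, multiples shifted by step/2 for bit 1."""
    offset = 0.5 * step * bit
    return round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step):
    """Blind extraction: pick the lattice whose nearest point is closer."""
    d0 = abs(coeff - round(coeff / step) * step)
    d1 = abs(coeff - (round((coeff - 0.5 * step) / step) * step + 0.5 * step))
    return 0 if d0 <= d1 else 1
```

Extraction needs only the step size, not the original image, which is what makes the scheme blind; a larger step tolerates perturbations up to step/4 at the cost of watermarked-image quality.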

Journal ArticleDOI
TL;DR: A novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak-signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations is developed.
Abstract: Power dissipation and robustness to process variation have conflicting design requirements. Scaling of voltage is associated with larger variations, while Vdd upscaling or transistor up-sizing for parametric-delay variation tolerance can be detrimental for power dissipation. However, for a class of signal-processing systems, an effective tradeoff can be achieved between Vdd scaling, variation tolerance, and "output quality." In this paper, we develop a novel low-power variation-tolerant algorithm/architecture for color interpolation that allows a graceful degradation in the peak-signal-to-noise ratio (PSNR) under aggressive voltage scaling as well as extreme process variations. This feature is achieved by exploiting the fact that not all computations used in interpolating the pixel values contribute equally to PSNR improvement. In the presence of Vdd scaling and process variations, the architecture ensures that only the "less important computations" are affected by delay failures. We also propose a different sliding-window size than the conventional one to improve interpolation performance by a factor of two with negligible overhead. Simulation results show that, even at a scaled voltage of 77% of the nominal value, our design provides reasonable image PSNR with 40% power savings.
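The core tradeoff, letting delay failures under voltage over-scaling corrupt only the less significant part of each computation, can be illustrated with a software model of a truncated interpolation average. This is a behavioral sketch of the general idea, not the paper's architecture; the failure model of zeroing the low-order bits is an assumption.

```python
import numpy as np

def average_truncated(a, b, failed_lsbs):
    """Model an interpolation average on a voltage-over-scaled adder whose
    low-order `failed_lsbs` carry stages are assumed to fail (bits zeroed)."""
    mask = ~((1 << failed_lsbs) - 1)
    return ((int(a) & mask) + (int(b) & mask)) >> 1

def interpolation_error(pixel_pairs, failed_lsbs):
    """Mean squared error of pairwise averages versus the exact computation."""
    exact = [(int(a) + int(b)) // 2 for a, b in pixel_pairs]
    approx = [average_truncated(a, b, failed_lsbs) for a, b in pixel_pairs]
    return float(np.mean([(e - x) ** 2 for e, x in zip(exact, approx)]))
```

The MSE (and hence PSNR) degrades gradually as more low-order stages fail, rather than collapsing on the first timing error, which is the graceful-degradation property the paper targets.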

Proceedings Article
04 Oct 2009
TL;DR: An efficient multilevel reversible data hiding algorithm for video sequences is presented and it is shown that the peak signal to noise ratio (PSNR) lower bound of the proposed algorithm outperforms one by applying the previous best reversible data hide algorithm the to each image frame in the video sequence directly.
Abstract: Reversible data hiding can guarantee that the original image can be recovered from the marked image without any distortion. In this paper, an efficient multilevel reversible data hiding algorithm for video sequences is presented. Since the gray-level distribution of the difference map is Laplacian, embedding at the peak point of the distribution leads to high data hiding capacity and good image quality. We also show that the peak signal to noise ratio (PSNR) lower bound of our proposed algorithm outperforms that obtained by applying the previous best reversible data hiding algorithm to each image frame in the video sequence directly. Based on four popular test video sequences, experimental results demonstrate the data hiding capacity and image quality advantages of our proposed data hiding algorithm.
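A single level of the peak-point idea, embedding at the most frequent gray level and shifting part of the histogram to free a neighboring bin, can be sketched per frame as follows. This is a minimal single-image, single-level illustration in the spirit of histogram-shifting schemes, not the authors' multilevel video algorithm; it assumes an empty bin exists above the peak and embeds one bit into every peak pixel, padding the message with zeros.

```python
import numpy as np

def hs_embed(img, bits):
    """Embed bits at the histogram peak; returns marked image and side info."""
    img = img.astype(np.int32).copy()
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))                        # most frequent level
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))  # first empty bin above
    capacity = int(hist[peak])
    bits = list(bits) + [0] * (capacity - len(bits))   # pad to full capacity
    flat = img.ravel()
    flat[(flat > peak) & (flat < zero)] += 1           # free the bin at peak+1
    idx = np.flatnonzero(flat == peak)                 # embed: peak -> peak+bit
    flat[idx] += np.asarray(bits, dtype=np.int32)
    return flat.reshape(img.shape), peak, zero

def hs_extract(marked, peak, zero):
    """Recover the bits and restore the original image exactly."""
    flat = marked.astype(np.int32).copy().ravel()
    idx = np.flatnonzero((flat == peak) | (flat == peak + 1))
    bits = (flat[idx] - peak).tolist()
    flat[idx] = peak                                   # undo the embedding
    flat[(flat > peak + 1) & (flat <= zero)] -= 1      # undo the shift
    return flat.reshape(marked.shape), bits
```

Because every pixel changes by at most 1, the marked frame's PSNR is bounded below by 10*log10(255^2/1), about 48.13 dB, which is the kind of per-frame lower bound the paper analyzes.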

Proceedings ArticleDOI
29 Jul 2009
TL;DR: It is shown from this subjective experiment that PSNR_{r,f=90%} correlates much better with the delivered perceptual video quality than the average PSNR across all frames of a video, and is a good representation of the perceptual quality of a video transmitted over networks with possible transmission errors.
Abstract: In this paper, we propose a new statistical objective perceptual video quality measure, PSNR_{r,f}-MOS_r. PSNR_{r,f} is defined as the PSNR achieved by f% of the frames in each one of the r% of the transmissions over a network. This quantity has the potential to capture the performance loss due to damaged frames in a particular video sequence (f%), as well as to indicate the probability of a user experiencing a specified quality over the channel (r%). The percentage of transmissions can also be interpreted as the percentage, out of many video users accessing the same channel, who would experience a given video quality. A subjective experiment is conducted to establish a linear equation connecting PSNR_{r,f=90%} and MOS_r, the mean opinion score (MOS) achieved by r% of the transmissions. It is shown from this subjective experiment that PSNR_{r,f=90%} correlates much better with the delivered perceptual video quality than the average PSNR across all frames of a video, and is a good representation of the perceptual quality of a video transmitted over networks with possible transmission errors.
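Given per-frame PSNR values, the "PSNR achieved by f% of the frames" is simply the (100 - f)-th percentile, i.e. the level that f% of frames meet or exceed, and the same percentile logic applies across transmissions for r%. A sketch under that reading (the frame values in the test are illustrative):

```python
import numpy as np

def psnr_f(frame_psnrs, f=90.0):
    """PSNR achieved by f% of the frames in one transmitted sequence."""
    return float(np.percentile(np.asarray(frame_psnrs, dtype=np.float64),
                               100.0 - f))

def psnr_rf(per_transmission_psnrs, r=90.0, f=90.0):
    """PSNR_{r,f}: the PSNR_f level achieved by r% of the transmissions."""
    levels = [psnr_f(p, f) for p in per_transmission_psnrs]
    return float(np.percentile(np.asarray(levels, dtype=np.float64),
                               100.0 - r))
```

Unlike the per-sequence average, this statistic is pulled down by a few badly damaged frames, which is why it tracks the perceived quality of error-prone transmissions more closely.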