
Showing papers on "Quantization (image processing)" published in 2011


Proceedings ArticleDOI
25 Jul 2011
TL;DR: The use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization.
Abstract: We present a new approach for performing high-quality edge-preserving filtering of images and videos in real time. Our solution is based on a transform that defines an isometry between curves on the 2D image manifold in 5D and the real line. This transform preserves the geodesic distance between points on these curves, adaptively warping the input signal so that 1D edge-preserving filtering can be efficiently performed in linear time. We demonstrate three realizations of 1D edge-preserving filters, show how to produce high-quality 2D edge-preserving filters by iterating 1D-filtering operations, and empirically analyze the convergence of this process. Our approach has several desirable features: the use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. We demonstrate the versatility of our domain transform and edge-preserving filters on several real-time image and video processing tasks including edge-preserving filtering, depth-of-field effects, stylization, recoloring, colorization, detail enhancement, and tone mapping.
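
To make the 1D idea above concrete, here is a minimal sketch of the domain transform plus a two-pass recursive filter for a single 1D signal; the function name, the sigma values, and the single-iteration setup are illustrative choices, not the authors' full 2D pipeline (which alternates row and column passes over several iterations).

```python
import numpy as np

def domain_transform_rf_1d(signal, sigma_s=60.0, sigma_r=0.4):
    """Minimal 1D sketch of domain-transform recursive filtering.
    `signal` is a 1D float array with values roughly in [0, 1]."""
    # Domain transform: warp the x-axis so distances reflect both spatial
    # proximity and intensity (range) differences -> edges become "far apart".
    dI = np.abs(np.diff(signal, prepend=signal[0]))
    dct = 1.0 + (sigma_s / sigma_r) * dI          # ct'(x), distance between neighbours
    # Recursive 1D edge-preserving filter over the warped domain.
    a = np.exp(-np.sqrt(2.0) / sigma_s)
    weights = a ** dct                            # feedback a^d, d = warped distance
    out = signal.astype(float).copy()
    for i in range(1, len(out)):                  # left-to-right pass
        out[i] += weights[i] * (out[i - 1] - out[i])
    for i in range(len(out) - 2, -1, -1):         # right-to-left pass
        out[i] += weights[i + 1] * (out[i + 1] - out[i])
    return out

# Example: smooth a noisy step while keeping the edge sharp.
x = np.concatenate([np.zeros(100), np.ones(100)]) + 0.05 * np.random.randn(200)
y = domain_transform_rf_1d(x)
```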

738 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: This paper derives a direct matching framework based on visual vocabulary quantization and a prioritized correspondence search that efficiently handles large datasets and outperforms current state-of-the-art methods.
Abstract: Recently developed Structure from Motion (SfM) reconstruction approaches enable the creation of large scale 3D models of urban scenes. These compact scene representations can then be used for accurate image-based localization, creating the need for localization approaches that are able to efficiently handle such large amounts of data. An important bottleneck is the computation of 2D-to-3D correspondences required for pose estimation. Current state-of-the-art approaches use indirect matching techniques to accelerate this search. In this paper we demonstrate that direct 2D-to-3D matching methods have a considerable potential for improving registration performance. We derive a direct matching framework based on visual vocabulary quantization and a prioritized correspondence search. Through extensive experiments, we show that our framework efficiently handles large datasets and outperforms current state-of-the-art methods.

522 citations


Journal ArticleDOI
TL;DR: These anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality and can be used to render several forms of image tampering such as double JPEG compression, cut-and-paste image forgery, and image origin falsification undetectable through compression-history-based forensic means.
Abstract: As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed to verify the authenticity of digital images. Amongst the most successful of these are techniques that make use of an image's compression history and its associated compression fingerprints. Little consideration has been given, however, to anti-forensic techniques capable of fooling forensic algorithms. In this paper, we present a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. We do this by first developing a generalized framework for the design of anti-forensic techniques to remove compression fingerprints from an image's transform coefficients. This framework operates by estimating the distribution of an image's transform coefficients before compression, then adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches the estimated one. We then use this framework to develop anti-forensic techniques specifically targeted at erasing compression fingerprints left by both JPEG and wavelet-based coders. Additionally, we propose a technique to remove statistical traces of the blocking artifacts left by image compression algorithms that divide an image into segments during processing. Through a series of experiments, we demonstrate that our anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality. Furthermore, we show how these techniques can be used to render several forms of image tampering such as double JPEG compression, cut-and-paste image forgery, and image origin falsification undetectable through compression-history-based forensic means.
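
As a rough illustration of the dithering step described above, the sketch below perturbs the dequantized coefficients of one AC DCT subband so that, within each quantization bin, they follow an assumed Laplacian model with rate parameter lam; estimating lam from the compressed histogram and removing blocking artifacts, both part of the paper's framework, are omitted here.

```python
import numpy as np

def add_antiforensic_dither(qcoeffs, qstep, lam):
    """Simplified sketch for a single AC DCT subband: add dither to the
    dequantized coefficients so each quantization bin is re-filled according
    to an assumed Laplacian (rate `lam`) pre-compression model."""
    deq = qcoeffs.astype(float) * qstep           # dequantized coefficient values
    half = qstep / 2.0
    u = np.random.rand(*deq.shape)
    dither = np.zeros_like(deq)

    # Zero bin: symmetric truncated Laplacian on (-qstep/2, qstep/2).
    zero = qcoeffs == 0
    c0 = 1.0 - np.exp(-lam * half)
    mag = -np.log(1.0 - u[zero] * c0) / lam
    sign = np.where(np.random.rand(int(zero.sum())) < 0.5, -1.0, 1.0)
    dither[zero] = sign * mag

    # Nonzero bins: truncated exponential offset, denser toward zero magnitude.
    nz = ~zero
    c1 = 1.0 - np.exp(-lam * qstep)
    s = -np.log(1.0 - u[nz] * c1) / lam           # offset in [0, qstep)
    dither[nz] = np.sign(deq[nz]) * (s - half)
    return deq + dither
```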

214 citations


Journal ArticleDOI
TL;DR: A camera signature is extracted from a JPEG image consisting of information about quantization tables, Huffman codes, thumbnails, and exchangeable image file format (EXIF) and it is shown that this signature is highly distinct across 1.3 million images spanning 773 different cameras and cell phones.
Abstract: It is often desirable to determine if an image has been modified in any way from its original recording. The JPEG format affords engineers many implementation trade-offs which give rise to widely varying JPEG headers. We exploit these variations for image authentication. A camera signature is extracted from a JPEG image consisting of information about quantization tables, Huffman codes, thumbnails, and exchangeable image file format (EXIF). We show that this signature is highly distinct across 1.3 million images spanning 773 different cameras and cell phones. Specifically, 62% of images have a signature that is unique to a single camera, 80% of images have a signature that is shared by three or fewer cameras, and 99% of images have a signature that is unique to a single manufacturer. The signature of Adobe Photoshop is also shown to be unique relative to all 773 cameras. These signatures are simple to extract and offer an efficient method to establish the authenticity of a digital image.
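
A coarse version of such a header signature can be assembled from the fields Pillow happens to expose; the sketch below hashes the quantization tables and the set of EXIF tag IDs, whereas the paper's signature also covers Huffman tables and the embedded thumbnail, which would require walking the JPEG marker segments directly. The function name is illustrative.

```python
import hashlib
from PIL import Image  # Pillow

def jpeg_header_signature(path):
    """Coarse camera signature from the JPEG header parts Pillow exposes."""
    im = Image.open(path)
    parts = []
    # Quantization tables: {table_id: 64 step sizes}.
    for tid, table in sorted(im.quantization.items()):
        parts.append(f"DQT{tid}:" + ",".join(map(str, table)))
    # EXIF layout: which tag IDs are present already discriminates firmware.
    exif = im.getexif()
    parts.append("EXIF:" + ",".join(str(t) for t in sorted(exif.keys())))
    return hashlib.sha1("|".join(parts).encode()).hexdigest()

# Two images from the same camera/firmware should usually share a signature:
# jpeg_header_signature("a.jpg") == jpeg_header_signature("b.jpg")
```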

198 citations


Journal ArticleDOI
TL;DR: It is found that the temporal correction factor follows closely an inverted falling exponential function, whereas the quantization effect on the coded frames can be captured accurately by a sigmoid function of the peak signal-to-noise ratio.
Abstract: In this paper, we explore the impact of frame rate and quantization on perceptual quality of a video. We propose to use the product of a spatial quality factor that assesses the quality of decoded frames without considering the frame rate effect and a temporal correction factor, which reduces the quality assigned by the first factor according to the actual frame rate. We find that the temporal correction factor follows closely an inverted falling exponential function, whereas the quantization effect on the coded frames can be captured accurately by a sigmoid function of the peak signal-to-noise ratio. The proposed model is analytically simple, with each function requiring only a single content-dependent parameter. The proposed overall metric has been validated using both our subjective test scores as well as those reported by others. For all seven data sets examined, our model yields high Pearson correlation (higher than 0.9) with measured mean opinion score (MOS). We further investigate how to predict parameters of our proposed model using content features derived from the original videos. Using predicted parameters from content features, our model still fits with measured MOS with high correlation.
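
The functional form of the model can be written down in a few lines; in the sketch below the parameter names and numeric values (alpha, beta, psnr_mid) are placeholders for the single content-dependent parameter of each function, not values reported in the paper.

```python
import numpy as np

def temporal_correction(frame_rate, max_frame_rate=30.0, alpha=7.0):
    """Inverted falling exponential: 1 at the full frame rate, decaying as
    the frame rate drops (alpha is content dependent)."""
    return (1.0 - np.exp(-alpha * frame_rate / max_frame_rate)) / (1.0 - np.exp(-alpha))

def spatial_quality(psnr, psnr_mid=32.0, beta=0.3):
    """Sigmoid of PSNR capturing the quantization effect on decoded frames."""
    return 1.0 / (1.0 + np.exp(-beta * (psnr - psnr_mid)))

def predicted_quality(psnr, frame_rate):
    # Overall metric = spatial quality factor x temporal correction factor.
    return spatial_quality(psnr) * temporal_correction(frame_rate)

print(predicted_quality(psnr=36.0, frame_rate=15.0))
```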

174 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: A statistical test to discriminate between original and forged regions in JPEG images, under the hypothesis that the former are doubly compressed while the latter are singly compressed, demonstrates a better discriminating behavior with respect to previously proposed methods.
Abstract: In this paper, we propose a statistical test to discriminate between original and forged regions in JPEG images, under the hypothesis that the former are doubly compressed while the latter are singly compressed. New probability models for the DCT coefficients of singly and doubly compressed regions are proposed, together with a reliable method for estimating the primary quantization factor in the case of double compression. Based on such models, the probability for each DCT block to be forged is derived. Experimental results demonstrate a better discriminating behavior with respect to previously proposed methods.
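
The double-compression hypothesis can be illustrated with a toy experiment: quantizing Laplacian-distributed "DCT coefficients" with step q1 and then re-quantizing with q2 leaves periodic gaps and peaks in the histogram that a single compression with q2 does not produce. The sketch below only shows this effect; it is not the paper's probability model or forgery-probability computation.

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=8.0, size=200_000)      # model AC DCT coefficients

q1, q2 = 5, 3
single = np.round(coeffs / q2)                     # singly compressed (step q2)
double = np.round(np.round(coeffs / q1) * q1 / q2) # step q1, then re-compressed with q2

h_single = np.bincount(np.abs(single).astype(int))[:15]
h_double = np.bincount(np.abs(double).astype(int))[:15]
print("single:", h_single)
print("double:", h_double)   # note the characteristic empty / over-populated bins
```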

172 citations


Journal ArticleDOI
TL;DR: This paper designs a robust detection approach which is able to detect either block-aligned or misaligned recompression in JPEG images, and shows it outperforms existing methods.
Abstract: Due to the popularity of JPEG as an image compression standard, the ability to detect tampering in JPEG images has become increasingly important. Tampering of compressed images often involves recompression and tends to erase traces of tampering found in uncompressed images. In this paper, we present a new technique to discover traces caused by recompression. We assume all source images are in JPEG format and propose to formulate the periodic characteristics of JPEG images both in spatial and transform domains. Using theoretical analysis, we design a robust detection approach which is able to detect either block-aligned or misaligned recompression. Experimental results demonstrate the validity and effectiveness of the proposed approach, and also show it outperforms existing methods.

138 citations


Journal ArticleDOI
TL;DR: A new spectral multiple-image fusion analysis based on the discrete cosine transform (DCT) and a specific spectral filtering method is reported; its compression procedure, based on an adapted spectral quantization, provides a viable solution for simultaneous compression and encryption of multiple images.
Abstract: We report a new spectral multiple image fusion analysis based on the discrete cosine transform (DCT) and a specific spectral filtering method. In order to decrease the size of the multiplexed file, we suggest a compression procedure based on an adapted spectral quantization. Each frequency is encoded with an optimized number of bits according to its importance and its position in the DCT domain. This fusion and compression scheme constitutes a first level of encryption. A supplementary level of encryption is realized by making use of biometric information. We consider several implementations of this analysis by experimenting with sequences of gray scale images. To quantify the performance of our method we calculate the MSE (mean squared error) and the PSNR (peak signal to noise ratio). Our results consistently improve performance compared with the well-known JPEG image compression standard and provide a viable solution for simultaneous compression and encryption of multiple images.

104 citations


Journal ArticleDOI
TL;DR: A set of domain-specific lossless compression schemes that achieve over 40× compression of fragments, outperforming bzip2 by over 6×, is introduced, and the study of using "lossy" quality values is initiated.
Abstract: With the advent of next generation sequencing technologies, the cost of sequencing whole genomes is poised to go below $1000 per human individual in a few years. As more and more genomes are sequenced, analysis methods are undergoing rapid development, making it tempting to store sequencing data for long periods of time so that the data can be re-analyzed with the latest techniques. The challenging open research problems, huge influx of data, and rapidly improving analysis techniques have created the need to store and transfer very large volumes of data. Compression can be achieved at many levels, including trace level (compressing image data), sequence level (compressing a genomic sequence), and fragment-level (compressing a set of short, redundant fragment reads, along with quality-values on the base-calls). We focus on fragment-level compression, which is the pressing need today. Our article makes two contributions, implemented in a tool, SlimGene. First, we introduce a set of domain specific loss-less compression schemes that achieve over 40× compression of fragments, outperforming bzip2 by over 6×. Including quality values, we show a 5× compression using less running time than bzip2. Second, given the discrepancy between the compression factor obtained with and without quality values, we initiate the study of using “lossy” quality values. Specifically, we show that a lossy quality value quantization results in 14× compression but has minimal impact on downstream applications like SNP calling that use the quality values. Discrepancies between SNP calls made between the lossy and loss-less versions of the data are limited to low coverage areas where even the SNP calls made by the loss-less version are marginal.
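
A hedged sketch of what lossy quality-value quantization looks like in practice is given below; the bin count and ranges are illustrative and not SlimGene's actual binning, but the effect is the same: a smaller symbol alphabet that the downstream entropy coder can compress far more aggressively.

```python
import numpy as np

def quantize_quality_values(quals, n_bins=8, q_min=2, q_max=41):
    """Map each Phred quality score to the midpoint of one of `n_bins` uniform
    bins, shrinking the symbol alphabet before entropy coding."""
    quals = np.asarray(quals)
    edges = np.linspace(q_min, q_max, n_bins + 1)
    idx = np.clip(np.digitize(quals, edges) - 1, 0, n_bins - 1)
    midpoints = ((edges[:-1] + edges[1:]) / 2).round().astype(int)
    return midpoints[idx]

print(quantize_quality_values([2, 10, 20, 30, 40]))   # e.g. [4, 9, 19, 29, 39]
```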

101 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed semi-fragile watermarking scheme outperforms four peer schemes and is capable of identifying intentional tampering and incidental modification, and localizing tampered regions.

93 citations


Patent
Gary A. Demos1
03 Aug 2011
TL;DR: In this article, the authors present methods, systems, and computer programs for improving compressed image chroma information by utilizing a lower or higher value of a quantization parameter for one or more chroma channels as compared to the luminance channel.
Abstract: Methods, systems, and computer programs for improving compressed image chroma information. In one aspect of the invention, a resolution for a red color component of a color video image is used that is higher than the resolution for a blue color component of the color video image. Another aspect includes utilizing a lower or higher value of a quantization parameter (QP) for one or more chroma channels as compared to the luminance channel. Another aspect is use of a logarithmic representation of a video image to benefit image coding. Another aspect uses more than two chroma channels to represent a video image.

Journal ArticleDOI
TL;DR: The efficiency of the proposed scheme is demonstrated by results, especially when compared with the recently published method based on block truncation coding using the pattern fitting principle.
Abstract: This paper considers the design of a lossy image compression algorithm dedicated to color still images. After a preprocessing step (mean removal and RGB to YCbCr transformation), the DCT transform is applied and followed by an iterative phase (using the bisection method) including thresholding, quantization, dequantization, the inverse DCT, the YCbCr to RGB transform, and mean recovery. This guarantees that a desired quality (fixed in advance using the well-known PSNR metric) is met. To obtain the best possible compression ratio (CR), the next step is the application of a proposed adaptive scanning providing, for each (n, n) DCT block, a corresponding (n×n) vector containing the maximum possible run of zeros at its end. The last step is the application of a modified systematic lossless encoder. The efficiency of the proposed scheme is demonstrated by results, especially when compared with the method presented in the recently published paper based on block truncation coding using the pattern fitting principle.
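
The bisection idea can be sketched as follows, assuming for simplicity a single global quantization step and SciPy's DCT routines; the paper's actual iteration also involves thresholding and operates per color channel after the YCbCr transform.

```python
import numpy as np
from scipy.fft import dctn, idctn

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def quantize_to_target_psnr(image, target_psnr, iters=30):
    """Bisection search for the largest uniform quantization step whose
    quantize -> dequantize -> inverse-DCT reconstruction still meets `target_psnr`."""
    coeffs = dctn(image.astype(float), norm='ortho')
    lo, hi = 0.1, 200.0
    for _ in range(iters):
        q = (lo + hi) / 2.0
        rec = idctn(np.round(coeffs / q) * q, norm='ortho')
        if psnr(image, rec) >= target_psnr:
            lo = q          # quality met: try a coarser step (better compression)
        else:
            hi = q          # quality missed: refine the step
    return lo

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(quantize_to_target_psnr(img, target_psnr=35.0))
```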

Proceedings ArticleDOI
20 Jun 2011
TL;DR: Since the method accurately preserves the finest details while enhancing the chromatic contrast, the utility and versatility of the operator have been proved for several other challenging applications such as video decolorization, detail enhancement, single image dehazing and segmentation under different illuminants.
Abstract: This paper introduces an effective decolorization algorithm that preserves the appearance of the original color image. Guided by the original saliency, the method blends the luminance and the chrominance information in order to conserve the initial color disparity while enhancing the chromatic contrast. As a result, our straightforward fusing strategy generates a new spatial distribution that discriminates better the illuminated areas and color features. Since we do not employ quantization or a per-pixel optimization (computationally expensive), the algorithm has a linear runtime, and depending on the image resolution it could be used in real-time applications. Extensive experiments and a comprehensive evaluation against existing state-of-the-art methods demonstrate the potential of our grayscale operator. Furthermore, since the method accurately preserves the finest details while enhancing the chromatic contrast, the utility and versatility of our operator have been proved for several other challenging applications such as video decolorization, detail enhancement, single image dehazing and segmentation under different illuminants.

Journal ArticleDOI
TL;DR: An adaptive quantization scheme based on the fast boundary adaptation rule (FBAR) and a differential pulse code modulation (DPCM) procedure, followed by an online, least-storage quadrant tree decomposition (QTD) processing stage, is proposed, enabling a robust and compact image compression processor.
Abstract: This paper presents the architecture, algorithm, and VLSI hardware of image acquisition, storage, and compression on a single-chip CMOS image sensor. The image array is based on time domain digital pixel sensor technology equipped with nondestructive storage capability using an 8-bit Static-RAM device embedded at the pixel level. The pixel-level memory is used to store the uncompressed illumination data during the integration mode as well as the compressed illumination data obtained after the compression stage. An adaptive quantization scheme based on the fast boundary adaptation rule (FBAR) and a differential pulse code modulation (DPCM) procedure, followed by an online, least-storage quadrant tree decomposition (QTD) processing stage, is proposed, enabling a robust and compact image compression processor. A prototype chip including 64×64 pixels, read-out and control circuitry as well as an on-chip compression processor was implemented in 0.35 μm CMOS technology with a silicon area of 3.2×3.0 mm2 and an overall power of 17 mW. Simulation and measurement results show compression figures corresponding to 0.6-1 bit-per-pixel (BPP), while maintaining reasonable peak signal-to-noise ratio levels.

Proceedings ArticleDOI
19 Dec 2011
TL;DR: Wavelet compression in JPEG 2000 is revisited by using a standards-based method to reduce large-scale data sizes for production scientific computing and to quantify compression effects, measuring bit rate versus maximum error as a quality metric to provide precision guarantees for scientific analysis on remotely compressed POP data.
Abstract: We revisit wavelet compression by using a standards-based method to reduce large-scale data sizes for production scientific computing. Many of the bottlenecks in visualization and analysis come from limited bandwidth in data movement, from storage to networks. The majority of the processing time for visualization and analysis is spent reading or writing large-scale data or moving data from a remote site in a distance scenario. Using wavelet compression in JPEG 2000, we provide a mechanism to vary data transfer time versus data quality, so that a domain expert can improve data transfer time while quantifying compression effects on their data. By using a standards-based method, we are able to provide scientists with the state-of-the-art wavelet compression from the signal processing and data compression community, suitable for use in a production computing environment. To quantify compression effects, we focus on measuring bit rate versus maximum error as a quality metric to provide precision guarantees for scientific analysis on remotely compressed POP (Parallel Ocean Program) data.

Proceedings Article
16 Jun 2011
TL;DR: An improved algorithm based on the Discrete Wavelet Transform and Discrete Cosine Transform Quantization Coefficients Decomposition (DCT-QCD) is presented to detect cloning forgery, and experimental results show that the proposed scheme accurately detects such specific image manipulations.
Abstract: Due to rapid advances in and the wide availability of powerful image processing software, digital images are easy for ordinary people to manipulate and modify. This makes it more and more difficult for a viewer to check the authenticity of a given digital image. Copy-move forgery is a specific type of image tampering where a part of the image is copied and pasted onto another part, generally to conceal unwanted portions of the image. Hence, the goal in detection of copy-move forgeries is to detect image areas that are the same or extremely similar. In this paper we present an improved algorithm based on the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform Quantization Coefficients Decomposition (DCT-QCD) to detect such cloning forgery. Furthermore, for academic purposes, we demonstrate via a simplified toy image how the algorithm detects cloning forgery. Experimental results show that the proposed scheme accurately detects such specific image manipulations as long as the copied region is not rotated or scaled and is pasted sufficiently far from the original region.
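
For orientation, a much-simplified block-matching sketch of copy-move detection is shown below; coarse quantization of low-frequency block DCTs stands in for the paper's DWT + DCT-QCD features, and the final shift-vector vote that a real detector needs is only noted in a comment.

```python
import numpy as np
from scipy.fft import dctn

def detect_copy_move(gray, block=8, quant=16, min_offset=16):
    """Simplified block matching: overlapping blocks whose quantized low-frequency
    DCT signatures collide, and whose positions are far enough apart, are reported
    as candidate clone pairs."""
    h, w = gray.shape
    sigs, matches = {}, []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            c = dctn(gray[y:y + block, x:x + block].astype(float), norm='ortho')
            key = tuple(np.round(c[:4, :4].ravel() / quant).astype(int))  # coarse signature
            for (y2, x2) in sigs.get(key, []):
                if abs(y - y2) + abs(x - x2) >= min_offset:   # skip adjacent blocks
                    matches.append(((y2, x2), (y, x)))
            sigs.setdefault(key, []).append((y, x))
    # A real detector would additionally keep only offset vectors (y - y2, x - x2)
    # shared by many pairs, which suppresses accidental matches in flat regions.
    return matches
```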

Proceedings ArticleDOI
01 Dec 2011
TL;DR: An offline ECG compression technique based on encoding of successive sample differences is proposed, which is presently being implemented in a wireless telecardiology system using a standalone embedded system.
Abstract: An offline ECG compression technique, based on encoding of successive sample differences, is proposed. The encoded elements are generated through four stages: downsampling of raw samples and normalization of successive sample differences; data grouping; magnitude and sign encoding; and finally zero-element compression. Initially, the compression algorithm is validated with short-duration raw ECG samples from the PTB database under Physionet. MATLAB simulation results using ptb-db data with 8-bit quantization yield a compression ratio (CR) of 9.02 and a percentage root mean square difference (PRD) of 2.51. With mit-db data, these figures are 4.68 and 0.739, respectively. The algorithm is presently being implemented in a wireless telecardiology system using a standalone embedded system.
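
A rough sketch of the difference-encoding pipeline and the PRD measure quoted above is given below; the stage boundaries, bit budget, and run-length token format are simplifications rather than the authors' exact encoder.

```python
import numpy as np

def compress_ecg(samples, down=2, qbits=8):
    """Downsample, take successive differences, quantize to `qbits` bits,
    and run-length encode the zero runs."""
    x = np.asarray(samples, dtype=float)[::down]            # downsampling
    d = np.diff(x, prepend=x[0])                            # successive sample differences
    step = (np.abs(d).max() or 1.0) / (2 ** (qbits - 1) - 1)
    q = np.round(d / step).astype(int)                      # sign + magnitude quantization
    encoded, run = [], 0
    for v in q:                                             # zero-element compression
        if v == 0:
            run += 1
        else:
            if run:
                encoded.append(('Z', run)); run = 0
            encoded.append(('V', v))
    if run:
        encoded.append(('Z', run))
    return encoded, step

def prd(original, reconstructed):
    """Percentage root-mean-square difference, the distortion measure quoted above."""
    o, r = np.asarray(original, float), np.asarray(reconstructed, float)
    return 100.0 * np.sqrt(np.sum((o - r) ** 2) / np.sum(o ** 2))
```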

Journal Article
TL;DR: Two image compression techniques, one based on the Discrete Cosine Transform and one on the Discrete Wavelet Transform, are simulated, and their results are compared across several quality parameters on various images.
Abstract: Image compression is a method through which we can reduce the storage space of images and videos, which helps to improve the performance of storage and transmission. In image compression, we do not only concentrate on reducing size but also on doing so without losing the quality and information of the image. In this paper, two image compression techniques are simulated. The first technique is based on the Discrete Cosine Transform (DCT) and the second one is based on the Discrete Wavelet Transform (DWT). The simulation results are shown, and different quality parameters are compared by applying the two techniques to various images. Keywords: DCT, DWT, Image compression, Image processing

Patent
31 Aug 2011
TL;DR: In this article, a back-end pixel processing unit 120 is described that receives pixel data after it has been processed by at least one of a front-end pixel processing unit 80 and a pixel processing pipeline 82.
Abstract: Disclosed embodiments provide an image signal processing system 32 that includes a back-end pixel processing unit 120 that receives pixel data after being processed by at least one of a front-end pixel processing unit 80 and a pixel processing pipeline 82. In certain embodiments, the back-end processing unit 120 receives luma/chroma image data and may be configured to apply face detection operations, local tone mapping, brightness, contrast, and color adjustments, as well as scaling. Further, the back-end processing unit 120 may also include a back-end statistics unit 2208 that may collect frequency statistics. The frequency statistics may be provided to an encoder 118 and may be used to determine quantization parameters that are to be applied to an image frame.

Journal ArticleDOI
TL;DR: A new approach based on feature mining on the discrete cosine transform (DCT) domain and machine learning for steganalysis of JPEG images is proposed and prominently outperforms the well-known Markov-process based approach.
Abstract: The threat posed by hackers, spies, terrorists, and criminals, etc. using steganography for stealthy communications and other illegal purposes is a serious concern of cyber security. Several steganographic systems that have been developed and made readily available utilize JPEG images as carriers. Due to the popularity of JPEG images on the Internet, effective steganalysis techniques are called for to counter the threat of JPEG steganography. In this article, we propose a new approach based on feature mining on the discrete cosine transform (DCT) domain and machine learning for steganalysis of JPEG images. First, neighboring joint density features on both intra-block and inter-block are extracted from the DCT coefficient array and the absolute array, respectively; then a support vector machine (SVM) is applied to the features for detection. An evolving neural-fuzzy inference system is employed to predict the hiding amount in JPEG steganograms. We also adopt a feature selection method of support vector machine recursive feature elimination to reduce the number of features. Experimental results show that, in detecting several JPEG-based steganographic systems, our method prominently outperforms the well-known Markov-process based approach.
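
To illustrate one ingredient of the feature set, the sketch below computes an intra-block neighboring joint density from an array of 8x8 DCT coefficient blocks and shows how such vectors would feed a scikit-learn SVM; the inter-block features, the absolute-array variant, the neuro-fuzzy regressor, and the feature-elimination step from the paper are omitted.

```python
import numpy as np
from sklearn.svm import SVC

def intra_block_joint_density(dct_blocks, T=4):
    """Joint density of horizontally neighbouring absolute DCT coefficients,
    clipped to [0, T], inside 8x8 blocks. `dct_blocks` has shape (n_blocks, 8, 8)."""
    a = np.clip(np.abs(dct_blocks).astype(int), 0, T)
    left, right = a[:, :, :-1].ravel(), a[:, :, 1:].ravel()
    hist = np.zeros((T + 1, T + 1))
    np.add.at(hist, (left, right), 1)
    return (hist / hist.sum()).ravel()              # (T+1)^2-dimensional feature vector

# One feature vector per image; label 1 = stego, 0 = cover (names are placeholders):
# X = np.stack([intra_block_joint_density(b) for b in dct_blocks_per_image])
# detector = SVC(kernel='rbf').fit(X, labels)
```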

Journal ArticleDOI
TL;DR: An image authentication scheme is proposed that detects illegal modifications for image vector quantization (VQ) by using a pseudo-random sequence, achieving acceptable quality of the embedded image while maintaining good detection accuracy.

Journal ArticleDOI
TL;DR: A new fast local feature detector coined Harris-Hessian (H-H) is designed according to the characteristics of GPU to accelerate the local feature detection and a new pairwise weak geometric consistency constraint (P-WGC) algorithm is proposed to refine the search result.
Abstract: State-of-the-art near-duplicate image search systems mostly build on the bag-of-local features (BOF) representation. While favorable for simplicity and scalability, these systems have three shortcomings: 1) high time complexity of the local feature detection; 2) discriminability reduction of local descriptors due to BOF quantization; and 3) neglect of the geometric relationships among local features after BOF representation. To overcome these shortcomings, we propose a novel framework by using graphics processing units (GPU). The main contributions of our method are: 1) a new fast local feature detector coined Harris-Hessian (H-H) is designed according to the characteristics of GPU to accelerate the local feature detection; 2) the spatial information around each local feature is incorporated to improve its discriminability, supplying semi-local spatial coherent verification (LSC); and 3) a new pairwise weak geometric consistency constraint (P-WGC) algorithm is proposed to refine the search result. Additionally, part of the system is implemented on GPU to improve efficiency. Experiments conducted on reference datasets and a dataset of one million images demonstrate the effectiveness and efficiency of H-H, LSC, and P-WGC.

Proceedings ArticleDOI
29 Dec 2011
TL;DR: It is shown that this kind of attack can be detected by measuring the noisiness of images obtained by re-compressing the forged image at different quality factors; the method correctly detected forged images in 97% of the cases.
Abstract: JPEG coding leaves characteristic footprints that can be leveraged to reveal doctored images, e.g. providing the evidence for local tampering, copy-move forgery, etc. Recently, it has been shown that a knowledgeable attacker might attempt to remove such footprints by adding a suitable anti-forensic dithering signal to the image in the DCT domain. Such noise-like signal restores the distribution of the DCT coefficients of the original picture, at the cost of affecting image quality. In this paper we show that it is possible to detect this kind of attack by measuring the noisiness of images obtained by re-compressing the forged image at different quality factors. When tested on a large set of images, our method was able to correctly detect forged images in 97% of the cases. In addition, the original quality factor could be accurately estimated.
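
The measurement step lends itself to a short sketch: re-compress the suspect image at a sweep of JPEG quality factors and record the residual "noisiness" at each. The decision threshold and the reported 97% detection rate come from the paper's calibration, not from this code.

```python
import io
import numpy as np
from PIL import Image  # Pillow

def recompression_noisiness(path, qualities=range(50, 100, 5)):
    """For each quality factor, re-compress the suspect image and record the mean
    squared difference from the input. An untouched JPEG dips sharply near its
    original quality factor; an anti-forensically dithered image stays
    comparatively noisy at every quality factor."""
    img = Image.open(path).convert('L')
    ref = np.asarray(img, dtype=float)
    curve = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=q)
        rec = np.asarray(Image.open(buf), dtype=float)
        curve.append((q, float(np.mean((ref - rec) ** 2))))
    return curve
```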

Patent
22 Mar 2011
TL;DR: In this article, a method of encoding an image frame with a region of interest (ROI) and a non-region of interest (non-ROI) is presented, in which quantization scales for the ROI and non-ROI are calculated based on ROI priorities and ROI statistics.
Abstract: A method of encoding an image frame in a video encoding system. The image frame has a region of interest (ROI) and a non-region of interest (non-ROI). In the method, a quantization scale for the image frame is determined based on rate control information. ROI statistics are then calculated based on the residual energy of the ROI and non-ROI. A quantization scale for the image frame is calculated based on ROI priorities and ROI statistics. Further, quantization scales for the ROI and non-ROI are determined based on ROI priorities.

Journal ArticleDOI
TL;DR: A passive-blind scheme for detecting forged images is proposed that can estimate quantization tables and identify tampered regions effectively; three common forgery techniques (copy-paste tampering, inpainting, and composite tampering) are used for evaluation.
Abstract: In this paper, we propose a passive-blind scheme for detecting forged images. The scheme leverages quantization table estimation to measure the inconsistency among images. To improve the accuracy of the estimation process, each AC DCT coefficient is first classified into a specific type; then the corresponding quantization step size is measured adaptively from its energy density spectrum (EDS) and the EDS's Fourier transform. The proposed content-adaptive quantization table estimation scheme is comprised of three phases: pre-screening, candidate region selection, and tampered region identification. In the pre-screening phase, we determine whether an input image has been JPEG compressed, and count the number of quantization steps whose size is equal to one. To select candidate regions for estimating the quantization table, we devise a candidate region selection algorithm based on seed region generation and region growing. First, the seed region generation operation finds a suitable region by removing suspect regions, after which the selected seed region is merged with other suitable regions to form a candidate region. To avoid merging suspect regions, a candidate region refinement operation is performed in the region growing step. After estimating the quantization table from the candidate region, a maximum-likelihood-ratio classifier exploits the inconsistency of the quantization table to identify tampered regions block by block. To evaluate the scheme's performance in terms of tampering detection, three common forgery techniques (copy-paste tampering, inpainting, and composite tampering) are used. Experimental results demonstrate that the proposed scheme can estimate quantization tables and identify tampered regions effectively.

Book ChapterDOI
18 May 2011
TL;DR: Inspired by Fridrich et al.'s perturbed quantization (PQ) steganography, a technique called perturbed motion estimation (PME) is introduced to perform motion estimation and message hiding in one step so as to minimize the embedding impact.
Abstract: In this paper, we propose an adaptive video steganography tightly bound to video compression. Unlike traditional approaches utilizing the spatial/transform domain of images or raw videos, which are vulnerable to certain existing steganalyzers, our approach targets the internal dynamics of video compression. Inspired by Fridrich et al.'s perturbed quantization (PQ) steganography, a technique called perturbed motion estimation (PME) is introduced to perform motion estimation and message hiding in one step. Intending to minimize the embedding impact, the perturbations are optimized with the hope that they will be confused with normal estimation deviations. Experimental results show that satisfactory levels of visual quality and security are achieved with adequate payloads.

Proceedings ArticleDOI
22 May 2011
TL;DR: The conclusion is that removing the traces of the JPEG compression history could be much more challenging than it might appear, as anti-forensic methods are bound to leave characteristic traces.
Abstract: The statistical footprint left by JPEG compression can be a valuable source of information for the forensic analyst. Recently, it has been shown that a suitable anti-forensic method can be used to destroy these traces, by properly adding a noise-like signal to the quantized DCT coefficients. In this paper we analyze the cost of this technique in terms of introduced distortion and loss of image quality. We characterize the dependency of the distortion on the image statistics in the DCT domain and on the quantization step used in JPEG compression. We also evaluate the loss of quality as measured by means of a perceptual metric, showing that a perceptually-optimized version of the anti-forensic method fails to completely conceal the forgery. Our conclusion is that removing the traces of the JPEG compression history could be much more challenging than it might appear, as anti-forensic methods are bound to leave characteristic traces.

Proceedings ArticleDOI
16 Sep 2011
TL;DR: A spatial division design shows a speedup of 72x in the four-GPU-based implementation of the PPVQ compression scheme, which consists of linear prediction, bit depth partitioning, vector quantization, and entropy coding.
Abstract: For ultraspectral sounder data, which features thousands of channels at each observation location, lossless compression is desirable to save storage space and transmission time without losing precision in the retrieval of geophysical parameters. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and entropy coding. In our previous work, the two most time-consuming stages, linear prediction and vector quantization, were identified for GPU implementation. For GIFTS data, using a spectral division strategy for sharing the compression workload among four GPUs, a speedup of ~42x was achieved. To further enhance the speedup, this work explores a spatial division strategy for sharing the workload in processing the six parts of a GIFTS datacube. As a result, the total processing time of a GIFTS datacube on four GPUs can be less than 13 seconds, which is equivalent to a speedup of ~72x. The use of multiple GPUs for PPVQ compression is thus promising as a low-cost and effective compression solution for ultraspectral sounder data for rebroadcast use.

Journal ArticleDOI
TL;DR: By designing an adaptive threshold value for the extraction process, the proposed blind watermarking scheme is more robust against common attacks such as median filtering, average filtering, and Gaussian noise.
Abstract: This paper proposes a blind watermarking scheme based on wavelet tree quantization for copyright protection. In such a quantization scheme, there exists a large significant difference between embedding a watermark bit 1 and a watermark bit 0, so no original image or watermark is required during the watermark extraction process. As a result, the watermarked images look lossless in comparison with the original ones, and the proposed method can effectively resist common image processing attacks, especially JPEG compression and low-pass filtering. Moreover, by designing an adaptive threshold value in the extraction process, our method is more robust against common attacks such as median filtering, average filtering, and Gaussian noise. Experimental results show that the watermarked image looks visually identical to the original, and the watermark can be effectively extracted.
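
As a generic illustration of why quantization-based embedding permits blind extraction, the sketch below applies quantization-index-modulation to level-1 Haar wavelet coefficients using PyWavelets; this is not the authors' wavelet-tree scheme or its adaptive threshold, only the underlying principle that the quantization lattice itself carries the bit.

```python
import numpy as np
import pywt  # PyWavelets

def embed_bit(coeff, bit, delta=24.0):
    """Move one coefficient onto the even (bit 0) or odd (bit 1) quantization lattice."""
    k = np.floor(coeff / delta)
    if int(k) % 2 != int(bit):
        k += 1
    return k * delta + delta / 2.0

def watermark_blind(gray, bits, delta=24.0):
    """QIM-style embedding in level-1 Haar detail coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'haar')
    flat = cH.reshape(-1)                       # view into cH
    for i, b in enumerate(bits):
        flat[i] = embed_bit(flat[i], b, delta)
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

def extract_bits(marked, n_bits, delta=24.0):
    """Blind extraction: only the quantization step `delta` is needed, no original image."""
    _, (cH, _, _) = pywt.dwt2(marked, 'haar')
    return [int(np.floor(c / delta)) % 2 for c in cH.reshape(-1)[:n_bits]]

img = np.random.rand(64, 64) * 255
bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(extract_bits(watermark_blind(img, bits), len(bits)))   # recovers `bits`
```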

Journal ArticleDOI
TL;DR: A CMOS image sensor that exploits the possibility of reconfiguring pixel photodiodes for energy harvesting for wireless image sensor applications is proposed, based on the logarithmic sensor architecture.
Abstract: In this brief, a CMOS image sensor that exploits the possibility of reconfiguring pixel photodiodes for energy harvesting in wireless image sensor applications is proposed. Based on the logarithmic sensor architecture, each pixel photodiode of the proposed sensor array can be configured as either a photosensing or an energy-harvesting element to accomplish the image-capturing or energy-harvesting purpose, respectively. An addressing scheme for reduced-resolution readout and improved energy-harvesting efficiency, and a two-level quantization scheme for reduced readout dynamic power consumption, are also proposed.