
Showing papers on "Lossless JPEG published in 2013"


Journal ArticleDOI
TL;DR: This paper takes the perspective of the forensic analyst and shows how it is possible to counter the aforementioned anti-forensic method, revealing the traces of JPEG compression regardless of the quantization matrix being used.
Abstract: Due to the lossy nature of transform coding, JPEG introduces characteristic traces in the compressed images. A forensic analyst might reveal these traces by analyzing the histogram of discrete cosine transform (DCT) coefficients and exploit them to identify local tampering, copy-move forgery, etc. At the same time, it has recently been shown that a knowledgeable adversary can conceal the traces of JPEG compression by adding a dithering noise signal in the DCT domain in order to restore the histogram of the original image. In this paper, we study the processing chain that arises in the case of JPEG compression anti-forensics. We take the perspective of the forensic analyst and show how it is possible to counter the aforementioned anti-forensic method, revealing the traces of JPEG compression regardless of the quantization matrix being used. Tests on a large image dataset demonstrate that the proposed detector achieves an average accuracy of 93%, rising above 99% when the case of nearly lossless JPEG compression is excluded.
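As an illustration of the kind of DCT-histogram analysis this line of work relies on, the sketch below computes the histogram of one block-DCT subband: a decompressed JPEG shows comb-like peaks spaced by the quantization step, while a never-compressed image gives a smooth histogram. This is a hedged illustration of the general principle, not the detector proposed in the paper; the subband choice, bin settings, and the commented file name are assumptions.

```python
# Illustrative sketch only: histogram of one 8x8 block-DCT subband, where
# JPEG quantization leaves comb-like peaks. Not the paper's detector.
import numpy as np
from scipy.fftpack import dct

def blockwise_dct(gray):
    """2-D DCT-II of non-overlapping 8x8 blocks of a grayscale array."""
    h, w = gray.shape[0] // 8 * 8, gray.shape[1] // 8 * 8
    blocks = gray[:h, :w].astype(np.float64).reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    return dct(dct(blocks - 128.0, axis=-1, norm='ortho'), axis=-2, norm='ortho')

def subband_histogram(gray, u=1, v=1, bins=201):
    """Histogram of the (u, v) subband collected over all blocks."""
    coeffs = blockwise_dct(gray)[:, :, u, v]
    return np.histogram(coeffs, bins=bins, range=(-100, 100))

# Synthetic never-compressed content: the histogram is smooth (no comb).
hist, _ = subband_histogram(np.random.randint(0, 256, (128, 128)))
print(hist[95:106])
# For a real test, load a decompressed JPEG (needs Pillow), e.g.:
# gray = np.array(Image.open('suspect.png').convert('L'))
```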

68 citations


Journal ArticleDOI
TL;DR: A new and effective image indexing technique that extracts features from JPEG compressed images using vector quantization techniques and a codebook generated with a K-means clustering algorithm, which can accelerate image indexing.

62 citations


Journal ArticleDOI
TL;DR: A lossless data hiding scheme that directly embeds data into the bitstream of JPEG images is presented; it achieves a significant improvement in embedding capacity.

52 citations


Proceedings ArticleDOI
26 May 2013
TL;DR: Experimental results show that the proposed method achieves a better trade-off between forensic undetectability and visual quality of processed images than state-of-the-art methods.
Abstract: The objective of JPEG anti-forensics is to remove all the possible footprints left by JPEG compression. On the contrary, there exist detectors that attempt to identify any telltale sign of JPEG compression or of JPEG anti-forensic processing. This paper contributes to improving the undetectability of JPEG anti-forensics while achieving a higher visual quality of processed images. The use of constrained total-variation-based minimization for deblocking successfully fools the forensic methods detecting JPEG blocking, as well as another advanced JPEG forensic detector. A calibration-based detector is also defeated by conducting a further feature-value optimization. Experimental results show that the proposed method achieves a better trade-off between forensic undetectability and visual quality of processed images than state-of-the-art methods.

51 citations


Journal ArticleDOI
TL;DR: Visibility thresholds (VTs) are measured and used for quantization of subband signals in JPEG2000 in order to hide coding artifacts caused by quantization, and are experimentally determined from statistically modeled quantization distortion.
Abstract: Due to exponential growth in image sizes, visually lossless coding is increasingly being considered as an alternative to numerically lossless coding, which has limited compression ratios. This paper presents a method of encoding color images in a visually lossless manner using JPEG2000. In order to hide coding artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subband signals in JPEG2000. The VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing backgrounds through a visual masking model, and then used to determine the minimum number of coding passes to be included in the final codestream for visually lossless quality under the desired viewing conditions. Codestreams produced by this scheme are fully JPEG2000 Part-I compliant.

51 citations


Proceedings ArticleDOI
01 Sep 2013
TL;DR: A novel quantization table for the widely-used JPEG compression standard which leads to improved feature detection performance and is based on the observed impact of scale-space processing on the DCT basis functions.
Abstract: Keypoint or interest point detection is the first step in many computer vision algorithms. The detection performance of the state-of-the-art detectors is, however, strongly influenced by compression artifacts, especially at low bit rates. In this paper, we design a novel quantization table for the widely-used JPEG compression standard which leads to improved feature detection performance. After analyzing several popular scale-space based detectors, we propose a novel quantization table which is based on the observed impact of scale-space processing on the DCT basis functions. Experimental results show that the novel quantization table outperforms the JPEG default quantization table in terms of feature repeatability, number of correspondences, matching score, and number of correct matches.
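For readers who want to experiment with the idea of replacing the default JPEG luminance table, here is a hedged sketch using Pillow's qtables save option (assuming a Pillow build that supports it). The flat placeholder tables and file names are illustrative assumptions, not the scale-space-derived table proposed in the paper.

```python
# Sketch: encode a JPEG with custom quantization tables via Pillow's
# 'qtables' option. The flat placeholder tables below are NOT the
# scale-space-aware table from the paper; they only show the mechanism.
from PIL import Image

custom_luma = [16] * 64    # table 0 (luminance); flat, so coefficient ordering is moot
custom_chroma = [24] * 64  # table 1 (chrominance)

img = Image.open('input.png').convert('RGB')   # hypothetical input file
img.save('custom_q.jpg', format='JPEG',
         qtables=[custom_luma, custom_chroma],
         subsampling=0)                        # 4:4:4 keeps chroma detail for matching
```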

40 citations


Proceedings ArticleDOI
26 May 2013
TL;DR: This work studies convergence and block stability for JPEG images compressed with quality factor 100 and derives methods to detect such compression in grayscale bitmap images, to estimate the number of recompressions, to identify the DCT implementation used for compression, and to uncover local tampering if image parts have been compressed with JPEG-100 at least once.
Abstract: Repeated rounding of sample blocks in alternating domains creates complex convergence paths. We study convergence and block stability for JPEG images compressed with quality factor 100 and derive methods to detect such compression in grayscale bitmap images, to estimate the number of recompressions, to identify the DCT implementation used for compression, and to uncover local tampering if image parts have been compressed with JPEG-100 at least once.
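The phenomenon the paper exploits is easy to reproduce: recompressing a bitmap at quality 100 repeatedly changes fewer and fewer pixels as blocks converge. The sketch below is only a hedged demonstration of that observation (with a synthetic image and an arbitrary five passes), not the detection or attribution method itself.

```python
# Demonstration sketch: pixel changes shrink under repeated JPEG quality-100
# recompression as blocks converge. Not the paper's detector.
import io
import numpy as np
from PIL import Image

def recompress_once(gray):
    """Round-trip a uint8 grayscale array through JPEG at quality 100."""
    buf = io.BytesIO()
    Image.fromarray(gray).save(buf, format='JPEG', quality=100)
    buf.seek(0)
    return np.array(Image.open(buf))

rng = np.random.default_rng(0)
cur = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in bitmap
for i in range(5):
    nxt = recompress_once(cur)
    print(f'pass {i + 1}: {np.count_nonzero(nxt != cur)} pixels changed')
    cur = nxt
```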

34 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: The recently introduced Sample-based Weighted Prediction (SWP) for HEVC lossless coding is investigated and improved: although efficient for natural video content, SWP can be further improved for screen content by using a directional template predictor in cases where the SWP algorithm yields poor prediction.
Abstract: The recently introduced High Efficiency Video Coding (HEVC) standard is currently being further investigated for potential use in professional applications. The Range Extensions under consideration should, on the one hand, introduce higher bit depths and additional color formats and, on the other hand, improve the coding efficiency of HEVC for high-fidelity as well as lossless compression. In this paper we investigate and improve the recently introduced Sample-based Weighted Prediction (SWP) for HEVC lossless coding. Although very efficient for natural video content, the SWP algorithm can be further improved for screen content by using a directional template predictor in cases where the SWP algorithm yields poor prediction. The newly introduced predictor improves the lossless coding results by up to 9.9% compared to the unmodified HEVC reference software for lossless compression.

30 citations


Proceedings ArticleDOI
08 Mar 2013
TL;DR: Experimental results show that the new quantization table derived from a psychovisual error threshold on the DCT basis functions gives better image quality at a lower average Huffman code length than standard JPEG image compression.
Abstract: The quantization process is a main part of image compression, controlling the visual quality and bit rate of the output image. The JPEG quantization tables were obtained from a series of psychovisual experiments that determined a visual threshold. The visual threshold is useful in handling the intensity levels of a colour image that can be perceived visually by the human visual system. This paper investigates a psychovisual error threshold at each DCT frequency on grayscale images. The DCT coefficients are incremented one by one for each frequency order, so that the contribution of each DCT coefficient to the reconstruction error serves as a primitive psychovisual error. By setting a threshold on this psychovisual error, a new quantization table can be generated. The experimental results show that the new quantization table derived from the psychovisual error threshold for the DCT basis functions gives better image quality at a lower average Huffman code length than standard JPEG image compression.
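The idea of turning a per-frequency error threshold into a quantization table can be sketched as follows: for each DCT frequency, grow a single-coefficient perturbation until its spatial-domain reconstruction error crosses a threshold, and use that amplitude as the quantization step. The threshold value and the mean-absolute-error measure here are assumptions, not the paper's psychovisual model.

```python
# Hedged sketch: derive a quantization table by thresholding the
# reconstruction error of a single perturbed DCT coefficient per frequency.
# The error measure and threshold are assumptions, not the paper's model.
import numpy as np
from scipy.fftpack import idct

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def quant_table_from_threshold(threshold=2.0, max_step=255):
    table = np.ones((8, 8), dtype=int)
    for u in range(8):
        for v in range(8):
            step = 1
            while step < max_step:
                coeffs = np.zeros((8, 8))
                coeffs[u, v] = step                     # perturb one frequency
                if np.abs(idct2(coeffs)).mean() > threshold:
                    break                               # error now exceeds threshold
                step += 1
            table[u, v] = step
    return table

print(quant_table_from_threshold())
```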

30 citations


01 Jan 2013
TL;DR: Each step in the compression and decompression of the JPEG algorithm is examined; each step is reversible to the extent that an acceptable approximation of the original space-amplitude samples can be reconstructed from the compressed form.
Abstract: The basis for the JPEG algorithm is the Discrete Cosine Transform (DCT), which extracts spatial frequency information from the spatial amplitude samples. These frequency components are then quantized to eliminate the visual data from the image that is least perceptually apparent, thereby reducing the amount of information that must be stored. The redundant properties of the quantized frequency samples are exploited through quantization, run-length and Huffman coding to produce the compressed representation. Each of these steps is reversible to the extent that an acceptable approximation of the original space-amplitude samples can be reconstructed from the compressed form. This paper examines each step in the compression and decompression. KEYWORDS: Image Compression, JPEG, DCT, Quantization, Run-Length Coding.
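For readers who want the pipeline in code, here is a compact sketch of the forward path the paper walks through for a single 8x8 block: level shift, 2-D DCT, quantization with the standard luminance table, zigzag scan, and a simple (zero-run, value) pairing of the AC coefficients. Huffman entropy coding and the end-of-block symbol are omitted, so this illustrates the steps rather than being a compliant encoder.

```python
# Sketch of the forward JPEG path for one 8x8 block: level shift, 2-D DCT,
# quantization (standard luminance table), zigzag scan, zero-run pairing.
# Huffman coding and the EOB symbol are omitted.
import numpy as np
from scipy.fftpack import dct

Q_LUMA = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def zigzag_indices(n=8):
    """(u, v) pairs in JPEG zigzag order."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def encode_block(block):
    shifted = block.astype(np.float64) - 128.0                 # level shift
    coeffs = dct(dct(shifted, axis=0, norm='ortho'), axis=1, norm='ortho')
    quantized = np.round(coeffs / Q_LUMA).astype(int)          # quantization
    scanned = [quantized[u, v] for u, v in zigzag_indices()]
    pairs, run = [], 0
    for val in scanned[1:]:                                    # AC coefficients
        if val == 0:
            run += 1
        else:
            pairs.append((run, val))                           # (zero run, value)
            run = 0
    return scanned[0], pairs                                   # trailing zeros imply EOB

dc, ac_pairs = encode_block(np.full((8, 8), 130, dtype=np.uint8))
print(dc, ac_pairs)
```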

29 citations


Journal ArticleDOI
TL;DR: It is demonstrated how the power of commodity graphics processing units can be used for efficient implementation of JPEG and DXT compression, and how auxiliary indexes that are backward compatible with the JPEG standard enable efficient decompression.

Journal ArticleDOI
Yonggang Fu
01 Mar 2013-Optik
TL;DR: A novel DCT based image watermarking scheme is proposed, where the watermark bits are encoded by BCH code, and then embedded into the host by modulating the relationships between the selected DCT coefficients.

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This work proposes a learning-based post-processing method to improve the alpha mattes extracted from JPEG images and demonstrates that the method can produce superior results over existing state-of-the-art matting algorithms on a variety of inputs and compression levels.
Abstract: Single image matting techniques assume high-quality input images. The vast majority of images on the web and in personal photo collections are encoded using JPEG compression. JPEG images exhibit quantization artifacts that adversely affect the performance of matting algorithms. To address this situation, we propose a learning-based post-processing method to improve the alpha mattes extracted from JPEG images. Our approach learns a set of sparse dictionaries from training examples that are used to transfer details from high-quality alpha mattes to alpha mattes corrupted by JPEG compression. Three different dictionaries are defined to accommodate different object structures (long hair, short hair, and sharp boundaries). A back-projection criterion combined with an MRF framework is used to automatically select the best dictionary to apply to the object's local boundary. We demonstrate that our method can produce superior results over existing state-of-the-art matting algorithms on a variety of inputs and compression levels.

Journal ArticleDOI
01 Jun 2013-Optik
TL;DR: Experimental results show that the Improved Wavelet Lossless Compression Algorithm has high encoding efficiency and can effectively reduce the encoding bit rate of lossless image compression.

Proceedings ArticleDOI
17 Jun 2013
TL;DR: When the generated distortion functions for joint photographic experts group (JPEG) steganography with an uncompressed side-image are applied, the intrinsic statistical characteristics of the carrier image are preserved better than with prior art, and consequently the security performance of the corresponding JPEG steganography can be improved significantly.
Abstract: In this paper, we present a new framework for designing distortion functions for joint photographic experts group (JPEG) steganography with an uncompressed side-image. In our framework, the discrete cosine transform (DCT) coefficients, including all direct current (DC) coefficients and alternating current (AC) coefficients, are divided into two groups: a first-priority group (FPG) and a second-priority group (SPG). Different strategies are established to associate the distortion values with the coefficients in the FPG and SPG, respectively. In this paper, three scenarios for dividing the coefficients into FPG and SPG are exemplified, which can be utilized to form a series of new distortion functions. Experimental results demonstrate that when these generated distortion functions are applied to JPEG steganography, the intrinsic statistical characteristics of the carrier image are preserved better than with prior art, and consequently the security performance of the corresponding JPEG steganography can be improved significantly.
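A toy sketch of the framework's central step, assigning different embedding costs to two priority groups of DCT coefficients, is given below; the grouping rule (DC plus low-frequency AC as first-priority) and the cost values are invented for illustration and do not reproduce the paper's three scenarios.

```python
# Toy sketch: assign embedding-distortion values to block-DCT coefficients in
# two priority groups (FPG / SPG). Grouping rule and costs are illustrative
# assumptions, not the paper's scenarios.
import numpy as np

def distortion_map(dct_blocks, low_freq_radius=2, cost_fpg=10.0, cost_spg=1.0):
    """dct_blocks: (num_blocks, 8, 8) quantized coefficients; returns per-coefficient costs."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    fpg_mask = (u + v) <= low_freq_radius           # DC and low-frequency AC -> FPG
    costs = np.where(fpg_mask, cost_fpg, cost_spg)  # changing FPG coefficients costs more
    return np.broadcast_to(costs, dct_blocks.shape).copy()

blocks = np.random.randint(-8, 9, size=(4, 8, 8))
print(distortion_map(blocks)[0])
```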

Journal ArticleDOI
TL;DR: The proposed joint-probability-based adaptive Golomb coding (JPBAGC) improves the efficiency of many image and video compression standards, such as the joint photographic experts group (JPEG) compression scheme and the H.264-intra JPEG-based image coding system.
Abstract: This paper proposes joint-probability-based adaptive Golomb coding (JPBAGC) to improve the performance of the Golomb family of codes, including Golomb coding (GC), Golomb–Rice coding (GRC), exp-Golomb coding (EGC), and hybrid Golomb coding (HGC), for image compression. The Golomb family of codes is ideally suited to the processing of data with a geometric distribution. Since it does not require a coding table, it has higher coding efficiency than Huffman coding. In this paper, we find that there are many situations in which the probability distribution of data is not only geometric, but also depends on the probability distribution of other data. Accordingly, we use joint probability to generalize the Golomb family of codes and exploit the dependence between neighboring image data. The proposed JPBAGC improves the efficiency of many image and video compression standards, such as the joint photographic experts group (JPEG) compression scheme and the H.264-intra JPEG-based image coding system. Simulation results demonstrate the superior coding efficiency of the proposed scheme over those of Huffman coding, GC, GRC, EGC, and HGC.
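The Golomb family mentioned above is simple to illustrate. Below is a sketch of plain Golomb-Rice coding (the power-of-two case M = 2^k) for non-negative residuals; the joint-probability adaptation that the paper proposes is not reproduced here.

```python
# Sketch: plain Golomb-Rice coding (Golomb code with M = 2**k) of non-negative
# integers. The paper's joint-probability adaptation is not reproduced.
def golomb_rice_encode(value, k):
    """Codeword for one non-negative integer, as a bit string."""
    quotient, remainder = value >> k, value & ((1 << k) - 1)
    unary = '1' * quotient + '0'                          # quotient in unary
    binary = format(remainder, f'0{k}b') if k else ''     # remainder in k bits
    return unary + binary

def golomb_rice_decode(bits, k):
    """Decode one codeword from the front of a bit string; return (value, rest)."""
    quotient = bits.index('0')                            # count leading ones
    remainder = int(bits[quotient + 1:quotient + 1 + k], 2) if k else 0
    return (quotient << k) | remainder, bits[quotient + 1 + k:]

# Geometrically distributed residuals get short codes for a well-chosen k.
for v in [0, 1, 2, 5, 13]:
    code = golomb_rice_encode(v, k=2)
    assert golomb_rice_decode(code, k=2)[0] == v
    print(v, '->', code)
```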

Proceedings ArticleDOI
20 Mar 2013
TL;DR: It is seen that, despite its simplicity, the proposed extension performs close to JPEG 2000 and JPEG XR on the JPEG committee's HDR test image set at high bit rates.
Abstract: At its Paris meeting, the JPEG committee decided to work on a backwards-compatible extension of the popular JPEG (10918-1) standard enabling lossy and lossless coding of high-dynamic-range (HDR) images. The new standard shall allow legacy applications to decompress new code streams into a tone-mapped version of the HDR image, while codecs aware of the extensions will decompress the stream with full dynamic range. This paper proposes a set of extensions that have rather low implementation complexity and use, whenever possible, functional design blocks already present in 10918-1. It is seen that, despite its simplicity, the proposed extension performs close to JPEG 2000 (15444-2) and JPEG XR (29199-2) on the JPEG committee's HDR test image set at high bit rates.

Journal ArticleDOI
TL;DR: Two approaches to adaptive JPEG-based compression of color images inside digital cameras are presented, and it is demonstrated that the second approach provides a more accurate estimate of degrading-factor characteristics and, thus, a larger compression-ratio increase compared to the super-high quality (SHQ) mode used in consumer digital cameras.
Abstract: The paper presents two approaches to adaptive JPEG-based compression of color images inside digital cameras. Compression for both approaches, although lossy, is organized in such a manner that the introduced distortions are not visible. This is done by taking into account the quality of each original image before it is subjected to lossy compression. Noise characteristics and blur are assumed to be the main factors determining the visual quality of original images. They are estimated in a fast and blind (automatic) manner for images in RAW format (first approach) and in Bitmap format (second approach). The dominant distorting factor, which can be either noise or blur, is determined. Then, the scaling factor (SF) of the JPEG quantization table is adaptively adjusted to preserve valuable information in the compressed image, taking into account the estimated noise and blur influence. The advantages and drawbacks of the proposed approaches are discussed. Both approaches are intensively tested on real-life images. It is demonstrated that the second approach provides a more accurate estimate of degrading-factor characteristics and, thus, a larger compression ratio (CR) increase compared to the super-high quality (SHQ) mode used in consumer digital cameras. The first approach mainly relies on the prediction of the noise and blur characteristics to be observed in Bitmap images after the set of nonlinear operations applied to RAW data in the image processing chain. It is simpler and requires less memory but appears to be slightly less beneficial. Both approaches are shown to provide, on average, a more than twofold increase in average CR compared to the SHQ mode without introducing visible distortions with respect to SHQ-compressed images. This is confirmed by the analysis of modern visual quality metrics able to adequately characterize compressed image quality.
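The general idea of tying the quantization scaling to a blind noise estimate can be sketched as below: estimate the noise standard deviation from pixel differences and map it to a JPEG quality setting so that compression distortion stays below the noise floor. Both the MAD-based estimator and the sigma-to-quality mapping are assumptions for illustration, not the rule used in the paper.

```python
# Hedged sketch: blind noise estimate -> JPEG quality choice. The estimator
# and the mapping below are illustrative assumptions, not the paper's rule.
import numpy as np

def estimate_noise_sigma(gray):
    """Rough blind noise estimate from horizontal pixel differences (MAD)."""
    diff = np.diff(gray.astype(np.float64), axis=1)
    return np.median(np.abs(diff - np.median(diff))) / 0.6745 / np.sqrt(2)

def quality_from_sigma(sigma, q_min=60, q_max=95):
    """Noisier images tolerate coarser quantization, so lower the quality."""
    return int(np.clip(q_max - 3.0 * sigma, q_min, q_max))

rng = np.random.default_rng(0)
noisy = np.clip(128 + 5.0 * rng.standard_normal((256, 256)), 0, 255)
sigma = estimate_noise_sigma(noisy)
print(f'estimated sigma ~ {sigma:.1f}, chosen JPEG quality: {quality_from_sigma(sigma)}')
```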

Proceedings ArticleDOI
21 Sep 2013
TL;DR: Experimental results show that the JPEG image compression and encryption algorithm is effective for practical engineering applications and can be applied in secure communication.
Abstract: With the maturity of communication technology, the digital image, owing to its value, has become an important carrier of information. However, digital images also face the huge pressure of mass data storage and transmission, and they may be attacked or falsified during transmission. Thus, it is necessary to focus attention on image compression and encryption technology. This article first discusses the necessity and classification of image compression technology and then presents an in-depth analysis of the JPEG image compression algorithm. Moreover, we focus on the JPEG encoding algorithm and give a detailed description of the JPEG encoder and decoder control processes. We select an original image and complete a Matlab simulation analysis based on the JPEG algorithm. Using a DSP host processor, the hardware implementation of image acquisition and compression can be completed easily. Finally, this article selects a suitable compressed image to complete the image encryption process. Experimental results show that the JPEG image compression and encryption algorithm is effective for practical engineering applications and can be applied in secure communication.

01 Jan 2013
TL;DR: In this paper, a detailed analysis and performance comparison of HEVC intra coding with JPEG and JPEG 2000 (both 4:2:0 and 4:4:4 configurations) via a series of subjective and objective evaluations is presented.
Abstract: High Efficiency Video Coding (HEVC) demonstrates a significant improvement in compression efficiency compared to H.264/MPEG-4 AVC, especially for video with resolution beyond HD, such as 4K UHDTV. One advantage of HEVC is the improved intra coding of video frames. Hence, it is natural to question how such intra coding compares to state of the art compression codecs for still images. This paper attempts to answer this question by providing a detailed analysis and performance comparison of HEVC intra coding with JPEG and JPEG 2000 (both 4:2:0 and 4:4:4 configurations) via a series of subjective and objective evaluations. The evaluation results demonstrate that HEVC intra coding outperforms standard codecs for still images with the average bit rate reduction ranging from 16% (compared to JPEG 2000 4:4:4) up to 43% (compared to JPEG). These findings imply that both still images and moving pictures can be efficiently compressed by the same coding algorithm with higher compression efficiency.

Journal ArticleDOI
G. Lakhani
TL;DR: Four modifications to the JPEG arithmetic coding (JAC) algorithm are presented, which obtain an extraordinary amount of code reduction without adding any loss, and the compression performance of the modified JPEG is compared with JPEG XR, the latest block-based image coding standard.
Abstract: This article presents four modifications to the JPEG arithmetic coding (JAC) algorithm, a topic not studied well before. It then compares the compression performance of the modified JPEG with JPEG XR, the latest block-based image coding standard. We first show that the bulk of inter/intra-block redundancy, caused by the use of the block-based approach in JPEG, can be captured by applying efficient prediction coding. We propose the following modifications to JAC to take advantage of our prediction approach. 1) We code a totally different DC difference. 2) JAC tests a DCT coefficient by considering its bits in increasing order of significance to code the most significant bit position. This causes plenty of redundancy because JAC always begins with the zeroth bit. We modify this coding order and propose alterations to the JPEG coding procedures. 3) We predict the sign of significant DCT coefficients, a problem not addressed from the perspective of the JPEG decoder before. 4) We reduce the number of binary tests that JAC codes to mark end-of-block. We provide experimental results for two sets of eight-bit gray images. The first set consists of nine classical test images, mostly of size 512×512 pixels. The second set consists of 13 images of size 2000×3000 pixels or more. Our modifications to JAC obtain an extraordinary amount of code reduction without adding any loss. More specifically, when we quantize the images using the default quantizers, our modifications reduce the total JAC code size of the images of these two sets by about 8.9% and 10.6%, and the JPEG Huffman code size by about 16.3% and 23.4%, respectively, on average. Gains are even higher for coarsely quantized images. Finally, we compare the modified JAC with two settings of JPEG XR, one with no block overlapping and the other with the default transform (denoted JXR0 and JXR1, respectively). Our results show that for the finest-quality-rate image coding, the modified JAC compresses the large-set images by about 5.8% more than JXR1 and by 6.7% more than JXR0, on average. We provide some rate-distortion plots for lossy coding, which show that the modified JAC distinctly outperforms JXR0, but JXR1 beats us by about a similar margin.

Proceedings ArticleDOI
11 Nov 2013
TL;DR: A novel anti-forensic procedure, aimed at concealing the traces of single JPEG compression by recovering the original distribution of first significant digits (FSD) of the DCT coefficients, is proposed.
Abstract: Traces left by lossy compression processes have been widely studied in digital image forensics. In particular, the artifacts produced by JPEG compression have been characterized and exploited both in forensic methods and counter-forensic attacks. In this paper, we propose a novel anti-forensic procedure, aimed at concealing the traces of single JPEG compression by recovering the original distribution of first significant digits (FSD) of the DCT coefficients. We analyze the performance of our method and compare it with anti-forensic attacks reported in the literature in terms of quality of the resulting image. In addition, we prove the effectiveness of our approach as counter-forensic processing by measuring its impact on the performance of two different forensic tools, applied after the anti-forensic action.
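The quantity the attack works on is easy to compute: the distribution of first significant digits (FSD) of block-DCT coefficients, which for never-compressed natural images is known to follow a Benford-like law. The sketch below only measures that distribution and prints the plain Benford reference; the restoration step proposed in the paper is not reproduced, and the commented file name is an assumption.

```python
# Sketch: first-significant-digit (FSD) distribution of block-DCT coefficients,
# to be compared against a Benford-like reference. The paper's anti-forensic
# restoration of this distribution is not reproduced here.
import numpy as np
from scipy.fftpack import dct

def first_significant_digits(values):
    mags = np.abs(values)
    mags = mags[mags >= 1]                                  # skip (near-)zero coefficients
    return (mags / 10 ** np.floor(np.log10(mags))).astype(int)

def fsd_distribution(gray):
    h, w = gray.shape[0] // 8 * 8, gray.shape[1] // 8 * 8
    blocks = gray[:h, :w].astype(np.float64).reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = dct(dct(blocks - 128.0, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    digits = first_significant_digits(coeffs.ravel())
    return np.bincount(digits, minlength=10)[1:] / len(digits)

benford = np.log10(1 + 1 / np.arange(1, 10))                # plain Benford reference
print('Benford reference:', np.round(benford, 3))
# gray = np.array(Image.open('image.png').convert('L'))     # needs Pillow
# print('measured FSD     :', np.round(fsd_distribution(gray), 3))
```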

Proceedings ArticleDOI
01 Nov 2013
TL;DR: A comparison between JPEG XR and JPEG 2000 as tools for re-encoding JPEG images taken by smart-phone cameras is presented, and experimental tests on an image data set are performed in order to collect numerical evidence of the distortion introduced by the re-encoding process.
Abstract: Capturing, storing and browsing images on smart phones is a task performed by users every day. ISO/IEC JPEG is the commonly used standard for lossy compression of digital images. In the more than twenty years since JPEG standardization, novel image coding algorithms have been developed that provide superior compression capabilities. In particular, the recent JPEG XR, providing high compression ratios and low computational complexity, is an interesting candidate for the development of novel applications for smart phones. In this paper, a comparison between JPEG XR and JPEG 2000 as tools for re-encoding JPEG images taken by smart-phone cameras is presented. Experimental tests on an image data set are performed in order to collect numerical evidence of the distortion introduced by the re-encoding process.

Journal ArticleDOI
TL;DR: The experimental results show that the new lossless intra-coding method reduces the bit rate in comparison with the lossless intra-coding method in the HEVC standard, and the proposed method results in a slightly better compression ratio than JPEG 2000 lossless coding.
Abstract: A new lossless intra-coding method based on a cross residual transform is applied to the next-generation video coding standard HEVC (High Efficiency Video Coding). HEVC includes a multi-directional spatial prediction method to reduce spatial redundancy by using neighboring pixels as a prediction for the pixels in a block of data to be encoded. In the new lossless intra-coding method, the spatial prediction is performed as pixelwise DPCM but is implemented in a block-based manner by using the cross residual transform on top of the HEVC standard. The experimental results show that the new lossless intra-coding method reduces the bit rate by approximately 8.43% in comparison with the lossless intra-coding method in the HEVC standard, and the proposed method results in a slightly better compression ratio than JPEG 2000 lossless coding.
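The pixelwise DPCM that the method builds on is shown below in its simplest horizontal form: each sample is predicted by its left neighbour and only the residual is kept, which is exactly invertible. The cross residual transform that maps this onto HEVC's block-based pipeline is not reproduced here.

```python
# Sketch: horizontal pixelwise DPCM for lossless intra coding. The cross
# residual transform used to make this block-based in HEVC is not shown.
import numpy as np

def dpcm_encode_rows(block):
    b = block.astype(np.int64)
    residuals = b.copy()
    residuals[:, 1:] -= b[:, :-1]        # predict each sample from its left neighbour
    return residuals                      # first column is kept verbatim

def dpcm_decode_rows(residuals):
    return np.cumsum(residuals, axis=1)   # undoes the prediction exactly (lossless)

block = np.random.randint(0, 256, size=(4, 8))
assert np.array_equal(dpcm_decode_rows(dpcm_encode_rows(block)), block)
print(dpcm_encode_rows(block))
```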

Journal ArticleDOI
TL;DR: This paper finds that, in addition to rectangular blocks, the 2-D DCT is also orthogonal in trapezoid and triangular blocks; therefore, instead of eight by eight blocks, the JPEG algorithm can be generalized to divide an image according to the shapes of objects and achieve a higher compression ratio.
Abstract: In the conventional JPEG algorithm, an image is divided into eight by eight blocks and then the 2-D DCT is applied to encode each block. In this paper, we find that, in addition to rectangular blocks, the 2-D DCT is also orthogonal in trapezoid and triangular blocks. Therefore, instead of eight by eight blocks, we can generalize the JPEG algorithm and divide an image into trapezoid and triangular blocks according to the shapes of objects to achieve a higher compression ratio. Compared with existing shape-adaptive compression algorithms, since we do not try to match the shape of each object exactly, the number of bytes used for encoding the edges can be smaller and the error caused by the high-frequency components at the boundary can be avoided. The simulations show that, when the bit rate is fixed, our proposed algorithm can achieve higher PSNR than the JPEG algorithm and other shape-adaptive algorithms. Furthermore, in addition to the 2-D DCT, we can also use our proposed method to generate the 2-D complete and orthogonal sine basis, Hartley basis, Walsh basis, and discrete polynomial basis in a trapezoid or a triangular block.

Book ChapterDOI
25 Sep 2013
TL;DR: This paper first identifies features that can discriminate RD-JPEG images from JPEG images and then uses Support Vector Machines (SVM) as a classification tool, showing that this technique for detecting resized double JPEG compressed images works well.
Abstract: Since JPEG is the most widely used compression standard, detection of forgeries in JPEG images is necessary. In order to create a forged JPEG image, the image is usually loaded into photo editing software, manipulated and then re-saved as JPEG. This leads to double JPEG compression artifacts, which can possibly reveal the forgery. Many techniques for the detection of double JPEG compressed images have been proposed. However, when the image is resized before the second compression step, the blocking artifacts of the first JPEG compression are destroyed. Therefore, most reported techniques for detecting double JPEG compression do not work in this case. In this paper, we propose a technique for detecting resized double JPEG compressed (called RD-JPEG) images. We first identify features that can discriminate RD-JPEG images from JPEG images and then use Support Vector Machines (SVM) as a classification tool. Experiments with many RD-JPEG images with different quality and scaling factors indicate that our technique works well.
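A hedged sketch of the classification stage only: an SVM is trained to separate RD-JPEG from single-compressed JPEG images. The feature vectors below are random placeholders standing in for per-image features; the paper's actual discriminative features are not reproduced.

```python
# Sketch of the classification stage only: SVM separating RD-JPEG from
# single-compressed JPEG images. Features are random placeholders, not the
# paper's features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_single = rng.normal(0.0, 1.0, size=(200, 16))   # placeholder features, class 0
X_rdjpeg = rng.normal(0.5, 1.0, size=(200, 16))   # placeholder features, class 1
X = np.vstack([X_single, X_rdjpeg])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0, gamma='scale'))
clf.fit(X_tr, y_tr)
print('held-out accuracy:', clf.score(X_te, y_te))
```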

Proceedings ArticleDOI
17 Jun 2013
TL;DR: Compared to state-of-the-art methods, the proposed JPEG anti-forensic method is able to achieve a higher image visual quality while remaining undetectable by existing detectors.
Abstract: This paper proposes an anti-forensic method that disguises the footprints left by JPEG compression, whose objective is to fool existing JPEG forensic detectors while keeping a high visual quality of the processed image. First we examine the reliability of existing detectors and point out the potential vulnerability of the detector based on quantization table estimation. Then we construct a new, non-parametric method for DCT histogram smoothing that does not rely on any histogram statistical model. Finally, JPEG forensic detectors are fooled by optimizing an objective function considering both the anti-forensic terms and a natural image statistical model. We show that, compared to state-of-the-art methods, the proposed JPEG anti-forensic method is able to achieve a higher image visual quality while remaining undetectable by existing detectors.

Proceedings ArticleDOI
01 Nov 2013
TL;DR: Under an infinite-variance assumption, the expression of the optimal detector is derived, together with a practical approximation formula based on multidimensional Fourier series; the resulting detector outperforms existing state-of-the-art detectors for nonaligned double JPEG compression.
Abstract: In this paper, we investigate the problem of deciding whether a multidimensional signal has been quantized according to a given lattice or not. Under an infinite-variance assumption, we derive the expression of the optimal detector, together with a practical approximation formula based on multidimensional Fourier series. As a forensic case study, the proposed detector is applied to the detection of nonaligned double JPEG compression. Results on both synthetic signals and real JPEG images show interesting properties of the proposed detector. Namely, the detector outperforms existing state-of-the-art detectors for nonaligned double JPEG compression. The application of the proposed scheme to other forensic problems seems a natural extension of this work.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A novel framework to obtain an artifact-free enlarged image from a given JPEG image, based on a newly introduced JPEG image acquisition model, that realizes decompression and super-resolution interpolation simultaneously using multi-order total variation.
Abstract: We propose a novel framework to obtain an artifact-free enlarged image from a given JPEG image. The proposed formulation, based on a newly introduced JPEG image acquisition model, realizes decompression and super-resolution interpolation simultaneously using multi-order total variation, so that we can drastically reduce artifacts appearing in JPEG images, such as block noise and mosquito noise, without generating the staircasing effect that is typical of existing total variation-based JPEG decompression methods. We also present a computationally efficient optimization scheme, derived as a special case of a primal-dual splitting type algorithm, for solving the convex optimization problem associated with the proposed formulation. Numerical examples show that the proposed method works effectively compared with existing methods.

Proceedings ArticleDOI
TL;DR: An efficient method to locate the forged parts in a tampered JPEG image by estimating the shift of NA-DJPEG compression; it does not need an image dataset to train a machine-learning-based classifier or to determine a proper threshold.
Abstract: In this paper, we present an efficient method to locate the forged parts in a tampered JPEG image. In JPEG image forgeries, the forged region usually undergoes a JPEG compression different from that of the background region. When a JPEG image is cropped and pasted into another host JPEG image and resaved in JPEG format, the JPEG block grid of the tampered region often mismatches the JPEG block grid of the host image with a certain shift. This phenomenon is called non-aligned double JPEG compression (NA-DJPEG). In this paper, we identify different JPEG compression forms by estimating the shift of NA-DJPEG compression. Our shift estimation approach is based on the percentage of non-zero JPEG coefficients in different situations. Compared to previous work, our tampering localization method (i) performs better when dealing with small image sizes, (ii) is robust to common tampering processing such as resizing, rotating, blurring and so on, and (iii) does not need an image dataset to train a machine-learning-based classifier or to determine a proper threshold.
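The core measurement can be sketched as follows: for each of the 64 possible block-grid shifts, compute the fraction of re-estimated block-DCT coefficients that are (near) zero; the shift aligned with the actual JPEG grid of a region tends to maximize this fraction. The near-zero threshold, the synthetic demo, and the decision rule are assumptions and do not reproduce the paper's localization scheme.

```python
# Sketch: estimate the JPEG block-grid shift by maximizing the fraction of
# near-zero block-DCT coefficients over all 64 shifts. Threshold and demo are
# illustrative assumptions, not the paper's localization method.
import io
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def near_zero_fraction(gray, dy, dx, thresh=1.0):
    shifted = gray[dy:, dx:].astype(np.float64) - 128.0
    h, w = shifted.shape[0] // 8 * 8, shifted.shape[1] // 8 * 8
    blocks = shifted[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    return np.mean(np.abs(coeffs) < thresh)

def estimate_grid_shift(gray):
    scores = {(dy, dx): near_zero_fraction(gray, dy, dx)
              for dy in range(8) for dx in range(8)}
    return max(scores, key=scores.get)

# In-memory demo: JPEG-compress a synthetic image, then crop 3 rows and 5
# columns; the estimated shift should then typically come out as (5, 3).
rng = np.random.default_rng(0)
img = np.clip(128 + 30 * rng.standard_normal((256, 256)), 0, 255).astype(np.uint8)
buf = io.BytesIO()
Image.fromarray(img).save(buf, format='JPEG', quality=75)
gray = np.array(Image.open(buf))[3:, 5:]
print('estimated block-grid shift:', estimate_grid_shift(gray))
```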