
Showing papers on "Lossless JPEG published in 2016"


Journal ArticleDOI
TL;DR: A new histogram shifting-based RDH scheme for JPEG images is proposed, in which the zero coefficients remain unchanged and only coefficients with values 1 and -1 are expanded to carry message bits, and a block selection strategy based on the number of zero coefficients in each 8 × 8 block can be utilized to adaptively choose DCT coefficients for data hiding.
Abstract: Among various digital image formats used in daily life, the Joint Photographic Experts Group (JPEG) is the most popular. Therefore, reversible data hiding (RDH) in JPEG images is important and useful for many applications such as archive management and image authentication. However, RDH in JPEG images is considerably more difficult than that in uncompressed images because there is less information redundancy in JPEG images than that in uncompressed images, and any modification in the compressed domain may introduce more distortion in the host image. Furthermore, along with the embedding capacity and fidelity (visual quality), which have to be considered for uncompressed images, the storage size of the marked JPEG file should be considered. In this paper, based on the philosophy behind the JPEG encoder and the statistical properties of discrete cosine transform (DCT) coefficients, we present some basic insights into how to select quantized DCT coefficients for RDH. Then, a new histogram shifting-based RDH scheme for JPEG images is proposed, in which the zero coefficients remain unchanged and only coefficients with values 1 and −1 are expanded to carry message bits. Moreover, a block selection strategy based on the number of zero coefficients in each 8 × 8 block is proposed, which can be utilized to adaptively choose DCT coefficients for data hiding. Experimental results demonstrate that by using the proposed method we can easily realize high embedding capacity and good visual quality. The storage size of the host JPEG file can also be well preserved.
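
To make the embedding rule concrete, below is a minimal Python sketch of a generic histogram-shifting embedder of the kind described: zero coefficients are left untouched, coefficients equal to +1 or -1 carry one message bit each, and larger magnitudes are shifted outward so that extraction stays unambiguous. The exact shifting convention and the zero-count block selection strategy of the paper are not reproduced here.

```python
def hs_embed(ac_coeffs, bits):
    """Generic histogram-shifting embedding on quantized AC coefficients.
    Zeros stay unchanged; +1/-1 carry one bit each; |c| >= 2 is shifted
    outward by one so the decoder can tell carriers from shifted values."""
    out, it = [], iter(bits)
    for c in ac_coeffs:
        if c == 0:
            out.append(0)                              # never modified
        elif c in (1, -1):
            b = next(it, 0)                            # pad with 0 when bits run out
            out.append(c + b if c == 1 else c - b)     # 1 -> 1/2, -1 -> -1/-2
        else:
            out.append(c + 1 if c > 0 else c - 1)      # shift to make room
    return out

def hs_extract(stego):
    """Recover the message bits and restore the original coefficients."""
    bits, orig = [], []
    for c in stego:
        if c == 0:
            orig.append(0)
        elif c in (1, -1):
            bits.append(0); orig.append(c)
        elif c in (2, -2):
            bits.append(1); orig.append(1 if c == 2 else -1)
        else:
            orig.append(c - 1 if c > 0 else c + 1)     # undo the shift
    return bits, orig
```

Running hs_extract(hs_embed(coeffs, bits)) returns the embedded bits (up to zero padding when the payload is shorter than the number of carriers) and the original coefficients, which is the reversibility property RDH requires.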

174 citations


Journal ArticleDOI
TL;DR: The proposed selection-channel-aware features can be efficiently computed and provide a substantial detection gain across all the tested algorithms especially for small payloads.
Abstract: All the modern steganographic algorithms for digital images are content adaptive in the sense that they restrict the embedding modifications to complex regions of the cover, which are difficult to model for the steganalyst. The probabilities with which the individual cover elements are modified (the selection channel) are jointly determined by the size of the embedded payload and the content complexity. The most accurate detection of content-adaptive steganography is currently achieved with the detectors built as classifiers trained on cover and stego features that incorporate the knowledge of the selection channel. While the selection-channel-aware features have been proposed for detection of spatial domain steganography, an equivalent for the JPEG domain does not exist. Since modern steganographic algorithms for JPEG images are currently best detected with the features formed by the histograms of the noise residuals split by their JPEG phase, we use such feature sets as a starting point in this paper and extend their design to incorporate the knowledge of the selection channel. This is achieved by accumulating in the histograms a quantity that bounds the expected absolute distortion of the residual. The proposed features can be efficiently computed and provide a substantial detection gain across all the tested algorithms especially for small payloads.

155 citations


Journal ArticleDOI
TL;DR: This paper proposes a double JPEG compression detection algorithm based on a convolutional neural network (CNN) designed to classify histograms of discrete cosine transform (DCT) coefficients, which differ between single-compressed areas (tampered areas) and double-compressed areas (untampered areas).
Abstract: Double JPEG compression detection has received considerable attention in blind image forensics. However, only few techniques can provide automatic localization. To address this challenge, this paper proposes a double JPEG compression detection algorithm based on a convolutional neural network (CNN). The CNN is designed to classify histograms of discrete cosine transform (DCT) coefficients, which differ between single-compressed areas (tampered areas) and double-compressed areas (untampered areas). The localization result is obtained according to the classification results. Experimental results show that the proposed algorithm performs well in double JPEG compression detection and forgery localization, especially when the first compression quality factor is higher than the second.
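
As a rough illustration of the kind of input such a CNN consumes, the sketch below builds per-frequency histograms of rounded block-DCT coefficients with NumPy/SciPy. The chosen DCT modes, the bin range, and the absence of the actual JPEG quantization table are assumptions of this sketch, and the histograms are computed over the whole image rather than per candidate region as in the paper.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_histograms(gray, modes=((0, 1), (1, 0), (1, 1)), max_abs=20):
    """Normalized histograms of rounded 8x8 block-DCT coefficients for a
    few low-frequency modes; stacked, they form a small 2-D array that a
    CNN classifier could take as input."""
    h, w = gray.shape
    gray = gray[:h - h % 8, :w - w % 8].astype(np.float64) - 128.0
    values = {m: [] for m in modes}
    for i in range(0, gray.shape[0], 8):
        for j in range(0, gray.shape[1], 8):
            coef = dctn(gray[i:i + 8, j:j + 8], norm='ortho')
            for m in modes:
                values[m].append(coef[m])
    edges = np.arange(-max_abs - 0.5, max_abs + 1.5)   # integer-centred bins
    return np.stack([np.histogram(np.round(values[m]), bins=edges)[0] / len(values[m])
                     for m in modes])
```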

143 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: For any type of image, this method performs, on average, as well as or better than any of the existing image formats for lossless compression.
Abstract: We present a novel lossless image compression algorithm. It achieves better compression than popular lossless image formats like PNG and lossless JPEG 2000. Existing image formats have specific strengths and weaknesses: e.g. JPEG works well for photographs, PNG works well for line drawings or images with few distinct colors. For any type of image, our method performs, on average, as well as or better than any of the existing image formats for lossless compression. Interlacing is improved compared to PNG, making the format suitable for progressive decoding and responsive web design.

116 citations


Posted Content
TL;DR: In this article, a CNN was used for JPEG compression artifacts reduction, which can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods.
Abstract: This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.
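
A minimal PyTorch sketch of the residual-learning idea follows: the network predicts only a correction that is added back to the JPEG-decoded input through a skip connection. The layer count, channel width, and the absence of the paper's symmetric weight initialization and multi-objective training are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class ArtifactReductionCNN(nn.Module):
    """Small residual CNN in the spirit of the abstract: the body predicts
    a correction that is added to the compressed input (residual learning
    via a skip connection)."""
    def __init__(self, channels=64, layers=5):
        super().__init__()
        body = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            body += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        body += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*body)

    def forward(self, x):          # x: (N, 1, H, W) JPEG-decoded luma in [0, 1]
        return x + self.body(x)    # skip connection: learn only the residual

# usage sketch: y = ArtifactReductionCNN()(torch.rand(1, 1, 64, 64))
```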

83 citations


Journal ArticleDOI
TL;DR: A framework for lossless image compression based on the integer DTT (iDTT) is proposed; results show that the iDTT algorithm not only achieves a higher compression ratio than the iDCT method but is also compatible with the widely used JPEG standard.

59 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information, and jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains.
Abstract: The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared with the JPEG coded image collections, our method achieves average bit savings of more than 31%.

45 citations


Journal ArticleDOI
TL;DR: The authors shed some light on the recent developments of the JPEG committee and discuss both the current status of JPEG XT and its future plans.
Abstract: The Joint Photographic Experts Group recently produced a new standard, JPEG eXTension (JPEG XT). JPEG XT is backward-compatible with legacy JPEG and offers the ability to encode images of higher precision and higher dynamic range, in both lossy and lossless modes. Here, the authors shed some light on the recent developments of the JPEG committee and discuss both the current status of JPEG XT and its future plans.

43 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: A novel hybrid scan order that rearranges subaperture images into an image sequence is proposed, and its importance to the coding performance of the light field image format is verified.
Abstract: A light field image contains a vast amount of data, as it keeps the full spatio-angular information of the real scene. In this paper, we propose a light field image coding scheme based on the latest JEM coding technologies. We propose a novel hybrid scan order to rearrange subaperture images into an image sequence and verify its importance to the coding performance of the light field image format. The experiment on the EPFL light field image dataset demonstrates that our scheme achieves a 7.06 dB gain compared with directly encoding the image with the JPEG standard. With the QP set to 50, our scheme achieves an average compression ratio of 7107, and still provides higher PSNRs and a better viewing experience than JPEG at a compression ratio of 100.

41 citations


Journal ArticleDOI
01 Jan 2016-Optik
TL;DR: A block-based lossless image compression algorithm using the Hadamard transform and Huffman encoding is proposed; it is a simple, low-complexity algorithm that yields better results in terms of compression ratio than existing lossless compression algorithms such as JPEG 2000.
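
Since the TL;DR names the transform but not its details, here is a minimal NumPy/SciPy sketch of a block Hadamard transform; the 8 × 8 block size and the scaling convention are assumptions, and the Huffman stage is omitted.

```python
import numpy as np
from scipy.linalg import hadamard

def block_hadamard(gray, n=8):
    """2-D Hadamard transform of non-overlapping n x n blocks (n a power
    of two). The transform is exactly invertible over the integers
    (inverse block = H @ C @ H // n**2), which is what makes it usable
    inside a lossless pipeline; entropy coding is not shown."""
    H = hadamard(n)                                  # +/-1 entries, symmetric, H @ H = n * I
    h, w = gray.shape
    coeffs = np.empty((h - h % n, w - w % n), dtype=np.int64)
    for i in range(0, coeffs.shape[0], n):
        for j in range(0, coeffs.shape[1], n):
            block = gray[i:i + n, j:j + n].astype(np.int64)
            coeffs[i:i + n, j:j + n] = H @ block @ H   # H is symmetric, so H.T == H
    return coeffs
```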

36 citations


Journal ArticleDOI
Yi Zhang, Xiangyang Luo, Chunfang Yang, Dengpan Ye, Fenlin Liu
TL;DR: An adaptive steganography algorithm resisting JPEG compression and detection is designed, which utilizes the relationship between the coefficients in a DCT block and the means of those in three adjacent DCT blocks; it offers both good JPEG compression resistance and strong detection resistance.
Abstract: Current typical adaptive steganography algorithms take the detection resistance capability into account adequately but usually cannot extract the embedded secret messages correctly when stego images suffer a compression attack. In order to solve this problem, a framework of adaptive steganography resisting JPEG compression and detection is proposed. Utilizing the relationship between discrete cosine transform (DCT) coefficients, the embedding domain for messages is determined; to maximize the JPEG compression resistance, the modification magnitude of different DCT coefficients caused by message embedding is determined; to ensure completely correct extraction of the embedded messages after JPEG compression, error-correcting codes are used to encode the messages to be embedded; on the basis of current distortion functions, the distortion value of the DCT coefficients corresponding to the modification magnitude in the embedding domain is calculated; and to improve the detection resistance of the stego images and realize minimum-distortion embedding, syndrome-trellis codes are used to embed the encoded messages into the DCT coefficients that have smaller distortion values. Based on the proposed framework, an adaptive steganography algorithm resisting JPEG compression and detection is designed, which utilizes the relationship between the coefficients in a DCT block and the means of those in three adjacent DCT blocks. The experimental results demonstrate that the proposed algorithm not only has good JPEG compression resistance but also strong detection resistance. Compared with the current J-UNIWARD steganography under JPEG compression with quality factor 85, the extraction error rate without pre-compression decreases from about 50% to nearly 0, while the stego images retain good detection resistance compared with a typical robust watermarking algorithm, which shows the validity of the proposed framework.

Proceedings ArticleDOI
05 Jun 2016
TL;DR: This work analyzes the error propagation sensitivity in the DCT network and uses this information to model the impact of introduced errors on the output quality of JPEG, and formulates a novel optimization problem that maximizes power savings under an error budget.
Abstract: JPEG compression based on the discrete cosine transform (DCT) is a key building block in low-power multimedia applications. We use approximate computing to exploit the error tolerance of JPEG and formulate a novel optimization problem that maximizes power savings under an error budget. We analyze the error propagation sensitivity in the DCT network and use this information to model the impact of introduced errors on the output quality. Simulations show up to 15% reduction in area and delay which corresponds to 40% power savings at iso-delay.

Journal ArticleDOI
TL;DR: A novel near-lossless color filter array (CFA) image compression algorithm based on JPEG-LS is proposed for VLSI implementation; it consists of pixel restoration, prediction, run mode, and entropy coding modules.
Abstract: In this paper, a novel near-lossless color filter array (CFA) image compression algorithm based on JPEG-LS is proposed for VLSI implementation. It consists of pixel restoration, prediction, run mode, and entropy coding modules. According to previous research, the context table and row memory consume more than 81% of the hardware cost in a JPEG-LS encoder design. Hence, in this paper, a novel context-free and near-lossless image compression algorithm is presented. Since removing the context model decreases the compression performance, novel prediction, run mode, and modified Golomb-Rice coding techniques were used to improve the compression efficiency. The VLSI architecture of the proposed image compressor consists of a register bank, a pixel restoration module, a predictor, a run mode module, and an entropy encoder. A pipeline technique was used to improve the performance of the design. It contains only 10.9k gates, and the core area is 30,625 μm², synthesized using a 90-nm CMOS process. Compared with previous JPEG-LS designs, this work reduces the gate count by 44.1% and 41.7%, respectively, for five standard and eight endoscopy test images in CFA format. It also improves the average PSNR values by 0.96 and 0.43 dB, respectively, for the same test images.
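
For readers unfamiliar with the entropy coder named in the abstract, below is a sketch of plain (unmodified) Golomb-Rice coding of mapped prediction residuals; the paper's modified coder, predictor, and run mode are not reproduced.

```python
def map_residual(e):
    """Map a signed prediction residual to a non-negative integer
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...), as is common before
    Rice coding."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Textbook Golomb-Rice code: unary-coded quotient, then the k
    low-order bits of the remainder, returned as a bit string."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, f'0{k}b') if k else '')

# e.g. rice_encode(map_residual(-3), k=2) == '1001'
```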

Proceedings ArticleDOI
01 Jun 2016
TL;DR: For the first time, this paper focuses on applying the JPEG XR standard, which supports lossy and lossless coding for various kinds of images including high dynamic range images, and shows that the proposed encryption method can provide approximately the same compression performance as that of JPEG XR compression without any encryption.
Abstract: In many multimedia applications, image encryption has to be conducted prior to image compression. This paper proposes an Encryption-then-Compression system using a JPEG XR friendly perceptual encryption method, which enables encryption to be conducted prior to JPEG XR compression. The proposed encryption method can provide approximately the same compression performance as that of JPEG XR compression without any encryption. It is also shown that the proposed system, which consists of four block-based encryption steps, provides a reasonably high level of security. Most conventional perceptual encryption methods have not been designed for international compression standards; for the first time, this paper focuses on applying the JPEG XR standard, which supports lossy and lossless coding for various kinds of images, including high dynamic range images.

Journal ArticleDOI
TL;DR: A new double compression detection algorithm is proposed that exploits footprints introduced by all non-zero and zero AC modes based on Benford’s law in a low-dimensional representation via PCA and is applicable to detect double compression from a JPEG file and localize tampered regions in actual image forgery scenarios.
Abstract: The current double JPEG compression detection techniques identify whether or not a JPEG image file has undergone compression twice, by knowing its embedded quantization table. This paper addresses another forensic scenario in which the quantization table of a JPEG file is not explicitly or reliably known, which may compel the forensic analyst to blindly reveal the recompression clues. To do this, we first statistically analyze the theory behind quantized alternating current (AC) modes in JPEG compression and show that the number of quantized AC modes required to detect double compression is a function of both the image's block texture and the compression's quality level in a fresh formulation. Consequently, a new double compression detection algorithm is proposed that exploits footprints introduced by all non-zero and zero AC modes based on Benford's law in a low-dimensional representation via PCA. Then, some evaluation frameworks are constructed to assess the robustness and generalization of the proposed method on various textured images belonging to three standard databases as well as different compression quality level settings. The average F1-measure score on all tested databases in the proposed method is about 74%, much better than the state-of-the-art performance of 67.7%. The proposed algorithm is also applicable to detect double compression from a JPEG file and localize tampered regions in actual image forgery scenarios. An implementation of our algorithms and the databases used are available upon request to fellow researchers.
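
To illustrate the Benford's-law footprint the abstract relies on, the sketch below compares the first-digit distribution of non-zero quantized AC coefficients with the Benford reference and reports the zero fraction. The per-mode split and the PCA projection used in the paper are omitted, and the feature layout is an assumption.

```python
import numpy as np

def benford_features(ac_coeffs):
    """First-digit histogram of non-zero quantized AC coefficients versus
    the Benford reference log10(1 + 1/d), plus the fraction of zeros."""
    mags = np.abs(np.asarray(ac_coeffs, dtype=np.int64))
    nonzero = mags[mags > 0]
    # leading decimal digit of each non-zero magnitude
    digits = nonzero // 10 ** np.floor(np.log10(nonzero)).astype(np.int64)
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    benford = np.log10(1 + 1 / np.arange(1, 10))
    zero_fraction = 1.0 - nonzero.size / mags.size
    return observed - benford, zero_fraction            # deviations + zero share
```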

Journal ArticleDOI
TL;DR: The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression, which has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.
Abstract: The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8 × 8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering.
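
The derivation itself is in the paper, but the dependence it describes can be motivated by a standard identity: for an orthonormal 8 × 8 DCT, the (biased) sample variance of a block equals the AC energy divided by 64, so its expectation is driven by the second moments of the (Laplacian-modelled) DCT coefficients after quantization. The lines below state only this identity, not the paper's closed-form result, and ignore final pixel rounding and clipping.

```latex
% Orthonormal 8x8 DCT, block pixels x_{i,j}, coefficients C_{u,v}:
% Parseval gives  \sum_{i,j} x_{i,j}^2 = \sum_{u,v} C_{u,v}^2  and
% C_{0,0} = 8\,\bar{x}, hence the biased sample variance of a block is
\[
  \sigma^2_{\mathrm{block}}
  = \frac{1}{64}\sum_{i,j} x_{i,j}^2 - \bar{x}^2
  = \frac{1}{64}\sum_{(u,v)\neq(0,0)} C_{u,v}^2 .
\]
% After JPEG quantization with steps q_{u,v}, each coefficient becomes
% \hat{C}_{u,v} = q_{u,v}\,\mathrm{round}\!\left(C_{u,v}/q_{u,v}\right), so
\[
  \mathbb{E}\!\left[\sigma^2_{\mathrm{block}}\right]
  = \frac{1}{64}\sum_{(u,v)\neq(0,0)}
    \mathbb{E}\!\left[\hat{C}_{u,v}^2\right],
\]
% and each expectation depends only on q_{u,v} and the Laplacian
% parameter of C_{u,v}, which is the dependence stated in the abstract.
```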

Journal ArticleDOI
TL;DR: A new tool for forensic recovery of single- and multi-fragment JPEG/JFIF data files is presented and compared with the well-known state-of-the-art Adroit Photo Forensics tool, which it significantly outperforms.
Abstract: In this paper, we present a new tool for forensic recovery of single and multi-fragment JPEG/JFIF data files. First, we discuss the basic design and the technical methods composing our proposed data carving algorithm. Next, we compare the performance of our method with the well-known Adroit Photo Forensics (APF) state-of-the-art tool. This comparison is centered on both the carving results as well as the obtained data processing speed, and is evaluated in terms of the results that can be obtained for several well-known reference data sets. It is important to note that we specifically focus on the fundamental recovery and fragment matching performance of the tools by forcing them to use various assumed cluster sizes. We show that on all accounts our new tool can significantly outperform APF. This improvement in data processing speed and carving results can be mostly attributed to novel methods to iterate and reduce the data search space and to a novel parameterless method to determine the end of a fragment based on the pixel data. Finally, we discuss several options for future research.

Book ChapterDOI
29 Jul 2016
TL;DR: An improved distortion function for the generalized uniform embedding strategy, called improved UERD (IUERD), is proposed; it gains favorable performance in terms of secure embedding capacity against steganalysis.
Abstract: With the wide application of the minimal distortion embedding framework, a well-designed distortion function is of vital importance. In this paper, we propose an improved distortion function for the generalized uniform embedding strategy, called improved UERD (IUERD). Although UERD has achieved great success, there still exists room for improvement in the design of its distortion function. As a result, the mutual correlations among DCT blocks are utilized more efficiently in the proposed distortion function, which leads to less statistical detectability. The effectiveness of the proposed IUERD is verified with the state-of-the-art steganalyzer (JRM) on the BOSSbase database. Compared with prior art, the proposed scheme gains favorable performance in terms of secure embedding capacity against steganalysis.

Journal ArticleDOI
TL;DR: This paper proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream, which allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolutioncodestream.
Abstract: Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.

Proceedings ArticleDOI
20 Jun 2016
TL;DR: This work addresses the known problem of detecting a previous compression in JPEG images, focusing on the challenging case of high and very high quality factors (>= 90) as well as repeated compression with identical or nearly identical quality factors.
Abstract: We address the known problem of detecting a previous compression in JPEG images, focusing on the challenging case of high and very high quality factors (>= 90) as well as repeated compression with identical or nearly identical quality factors. We first revisit the approaches based on Benford-Fourier analysis in the DCT domain and block convergence analysis in the spatial domain. Both were originally conceived for specific scenarios. Leveraging decision tree theory, we design a combined approach that complements their discriminatory capabilities. We obtain a set of novel detectors targeted to high quality grayscale JPEG images.


Proceedings ArticleDOI
01 Sep 2016
TL;DR: A short review of the known technologies for improving JPEG compression is given, and they are evaluated on the basis of the JPEG XT demo implementation, which puts the compression gains into perspective against more modern compression formats such as JPEG 2000.
Abstract: Despite its age, JPEG (formally, Rec. ITU-T T.81 — ISO/IEC 10918-1) is still the omnipresent image file format for lossy compression of photographic images. While its rate-distortion performance is not competitive with state-of-the-art schemes like JPEG 2000 or HEVC, manifold techniques have been developed over the years to improve its compression performance. This article provides a short review of the known technologies and evaluates them on the basis of the JPEG XT demo implementation available on the home page of the JPEG committee. It also puts the compression gains into perspective of more modern compression formats such as JPEG 2000.

Journal ArticleDOI
TL;DR: Experimental results show that JP2k outperforms JP3D in the lossy case, not only in terms of mean square error (MSE) as CR increases, but also in compression time.
Abstract: In this paper, we evaluate lossless and lossy compression techniques to compress quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the case of lossless compression, predictive coding with JPEG lossless (JPEG-LS), JPEG2000, and JP3D is evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JP2k outperforms the other methods by achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because some data is lost in lossy compression, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D in the lossy case, not only in terms of mean square error (MSE) as CR increases, but also in compression time. In addition, our compression results with both algorithms demonstrate that with high CR values the three-dimensional profile of the RBCs can be preserved, and the morphological and biochemical parameters remain within the range of reported values.

Proceedings ArticleDOI
01 Jun 2016
TL;DR: The experimental results show that the proposed scheme is effective not only for still images but also for video sequences, in terms of query performance such as false positive, false negative, and true positive matches, while keeping a high level of security.
Abstract: A secure identification scheme for JPEG images is proposed in this paper. The aim is to securely and robustly identify JPEG images that are generated from the same original image under various compression levels. A property of the positive and negative signs of DCT coefficients is employed to achieve a robust scheme. The proposed scheme is robust against differences in compression levels and does not produce false negative matches at any compression level. Conventional schemes that have this property are not secure. To construct a secure identification system, we propose a novel identification scheme that consists of a new error correction technique with 1-bit parity and a fuzzy commitment scheme, which is a well-known biometric cryptosystem. The experimental results show that the proposed scheme is effective not only for still images but also for video sequences, in terms of query performance such as false positive, false negative, and true positive matches, while keeping a high level of security.
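
A rough Python sketch of the sign-based robustness idea follows: the signs of a few low-frequency block-DCT coefficients are collected as a binary fingerprint, and two fingerprints are compared by their fraction of differing bits. The mode set is an assumption, and the 1-bit-parity error correction and the fuzzy commitment scheme that give the system its security are deliberately omitted.

```python
import numpy as np
from scipy.fft import dctn

def sign_fingerprint(gray, modes=((0, 1), (1, 0), (1, 1), (0, 2), (2, 0))):
    """Collect the signs of selected low-frequency 8x8 block-DCT
    coefficients; signs tend to survive requantization, so images derived
    from the same source keep similar bit patterns."""
    h, w = gray.shape
    gray = gray[:h - h % 8, :w - w % 8].astype(np.float64) - 128.0
    bits = []
    for i in range(0, gray.shape[0], 8):
        for j in range(0, gray.shape[1], 8):
            coef = dctn(gray[i:i + 8, j:j + 8], norm='ortho')
            bits.extend(int(coef[m] >= 0) for m in modes)
    return np.array(bits, dtype=np.uint8)

def fingerprint_distance(a, b):
    """Fraction of differing sign bits between two fingerprints."""
    return float(np.count_nonzero(a != b)) / len(a)
```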

Journal ArticleDOI
TL;DR: An overview of the objective quality assessment that will be conducted as part of the JPEG XS evaluation procedures is given, and it is found that the most complex algorithm, HEVC SCC intra, achieves the highest compression efficiency on screen content.
Abstract: Today, many existing types of video transmission and storage infrastructure are not able to handle UHD uncompressed video in real time. To reduce the required bit rates, a low-latency lightweight compression scheme is needed. To this end, several standardization efforts, such as Display Stream Compression, Advanced DSC, and JPEG XS, are currently being made. Focusing on screen content use cases, this paper provides a comparison of existing codecs suited for this field of application. In particular, the performance of DSC, VC-2, JPEG 2000 (in low-latency and low-complexity configurations), JPEG and HEVC Screen Content Coding Extension (SCC) in intra mode are evaluated. First, quality is assessed in single and multiple generations. Then, error robustness is evaluated by inserting one-bit errors at random positions in the compressed bitstreams. Unsurprisingly, the most complex algorithm, HEVC SCC intra, achieves the highest compression efficiency on screen content. JPEG 2000 performs well in the three experiments while HEVC SCC does not provide multi-generation robustness. DSC guarantees quality preservation in single generation at high bit rates and VC-2 provides very high error resilience. This work gives the reader an overview of the objective quality assessment that will be conducted as part of JPEG XS evaluation procedures.

Proceedings ArticleDOI
01 Jul 2016
TL;DR: A binary tree based lossless depth coding scheme is proposed that arranges the residual frame into an integer or binary residual bitmap; lossless coding avoids rendering artifacts in synthesized views caused by depth compression.
Abstract: Depth maps are becoming increasingly important in the context of emerging video coding and processing applications. Depth images represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at the position of object boundaries. To enable high quality view rendering at the receiver side, preservation of these characteristics is important. Lossless coding enables avoiding rendering artifacts in synthesized views due to depth compression artifacts. In this paper, we propose a binary tree based lossless depth coding scheme that arranges the residual frame into integer or binary residual bitmap. High spatial correlation in depth residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context based arithmetic coding. On the standard 3D video sequences, the proposed lossless depth coding has achieved compression ratio in the range of 20 to 80.

Posted Content
TL;DR: In this paper, the authors proposed lossless and near-lossless compression algorithms for multi-channel biomedical signals, which make use of information theory and signal processing tools (such as universal coding, universal prediction, and fast online implementations of multivariate recursive least squares), combined with simple methods to exploit spatial and temporal redundancies typically present in biomedical signals.
Abstract: This work proposes lossless and near-lossless compression algorithms for multi-channel biomedical signals. The algorithms are sequential and efficient, which makes them suitable for low-latency and low-power signal transmission applications. We make use of information theory and signal processing tools (such as universal coding, universal prediction, and fast online implementations of multivariate recursive least squares), combined with simple methods to exploit spatial as well as temporal redundancies typically present in biomedical signals. The algorithms are tested with publicly available electroencephalogram and electrocardiogram databases, surpassing in all cases the current state of the art in near-lossless and lossless compression ratios.

Proceedings ArticleDOI
01 Jan 2016
TL;DR: This work shows that a saliency-driven variable quantization JPEG coding method significantly improves perceived image quality, and devises an approach to equate Likert-type opinions to bitrate differences.
Abstract: Saliency-driven image coding is well worth pursuing. Previous studies on JPEG and JPEG2000 have suggested that region-of-interest coding brings little overall benefit compared to the standard implementation. We show that our saliency-driven variable quantization JPEG coding method significantly improves perceived image quality. To validate our findings, we performed large crowdsourcing experiments involving several hundred contributors, on 44 representative images. To quantify the level of improvement, we devised an approach to equate Likert-type opinions to bitrate differences. Our saliency-driven coding showed 11% bpp average benefit over the standard JPEG.
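
To make the coding idea tangible, the following sketch scales a base JPEG quantization table per 8 × 8 block according to a saliency map in [0, 1]: salient blocks get finer quantization, non-salient blocks coarser. The scaling rule and the factor range are assumptions rather than the calibrated settings used in the study.

```python
import numpy as np

def block_quant_tables(base_table, saliency, lo=0.7, hi=2.0):
    """Per-block quantization tables scaled by a saliency map.
    base_table: 8x8 JPEG quantization table; saliency: array in [0, 1]
    with dimensions that are multiples of 8 (an assumption of the sketch).
    High saliency -> scale factor near lo (finer quantization),
    low saliency -> factor near hi (coarser quantization)."""
    h, w = saliency.shape
    tables = np.empty((h // 8, w // 8, 8, 8))
    for bi in range(h // 8):
        for bj in range(w // 8):
            s = saliency[bi * 8:(bi + 1) * 8, bj * 8:(bj + 1) * 8].mean()
            factor = hi - (hi - lo) * s
            tables[bi, bj] = np.clip(np.round(base_table * factor), 1, 255)
    return tables
```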

Journal ArticleDOI
22 Dec 2016-PLOS ONE
TL;DR: For a large and diverse set of images, it is found that SS-DWT significantly improves bitrates of non-photographic images, and the compression scheme is compliant with the JPEG 2000 part 2 standard.
Abstract: In order to improve bitrates of lossless JPEG 2000, we propose to modify the discrete wavelet transform (DWT) by skipping selected steps of its computation. We employ a heuristic to construct the skipped steps DWT (SS-DWT) in an image-adaptive way and define fixed SS-DWT variants. For a large and diverse set of images, we find that SS-DWT significantly improves bitrates of non-photographic images. From a practical standpoint, the most interesting results are obtained by applying entropy estimation of coding effects for selecting among the fixed SS-DWT variants. This way we get the compression scheme that, as opposed to the general SS-DWT case, is compliant with the JPEG 2000 part 2 standard. It provides average bitrate improvement of roughly 5% for the entire test-set, whereas the overall compression time becomes only 3% greater than that of the unmodified JPEG 2000. Bitrates of photographic and non-photographic images are improved by roughly 0.5% and 14%, respectively. At a significantly increased cost of exploiting a heuristic, selecting the steps to be skipped based on the actual bitrate instead of an estimated one, and by applying reversible denoising and lifting steps to SS-DWT, we have attained greater bitrate improvements of up to about 17.5% for non-photographic images.
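
The sketch below shows one level of the reversible 5/3 lifting scheme on a 1-D signal with each lifting step individually skippable, which is the mechanical ingredient a skipped-steps DWT builds on. The heuristic that decides which steps to skip, the 2-D extension, and the exact JPEG 2000 boundary handling are not reproduced; a simple edge extension and an even-length signal are assumed.

```python
import numpy as np

def lifting_53(x, do_predict=True, do_update=True):
    """One level of integer 5/3 lifting with optional step skipping.
    Every combination of flags remains integer-reversible, which is the
    property a skipped-steps DWT relies on."""
    x = np.asarray(x, dtype=np.int64)
    assert len(x) % 2 == 0, "even-length signal assumed in this sketch"
    even, odd = x[0::2].copy(), x[1::2].copy()
    if do_predict:                                   # predict (high-pass) step
        even_right = np.append(even[1:], even[-1])   # simple edge extension
        odd -= (even + even_right) >> 1
    if do_update:                                    # update (low-pass) step
        odd_left = np.append(odd[0], odd[:-1])       # simple edge extension
        even += (odd_left + odd + 2) >> 2
    return even, odd                                 # approximation, detail
```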

Journal ArticleDOI
TL;DR: Rigorous simulations show the proposed framework and compression algorithm outperform several recent popular compression algorithms for WSNs such as Lossless Entropy Compression, S-Lempel-Ziv-Welch (LZW), and Lightweight Temporal Compression (LTC) using various real-world sensor datasets, demonstrating the merit of the proposed frameworks for unified temporal lossless and lossy data compression.
Abstract: Energy efficiency is one of the most critical issues in the design and deployment of Wireless Sensor Networks (WSNs). Data compression is an important approach to reducing energy consumption of data gathering in multihop sensor networks. Existing compression algorithms only apply to either lossless or lossy data compression, but not to both. This article presents a generalized predictive coding framework for unified lossless and lossy data compression. In addition, we devise a novel algorithm for lossless compression to significantly improve data compression performance for various data collections and applications in WSNs. Rigorous simulations show our proposed framework and compression algorithm outperform several recent popular compression algorithms for WSNs such as Lossless Entropy Compression (LEC), S-Lempel-Ziv-Welch (LZW), and Lightweight Temporal Compression (LTC) using various real-world sensor datasets, demonstrating the merit of the proposed framework for unified temporal lossless and lossy data compression in WSNs.
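
As a generic illustration of how a single predictive-coding loop can cover both the lossless and lossy cases, the sketch below uses a trivial previous-sample predictor and a near-lossless residual quantizer whose tolerance bounds the per-sample error; tolerance = 0 degenerates to lossless operation. The actual predictor, adaptation, and entropy coder of the proposed algorithm are not reproduced here.

```python
def predictive_encode(samples, tolerance=0):
    """Unified lossless / near-lossless predictive coding sketch: predict
    each sample from the previously *reconstructed* one and quantize the
    residual with step 2*tolerance + 1 (error per sample <= tolerance)."""
    step = 2 * tolerance + 1
    prev, codes = 0, []
    for s in samples:
        e = s - prev                                   # prediction error
        q = (e + tolerance) // step if e >= 0 else -((-e + tolerance) // step)
        codes.append(q)                                # would feed the entropy coder
        prev = prev + q * step                         # track decoder reconstruction
    return codes

def predictive_decode(codes, tolerance=0):
    """Mirror of the encoder loop; with tolerance = 0 it reproduces the
    input samples exactly."""
    step, prev, out = 2 * tolerance + 1, 0, []
    for q in codes:
        prev = prev + q * step
        out.append(prev)
    return out
```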