
Showing papers on "Lossless JPEG published in 2015"


Journal ArticleDOI
TL;DR: A novel feature set for steganalysis of JPEG images engineered as first-order statistics of quantized noise residuals obtained from the decompressed JPEG image using 64 kernels of the discrete cosine transform (DCT) (the so-called undecimated DCT).
Abstract: This paper introduces a novel feature set for steganalysis of JPEG images. The features are engineered as first-order statistics of quantized noise residuals obtained from the decompressed JPEG image using 64 kernels of the discrete cosine transform (DCT), the so-called undecimated DCT. This approach can be interpreted as a projection model in the JPEG domain, thus forming a counterpart to the projection spatial rich model. The most appealing aspects of the proposed feature set are its low computational complexity, its lower dimensionality compared with other rich models, and its competitive performance with respect to previously proposed JPEG-domain steganalysis features.
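As a rough illustration of the undecimated-DCT idea, the following Python sketch (assuming NumPy and SciPy are available) convolves a decompressed grayscale image with all 64 8x8 DCT basis kernels and collects clipped histograms of the quantized residual magnitudes. The quantization step, clipping threshold, and the omission of the phase-splitting used in the published feature set are simplifying assumptions, not the authors' exact parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def dct_kernels(n=8):
    """Build the n*n two-dimensional DCT basis kernels (each n x n)."""
    c = np.array([np.sqrt(1.0 / n)] + [np.sqrt(2.0 / n)] * (n - 1))
    basis = np.array([[c[k] * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
                       for i in range(n)] for k in range(n)])
    return [np.outer(basis[u], basis[v]) for u in range(n) for v in range(n)]

def dctr_like_features(img, q=4.0, T=4):
    """First-order statistics (clipped histograms) of quantized noise residuals
    obtained by filtering the decompressed image with all 64 DCT kernels."""
    feats = []
    for ker in dct_kernels():
        resid = convolve2d(img.astype(np.float64), ker, mode="valid")
        rq = np.clip(np.round(np.abs(resid) / q), 0, T)      # quantize and clip
        hist = np.bincount(rq.astype(int).ravel(), minlength=T + 1)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)   # 64 * (T + 1) features
```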

350 citations


Journal ArticleDOI
TL;DR: A thorough objective investigation of the performance-complexity trade-offs offered by these techniques on medical data is carried out, and the presented techniques are compared with H.265/MPEG-H HEVC, currently the state-of-the-art video codec.
Abstract: The amount of image data generated each day in health care is ever increasing, especially in combination with improved scanning resolutions and the importance of volumetric image data sets. Handling these images raises the requirement for efficient compression, archival and transmission techniques. Currently, JPEG 2000's core coding system, defined in Part 1, is the default choice for medical images, as it is the DICOM-supported compression technique offering the best available performance for this type of data. Yet, JPEG 2000 provides many options that allow for further improving compression performance, for which DICOM offers no guidelines. Moreover, over the last years, various studies indicate that performance improvements in wavelet-based image coding are possible when employing directional transforms. In this paper, we thoroughly investigate techniques for improving the performance of JPEG 2000 for volumetric medical image compression. For this purpose, we make use of a newly developed generic codec framework that supports JPEG 2000 with its volumetric extension (JP3D), various directional wavelet transforms, as well as a generic intra-band prediction mode. A thorough objective investigation of the performance-complexity trade-offs offered by these techniques on medical data is carried out. Moreover, we compare the presented techniques to H.265/MPEG-H HEVC, currently the state-of-the-art video codec. Additionally, we present results of a first-time study on subjective visual performance when using the aforementioned techniques. This enables us to provide a set of guidelines and settings on how to optimally compress medical volumetric images at an acceptable complexity level. Highlights: We investigated how to optimally compress volumetric medical images with JP3D. We extend JP3D with directional wavelets and intra-band prediction. Volumetric wavelets and entropy coding improve the compression performance. Compression gains for medical images with directional wavelets are often minimal. We recommend further adoption of JP3D for volumetric medical image compression.

139 citations


Proceedings ArticleDOI
TL;DR: A novel feature set called PHase Aware pRojection Model (PHARM) in which residuals obtained using a small number of small-support kernels are represented using first-order statistics of their random projections as in the projection spatial rich model PSRM.
Abstract: State-of-the-art JPEG steganographic algorithms, such as J-UNIWARD, are currently better detected in the spatial domain than in the JPEG domain. Rich models built from pixel residuals seem to better capture the impact of embedding than features constructed as co-occurrences of quantized JPEG coefficients. However, when steganalyzing JPEG steganographic algorithms in the spatial domain, the pixels' statistical properties vary because of the underlying 8 × 8 pixel grid imposed by the compression. In order to detect JPEG steganography more accurately, we split the statistics of noise residuals based on their phase w.r.t. the 8 × 8 grid. Because of the heterogeneity of pixels in a decompressed image, it also makes sense to keep the kernel size of the pixel predictors small, as larger kernels mix up qualitatively different statistics and thus lose detection power. Based on these observations, we propose a novel feature set called the PHase Aware pRojection Model (PHARM), in which residuals obtained using a small number of small-support kernels are represented by first-order statistics of their random projections, as in the projection spatial rich model (PSRM). The benefit of making the features “phase-aware” is shown experimentally on selected modern JPEG steganographic algorithms, with the biggest improvement seen for J-UNIWARD. Additionally, the PHARM feature vector can be computed at a fraction of the computational cost of existing projection rich models.
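A minimal sketch of the phase-splitting step, assuming a decompressed grayscale image as a NumPy array: a single small-support residual (a horizontal difference) is histogrammed separately for each of the 64 phases with respect to the 8 × 8 JPEG grid. The random projections of PHARM are omitted for brevity, and the kernel, quantization step and threshold are illustrative choices.

```python
import numpy as np

def phase_split_histograms(img, q=1.0, T=3):
    """Histogram a small-support residual separately for each phase w.r.t. the 8x8 grid."""
    resid = np.diff(img.astype(np.float64), axis=1)       # kernel [-1, 1]
    rq = np.clip(np.round(resid / q), -T, T)
    feats = []
    for dy in range(8):
        for dx in range(8):
            sub = rq[dy::8, dx::8]                        # residuals sharing one grid phase
            hist, _ = np.histogram(sub, bins=np.arange(-T - 0.5, T + 1.5))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```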

101 citations



Proceedings ArticleDOI
30 Jul 2015
TL;DR: This paper proposes an Encryption-then-Compression system using a JPEG-friendly perceptual encryption method that can be applied prior to JPEG compression and that provides approximately the same compression performance as JPEG compression without any encryption.
Abstract: In many multimedia applications, image encryption has to be conducted prior to image compression. This paper proposes an Encryption-then-Compression system using a JPEG-friendly perceptual encryption method that can be applied prior to JPEG compression. The proposed encryption method provides approximately the same compression performance as JPEG compression without any encryption, for both grayscale and color images. It is also shown that the proposed system consists of four block-based encryption steps and provides a reasonably high level of security. Most conventional perceptual encryption methods have not been designed for international compression standards; this paper focuses on the JPEG standard, one of the most widely used image compression standards.
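The sketch below illustrates the flavour of key-driven, block-based perceptual encryption applied before JPEG compression: block shuffling, random block rotation and a negative-positive transform on a grayscale image. The 16 × 16 block size, the particular steps and their order are illustrative assumptions rather than the authors' exact four-step scheme.

```python
import numpy as np

def encrypt_blocks(img, key, block=16):
    """Key-driven block shuffling, rotation and negative-positive transform (grayscale)."""
    rng = np.random.default_rng(key)
    h, w = img.shape[0] // block * block, img.shape[1] // block * block
    img = img[:h, :w].copy()
    blocks = [img[y:y + block, x:x + block]
              for y in range(0, h, block) for x in range(0, w, block)]
    order = rng.permutation(len(blocks))          # 1) shuffle block positions
    out = []
    for idx in order:
        b = blocks[idx]
        b = np.rot90(b, k=int(rng.integers(4)))   # 2) random rotation
        if rng.integers(2):
            b = 255 - b                           # 3) negative-positive transform
        out.append(b)
    enc = np.zeros_like(img)
    i = 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            enc[y:y + block, x:x + block] = out[i]
            i += 1
    return enc
```

Decryption would regenerate the same pseudo-random choices from the key and apply the inverse steps in reverse order.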

64 citations


Proceedings ArticleDOI
19 Apr 2015
TL;DR: The experimental results demonstrated that the proposed ETC system achieved both acceptable compression performance and enough key-space for secure image communication while remaining compatible with the JPEG 2000 standard.
Abstract: A new Encryption-then-Compression (ETC) system for the JPEG 2000 standard is proposed in this paper. An ETC system makes image communication secure and efficient by combining perceptual encryption with image compression. The proposed system uses sign-scrambling and block-shuffling of discrete wavelet transform (DWT) coefficients as perceptual encryption. Unlike conventional ETC systems, the proposed system is compatible with the JPEG 2000 standard because the perceptually encrypted coefficients can be efficiently compressed by JPEG 2000. The experimental results demonstrate that the proposed system achieves both acceptable compression performance and a sufficiently large key space for secure image communication while remaining compatible with the JPEG 2000 standard.
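A minimal sketch of the two perceptual-encryption primitives named above, sign scrambling and block shuffling, applied to a single DWT subband stored as a 2-D NumPy array. The 8 × 8 block size and the use of a seeded pseudo-random generator as the key are illustrative assumptions; in the actual system the scrambled coefficients are then passed to the JPEG 2000 entropy coder.

```python
import numpy as np

def scramble_subband(coeffs, key, block=8):
    """Sign-scramble and block-shuffle one DWT subband under a key-seeded PRNG."""
    rng = np.random.default_rng(key)
    c = np.array(coeffs, dtype=np.float64)
    # 1) sign scrambling: pseudo-randomly flip the sign of each coefficient
    c *= rng.choice([-1.0, 1.0], size=c.shape)
    # 2) block shuffling: permute non-overlapping blocks of the subband
    h, w = (c.shape[0] // block) * block, (c.shape[1] // block) * block
    blocks = [c[y:y + block, x:x + block].copy()
              for y in range(0, h, block) for x in range(0, w, block)]
    perm = rng.permutation(len(blocks))
    i = 0
    for y in range(0, h, block):
        for x in range(0, w, block):
            c[y:y + block, x:x + block] = blocks[perm[i]]
            i += 1
    return c
```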

60 citations


Proceedings ArticleDOI
19 Apr 2015
TL;DR: The proposed algorithm for the compression of plenoptic images is compared with state-of-the-art image compression algorithms, namely JPEG 2000 and JPEG XR, and the results demonstrate that it improves coding efficiency.
Abstract: Plenoptic images are obtained from the projection of light crossing a matrix of microlens arrays, which replicates the scene from different directions onto the camera sensor. Plenoptic images have a different structure from regular digital images, and novel algorithms for data compression are currently under research. This paper proposes an algorithm for the compression of plenoptic images. The micro-images composing a plenoptic image are processed by an adaptive prediction tool aimed at reducing data correlation before entropy coding takes place. The algorithm is compared with state-of-the-art image compression algorithms, namely JPEG 2000 and JPEG XR. The obtained results demonstrate that the proposed algorithm improves coding efficiency.

53 citations


Journal ArticleDOI
TL;DR: This paper develops a simple yet very effective detection algorithm to identify decompressed JPEG images; extensive experiments on various sources of images show that it outperforms the state-of-the-art methods by a large margin, especially for high-quality compressed images.
Abstract: Identifying whether an image has been JPEG compressed is an important issue in forensic practice. The state-of-the-art methods fail to identify high-quality compressed images, which are common on the Internet. In this paper, we provide a novel quantization noise-based solution to reveal the traces of JPEG compression. Based on the analysis of noises in multiple-cycle JPEG compression, we define a quantity called forward quantization noise. We analytically derive that a decompressed JPEG image has a lower variance of forward quantization noise than its uncompressed counterpart. Based on this conclusion, we develop a simple yet very effective detection algorithm to identify decompressed JPEG images. We show through extensive experiments on various sources of images that our method outperforms the state-of-the-art methods by a large margin, especially for high-quality compressed images. We also demonstrate that the proposed method is robust to small image size and chroma subsampling. The proposed algorithm can be applied in practical applications such as Internet image classification and forgery detection.
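A minimal sketch of the statistic in question: take the 8 × 8 block DCT of a grayscale image and measure the variance of the rounding error left after quantization (a unit quantization step is used here as an illustrative choice). Per the analysis above, a decompressed JPEG image should yield a lower variance than an uncompressed one.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    """Orthonormal 2-D DCT of one 8x8 block."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def forward_quantization_noise_variance(img):
    """Variance of the rounding error of block-DCT coefficients (unit step)."""
    img = img.astype(np.float64) - 128.0
    h, w = (img.shape[0] // 8) * 8, (img.shape[1] // 8) * 8
    noise = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            coeffs = block_dct2(img[y:y + 8, x:x + 8])
            noise.append((coeffs - np.round(coeffs)).ravel())
    return np.var(np.concatenate(noise))
```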

52 citations


Journal ArticleDOI
TL;DR: A rate-distortion performance analysis of the HEVC MSP profile in comparison to other popular still image and video compression schemes, including JPEG, JPEG 2000, JPEG XR, H.264/MPEG-4 AVC, VP8, VP9, and WebP is presented.
Abstract: The first version of the High Efficiency Video Coding (HEVC) standard was approved by both ITU-T and ISO/IEC in 2013 and includes three profiles: Main and Main 10 for typical video data with 8 and 10 bits, respectively, as well as a profile referred to as Main Still Picture (MSP) profile. Apparently, the MSP profile extends the HEVC application space toward still images which, in turn, brings up the question of how this HEVC profile performs relative to existing still image coding technologies. This paper aims at addressing this question from a coding-efficiency point-of-view by presenting a rate-distortion performance analysis of the HEVC MSP profile in comparison to other popular still image and video compression schemes, including JPEG, JPEG 2000, JPEG XR, H.264/MPEG-4 AVC, VP8, VP9, and WebP. In summary, it can be stated that the HEVC MSP profile provides average bit-rate savings in the range from 10% to 44% relative to the whole set of competing video and still image compression schemes when averaged over a representative test set of photographic still images. Compared with Baseline JPEG alone, the average bit-rate saving for the HEVC MSP profile is 44%.

40 citations


Proceedings ArticleDOI
Yi Zhang, Xiangyang Luo, Chunfang Yang, Dengpan Ye, Fenlin Liu
24 Aug 2015
TL;DR: The proposed JPEG-compression-resistant adaptive steganography algorithm not only achieves a high correct rate of message extraction after JPEG compression, increasing from about 60% to nearly 100% compared with J-UNIWARD steganography under JPEG compression with quality factor 75, but also offers strong resistance to detection.
Abstract: Current typical adaptive steganography algorithms cannot extract the embedded secret messages correctly after compression. In order to solve this problem, a JPEG-compression-resistant adaptive steganography algorithm is proposed. Utilizing the relationship between DCT coefficients, the embedding domain is determined. The modification magnitude of different DCT coefficients is determined according to the quality factor of the JPEG compression. To ensure completely correct extraction of the embedded messages after JPEG compression, RS codes are used to encode the messages to be embedded. Besides, based on the energy function of PQe steganography and the distortion function of J-UNIWARD steganography, the corresponding distortion value of each DCT coefficient is calculated. Using these distortion values, STCs are used to embed the encoded messages into the DCT coefficients with smaller distortion values. The experimental results under different JPEG quality factors and payloads demonstrate that the proposed algorithm not only achieves a high correct rate of message extraction after JPEG compression, increasing from about 60% to nearly 100% compared with J-UNIWARD steganography under JPEG compression with quality factor 75, but also offers strong resistance to detection.

37 citations


Journal ArticleDOI
TL;DR: A statistical analysis of JPEG noises, including the quantization noise and the rounding noise during a JPEG compression cycle, reveals that the noise distributions in higher compression cycles differ from those in the first compression cycle and depend on the quantization parameters used between two successive cycles.
Abstract: In this paper, we present a statistical analysis of JPEG noises, including the quantization noise and the rounding noise during a JPEG compression cycle. The JPEG noises in the first compression cycle have been well studied; however, so far less attention has been paid to the statistical model of JPEG noises in higher compression cycles. Our analysis reveals that the noise distributions in higher compression cycles are different from those in the first compression cycle, and that they depend on the quantization parameters used between two successive cycles. To demonstrate the benefits of the analysis, we apply the statistical model to JPEG quantization step estimation. We construct a sufficient statistic by exploiting the derived noise distributions, and show that the statistic has several special properties that reveal the ground-truth quantization step. Experimental results demonstrate that the proposed estimator can uncover JPEG compression history with satisfactory performance.

Journal ArticleDOI
TL;DR: A watermarking-based image authentication scheme in the discrete cosine transform (DCT) domain robust to JPEG compression is presented and achieves very good watermark imperceptibility and is able to detect and locate malicious attacks with good precision.
Abstract: A watermarking-based image authentication scheme in the discrete cosine transform (DCT) domain robust to JPEG compression is presented. The binary authentication code is generated from a pseudo-random sequence based on a secret key and a block-dependent feature, protecting the scheme against cut-and-paste attacks. The watermark is embedded in low-frequency DCT coefficients selected by the secret key using a modified quantisation index modulation approach. Before embedding, the selected coefficients are quantised using the JPEG quantisation matrix for a selected quality factor, protecting the scheme against JPEG compression with higher quality factors. Experimental results show that the proposed technique achieves very good watermark imperceptibility and is able to detect and locate malicious attacks with good precision. Compared with other existing schemes, the proposed algorithm achieves better performance regarding false positive and false negative detection rates and in discriminating malicious attacks from JPEG compression.
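A minimal sketch of the quantisation index modulation (QIM) step: one authentication bit is embedded by forcing a selected low-frequency DCT coefficient onto an even or odd multiple of the JPEG quantisation step q for the chosen quality factor. The undithered, parity-based form shown here is a simplification of the modified QIM described above.

```python
import numpy as np

def qim_embed(coeff, bit, q):
    """Embed one bit by moving the coefficient to the nearest multiple of q
    whose index has the desired parity (0 or 1)."""
    k = np.round(coeff / q)
    if int(k) % 2 != bit:
        k += 1 if coeff >= k * q else -1   # step to the nearest lattice point of correct parity
    return k * q

def qim_extract(coeff, q):
    """Recover the bit as the parity of the nearest quantisation index."""
    return int(np.round(coeff / q)) % 2
```

Re-quantising with the JPEG step of the target quality factor is what lets the embedded parity survive subsequent JPEG compression at that or a higher quality factor, as described above.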

Journal ArticleDOI
TL;DR: This paper introduces the ability to recover fragments of a JPEG file when the associated file header is missing, and shows that given the knowledge of Huffman code tables, the technique can very reliably identify the remaining decoder settings for all fragments of size 4 KiB or above.
Abstract: File carving techniques allow for recovery of files from storage devices in the absence of any file system metadata. When data are encoded and compressed, the current paradigm of carving requires the knowledge of the compression and encoding settings to succeed. In this paper, we advance the state of the art in JPEG file carving by introducing the ability to recover fragments of a JPEG file when the associated file header is missing. To realize this, we examined JPEG file headers of a large number of images collected from Flickr photo sharing site to identify their structural characteristics. Our carving approach utilizes this information in a new technique that performs two tasks. First, it decompresses the incomplete file data to obtain a spatial domain representation. Second, it determines the spatial domain parameters to produce a perceptually meaningful image. Recovery results on a variety of JPEG file fragments show that given the knowledge of Huffman code tables, our technique can very reliably identify the remaining decoder settings for all fragments of size 4 KiB or above. Although errors due to detection of image width, placement of image blocks, and color and brightness adjustments can occur, these errors reduce significantly when fragment sizes are >32 KiB.
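A minimal sketch of walking the marker segments of a JPEG header (quantisation tables, Huffman tables, frame header and so on), the kind of structural metadata the study above collected from a large corpus of images; fill bytes, error handling and parsing of the entropy-coded data are omitted.

```python
def list_jpeg_segments(data):
    """Return (marker, offset, length) for each header segment of a JPEG byte string."""
    assert data[:2] == b"\xff\xd8", "missing SOI marker"
    segments, i = [], 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                          # SOS: entropy-coded scan data follows
            segments.append(("SOS", i, None))
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segments.append((f"0xFF{marker:02X}", i, length))
        i += 2 + length                             # the length field counts its own two bytes
    return segments
```

For a typical baseline JPEG this lists the APPn, DQT, SOF0, DHT and SOS segments in order, which is exactly the header information that is lost when only a fragment of the file survives.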

Journal ArticleDOI
TL;DR: This paper develops a novel forensic technique that is able to detect chains of operators applied to an image and derives an accurate mathematical framework to fully characterize the probabilistic distributions of the discrete cosine transform coefficients of the quantized and filtered image.
Abstract: Powerful image editing software is nowadays capable of creating sophisticated and visually compelling fake photographs, thus posing serious issues to the trustworthiness of digital contents as a true representation of reality. Digital image forensics has emerged to help regain some trust in digital images by providing valuable aids in learning the history of an image. Unfortunately, in real scenarios, its application is limited, since multiple processing operators are likely to be applied, which alters the characteristic footprints exploited by current forensic tools. In this paper, we develop a novel forensic technique that is able to detect chains of operators applied to an image. In particular, we study the combination of Joint Photographic Experts Group compression and full-frame linear filtering, and derive an accurate mathematical framework to fully characterize the probabilistic distributions of the discrete cosine transform (DCT) coefficients of the quantized and filtered image. We then exploit such knowledge to define a set of features from the DCT distribution and build an effective classifier able to jointly disclose the quality factor of the applied compression and the filter kernel. Extensive experimental analysis illustrates the efficiency and versatility of the proposed approach, which effectively overcomes the state-of-the-art.

Journal ArticleDOI
Yair Wiseman
31 Jul 2015
TL;DR: This paper proposes a method to adjust the order of JPEG compression to an improved order that is more suitable for GPS images.
Abstract: GPS devices typically make use of images that are too large to be stored as bitmaps, so the images are always compressed. The widespread compression technique is JPEG; however, JPEG has a disadvantage: it assumes that the average color at the beginning of each line of blocks is similar to the average color at the end of the preceding line of blocks. This assumption is almost always wrong for GPS images. This paper proposes a method to adjust the order of JPEG compression to an improved order that is more suitable for GPS images.

Journal ArticleDOI
TL;DR: This paper proposes a new scheme based on discrete hyper-chaotic system and modified zigzag scan coding that achieves high compression performance and robust security simultaneously and gives the comparisons between this scheme and Zhang's scheme.

Journal ArticleDOI
TL;DR: This paper improves HEVC lossless coding using sample-based angular prediction (SAP), modified level binarization, and binarization table selection with the weighted sum of previously encoded level values, providing a compression ratio of up to 11.32 and reduced decoding complexity.
Abstract: After the development of the next-generation video coding standard, referred to as High Efficiency Video Coding (HEVC), the joint collaborative team of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group has now also standardized a lossless extension of the standard. HEVC was originally designed for lossy video compression and is thus not ideal for lossless video compression. In this paper, we propose an efficient residual data coding method for HEVC lossless video compression. Based on the fact that the statistics of residual data differ between lossy and lossless coding, we improve HEVC lossless coding using sample-based angular prediction (SAP), modified level binarization, and binarization table selection with the weighted sum of previously encoded level values. Experimental results show that the proposed method provides a compression ratio of up to 11.32 and reduces decoding complexity.

Proceedings ArticleDOI
10 Dec 2015
TL;DR: A JPEG transmorphing algorithm is presented, which converts an image to its processed version while preserving sufficient information about the original image in the application markers of the processed JPEG image file, so that the original image can later be recovered.
Abstract: Picture-related applications are extremely popular because pictures present attractive and vivid information. Nowadays, people record everyday life, communicate with each other, and enjoy entertainment using various imaging applications. In many cases, processed images need to be recovered to their original versions. However, most approaches require storage or transmission of both the original and the processed image separately, which increases the bandwidth and storage required. In contrast, in this paper we present a JPEG transmorphing algorithm, which converts an image to its processed version while preserving sufficient information about the original image within the processed image. It does this by inserting partial information about the original image into the application markers of the processed JPEG image file, so that the original image can later be recovered. Experiments are conducted, and the results show that the proposed method offers a number of attractive features and good performance in many applications.
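A minimal sketch, in the spirit of the transmorphing idea, of stashing auxiliary data (for instance, a compressed record of the original regions) inside JPEG application marker segments right after the SOI marker. The choice of the APP11 marker and the 4-byte "ORIG" tag are illustrative assumptions, not the authors' actual container format.

```python
def insert_app_segments(jpeg_bytes, payload, marker=0xEB, tag=b"ORIG"):
    """Split `payload` into APPn segments and insert them after the SOI marker."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG file (missing SOI)"
    chunks = []
    max_data = 65533 - len(tag)                 # the 16-bit length field counts itself
    for i in range(0, len(payload), max_data):
        data = tag + payload[i:i + max_data]
        seg_len = len(data) + 2
        chunks.append(bytes([0xFF, marker]) + seg_len.to_bytes(2, "big") + data)
    return jpeg_bytes[:2] + b"".join(chunks) + jpeg_bytes[2:]
```

Standard decoders skip unknown APPn segments, so the processed image still displays normally while the hidden payload allows later recovery of the original.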

Journal ArticleDOI
TL;DR: A hardwired design of embedded compression engine targeting the reduction of full high-definition video transmission bandwidth over the wireless network is developed that adopts an intra-coding framework and supports both lossless and rate-controlled near lossless compression options.
Abstract: A hardwired design of an embedded compression engine targeting the reduction of full high-definition (HD) video transmission bandwidth over wireless networks is developed. It adopts an intra-coding framework and supports both lossless and rate-controlled near-lossless compression options. The lossless compression algorithm is based on a simplified Context-Based, Adaptive, Lossless Image Coding (CALIC) scheme featuring pixelwise gradient-adjusted prediction and an error-feedback mechanism. To reduce the implementation complexity, an adaptive Golomb-Rice coding scheme in conjunction with a context modeling technique is used in lieu of an adaptive arithmetic coder. With prediction adjustments, the near-lossless compression option can be implemented on top of the lossless compression engine with minimal overhead. An efficient bit-rate control scheme is also developed and can support rate- or distortion-constrained control. For full HD (previously encoded) and non-full-HD test sequences, the lossless compression ratio of the proposed scheme is, on average, 21% and 46% better, respectively, than the Joint Photographic Experts Group Lossless Standard (JPEG-LS) and the Fast, Efficient Lossless Image Compression System (FELICS) schemes. The near-lossless compression option can offer an additional 6%-20% bit-rate reduction while keeping the peak signal-to-noise ratio at 50 dB or higher. The codec is further optimized complexity-wise to facilitate a high-throughput chip implementation. It features a five-stage pipelined architecture and two parallel computing kernels to enhance the throughput. Fabricated using the Taiwan Semiconductor Manufacturing Company (TSMC) 90-nm complementary metal-oxide-semiconductor technology, the design can operate at 200 MHz and supports a 64 frames/s processing rate for full HD videos.
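A minimal sketch of the gradient-adjusted prediction (GAP) at the heart of CALIC-style coders, assuming the usual causal neighbourhood of the current pixel (W, N, NE, NW, WW, NN, NNE) and the commonly cited thresholds; the simplified engine described above may deviate from this textbook form.

```python
def gap_predict(W, N, NE, NW, WW, NN, NNE):
    """Gradient-adjusted prediction of a pixel from its causal neighbours."""
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal gradient estimate
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical gradient estimate
    if dv - dh > 80:
        return W                                    # sharp horizontal edge
    if dh - dv > 80:
        return N                                    # sharp vertical edge
    pred = (W + N) / 2.0 + (NE - NW) / 4.0
    if dv - dh > 32:
        pred = (pred + W) / 2.0
    elif dv - dh > 8:
        pred = (3.0 * pred + W) / 4.0
    elif dh - dv > 32:
        pred = (pred + N) / 2.0
    elif dh - dv > 8:
        pred = (3.0 * pred + N) / 4.0
    return pred
```

The prediction error is then mapped to a non-negative integer and coded with adaptive Golomb-Rice codes, whose single rate parameter keeps the entropy coder far simpler in hardware than an adaptive arithmetic coder.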

Journal ArticleDOI
TL;DR: The problem related to energy conservation in a resource-constrained wireless visual sensor network applied for habitat monitoring, border patrol, battlefield surveillance and so on, where images have to be transmitted over a wireless medium with limited bandwidth, is addressed.
Abstract: The problem of energy conservation in a resource-constrained wireless visual sensor network applied to habitat monitoring, border patrol, battlefield surveillance and similar tasks, where images have to be transmitted over a wireless medium with limited bandwidth, is addressed. The proposed approximation band transform algorithm is unique and novel since it extracts and encodes only the approximations of the image using fixed-point arithmetic, without applying highpass analysis filters. This low bit-rate image compression algorithm is specifically designed for resource-constrained low-power sensors. Its performance is analysed in terms of bit rate (bits per pixel), image quality, processing time and energy consumption on an Atmel ATmega128 processor. It is shown that this algorithm consumes only 12% of the energy needed by the Independent JPEG Group (IJG) implementation of JPEG and gives better results than JPEG at very low bit rates; thus, the proposed scheme can significantly enhance the lifetime of low-power sensors.
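A minimal sketch of extracting only an approximation band with integer (fixed-point) arithmetic and no highpass analysis filter, here as a 2 × 2 block average with rounding; the actual approximation band transform may use a different lowpass kernel and word length.

```python
import numpy as np

def approximation_band(img):
    """Integer-only 2x2 lowpass (block average) producing one approximation band."""
    x = np.asarray(img, dtype=np.uint16)
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]
    s = x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]
    return ((s + 2) >> 2).astype(np.uint8)          # divide by 4 with rounding
```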

Proceedings ArticleDOI
21 Dec 2015
TL;DR: The experimental results prove that the visual quality of BPG compression is higher than that of JPEG with equal or reduced file size, and this is the first ever proposed hardware architecture for B PG compression.
Abstract: This paper proposes a hardware architecture for the newly introduced Better Portable Graphics (BPG) compression algorithm. Since its introduction in 1987, the Joint Photographic Experts Group (JPEG) graphics format has been the de facto choice for image compression. However, the new compression technique BPG outperforms JPEG in terms of compression quality and size of the compressed file. The objective of this paper is to present a hardware architecture for enhanced real time compression of the image. The complexity of the BPG encoder library is reduced by using hardware compression wherever possible over software compression because of the real time requirements, possibly in embedded systems with low latency requirements. BPG compression is based on the High Efficiency Video Coding (HEVC), which is considered a major advance in compression techniques. In this paper, only image compression is considered. The proposed architecture is prototyped in Matlab/Simulink. The experimental results prove that the visual quality of BPG compression is higher than that of JPEG with equal or reduced file size. To the best of the authors' knowledge, this is the first ever proposed hardware architecture for BPG compression.

Proceedings ArticleDOI
Yair Wiseman
15 Apr 2015
TL;DR: This paper suggests a way to change this method to an improved method that is more suitable for JPEG-based GPS images.
Abstract: Pictures used by GPS devices are usually too large to be stored as bitmaps, so they are generally compressed. The common compression method is JPEG; however, JPEG assumes that the average color at the beginning of each line of blocks is usually similar to the average color at the end of the preceding line of blocks. This assumption is often incorrect for GPS pictures. This paper suggests a way to change the method to an improved one that is more suitable for JPEG-based GPS images.

Journal ArticleDOI
TL;DR: The proposed method slightly modifies the DCT coefficients to confuse the traces introduced by double JPEG compression with the same quantization matrix, and determines the quantity of modification by constructing a linear model to improve the security of the anti-forensics.
Abstract: Double JPEG compression detection plays an important role in digital image forensics. Recently, Huang et al. (IEEE Trans Inf Forensics Security 5(4):848-856, 2010) first pointed out that the number of different discrete cosine transform (DCT) coefficients would monotonically decrease when repeatedly compressing a JPEG image with the same quantization matrix, and a strategy based on random permutation was developed to expose such an operation successfully. In this paper, we propose an anti-forensic method to fool this detector. The proposed method slightly modifies the DCT coefficients to confuse the traces introduced by double JPEG compression with the same quantization matrix. By investigating the relationship between the DCT coefficients of the first compression and those of the second one, we determine the quantity of modification by constructing a linear model. Furthermore, in order to improve the security of the anti-forensics, the locations of modification are adaptively selected according to the complexity of the image texture. Extensive experiments evaluated on 10,000 natural images show that the proposed method can effectively confuse the detector proposed in Huang et al. (IEEE Trans Inf Forensics Security 5(4):848-856, 2010), while keeping higher visual quality and leaving fewer other detectable statistical artifacts.

Proceedings ArticleDOI
01 Aug 2015
TL;DR: The Error Level Analysis (ELA) technique was evaluated with different types of image tampering; in the experiments, ELA proved reliable for JPEG compression, image splicing and image retouching forgery.
Abstract: Advances in digital image tampering have encouraged studies in the image forensics field. Image tampering can be found in various image formats, such as the Joint Photographic Experts Group (JPEG) format. JPEG is the most common format supported by devices and applications; therefore, researchers have been studying the implementation of the JPEG algorithm in image forensics. In this paper, the Error Level Analysis (ELA) technique was evaluated with different types of image tampering, including JPEG compression, image splicing, copy-move and image retouching. The experiments showed that ELA is reliable for detecting JPEG compression, image splicing and image retouching forgery.
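A minimal sketch of Error Level Analysis using Pillow, under the usual formulation: re-save the image at a fixed JPEG quality and amplify the per-pixel difference, so regions edited after the last compression stand out with a different error level. The quality of 90 and the brightness scale are common but illustrative settings.

```python
from io import BytesIO
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=20):
    """Return an amplified difference image between `path` and its JPEG re-save."""
    original = Image.open(path).convert("RGB")
    buf = BytesIO()
    original.save(buf, "JPEG", quality=quality)     # controlled re-compression
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved) # per-pixel error level
    return ImageEnhance.Brightness(diff).enhance(scale)
```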

Journal ArticleDOI
TL;DR: The proposed approach attempts to better exploit image correlation in different directions by adopting a context-based adaptive scanning process; the resulting lossless image compression method shows competitive compression results compared with conventional lossless coding schemes such as PNG and JPEG 2000.
Abstract: In most classical lossless image compression schemes, images are scanned line by line, so only horizontal patterns are effectively compressed. The proposed approach attempts to better exploit image correlation in different directions by adopting a context-based adaptive scanning process. The adopted scanning process aims to generate a compact one-dimensional image representation by using an image-gradient-based scan. This process tries to find the best space-filling curve, one that scans the image along the direction of minimal pixel-intensity change. Such a scan reduces high-frequency content and provides a smooth, highly correlated one-dimensional signal that is easy to compress. The suggested representation acts as a pre-processing step that transforms the image source into a strongly correlated representation before applying coding algorithms. Based on this representation, a new lossless image compression method is designed. Our experimental results show that the proposed image representation significantly improves the signal properties in terms of correlation and monotonicity, and hence the compression performance. The suggested coding scheme shows competitive compression results compared with conventional lossless coding schemes such as PNG and JPEG 2000.

Journal ArticleDOI
TL;DR: The proposed model provides a probability distribution for each block which is modeled by a mixture of non-parametric distributions by exploiting the high correlation between neighboring blocks, and provides significant improvements over the state-of-the-art lossless image compression standards and algorithms.
Abstract: In this paper, we propose a new approach for a block-based lossless image compression using finite mixture models and adaptive arithmetic coding. Conventional arithmetic encoders encode and decode images sample-by-sample in raster scan order. In addition, conventional arithmetic coding models provide the probability distribution for whole source symbols to be compressed or transmitted, including static and adaptive models. However, in the proposed scheme, an image is divided into non-overlapping blocks and then each block is encoded separately by using arithmetic coding. The proposed model provides a probability distribution for each block which is modeled by a mixture of non-parametric distributions by exploiting the high correlation between neighboring blocks. The Expectation-Maximization algorithm is used to find the maximum likelihood mixture parameters in order to maximize the arithmetic coding compression efficiency. The results of comparative experiments show that we provide significant improvements over the state-of-the-art lossless image compression standards and algorithms. In addition, experimental results show that the proposed compression algorithm beats JPEG-LS by 9.7 % when switching between pixel and prediction error domains.

Journal ArticleDOI
TL;DR: The paper describes a parallel method for lossless data compression that uses graphics processing units (GPUs) and achieves better compression speed than standard CPU-based compression tools used in personal computers.

Proceedings ArticleDOI
16 Jun 2015
TL;DR: Since the performance of the spectral decorrelation step has a direct impact on the compression ratio (CR), it is important to employ the most suitable spectral decorrelator in terms of computational complexity and CR.
Abstract: Integer-coefficient discrete wavelet transform (DWT) filters widely used in the literature are implemented and investigated as spectral decorrelators. As the performance of the spectral decorrelation step has a direct impact on the compression ratio (CR), it is important to employ the most suitable spectral decorrelator in terms of computational complexity and CR. Tests using the AVIRIS image data set are carried out, and CRs corresponding to various subband decomposition levels are presented within a lossless hyperspectral compression framework. The two-dimensional image corresponding to each band is compressed using the JPEG-LS algorithm. The results suggest that the Cohen-Daubechies-Feauveau (CDF) 9/7 integer-coefficient wavelet transform with five levels of spectral subband decomposition would be an efficient spectral decorrelator for onboard lossless hyperspectral image compression.
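A minimal sketch of one decomposition level of an integer-coefficient lifting DWT along one dimension, using the reversible LeGall 5/3 filter of JPEG 2000 as the simplest member of the family compared above (the preferred CDF 9/7 integer transform adds further lifting steps with its own rounded coefficients). Periodic boundary extension is used here for brevity instead of the symmetric extension used in practice.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the reversible 5/3 lifting transform of an even-length 1-D signal;
    returns integer (approximation, detail) subbands."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # predict: detail = odd - floor((left_even + right_even) / 2)
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # update: approx = even + floor((previous_detail + detail + 2) / 4)
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d
```

Applying such a transform along the spectral axis decorrelates the bands, after which each resulting 2-D plane can be handed to JPEG-LS exactly as described above.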

Journal ArticleDOI
TL;DR: Experimental results show that the proposed protocol, which encrypts a JPEG image so that it can be rescaled in the encrypted domain, is well suited to rescaling the privacy-protected JPEG file.

Journal ArticleDOI
TL;DR: This study applies RDLS to discrete wavelet transform (DWT) in JPEG 2000 lossless coding, employs a heuristic for image-adaptive RDLS filter selection, and finds that RDLS significantly improves bitrates of non-photographic images and of images with impulse noise added, while bit rates of photographic images are improved by below 1% on average.
Abstract: In a previous study, we noticed that the lifting step of a color space transform might increase the amount of noise that must be encoded during compression of an image. To alleviate this problem, we proposed the replacement of lifting steps with reversible denoising and lifting steps (RDLS), which are basically lifting steps integrated with denoising filters. We found the approach effective for some of the tested images. In this study, we apply RDLS to the discrete wavelet transform (DWT) in JPEG 2000 lossless coding. We evaluate RDLS effects on bitrates using various denoising filters and a large number of diverse images. We employ a heuristic for image-adaptive RDLS filter selection; based on its empirical outcomes, we also propose a fixed filter selection variant. We find that RDLS significantly improves bitrates of non-photographic images and of images with impulse noise added, while bitrates of photographic images are improved by below 1% on average. Considering that the DWT stage may worsen bitrates of some images, we propose a couple of practical compression schemes based on JPEG 2000 and RDLS. For non-photographic images, we obtain an average bitrate improvement of about 12% for fixed filter selection and about 14% for image-adaptive selection. Highlights: Denoising is integrated with DWT lifting steps in lossless JPEG 2000. A heuristic is used for image-adaptive selection of denoising filters. Significant bitrate improvements are obtained for non-photographic images. Consistently good performance is observed on images with impulse noise. Compression schemes with various bitrate-complexity tradeoffs are proposed.
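A minimal sketch of the RDLS idea for a single prediction step: the even samples are smoothed before being used as the predictor argument, while the stored samples themselves remain untouched, so the step is still exactly invertible. The 3-tap average stands in for the denoising filters evaluated in the paper, and a 5/3-style predictor is assumed.

```python
import numpy as np

def rdls_predict_step(even, odd):
    """Prediction lifting step whose input is denoised; the stored `even` samples
    are not modified, so the decoder can recompute `smoothed` and invert exactly."""
    even = np.asarray(even, dtype=np.int64)
    odd = np.asarray(odd, dtype=np.int64)
    smoothed = (np.roll(even, 1) + even + np.roll(even, -1)) // 3   # denoised copy
    detail = odd - ((smoothed + np.roll(smoothed, -1)) >> 1)        # 5/3-style predict
    return detail   # inverse: odd = detail + ((smoothed + np.roll(smoothed, -1)) >> 1)
```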