
Showing papers on "Lossless JPEG published in 2017"


Journal ArticleDOI
TL;DR: This paper explores the capability of CNNs to capture DJPEG artifacts directly from images and shows that the proposed CNN-based detectors achieve good performance even with small-size images, outperforming state-of-the-art solutions, especially in the non-aligned case.

169 citations


Proceedings ArticleDOI
20 Jun 2017
TL;DR: This paper ports JPEG-phase awareness into the architecture of a convolutional neural network to boost the detection accuracy of such detectors and introduces the "catalyst kernel", which allows the network to learn kernels more relevant for detecting the stego signal introduced by JPEG steganography.
Abstract: Detection of modern JPEG steganographic algorithms has traditionally relied on features aware of the JPEG phase. In this paper, we port JPEG-phase awareness into the architecture of a convolutional neural network to boost the detection accuracy of such detectors. Another innovative concept introduced into the detector is the "catalyst kernel" that, together with the traditional high-pass filters used to pre-process images, allows the network to learn kernels more relevant for detecting the stego signal introduced by JPEG steganography. Experiments with the J-UNIWARD and UED-JC embedding algorithms are used to demonstrate the merit of the proposed design.

148 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme can cluster the inter-block embedding changes and perform better than the state-of-the-art steganographic method.

133 citations


Journal ArticleDOI
TL;DR: This letter proposes an Encryption-then-Compression system using a JPEG XR/JPEG-LS friendly perceptual encryption method, which enables encryption to be conducted prior to compression with the JPEG XR/JPEG-LS international standard lossless compression methods.
Abstract: In many multimedia applications, image encryption has to be conducted prior to image compression. This letter proposes an Encryption-then-Compression system using a JPEG XR/JPEG-LS friendly perceptual encryption method, which enables encryption to be conducted prior to compression with JPEG XR/JPEG-LS, used here as international standard lossless compression methods. The proposed encryption scheme provides approximately the same compression performance as lossless compression without any encryption. It is also shown that the proposed system consists of four block-based encryption steps and provides a reasonably high level of security. Existing conventional encryption methods have not been designed for international lossless compression standards; this letter is the first to focus on applying these standards. Key words: Encryption-then-Compression system, lossless compression, international standard, JPEG XR, JPEG-LS
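The four block-based encryption steps are not spelled out in the abstract. As a rough illustration of how block-based perceptual encryption can precede a lossless codec, here is a minimal Python sketch assuming typical steps (key-driven block scrambling, per-block rotation/flip, and a negative-positive transform); the function name, block size and key handling are illustrative, not the authors' implementation.

```python
import numpy as np

def block_permute_encrypt(img, block=16, seed=2017):
    """Illustrative block-based perceptual encryption: scramble block
    positions, randomly rotate/flip blocks, and apply a negative-positive
    transform, so that a subsequent lossless codec (e.g. JPEG-LS) still
    sees locally smooth content inside each block."""
    rng = np.random.default_rng(seed)        # the seed plays the role of the key
    h, w = img.shape[:2]
    hb, wb = h // block, w // block
    img = img[:hb * block, :wb * block].copy()

    # Split into non-overlapping blocks.
    blocks = [img[i*block:(i+1)*block, j*block:(j+1)*block].copy()
              for i in range(hb) for j in range(wb)]

    # Step 1: key-driven permutation of block positions.
    order = rng.permutation(len(blocks))

    out = np.empty_like(img)
    for dst, src in enumerate(order):
        b = blocks[src]
        # Step 2: random rotation and horizontal flip per block.
        b = np.rot90(b, k=int(rng.integers(4)))
        if rng.integers(2):
            b = b[:, ::-1]
        # Step 3: negative-positive transform on selected blocks (8-bit data assumed).
        if rng.integers(2):
            b = 255 - b
        i, j = divmod(dst, wb)
        out[i*block:(i+1)*block, j*block:(j+1)*block] = b
    return out
```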

75 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate the efficiency of the 3-D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8- and 16-bit-depth contents, respectively, when compared with JPEG-LS, JPEG2000, CALIC, and HEVC, as well as other proposals based on the MRP algorithm.
Abstract: This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3-D-MRP, is based on the principle of minimum rate predictors (MRPs), which is one of the state-of-the-art lossless compression technologies presented in the data compression literature. The main features of the proposed method include the use of 3-D predictors, 3-D-block octree partitioning and classification, volume-based optimization, and support for 16-bit-depth images. Experimental results demonstrate the efficiency of the 3-D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8- and 16-bit-depth contents, respectively, when compared with JPEG-LS, JPEG2000, CALIC, and HEVC, as well as other proposals based on the MRP algorithm.
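The minimum-rate-predictor machinery is not reproduced here; the sketch below only illustrates the underlying idea of causal 3-D prediction followed by residual coding, with a fixed averaging predictor standing in for the paper's optimized, classified predictors. The function name and predictor are assumptions.

```python
import numpy as np

def causal_3d_residuals(volume):
    """Predict each voxel from three causal neighbours (left, above, and
    the same position in the previous slice) and return the residuals a
    lossless entropy coder would then compress. `volume` is a
    (slices, rows, cols) integer array; the decoder can invert this
    exactly because every neighbour is already decoded when needed."""
    v = volume.astype(np.int64)
    left = np.zeros_like(v); left[:, :, 1:] = v[:, :, :-1]
    up   = np.zeros_like(v); up[:, 1:, :]  = v[:, :-1, :]
    prev = np.zeros_like(v); prev[1:]      = v[:-1]
    pred = (left + up + prev) // 3          # fixed averaging predictor
    return v - pred
```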

68 citations


Journal ArticleDOI
TL;DR: A new lossless compression algorithm for biased bitstreams with a better compression ratio than the binary arithmetic coding method, used to fulfill the task of pre-reserving space.

49 citations


Journal ArticleDOI
TL;DR: A no-reference image quality assessment (NR-IQA) method for JPEG images that obtains the quality score by considering the blocking artifacts and the luminance changes from all nonoverlapping 8 × 8 blocks in one JPEG image.
Abstract: When scoring the quality of JPEG images, the two main considerations for viewers are blocking artifacts and improper luminance changes, such as blur. In this letter, we first propose two measures to estimate the blockiness and the luminance change within individual blocks. Then, a no-reference image quality assessment (NR-IQA) method for JPEG images is proposed. Our method obtains the quality score by considering the blocking artifacts and the luminance changes from all nonoverlapping 8 × 8 blocks in one JPEG image. The proposed method has been tested on five public IQA databases and compared with five state-of-the-art NR-IQA methods for JPEG images. The experimental results show that our method is more consistent with subjective evaluations than the state-of-the-art NR-IQA methods. The MATLAB source code of our method is available at http://image.ustc.edu.cn/IQA.html .
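The letter's exact blockiness measure is not given above, so the following is a minimal stand-in that captures the idea: compare luminance jumps across 8x8 block boundaries with jumps inside blocks. The function name and the ratio form are illustrative.

```python
import numpy as np

def blockiness_score(gray):
    """Rough blockiness estimate for a JPEG-coded grayscale image:
    mean absolute difference across 8x8 block boundaries divided by the
    mean difference elsewhere. Values well above 1 suggest visible
    blocking artifacts."""
    g = gray.astype(np.float64)
    dh = np.abs(np.diff(g, axis=1))          # horizontal neighbour differences
    dv = np.abs(np.diff(g, axis=0))          # vertical neighbour differences
    cols = np.arange(dh.shape[1])
    rows = np.arange(dv.shape[0])
    at_vedge = (cols % 8) == 7               # columns sitting on block borders
    at_hedge = (rows % 8) == 7
    boundary = dh[:, at_vedge].mean() + dv[at_hedge, :].mean()
    interior = dh[:, ~at_vedge].mean() + dv[~at_hedge, :].mean()
    return boundary / (interior + 1e-9)
```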

37 citations


Journal ArticleDOI
01 May 2017
TL;DR: This work proposes new approaches that combine image reduction and expansion techniques, digital watermarking, and lossless compression standards such as the JPEG-LS (JLS) and TIFF formats, and provides significant improvements over the well-known JPEG image compression standard.
Abstract: The computerization of images has become important for many medical applications. Nevertheless, the huge volume of medical images can rapidly saturate transmission channels, especially in the telemedicine field, and may encumber storage systems when images are saved locally. Data compression represents the most widely used solution to this problem: it can minimize the data space and may reduce both the data transfer time and bandwidth consumption. In this context, we propose new approaches that combine image reduction and expansion techniques, digital watermarking, and lossless compression standards such as the JPEG-LS (JLS) and TIFF formats. We named these compression methods wREPro.TIFF (watermarked Reduction/Expansion Protocol combined with the TIFF format) and wREPro.JLS (wREPro combined with the JPEG-LS format). The results of comparative experiments show that we provide significant improvements over the well-known JPEG image compression standard. Indeed, our proposed compression algorithms ensure better preservation of image quality, notably at high compression ratios.

36 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that, compared with current J-UNIWARD steganography under JPEG compression with quality factor 85, the extraction error rates decrease from above 20% to nearly 0, while the stego images retain better detection-resistant performance compared with the existing JPEG compression and detection resistant adaptive steganography algorithm.
Abstract: Since it is difficult for current information hiding algorithms to acquire strong JPEG compression resistance while achieving good detection resistance, a JPEG compression and detection resistant adaptive steganography algorithm using feature regions is proposed. Based on the proposed feature region extraction and selection algorithms, an embedding domain that is robust to JPEG compression and introduces less embedding distortion can be obtained. Using current distortion functions, the distortion values of the DCT coefficients in the embedding domain can be calculated. Combined with error-correcting coding and STCs, the messages are embedded into the cover images with minimum embedding distortion and can be extracted with high accuracy after JPEG compression; hence, JPEG compression resistance and detection resistance are enhanced at the same time. The experimental results demonstrate that, compared with current J-UNIWARD steganography under JPEG compression with quality factor 85, the extraction error rates decrease from above 20% to nearly 0, while the stego images retain better detection-resistant performance compared with the existing JPEG compression and detection resistant adaptive steganography algorithm.

35 citations


Journal ArticleDOI
TL;DR: This paper puts forward a new joint lossless image compression and encryption algorithm based on chaotic maps that keeps all original information intact and passes many security tests, such as the sensitivity, entropy, autocorrelation, and NIST SP800-22 tests.
Abstract: Nowadays, poor security and low transmission and storage efficiency of images have become serious concerns. In order to improve the situation, this paper puts forward a new joint lossless image compression and encryption algorithm based on chaotic maps that keeps all original information intact. The lossless compression uses the SPIHT (Set Partitioning in Hierarchical Trees) encoding method based on the integer wavelet transform, and multiple rounds of encryption are applied to the wavelet coefficients and the SPIHT coding using several kinds of chaotic maps. Experimental results show that the compressed file size is about 50% of the original file size, which is a relatively good lossless compression ratio. Besides, the encryption method passes many security tests, such as the sensitivity, entropy, autocorrelation, and NIST SP800-22 tests. The algorithm has high application value in the medical field and for national security departments whose image files require relatively high quality.
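The specific chaotic maps, rounds and keying of the paper are not reproduced; the sketch below only shows the generic building block of chaos-based encryption, a logistic-map keystream XORed with byte data. The parameters x0 and r are illustrative.

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.3141592, r=3.99):
    """Generate a byte keystream from the logistic map x <- r*x*(1-x).
    x0 (acting as the key) and r are illustrative values; any x0 in (0,1)
    with r close to 4 keeps the map in its chaotic regime."""
    x = x0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def xor_encrypt(data: bytes, x0=0.3141592) -> bytes:
    """XOR data (e.g. SPIHT-coded bytes) with the chaotic keystream; the
    operation is its own inverse, so the same call decrypts."""
    ks = logistic_keystream(len(data), x0=x0)
    return bytes(np.frombuffer(data, dtype=np.uint8) ^ ks)
```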

35 citations


Proceedings ArticleDOI
01 Aug 2017
TL;DR: Experimental results prove that training on such a kind of most powerful attacks allows good detection in the presence of a much wider variety of attacks and processing.
Abstract: In this paper we present an adversary-aware double JPEG detector which is capable of detecting the presence of two JPEG compression steps even in the presence of heterogeneous processing and counter-forensic (C-F) attacks. The detector is based on an SVM classifier fed with a large number of features and trained to recognise the traces left by double JPEG detection in the presence of attacks. Since it is not possible to train the SVM on all possible kinds of processing and C-F attacks, a selected set of images, manipulated with a limited number of attacks is added to the training set. The processing tools used for training are chosen among those that proved to be most effective in disabling double JPEG detection. Experimental results prove that training on such a kind of most powerful attacks allows good detection in the presence of a much wider variety of attacks and processing. Good performance are retained over a wide range of compression quality factors.
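As a concrete picture of the adversary-aware training strategy (not the authors' feature set or attack list), here is a minimal scikit-learn sketch in which counter-forensically attacked double-JPEG examples are simply added to the positive class before fitting the SVM. Function and variable names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_adversary_aware_svm(feat_single, feat_double, feat_double_attacked):
    """Adversary-aware training: the positive (double JPEG) class mixes
    clean double-compressed features with features extracted from images
    processed by a small set of strong counter-forensic attacks."""
    X = np.vstack([feat_single, feat_double, feat_double_attacked])
    y = np.concatenate([
        np.zeros(len(feat_single)),          # class 0: single compression
        np.ones(len(feat_double)),           # class 1: double compression
        np.ones(len(feat_double_attacked)),  # class 1: double compression + attack
    ])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, y)
    return clf
```

At test time the same feature extractor would be applied to the image under analysis and `clf.predict` would return the single/double decision.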

Journal ArticleDOI
TL;DR: This paper uses inter-channel and intra-channel correlations to propose an efficient and simple lossless compression method capable of lossless EEG signal compression with a higher compression rate than existing methods.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: Results show that the proposed method outperforms direct application of the reference state-of-the-art image encoders in terms of BD-PSNR gain and bit rate reduction.
Abstract: This paper proposes an algorithm for lossy compression of unfocused light field images. The raw light field is preprocessed by demosaicing, devignetting and slicing of the raw lenslet array image. The slices are then rearranged in tiles and compressed by the standard JPEG 2000 encoder. The experimental analysis compares the performance of the proposed method against direct compression with JPEG 2000 and JPEG XR, in terms of BD-PSNR gain and bit rate reduction. The obtained results show that the proposed method outperforms direct application of the reference state-of-the-art image encoders.
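The preprocessing chain can be made concrete with a small sketch of the slicing/tiling step: gathering one pixel per micro-lens into sub-aperture views and tiling them into a single frame for a 2-D encoder such as JPEG 2000. The micro-lens pitch (assumed square) and the function name are illustrative; demosaicing and devignetting are omitted.

```python
import numpy as np

def slice_and_tile(lenslet, pitch=15):
    """Rearrange a lenslet-array image into a mosaic of sub-aperture views.
    `lenslet` is an (H, W) or (H, W, C) array; `pitch` is the assumed
    micro-lens size in pixels. View (u, v) collects the pixel at offset
    (u, v) inside every micro-lens; the pitch x pitch views are then
    tiled row by row into one large image for the 2-D encoder."""
    h, w = lenslet.shape[:2]
    nh, nw = h // pitch, w // pitch
    lenslet = lenslet[:nh * pitch, :nw * pitch]
    rows = []
    for u in range(pitch):
        row = [lenslet[u::pitch, v::pitch] for v in range(pitch)]
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)
```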

Journal ArticleDOI
TL;DR: A novel technique to discover double JPEG compression traces is presented, which discriminates single-compressed images from their double-compressed counterparts, estimates the first quantization in double compression, and localizes tampered regions in a forgery examination.
Abstract: This paper presents a novel technique to discover double JPEG compression traces. Existing detectors only operate in the scenario where the image under investigation is explicitly available in JPEG format; consequently, if the quantization information of the JPEG files is unknown, their performance dramatically degrades. Our method addresses both forensic scenarios, resulting in a fresh perceptual detection pipeline. We suggest a dimensionality reduction algorithm to visualize the behaviors of a large database including various single- and double-compressed images. Based on the intuitions gained from this visualization, three learning strategies, bottom-up, top-down and combined top-down/bottom-up, are proposed. Our tool discriminates single-compressed images from their double-compressed counterparts, estimates the first quantization in double compression, and localizes tampered regions in a forgery examination. Extensive experiments on three databases demonstrate that the results are robust across different quality levels. The F1-measure improvement over the best state-of-the-art approach reaches up to 26.32%. An implementation of the algorithms is available upon request.

Book ChapterDOI
19 Jun 2017
TL;DR: This paper proposes an adjustment of the recent guided fireworks algorithm, from the class of swarm intelligence algorithms, for quantization table optimization, tests the proposed approach on standard benchmark images, and compares the results with other approaches from the literature.
Abstract: Digital images are very useful and ubiquitous; however, their storage is a problem because of their large size and memory requirements. The JPEG lossy compression algorithm is the prevailing standard that addresses this problem. It facilitates different levels of compression (and the corresponding quality) by using recommended quantization tables. It is possible to optimize these tables for better image quality at the same level of compression. This presents a hard combinatorial optimization problem for which stochastic metaheuristics have proved to be efficient. In this paper we propose an adjustment of the recent guided fireworks algorithm, from the class of swarm intelligence algorithms, for quantization table optimization. We tested the proposed approach on standard benchmark images and compared the results with other approaches from the literature. Using various image similarity metrics, our approach proved to be more successful.
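The fireworks search itself is not shown, but any quantization-table optimizer needs a fitness function. The sketch below evaluates a candidate 8x8 table on a grayscale image by quantizing blockwise DCT coefficients and returning distortion plus an entropy-based rate proxy; the weighting of the two objectives, and the function names, are assumptions.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def qtable_fitness(gray, qtable):
    """Evaluate a candidate 8x8 quantization table on a grayscale image.
    Returns (mse, entropy) where entropy is a bits-per-coefficient rate
    proxy; an optimizer (fireworks, GA, ...) would minimize a weighted
    sum of the two."""
    h, w = gray.shape
    g = gray[:h - h % 8, :w - w % 8].astype(np.float64) - 128.0
    mse, symbols = 0.0, []
    for i in range(0, g.shape[0], 8):
        for j in range(0, g.shape[1], 8):
            blk = g[i:i+8, j:j+8]
            q = np.round(dct2(blk) / qtable)          # quantized coefficients
            symbols.append(q.ravel())
            rec = idct2(q * qtable)                   # dequantize + inverse DCT
            mse += np.mean((blk - rec) ** 2)
    mse /= (g.shape[0] // 8) * (g.shape[1] // 8)
    # Zeroth-order entropy of the quantized coefficients as the rate proxy.
    _, counts = np.unique(np.concatenate(symbols), return_counts=True)
    p = counts / counts.sum()
    rate = -(p * np.log2(p)).sum()
    return mse, rate
```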

Posted Content
TL;DR: Guetzli, a new JPEG encoder that aims to produce visually indistinguishable images at a lower bit-rate than other common JPEG encoders, optimizes both the JPEG global quantization tables and the DCT coefficient values in each JPEG block using a closed-loop optimizer.
Abstract: Guetzli is a new JPEG encoder that aims to produce visually indistinguishable images at a lower bit-rate than other common JPEG encoders. It optimizes both the JPEG global quantization tables and the DCT coefficient values in each JPEG block using a closed-loop optimizer. Guetzli uses Butteraugli, our perceptual distance metric, as the source of feedback in its optimization process. We reach a 29-45% reduction in data size for a given perceptual distance, according to Butteraugli, in comparison to other compressors we tried. Guetzli's computation is currently extremely slow, which limits its applicability to compressing static content and serving as a proof-of-concept that we can achieve significant reductions in size by combining advanced psychovisual models with lossy compression techniques.

Posted Content
TL;DR: This paper presents a CNN solution by using raw DCT (discrete cosine transformation) coefficients from JPEG images as input, designed to reveal whether a JPEG format image has been doubly compressed.
Abstract: Detection of double JPEG compression is important to forensic analysis. A few methods have been proposed based on convolutional neural networks (CNNs). These methods only accept inputs from pre-processed data, such as histogram features and/or decompressed images. In this paper, we present a CNN solution that uses raw DCT (discrete cosine transform) coefficients from JPEG images as input. Considering the DCT sub-band nature of JPEG, a multiple-branch CNN structure has been designed to reveal whether a JPEG-format image has been doubly compressed. Compared with previous methods, the proposed method provides end-to-end detection capability. Extensive experiments have been carried out to demonstrate the effectiveness of the proposed network.
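The network architecture is not reproduced; the sketch below only illustrates the input preparation that a sub-band-aware, multiple-branch design implies: regrouping the raw blockwise DCT coefficients into 64 per-frequency planes. The layout assumption (an (H, W) coefficient array in 8x8 block order) and the function name are illustrative.

```python
import numpy as np

def dct_subband_planes(dct_coeffs):
    """Rearrange blockwise DCT coefficients into 64 sub-band planes.
    `dct_coeffs` is an (H, W) array of raw JPEG DCT coefficients laid out
    in the usual 8x8 block grid (H and W multiples of 8). The output has
    shape (64, H//8, W//8): plane k holds coefficient (k//8, k%8) of
    every block, i.e. one spatial map per frequency, which separate CNN
    branches could then process."""
    h, w = dct_coeffs.shape
    blocks = dct_coeffs.reshape(h // 8, 8, w // 8, 8)
    # -> (8, 8, H//8, W//8), then flatten the two frequency axes.
    planes = blocks.transpose(1, 3, 0, 2).reshape(64, h // 8, w // 8)
    return planes
```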

Proceedings ArticleDOI
19 Sep 2017
TL;DR: The details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results are presented.
Abstract: JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
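As a rough illustration of the "highest magnitude level of groups of coefficients" idea (not the normative JPEG XS syntax), the sketch below computes, for each group of quantized coefficients, the number of bit-planes needed for the largest magnitude; a group size of 4 is assumed here.

```python
import numpy as np

def group_magnitude_levels(coeffs, group=4):
    """For each group of `group` consecutive quantized coefficients,
    return the number of bit-planes needed to code the largest magnitude
    in the group (0 when the whole group is zero). A codec in this style
    would entropy-code these levels and then include that many raw
    magnitude bits, plus a sign, per nonzero coefficient."""
    c = np.abs(np.asarray(coeffs, dtype=np.int64))
    pad = (-len(c)) % group                  # zero-pad to a multiple of `group`
    c = np.pad(c, (0, pad))
    gmax = c.reshape(-1, group).max(axis=1)
    # bit length of the group maximum: floor(log2(m)) + 1, with 0 -> 0.
    levels = np.where(gmax > 0,
                      np.floor(np.log2(np.maximum(gmax, 1))).astype(int) + 1,
                      0)
    return levels
```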

Proceedings ArticleDOI
06 Sep 2017-Irbm
TL;DR: In this paper, a new joint watermarking-compression scheme is presented whose originality lies in the combination of the lossless compression standard JPEG-LS with bit-substitution watermarking modulation.
Abstract: In this paper, we present a new joint watermarking-compression scheme, the originality of which lies in the combination of the lossless compression standard JPEG-LS with bit-substitution watermarking modulation. This scheme allows access to watermarking-based security services without decompressing the image: it becomes possible to trace images or to verify their authenticity directly from their compressed bitstream. The performance of our scheme, expressed in terms of embedding capacity and distortion, is evaluated on ultrasound images. The results show that the watermarked images do not perceptually differ from their original counterparts while offering a capacity large enough to support various security services.
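The scheme embeds within the JPEG-LS bitstream itself, which is not reproduced here; the sketch below only illustrates the bit-substitution modulation on a stream of sample or codeword values. Function names are illustrative.

```python
import numpy as np

def embed_lsb(values, bits):
    """Substitute the least-significant bit of the first len(bits) values
    with the watermark bits (0/1) and return a watermarked copy."""
    wm = values.copy().ravel()
    bits = np.asarray(bits, dtype=np.uint8)
    # Clear the LSB, then write the watermark bit in its place.
    wm[:len(bits)] = (wm[:len(bits)] // 2) * 2 + bits
    return wm.reshape(values.shape)

def extract_lsb(values, n_bits):
    """Read back the first n_bits least-significant bits."""
    return (values.ravel()[:n_bits] & 1).astype(np.uint8)
```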

Journal ArticleDOI
TL;DR: Experiments reveal that the proposed pipeline attains excellent visual quality while providing compression performance competitive to that of state-of-the-art compression algorithms for mosaic images.
Abstract: Digital cameras have become ubiquitous for amateur and professional applications. The raw images captured by digital sensors typically take the form of color filter array (CFA) mosaic images, which must be "developed" (via digital signal processing) before they can be viewed. Photographers and scientists often repeat the "development process" using different parameters to obtain images suitable for different purposes. Since the development process is generally not invertible, it is commonly desirable to store the raw (or undeveloped) mosaic images indefinitely. Uncompressed mosaic image file sizes can be more than 30 times larger than those of developed images stored in JPEG format. Thus, data compression is of interest. Several compression methods for mosaic images have been proposed in the literature. However, they all require a custom decompressor followed by development-specific software to generate a displayable image. In this paper, a novel compression pipeline that removes these requirements is proposed. Specifically, mosaic images can be losslessly recovered from the resulting compressed files, and, more significantly, images can be directly viewed (decompressed and developed) using only a JPEG 2000 compliant image viewer. Experiments reveal that the proposed pipeline attains excellent visual quality, while providing compression performance competitive to that of state-of-the-art compression algorithms for mosaic images.

Journal ArticleDOI
TL;DR: Results show the effectiveness of the proposed scheme in identifying resampled JPEG images as well as JPEG images that have undergone resampling and then JPEG recompression, and the proposed approach can be used to estimate the resampling factors for restoring the whole operation chain.
Abstract: The goal of forensic investigators is to reveal the processing history of a digital image. Many forensic techniques are devoted to detecting the intrinsic traces left by image processing and tampering. However, existing forensic techniques are easily defeated in the presence of pre- and post-processing. In real scenarios, images may be sequentially manipulated by a series of operations (the so-called operation chain). This paper addresses the operation chain consisting of JPEG compression and resampling. The transformed block artifacts (TBAG) characterizing this operation chain are analysed in both the pixel and discrete cosine transform (DCT) domains and are utilized to design the detection scheme. Both theoretical analysis and experimental results show the effectiveness of the proposed scheme in identifying resampled JPEG images as well as JPEG images that have undergone resampling and then JPEG recompression. Moreover, the proposed approach can be used to estimate the resampling factors for restoring the whole operation chain. Highlights: the operation chain consists of JPEG compression and resampling; detection relies on TBAG and DCTR; the resampling factor can be estimated.

Posted Content
TL;DR: A 32-layer convolutional neural network (CNN) is proposed to improve the efficiency of preprocessing and to reuse features by concatenating all features from the previous layers that have the same feature-map size, thus improving the flow of information and gradients.
Abstract: Different from conventional deep learning work based on image content in computer vision, deep steganalysis is the art of detecting secret information embedded in an image via deep learning; it poses the challenge of detecting weak, invisible information hidden in a host image, and thus of learning in a very low signal-to-noise ratio (SNR) regime. In this paper, we propose a 32-layer convolutional neural network (CNN) to improve the efficiency of preprocessing and to reuse features by concatenating all features from the previous layers that have the same feature-map size, thus improving the flow of information and gradients. The shared features and bottleneck layers further improve feature propagation and reduce the CNN model parameters dramatically. Experimental results on the BOSSbase, BOWS2 and ImageNet datasets show that the proposed CNN architecture can improve performance and enhance robustness. To further boost the detection accuracy, an ensemble architecture called CNN-SCA-GFR is proposed; CNN-SCA-GFR is also the first work to combine a CNN architecture with a conventional method in the JPEG domain. Experiments show that it can further lower detection errors. Compared with the state-of-the-art method XuNet [1] on BOSSbase, the proposed CNN-SCA-GFR architecture can reduce the detection error rate by 5.67% for 0.1 bpnzAC and by 4.41% for 0.4 bpnzAC, while the number of training parameters in the CNN is only 17% of that used by XuNet. It also decreases the detection errors of the conventional SCA-GFR method by 7.89% for 0.1 bpnzAC and 8.06% for 0.4 bpnzAC, respectively.

Journal ArticleDOI
TL;DR: Improved JPEG anti-forensic techniques are proposed to remove the blocking artifacts left by JPEG compression in both the spatial and DCT domains; they outperform existing state-of-the-art techniques in achieving an enhanced tradeoff between image visual quality and forensic undetectability, but at high computational cost.

Posted Content
TL;DR: The presented system integrates the conventional scheme of compressive sampling (on the entire image) and reconstruction with quantization and entropy coding and proposes an effective method to select the near-best quality at any given bit rate.
Abstract: We present an end-to-end image compression system based on compressive sensing. The presented system integrates the conventional scheme of compressive sampling and reconstruction with quantization and entropy coding. The compression performance, in terms of decoded image quality versus data rate, is shown to be comparable with JPEG and significantly better at the low rate range. We study the parameters that influence the system performance, including (i) the choice of sensing matrix, (ii) the trade-off between quantization and compression ratio, and (iii) the reconstruction algorithms. We propose an effective method to jointly control the quantization step and compression ratio in order to achieve near optimal quality at any given bit rate. Furthermore, our proposed image compression system can be directly used in the compressive sensing camera, e.g. the single pixel camera, to construct a hardware compressive sampling system.
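The sketch below illustrates only the sampling and quantization front end described above, applied block by block with a shared Gaussian sensing matrix; the block size, compression ratio, quantization step and function name are assumptions, and the reconstruction and entropy coding stages are omitted.

```python
import numpy as np

def cs_sample_and_quantize(image, block=32, ratio=0.25, step=8.0, seed=0):
    """Compressively sample a grayscale image block by block with a shared
    i.i.d. Gaussian sensing matrix (m = ratio * block**2 measurements per
    block), then uniformly quantize the measurements. Returns the quantized
    measurement indices plus the matrix and step a reconstruction stage
    would need."""
    rng = np.random.default_rng(seed)
    n = block * block
    m = int(ratio * n)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)     # shared sensing matrix
    h, w = image.shape
    g = image[:h - h % block, :w - w % block].astype(np.float64)
    measurements = []
    for i in range(0, g.shape[0], block):
        for j in range(0, g.shape[1], block):
            x = g[i:i+block, j:j+block].ravel()
            y = phi @ x                                # compressive measurements
            measurements.append(np.round(y / step).astype(np.int32))
    return np.stack(measurements), phi, step
```

The trade-off the paper studies shows up directly here: a larger `step` lowers the rate of the quantized indices but raises reconstruction error, while `ratio` fixes how many measurements are kept per block.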

Proceedings ArticleDOI
01 Jul 2017
TL;DR: The paper justifies the proposed hybrid algorithm by benchmarks which show that the hybrid algorithm achieves significantly higher decompressed image quality than JPEG.
Abstract: We propose a new hybrid image compression algorithm which combines the F-transform and JPEG. First, we apply the direct F-transform and then JPEG compression. Conversely, JPEG decompression is followed by the inverse F-transform to obtain the decompressed image. This scheme brings three benefits: (i) the direct F-transform filters out high frequencies so that JPEG can reach a higher compression ratio; (ii) the JPEG color quantization can be omitted in order to achieve greater decompressed image quality; (iii) the JPEG-decompressed image is processed by the inverse F-transform w.r.t. the adjoint partition almost losslessly. The paper justifies the proposed hybrid algorithm by benchmarks which show that it achieves significantly higher decompressed image quality than JPEG.

Journal ArticleDOI
TL;DR: It is demonstrated in the article that the FPCA is much faster and more memory efficient than Huffman Coding, while outperforming Shannon–Fano Coding in terms of both redundancy and time efficiency.
Abstract: The enormous data inflow during three-dimensional (3D) pavement surface data collection requires an efficient compression system for 3D data. However, with respect to the phase of lossless encoding, the commonly used Huffman Coding is inefficient in terms of speed and memory usage for encoding 3D pavement surfaces. The Fast Prefix Coding Algorithm (FPCA) is proposed in the article as an effective substitute for Huffman Coding at the stage of lossless encoding. It is demonstrated in the article that the FPCA is much faster and more memory efficient than Huffman Coding, while outperforming Shannon-Fano Coding in terms of both redundancy and time efficiency. The FPCA-based coding approach is a modification of the baseline JPEG algorithm to support 3D pavement data whose dynamic range is more than 12 bits. The presented modifications include algorithms for quantization, run-length encoding and entropy coding without limiting data depth in terms of dynamic range. Compared with the baseline JPEG approach, the proposed coding system is able to restrict the data loss more successfully and can achieve a significantly higher level of time efficiency and compression ratios of over 30:1 for most of the evaluated 3D images. With parallel computing techniques, encoding full-lane-width pavement in 3D at 1 mm resolution with an up-to-date desktop computer can be conducted at 150 mph or even higher speeds.

Journal ArticleDOI
TL;DR: A new JPEG backward-compatible image coding method for HDR images that generates the residual data in the discrete cosine transform (DCT) domain, unlike existing JPEG XT profiles that generate their residual images in the spatial domain.

Proceedings ArticleDOI
01 Feb 2017
TL;DR: Simulation results show that the proposed ETEC scheme can provide better compression compared to JPEG-LS and SPIHT algorithms for pixelated images that are used for data communication between a computer screen and a camera.
Abstract: In the digital world, the size of images is an important challenge when dealing with storage and transmission requirements. Compression is one of the fundamental techniques to address this problem. A number of transform-based compression techniques are discussed in the literature and some are used in practice. In this paper, we propose an edge-based image transformation method which is used with an entropy encoding technique to greatly reduce image size without loss of content. In the first stage of the proposed transform scheme, the intensity difference of neighboring pixels is calculated in the horizontal or vertical direction depending on the presence of a horizontal or vertical edge. In the second stage, the intensity differences are used to form two matrices: one containing the absolute intensity differences and the other the polarity of the differences. Next, Huffman or arithmetic entropy coding is applied to the generated matrices. The proposed edge-based transformation and entropy coding (ETEC) scheme is compared to the existing lossless compression techniques Joint Photographic Experts Group Lossless (JPEG-LS) and Set Partitioning in Hierarchical Trees (SPIHT). Simulation results show that the proposed ETEC scheme can provide better compression than the JPEG-LS and SPIHT algorithms for pixelated images that are used for data communication between a computer screen and a camera.
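The first two ETEC stages can be sketched directly from the description above; the direction-selection rule used here (keep the smaller of the left/upper differences) is a stand-in for the paper's edge-orientation test, and the function name is illustrative. The two returned matrices are what Huffman or arithmetic coding would then compress.

```python
import numpy as np

def edge_based_transform(gray):
    """Form two ETEC-style matrices: per-pixel absolute intensity
    difference and its polarity. Each pixel is differenced against its
    left or upper neighbour; the smaller of the two differences is kept
    as a stand-in for an explicit horizontal/vertical edge decision."""
    g = gray.astype(np.int32)
    dh = np.zeros_like(g); dh[:, 1:] = g[:, 1:] - g[:, :-1]   # vs left neighbour
    dv = np.zeros_like(g); dv[1:, :] = g[1:, :] - g[:-1, :]   # vs upper neighbour
    use_h = np.abs(dh) <= np.abs(dv)
    diff = np.where(use_h, dh, dv)
    magnitude = np.abs(diff).astype(np.uint16)   # matrix 1: absolute differences
    polarity = (diff < 0).astype(np.uint8)       # matrix 2: sign of the differences
    return magnitude, polarity
```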

Posted Content
TL;DR: In this paper, the authors propose a method to detect double or single JPEG compression using convolutional neural networks (CNNs), and different kinds of input to the CNN are taken into consideration.
Abstract: When an attacker wants to falsify an image, in most cases he/she will perform a JPEG recompression. Different techniques have been developed based on diverse theoretical assumptions, but very effective solutions have not been developed yet. Recently, machine-learning-based approaches have started to appear in the field of image forensics to solve diverse tasks such as acquisition source identification and forgery detection. In this last case, the aim would be to obtain a trained neural network able, given a to-be-checked image, to reliably localize the forged areas. With this in mind, our paper proposes a step forward in this direction by analyzing how a single or double JPEG compression can be revealed and localized using convolutional neural networks (CNNs). Different kinds of input to the CNN have been taken into consideration, and various experiments have been carried out, also trying to highlight potential issues to be further investigated.

Proceedings ArticleDOI
01 Aug 2017
TL;DR: This article proposes a new adaptive block-based histogram packing which improves the lossless compression performance of JPEG 2000 with sparse histogram images.
Abstract: JPEG 2000 is one of the most efficient and well performing standards for continuous-tone natural images compression. However, a compression performance loss may occur when encoding images with sparse or locally sparse histograms. Images of the later type include only a subset of the available intensity values implied by the nominal alphabet. This article proposes a new adaptive block-based histogram packing which improves the lossless compression performance of JPEG 2000 with sparse histogram images. We take advantage, in this work, of the strength likelihood between symbol sets of the neighboring image blocks and the efficiency of the offline histogram packing with sparse or locally sparse histogram images. Results of its effectiveness with JPEG 2000 are presented.