Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
28 Dec 2000
TL;DR: A new 3D wavelet-based compression engine is proposed and compared against a classical 3D JPEG-based coder and a state-of-the-art 3D SPIHT coder for different medical imaging modalities, demonstrating that the proposed coder is superior for lossless coding and competitive with 3D SPIHT at lower bit-rates.
Abstract: The increasing use of three-dimensional imaging modalities triggers the need for efficient techniques to transport and store the related volumetric data. Desired properties like quality and resolution scalability, region-of-interest coding, lossy-to-lossless coding and excellent rate-distortion characteristics for low as well as high bit-rates are inherently supported by wavelet-based compression tools. In this paper a new 3D wavelet-based compression engine is proposed and compared against a classical 3D JPEG-based coder and a state-of-the-art 3D SPIHT coder for different medical imaging modalities. Furthermore, we evaluate the performance of a selected set of lossless integer lifting kernels. We demonstrate that the performance of the proposed coder is superior for lossless coding, and competitive with 3D SPIHT at lower bit-rates.

17 citations
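The lossless integer lifting kernels evaluated in the paper above are reversible wavelet transforms built from integer predict/update steps. As a minimal sketch (not the paper's actual kernel set), here is the reversible 5/3 lifting transform, a common such kernel, assuming a 1-D signal of even length:

```python
def lift_53_forward(x):
    """One level of the reversible 5/3 lifting transform on a 1-D signal.

    Splits x into even/odd samples, then predicts and updates with
    integer (floor) arithmetic so the inverse is exactly lossless.
    Assumes len(x) is even; boundaries use sample replication.
    """
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # Predict: high-pass d[i] = odd[i] - floor((even[i] + even[i+1]) / 2)
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # Update: low-pass s[i] = even[i] + floor((d[i-1] + d[i] + 2) / 4)
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def lift_53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    x = [0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x
```

Because every step adds or subtracts an integer function of the other channel, the inverse recovers the input bit-exactly, which is what makes lossy-to-lossless scalability possible in one codestream.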

Journal ArticleDOI
TL;DR: The proposed method slightly modifies the DCT coefficients to obscure the traces introduced by double JPEG compression with the same quantization matrix, determining the quantity of modification with a linear model and adaptively selecting the modification locations to improve the security of the anti-forensics.
Abstract: Double JPEG compression detection plays an important role in digital image forensics. Recently, Huang et al. (IEEE Trans Inf Forensics Security 5(4):848–856, 2010) first pointed out that the number of different discrete cosine transform (DCT) coefficients would monotonically decrease when repeatedly compressing a JPEG image with the same quantization matrix, and a strategy based on random permutation was developed to expose such an operation successfully. In this paper, we propose an anti-forensic method to fool this detector. The proposed method slightly modifies the DCT coefficients to obscure the traces introduced by double JPEG compression with the same quantization matrix. By investigating the relationship between the DCT coefficients of the first compression and those of the second one, we determine the quantity of modification by constructing a linear model. Furthermore, in order to improve the security of the anti-forensics, the locations of modification are adaptively selected according to the complexity of the image texture. Extensive experiments on 10,000 natural images show that the proposed method can effectively confuse the detector proposed in Huang et al. (IEEE Trans Inf Forensics Security 5(4):848–856, 2010), while maintaining high visual quality and leaving few other detectable statistical artifacts.

17 citations
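The detector being attacked above exploits a simple property of requantization: with the same step size, quantization is idempotent on already-quantized coefficients, so any coefficient changes between successive compressions come only from pixel-domain rounding after the inverse DCT, and those changes shrink with each pass. A toy 1-D sketch of that compression cycle, assuming an orthonormal DCT and a single quantization step q (not the full 2-D JPEG pipeline):

```python
import math

def dct(x):
    """Orthonormal 1-D DCT-II."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(X):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    N = len(X)
    return [
        X[0] / math.sqrt(N)
        + sum(X[k] * math.sqrt(2.0 / N) * math.cos(math.pi * (n + 0.5) * k / N)
              for k in range(1, N))
        for n in range(N)
    ]

def compress(pixels, q):
    """One JPEG-style cycle: DCT, quantize, dequantize, IDCT, round pixels."""
    coeffs = [round(c / q) for c in dct(pixels)]
    recon = idct([c * q for c in coeffs])
    return [round(p) for p in recon], coeffs

# Recompressing the rounded pixels with the same q: only the pixel-domain
# rounding can alter the quantized coefficients, which is the trace the
# Huang et al. detector looks for and the anti-forensic method hides.
pixels = [52, 55, 61, 66, 70, 61, 64, 73]
p1, c1 = compress(pixels, 10)
p2, c2 = compress(p1, 10)
```

Without the pixel rounding step, requantizing the dequantized coefficients reproduces them exactly, which is why repeated compression converges and the coefficient differences monotonically decay.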

Journal ArticleDOI
TL;DR: This paper proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream, which allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream.
Abstract: Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG 2000 codestream. This codestream is JPEG 2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG 2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results.

17 citations

Proceedings ArticleDOI
01 Nov 2007
TL;DR: The zigzag unit typically found in implementations of JPEG encoders is eliminated and the division operation of the quantization step is replaced by a combination of multiplication and shift operations.
Abstract: This paper presents the implementation of a JPEG encoder that exploits minimal usage of FPGA resources. The encoder compresses an image as a stream of 8×8 blocks with each element of the block processed individually. The zigzag unit typically found in implementations of JPEG encoders is eliminated. The division operation of the quantization step is replaced by a combination of multiplication and shift operations. The encoder is implemented on a Xilinx Spartan-3 FPGA and is benchmarked against two software implementations on four test images. It is demonstrated that it yields performance of similar quality while requiring very limited FPGA resources. A co-emulation technique is applied to reduce development time and to test and verify the encoder design.

17 citations
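The division-free quantization mentioned above is a standard hardware trick: replace round(coeff / q) by a multiply with a precomputed fixed-point reciprocal followed by a right shift, both cheap in FPGA logic. A sketch under an assumed 16-bit precision (the paper's exact word lengths are not given here):

```python
SHIFT = 16  # assumed fixed-point precision; the paper's word length may differ

def make_recip(q):
    """Precompute the fixed-point reciprocal floor(2**SHIFT / q) for step q."""
    return (1 << SHIFT) // q

def quantize(coeff, recip):
    """Approximate round(coeff / q) with one multiply and one shift.

    The sign is handled separately so the shift is a plain floor;
    adding 2**(SHIFT-1) before shifting implements round-half-up.
    """
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) * recip + (1 << (SHIFT - 1))) >> SHIFT)
```

For power-of-two steps the reciprocal is exact; for general steps the truncated reciprocal can land one quantization level below exact rounding near ties, a trade-off accepted in exchange for removing the divider.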

Journal ArticleDOI
TL;DR: The experimental results show that the new lossless intra-coding method reduces the bit rate in comparison with the lossless intra-coding method in the HEVC standard, and the proposed method yields a slightly better compression ratio than JPEG 2000 lossless coding.
Abstract: A new lossless intra-coding method based on a cross residual transform is applied to the next generation video coding standard HEVC (High Efficiency Video Coding). HEVC includes a multi-directional spatial prediction method to reduce spatial redundancy by using neighboring pixels as a prediction for the pixels in a block of data to be encoded. In the new lossless intra-coding method, the spatial prediction is performed as pixelwise DPCM but implemented in a block-based manner by using the cross residual transform within the HEVC standard. The experimental results show that the new lossless intra-coding method reduces the bit rate by approximately 8.43% in comparison with the lossless intra-coding method in the HEVC standard, and the proposed method yields a slightly better compression ratio than JPEG 2000 lossless coding.

17 citations
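The pixelwise DPCM prediction underlying the method above replaces each pixel by its difference from a previously decoded neighbor, producing small residuals an entropy coder can compress while remaining exactly invertible; the block-based cross residual transform itself is not reproduced here. A minimal 1-D sketch, assuming a fixed initial predictor of 128 (an illustrative choice, not the standard's):

```python
def dpcm_encode(row, left=128):
    """Replace each pixel with its difference from the previous pixel.

    `left` is the predictor for the first pixel (assumed 128 here).
    """
    residuals = []
    prev = left
    for p in row:
        residuals.append(p - prev)
        prev = p
    return residuals

def dpcm_decode(residuals, left=128):
    """Exactly invert dpcm_encode by accumulating the residuals."""
    row = []
    prev = left
    for r in residuals:
        prev += r
        row.append(prev)
    return row
```

Because decoding is a running sum of integer residuals, reconstruction is bit-exact, which is the property that makes DPCM-style prediction suitable for lossless intra coding.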


Network Information
Related Topics (5)

Topic                           Papers     Citations   Related
Image segmentation              79.6K      1.8M        82%
Feature (computer vision)       128.2K     1.7M        82%
Feature extraction              111.8K     2.1M        82%
Image processing                229.9K     3.5M        80%
Convolutional neural network    74.7K      2M          79%
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15