Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as .jls.


Papers
Proceedings ArticleDOI
Hyung-Il Kim, H.W. Park
16 Sep 1996
TL;DR: Simulation results show that the proposed algorithm reduces the blocking artifacts significantly in both subjective and objective evaluations.
Abstract: A postprocessing algorithm is proposed to reduce the blocking artifacts of joint photographic experts group (JPEG) decompressed images. The reconstructed images from JPEG compression produce noticeable image degradation near the block boundaries, in particular for highly compressed images, because each block is transformed and quantized independently. The reduction of these blocking effects has been an essential issue for high-quality visual communications. The proposed postprocessing algorithm reduces these blocking artifacts efficiently. A comparison study between the proposed algorithm and other postprocessing algorithms is made by computer simulation with several JPEG images. These simulation results show that the proposed algorithm reduces the blocking artifacts significantly in both subjective and objective evaluations.

11 citations
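The abstract above does not spell out the authors' filter, so the following is only a minimal sketch of the general deblocking idea (low-pass blending of pixels that straddle 8x8 block boundaries); the function name, block size, and blending strength are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def deblock_boundaries(img, block=8, strength=0.5):
    """Generic deblocking sketch: blend the pixel pairs that straddle
    the 8x8 block boundaries of a JPEG-decoded grayscale image toward
    their average. Not the algorithm from the paper above."""
    out = img.astype(np.float64)  # work in float, copy of the input
    h, w = out.shape

    # Vertical block boundaries: column pairs (x-1 | x) for x = 8, 16, ...
    for x in range(block, w, block):
        left, right = out[:, x - 1].copy(), out[:, x].copy()
        avg = 0.5 * (left + right)
        out[:, x - 1] = (1 - strength) * left + strength * avg
        out[:, x] = (1 - strength) * right + strength * avg

    # Horizontal block boundaries: row pairs (y-1 | y) for y = 8, 16, ...
    for y in range(block, h, block):
        top, bottom = out[y - 1, :].copy(), out[y, :].copy()
        avg = 0.5 * (top + bottom)
        out[y - 1, :] = (1 - strength) * top + strength * avg
        out[y, :] = (1 - strength) * bottom + strength * avg

    return np.clip(out, 0, 255).astype(np.uint8)

# Toy usage on a random stand-in for a decoded image.
decoded = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
smoothed = deblock_boundaries(decoded)
```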

Proceedings Article
27 May 2006
TL;DR: A specific approach for digital watermarking based on the IDP, using biometric data as the watermark, is presented; it forms a basis for further development of the method and its implementation in security systems that require fast and reliable user authentication.
Abstract: This paper presents a new approach for solving some of the authentication problems in large computer systems, communication networks, and mobile communications, using a new method for lossless compression of certain kinds of biometric information (fingerprint and signature images). The image processing is based on a two-level Inverse Difference Pyramid (IDP) decomposition with a 2D Walsh-Hadamard Transform, followed by histogram-adaptive run-length data coding. The paper presents comparison results obtained for a large number of test images of these image classes. The investigation was performed with software products based on the new method, on the JPEG 2000 standard (lossless version), and on the FBI compression standard. The new method attains a high compression ratio, which is a basis for further development of the method and its implementation in security systems that require fast and reliable user authentication. The paper also presents a specific approach for digital watermarking based on the IDP, using the biometric data as a watermark.

11 citations
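The IDP decomposition and the histogram-adaptive run-length coder mentioned above are not detailed here, but the 2D Walsh-Hadamard transform the method builds on can be sketched as follows; the Sylvester construction, block size, and orthonormal scaling are assumptions for illustration, and a real lossless codec would use the integer (unnormalized) form of the transform.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wht2d(block):
    """Separable 2D Walsh-Hadamard transform with orthonormal scaling."""
    n = block.shape[0]
    H = hadamard_matrix(n) / np.sqrt(n)
    return H @ block @ H.T

def iwht2d(coeffs):
    """The orthonormal WHT is its own inverse."""
    return wht2d(coeffs)

# Round-trip check on an 8x8 block.
block = np.arange(64, dtype=np.float64).reshape(8, 8)
assert np.allclose(iwht2d(wht2d(block)), block)
```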

Journal ArticleDOI
30 Sep 2012
TL;DR: A new lossless intra coding method based on a residual transform is applied to the next-generation video coding standard HEVC (High Efficiency Video Coding), whose intra prediction reduces spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded.
Abstract: A new lossless intra coding method based on a residual transform is applied to the next-generation video coding standard HEVC (High Efficiency Video Coding). HEVC includes a multi-directional spatial prediction method that reduces spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed as sample-wise DPCM (Differential Pulse Code Modulation) but is implemented in a block-based manner by using a residual transform and a secondary residual transform on top of the HEVC standard. Experimental results show that the new lossless intra coding method reduces the bit rate by approximately 6.45% in comparison with the lossless intra coding method previously included in the HEVC standard.

11 citations
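As a rough illustration of the sample-wise DPCM described above (and not of the residual-transform trick that makes it block-based in HEVC), horizontal left-neighbor prediction with exact lossless reconstruction could look like the sketch below; the fixed boundary predictor of 128 and the function names are assumptions for illustration.

```python
import numpy as np

def dpcm_horizontal(block):
    """Sample-wise horizontal DPCM: each sample is predicted from its
    left neighbor; the first column uses a fixed predictor (128).
    Returns integer residuals that allow exact reconstruction."""
    block = block.astype(np.int32)
    pred = np.empty_like(block)
    pred[:, 0] = 128                 # illustrative boundary predictor
    pred[:, 1:] = block[:, :-1]      # left-neighbor prediction
    return block - pred

def dpcm_horizontal_inverse(residual):
    """Undo the horizontal DPCM by accumulating residuals left to right."""
    rec = np.empty_like(residual)
    rec[:, 0] = residual[:, 0] + 128
    for x in range(1, residual.shape[1]):
        rec[:, x] = residual[:, x] + rec[:, x - 1]
    return rec

# Lossless round trip on a random 8x8 block of samples.
block = np.random.randint(0, 256, (8, 8))
assert np.array_equal(dpcm_horizontal_inverse(dpcm_horizontal(block)), block)
```

In a real codec the residuals would then be entropy coded; the sketch stops at prediction and reconstruction.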

Proceedings ArticleDOI
01 Dec 2006
TL;DR: This scheme applies the SNOW 2 stream cipher to JPEG 2000 codestreams in a way that preserves most of the inherent flexibility, scalability, and transcodability of encrypted JPEG 2000 images and also preserves end-to-end security.
Abstract: In this paper we propose a progressive encryption and controlled access scheme for JPEG 2000 encoded images. Our scheme applies the SNOW 2 stream cipher to JPEG 2000 codestreams in a way that preserves most of the inherent flexibility of JPEG 2000 encoded images and enables untrusted intermediate network transcoders to downstream an encrypted JPEG 2000 image without access to decryption keys. Our scheme can also control access to various image resolutions or quality layers by granting users different levels of access, using different decryption keys. Our scheme preserves most of the inherent flexibility, scalability, and transcodability of encrypted JPEG 2000 images and also preserves end-to-end security.

11 citations
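SNOW 2 itself is not reproduced here; the sketch below only illustrates the general pattern behind such a scheme, XOR-ing an independently keyed keystream onto each codestream segment (e.g., a resolution or quality layer) so segments can be decrypted, dropped, or transcoded without the other keys. A SHA-256 counter-mode generator stands in for SNOW 2, and the segment and key names are hypothetical.

```python
import hashlib

def keystream(key: bytes, segment_id: int, length: int) -> bytes:
    """Placeholder keystream (SHA-256 in counter mode) standing in for
    the SNOW 2 cipher; a per-segment identifier keeps each segment
    independently decryptable."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        msg = key + segment_id.to_bytes(4, "big") + counter.to_bytes(4, "big")
        out.extend(hashlib.sha256(msg).digest())
        counter += 1
    return bytes(out[:length])

def encrypt_segment(key: bytes, segment_id: int, data: bytes) -> bytes:
    """XOR one segment (e.g., one quality layer's packets) with its
    keystream; decryption is the same operation."""
    ks = keystream(key, segment_id, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Hypothetical layered access: different keys for different layers.
layer0 = b"base-layer packets"        # stand-in for JPEG 2000 packet bytes
layer1 = b"enhancement packets"
enc0 = encrypt_segment(b"key-for-all-users", 0, layer0)
enc1 = encrypt_segment(b"key-for-premium", 1, layer1)
assert encrypt_segment(b"key-for-all-users", 0, enc0) == layer0
```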

Proceedings ArticleDOI
12 Jul 2009
TL;DR: The performance of the recent JPEG2000 Part 10 standard, known as JP3D, is evaluated for the lossy and lossless compression of hyperspectral imagery. Results reveal that, while for lossless coding JP3D very slightly surpasses the performance of JPEG2000 Part 2, for lossy coding JP3D fails to match the rate-distortion performance of the 2D Part-2 coder.
Abstract: The performance of the recent JPEG2000 Part 10 standard, known as JP3D, is evaluated for the lossy and lossless compression of hyperspectral imagery. Experimental results using a Karhunen-Loeve transform (KLT) for spectral decorrelation and a 2D wavelet transform for spatial decorrelation compare the performance of JP3D against 2D JPEG2000 as specified by Part 2 of the standard. JP3D is used with both the 2D arithmetic-coding contexts as specified in the JP3D standard as well as non-standard experimental 3D contexts. Results reveal that, while for lossless coding, JP3D very slightly surpasses the performance of JPEG2000 Part 2, for lossy coding, JP3D fails to match the rate-distortion performance of the 2D Part-2 coder.

11 citations
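The experiments above use a KLT for spectral decorrelation, but the abstract does not give the implementation; the following is a minimal numpy sketch of a spectral KLT (equivalently, PCA along the band axis) and its inverse, with the cube layout and function names chosen purely for illustration.

```python
import numpy as np

def spectral_klt(cube):
    """Illustrative KLT spectral decorrelation of a hyperspectral cube
    shaped (bands, rows, cols): decorrelate along the band axis and
    return the basis and mean needed to invert the transform."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).astype(np.float64)  # columns = pixel spectra
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]                   # band-by-band covariance
    _, eigvecs = np.linalg.eigh(cov)                # eigenvalues ascending
    basis = eigvecs[:, ::-1]                        # largest variance first
    Y = basis.T @ Xc
    return Y.reshape(bands, rows, cols), basis, mean

def spectral_klt_inverse(Y, basis, mean):
    bands, rows, cols = Y.shape
    X = basis @ Y.reshape(bands, -1) + mean
    return X.reshape(bands, rows, cols)

# Round trip on a toy hyperspectral cube.
cube = np.random.rand(16, 32, 32)
Y, basis, mean = spectral_klt(cube)
assert np.allclose(spectral_klt_inverse(Y, basis, mean), cube)
```

In the paper's pipeline the decorrelated bands would then be passed to the 2D wavelet transform and coder; the sketch covers only the spectral step.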


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (82% related)
Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (82% related)
Image processing: 229.9K papers, 3.5M citations (80% related)
Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15