Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
24 Jun 2004
TL;DR: Evaluates a compression scheme for medical image volumes that exploits inter-slice redundancy and is compatible with the JPEG 2000 core coding system extensions (Part 2).
Abstract: We evaluate a compression scheme that exploits inter-slice redundancy in medical imaging and that is compatible with the JPEG 2000 core coding system extensions (Part 2). The scheme applies a decorrelating transform in the cross-slice (vertical) direction and subsequently compresses each slice with a core Part 1 JPEG 2000 coder. The rate for each slice is chosen by a heuristic log-based allocation algorithm. Two transforms were considered, the KLT and the 9/7 DWT. A gain of up to 4 dB in RMSE was achieved using the 9/7 DWT over a baseline JPEG 2000 system that uses no decorrelating transform.

7 citations
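A minimal sketch of the cross-slice decorrelation idea described above, using the KLT (one of the two transforms the paper considers) implemented with NumPy. The volume shape and the toy data are placeholders, and the per-slice JPEG 2000 coding and rate allocation steps are only noted in comments; this is not the paper's implementation.

```python
import numpy as np

def cross_slice_klt(volume):
    """Decorrelate a (slices, H, W) volume along the slice axis with a KLT.

    Returns the transformed volume plus the basis and mean needed to invert it.
    This only illustrates the decorrelating step; each output slice would then
    be handed to an ordinary JPEG 2000 (Part 1) coder with its own rate budget.
    """
    s, h, w = volume.shape
    X = volume.reshape(s, -1).astype(np.float64)      # one row per slice
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]                     # s x s cross-slice covariance
    _, vecs = np.linalg.eigh(cov)                     # KLT basis (ascending order)
    basis = vecs[:, ::-1]                             # strongest component first
    Y = basis.T @ Xc                                  # decorrelated slice images
    return Y.reshape(s, h, w), basis, mean

def inverse_cross_slice_klt(Y, basis, mean):
    s, h, w = Y.shape
    X = basis @ Y.reshape(s, -1) + mean
    return X.reshape(s, h, w)

# toy example: 16 slices of 64x64 synthetic data
vol = np.random.rand(16, 64, 64)
coeffs, basis, mean = cross_slice_klt(vol)
recon = inverse_cross_slice_klt(coeffs, basis, mean)
assert np.allclose(vol, recon)
```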

Proceedings ArticleDOI
08 Jul 2009
TL;DR: Proposes an algorithm that combines JPEG 2000 compression and robust watermarking to protect and compress medical images, and shows that the combined process is faster than performing the two steps separately.
Abstract: The Picture Archiving and Communication System (PACS) was introduced to computerize the medical system and to support telediagnosis between hospitals. It is now possible to create, store, and transmit medical images via PACS. There has been growing interest in protecting medical images, which carry an enormous amount of information. To improve transmission speed among hospitals, the medical image should be compressed with JPEG 2000 at a high compression ratio. This paper proposes an algorithm that uses both JPEG 2000 and robust watermarking for protection and compression of the medical image. With the proposed algorithm, it takes considerably less time to perform JPEG 2000 compression and watermarking than when they are done separately: in the experiments, the proposed algorithm took 0.72 seconds, compared with 1.11 seconds when the two steps were performed separately.

7 citations
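The paper's point is that watermark embedding can be folded into the wavelet stage of JPEG 2000 rather than run as a separate pass over the image. A rough sketch of DWT-domain embedding is given below, using PyWavelets with a Haar filter and an additive mark; the filter choice, strength alpha, and watermark layout are illustrative assumptions, not the authors' scheme.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark_bits, alpha=2.0, wavelet="haar"):
    """Embed a +/-1 watermark additively in the diagonal detail band of a 2-D DWT.

    In a combined coder this embedding would operate on the same wavelet
    coefficients that JPEG 2000 goes on to quantize and entropy-code, which is
    why doing both in one pass can beat compress-then-watermark on speed.
    """
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), wavelet)
    wm = np.resize(np.where(watermark_bits, 1.0, -1.0), cD.shape)
    cD_marked = cD + alpha * wm
    return pywt.idwt2((cA, (cH, cV, cD_marked)), wavelet)

# toy example: mark a random 256x256 "image" with 1024 bits
img = np.random.randint(0, 256, (256, 256))
bits = np.random.randint(0, 2, 1024).astype(bool)
marked = embed_watermark(img, bits)
print(float(np.abs(marked - img).mean()))   # average distortion introduced
```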

Proceedings ArticleDOI
04 Jan 2012
TL;DR: The results show that the proposed method improves on the standard JPEG method, with the hybrid approach achieving about 20% higher compression ratio than standard JPEG.
Abstract: Lossy JPEG compression is a widely used compression technique. The JPEG technique normally uses two processes: quantization, which is lossy, and entropy encoding, which is considered lossless. In this paper, a new technique is proposed that combines the JPEG algorithm with a symbol-reduction Huffman technique to achieve a higher compression ratio. The symbol-reduction technique reduces the number of symbols by combining symbols to form new composite symbols, so the number of Huffman codes to be generated is also reduced. The results show that the performance of the standard JPEG method can be improved by the proposed method; the hybrid approach achieves about 20% higher compression ratio than standard JPEG.

7 citations
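A hedged sketch of one plausible reading of the symbol-reduction step: adjacent source symbols are merged into composite symbols before Huffman coding, so fewer codewords need to be emitted for the same data. The pairing rule, the toy data, and the comparison printed at the end are illustrative assumptions; the authors' exact reduction rule may differ.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(freqs):
    """Build a Huffman code {symbol: bitstring} from a frequency map."""
    tiebreak = count()
    heap = [(f, next(tiebreak), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate single-symbol alphabet
        (_, _, table), = heap
        return {s: "0" for s in table}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

def pair_symbols(data):
    """Merge adjacent symbols into composite 2-tuples (a stand-in for the
    symbol-reduction step); the symbol stream halves in length, so fewer
    codewords are emitted overall."""
    if len(data) % 2:
        data = data + data[-1:]               # pad to an even length
    return [tuple(data[i:i + 2]) for i in range(0, len(data), 2)]

data = list("entropy coding after quantization " * 20)
plain = huffman_code(Counter(data))
paired = huffman_code(Counter(pair_symbols(data)))
print(len(plain), sum(len(plain[s]) for s in data))                     # table size, total bits
print(len(paired), sum(len(paired[s]) for s in pair_symbols(data)))     # after symbol reduction
```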

Proceedings ArticleDOI
09 Aug 1992
TL;DR: A quantization scheme for discrete cosine transform (DCT) coefficients in the JPEG baseline sequential method for image compression is proposed; the scheme is adaptive to image characteristics and statistical in nature.
Abstract: A quantization scheme for discrete cosine transform (DCT) coefficients in the Joint Photographic Experts Group (JPEG) baseline sequential method for image compression is proposed. The DCT coefficients should be quantized to achieve maximum compression without degrading the visual image quality. The scheme is adaptive to the image characteristics and is statistical in nature. The results are evaluated in terms of compression, root-mean-square error, and subjective visual quality. 8-bit/pixel monochrome images of size 512 × 512 have been compressed in the range 0.4-1 bit/pixel with good to excellent quality.

7 citations
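To make the idea of statistics-driven quantization concrete, here is a small sketch: the standard JPEG luminance table is scaled by a factor derived from the variance of the image's AC DCT coefficients, so busy images get a coarser table and smooth images a finer one. The scaling rule, the target constant, and the clipping range are illustrative assumptions, not the statistical scheme of the paper.

```python
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (Annex K of the JPEG specification).
BASE_Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def block_dct(image):
    """8x8 block DCT of an image whose sides are multiples of 8."""
    h, w = image.shape
    blocks = image.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3) - 128.0
    return dctn(blocks, axes=(-2, -1), norm="ortho")

def adaptive_quant_table(image, target=200.0):
    """Scale the base table by the image's AC-coefficient energy.

    High AC variance (busy image) gives a coarser table, low variance a finer
    one: an illustrative stand-in for a statistically adaptive scheme.
    """
    coeffs = block_dct(image.astype(np.float64))
    ac = coeffs.copy()
    ac[..., 0, 0] = 0.0                          # drop the DC term
    scale = np.clip(np.sqrt(ac.var() / target), 0.5, 2.0)
    return np.clip(np.round(BASE_Q * scale), 1, 255).astype(np.int32)

img = np.random.randint(0, 256, (512, 512))
print(adaptive_quant_table(img))
```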

Proceedings ArticleDOI
29 Mar 1994
TL;DR: The authors present a rate-distortion optimal quantization technique for thresholding the DCT coefficients in the JPEG and MPEG image and video coding standards; it achieves a decent thresholding gain and uses a fast dynamic-programming recursion that exploits monotonicity properties of the JPEG and MPEG codebooks to drastically reduce complexity.
Abstract: The authors present a rate-distortion optimal quantization technique for thresholding the DCT coefficients in the industry image and video coding standards JPEG and MPEG, respectively. Their scheme achieves a decent thresholding gain in terms of both objective SNR (about 1 dB) and perceived quality, and uses a fast dynamic-programming recursive structure which exploits certain monotonicity characteristics of the JPEG and MPEG codebooks to drastically reduce the complexity. The primary advantage of the encoding algorithm is that it is completely compatible with baseline JPEG and MPEG decoders.

7 citations
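The dynamic-programming recursion is the paper's contribution, but the underlying rate-distortion trade-off can be sketched simply: zero out (threshold) a quantized AC coefficient whenever the distortion it removes is worth less than the bits it costs. The greedy rule and crude bit-cost model below are assumptions for illustration, not the JPEG/MPEG run-length cost the authors optimize against.

```python
import numpy as np

def threshold_block(quantized, quant_table, lam=200.0):
    """Greedy Lagrangian thresholding of one 8x8 block of quantized DCT coefficients.

    A nonzero AC coefficient is zeroed whenever the distortion that zeroing adds
    is smaller than lam times a crude estimate of the bits it costs to code.
    Zeroing coefficients never changes the bitstream syntax, which is why the
    output stays decodable by a baseline JPEG decoder.
    """
    kept = quantized.copy()
    for i in range(8):
        for j in range(8):
            if (i, j) == (0, 0) or quantized[i, j] == 0:
                continue                                        # keep DC, skip zeros
            step = quant_table[i, j]
            distortion = (quantized[i, j] * step) ** 2          # squared error if dropped
            rate = 1 + int(np.log2(abs(quantized[i, j])) + 1)   # sign + magnitude bits
            if distortion < lam * rate:
                kept[i, j] = 0
    return kept

# toy example: a sparse block where the +/-1 "straggler" coefficients get dropped
block = np.zeros((8, 8), dtype=int)
block[0, 0], block[0, 1], block[1, 0], block[5, 6] = 25, 3, -1, 1
table = np.full((8, 8), 16)
print(threshold_block(block, table))
```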


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15