
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
01 Dec 2014
TL;DR: A series of tests conducted in a lab with multiple transfer protocols on Network Attached Storage to find out which transfer protocol is faster over a moderate-speed, high-latency network.
Abstract: Picture Archiving and Communication System (PACS) is responsible for storing Digital Imaging and Communications in Medicine (DICOM) images from radiology modalities into its database; these images take a long time to transfer to a remote location over a WAN due to their large file size and slow transfer protocols. A PACS alternative system has been developed which performs the basic functions of a generic PACS. Images coming directly from modalities are large in size, and their default transfer syntax is Explicit VR Little Endian. Changing this transfer syntax to lossless JPEG 2000 decreases the file size, and because the compression is lossless the image quality remains the same as the original. These compressed images are then copied onto Network Attached Storage (NAS) working as the PACS alternative. A series of tests was conducted in the lab with multiple transfer protocols on the NAS to find out which protocol is faster over a moderate-speed, high-latency network.

5 citations
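The recompression step described in this abstract can be reproduced with common DICOM tooling. The following is a minimal sketch, assuming a recent pydicom whose Dataset.compress() can encode JPEG 2000 Lossless through an installed encoding plugin such as pylibjpeg-openjpeg; the file paths are placeholders, not from the paper.

    # Sketch: recompress a DICOM file to JPEG 2000 Lossless before copying it
    # to the NAS. Assumes a recent pydicom plus an encoding plugin (e.g.
    # pylibjpeg-openjpeg) that provides the JPEG 2000 Lossless encoder.
    import os
    from pydicom import dcmread
    from pydicom.uid import JPEG2000Lossless

    src = "ct_slice.dcm"        # uncompressed (Explicit VR Little Endian) image
    dst = "ct_slice_j2k.dcm"    # losslessly recompressed copy for the NAS

    ds = dcmread(src)
    ds.compress(JPEG2000Lossless)  # re-encodes PixelData, updates the transfer syntax
    ds.save_as(dst)

    print(f"{os.path.getsize(src)} bytes -> {os.path.getsize(dst)} bytes")

Because the recompression is lossless, decoding the new file yields pixel data identical to the original; only the encoded size changes.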

Journal ArticleDOI
TL;DR: This work extends the definition of Lehmer-type inversions (Lehmer 1960 and 1964) from permutations to multiset permutations and presents a one-pass algorithm based on inversions of a multiset permutation.
Abstract: Linear prediction schemes, such as that of the Joint Photographic Experts Group (JPEG), are simple and normally produce a residual sequence with lower zero-order entropy. Occasionally the entropy of the prediction error becomes greater than that of the original image. Such situations frequently occur when the image data have discrete gray levels located within certain intervals. To alleviate this problem, various authors have suggested different preprocessing methods. However, the techniques reported require two passes. We extend the definition of Lehmer-type inversions (Lehmer 1960 and 1964) from permutations to multiset permutations and present a one-pass algorithm based on inversions of a multiset permutation. We obtain comparable results when we apply JPEG, and even better results when we apply some other linear prediction schemes, on a preprocessed image, which is treated as a multiset permutation.

5 citations
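As background for the inversion-based preprocessing above: per-position inversion counts of a multiset permutation can be gathered in a single left-to-right pass using a Fenwick (binary indexed) tree over the gray-level alphabet. The sketch below only illustrates that counting idea under assumed conventions (each count is the number of earlier, strictly larger elements); it is not the exact transform defined in the paper.

    # Illustration: per-position inversion counts of a multiset permutation,
    # computed in one pass with a Fenwick tree over the value alphabet.
    # Counting convention assumed here: inv[i] = #{ j < i : seq[j] > seq[i] }.

    def inversion_table(seq, alphabet_size=256):
        tree = [0] * (alphabet_size + 1)      # 1-based Fenwick tree

        def update(v):                        # record one occurrence of value v
            v += 1
            while v <= alphabet_size:
                tree[v] += 1
                v += v & -v

        def prefix(v):                        # how many seen values are <= v
            v += 1
            total = 0
            while v > 0:
                total += tree[v]
                v -= v & -v
            return total

        inv = []
        for i, x in enumerate(seq):
            inv.append(i - prefix(x))         # earlier elements strictly greater than x
            update(x)
        return inv

    # Example: a pixel row with repeated gray levels (a multiset permutation)
    print(inversion_table([3, 1, 3, 2, 1], alphabet_size=4))   # -> [0, 1, 0, 2, 3]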

Journal ArticleDOI
TL;DR: To further improve the compression efficiency, the gradient adjusted prediction (GAP) is used, and experimental results show that the proposed method is better than lossless JPEG and some LZ-based compression methods.
Abstract: In general, text compression techniques cannot be used directly in image compression because the models of text and images are different. Previously, a new class of text compression, namely the block-sorting algorithm involving the Burrows and Wheeler (1994) transformation (BWT), gave excellent results in text compression. However, if we apply it directly in image compression, the result is poor. Surprisingly, good results can be obtained if we employ a prediction model, such as the one defined in the JPEG standard, before the BWT algorithm. Thus, the predictive model plays a critical role in the compression process. To further improve the compression efficiency, we use the gradient adjusted prediction (GAP). Experimental results show that the proposed method is better than lossless JPEG and some LZ-based compression methods.

5 citations
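Gradient adjusted prediction, as used above, is the context predictor from CALIC. The sketch below follows the commonly cited CALIC formulation; the neighbour labels and the 80/32/8 thresholds come from that general description and are an assumption here rather than details taken from this paper.

    # Sketch of gradient adjusted prediction (GAP) for one pixel, following the
    # commonly cited CALIC formulation. Neighbour layout around the pixel x:
    #       NN NNE
    #    NW N  NE
    #    WW W  x
    def gap_predict(W, N, NE, NW, NN, WW, NNE):
        dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal gradient estimate
        dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical gradient estimate

        if dv - dh > 80:        # sharp horizontal edge: predict from the left
            return W
        if dh - dv > 80:        # sharp vertical edge: predict from above
            return N

        pred = (W + N) / 2.0 + (NE - NW) / 4.0          # smooth-region prediction
        if dv - dh > 32:
            pred = (pred + W) / 2.0
        elif dv - dh > 8:
            pred = (3 * pred + W) / 4.0
        elif dh - dv > 32:
            pred = (pred + N) / 2.0
        elif dh - dv > 8:
            pred = (3 * pred + N) / 4.0
        return pred

The prediction residual (actual pixel minus gap_predict output) is what would then be fed to the BWT-based coder described in the abstract.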

Proceedings ArticleDOI
01 Jun 2014
TL;DR: A novel transform is proposed to further eliminate the redundancy between residues of different blocks in intra prediction, showing an improvement in the compression ratio without substantial increases in computational complexity in the encoder or decoder.
Abstract: The High Efficiency Video Coding (HEVC) transform bypass mode is simple but inefficient for lossless coding. For this reason, we propose a novel transform to further eliminate the redundancy between residues of different blocks in intra prediction. Depending on the intra prediction mode, the proposed transform adapts to exploit the correlations of residues formed by different modes. In order to accurately obtain the parameters of the transform matrix, an approach similar to the Wiener filtering method is adopted. Experimental results show that, on top of the lossless coding mode in HEVC, our method offers a 7.4% bit-rate reduction on average for the All Intra Main configuration. Compared with other representative algorithms, our proposal still shows an improvement in the compression ratio, without substantial increases in computational complexity in the encoder or decoder.

5 citations
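The "approach similar to the Wiener filtering method" mentioned above boils down to a linear least-squares fit of coefficients from training residues. The sketch below shows only that generic fitting step; the shapes, names, and synthetic data are illustrative assumptions, not the paper's actual training procedure.

    # Generic sketch of Wiener-style parameter estimation: find coefficients w
    # that best predict a target residue sample y from neighbouring residue
    # samples X in the least-squares sense (the Wiener / normal-equation solution
    # w = (X^T X)^{-1} X^T y). Data here is synthetic and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 4))        # 1000 training vectors of 4 neighbouring residues
    true_w = np.array([0.5, -0.25, 0.125, 0.0625])
    y = X @ true_w + 0.01 * rng.standard_normal(1000)   # target residues, slightly noisy

    w, *_ = np.linalg.lstsq(X, y, rcond=None) # stable least-squares solve
    print(np.round(w, 3))                     # close to [0.5, -0.25, 0.125, 0.0625]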


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15