
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG and .jls.


Papers
Posted Content
TL;DR: In this paper, the authors tried to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression-ratio, performance, and implementation points of view?
Abstract: Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia data. The transmission and storage of compressed multimedia data is much faster and more efficient than that of the original uncompressed data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view? We have implemented and tested the Huffman and arithmetic algorithms. Our results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than that of arithmetic coding. In addition, the implementation of Huffman coding is much easier than that of arithmetic coding.

16 citations
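The comparison above comes down to building a prefix code over the symbol stream and measuring the resulting bit budget against the original size. Below is a minimal sketch of that Huffman-side measurement (not the authors' implementation; the test string and the per-symbol model are illustrative assumptions):

```python
# Minimal sketch: derive Huffman code lengths for a byte string and report the
# resulting bit budget, the kind of figure used when comparing Huffman against
# arithmetic coding on compression ratio. Not the paper's implementation.
import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    """Return the Huffman code length (in bits) for each symbol in `data`."""
    freqs = Counter(data)
    if len(freqs) == 1:                      # degenerate case: a single symbol
        return {next(iter(freqs)): 1}
    # Heap items: (subtree frequency, tiebreaker, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def huffman_payload_bits(data: bytes) -> int:
    """Total bits needed to Huffman-code `data` (excluding the code table)."""
    lengths = huffman_code_lengths(data)
    freqs = Counter(data)
    return sum(freqs[s] * lengths[s] for s in freqs)

if __name__ == "__main__":
    sample = b"this is an example of a huffman coded byte stream " * 20
    bits = huffman_payload_bits(sample)
    print(f"original: {len(sample) * 8} bits, Huffman payload: {bits} bits, "
          f"ratio: {len(sample) * 8 / bits:.2f}")
```

An arithmetic coder would be measured the same way, with its payload approaching the entropy of the symbol distribution rather than being limited to whole-bit code lengths, which is the source of the compression-ratio gap the paper reports.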

Proceedings ArticleDOI
09 Jan 2014
TL;DR: The fuzzy-based soft hybrid JPEG technique (FSHJPEG) gives a high compression ratio while preserving most of the image information; the image is reproduced with good quality, and blocking artifacts, ringing effects and false contouring are reduced appreciably.
Abstract: In the last few years, rapid growth in technological development has been reported. This rapid growth in technology demands fast and efficient processing, transmission and storage of data. Although a lot of work related to efficient processing and transmission of data has been reported in the literature, this cannot be achieved without also reducing data storage, since during processing and transmission most of the effort and time is spent either accessing or storing the data. Therefore, to cope with current technological demands, data should be kept in highly compressed form. One of the most important forms of data is the digital image, which is simply a two-dimensional signal. Digital images in their raw form require a huge amount of storage capacity, so a scheme is required that produces a high degree of compression while preserving the critical image information. JPEG standards are already available for gray image compression, but this area is still open for algorithms that can provide a better compression ratio while keeping the mean square error low. Zadeh showed that imprecise situations can be properly handled using fuzzy logic. This feature of fuzzy logic has been incorporated by introducing a novel data compression technique for gray images using fuzzy-logic-based fusion of the available JPEG and JPEG2K standards (FSHJPEG) to achieve a higher compression ratio than stand-alone JPEG and JPEG2K. The fuzzy-based soft hybrid JPEG technique (FSHJPEG) gives a high compression ratio while preserving most of the image information, and the image is reproduced with good quality. The new technique not only gives a high compression ratio but also reduces blocking artifacts, ringing effects and false contouring appreciably. The compression ratio obtained using FSHJPEG is higher than that of the currently used image compression standards, while preserving most of the image information.

16 citations
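The abstract does not spell out the fuzzy fusion rules, so the sketch below only reproduces the baseline the method builds on: encoding the same grayscale image with the two standards being fused, JPEG and JPEG 2000, and comparing compression ratio and mean squared error. It assumes Pillow with OpenJPEG-backed JPEG 2000 support, and the quality settings and test image are illustrative.

```python
# Rough baseline sketch (not FSHJPEG itself): encode one grayscale image with
# JPEG and JPEG 2000 and compare compression ratio and MSE. Quality settings
# and the synthetic test image are illustrative assumptions.
import io
import numpy as np
from PIL import Image

def rate_and_mse(img: Image.Image, fmt: str, **save_kwargs):
    """Encode `img` in-memory and return (compression ratio, MSE)."""
    buf = io.BytesIO()
    img.save(buf, format=fmt, **save_kwargs)
    decoded = Image.open(io.BytesIO(buf.getvalue())).convert("L")
    orig = np.asarray(img, dtype=np.float64)
    rec = np.asarray(decoded, dtype=np.float64)
    mse = float(np.mean((orig - rec) ** 2))
    ratio = orig.size / len(buf.getvalue())      # 8-bit grayscale: 1 byte/pixel
    return ratio, mse

if __name__ == "__main__":
    # Synthetic ramp image so the example runs without an input file.
    ramp = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
    gray = Image.fromarray(ramp)
    print("JPEG     :", rate_and_mse(gray, "JPEG", quality=30))
    print("JPEG 2000:", rate_and_mse(gray, "JPEG2000",
                                     quality_mode="rates", quality_layers=[20]))
```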

Journal ArticleDOI
01 Feb 1996
TL;DR: In this paper, the authors present a strategy for generating optimal quantisation tables for use in JPEG image compression, together with its extension to general block sizes, and demonstrate significant improvements over JPEG coding due to the use of optimal quantisation rather than default tables.
Abstract: The authors present a strategy for generating optimal quantisation tables for use in JPEG image compression and its extension to general block sizes. Directly optimised quantisation tables were obtained by simulated annealing. A composite cost function minimised the RMS error between original and recovered images while keeping the compression ratio close to some desired value. Examination of these tables led to a simple model giving quantisation coefficients in terms of (x,y) position in the table and three model parameters. Annealing on the model parameters for several compressions yielded an expression for each parameter as a function of compression ratio. This approach was extended to general block sizes, and psychovisual evaluation determined the visually optimal block size for each compression ratio. The authors demonstrate significant improvements over JPEG coding due to the use of optimal quantisation rather than default tables. Use of general block size effectively extends the JPEG approach to higher compressions than are feasible with standard JPEG coding.

16 citations
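A simplified sketch of the annealing setup described above, under stated assumptions: the composite cost here combines RMS reconstruction error with a penalty that keeps a crude size proxy (the fraction of nonzero quantised coefficients) near a target, standing in for the paper's compression-ratio term; the cooling schedule, perturbation size, weight, and 8-pixel-multiple image dimensions are illustrative choices rather than the authors' values.

```python
# Simplified sketch of simulated annealing over an 8x8 quantisation table.
# Cost = RMS error + weight * (nonzero-coefficient fraction - target)^2,
# where the nonzero fraction is only a crude proxy for compressed size.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8  # image dimensions are assumed to be multiples of 8

def blockwise(img, fn):
    """Apply `fn` independently to each 8x8 block of `img`."""
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            out[y:y+BLOCK, x:x+BLOCK] = fn(img[y:y+BLOCK, x:x+BLOCK])
    return out

def cost(img, qtable, target_nz, weight=5000.0):
    """Composite cost: reconstruction RMSE plus a size-proxy penalty."""
    def reconstruct(b):
        q = np.round(dctn(b - 128.0, norm="ortho") / qtable)
        return idctn(q * qtable, norm="ortho") + 128.0
    recon = blockwise(img, reconstruct)
    rmse = np.sqrt(np.mean((img - recon) ** 2))
    quantised = blockwise(img, lambda b: np.round(dctn(b - 128.0, norm="ortho") / qtable))
    nz_frac = np.count_nonzero(quantised) / quantised.size
    return rmse + weight * (nz_frac - target_nz) ** 2

def anneal_qtable(img, target_nz=0.10, steps=2000, t0=2.0, seed=0):
    """Anneal a flat starting table by perturbing one entry at a time."""
    rng = np.random.default_rng(seed)
    q = np.full((BLOCK, BLOCK), 16.0)
    cur = best = cost(img, q, target_nz)
    best_q = q.copy()
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-3          # linear cooling schedule
        cand = q.copy()
        r, c = rng.integers(0, BLOCK, size=2)
        cand[r, c] = np.clip(cand[r, c] + rng.normal(scale=4.0), 1.0, 255.0)
        cand_cost = cost(img, cand, target_nz)
        if cand_cost < cur or rng.random() < np.exp((cur - cand_cost) / temp):
            q, cur = cand, cand_cost                  # accept (possibly uphill)
            if cur < best:
                best, best_q = cur, q.copy()
    return np.round(best_q).astype(int)

if __name__ == "__main__":
    demo = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.float64)
    print(anneal_qtable(demo, steps=200))
```

The paper goes further by fitting a simple positional model to the annealed tables and annealing on the model parameters instead, which the sketch above does not attempt.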

Proceedings Article
01 Sep 2011
TL;DR: In order to investigate the impact of the quantization matrix on the performance of JPEG, a sample DCT was calculated and images were quantized using several quantization matrices, and the results were compared with those of the standard quantization matrix.
Abstract: With the increase in imaging sensor resolution, captured images are becoming larger and larger, which requires a higher image compression ratio. Discrete Cosine Transform (DCT) quantization and entropy encoding are the two main steps in the Joint Photographic Experts Group (JPEG) image compression standard. In order to investigate the impact of the quantization matrix on the performance of JPEG, a sample DCT was calculated and images were quantized using several quantization matrices. The results are compared with those obtained using the standard quantization matrix. The performance of JPEG is also analyzed for different images with different compression factors.

16 citations
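A minimal sketch of that experiment on a single 8x8 block, assuming the standard JPEG luminance table (Annex K of the standard) and plain multiplicative scaling of it in place of a real quality formula; the block itself is synthetic.

```python
# Minimal sketch: DCT-quantize one 8x8 block with scaled versions of the
# standard JPEG luminance table and compare nonzero counts and RMSE.
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
STD_LUMA_Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

def quantize_block(block, qtable):
    """DCT, quantize, dequantize; return (nonzero count, RMSE of the block)."""
    coeffs = dctn(block - 128.0, norm="ortho")
    quantized = np.round(coeffs / qtable)
    recon = idctn(quantized * qtable, norm="ortho") + 128.0
    nonzero = int(np.count_nonzero(quantized))
    rmse = float(np.sqrt(np.mean((block - recon) ** 2)))
    return nonzero, rmse

if __name__ == "__main__":
    block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64)
    for scale in (0.5, 1.0, 2.0, 4.0):       # coarser table -> fewer nonzeros
        nz, err = quantize_block(block, STD_LUMA_Q * scale)
        print(f"scale {scale:>3}: {nz:2d} nonzero coefficients, RMSE {err:.2f}")
```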

Proceedings ArticleDOI
22 Aug 1994
TL;DR: A technique for image compression using the Discrete Cosine Transform (DCT) which, compared to classical JPEG, gives no blocking effect at the same compression rate.
Abstract: The paper presents a technique for image compression using the Discrete Cosine Transform (DCT) method. In the Joint Photographic Experts Group (JPEG) standard, the image is usually compressed using a "universal" quantization matrix. We propose a technique which employs an appropriate distribution model of the DCT coefficients to deduce the quantization matrix from a set of training images. Compared to classical JPEG, this technique gave no blocking effect at the same compression rate.

16 citations
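The abstract above derives the quantization matrix from a distribution model of the DCT coefficients over training images, but the model itself is not given here. The sketch below uses a hedged stand-in for that idea: estimate the per-frequency spread of coefficients across training blocks and set each quantization step from that spread. The scaling rule, the target parameter, and the synthetic training data are illustrative assumptions, not the authors' fitted model.

```python
# Hedged stand-in for the training-based idea: collect 8x8 DCT coefficients
# from training images, measure each frequency's spread, and turn the spread
# into a quantization step. The step rule is an illustrative assumption.
import numpy as np
from scipy.fft import dctn

BLOCK = 8

def blocks(img):
    """Yield non-overlapping 8x8 blocks of a grayscale image."""
    h, w = img.shape
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            yield img[y:y+BLOCK, x:x+BLOCK]

def trained_qtable(training_images, target_bits=2.0):
    """Derive an 8x8 quantization table from DCT coefficient statistics.

    Each step is set so the corresponding frequency keeps roughly
    `target_bits` of resolution relative to its observed standard deviation.
    """
    coeffs = []
    for img in training_images:
        for b in blocks(img.astype(np.float64)):
            coeffs.append(dctn(b - 128.0, norm="ortho"))
    coeffs = np.stack(coeffs)                 # shape: (num_blocks, 8, 8)
    sigma = coeffs.std(axis=0)                # per-frequency spread
    steps = np.clip(sigma / (2.0 ** target_bits), 1.0, 255.0)
    return np.round(steps).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_training = [rng.integers(0, 256, (64, 64)) for _ in range(4)]
    print(trained_qtable(fake_training))
```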


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 21
2022: 40
2021: 5
2020: 2
2019: 8
2018: 15