Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as Lossless JPEG or .jls.


Papers
Book Chapter
06 Aug 2012
TL;DR: Two algorithms are compared, JPEG DCT image compression and JPEG wavelet compression; the results show that wavelet compression gives better quality than DCT compression for the same compression ratio.
Abstract: Compression is becoming essential, especially in the medical field. Analog film-based medical images are difficult to manage and can easily be damaged by exposure to sunlight. Digital images are more reliable and easier to manage but occupy a great deal of storage space. Compressed medical images occupy less space and can be transmitted over the network in less time. This paper discusses two algorithms: JPEG DCT image compression and JPEG wavelet compression. The algorithms are compared on the values obtained for the Mean Square Error and the Peak Signal to Noise Ratio. The results show that, for the same compression ratio, wavelet compression gives better quality than DCT compression.
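The two metrics used for the comparison above are standard. As a concrete illustration, here is a minimal sketch of MSE and PSNR for 8-bit images, assuming NumPy; the function names and the toy noisy-image usage are illustrative, not taken from the paper.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean Square Error between two 8-bit images."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means better fidelity."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

# Toy usage: compare an image with a noisy copy of itself.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(img.astype(np.int16) + rng.integers(-5, 6, img.shape),
                0, 255).astype(np.uint8)
print(f"MSE = {mse(img, noisy):.2f}, PSNR = {psnr(img, noisy):.2f} dB")
```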
Proceedings Article
28 Apr 1995
TL;DR: A lossless compression scheme for images employing mixed transforms is presented; for a given number of retained coefficients, the mixed-transform representation produces a smaller Root Mean Square Residual Error than the DCT alone.
Abstract: A lossless compression scheme for images employing mixed transforms is presented in this paper. First, the mixed-transforms technique is applied to compress the image in a lossy manner: the image is represented using subsets of the basis functions of two or more transforms, the coefficients are quantized, and the image is reconstructed. The reconstructed image samples are rounded to the nearest integer and the modified residual error is computed. This error is transmitted using a lossless technique such as Huffman coding. It is shown that, for a given number of retained coefficients, the mixed-transform representation produces a smaller Root Mean Square Residual Error than the DCT alone. The first-order entropy of the error is also smaller for the mixed-transforms technique than for the DCT, resulting in shorter Huffman codes.
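The lossy-plus-residual structure described in this abstract can be sketched generically. The sketch below substitutes a single thresholded DCT (via SciPy) for the paper's mixed transforms and zlib for Huffman coding, so it demonstrates only the principle that rounding the lossy reconstruction and entropy-coding the integer residual yields exact recovery; it is not the paper's method.

```python
import zlib
import numpy as np
from scipy.fft import dctn, idctn

def lossy_stage(block, keep=16):
    """Keep the `keep` largest-magnitude DCT coefficients, zero the rest,
    and reconstruct; this stands in for the quantized transform stage."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm="ortho")

def lossless_encode(block, keep=16):
    recon = np.rint(lossy_stage(block, keep)).astype(np.int16)
    residual = block.astype(np.int16) - recon   # integer residual error
    payload = zlib.compress(residual.tobytes()) # stand-in for Huffman coding
    return recon, payload

def lossless_decode(recon, payload):
    residual = np.frombuffer(zlib.decompress(payload), dtype=np.int16)
    return (recon + residual.reshape(recon.shape)).astype(np.uint8)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
recon, payload = lossless_encode(block)
assert np.array_equal(lossless_decode(recon, payload), block)  # exact recovery
```

Note that the decoder here is assumed to already have the lossy reconstruction (in the paper, the quantized coefficients are transmitted for that purpose); only the residual is entropy-coded.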
Proceedings Article
TL;DR: This paper will present a tutorial on arithmetic coding, provide a history of arithmetic coding in JPEG, share the motivation for T.851, outline its changes, and provide comparison results with both the baseline Huffman and the original QM-coder entropy coders.
Abstract: The Joint Photographic Experts Group (JPEG) baseline standard remains a popular and pervasive standard for continuous-tone, still image coding. The "J" in JPEG acknowledges its two main parent organizations, ISO (International Organization for Standardization) and the ITU-T (International Telecommunication Union Telecommunication Standardization Sector). Notwithstanding their joint efforts, both groups have subsequently (and separately) standardized many improvements for still image coding. Recently, ITU-T Study Group 16 completed the standardization of a new entropy coder, called the Q15-coder, whose statistical model comes from the original JPEG-1 standard. This new standard, ITU-T Rec. T.851, can be used in lieu of the traditional Huffman (a form of variable-length coding) entropy coder, and complements the QM arithmetic coder, both originally standardized in JPEG as ITU-T T.81 | ISO/IEC 10918-1. In contrast to Huffman entropy coding, arithmetic coding makes no fixed assumptions about an image's statistics, but rather adapts to them in real time. This paper will present a tutorial on arithmetic coding, provide a history of arithmetic coding in JPEG, share the motivation for T.851, outline its changes, and provide comparison results with both the baseline Huffman and the original QM-coder entropy coders. It will conclude with suggestions for future work.
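For readers new to the idea, the interval-narrowing mechanism at the heart of arithmetic coding can be shown in a few lines. This is a minimal sketch with exact rational arithmetic and a fixed symbol model; practical coders such as the QM- and Q15-coders instead use renormalized integer arithmetic with adaptive binary context models.

```python
from fractions import Fraction

def cumulative(probs):
    """Map each symbol to the start of its slot in [0, 1)."""
    cum, c = {}, Fraction(0)
    for s, p in probs.items():
        cum[s] = c
        c += p
    return cum

def encode(symbols, probs):
    """Narrow [low, high) once per symbol; any number inside the
    final interval identifies the whole message."""
    cum = cumulative(probs)
    low, high = Fraction(0), Fraction(1)
    for s in symbols:
        width = high - low
        low, high = low + width * cum[s], low + width * (cum[s] + probs[s])
    return (low + high) / 2

def decode(code, length, probs):
    """Replay the same interval narrowing to recover the symbols."""
    cum = cumulative(probs)
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        width = high - low
        for s, p in probs.items():
            lo = low + width * cum[s]
            hi = lo + width * p
            if lo <= code < hi:
                out.append(s)
                low, high = lo, hi
                break
    return out

probs = {"a": Fraction(3, 4), "b": Fraction(1, 4)}  # fixed (non-adaptive) model
msg = list("aababaa")
assert decode(encode(msg, probs), len(msg), probs) == msg
```

More probable symbols narrow the interval less, so they cost fewer output bits; this is how arithmetic coding approaches the entropy of the model without the one-bit-per-symbol floor of Huffman codes.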
Journal Article
TL;DR: The system is based on JPEG image encoding and decoding; a frame-skipping algorithm quickly obtains the pictures collected by the camera in real time, and the collected low-illumination pictures are enhanced through a neural network.
Abstract: The system is based on JPEG image encoding and decoding. A frame-skipping algorithm is used to quickly obtain the pictures collected by the camera in real time. The collected low-illumination pictures are enhanced through a neural network. Using low-level assembly language, the collected original image is compressed into JPEG and saved; the JPEG image is then read and decompressed back into the original image. The underlying assembly statements are vectorized with ASIMD instructions and accelerated in parallel through multithreading. As a result, the number of video frames displayed equals the number of video frames collected by the camera, indicating a strong acceleration effect.
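Two of the software-level ideas above, frame skipping and parallel JPEG round-trips, can be sketched at a high level. The sketch below assumes Pillow and uses synthetic frames in place of a camera; the ASIMD assembly and the neural-network enhancement stage are beyond its scope.

```python
from concurrent.futures import ThreadPoolExecutor
from io import BytesIO
from PIL import Image

def jpeg_roundtrip(frame, quality=90):
    """Compress a frame to JPEG in memory, then decode it back."""
    buf = BytesIO()
    frame.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def skip_frames(frames, step=2):
    """Naive frame skipping: keep every `step`-th captured frame so the
    encoder can keep up with the capture rate."""
    return frames[::step]

# Toy usage: synthetic "camera" frames instead of a real capture device.
frames = [Image.new("RGB", (320, 240), (i * 30 % 256, 0, 0)) for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:  # parallel round-trips
    decoded = list(pool.map(jpeg_roundtrip, skip_frames(frames)))
print(f"{len(decoded)} frames encoded and decoded")
```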

Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023      21
2022      40
2021       5
2020       2
2019       8
2018      15