
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
16 May 1998
TL;DR: This study compares motion wavelet compression to motion JPEG compression using the standard correlation coefficient and the normalized mean squared error, and finds the motion wavelet technique slightly better.
Abstract: Future developments in teleradiology hinge on the delivery of real or near real-time images, sometimes across less than optimal bandwidth communication channels. Ultrasound, to achieve its greatest diagnostic value, needs to transmit not just still images but video as well. A significant amount of compression, however, may be required to achieve near real-time video across limited bandwidths. This will inevitably result in degraded video quality. A variety of compression algorithms are in widespread use, including H.261, H.323, JPEG (Joint Photographic Experts Group), MPEG (Moving Picture Experts Group) and, most recently, wavelets. We have developed a suite of tools to evaluate each of these methods and to identify potential areas where wavelet compression may have an advantage. In this particular study, we compared motion wavelet compression to motion JPEG compression using the standard correlation coefficient and the normalized mean squared error, and found the motion wavelet technique slightly better.
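Both fidelity measures named in the abstract are simple to compute. The abstract does not state the exact normalization used, so the sketch below assumes one common definition: NMSE as the mean squared error divided by the mean squared value of the original, and the correlation coefficient as the Pearson correlation over all pixels. This is an illustrative sketch, not the paper's evaluation code.

```python
import numpy as np

def normalized_mse(original, reconstructed):
    # MSE divided by the energy of the original frame (one common
    # normalization; the paper may use a different one).
    o = original.astype(np.float64)
    r = reconstructed.astype(np.float64)
    return np.mean((o - r) ** 2) / np.mean(o ** 2)

def correlation_coefficient(original, reconstructed):
    # Pearson correlation between original and reconstructed pixels.
    o = original.astype(np.float64).ravel()
    r = reconstructed.astype(np.float64).ravel()
    return np.corrcoef(o, r)[0, 1]

# Toy usage on a synthetic frame and a noisy stand-in for its
# compressed-then-decompressed version.
frame = np.random.randint(0, 256, (64, 64)).astype(np.float64)
degraded = np.clip(frame + np.random.normal(0, 5, frame.shape), 0, 255)
print(normalized_mse(frame, degraded), correlation_coefficient(frame, degraded))
```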

6 citations

Proceedings ArticleDOI
04 Jan 2002
TL;DR: A low-complexity, low-cost technique is proposed that accurately detects diagonal edges and predicts the value of pixels to be encoded based on the gradients available within the standard predictive template of JPEG-LS.
Abstract: JPEG-LS is the latest pixel-based lossless to near-lossless still image coding standard introduced by the Joint Photographic Experts Group (JPEG). In this standard, simple localized edge detection techniques are used to determine the predictive value of each pixel. These edge detection techniques only detect horizontal and vertical edges, and the corresponding predictors have not been optimized for the accurate prediction of pixels in the locality of diagonal edges. In this paper we propose a low-complexity, low-cost technique that accurately detects diagonal edges and predicts the value of pixels to be encoded based on the gradients available within the standard predictive template of JPEG-LS. We provide experimental results to show that the proposed technique outperforms JPEG-LS in terms of predicted mean squared error, by a margin of up to 8.51%.
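For context, the "standard predictive template" referred to above is the causal neighbourhood used by JPEG-LS, and the baseline predictor is the median edge detector (MED). The sketch below shows that baseline and the three local gradients the standard computes from the template; the paper's diagonal-edge extension is not detailed in the abstract, so it is not reproduced here.

```python
def jpeg_ls_baseline(a, b, c, d):
    """Baseline JPEG-LS prediction on the causal template
           c  b  d
           a  x        (x is the pixel being predicted).
    Returns the MED prediction and the local gradients used for
    context modelling in the standard."""
    # Median edge detector: pick the min/max neighbour at a suspected
    # vertical/horizontal edge, else the planar prediction a + b - c.
    if c >= max(a, b):
        pred = min(a, b)
    elif c <= min(a, b):
        pred = max(a, b)
    else:
        pred = a + b - c
    d1, d2, d3 = d - b, b - c, c - a  # gradients D1, D2, D3
    return pred, (d1, d2, d3)

# Example: a smooth neighbourhood yields the planar prediction.
print(jpeg_ls_baseline(a=100, b=102, c=101, d=103))  # (101, (1, 1, 1))
```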

6 citations

Book ChapterDOI
01 Jan 2003
TL;DR: Jpeg2000 brings a new paradigm to the standards of image compression and creates a framework where the image compression system is more akin to an image processing system.
Abstract: Publisher Summary: In this chapter, a high-level description of the JPEG2000 algorithm is provided. JPEG2000 is the latest international standard for providing state-of-the-art compression performance, and it offers a number of functionalities that address the requirements of emerging imaging applications. With JPEG baseline, an image is compressed using a particular quantization table, which determines the quality that will be achieved at decompression time. When JPEG lossless or JPEG-LS is used for compression, only lossless decompression is available. JPEG has four "modes" of operation, and they rely on distinctly different technologies. JPEG2000 brings a new paradigm to the standards of image compression and creates a framework where the image compression system is more akin to an image processing system. The benefits of the four modes are integrated into JPEG2000. Key compression parameters such as resolution and quality can be deferred until after the creation of the codestream, and several image products can be extracted from a single codestream. The compressor decides the maximum image quality, resolution, and size. Progression along four dimensions (quality, resolution, spatial location, and component) is supported by JPEG2000. In JPEG2000, the image is defined as a two-dimensional rectangular array of samples. JPEG2000 requires more memory than sequential JPEG, and its lossless compression performance is close to that of JPEG-LS.
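One practical consequence of the single-codestream design is that a decoder can request a reduced-resolution image without decoding the full data. As a hedged illustration only: with Pillow built against OpenJPEG, the JPEG 2000 plugin exposes a reduce attribute that, if set before loading, discards that many resolution levels. The file name here is hypothetical.

```python
from PIL import Image  # assumes a Pillow build with OpenJPEG support

im = Image.open("scan.jp2")  # hypothetical JPEG 2000 file
# Each discarded resolution level halves both dimensions, so
# reduce = 2 decodes a quarter-width, quarter-height image from
# the same codestream; no separate thumbnail file is needed.
im.reduce = 2
im.load()
print(im.size)
```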

6 citations

Proceedings ArticleDOI
25 Sep 1994
TL;DR: This work focuses on techniques and algorithms for detecting the occurrence of a particular error and then for locating that error, and proposes the most effective method for error detection and image correction.
Abstract: The use of variable-length coding in the final stage of image compression using JPEG makes the image more sensitive to channel errors and can have severe effects on the viewed image. This is due to loss of synchronization in the decoder. Even one bit error can propagate significantly throughout the image. In the past, some techniques have been proposed for resynchronizing Huffman decoders using special synchronizing codewords. The JPEG standard itself allows the use of a special restart marker to help decoder resynchronization. It does not, however, give any guidelines for error recovery. We first describe the most probable types of errors that occur in a JPEG data stream. We focus on techniques and algorithms for detecting the occurrence of a particular error and then for locating that error. One technique functions at the entropy encoding level by taking advantage of the specific data structure of the JPEG stream and using alternately two different end-of-block characters. Others function at the DCT coefficient level or at the pixel level, detecting unlikely patterns that are produced due to errors. We compare different methods and finally propose the most effective method for error detection and image correction.
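The restart markers mentioned above are two-byte codes 0xFFD0 through 0xFFD7 that an encoder may insert at fixed MCU intervals; because decoding restarts cleanly at each one, an error can be confined to a single restart interval. The sketch below merely locates those markers in a byte stream; it is not the detection method the paper proposes, and a full parser would also skip marker-segment payloads to avoid false matches outside the entropy-coded data.

```python
def restart_marker_offsets(jpeg_bytes: bytes) -> list[int]:
    # Scan for restart markers 0xFFD0..0xFFD7. Inside entropy-coded
    # data a 0xFF byte is always followed by 0x00 (byte stuffing) or
    # a marker code, so this match is unambiguous there.
    offsets = []
    i = 0
    while i < len(jpeg_bytes) - 1:
        if jpeg_bytes[i] == 0xFF and 0xD0 <= jpeg_bytes[i + 1] <= 0xD7:
            offsets.append(i)
            i += 2
        else:
            i += 1
    return offsets

# Usage (hypothetical file). An empty list means the encoder wrote no
# DRI/RSTn markers, so a single bit error can desynchronize the rest
# of the scan.
with open("photo.jpg", "rb") as f:
    print(restart_marker_offsets(f.read()))
```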

6 citations

Journal ArticleDOI
TL;DR: This work proposes a sub-block interchange lossless compression method which belongs to the block sorting class and has outperformed GIF in compression ratios and BWT in compression times when tested with 512×512-pixel 8-bit grey scale images.
Abstract: The Lempel-Ziv-Welch (LZW) technique for text compression has been successfully adapted to lossless image compression, for example in GIF. A new class of text compression, the Burrows-Wheeler transform (BWT) (Burrows and Wheeler, 1994), has been developed which gives promising results for text compression. Here, we propose a sub-block interchange lossless compression method which belongs to this block sorting class. Our compression results have outperformed GIF in compression ratios and BWT in compression times when tested with 512×512-pixel 8-bit grey scale images. The comparison of compression ratios and times with GIF, BWT, and other popular LZ-based compression methods is discussed.
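The abstract does not spell out the sub-block interchange step, so no attempt is made to reproduce it here; the sketch below shows only the block-sorting core it builds on, a naive Burrows-Wheeler transform over byte rotations (real implementations use suffix arrays for speed).

```python
def bwt(data: bytes) -> bytes:
    # Naive Burrows-Wheeler transform: sort all rotations of the
    # input (with a NUL sentinel marking the block end) and emit the
    # last column. O(n^2 log n); fine for demonstration only.
    assert b"\x00" not in data, "demo reserves NUL as the sentinel"
    s = data + b"\x00"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

# The output groups equal symbols together, which is what makes the
# downstream move-to-front and entropy coding stages effective.
print(bwt(b"banana"))  # b'annb\x00aa'
```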

6 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15