
Showing papers on "Lossless JPEG published in 1989"


Journal ArticleDOI
TL;DR: Improvements between 3 and 10 dB in peak-to-peak signal-to-noise ratios (PSNR) are provided by the robust decoder with respect to conventional JPEG decoders, for bit error rates around 10⁻⁴.
Abstract: The robustness to transmission errors of JPEG coded images is investigated and techniques are proposed to reduce their effects. After an analysis of the JPEG transfer format, three main classes of transfer format constituents are distinguished, and a JPEG compatible approach is proposed to stop error propagation in the entropy coded data, with encoder and decoder reset after fixed coding interval lengths. With the use of restart intervals, the propagation of errors is stopped but no correction has taken place. Therefore, a concealment procedure is defined and investigated. It consists of two steps. First, error detection must be performed and three different techniques are assessed and compared. Then, block error concealment is achieved. Simulation results are reported. Depending on the entropy coding and on the neighborhood templates used for detection and concealment, prediction based or interpolation based, improvements between 3 and 10 dB in peak-to-peak signal-to-noise ratios (PSNR) are provided by the robust decoder with respect to conventional JPEG decoders, for bit error rates around 10⁻⁴.
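The restart-interval idea the abstract describes can be illustrated with a toy decoder. This is a hypothetical sketch, not the paper's implementation: `decode_interval`, `robust_decode`, and the byte-level stand-in for entropy coding are invented for illustration.

```python
# Illustrative sketch: restart markers split the entropy-coded data into
# independently decodable intervals, so a bit error corrupts at most one
# interval instead of propagating to the end of the stream.

def decode_interval(segment):
    """Stand-in for entropy decoding of one restart interval.

    Here each byte simply becomes one decoded value."""
    return list(segment)

def robust_decode(segments, is_corrupt):
    """Decode each interval independently; conceal corrupt ones.

    `is_corrupt` stands in for the error-detection step the paper
    evaluates; concealment here just repeats the previous interval's
    data, a crude analogue of prediction-based concealment."""
    decoded, previous = [], []
    for seg in segments:
        if is_corrupt(seg):
            block = previous[:]          # conceal from the neighbour
        else:
            block = decode_interval(seg)
        decoded.append(block)
        previous = block
    return decoded

# A toy stream of three restart intervals; the middle one is hit by errors.
stream = [b"abcd", b"!!!!", b"wxyz"]
result = robust_decode(stream, is_corrupt=lambda s: b"!" in s)
```

The point of the sketch is that the error in the second interval never reaches the third: without restart points, a single bit error in variable-length entropy-coded data desynchronizes all subsequent decoding.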

30 citations


Proceedings ArticleDOI
05 Apr 1989
TL;DR: Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original, and lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original.
Abstract: Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.

20 citations


Proceedings ArticleDOI
27 Nov 1989
TL;DR: The ISO/CCITT Joint Photographic Experts Group is in the process of developing an international standard for general-purpose, continuous-tone still-image compression, which consists of a baseline system, a simple coding method sufficient for many applications, a set of extended system capabilities, and an independent lossless method for applications needing that type of compression only.
Abstract: The ISO/CCITT Joint Photographic Experts Group is in the process of developing an international standard for general-purpose, continuous-tone still-image compression. A brief history is presented as background to a summary of the past year's progress, which was highlighted by definition of the overall structure of the proposed standard. This structure consists of a baseline system, a simple coding method sufficient for many applications, a set of extended system capabilities, which extend the baseline system to satisfy a broader range of applications, and an independent lossless method for applications needing that type of compression only.

8 citations


Proceedings ArticleDOI
14 Aug 1989
TL;DR: A simple, easy to implement real-time lossless image coding algorithm which takes into account the pixel-to-pixel correlation in an image and seems to react robustly to mismatch between the assumed and actual statistics.
Abstract: A simple, easy to implement real-time lossless image coding algorithm which takes into account the pixel-to-pixel correlation in an image is presented. The algorithm has built-in limits to the variance in the size of the codewords, and seems to react robustly to mismatch between the assumed and actual statistics. Because of the limited dynamic range in codeword size it is not expected to have significant buffer overflow and underflow problems. Test results using this algorithm are presented.
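The idea of bounding codeword-size variance can be sketched with a Rice-style code whose unary prefix is capped by a fixed-length escape. The parameters `K`, `MAX_Q`, and `ESC_BITS` are assumptions for illustration, not the paper's design; the point is only that no codeword can exceed a fixed maximum length, which is what keeps buffer behaviour predictable.

```python
# Sketch of a length-bounded variable-length code: a Rice code whose
# unary quotient is capped, falling back to a fixed-length escape so
# the longest codeword is MAX_Q + 1 + ESC_BITS bits.

K = 2          # Rice parameter (assumed for illustration)
MAX_Q = 8      # cap on the unary quotient length
ESC_BITS = 16  # fixed-length escape payload

def encode_value(n):
    q, r = divmod(n, 1 << K)
    if q < MAX_Q:
        # Normal Rice codeword: unary quotient, '0', binary remainder.
        return "1" * q + "0" + format(r, f"0{K}b")
    # Escape: maximal unary prefix, then the value verbatim.
    return "1" * MAX_Q + "0" + format(n, f"0{ESC_BITS}b")

def decode_value(bits):
    q = 0
    while bits[q] == "1":
        q += 1
    rest = bits[q + 1:]
    if q < MAX_Q:
        return q * (1 << K) + int(rest[:K], 2)
    return int(rest[:ESC_BITS], 2)
```

With these numbers the longest possible codeword is 25 bits regardless of the input value, so a statistics mismatch can waste bits but cannot produce unbounded codewords.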

8 citations


Proceedings ArticleDOI
06 Sep 1989
TL;DR: Arithmetic coding has been applied to provide lossless and loss-inducing compression of optical, infrared, and synthetic aperture radar imagery of natural scenes to reflect the inherent sensor-dependent differences in the stochastic structure of the imagery.
Abstract: Summary form only given. Arithmetic coding has been applied to provide lossless and loss-inducing compression of optical, infrared, and synthetic aperture radar imagery of natural scenes. Several different contexts have been considered, including both predictive and nonpredictive variations, with both image-dependent and image-independent variations. In lossless coding experiments, arithmetic coding algorithms have been shown to outperform comparable variants of both Huffman and Lempel-Ziv-Welch coding algorithms by approximately 0.5 b/pixel. For image-dependent contexts constructed from high-order autoregressive predictors, arithmetic coding algorithms provide compression ratios as high as four. Contexts constructed from lower-order autoregressive predictors provide compression ratios nearly as great as those of the higher-order predictors with favorable computational trades. Compression performance variations have been shown to reflect the inherent sensor-dependent differences in the stochastic structure of the imagery. Arithmetic coding has also been demonstrated to be a valuable addition to loss-inducing compression techniques.
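Why predictive contexts help an arithmetic coder can be illustrated with a small entropy comparison: an arithmetic coder's output rate approaches the entropy of its model, so modelling prediction residuals rather than raw pixels lowers the achievable bits/pixel on smooth imagery. The synthetic scanline and first-order predictor below are invented for illustration and are not the paper's experiments.

```python
# Entropy (in bits/symbol) bounds what an arithmetic coder driven by a
# given model can achieve; compare a raw-pixel model against a model
# over first-order prediction residuals on smooth synthetic data.

import math
from collections import Counter

def entropy_bits(symbols):
    """Empirical zeroth-order entropy in bits per symbol."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A smooth synthetic "scanline": slowly varying 4-bit values.
row = [(i // 3) % 16 for i in range(300)]
residuals = [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

raw_rate = entropy_bits(row)         # model over raw pixel values
pred_rate = entropy_bits(residuals)  # model over prediction residuals
```

On this data the raw-pixel model needs close to 4 bits/symbol while the residual model needs far less, mirroring the abstract's point that predictor-based contexts drive the compression ratio.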

1 citation