Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
01 Jan 2005
TL;DR: JPEG-LS and JPEG-2000 are the latest ISO/ITU standards for compressing continuous-tone images; JPEG-LS is based on the LOCO-I algorithm, which was chosen for the standard due to its good balance between complexity and efficiency.
Abstract: Lossless compression is necessary for many high-performance applications, such as geophysics, telemetry, nondestructive evaluation, and medical imaging, which require exact recovery of the original images. Lossless image compression can always be modeled as a two-stage procedure: decorrelation and entropy coding. The first stage removes spatial (inter-pixel) redundancy by means of run-length coding, SCAN-language-based methods, predictive techniques, transform techniques, and other decorrelation techniques. The second stage, which includes Huffman coding, arithmetic coding, and LZW, removes coding redundancy. Nowadays, the performance of entropy coding techniques is very close to the theoretical bound, so most research activity concentrates on the decorrelation stage. JPEG-LS and JPEG-2000 are the latest ISO/ITU standards for compressing continuous-tone images. JPEG-LS is based on the LOCO-I algorithm, which was chosen as the basis of the standard due to its good balance between complexity and efficiency. Another technique proposed for JPEG-LS was CALIC. JPEG-2000 was designed with the main objective of providing efficient compression over a wide range of compression ratios.
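LOCO-I's decorrelation stage is built around the median edge detector (MED) predictor. As a concrete illustration, here is a minimal Python sketch of MED-based decorrelation; the function name is ours, and padding missing border neighbors with zero is a simplification (JPEG-LS defines its own boundary rules):

```python
import numpy as np

def med_residuals(img):
    """Decorrelate an image with LOCO-I's median edge detector (MED).

    Each pixel x is predicted from its causal neighbors a (left),
    b (above) and c (above-left): near a horizontal or vertical edge
    the predictor picks min(a, b) or max(a, b); otherwise it uses the
    planar estimate a + b - c. The residuals x - pred feed stage two,
    the entropy coder.
    """
    x = img.astype(np.int32)
    h, w = x.shape
    res = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            # Missing border neighbors default to 0 here; JPEG-LS
            # specifies its own boundary handling (a simplification).
            a = x[i, j - 1] if j > 0 else 0
            b = x[i - 1, j] if i > 0 else 0
            c = x[i - 1, j - 1] if (i > 0 and j > 0) else 0
            if c >= max(a, b):
                pred = min(a, b)
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c
            res[i, j] = x[i, j] - pred
    return res
```

The min/max switching acts as a rudimentary edge detector, which is why MED residuals cluster tightly around zero, exactly the shape the subsequent entropy coder exploits.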

81 citations

Proceedings ArticleDOI
10 Sep 2001
TL;DR: This paper presents the architecture and the VHDL design of a Two-Dimensional Discrete Cosine Transform (2-D DCT) for JPEG image compression, which is used as the core of a JPEG compressor and is the critical path in JPEG compression hardware.
Abstract: This paper presents the architecture and the VHDL design of a Two-Dimensional Discrete Cosine Transform (2-D DCT) for JPEG image compression. This architecture is used as the core of a JPEG compressor and is the critical path in JPEG compression hardware. The 2-D DCT calculation is made using the 2-D DCT separability property, such that the whole architecture is divided into two 1-D DCT calculations connected by a transpose buffer. These parts are described in this paper, with an architectural discussion and the VHDL synthesis results as well. The 2-D DCT architecture uses 4,792 logic cells of one Altera Flex10kE FPGA and reaches an operating frequency of 12.2 MHz. One input block with 8×8 elements of 8 bits each is processed in 25.2 µs, and the pipeline latency is 160 clock cycles.
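The separability property the architecture exploits is easy to state in software. The following Python sketch (our own naming; it mirrors the pass/transpose/pass structure, not the authors' VHDL) computes the 2-D DCT of a block as two 1-D DCT passes with a transpose between them:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal 1-D DCT-II basis matrix C (y = C @ x is a 1-D DCT)."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)           # DC-row normalization
    return C

def dct2_separable(block):
    """2-D DCT via separability: DCT2(X) = C @ X @ C.T.

    The first 1-D pass transforms the columns; the intermediate result
    is then transposed (the role of the hardware's transpose buffer)
    and the same 1-D transform is applied again.
    """
    C = dct_matrix(block.shape[0])
    first_pass = C @ block            # 1-D DCT of each column
    return (C @ first_pass.T).T       # transpose, second 1-D pass
```

Splitting the transform this way replaces one O(n^4) 2-D computation per block with 2n cheap 1-D transforms, which is what makes the two-stage pipelined hardware practical.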

80 citations

Patent
25 Jul 1994
TL;DR: In this article, color image compression and decompression are achieved either by spatially or chromatically multiplexing three digitized color planes into a digital array representing a single spatially and chromatically multiplexed plane, or by using a color imaging device to capture an image directly into a single spatially multiplexed image plane for further compression, transmission, and/or storage.
Abstract: Color image compression and decompression are achieved either by spatially or chromatically multiplexing three digitized color planes into a digital array representative of a single digitized spatially and chromatically multiplexed plane, or, by use of a color imaging device, by capturing an image directly into a single spatially multiplexed image plane for further compression, transmission, and/or storage (40). At the point of decompression, a demultiplexer (50) separately extracts, from the stored or transmitted image, the data to restore each of the color planes. Specific demultiplexing techniques involve correlating information from the other planes with the color plane to be demultiplexed. Various techniques of entropy reduction, smoothing, and speckle reduction may be used together with standard digital color compression techniques, such as JPEG. Using lossless JPEG, about 6:1 data compression is achieved with no losses in subsequent processing after initial compression. Using lossy JPEG, substantially higher data compression is achievable, but with a proportional loss in perceived image quality.
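The abstract does not spell out the mosaic pattern, so the Python sketch below is only a hypothetical illustration of the idea: three digitized color planes are interleaved into a single plane (here with a Bayer-like GRBG layout, our assumption) that a single-plane codec such as lossless JPEG can then compress.

```python
import numpy as np

def spatially_multiplex(r, g, b):
    """Hypothetical spatial multiplexing of three color planes.

    The three planes are sampled into one plane on a Bayer-like GRBG
    grid (an assumption; the patent's actual pattern may differ).
    Demultiplexing would re-interpolate each plane, correlating it
    with the other planes as the patent describes. Assumes all three
    planes share the same even dimensions.
    """
    h, w = r.shape
    mux = np.empty((h, w), dtype=r.dtype)
    mux[0::2, 0::2] = g[0::2, 0::2]   # G on even rows, even cols
    mux[0::2, 1::2] = r[0::2, 1::2]   # R on even rows, odd cols
    mux[1::2, 0::2] = b[1::2, 0::2]   # B on odd rows, even cols
    mux[1::2, 1::2] = g[1::2, 1::2]   # G on odd rows, odd cols
    return mux                        # one plane, ready for a single-plane codec
```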

80 citations

Proceedings ArticleDOI
01 Aug 1991
TL;DR: This paper proposes a probabilistic model for lossless image compression that can be used to find and encode as much of the image structure of the data as possible, and then to encode efficiently the unstructured, noisy residual.
Abstract: Lossless text compression methods involve some form of moderately high-order exact string matching. However, this work cannot easily be carried over to lossless image compression, because images are two-dimensional and (more important) essentially quantized analog data. A better plan is to find and encode as much of the image structure of the data as possible, and then to encode efficiently the unstructured, noisy residual. In three steps, the authors predict the value of each pixel, model the error of the prediction, and encode the error of the prediction. Having a probabilistic model for the errors, they can use arithmetic coding to encode the errors efficiently with respect to the model.
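As a compact Python sketch of that three-step pipeline: predict each pixel from causal neighbors, fit a probability model to the errors, and measure the cost an arithmetic coder matched to that model would pay. The predictor and the empirical model are our stand-ins (not the authors'), and the arithmetic coder itself is replaced by its ideal code length, which it approaches in practice:

```python
import numpy as np

def pipeline_bits_per_pixel(img):
    """Predict -> model -> (ideal) encode, per the three-step scheme.

    Step 1 uses a simple causal predictor (mean of left and above
    neighbors; a stand-in, not the paper's predictor). Step 2 fits
    an empirical distribution to the prediction errors. Step 3 reports
    the entropy of that model in bits/pixel, the length an arithmetic
    coder driven by the model would essentially achieve.
    """
    x = img.astype(np.int32)
    pred = np.zeros_like(x)
    pred[1:, 1:] = (x[1:, :-1] + x[:-1, 1:]) // 2  # causal prediction
    err = (x - pred).ravel()                       # prediction errors
    _, counts = np.unique(err, return_counts=True)
    p = counts / counts.sum()                      # error model
    total_bits = float(counts @ -np.log2(p))       # ideal code length
    return total_bits / err.size
```

The better the prediction and the error model, the lower the bits-per-pixel figure, which is exactly the sense in which the paper shifts the work from the coder to the model.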

79 citations

Patent
Kunal Mukerjee
15 Apr 2004
TL;DR: Predictive lossless coding, as discussed by the authors, chooses and applies one of multiple available differential pulse-code modulation (DPCM) modes to individual macroblocks to produce DPCM residuals with a closer-to-optimal distribution for run-length/Golomb-Rice (RLGR) entropy encoding.
Abstract: Predictive lossless coding provides effective lossless image compression of both photographic and graphics content in image and video media. Predictive lossless coding can operate on a macroblock basis for compatibility with existing image and video codecs. It chooses and applies one of multiple available differential pulse-code modulation (DPCM) modes to individual macroblocks to produce DPCM residuals having a closer-to-optimal distribution for run-length/Golomb-Rice (RLGR) entropy encoding. This permits effective lossless entropy encoding despite the differing characteristics of photographic and graphics image content.
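A Python sketch of the per-macroblock mode-selection idea follows. The three candidate DPCM modes and the cost proxy are our assumptions: the patent defines its own mode set, and it would score residuals by actual RLGR cost rather than the sum of absolute residuals used here.

```python
import numpy as np

def dpcm_residual(block, mode):
    """DPCM residuals for one macroblock under a given predictor mode.
    First-row/first-column samples are left unpredicted in this sketch."""
    x = block.astype(np.int32)
    r = x.copy()
    if mode == "left":
        r[:, 1:] = x[:, 1:] - x[:, :-1]
    elif mode == "above":
        r[1:, :] = x[1:, :] - x[:-1, :]
    elif mode == "avg":
        r[1:, 1:] = x[1:, 1:] - (x[1:, :-1] + x[:-1, 1:]) // 2
    return r

def choose_dpcm_mode(macroblock, modes=("left", "above", "avg")):
    """Pick the mode whose residuals look cheapest to entropy-code.

    Sum of absolute residuals is a proxy for the RLGR code length:
    the smaller it is, the more the residual distribution is peaked
    at zero, which is what RLGR coding rewards. Photographic blocks
    tend to prefer the averaging mode, flat graphics the directional ones.
    """
    costs = {m: int(np.abs(dpcm_residual(macroblock, m)).sum())
             for m in modes}
    best = min(costs, key=costs.get)
    return best, dpcm_residual(macroblock, best)
```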

77 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15