Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG or .jls.


Papers
Proceedings ArticleDOI
14 Mar 2010
TL;DR: This work applies a previously proposed scheme that learns local image structures and predicts image data from that structure information to lossless image compression, resulting in a lossless image encoder.
Abstract: One major challenge in image compression is to efficiently represent and encode high-frequency structure components in images, such as edges, contours, and texture regions. To address this issue for lossy image compression, in our previous work we proposed a scheme to learn local image structures and efficiently predict image data based on this structure information. In this work, we applied this structure learning and prediction scheme to lossless image compression and developed a lossless image encoder. Our extensive experimental results demonstrate that the lossless image encoder is competitive with, and even outperforms, the state-of-the-art lossless image compression methods.
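
The prediction/residual skeleton underlying such an encoder can be sketched briefly. The Python code below is a minimal illustration, not the authors' learned structure model: it uses a fixed causal average predictor, and all names are ours.

import numpy as np

def causal_residuals(img):
    """Predict each pixel as the average of its west and north
    neighbours; the residuals are what an entropy coder would compress."""
    img = img.astype(np.int32)
    res = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            west = img[y, x - 1] if x > 0 else 0
            north = img[y - 1, x] if y > 0 else 0
            res[y, x] = img[y, x] - (west + north) // 2
    return res

def reconstruct(res):
    """Invert causal_residuals exactly: the decoder sees the same causal
    neighbourhood as the encoder, so the round trip is lossless."""
    h, w = res.shape
    img = np.zeros_like(res)
    for y in range(h):
        for x in range(w):
            west = img[y, x - 1] if x > 0 else 0
            north = img[y - 1, x] if y > 0 else 0
            img[y, x] = res[y, x] + (west + north) // 2
    return img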

7 citations

Journal ArticleDOI
TL;DR: A novel coefficient selection method based on face segmentation has been proposed for selecting a limited number of zigzag scanned quantized coefficients in JPEG compressed domain, which led to an improvement in recognition accuracy and a reduction in computational complexity of the face recognition system.
Abstract: The JPEG compression standard is widely used for reducing the volume of images that are stored or transmitted via networks. In biometric datasets, facial images are usually stored in JPEG compressed format and must be fully decompressed before use in a face recognition system. Recently, in order to avoid the computational cost of the JPEG decompression step, face recognition in the compressed domain has emerged as a research topic. In this paper, a novel coefficient selection method based on face segmentation is proposed for selecting a limited number of zigzag-scanned quantized coefficients in the JPEG compressed domain, which improves recognition accuracy and reduces the computational complexity of the face recognition system. In the proposed method, different low-frequency coefficients are selected for the recognition process based on the importance of the regions of a face. The experiments were conducted on the FERET and FEI face databases, and PCA and ICA were used to extract features from the selected coefficients. Recognition accuracy and time complexity metrics were employed to evaluate the performance of the proposed method, and the results were compared with those of state-of-the-art methods. The results show the superiority of the proposed approach in terms of recognition ranks, discriminatory power, and time complexity.
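
The zigzag selection step can be made concrete with a short sketch. The helpers below (names are illustrative, not from the paper) build the standard 8x8 zigzag scan order and keep only the first k quantized coefficients, i.e., the low-frequency part the method selects per face region.

import numpy as np

def zigzag_indices(n=8):
    """(row, col) pairs of an n x n block in standard zigzag scan order:
    anti-diagonals are traversed in alternating directions."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def select_low_freq(block, k):
    """Keep the first k coefficients along the zigzag path of a block
    of quantized DCT coefficients."""
    return np.array([block[r, c] for r, c in zigzag_indices(block.shape[0])[:k]])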

7 citations

Proceedings ArticleDOI
15 Mar 1999
TL;DR: This work discusses the various experiments conducted on context modeling of wavelet coefficients for arithmetic coding to optimize the compression efficiency of the EZW lossless coding framework.
Abstract: The EZW lossless coding framework consists of three stages: (i) a reversible wavelet transform, (ii) an EZW data structure to order the coefficients, and (iii) arithmetic coding using context modeling. In this work, we discuss the various experiments conducted on context modeling of wavelet coefficients for arithmetic coding to optimize the compression efficiency. The context modeling of wavelet coefficients can be divided into two parts: (i) context modeling of the significance information and (ii) context modeling of the remaining, or residue, information. Our experiments showed that, while context modeling of the residue yielded considerable compression gains, context modeling of the significance information helped only to a modest extent.
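
As a rough illustration of what context modeling of significance information means here, the sketch below derives a context index from the significance states of a coefficient's neighbours; the 8-neighbourhood and the number of contexts are our assumptions, not the paper's exact model.

import numpy as np

def significance_context(sig, y, x):
    """Count significant neighbours in the 8-neighbourhood; an
    arithmetic coder would keep one adaptive probability model per
    count (nine contexts for the significance bit)."""
    h, w = sig.shape
    count = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and sig[ny, nx]:
                count += 1
    return count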

7 citations

Proceedings ArticleDOI
Wen Gao, Minqiang Jiang, Haoping Yu
TL;DR: To improve the performance of the lossless coding mode, several new coding tools that were contributed to JCT-VC but not adopted in version 1 of the HEVC standard are introduced.
Abstract: In this paper, we first review the lossless coding mode in version 1 of the recently finalized HEVC standard. We then provide a performance comparison between the lossless coding modes in the HEVC and MPEG-AVC/H.264 standards and show that HEVC lossless coding has limited coding efficiency. To improve the performance of the lossless coding mode, several new coding tools that were contributed to JCT-VC but not adopted in version 1 of the HEVC standard are introduced. In particular, we discuss sample-based intra prediction and coding of residual coefficients in more detail. Finally, we briefly address a new class of coding tools, namely a dictionary-based coder, that is efficient in encoding screen content, including graphics and text.
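
Sample-based intra prediction is easy to illustrate. The simplified sketch below (illustrative names, only the pure horizontal and vertical directions, zero-valued boundary) shows the core idea: each sample is predicted from the immediately adjacent reconstructed sample rather than from distant block-boundary samples, which shortens the prediction distance in lossless mode.

import numpy as np

def sample_based_residual(block, mode):
    """Residuals of sample-wise horizontal or vertical prediction; in
    lossless coding the reconstruction is exact, so the decoder can
    repeat the same sample-by-sample prediction."""
    b = block.astype(np.int32)
    if mode == "horizontal":      # predict each sample from its left neighbour
        pred = np.roll(b, 1, axis=1)
        pred[:, 0] = 0            # simplified boundary handling
    elif mode == "vertical":      # predict each sample from the sample above
        pred = np.roll(b, 1, axis=0)
        pred[0, :] = 0
    else:
        raise ValueError("unsupported mode in this sketch")
    return b - pred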

7 citations

Journal ArticleDOI
TL;DR: An improvement on the Joint Bi-Level Imaging Group (JBIG) method for continuous-tone image compression is proposed, together with a high-order entropy estimation algorithm that indicates the potentially achievable lower-bound bit rate and should be useful in decorrelation analysis as well as in the design of cascaded decorrelators.
Abstract: Lossless compression techniques are essential in some applications, such as archival and communication of medical images. In this paper, an improvement on the Joint Bi-Level Imaging Group (JBIG) method for continuous-tone image compression is proposed. The method is an innovative combination of multiple decorrelation procedures, namely a lossless Joint Photographic Experts Group (JPEG)-based predictor, a transform-based inter-bit-plane decorrelator, and a JBIG-based intra-bit-plane decorrelator. The improved JBIG coding scheme outperformed lossless JPEG coding, JBIG coding, and the best mode of compression with reversible embedded wavelets (CREW) coding on average bit rate by 0.56 (8 bits/component images only), 0.14, and 0.12 bits per pixel, respectively, on the JPEG standard set of 23 continuous-tone test images. The compression technique may be easily incorporated into currently existing JBIG-based products. A high-order entropy estimation algorithm is also presented, which indicates the potentially achievable lower-bound bit rate and should be useful in decorrelation analysis as well as in the design of cascaded decorrelators. © 1997 SPIE and IS&T. (S1017-9909(97)00802-7)
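
The first decorrelation stage, a lossless-JPEG-style predictor, can be written out directly. The seven fixed predictors below follow the original lossless JPEG standard, with Python's floor division standing in for the standard's arithmetic shift, over the causal neighbours A (left), B (above), and C (above-left).

def lossless_jpeg_predict(a, b, c, mode):
    """Predictors 1-7 from the original lossless JPEG standard."""
    return {
        1: a,
        2: b,
        3: c,
        4: a + b - c,
        5: a + (b - c) // 2,
        6: b + (a - c) // 2,
        7: (a + b) // 2,
    }[mode]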

7 citations


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (82% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image processing: 229.9K papers, 3.5M citations (80% related)
- Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15