Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over the lifetime, 2415 publications have been published within this topic receiving 51110 citations. The topic is also known as: Lossless JPEG & .jls.


Papers
Proceedings ArticleDOI
T. Tada, Kohei Cho, Haruhisa Shimoda, Toshibumi Sakata, Shinichi Sobue
18 Aug 1993
TL;DR: It was determined that all the test satellite images could be compressed to at least 1/10 of the original data volume preserving high visual image quality.
Abstract: Image compression is a key technology for realizing on-line satellite image transmission economically and quickly. Among various image compression algorithms, the JPEG algorithm is the international standard for still color image compression. In this study, various kinds of satellite images were compressed with the JPEG algorithm, and the relation between compression ratio and image quality was evaluated. For the image quality evaluation, both subjective and objective evaluations were performed. It was determined that all the test satellite images could be compressed to at least 1/10 of the original data volume while preserving high visual image quality. The degradation of the spatial distribution quality of the compressed images was evaluated using the power spectra of the original and compressed images.

11 citations
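
The quality-versus-ratio evaluation described above is easy to reproduce in outline. The sketch below compresses an image at several JPEG quality settings and reports the compression ratio and PSNR; it uses Pillow and NumPy, which are of course not the tools of the 1993 study, and the input filename is hypothetical.

```python
# Minimal sketch: sweep JPEG quality, measure compression ratio and PSNR.
import io

import numpy as np
from PIL import Image


def jpeg_ratio_and_psnr(image: Image.Image, quality: int):
    """Compress with JPEG at `quality`; return compression ratio and PSNR."""
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=quality)
    compressed_size = buf.tell()

    original = np.asarray(image)  # uint8 pixels of the uncompressed image
    buf.seek(0)
    decoded = np.asarray(Image.open(buf))

    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    return original.nbytes / compressed_size, psnr


if __name__ == "__main__":
    img = Image.open("satellite_scene.png").convert("RGB")  # hypothetical input
    for q in (90, 75, 50, 25, 10):
        ratio, psnr = jpeg_ratio_and_psnr(img, q)
        print(f"quality={q:3d}  ratio={ratio:5.1f}:1  PSNR={psnr:5.2f} dB")
```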

Journal ArticleDOI
TL;DR: This paper gives an overview of the objective quality assessment to be conducted as part of the JPEG XS evaluation procedures; among the codecs compared, the most complex algorithm, HEVC SCC intra, achieves the highest compression efficiency on screen content.
Abstract: Today, many existing types of video transmission and storage infrastructure are not able to handle UHD uncompressed video in real time. To reduce the required bit rates, a low-latency lightweight compression scheme is needed. To this end, several standardization efforts, such as Display Stream Compression, Advanced DSC, and JPEG XS, are currently being made. Focusing on screen content use cases, this paper provides a comparison of existing codecs suited for this field of application. In particular, the performance of DSC, VC-2, JPEG 2000 (in low-latency and low-complexity configurations), JPEG and HEVC Screen Content Coding Extension (SCC) in intra mode are evaluated. First, quality is assessed in single and multiple generations. Then, error robustness is evaluated by inserting one-bit errors at random positions in the compressed bitstreams. Unsurprisingly, the most complex algorithm, HEVC SCC intra, achieves the highest compression efficiency on screen content. JPEG 2000 performs well in the three experiments while HEVC SCC does not provide multi-generation robustness. DSC guarantees quality preservation in single generation at high bit rates and VC-2 provides very high error resilience. This work gives the reader an overview of the objective quality assessment that will be conducted as part of JPEG XS evaluation procedures.

11 citations
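
The error-robustness experiment, flipping single bits at random positions in the compressed bitstream, can be sketched as follows. Decoding through Pillow's JPEG decoder stands in for the various codecs compared in the paper; the input filename and trial count are illustrative.

```python
# Sketch of a one-bit-error robustness test: flip a random bit in the
# compressed bitstream and see whether the decoder still succeeds.
import io
import random

from PIL import Image


def flip_random_bit(data: bytes, rng: random.Random) -> bytes:
    """Return a copy of `data` with one randomly chosen bit inverted."""
    buf = bytearray(data)
    pos = rng.randrange(len(buf))
    buf[pos] ^= 1 << rng.randrange(8)
    return bytes(buf)


def decode_survival_rate(bitstream: bytes, trials: int = 100, seed: int = 0) -> float:
    """Fraction of corrupted bitstreams that still decode without error."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        corrupted = flip_random_bit(bitstream, rng)
        try:
            Image.open(io.BytesIO(corrupted)).load()  # force a full decode
            ok += 1
        except Exception:
            pass  # the decoder rejected the corrupted stream
    return ok / trials


if __name__ == "__main__":
    with open("screen_content.jpg", "rb") as f:  # hypothetical input
        stream = f.read()
    print(f"decode survival rate: {decode_survival_rate(stream):.0%}")
```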

Journal ArticleDOI
TL;DR: The authors show that the proposed method, based on the clustering of transcoding operations represented as high-dimensional vectors, significantly outperforms previous methods in accuracy.
Abstract: The problem of efficiently adapting JPEG images to satisfy given constraints, such as maximum file size and resolution, arises in a number of applications, from universal media access for mobile browsing to multimedia messaging services. However, optimizing for perceived quality of the user experience incurs a non-negligible computational cost, which the authors aim to minimize by the use of low-cost predictors. In previous work, the authors presented predictors and predictor-based systems to achieve low-cost and near-optimal adaptation of JPEG images under given constraints of file size and resolution. In this work, they extend and improve these solutions by including more information about images to obtain more accurate predictions of the file size and quality resulting from transcoding. The authors show that the proposed method, based on the clustering of transcoding operations represented as high-dimensional vectors, significantly outperforms previous methods in accuracy.

11 citations
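
A toy version of the clustering idea: represent each past transcoding operation as a feature vector, cluster the vectors, and predict the outcome of a new operation from the mean outcome of its nearest cluster. The feature set, the synthetic training data, and the scikit-learn k-means below are illustrative assumptions, not the authors' actual pipeline.

```python
# Toy sketch: cluster transcoding-operation vectors with k-means and
# predict the output file size of a new operation from its cluster mean.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical training log: [input_size_kb, input_quality, target_quality,
# scale_factor] -> observed output size in kB (synthetic stand-in data).
X = rng.uniform([50, 50, 20, 0.25], [500, 95, 95, 1.0], size=(1000, 4))
y = X[:, 0] * (X[:, 2] / X[:, 1]) * X[:, 3] ** 2  # synthetic outcome

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X)
cluster_mean_size = np.array(
    [y[kmeans.labels_ == k].mean() for k in range(kmeans.n_clusters)]
)


def predict_output_size(op_vector: np.ndarray) -> float:
    """Predict the transcoded file size as the mean of the nearest cluster."""
    k = kmeans.predict(op_vector.reshape(1, -1))[0]
    return float(cluster_mean_size[k])


print(predict_output_size(np.array([200.0, 80.0, 50.0, 0.5])))
```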

Journal ArticleDOI
TL;DR: This study applies RDLS to the discrete wavelet transform (DWT) in JPEG 2000 lossless coding, employs a heuristic for image-adaptive RDLS filter selection, and finds that RDLS significantly improves the bitrates of non-photographic images and of images with impulse noise added, while the bitrates of photographic images are improved by less than 1% on average.
Abstract: In a previous study, we noticed that the lifting step of a color space transform might increase the amount of noise that must be encoded during compression of an image. To alleviate this problem, we proposed the replacement of lifting steps with reversible denoising and lifting steps (RDLS), which are basically lifting steps integrated with denoising filters. We found the approach effective for some of the tested images. In this study, we apply RDLS to the discrete wavelet transform (DWT) in JPEG 2000 lossless coding. We evaluate RDLS effects on bitrates using various denoising filters and a large number of diverse images. We employ a heuristic for image-adaptive RDLS filter selection; based on its empirical outcomes, we also propose a fixed filter selection variant. We find that RDLS significantly improves the bitrates of non-photographic images and of images with impulse noise added, while the bitrates of photographic images are improved by less than 1% on average. Considering that the DWT stage may worsen the bitrates of some images, we propose a couple of practical compression schemes based on JPEG 2000 and RDLS. For non-photographic images, we obtain an average bitrate improvement of about 12% for fixed filter selection and about 14% for image-adaptive selection. Highlights: denoising is integrated with DWT lifting steps in lossless JPEG 2000; a heuristic is used for image-adaptive selection of denoising filters; significant bitrate improvements are obtained for non-photographic images; consistently good performance is observed on images with impulse noise; compression schemes with various bitrate-complexity tradeoffs are proposed.

11 citations
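
The reversibility of RDLS follows from a simple observation: if the prediction in a lifting step reads denoised copies of the even samples while the even samples themselves are stored untouched, the inverse step can recompute exactly the same denoised values and undo the prediction. The 1-D sketch below illustrates this with an integer predict step and a 3-tap averaging denoiser; both filter choices are illustrative, not the ones evaluated in the paper.

```python
# 1-D sketch of a reversible denoising and lifting step (RDLS): the
# prediction reads *denoised* even samples, the evens are kept as-is,
# so the inverse can rebuild the same denoised values exactly.
import numpy as np


def denoise(even: np.ndarray) -> np.ndarray:
    """Illustrative smoothing filter: 3-tap integer average with edge
    replication. Any filter of the even samples alone preserves reversibility."""
    padded = np.pad(even, 1, mode="edge").astype(np.int64)
    return (padded[:-2] + padded[1:-1] + padded[2:]) // 3


def rdls_predict_forward(x: np.ndarray):
    """One RDLS predict step: detail = odd - average of denoised evens."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = denoise(even)
    d_right = np.append(d[1:], d[-1])  # replicate at the right boundary
    detail = odd - (d + d_right) // 2
    return even, detail


def rdls_predict_inverse(even: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """Invert the step: recompute the same denoised evens, add back."""
    d = denoise(even)
    d_right = np.append(d[1:], d[-1])
    odd = detail + (d + d_right) // 2
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x


if __name__ == "__main__":
    signal = np.random.default_rng(1).integers(0, 256, size=64)
    even, detail = rdls_predict_forward(signal)
    assert np.array_equal(rdls_predict_inverse(even, detail), signal)
    print("perfect reconstruction: OK")
```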

Journal ArticleDOI
TL;DR: An adaptive fuzzy-tuning modeler is employed that applies fuzzy inference to deal efficiently with the problem of conditional probability estimation; the compression results of the proposed method are satisfactory for various types of source data.
Abstract: This paper describes an online lossless data-compression method using adaptive arithmetic coding. To achieve good compression efficiency, we employ an adaptive fuzzy-tuning modeler that applies fuzzy inference to deal efficiently with the problem of conditional probability estimation. In comparison with other lossless coding schemes, the compression results of the proposed method are good and satisfactory for various types of source data. Since we adopt the table-lookup approach for the fuzzy-tuning modeler, the design is simple, fast, and suitable for VLSI implementation.

11 citations
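
The modeler's job in adaptive arithmetic coding is to supply the coder with an online estimate of symbol probabilities. The sketch below shows a conventional order-0 count-based model in place of the paper's fuzzy-tuning modeler, which refines precisely this estimation step via fuzzy inference and table lookup; the increment and rescaling constants are illustrative.

```python
# Conventional order-0 adaptive model of the kind an arithmetic coder
# consults; the paper's fuzzy-tuning modeler replaces this estimation
# step, so this is a stand-in, not the authors' method.
class AdaptiveModel:
    """Order-0 adaptive frequency model for an arithmetic coder."""

    def __init__(self, num_symbols: int, max_total: int = 1 << 14):
        self.freq = [1] * num_symbols  # start uniform, counts never zero
        self.total = num_symbols
        self.max_total = max_total

    def probability_range(self, symbol: int):
        """Cumulative interval [low, high) out of `total` for `symbol`,
        which is what an arithmetic coder narrows its interval by."""
        low = sum(self.freq[:symbol])
        return low, low + self.freq[symbol], self.total

    def update(self, symbol: int):
        """Adapt after coding a symbol; rescale to bound the counts."""
        self.freq[symbol] += 32
        self.total += 32
        if self.total > self.max_total:
            self.freq = [(f + 1) // 2 for f in self.freq]
            self.total = sum(self.freq)


model = AdaptiveModel(num_symbols=256)
for byte in b"abracadabra":
    low, high, total = model.probability_range(byte)
    # an arithmetic encoder would narrow its interval by (low/total, high/total)
    model.update(byte)
print(model.probability_range(ord("a")))  # 'a' now has the largest share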


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15