
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as .jls.


Papers
Proceedings ArticleDOI
01 Dec 2013
TL;DR: The paper proposes a novel method for decoding progressive JPEG images on existing hardware that natively supports baseline JPEG, along with a memory optimization for progressive JPEG that reduces DDR storage requirements in low-cost embedded systems.
Abstract: The widespread use of social media for picture sharing has made the progressive JPEG format popular, because the image is refined over time on slow internet connections. Typically, these pictures are decoded in software, and decoding time grows large as resolution in megapixels increases. Embedded devices usually have baseline JPEG hardware support, aimed at traditional camera capture and local playback. The paper proposes a novel method for decoding progressive JPEG images using existing hardware that natively supports baseline JPEG. The solution enhances the native baseline JPEG hardware with the local CPU, which performs Huffman decoding for the progressive format. The second part proposes a memory optimization for progressive JPEG to reduce DDR storage requirements in low-cost embedded systems. It uses the concept of sign and zero maps with non-bit-exact decoding that has no visual quality impact. The solution runs faster by a large factor (the ratio of hardware to CPU speed) with 88% less memory storage at any resolution.

9 citations

Proceedings ArticleDOI
15 Apr 2002
TL;DR: It is indicated that the VDM can be used to predict the visibility of compression artifacts and guide the selection of encoder bit rate for individual images to maintain artifact visibility below a specified threshold.
Abstract: The Sarnoff JNDmetrix visual discrimination model (VDM) was applied to predict the visibility of compression artifacts in mammographic images. Sections of digitized mammograms were subjected to irreversible (lossy) JPEG and JPEG 2000 compression. The detectability of compressed images was measured experimentally and compared with VDM metrics and PSNR for the same images. Artifacts produced by JPEG 2000 compression were generally easier for observers to detect than those produced by JPEG encoding at the same compression ratio. Detection thresholds occurred at JPEG 2000 compression ratios from 6:1 to 10:1, significantly higher than the average 2:1 ratio obtained for reversible (lossless) compression. VDM predictions of artifact visibility were highly correlated with observer performance for both encoding techniques. Performance was less correlated with encoder bit rate and PSNR, which was a relatively poor predictor of threshold bit rate across images. Our results indicate that the VDM can be used to predict the visibility of compression artifacts and guide the selection of encoder bit rate for individual images to maintain artifact visibility below a specified threshold. © 2002 SPIE, The International Society for Optical Engineering.

9 citations

Proceedings ArticleDOI
01 Oct 2003
TL;DR: This paper compares JPEG and JPEG 2000 image coders at low bit rates using two picture quality measures: peak signal-to-noise ratio (PSNR), a traditional objective measure, and picture quality scale (PQS), a perception-based quantitative measure.
Abstract: The objective of this paper is to provide a comparison of JPEG and JPEG 2000 image coders focusing on low bit rates. Coders are evaluated in a rate-distortion sense. The influences of different image contents and compression ratios are assessed. Two picture quality measures are used: peak signal-to-noise ratio (PSNR) as a traditional objective picture quality measure, and picture quality scale (PQS) as a perception-based quantitative picture quality measure.
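PSNR, the traditional objective measure used in this comparison, is computed from the mean squared error between the original and compressed images. A minimal sketch (the function name and NumPy usage are illustrative, not from the paper):

```python
import numpy as np

def psnr(original: np.ndarray, compressed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    err = original.astype(np.float64) - compressed.astype(np.float64)
    mse = np.mean(err ** 2)           # mean squared error
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For 8-bit images, max_val is 255; typical lossy-JPEG reconstructions land in the 25-45 dB range.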

9 citations

01 Jan 2011
TL;DR: This column will discuss the teaching of the fundamental limits of lossless data compression, expanding on one of the items in the Shannon Lecture.
Abstract: Most courses on information theory, particularly those patterned after Cover and Thomas [1], cover the algorithmic side of lossless data compression, most notably Huffman, arithmetic and Lempel-Ziv codes. I like to do it at the advanced-undergraduate level and it is great fun to teach. However, how we describe and analyze those algorithms is not the purpose of this column. Instead, expanding on one of the items in my Shannon Lecture, I will discuss the teaching of the fundamental limits of lossless data compression.
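The fundamental limit discussed here is Shannon entropy: for an i.i.d. source, no lossless code can achieve an expected length below the source entropy, and Huffman and arithmetic codes approach it. A minimal sketch of the empirical per-symbol bound (the function name is my own, not from the column):

```python
import math
from collections import Counter

def empirical_entropy(symbols) -> float:
    """Shannon entropy (bits/symbol) of the empirical distribution --
    a lower bound on the average codeword length of any lossless
    symbol-by-symbol code for this source."""
    counts = Counter(symbols)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A uniform source over four symbols gives 2 bits/symbol, so no prefix code can beat 2 bits per symbol on average for such data.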

9 citations

Journal ArticleDOI
TL;DR: A fast and efficient VLSI hardware architecture for context formation in EBCOT tier-1 of JPEG 2000 is proposed and implemented; experimental results show that the design outperforms well-known techniques in processing time.
Abstract: With advances in multimedia technology, demand for high-speed real-time image compression systems has increased. The JPEG 2000 still image compression standard was developed to accommodate such application requirements. Embedded block coding with optimal truncation (EBCOT) is an essential and computationally very demanding part of the JPEG 2000 compression process. Various applications, such as satellite imagery, medical imaging, and digital cinema, require high-speed, high-performance EBCOT architectures. In the JPEG 2000 standard, the context formation block of EBCOT tier-1 involves highly complex computation and becomes the bottleneck of the system. In this paper, we propose a fast and efficient VLSI hardware architecture for context formation in EBCOT tier-1. A high-speed parallel bit-plane coding (BPC) hardware architecture for the EBCOT module in JPEG 2000 is proposed and implemented. Experimental results show that our design outperforms well-known techniques in processing time, reaching a 70% reduction compared to bit-plane sequential processing.
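EBCOT tier-1 codes each code-block magnitude bit-plane by bit-plane, from most significant to least, and context formation then classifies each bit within a plane. A minimal sketch of the bit-plane decomposition itself (not the paper's parallel architecture; names are illustrative):

```python
import numpy as np

def bit_planes(coeffs: np.ndarray, num_planes: int = 8):
    """Split coefficient magnitudes into bit-planes, most significant
    first -- the order in which EBCOT tier-1 scans them."""
    mags = np.abs(coeffs).astype(np.uint32)
    return [((mags >> p) & 1).astype(np.uint8)
            for p in range(num_planes - 1, -1, -1)]
```

Signs are coded separately in EBCOT, which is why only magnitudes are decomposed here.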

9 citations


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (82% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image processing: 229.9K papers, 3.5M citations (80% related)
- Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15