Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG & .jls.


Papers
Proceedings ArticleDOI
05 Jun 2000
TL;DR: This paper designs a lossless version of the lapped orthogonal transform (LOT), and investigates two cases of 31-band and 64-band decomposition in which the 4-point and 8-point lossless LOT are used, respectively.
Abstract: In lossless transforms, integer input signals are transformed into integer transform coefficients and losslessly reconstructed. Lossless versions of the discrete cosine transform and wavelet transforms have been proposed. In this paper, we design a lossless version of the lapped orthogonal transform (LOT). The fast LOT is decomposed into block transforms. Then the lossless LOT is obtained by replacing them by the corresponding lossless ladder networks. We investigate two cases of 31-band and 64-band decomposition in which the 4-point and 8-point lossless LOT are used, respectively. We compare them with the conventional lossless methods in terms of lossless and lossy compression efficiency. The proposed methods are found to have good performance.
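To make the ladder-network substitution concrete, the following sketch (my own illustration, not code from the paper) shows a single integer-to-integer plane rotation implemented as three rounded lifting shears; this is the reversible building block that a lossless LOT can use in place of each butterfly of the fast LOT. The function names, the example angle, and the use of Python are assumptions for illustration.

```python
import math

def lifting_rotation(x0, x1, theta):
    """Forward integer-to-integer 2-point rotation built from three rounded shears.

    A plane rotation R(theta) factors into three lifting (ladder) steps;
    rounding each step keeps integer samples on integers while remaining
    exactly invertible.
    """
    t = math.tan(theta / 2.0)
    s = math.sin(theta)
    x0 = x0 - round(t * x1)   # shear 1: [[1, -tan(theta/2)], [0, 1]]
    x1 = x1 + round(s * x0)   # shear 2: [[1, 0], [sin(theta), 1]]
    x0 = x0 - round(t * x1)   # shear 3: same as shear 1
    return x0, x1

def inverse_lifting_rotation(y0, y1, theta):
    """Exact inverse: undo the three shears in reverse order with the same rounding."""
    t = math.tan(theta / 2.0)
    s = math.sin(theta)
    y0 = y0 + round(t * y1)
    y1 = y1 - round(s * y0)
    y0 = y0 + round(t * y1)
    return y0, y1

# Round trips on integer inputs are exact, which is what "lossless" means here.
a, b = lifting_rotation(37, -5, math.pi / 3)
assert inverse_lifting_rotation(a, b, math.pi / 3) == (37, -5)
```

Because each shear only adds a rounded function of the other sample, the inverse subtracts the same rounded quantities in reverse order, so integer inputs are recovered exactly.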

11 citations

Book ChapterDOI
TL;DR: An analytical model and a numerical analysis of the sub-sampling, compression and re-scaling process are given, making explicit the possible quality/compression trade-offs and showing that the image auto-correlation can provide good estimates for establishing the down-sampling factor that achieves optimal performance.
Abstract: The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, as we go to low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It appears that at low bit rates a down-scaled image, when JPEG compressed, visually beats the high-resolution image compressed via JPEG to be represented with the same number of bits. Motivated by this idea, we show how down-sampling an image to a low resolution, then using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the sub-sampling, compression and re-scaling process that make explicit the possible quality/compression trade-offs. We show that the image auto-correlation can provide good estimates for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to get the best possible recovered image in terms of PSNR.
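The chain described above is easy to prototype. The sketch below assumes Pillow and NumPy, with an illustrative file name, scale factor, and quality settings rather than the paper's analytically derived ones: it compresses a down-sampled copy with JPEG and interpolates it back, so its PSNR can be compared with direct JPEG coding at a similar file size.

```python
import io

import numpy as np
from PIL import Image

def jpeg_bytes(img, quality):
    """Encode a PIL image as baseline JPEG at the given quality and return the bytes."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def psnr(ref, test):
    """PSNR in dB between two 8-bit images given as numpy arrays."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def downsample_jpeg_upsample(img, factor, quality):
    """Down-sample, JPEG-compress at the low resolution, then interpolate back up."""
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.Resampling.BICUBIC)  # Pillow >= 9.1
    data = jpeg_bytes(small, quality)
    restored = Image.open(io.BytesIO(data)).resize((w, h), Image.Resampling.BICUBIC)
    return restored, len(data)

# "test.png" is a hypothetical grayscale test image; the quality values are
# illustrative and would normally be tuned so both variants meet the same byte budget.
img = Image.open("test.png").convert("L")
direct = jpeg_bytes(img, quality=10)
restored, size = downsample_jpeg_upsample(img, factor=2, quality=35)

ref = np.asarray(img)
print("direct JPEG:    ", psnr(ref, np.asarray(Image.open(io.BytesIO(direct)))), "dB,", len(direct), "bytes")
print("down/up-sampled:", psnr(ref, np.asarray(restored)), "dB,", size, "bytes")
```

The paper goes further by deriving the optimal down-sampling factor for a given bit budget from the image auto-correlation, rather than fixing it by hand as above.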

11 citations

Book Chapter
01 Jul 2014
TL;DR: A new idea for applying the JPEG technique with the Discrete Wavelet Transform (DWT) to high-resolution images; the approach is compared with the JPEG and JPEG2000 algorithms using 2D and 3D RMSE.
Abstract: Image compression is one of the important techniques used today for image and video transmission. Many types of image compression techniques are in use; one of them is the JPEG technique. In this research, we introduce a new idea for applying the JPEG technique with the Discrete Wavelet Transform (DWT) to high-resolution images. Our image compression algorithm proceeds as follows. First, the image is transformed by a single-level DWT. Second, the JPEG algorithm is applied to the "LL" sub-band; this process is called the JPEG Transformation. Third, the final transformed matrix is separated into a DC-Array and an AC-Matrix, containing the DC values and AC coefficients respectively. Finally, the minimize-matrix-size algorithm is applied to the AC-Matrix, followed by arithmetic coding. The decompression algorithm used in this research is the Parallel Sequential Search Algorithm, which implements the inverse of the minimize-matrix-size algorithm. The searching algorithm consists of P pointers, all working in parallel to recover the original AC coefficients. Thereafter, all decoded DC values are combined with the decoded AC coefficients in one matrix, followed by the inverse JPEG transform and the inverse DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, it is compared with the JPEG and JPEG2000 algorithms using 2D and 3D RMSE.
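A rough sketch of the first two stages described above, assuming PyWavelets and Pillow; the rescaling of the LL band to 8 bits and the parameter names are my assumptions, and the paper's minimize-matrix-size and arithmetic-coding stages are not reproduced here.

```python
import io

import numpy as np
import pywt
from PIL import Image

def dwt_then_jpeg(image, wavelet="haar", quality=75):
    """Single-level 2-D DWT, then JPEG applied to the LL sub-band only.

    Mirrors the first two stages of the scheme described above; the detail
    sub-bands are returned untouched and the later entropy-coding stages
    are omitted.
    """
    arr = np.asarray(image, dtype=np.float64)
    ll, details = pywt.dwt2(arr, wavelet)

    # Rescale LL into 0..255 so a baseline JPEG codec can handle it.
    lo, hi = float(ll.min()), float(ll.max())
    span = max(hi - lo, 1e-9)
    ll8 = np.round(255.0 * (ll - lo) / span).astype(np.uint8)

    buf = io.BytesIO()
    Image.fromarray(ll8).save(buf, format="JPEG", quality=quality)
    return buf.getvalue(), (lo, hi), details

def reconstruct_from_ll(ll_jpeg, scale, details, wavelet="haar"):
    """Decode the LL band, undo the rescaling, and invert the DWT."""
    lo, hi = scale
    ll8 = np.asarray(Image.open(io.BytesIO(ll_jpeg)), dtype=np.float64)
    ll = ll8 / 255.0 * (hi - lo) + lo
    return pywt.idwt2((ll, details), wavelet)
```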

11 citations

Proceedings ArticleDOI
22 Oct 2012
TL;DR: It is demonstrated that tuned quantisation tables can be used as image descriptors for content-based image retrieval, and that retrieval can be performed extremely quickly because it relies only on information from the JPEG headers.
Abstract: In this paper, we present an extremely fast method for online image retrieval of JPEG compressed images. We exploit minimal perceptual error image compression, which optimises JPEG quantisation tables to improve the resulting image quality. In particular, we demonstrate that the tuned quantisation tables can be used as image descriptors for performing content-based image retrieval. Image similarity is expressed as similarity between the respective quantisation tables, and feature extraction and comparison can be performed extremely quickly since they are based only on information from the JPEG headers. We show that our method takes only about 2-2.5% of the time of standard compressed-domain algorithms, yet achieves retrieval accuracy within 3.5% of these techniques on a large dataset.
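The descriptor extraction can be illustrated with Pillow, which parses a JPEG file's quantisation tables from the header without decoding any pixel data. The L2 distance and file names below are placeholders, not the paper's similarity measure, and the idea only discriminates between images that were compressed with per-image optimised tables rather than the default JPEG tables.

```python
import numpy as np
from PIL import Image

def quant_table_descriptor(path):
    """Build a descriptor from the JPEG quantisation tables in the file header.

    Pillow exposes the tables on a JpegImageFile as a dict of 64-entry lists
    (img.quantization); no pixel data needs to be decoded, which is what makes
    this kind of descriptor so cheap to extract.
    """
    with Image.open(path) as img:
        tables = img.quantization  # {table_id: 64 quantisation steps}
        return np.concatenate(
            [np.asarray(tables[k], dtype=np.float64) for k in sorted(tables)]
        )

def table_distance(d1, d2):
    """Plain L2 distance between descriptors; the paper's similarity measure may differ."""
    n = min(len(d1), len(d2))
    return float(np.linalg.norm(d1[:n] - d2[:n]))

# Rank a (hypothetical) set of JPEGs against a query purely from header information.
query = quant_table_descriptor("query.jpg")
database = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]
ranked = sorted(database, key=lambda p: table_distance(query, quant_table_descriptor(p)))
print(ranked)
```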

10 citations

Proceedings ArticleDOI
23 Jun 1997
TL;DR: This paper presents a methodology for the optimal construction of reduced pyramids by selecting the interpolation synthesis post-filters so as to minimize the error variance at each level of the pyramid.
Abstract: Reduced pyramids, including in particular pyramids without analysis filters, are known to produce excellent results when used for lossless signal and image compression. This paper presents a methodology for the optimal construction of such pyramids by selecting the interpolation synthesis post-filters so as to minimize the error variance at each level of the pyramid. This establishes optimally efficient interpolative pyramidal lossless compression. It also has the added advantage of producing lossy replicas of the original which, at lower resolutions, retain as much similarity to the original as possible. The general optimization methodology is developed first for a general family of reduced pyramids. Subsequently, it is applied to the optimization of pyramids in this family formed using 2D quincunx sampling matrices. Optimal versions of these techniques are determined for 2D images characterized by separable or isotropic correlation functions. The advantages of the developed methods are demonstrated by experimental evaluation.
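The structure being optimised can be sketched as follows, assuming NumPy and SciPy, with rectangular 2:1 sub-sampling instead of the quincunx lattices treated in the paper and a fixed bilinear kernel standing in for the variance-minimising synthesis post-filters: each level stores an integer prediction residual, which is what makes the pyramid lossless.

```python
import numpy as np
from scipy import ndimage

def upsample(coarse, shape, kernel):
    """Zero-insert to the target shape, filter, and round back to integers."""
    up = np.zeros(shape, dtype=np.float64)
    up[::2, ::2] = coarse
    return np.round(ndimage.convolve(up, kernel, mode="reflect")).astype(np.int64)

def build_reduced_pyramid(img, levels, kernel):
    """Interpolative (reduced) pyramid: no analysis filter, just sub-sampling.

    At each level the image is decimated by 2 in each direction, interpolated
    back with the synthesis post-filter, and the integer prediction residual
    is stored. Choosing the filter to minimise the residual variance at each
    level is what the paper optimises; here it is simply a parameter.
    """
    residuals = []
    current = np.asarray(img, dtype=np.int64)
    for _ in range(levels):
        coarse = current[::2, ::2]                       # plain decimation
        residuals.append(current - upsample(coarse, current.shape, kernel))
        current = coarse
    return current, residuals                            # coarsest level + residuals

def reconstruct_pyramid(top, residuals, kernel):
    """Exact reconstruction: redo each prediction and add back the stored residual."""
    current = top
    for res in reversed(residuals):
        current = upsample(current, res.shape, kernel) + res
    return current

# A separable bilinear kernel stands in for the optimised synthesis filter.
bilinear = np.outer([0.5, 1.0, 0.5], [0.5, 1.0, 0.5])
img = np.random.randint(0, 256, (64, 64))
top, residuals = build_reduced_pyramid(img, levels=3, kernel=bilinear)
assert np.array_equal(reconstruct_pyramid(top, residuals, bilinear), img)
```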

10 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15