Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known by the alias .jls.


Papers
Journal Article (DOI)
TL;DR: Experimental results show dramatically reduced computational complexity while the subjective image quality closely approximates that of full lossless decoding.
Abstract: This paper presents a method of lossless-to-lossy transcoding of images that takes advantage of bitplane coding in JPEG 2000. Decoding JPEG 2000 lossless codestreams and subsequently re-encoding them into JPEG 2000 lossy streams consumes substantial computation time. To reduce this time, partial decoding of the lossless codestream is proposed. In addition, by employing a unique rate control, the image-quality degradation usually associated with partial decoding is reduced. Experimental results show dramatically reduced computational complexity while the subjective image quality closely approximates that of full lossless decoding.
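The core trick, stopping the bitplane decoder early instead of fully decoding and re-encoding, can be illustrated with a minimal sketch. This is hypothetical, assuming integer transform coefficients are already at hand; it is not the paper's rate-control method:

```python
import numpy as np

def truncate_bitplanes(coeffs: np.ndarray, dropped_planes: int) -> np.ndarray:
    """Approximate lossy transcoding by zeroing the lowest bitplanes.

    `coeffs` holds signed integer transform coefficients (e.g. from a
    lossless wavelet decomposition); `dropped_planes` is the number of
    least-significant magnitude bitplanes to discard.
    """
    signs = np.sign(coeffs)
    magnitudes = np.abs(coeffs)
    # Shifting right then left clears the dropped bitplanes, which is
    # equivalent to stopping the bitplane decoder early.
    truncated = (magnitudes >> dropped_planes) << dropped_planes
    return signs * truncated

# Example: dropping 2 bitplanes quantizes magnitudes to multiples of 4.
coeffs = np.array([37, -18, 5, -3, 120], dtype=np.int32)
print(truncate_bitplanes(coeffs, 2))  # [ 36 -16   4   0 120]
```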

1 citation

06 Dec 2011
TL;DR: ALACRI2TY (Analytics-driven Lossless dAta Compression for Rapid In-situ Indexing, sToring, and querYing), which at its core consists of two components, a lossless compressor and a query processing engine over compressed data, yields a multi-fold improvement in query response time over state-of-the-art systems such as FastBit, MonetDB, and SciDB.
Abstract: ARKATKAR, ISHA. ALACRI2TY: Lossless Data Compression for Analytics-driven Query Processing. (Under the direction of Nagiza F. Samatova.) Analysis of scientific simulations is highly data-intensive and is becoming an increasingly important challenge. Peta-scale data sets require us to look for alternative ways of performing query-driven analyses. This thesis is an attempt in the direction of query processing over losslessly compressed scientific data. We propose ALACRI2TY (Analytics-driven Lossless dAta Compression for Rapid In-situ Indexing, sToring, and querYing), which at its core consists of two components: a lossless compressor and a query processing engine over compressed data. ALACRI2TY's compression component compresses double-precision scientific data by unique-value-based binning. Based on significant-bit splitting, ALACRI2TY improves compression ratios over general-purpose compression utilities. It then indexes the metadata about the compression rather than the data itself, enabling lightweight index storage. The query processing engine answers range queries over this compressed data with a low degree of unnecessary decompression. ALACRI2TY's methodology involving compression and binning enables (1) indexing with a total storage requirement (data + index) of less than 135% (versus 200-300% in existing scientific database systems); (2) data access at multiple precision levels of detail necessitated by the varying sensitivity of analytical kernels (e.g., low precision for histograms and descriptive statistics, medium precision for clustering, and full precision for Fourier analysis); (3) robust performance across univariate as well as multivariate query constraints via efficient bitmap-based aggregation of partial results. Altogether, these capabilities yield a multi-fold improvement in query response time over state-of-the-art systems such as FastBit, MonetDB, and SciDB when tested on several real-world data sets from scientific simulations, using the high-end compute clusters and Lustre file system at Oak Ridge National Laboratory. © Copyright 2012 by Isha Arkatkar
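As a rough, hypothetical illustration of significant-bit splitting (not the thesis's actual format), the sketch below separates each double into high-order bytes, which take few distinct values and therefore bin and index cheaply, and low-order residual bytes, which are handed to a general-purpose compressor:

```python
import numpy as np
import zlib

def split_doubles(values: np.ndarray, sig_bytes: int = 2):
    """Split float64 values into high-order 'significant' bytes and
    low-order residual bytes, mimicking significant-bit splitting.

    The high bytes (sign + exponent + leading mantissa bits) take few
    distinct values, so they bin well and index cheaply; the noisy low
    bytes go to a general-purpose compressor.
    """
    raw = values.astype('>f8').view(np.uint8).reshape(-1, 8)  # big-endian bytes
    high = raw[:, :sig_bytes].copy()        # binning/index component
    low = raw[:, sig_bytes:].tobytes()      # residual component
    bins, inverse = np.unique(high, axis=0, return_inverse=True)
    return bins, inverse, zlib.compress(low)

data = np.random.normal(300.0, 1.0, 10_000)
bins, ids, packed = split_doubles(data)
print(len(bins), "bins;", len(packed), "bytes of low-order payload")
```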

1 citation

Journal Article
Yang Aiping
TL;DR: The experimental results show that the proposed algorithm is better than the traditional JPEG compression algorithm at the same bit rate.
Abstract: This paper proposes an improved JPEG compression algorithm based on the Haralick sloped-facet model. The different areas produced by the segmentation are taken into account, and each area is compressed at a different ratio. The experimental results show that the proposed algorithm is better than the traditional JPEG compression algorithm at the same bit rate.
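The Haralick sloped-facet model fits a plane g(r, c) = a*r + b*c + g0 to each small image region; the fit residual can then drive the segmentation that decides how aggressively each region is compressed. A minimal least-squares version of that fit over an 8x8 block (our own sketch, not the paper's implementation) might look like:

```python
import numpy as np

def sloped_facet_fit(block: np.ndarray):
    """Least-squares fit of the sloped-facet model g(r, c) = a*r + b*c + g0
    over an image block; returns the plane parameters and residual energy.

    A high residual marks textured/edge regions that merit finer
    quantization; a low residual marks flat or sloped regions that can
    be compressed more aggressively.
    """
    rows, cols = block.shape
    r, c = np.mgrid[0:rows, 0:cols]
    A = np.column_stack([r.ravel(), c.ravel(), np.ones(rows * cols)])
    params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    residual = block.ravel() - A @ params
    return params, float(residual @ residual)

# Example: a perfect ramp fits exactly (near-zero residual energy).
ramp = np.add.outer(np.arange(8) * 3.0, np.arange(8) * 2.0) + 5.0
params, energy = sloped_facet_fit(ramp)
print(params, round(energy, 6))   # ~[3. 2. 5.] 0.0
```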

1 citation

01 Jan 2005
TL;DR: An investigation into a lossless embedded audio coder based on the AAC coder and utilising both backward Linear Predictive Coding (LPC) and cascade coding is provided, showing that employing LPC in an embedded architecture achieves approximately an 8% decrease in the coding rate.

Abstract: Embedded lossless audio coding is a technique for embedding a perceptual audio coding bitstream within a lossless audio coding bitstream. This paper provides an investigation into a lossless embedded audio coder based on the AAC coder and utilising both backward Linear Predictive Coding (LPC) and cascade coding. Cascade coding is a technique for entropy coding of large dynamic range integer sequences that has the advantage of simple implementation and low complexity. Results show that employing LPC in an embedded architecture achieves approximately an 8% decrease in the coding rate. The overall compression performance of cascade coding closely follows Rice coding, a popular entropy coding method for lossless audio. It is also shown that performance can be further improved by incorporating a state-of-the-art lossless coder into the proposed embedded coder.

I. INTRODUCTION

Lossless audio coding has received attention recently with MPEG's effort in standardizing MPEG-4 Audio Lossless Coding (MPEG-4 ALS) [1]. However, little attention has focused on researching embedded lossless coding. In this scheme, depicted in Fig. 1, a lossless enhancement layer is appended to an embedded lossy layer, resulting in both a lossy and a lossless bitstream. The lossy layer is useful for transmission or reviewing purposes, whereas the full lossless signal would be more suitable for archival or high-quality transmission purposes. In Figure 1, the input signal s(n) is coded with a perceptual coder to produce a synthesized version s'(n) and a bitstream bp. The residual signal r(n) is found as:

r(n) = s(n) - s'(n)    (1)

The resulting residual is first decorrelated to produce a new signal r'(n), which is then encoded with an entropy coder to produce bitstream bc(n). In the decoder, the received bitstreams are decoded to produce signals s'(n) and r'(n). The decoded signal r'(n) is then re-correlated to produce r(n), and the original signal is losslessly recovered as described in expression (2).

[Fig. 1: Diagram of an embedded lossless audio coder.]

s(n) = s'(n) + r(n)    (2)

Existing approaches to embedded lossless coding include using AAC as the lossy layer [2] and using a method based on scalable image coding [3]. Work performed in [4] analyzed the characteristics of employing an AAC coder as a lossy base layer and losslessly coding the difference between the lossy base layer and the original signal (the lossless enhancement layer) using established lossless compression schemes such as gzip (based on Lempel-Ziv compression [5]) and Monkey's Audio [6]. In the field of entropy coding for audio, which is the final step in achieving lossless compression in Fig. 1, Rice coding (which is a special case of Huffman coding) is the de facto standard [7]. It is used in many pure lossless compression algorithms such as Shorten [8], Free Lossless Audio Coder (FLAC) [9], Monkey's Audio (MAC) [6], and more recently in MPEG-4 ALS [1]. This paper examines the performance of a lossless coder based on the one described in Fig. 1. The paper extends the research described in [4] to include a decorrelation stage (based on Linear Predictive Coding (LPC)) and an entropy coding stage based on cascade coding [10, 11]. Section II describes the embedded lossy coder adopted in this work, Section III describes and presents results for the decorrelation stage based on LPC, and Section IV provides an overview of the entropy coding stage based on cascade coding. Section V details the resulting overall compression performance of the proposed lossless coder, and Section VI provides conclusions and future directions.
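Rice coding, named above as the de facto entropy coder for lossless audio, writes a non-negative integer u with parameter k as the unary quotient u >> k followed by k raw remainder bits. A minimal sketch (with a zigzag map for signed residuals, an assumption on our part, not taken from the paper) is:

```python
def zigzag(x: int) -> int:
    """Map signed residuals to non-negative ints: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * x if x >= 0 else -2 * x - 1

def rice_encode(values, k: int) -> str:
    """Rice-code a residual sequence with parameter k.

    Each value is split into quotient q = u >> k (sent in unary as q
    ones plus a terminating zero) and k raw remainder bits.
    """
    bits = []
    for v in values:
        u = zigzag(v)
        q, r = u >> k, u & ((1 << k) - 1)
        bits.append('1' * q + '0' + format(r, f'0{k}b'))
    return ''.join(bits)

# Small residuals cost few bits; the best k tracks the residual variance.
residuals = [0, -1, 3, -2, 5]
print(rice_encode(residuals, k=2))   # -> 000001101001111010
```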

1 citation

Proceedings Article (DOI)
28 Dec 2009
TL;DR: This paper presents an efficient scheme for transmitting JPEG-coded images over wireless channels, in which the compressed image is protected against channel effects using a Reed-Solomon block code before transmission.
Abstract: The Joint Photographic Experts Group (JPEG) standard is widely used for coding still images. The JPEG standard includes two basic compression methods: a DCT-based method specified for 'lossy' compression, and a predictive method for 'lossless' compression. JPEG features a simple lossy technique known as the Baseline method, which has been by far the most widely implemented method for a large number of applications. It is very sensitive to transmission errors and can be employed successfully only when the transmission channel is nearly error-free. Wireless communication channels are characterized by long bursts of data errors and average bit error rates (BERs) of 10^-3 to 10^-4. This paper presents an efficient scheme to transmit JPEG-coded images over wireless channels: the compressed image is protected against channel effects using a Reed-Solomon block code and then transmitted over the wireless channel.
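The protection step can be sketched with the third-party reedsolo Python package (our assumption; the paper does not name an implementation). Parity bytes are appended to the JPEG byte stream so that byte errors within each codeword, including short bursts, can be corrected at the receiver:

```python
# pip install reedsolo  (third-party package, assumed here for illustration)
from reedsolo import RSCodec

def protect_jpeg(jpeg_bytes: bytes, parity: int = 32) -> bytes:
    """Append Reed-Solomon parity: up to parity // 2 corrupted bytes per
    255-byte codeword can be corrected after transmission."""
    return bytes(RSCodec(parity).encode(jpeg_bytes))

def recover_jpeg(received: bytes, parity: int = 32) -> bytes:
    """Correct channel errors and strip the parity bytes.
    (reedsolo >= 1.0 returns a (message, codeword, errata) tuple.)"""
    decoded, _, _ = RSCodec(parity).decode(received)
    return bytes(decoded)

# Round trip with a short burst error, as on a wireless channel.
payload = b'\xff\xd8\xff\xe0' + b'JPEG entropy-coded payload' * 8
coded = bytearray(protect_jpeg(payload))
coded[10:14] = b'\x00\x00\x00\x00'   # simulate a 4-byte error burst
assert recover_jpeg(bytes(coded)) == payload
```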

1 citation


Network Information
Related Topics (5)

Topic | Papers | Citations | Relatedness
Image segmentation | 79.6K | 1.8M | 82% related
Feature (computer vision) | 128.2K | 1.7M | 82% related
Feature extraction | 111.8K | 2.1M | 82% related
Image processing | 229.9K | 3.5M | 80% related
Convolutional neural network | 74.7K | 2M | 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year | Papers
2023 | 21
2022 | 40
2021 | 5
2020 | 2
2019 | 8
2018 | 15