Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as .jls.


Papers
Proceedings ArticleDOI
13 Jul 2014
TL;DR: Tests demonstrate that the compression ratios and compression speed achieved with this approach can be comparable to, or better than, those of lossless proprietary JPEG variants and other image formats (e.g. PNG, TIFF).
Abstract: A DEM can be represented as an image, except that it contains a single channel of information in various shades of grey, and it can be compressed in a lossy or lossless manner with existing image compression protocols. Compression reduces memory requirements and transmission time over digital links, while maintaining the integrity of the data as required. In this context, this paper investigates an alternative image-pyramid approach to lossless DEM compression, referred to as Pyramid Lossless Differential Coding (PLDC). The effect of PLDC on floating-point elevation values for 16-bit DEMs of dissimilar terrain characteristics is investigated. Tests demonstrate that the compression ratios and compression speed achieved with this approach can be comparable to, or better than, those of lossless proprietary JPEG variants and other image formats (e.g. PNG, TIFF).
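The pyramid-differential idea can be sketched in a few lines: build a coarse-to-fine pyramid by downsampling, predict each finer level from the level above it, and keep only the integer prediction residuals. This is a minimal, hypothetical sketch (nearest-neighbour prediction, no entropy coding of the residuals), not the paper's exact PLDC:

```python
import numpy as np

def pldc_encode(dem):
    """Split a 2-D elevation grid into a tiny base level plus per-level
    integer prediction residuals (differences)."""
    residuals, cur = [], dem.astype(np.int64)
    while min(cur.shape) > 1:
        coarse = cur[::2, ::2].copy()                    # downsample by 2
        pred = np.repeat(np.repeat(coarse, 2, 0), 2, 1)  # nearest-neighbour upsample
        residuals.append(cur - pred[:cur.shape[0], :cur.shape[1]])
        cur = coarse
    return cur, residuals[::-1]     # base level + coarse-to-fine residual stack

def pldc_decode(base, residuals):
    cur = base
    for res in residuals:
        pred = np.repeat(np.repeat(cur, 2, 0), 2, 1)
        cur = pred[:res.shape[0], :res.shape[1]] + res   # exact integer inverse
    return cur
```

Because the residuals are exact integer differences, decoding reproduces the input bit for bit; in a real codec the residuals, which cluster near zero, would additionally be entropy-coded.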

2 citations

Journal ArticleDOI
Yu-Ping Sui1, Cheng-Yu Yang1, Yanjun Liu1, Jun Wang1, Zhonghui Wei1, Xin He1 
TL;DR: A simple and adaptive lossless compression algorithm for remote sensing images, combining an integer wavelet transform with the Rice entropy coder; the algorithm is adaptive and produces independent data packets.
Abstract: A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, combining an integer wavelet transform with the Rice entropy coder. By analyzing the probability distribution of the integer wavelet transform coefficients and the characteristics of the Rice entropy coder, the high-frequency and low-frequency sub-bands are treated separately. High-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding; the role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of the approach is about two, close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it is adaptive and produces independent data packets, so it is well suited to spaceborne lossless compression applications.
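The Rice entropy coder mentioned above is a Golomb code with a power-of-two parameter: each value is split into a unary-coded quotient and a k-bit binary remainder. A minimal sketch, using bit lists instead of a packed bitstream; the zigzag mapping of signed residuals is an illustrative choice, not necessarily the paper's:

```python
def rice_encode(values, k):
    """Golomb-Rice code: unary quotient + k-bit binary remainder."""
    bits = []
    for v in values:
        u = 2 * v if v >= 0 else -2 * v - 1     # zigzag: signed -> non-negative
        q, r = u >> k, u & ((1 << k) - 1)
        bits += [1] * q + [0]                   # quotient in unary, 0-terminated
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    values, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos]:                        # read the unary quotient
            q, pos = q + 1, pos + 1
        pos += 1                                # skip the terminating 0
        r = 0
        for _ in range(k):                      # read the k remainder bits
            r, pos = (r << 1) | bits[pos], pos + 1
        u = (q << k) | r
        values.append(u // 2 if u % 2 == 0 else -(u + 1) // 2)  # undo zigzag
    return values
```

The coder is adaptive in the sense that k can be re-estimated per block from the data's magnitude, and each block decodes without external state, which is what makes independent data packets possible.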

2 citations

01 Jan 2011
TL;DR: A novel VLSI architecture for the node of a wireless image sensor network is proposed; it is composed of a general-purpose processor and several dedicated hardware accelerators for image processing and wireless communication.
Abstract: This paper proposes a novel VLSI architecture for the node of a wireless image sensor network. The architecture aims at SoC (System on a Chip) implementation and is composed of a general-purpose embedded processor and several dedicated hardware accelerators for image processing and wireless communication. The hardware-implemented Image Processing Unit (IPU) adopts an image processing approach that includes Bayer Color Filter Array (CFA) pre-processing and lossless JPEG compression. The IPU can process 5 frames/s (VGA full-color resolution) under a 16 MHz system clock, reaching a compression rate of 2.6 to 4.7 bits/pixel with a PSNR larger than 46.3 dB. The hardware-implemented Wireless Communication Unit (WCU) executes compute-intensive and timing-critical tasks of the IEEE 802.15.4 Media Access Control (MAC) layer, achieving high performance and low power consumption on wireless operations compared with a software implementation. Furthermore, low-power design techniques are employed to extend battery life, resulting in a 45 mW maximum system power consumption when the system is in full working mode (i.e. processor, IPU and WCU active simultaneously). The proposed architecture has been prototyped on an FPGA system and fabricated in a 0.18 um CMOS process.
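The lossless JPEG compression used in such an IPU builds on JPEG's standard spatial predictors: each pixel is predicted from its left (a), above (b) and above-left (c) neighbours, and only the prediction residual is entropy-coded. The seven standard predictor modes can be written as (the paper's exact hardware details are not given here, but the modes themselves come from the JPEG standard):

```python
def llj_predict(mode, a, b, c):
    """The seven predictor modes of lossless JPEG.
    a = left, b = above, c = above-left neighbour of the current pixel."""
    return {1: a,
            2: b,
            3: c,
            4: a + b - c,            # "plane" predictor
            5: a + ((b - c) >> 1),
            6: b + ((a - c) >> 1),
            7: (a + b) >> 1}[mode]
```

The encoder transmits residual = pixel - prediction; since the decoder has already reconstructed a, b and c by the time it reaches the current pixel, it can invert the step exactly. (Edge pixels use special rules in the standard, omitted here.)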

2 citations

Proceedings Article
01 Jan 1994
TL;DR: An adaptive coding scheme based on integer block codes is proposed to exploit the correlation among neighbouring prediction errors; it gives very good compression performance, comparable with or even better than Huffman and arithmetic coding.
Abstract: Images can be considered two-dimensional Markov fields in which neighbouring pixels are highly correlated. The prediction approaches recommended by JPEG for lossless image compression break the two-dimensional correlation of the image data and transform it into a rather loosely correlated error matrix upon which coding is conducted. In practice, the prediction errors still exhibit some correlation due to the local correlation of image data. In this paper, an adaptive coding scheme based on integer block codes is proposed to exploit the correlation among neighbouring prediction errors. The scheme encodes each prediction error into a binary representation whose length is estimated from the neighbouring prediction errors. When the estimated length is too short to represent the error, a specially designed overhead is added to guarantee lossless decoding. Apart from its simplicity and adaptivity, the scheme gives very good compression performance, comparable with or even better than Huffman and arithmetic coding.
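One way to realize this idea at the symbol level: guess each error's code length from the previous (neighbouring) error, and emit an all-ones escape followed by the full value whenever the guess is too short. The single-neighbour context and the escape design are illustrative assumptions here, not the paper's exact block code:

```python
def bits_needed(u):
    return max(u.bit_length(), 1)

def encode_errors(errors):
    """Symbol-level sketch: in a real bitstream each emitted entry would
    occupy bits_needed(prev) bits, and the escaped value a fixed full width."""
    out, prev = [], 1
    for e in errors:
        u = 2 * e if e >= 0 else -2 * e - 1   # zigzag: signed -> non-negative
        k = bits_needed(prev)                 # length estimated from the neighbour
        if u < (1 << k) - 1:
            out.append(u)                     # fits in the estimated k bits
        else:
            out.append((1 << k) - 1)          # escape overhead (all-ones code)
            out.append(u)                     # full-width fallback value
        prev = u
    return out

def decode_errors(codes):
    out, prev, i = [], 1, 0
    while i < len(codes):
        k = bits_needed(prev)                 # same estimate as the encoder
        u = codes[i]; i += 1
        if u == (1 << k) - 1:                 # escape seen: read the full value
            u = codes[i]; i += 1
        out.append(u // 2 if u % 2 == 0 else -(u + 1) // 2)  # undo zigzag
        prev = u
    return out
```

Because the decoder recomputes the same length estimate from already-decoded neighbours, no code lengths need to be transmitted, which is what keeps the overhead small when the estimate is usually right.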

2 citations

Journal ArticleDOI
TL;DR: This work presents JParEnt, a new approach to parallel entropy decoding for JPEG decompression on heterogeneous multicores, and introduces a dynamic workload partitioning scheme to account for GPUs of low compute power relative to the CPU.
Abstract: The JPEG format employs Huffman codes to compress the entropy data of an image. Huffman codewords are of variable length, which makes parallel entropy decoding a difficult problem: to determine the start position of a codeword in the bitstream, the previous codeword must be decoded first. We present JParEnt, a new approach to parallel entropy decoding for JPEG decompression on heterogeneous multicores. JParEnt conducts JPEG decompression in two steps: (1) an efficient sequential scan of the entropy data on the CPU to determine the start positions (boundaries) of coefficient blocks in the bitstream, followed by (2) a parallel entropy decoding step on the graphics processing unit (GPU). The block boundary scan constitutes a reinterpretation of the Huffman-coded entropy data to determine codeword boundaries in the bitstream. We introduce a dynamic workload partitioning scheme to account for GPUs of low compute power relative to the CPU; this configuration has become common with the advent of SoCs with integrated graphics processors (IGPs). We leverage additional parallelism through pipelined execution across CPU and GPU. For systems providing a unified address space between CPU and GPU, we employ zero-copy to completely eliminate the data transfer overhead. Our experimental evaluation of JParEnt was conducted on six heterogeneous multicore systems: one server and two desktops with dedicated GPUs, one desktop with an IGP, and two embedded systems. For a selection of more than 1000 JPEG images, JParEnt outperforms the SIMD implementation of the libjpeg-turbo library by up to a factor of 4.3×, and the previously fastest JPEG decompression method for heterogeneous multicores by up to a factor of 2.2×. JParEnt's entropy data scan consumes 45% of the entropy decoding time of libjpeg-turbo on average. Given this new ratio for the sequential part of JPEG decompression, JParEnt achieves up to 97% of the maximum attainable speedup (95% on average). On the IGP-based desktop platform, JParEnt achieves energy savings of up to 45% compared to libjpeg-turbo's SIMD implementation.
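JParEnt's two-step structure can be illustrated with a toy prefix code standing in for JPEG's Huffman tables: a cheap sequential pass only tracks where codewords end so it can record block boundaries, after which every segment can be decoded independently (the data-parallel GPU step). A hypothetical sketch:

```python
# Toy prefix code standing in for JPEG's Huffman tables.
CODE = {"0": "a", "10": "b", "110": "c", "111": "d"}

def scan_boundaries(bits, block_len):
    """Step 1 (sequential, CPU): walk the bitstream once, tracking only where
    codewords end, and record the bit offset at which each block starts."""
    boundaries, pos, symbols = [0], 0, 0
    while pos < len(bits):
        cw = ""
        while cw not in CODE:                 # extend until a codeword matches
            cw += bits[pos]; pos += 1
        symbols += 1
        if symbols % block_len == 0 and pos < len(bits):
            boundaries.append(pos)
    return boundaries

def decode_segment(seg):
    """Step 2 (parallelizable, GPU): every segment starts on a codeword
    boundary, so segments decode independently of one another."""
    out, cw = [], ""
    for bit in seg:
        cw += bit
        if cw in CODE:
            out.append(CODE[cw]); cw = ""
    return out

def parallel_decode(bits, block_len):
    bounds = scan_boundaries(bits, block_len) + [len(bits)]
    segments = (bits[s:e] for s, e in zip(bounds, bounds[1:]))
    # map() stands in for the data-parallel step across GPU threads
    return [sym for seg in map(decode_segment, segments) for sym in seg]
```

The scan is cheaper than full decoding because it never materializes coefficient values, only codeword lengths, which matches the paper's observation that the sequential part shrinks to a fraction of the total entropy decoding time.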

2 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15