Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
H. Ito, Koichi Magai, R. Fujii, M. Suzuki
24 Oct 2004
TL;DR: An embedding technique that uses different quantization vectors for the watermarking and the JPEG compression is introduced, to avoid the limitation on compressed picture quality set by the restriction on quantization step sizes for DCT coefficients.
Abstract: A watermarking scheme for JPEG image authentication is proposed. It is a direct extension of Wong's algorithm to JPEG-coded images, where the signature is embedded at the end of the scanned DCT coefficients instead of in the LSBs of raw pixel values. We address the problem that the watermark disappears after integer rounding in JPEG decompression and show that imposing a restriction on the quantization step sizes for DCT coefficients solves this problem. To avoid the limitation on compressed picture quality set by this restriction, we introduce an embedding technique that uses different quantization vectors for the watermarking and the JPEG compression. Simulation results verify the proposed scheme.

1 citation
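
The embedding idea above can be illustrated with a short sketch: force the least-significant bit of the final zig-zag-scanned, quantized DCT coefficient of an 8x8 block to a signature bit. This is a minimal illustration in Python/NumPy, not the authors' exact algorithm; the helper names, the fixed quantization step, and the choice of the (7, 7) coefficient are assumptions, and, as the abstract notes, integer rounding in decompression can destroy the bit unless the quantization steps are restricted.

```python
# Minimal sketch (assumed helpers, not the paper's algorithm): embed a
# signature bit in the LSB of the last zig-zag-scanned quantized DCT
# coefficient of an 8x8 block.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block, bit, q_step=16.0):
    """Quantize the block, set the LSB of the highest-frequency coefficient
    to `bit`, and reconstruct the watermarked pixel block."""
    coeffs = dct2(block.astype(np.float64) - 128.0)
    q = np.round(coeffs / q_step).astype(np.int64)
    q[7, 7] = (q[7, 7] & ~1) | bit        # last coefficient in zig-zag order
    pixels = idct2(q * q_step) + 128.0
    # NOTE: rounding back to 8-bit pixels may flip the bit -- this is the
    # problem the paper addresses by restricting the quantization step sizes.
    return np.clip(np.round(pixels), 0, 255).astype(np.uint8)

def extract_bit(block, q_step=16.0):
    coeffs = dct2(block.astype(np.float64) - 128.0)
    return int(np.round(coeffs[7, 7] / q_step)) & 1
```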

Journal Article
TL;DR: The experimental results showed that the proposed wavelet-domain semi-fragile watermarking scheme offers exact localization of malicious tampering, large embedding capacity, good robustness to JPEG compression, and good protection of key characteristics used for disease recognition and diagnosis, such as color, texture, and image detail.
Abstract: A wavelet-domain semi-fragile watermarking scheme for color plant-disease image authentication is presented, based on the facts that image detail is preserved by JPEG compression and that embedding the watermark in the green component is more robust against content-preserving image processing. The experimental results showed that the proposed scheme offers exact localization of malicious tampering, large embedding capacity, good robustness to JPEG compression, and good protection of key characteristics used for disease recognition and diagnosis, such as color, texture, and image detail.

1 citation
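
As a rough sketch of the general idea (not the paper's scheme), watermark bits can be embedded into wavelet-domain detail coefficients of the green channel. The Haar wavelet, the quantization-index-modulation rule, and the helper names below are assumptions for illustration; the paper's actual semi-fragile embedding and tamper-localization logic are not reproduced.

```python
# Illustrative sketch (assumed details): embed bits in the horizontal-detail
# wavelet coefficients of the green channel via quantization-index modulation.
import numpy as np
import pywt

def embed_green_watermark(rgb, bits, delta=8.0):
    green = rgb[:, :, 1].astype(np.float64)
    cA, (cH, cV, cD) = pywt.dwt2(green, 'haar')
    flat = cH.ravel()                      # view into cH
    n = min(len(bits), flat.size)
    # Snap each coefficient to an even (bit 0) or odd (bit 1) multiple of delta.
    flat[:n] = (2.0 * np.round(flat[:n] / (2.0 * delta)) + np.asarray(bits[:n])) * delta
    marked = pywt.idwt2((cA, (cH, cV, cD)), 'haar')[:green.shape[0], :green.shape[1]]
    out = rgb.copy()
    out[:, :, 1] = np.clip(np.round(marked), 0, 255).astype(np.uint8)
    return out

def extract_green_watermark(rgb, n_bits, delta=8.0):
    _, (cH, _, _) = pywt.dwt2(rgb[:, :, 1].astype(np.float64), 'haar')
    return (np.round(cH.ravel()[:n_bits] / delta).astype(np.int64) & 1).tolist()
```

The rounding back to 8-bit pixels perturbs the coefficients slightly, which is what makes such an embedding semi-fragile rather than exactly invertible.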

Proceedings ArticleDOI
03 Sep 2015
TL;DR: This study evaluates the performance of several lossless grayscale image compression algorithms, such as CALIC; lossless compression guarantees full reconstruction of the original data without any distortion.
Abstract: Lossless data compression has been suggested for many space-science exploration mission applications, either to increase the science return or to reduce the requirements for on-board memory, station contact time, and data archival volume. A lossless compression technique guarantees full reconstruction of the original data without incurring any distortion in the process. In this study we evaluate the performance of several lossless grayscale image compression algorithms, such as CALIC.

1 citation
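
The kind of evaluation described above can be sketched as follows: compress a grayscale image losslessly, verify bit-exact reconstruction, and report the achieved rate in bits per pixel. Since CALIC has no standard Python binding, zlib and PNG serve here purely as stand-in lossless codecs; the function names are assumptions for illustration.

```python
# Sketch of a lossless-compression evaluation: exact reconstruction is
# asserted, and the rate is reported in bits per pixel (bpp).
import io
import zlib
import numpy as np
from PIL import Image

def evaluate_zlib(gray):
    raw = gray.tobytes()
    comp = zlib.compress(raw, level=9)
    assert zlib.decompress(comp) == raw          # lossless: no distortion at all
    return 8.0 * len(comp) / gray.size           # bits per pixel

def evaluate_png(gray):
    buf = io.BytesIO()
    Image.fromarray(gray, mode='L').save(buf, format='PNG', optimize=True)
    decoded = np.array(Image.open(io.BytesIO(buf.getvalue())))
    assert np.array_equal(decoded, gray)         # lossless: no distortion at all
    return 8.0 * buf.getbuffer().nbytes / gray.size

if __name__ == '__main__':
    img = np.random.default_rng(0).integers(0, 256, (512, 512), dtype=np.uint8)
    print(f"zlib: {evaluate_zlib(img):.2f} bpp   PNG: {evaluate_png(img):.2f} bpp")
```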

01 Jan 2016
TL;DR: The main aim of lossless Huffman coding with varying block and codebook sizes for image compression is to convert the image to a form better suited for human analysis.
Abstract: Images are a basic source of information in almost all scenarios, and their quality can degrade both visually and quantitatively. Image compression is a demanding and broad research area because high-quality images require large bandwidth and raw images need large memory space. In this paper, an image of equal width and height is read in MATLAB, M-dimensional vectors (blocks) are extracted from it, and a codebook of size N is designed for compression. The image is quantized and Huffman-coded, and a table-lookup decoder is designed to reconstruct the compressed image under eight different scenarios. Several enhancement techniques are applied to the lossless Huffman coding in the spatial domain, such as a Laplacian of Gaussian filter used to detect edges in the best-quality compressed image (scenario 8, block size 16, codebook size 50). Other enhancement techniques, such as pseudo-coloring, bilateral filtering, and watermarking, are also applied to the best-quality compressed image. Performance metrics (compression ratio, bit rate, PSNR, MSE, and SNR) are evaluated and analyzed for the reconstructed compressed images across scenarios with different block and codebook sizes. Finally, the execution time is measured to see how quickly the compressed image is computed in one of the best scenarios. The main aim of lossless Huffman coding with varying block and codebook sizes for image compression is to convert the image to a form better suited for human analysis.

1 citation
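
A rough sketch of the pipeline the abstract describes: split the image into blocks, quantize the blocks against a codebook, Huffman-code the codebook indices, and report bit rate and PSNR. The randomly sampled codebook and the helper names are assumptions for illustration, not the paper's MATLAB procedure.

```python
# Illustrative sketch (assumed details): block-based vector quantization with
# a codebook, Huffman coding of the indices, and rate/PSNR reporting.
import heapq
from collections import Counter
import numpy as np

def blockify(gray, m=4):
    """Split a grayscale image into non-overlapping m x m blocks, flattened."""
    h, w = (gray.shape[0] // m) * m, (gray.shape[1] // m) * m
    return gray[:h, :w].reshape(h // m, m, w // m, m).swapaxes(1, 2).reshape(-1, m * m)

def huffman_code_lengths(freq):
    """Per-symbol code lengths (bits) of a Huffman code for the given counts."""
    if len(freq) == 1:
        return {s: 1 for s in freq}
    heap = [(f, [s]) for s, f in freq.items()]
    heapq.heapify(heap)
    depth = {s: 0 for s in freq}
    while len(heap) > 1:
        f1, s1 = heapq.heappop(heap)
        f2, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            depth[s] += 1                 # each merge adds one bit to the code
        heapq.heappush(heap, (f1 + f2, s1 + s2))
    return depth

def psnr(a, b):
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (256, 256), dtype=np.uint8)
    blocks = blockify(img, m=4).astype(np.float64)
    codebook = blocks[rng.choice(len(blocks), size=50, replace=False)]  # crude codebook
    idx = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).argmin(1)
    counts = Counter(idx.tolist())
    lengths = huffman_code_lengths(counts)
    bits = sum(lengths[s] * c for s, c in counts.items())
    print(f"rate: {bits / img.size:.3f} bpp   PSNR: {psnr(blocks, codebook[idx]):.2f} dB")
```

In this sketch the Huffman stage itself is lossless; it is the quantization against the codebook that introduces the distortion measured by PSNR and MSE.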

01 Jan 2012
TL;DR: The paper describes the development of an accelerator for selected still-image compression algorithms using the hardware description language VHDL, and a successful implementation on a programmable device of a decompressor for still images saved in the JPEG standard ISO/IEC 10918-1 (1993), baseline mode.
Abstract: Image compression is one of the most important topics in industry, commerce, and scientific research. Image compression algorithms need to perform a large number of operations on a large amount of data. In the case of compression and decompression of still images, the time needed to process a single image is not critical. However, the assumption of this project was to build a solution that is fully parallel, sequential, and synchronous. The paper describes the development of an accelerator for selected still-image compression algorithms; its hardware implementation uses the hardware description language VHDL. The result of this work was a successful implementation, on a programmable device, of a decompressor for still images saved in the JPEG standard ISO/IEC 10918-1 (1993), baseline mode, which is the primary, fundamental, and mandatory mode of this standard. The modular design and its method of interconnection allow a continuous input data stream. Particular attention was paid to the selection and implementation of the two algorithms that, in the authors' opinion, are the most significant in this standard. The IDCT module uses an IDCT-SQ transform algorithm modified by the authors of this paper; it provides full pipelining by applying the same kind of arithmetic operations between stages. The Huffman decoding module proved to be the bottleneck of the whole design.

1 citation
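
For context, the inverse transform at the core of a baseline JPEG decoder like the one described is the separable 8x8 IDCT applied to each dequantized coefficient block. The sketch below is a plain floating-point reference in Python, not the authors' pipelined IDCT-SQ hardware design; the function names and the orthonormal matrix formulation are assumptions for illustration.

```python
# Reference sketch: separable 8x8 inverse DCT and the dequantize-and-
# inverse-transform step of a baseline JPEG decoder (floating point only).
import numpy as np

N = 8
u = np.arange(N)
# Orthonormal DCT-II matrix C: forward is coeffs = C @ block @ C.T,
# inverse is block = C.T @ coeffs @ C.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2.0 * u[None, :] + 1.0) * u[:, None] / (2.0 * N))
C[0, :] = np.sqrt(1.0 / N)

def idct2_block(coeffs):
    """Inverse 2D DCT of one 8x8 coefficient block (row/column separable)."""
    return C.T @ coeffs @ C

def decode_block(quantized, quant_table):
    """Dequantize one 8x8 block, inverse-transform it, and level-shift to [0, 255]."""
    pixels = idct2_block(quantized.astype(np.float64) * quant_table) + 128.0
    return np.clip(np.round(pixels), 0, 255).astype(np.uint8)
```

In a hardware design, the row and column passes of this separable transform are what get pipelined, whereas entropy (Huffman) decoding is inherently serial, which is consistent with it being the reported bottleneck.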


Network Information
Related Topics (5)
Image segmentation
79.6K papers, 1.8M citations
82% related
Feature (computer vision)
128.2K papers, 1.7M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
82% related
Image processing
229.9K papers, 3.5M citations
80% related
Convolutional neural network
74.7K papers, 2M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year	Papers
2023	21
2022	40
2021	5
2020	2
2019	8
2018	15