Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published on this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Journal Article
TL;DR: It is shown that a JPEG 2000 codec implemented with CUDA outperforms a CPU-based implementation, with the CUDA DWT achieving 27.7 frames/second on 4K digital cinema content.
Abstract: In this paper, we propose a CUDA implementation of the DWT for a JPEG 2000 codec. We show that the JPEG 2000 codec implemented with CUDA performs better than a CPU-based implementation: the DWT implemented with CUDA achieves 27.7 frames/second on 4K digital cinema content.

1 citation
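The paper's CUDA kernels are not reproduced in the abstract, so the following is a plain NumPy sketch of the reversible CDF 5/3 lifting DWT used by the JPEG 2000 lossless path; the function names and the even-size restriction are simplifications of mine, not the authors' implementation.

```python
import numpy as np

def dwt53_1d(x):
    """One level of the integer 5/3 lifting transform (even-length input)."""
    s = x[0::2].astype(np.int64)   # even samples -> low-pass branch
    d = x[1::2].astype(np.int64)   # odd samples  -> high-pass branch
    # Predict: d[n] -= floor((s[n] + s[n+1]) / 2), mirroring at the right edge.
    d -= (s + np.append(s[1:], s[-1])) >> 1
    # Update: s[n] += floor((d[n-1] + d[n] + 2) / 4), mirroring at the left edge.
    s += (np.insert(d[:-1], 0, d[0]) + d + 2) >> 2
    return s, d   # approximation and detail subbands

def dwt53_2d(img):
    """Apply the 1-D transform to every row, then to every column."""
    out = np.array(img, dtype=np.int64)
    for i in range(out.shape[0]):
        lo, hi = dwt53_1d(out[i])
        out[i] = np.concatenate([lo, hi])
    for j in range(out.shape[1]):
        lo, hi = dwt53_1d(out[:, j])
        out[:, j] = np.concatenate([lo, hi])
    return out
```

Each row pass (and then each column pass) is independent of the others, which is what makes the transform embarrassingly parallel and a natural fit for a thread-per-row CUDA mapping.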

Patent
28 Oct 1999
TL;DR: The patent further reduces the compressed file size by deciding a cut-off number in a first array of coefficients ordered by frequency, then nullifying the coefficients of that array whose positions exceed the cut-off number and whose values are less than a corresponding threshold.
Abstract: PROBLEM TO BE SOLVED: To further reduce a compressed file size by deciding a cut-off number in a first array of coefficients ordered by frequency and nullifying the coefficients of that array whose positions exceed the cut-off number and whose values are less than a corresponding threshold. SOLUTION: The compressed bit stream of a current JPEG file 17a is fed to a Huffman decoder 19, which reproduces 8×8 blocks of quantized DCT coefficients. The coefficient blocks are passed to a limited-file-size JPEG coder (FSBJT) 31, where they are encoded. A first pass is made over the data; when the cut-off number is defined to be, e.g., 61, the 61st, 62nd, and 63rd coefficients are each compared with a threshold value to generate an array of element values that can be retained for compression purposes. After the DCT data are coded, the result is given to a Huffman encoder 16, which generates a new JPEG file 17b.

1 citation
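Read from the abstract alone, the cut-off rule can be sketched as below; the function name, the random test block, and the flat threshold table are illustrative assumptions, not the patent's actual tables.

```python
import numpy as np

def truncate_block(zigzag_coeffs, cutoff, thresholds):
    """Zero quantized DCT coefficients that sit at or beyond the cut-off
    index (in zigzag/frequency order) and whose magnitude falls below the
    corresponding threshold."""
    out = zigzag_coeffs.copy()
    pos = np.arange(out.size)
    kill = (pos >= cutoff) & (np.abs(out) < thresholds)
    out[kill] = 0   # longer zero runs shrink the re-Huffman-coded file
    return out

# With cut-off 61, only positions 61, 62, and 63 of the 64-entry
# zigzag array are candidates for nullification.
block = np.random.randint(-8, 9, size=64)
smaller = truncate_block(block, cutoff=61, thresholds=np.full(64, 3))
```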

Journal Article
TL;DR: A good error criterion for fidelity control is introduced, and on that basis a low-complexity, context-based, near-lossless compression algorithm for biomedical signals is developed; it not only ensures high signal fidelity but also yields better compression results than lossless compression does.
Abstract: Compression of biomedical signals is widely needed in the clinic. However, many doctors consider lossy compression techniques inapplicable because they may destroy diagnostic information, so lossless compression seems to be the only choice. This is not the case. First, error and noise are added to the signal during data acquisition, so even when the data are compressed losslessly, the whole process is still lossy. Second, the performance of lossless compression is limited, whereas lossy compression can yield a much higher compression ratio. Hence lossy compression is applicable to biomedical signals; the key issue is how to ensure high signal fidelity. Conventional lossy compression methods do not serve this purpose, so we concentrate on a special lossy technique called near-lossless compression. We first introduce a good error criterion for fidelity control; then, based on this criterion, we develop a low-complexity, context-based, near-lossless compression algorithm for biomedical signals. Experiments show that our algorithm not only ensures high signal fidelity but also yields better compression results than lossless compression does. Finally, we give our insights into future research on near-lossless compression.

1 citation
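The abstract does not spell out the paper's error criterion, so this sketch falls back on the common near-lossless guarantee (as in JPEG-LS): every reconstructed sample is within delta of the original. The previous-sample predictor is a deliberate oversimplification of the paper's context model.

```python
import numpy as np

def near_lossless_encode(x, delta):
    """Quantize prediction residuals so |x - decode(q)| <= delta everywhere."""
    step = 2 * delta + 1
    q = np.empty_like(x)
    prev = 0
    for i, sample in enumerate(x):
        e = sample - prev                      # residual vs. previous reconstruction
        q[i] = np.sign(e) * ((abs(e) + delta) // step)
        prev = prev + q[i] * step              # track the decoder's reconstruction
    return q

def near_lossless_decode(q, delta):
    step = 2 * delta + 1
    x = np.empty_like(q)
    prev = 0
    for i, qi in enumerate(q):
        prev = prev + qi * step
        x[i] = prev
    return x

signal = np.array([10, 12, 15, 14, 9, 7, 8], dtype=np.int64)
decoded = near_lossless_decode(near_lossless_encode(signal, 2), 2)
assert np.max(np.abs(signal - decoded)) <= 2   # the per-sample bound holds
```

Because the encoder predicts from its own reconstruction (closed-loop DPCM), quantization error cannot accumulate, and the per-sample bound holds over the whole signal.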

Journal Article
TL;DR: A novel watermarking algorithm is proposed that addresses the trade-off between invisibility and robustness; it survives JPEG compression, noise, filtering, clipping, and other attacks, making it practical.
Abstract: In this paper, a novel watermarking algorithm is proposed that addresses the contradiction between invisibility and robustness. First, a pair of positions is selected considering the characteristics of the HVS and the principles of JPEG compression; then the watermark is embedded using the relationship between the coefficients at the selected positions; finally, the coefficients are enhanced adaptively. The scheme improves the invisibility and robustness of the watermark, and it also supports blind detection. The relationships between embedding intensity, invisibility, and resistance to JPEG compression are analysed. Theory and experiments show that the algorithm can survive JPEG compression, noise, filtering, clipping, and other attacks, making it practical.

1 citation
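The abstract describes embedding via the relationship between a selected pair of coefficients. A minimal sketch of that idea follows; the coefficient positions, the margin, and the per-block 2-D DCT are assumptions on my part (the paper derives its positions from HVS characteristics, which are not reproduced here).

```python
import numpy as np
from scipy.fft import dctn, idctn

P1, P2 = (2, 3), (3, 2)   # assumed mid-frequency pair; the paper picks via HVS
STRENGTH = 4.0            # assumed embedding margin (robustness vs. invisibility)

def embed_bit(block, bit):
    """Hide one bit in an 8x8 block by ordering two DCT coefficients."""
    c = dctn(block.astype(float), norm="ortho")
    m = (c[P1] + c[P2]) / 2
    # bit 1 -> force c[P1] > c[P2]; bit 0 -> the reverse, with a safety margin.
    c[P1], c[P2] = (m + STRENGTH, m - STRENGTH) if bit else (m - STRENGTH, m + STRENGTH)
    return idctn(c, norm="ortho")

def extract_bit(block):
    """Blind detection: compare the pair, no original image required."""
    c = dctn(block.astype(float), norm="ortho")
    return int(c[P1] > c[P2])
```

The margin is the knob the paper analyses: a larger STRENGTH survives heavier JPEG requantization but is more visible, which is exactly the invisibility/robustness trade-off.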

01 Jan 2012
TL;DR: The proposed Hybrid Quantization Method aims to overcome the limitations of the standard JPEG method and to find the best possible trade-off between compression ratio and compressed-image quality (MSE and PSNR).
Abstract: This paper proposes a "Hybrid Quantization Method" for the JPEG image compression standard. Its purpose is to overcome limitations of the standard JPEG method. In the standard JPEG process, a single quantization matrix is used to compress the entire image: a coarser (higher-valued) quantization matrix gives a better compression ratio but poorer image quality, while a finer (lower-valued) matrix gives the best image quality but a lower compression ratio. Different images have different frequency contents, so if the quantization matrix is chosen based on the frequency content of the input image, image quality can be improved at almost the same compression ratio. The proposed Hybrid Quantization Method aims to find the best possible trade-off between compression ratio (image size) and compressed-image quality (MSE and PSNR).

1 citation
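As a rough illustration of choosing the quantization matrix from an image's frequency content, the sketch below scales the standard JPEG luminance table; the energy measure and the scale factors are illustrative guesses, not the paper's method.

```python
import numpy as np
from scipy.fft import dctn

# Standard JPEG luminance quantization table (Annex K of the JPEG spec).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def choose_quant_matrix(img):
    """Scale Q50 down for busy (high-frequency) images to preserve detail,
    and up for smooth images where coarser steps are less visible."""
    c = np.abs(dctn(img.astype(float), norm="ortho"))
    h, w = img.shape
    high = c[h // 4:, w // 4:].sum()          # crude high-frequency energy
    ratio = high / (c.sum() + 1e-9)
    scale = 0.5 if ratio > 0.10 else 1.5 if ratio < 0.02 else 1.0
    return np.clip(np.round(Q50 * scale), 1, 255).astype(np.uint8)

q = choose_quant_matrix(np.random.rand(64, 64) * 255)
```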


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   21
2022   40
2021   5
2020   2
2019   8
2018   15