Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published on this topic, receiving 51,110 citations. The topic is also known as .jls.


Papers
Proceedings ArticleDOI
01 May 1994
TL;DR: The overall compression performance of the Rice algorithm implementations exceeds that of all algorithms tested, including arithmetic coding, UNIX compress, UNIX pack, and gzip.
Abstract: This paper describes two VLSI implementations that provide an effective solution to compressing medical image data in real time. The implementations employ a lossless data compression algorithm, known as the Rice algorithm. The first chip set was fabricated in 1991. The encoder can compress at 20 Msamples/sec and the decoder decompresses at the rate of 10 Msamples/sec. The chip set is available commercially. The second VLSI chip development is a recently fabricated encoder that provides improvements for coding low entropy data and incorporates features that simplify system integration. A new decoder is scheduled to be designed and fabricated in 1994. The performance of the compression chips on a suite of medical images has been simulated. The image suite includes CT, MR, angiographic images, and nuclear images. In general, the single-pass Rice algorithm compression performance exceeds that of two-pass, lossless, Huffman-based JPEG. The overall compression performance of the Rice algorithm implementations exceeds that of all algorithms tested including arithmetic coding, UNIX compress, UNIX pack, and gzip.

6 citations
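
The entropy coder at the heart of the Rice algorithm is Golomb-Rice coding of mapped prediction residuals. As a rough illustration of the per-sample operation, the sketch below implements basic Golomb-Rice encoding and decoding in Python; the residual mapping, the bit-string conventions and the fixed parameter k are illustrative assumptions and do not reflect the chip set's actual coding options or its adaptive parameter selection.

```python
# A minimal sketch of Golomb-Rice coding, assuming a simple zig-zag residual map
# and a fixed Rice parameter k; the hardware described in the paper selects its
# coding option adaptively, which is not reproduced here.

def zigzag_map(n: int) -> int:
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(n: int, k: int) -> str:
    """Encode a non-negative integer: unary quotient, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> tuple[int, str]:
    """Decode one codeword from the front of a bit string; return (value, rest)."""
    q = bits.index("0")                 # number of leading 1s = quotient
    rest = bits[q + 1:]
    r = int(rest[:k], 2) if k else 0
    return (q << k) | r, rest[k:]

if __name__ == "__main__":
    residuals = [0, -1, 3, -7, 2]       # toy prediction residuals
    k = 2
    stream = "".join(rice_encode(zigzag_map(v), k) for v in residuals)
    decoded, rest = [], stream
    for _ in residuals:
        v, rest = rice_decode(rest, k)
        decoded.append(v)
    print(stream, decoded)              # decoded mapped values: [0, 1, 6, 13, 4]
```

In practice a Rice coder picks the parameter per block of samples so that the expected code length is small, which is one reason the single-pass scheme can compete with two-pass Huffman-based JPEG.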

DOI
21 Jan 2008
TL;DR: In this paper, the authors describe a technique to detect image tampering using two different methods: the first is based on the Bayer interpolation process and its consequences in the Fourier domain, and the second on artifacts of JPEG compression observable in the Fourier domain.
Abstract: In this paper, we describe a technique to detect image tampering using two different methods. The first is based on the Bayer interpolation process and its consequences in the Fourier domain. The second uses artifacts of JPEG compression, more particularly the JPEG frame observable in the Fourier domain.

6 citations
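
Both cues in this paper are periodic and therefore show up as peaks in the image spectrum: Bayer (CFA) interpolation introduces a period-2 correlation pattern, and JPEG compression leaves a period-8 block grid. The sketch below is not the authors' detector; it only illustrates how such peaks can be probed with a high-pass residual and a 2-D FFT. The residual filter and the exact frequency bins probed are assumptions for illustration.

```python
# A minimal sketch, assuming a 4-neighbour high-pass residual and that the
# period-2 CFA trace and period-8 JPEG block grid produce energy at the
# corresponding FFT bins; thresholds and localisation are left out.

import numpy as np

def grid_peaks(gray: np.ndarray) -> dict:
    """Spectral energy at bins tied to the period-2 CFA and period-8 JPEG grids."""
    residual = gray - (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
                       np.roll(gray, 1, 1) + np.roll(gray, -1, 1)) / 4.0
    spec = np.abs(np.fft.fft2(residual))
    h, w = spec.shape
    return {
        "cfa_peak": float(spec[h // 2, w // 2]),                  # period-2 grid -> Nyquist bin
        "jpeg_peak": float(spec[h // 8, 0] + spec[0, w // 8]),    # period-8 grid -> 1/8-cycle bins
        "background": float(np.median(spec)),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demosaiced = rng.normal(size=(256, 256))   # stand-in for a demosaiced luminance channel
    print(grid_peaks(demosaiced))
```

A tampered (e.g. resampled or re-interpolated) region would weaken or displace these peaks when the analysis is run block-wise, which is the kind of inconsistency the paper exploits.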

Proceedings ArticleDOI
28 Aug 2005
TL;DR: The main contribution of the research is higher compression ratios than standard techniques in a lossless scenario, which will be of great importance for data management in a hospital and for teleradiology.
Abstract: Medical images are very important for diagnostics and therapy. However, digital imaging generates large amounts of data which need to be compressed, without loss of relevant information, to economize storage space and allow speedy transfer. In this research three techniques are implemented for medical image compression, which provide high compression ratios with no loss of diagnostic quality. Different image modalities are employed in the experiments, including X-rays, MRI, CT scans, ultrasounds and angiograms. The proposed schemes are evaluated by comparison with existing standard compression techniques such as JPEG, lossless JPEG2000, LOCO-I and Huffman coding. In a medical image only a small region is diagnostically relevant while the remaining image is much less important; this region is called the Region of Interest (ROI). The first approach compresses the ROI strictly losslessly and the remaining regions of the image with some loss. In the second approach an image is first compressed at a high compression ratio but with loss, and the difference image is then compressed losslessly. The difference image contains less data and is compressed more compactly than the original. The third approach exploits inter-image redundancy between images of a similar modality and the same part of the human body. More similarity means less entropy, which leads to higher compression performance. The overall compression ratio is a combination of the lossy and lossless compression ratios. The resulting compression is not only strictly lossless, but also expected to yield a high compression ratio. These techniques are based on a self-designed Neural Network Vector Quantizer (NNVQ) and Huffman coding, whose combination is used to achieve the lossless effect. These are spatial-domain techniques and do not require a frequency-domain transformation. An overall compression ratio of 6-14 is obtained for images with the proposed methods, whereas compressing the same images with lossless JPEG2000 and Huffman coding yields a compression ratio of at most 2. The main contribution of the research is higher compression ratios than standard techniques in a lossless scenario. This result will be of great importance for data management in a hospital and for teleradiology.

6 citations
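
The second approach described above (a lossy base layer plus a losslessly coded difference image) can be illustrated with off-the-shelf tools. The sketch below substitutes standard JPEG for the authors' NNVQ stage and zlib for their Huffman coder purely for illustration; the reconstruction is still bit-exact because the residual is stored losslessly.

```python
# A minimal sketch of lossy-plus-residual coding, assuming an 8-bit grayscale
# image, Pillow's JPEG codec as the lossy stage and zlib as the lossless stage;
# the paper's actual NNVQ/Huffman pipeline is not reproduced here.

import io
import zlib
import numpy as np
from PIL import Image

def two_stage_compress(img: np.ndarray, quality: int = 30) -> tuple[bytes, bytes]:
    """Return (lossy JPEG bytes, losslessly compressed residual bytes)."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    lossy = np.asarray(Image.open(io.BytesIO(buf.getvalue())), dtype=np.int16)
    residual = img.astype(np.int16) - lossy            # low-entropy difference image
    return buf.getvalue(), zlib.compress(residual.tobytes())

def two_stage_decompress(jpeg_bytes: bytes, residual_bytes: bytes, shape) -> np.ndarray:
    lossy = np.asarray(Image.open(io.BytesIO(jpeg_bytes)), dtype=np.int16)
    residual = np.frombuffer(zlib.decompress(residual_bytes), dtype=np.int16).reshape(shape)
    return (lossy + residual).astype(np.uint8)          # bit-exact reconstruction

if __name__ == "__main__":
    original = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
    jpeg_bytes, res_bytes = two_stage_compress(original)
    assert np.array_equal(two_stage_decompress(jpeg_bytes, res_bytes, original.shape), original)
```

The overall ratio of such a scheme is governed by how much entropy the lossy stage removes from the residual, which is why the paper pairs a strong quantizer with an entropy coder tuned to the difference image.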

Journal ArticleDOI
TL;DR: The experimental results show that CWA has virtually no impact on the visual quality of the watermarked images, is highly sensitive to image modifications, and reduces the size of JPEG compressed-domain images by as much as 6.3%.
Abstract: This paper proposes a novel fragile watermarking algorithm, designated as the compression-watermarking algorithm (CWA), which inserts watermark information in a JPEG image by modifying the last nonzero coefficient in each discrete cosine transform (DCT) quantized block. The proposed algorithm not only provides an authentication capability, but also decreases the size of JPEG compressed-domain images. The experimental results show that CWA has virtually no impact on the visual quality of the watermarked images and is highly sensitive to image modifications. Furthermore, it is found that CWA reduces the size of watermarked image by as much as 6.3% when applied to the watermarking of standard JPEG test images. Therefore, CWA provides a feasible solution for image authentication and data reduction in DCT-based domains such as JPEG and MPEG-family coders/decoders.

6 citations
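
The core block-level operation in CWA is locating the last nonzero coefficient of each 8x8 quantized DCT block (in zig-zag scan order) and modifying it to carry watermark information. The sketch below embeds one bit per block with a parity rule; that embedding rule is an illustrative assumption, and the paper's exact modification, including how it also yields the reported size reduction, is not reproduced here.

```python
# A minimal sketch, assuming a zig-zag ordered, quantized 8x8 DCT block and a
# hypothetical parity-based embedding rule for the last nonzero coefficient.

import numpy as np

def embed_bit(zigzag_coeffs: np.ndarray, bit: int) -> np.ndarray:
    """Embed one bit into the last nonzero coefficient of a zig-zag ordered block."""
    coeffs = zigzag_coeffs.copy()
    nonzero = np.flatnonzero(coeffs)
    if nonzero.size == 0:
        return coeffs                          # nothing to modify in an all-zero block
    last = nonzero[-1]
    if (coeffs[last] & 1) != bit:              # force the coefficient's parity to the bit
        coeffs[last] += 1 if coeffs[last] > 0 else -1   # move away from zero to stay nonzero
    return coeffs

def extract_bit(zigzag_coeffs: np.ndarray) -> int:
    nonzero = np.flatnonzero(zigzag_coeffs)
    return int(zigzag_coeffs[nonzero[-1]] & 1) if nonzero.size else 0

if __name__ == "__main__":
    block = np.array([12, -5, 3, 0, 2, 0, 0, 0] + [0] * 56)   # toy quantized block
    marked = embed_bit(block, 1)
    print(extract_bit(marked))   # -> 1
```

Because the change is confined to one already-quantized coefficient per block, the visual impact is minimal, while any re-quantization or editing of a block is likely to disturb the embedded bit, which is what makes the watermark fragile.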

Proceedings ArticleDOI
18 Jun 1996
TL;DR: The performance of the proposed approach is comparable to that exhibited by JPEG lossless schemes while being better than Huffman, Lempel-Ziv and arithmetic coding.
Abstract: We propose a lossless image compression scheme using wavelet decomposition. Wavelet decomposition of an image f(x,y) at a resolution 2/sup j/ consists of an approximated image at a resolution 2/sup j-1/ and three detail images along the horizontal, vertical and diagonal directions. The approximated wavelet coefficients are encoded using a variable block size segmentation (VBSS) algorithm proposed by Ranganathan et al. (see IEEE Trans. on Image Proc., vol. 4, no. 10, pp. 1396-1406, 1995) and the detail signals are encoded using directional prediction and categorization similar to that in the VBSS algorithm. The residual error due to the finite precision arithmetic is encoded using adaptive arithmetic coding (AAC). The performance of the proposed approach is comparable to that exhibited by JPEG lossless schemes while being better than Huffman, Lempel-Ziv and arithmetic coding.

6 citations
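
The scheme's first step is a one-level wavelet decomposition into an approximation image and three directional detail images. The sketch below shows such a split using an integer Haar lifting step so that it is exactly reversible; the paper instead uses finite-precision wavelet filters and codes the resulting rounding residual with adaptive arithmetic coding, neither of which is reproduced here.

```python
# A minimal sketch of a one-level 2-D split into an approximation (LL) subband
# and three detail subbands, using an integer Haar lifting step as a stand-in
# for the paper's wavelet filters.

import numpy as np

def haar_split_1d(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Integer Haar lifting (S-transform) along the last axis: (approximation, detail)."""
    even, odd = x[..., ::2], x[..., 1::2]
    detail = odd - even
    approx = even + (detail >> 1)          # integer average (floor), exactly invertible
    return approx, detail

def haar_split_2d(img: np.ndarray):
    """One decomposition level: LL approximation plus LH, HL, HH detail images."""
    a_rows, d_rows = haar_split_1d(img.astype(np.int64))   # horizontal split
    ll, lh = (t.T for t in haar_split_1d(a_rows.T))        # vertical split of low band
    hl, hh = (t.T for t in haar_split_1d(d_rows.T))        # vertical split of high band
    return ll, lh, hl, hh

if __name__ == "__main__":
    img = np.random.default_rng(2).integers(0, 4096, (8, 8))   # e.g. 12-bit image data
    ll, lh, hl, hh = haar_split_2d(img)
    print(ll.shape, lh.shape, hl.shape, hh.shape)   # four half-resolution subbands
```

With an integer lifting transform the inverse reproduces the input exactly, whereas with finite-precision filters, as in the paper, losslessness is recovered by separately coding the rounding residual.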


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (82% related)
Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
Feature extraction: 111.8K papers, 2.1M citations (82% related)
Image processing: 229.9K papers, 3.5M citations (80% related)
Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15