scispace - formally typeset
Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as: .jls.
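For context, the core of lossless JPEG is simple predictive (DPCM) coding: each pixel is predicted from its causal neighbours using one of seven standard predictors, and only the residual is entropy-coded. A minimal sketch, with predictor numbering as in the JPEG standard (the flat 2-D lists and the zero boundary treatment here are simplified for illustration):

```python
def predict(a, b, c, mode):
    # a = left neighbour, b = above, c = above-left (JPEG predictor inputs)
    if mode == 1: return a
    if mode == 2: return b
    if mode == 3: return c
    if mode == 4: return a + b - c
    if mode == 5: return a + ((b - c) >> 1)
    if mode == 6: return b + ((a - c) >> 1)
    if mode == 7: return (a + b) >> 1
    raise ValueError(mode)

def encode(img, mode=4):
    """Return the residual plane; decoding it reproduces img exactly."""
    h, w = len(img), len(img[0])
    res = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x else 0
            b = img[y - 1][x] if y else 0
            c = img[y - 1][x - 1] if x and y else 0
            res[y][x] = img[y][x] - predict(a, b, c, mode)
    return res

def decode(res, mode=4):
    h, w = len(res), len(res[0])
    img = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x else 0
            b = img[y - 1][x] if y else 0
            c = img[y - 1][x - 1] if x and y else 0
            img[y][x] = res[y][x] + predict(a, b, c, mode)
    return img

sample = [[10, 12, 11], [13, 15, 14], [12, 16, 18]]
roundtrip = decode(encode(sample, mode=4), mode=4)   # bit-exact round trip
```

Because the decoder reconstructs each pixel from already-decoded neighbours using the same predictor, the round trip is exact, which is what "lossless" means here.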


Papers
Proceedings ArticleDOI
20 Oct 2009
TL;DR: In this scheme, an enhanced chaotic-key-based algorithm (ECKBA) is proposed for encryption and decryption, and lossless JPEG is used for compression of medical images.
Abstract: Illegal data access has become prevalent in wireless and general communication networks, so there is a need for secure transmission of medical images as telemedicine is increasingly used. Recently, the use of chaotic signals for secure data transmission has seen significant growth in developing chaotic encryption and decryption algorithms. Moreover, retaining the details of the medical image is particularly important for accurate diagnosis. Hence, in this paper, a secure scheme for medical image transmission is proposed: an enhanced chaotic-key-based algorithm (ECKBA) for encryption and decryption, with lossless JPEG for compression of the medical images. Further, a cryptanalysis of the proposed ECKBA is performed. The efficacy of the proposed scheme is illustrated through implementation results on a CT-scanned abdomen image of a kidney patient and on an MRI image of a patient with a brain tumor.
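The abstract does not give ECKBA's internals, so as a hedged illustration of the general chaotic-encryption idea it builds on, here is a minimal logistic-map stream cipher: the map's state is quantised to key bytes that are XORed with the data. The seed `x0` and parameter `r` are illustrative values, not the paper's:

```python
def logistic_keystream(x0, r, n):
    # iterate the logistic map x -> r*x*(1-x); chaotic for r close to 4
    x, ks = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) & 0xFF)   # quantise the state to one key byte
    return ks

def chaotic_xor(data, x0=0.3141592, r=3.99):   # x0, r: illustrative key values
    ks = logistic_keystream(x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

pixels = bytes([52, 55, 61, 66, 70, 61, 64, 73])   # toy "image" data
cipher = chaotic_xor(pixels)
restored = chaotic_xor(cipher)   # XOR with the same keystream decrypts
```

Since XOR is its own inverse, sender and receiver only need to share the key (x0, r) to regenerate the identical keystream; combined with lossless JPEG, the whole pipeline preserves the image bit-for-bit.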

2 citations

Proceedings Article
01 Jan 1998
TL;DR: This paper discusses several important discrepancies, including the evaluation of decorrelating performance, the implementation of the transform, and the criteria for choosing a transform, aiming at a better understanding of how linear transforms apply in the lossless coding scenario.
Abstract: Recent developments in the implementation of integer-to-integer transforms provide a new basis for transform-based lossless coding. Although it shares many features with popular transform-based lossy coding, there are also a few discrepancies between the two because of their different coding rules. In this paper we discuss several important discrepancies, including the evaluation of decorrelating performance, the implementation of the transform, and the criteria for choosing a transform. We aim at a better understanding of applying linear transforms in the lossless coding scenario.
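A concrete example of the integer-to-integer transforms the paper builds on is the S-transform (integer Haar), realised with lifting: the truncation in the forward average is exactly undone in the inverse, which is the property that makes lossless transform coding possible. A minimal sketch:

```python
def s_transform(a, b):
    # forward S-transform: truncated average and difference of a sample pair
    return (a + b) >> 1, a - b

def inv_s_transform(s, d):
    # a+b and a-b have the same parity, so the floor lost in the
    # average is fully determined by d and can be recovered exactly
    b = s - (d >> 1)
    return b + d, b
```

Unlike a floating-point transform, every input pair maps to an integer pair and back with no rounding error, so entropy coding the (s, d) plane is fully reversible.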

2 citations

Proceedings ArticleDOI
01 Sep 2012
TL;DR: This article presents a novel method for exploiting inter/intra-block redundancies for JPEG image coding and shows that the first-column (row) DCT coefficients of f can be predicted without incurring any loss from the remaining DCT coefficients of f and from the DCT coefficients of g (h), assuming that the DCT is reversible.
Abstract: This article presents a novel method for exploiting inter/intra-block redundancies for JPEG image coding. It first expands the given image by duplicating certain rows and columns so that if f is an 8-by-8 pixel block and g and h are the blocks to its immediate left and above, respectively, then the adjacent pixels at the common boundary of f and g and at the boundary of f with h match in the expanded image. We show that the first-column (row) DCT coefficients of f can be predicted without incurring any loss from the remaining DCT coefficients of f and from the DCT coefficients of g (h), assuming that the DCT is reversible. Our experiments show that, on average, we can save about 14.6% of the JPEG Huffman code bits simply by not coding the first row/column DCT coefficients.
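The prediction idea can be illustrated in one dimension with an orthonormal DCT: if the last pixel of block g equals the first pixel of block f (as the boundary-matching expansion arranges), that equality is one linear constraint on the coefficients, so the first coefficient of f can be solved exactly from its remaining coefficients and from g's coefficients. A hedged 1-D sketch on random data, not the paper's 2-D procedure:

```python
import numpy as np

def dct_mat(N):
    # orthonormal DCT-II matrix (row k = k-th cosine basis vector)
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

C = dct_mat(8)
rng = np.random.default_rng(1)
g = rng.integers(0, 256, 8).astype(float)   # left-neighbour block (1-D stand-in)
f = rng.integers(0, 256, 8).astype(float)
f[0] = g[-1]                                # boundary pixels made to match

F, G = C @ f, C @ g                         # forward transforms
g_end = C[:, -1] @ G                        # g's boundary pixel, from its coefficients
# the constraint f[0] = g_end is linear in F; solve it for the first coefficient:
F0_pred = (g_end - C[1:, 0] @ F[1:]) / C[0, 0]
```

Because the constraint determines F[0] exactly (up to floating-point precision here; the paper assumes a reversible integer DCT), that coefficient never needs to be transmitted.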

2 citations

Proceedings ArticleDOI
21 Jul 2015
TL;DR: Experimental results show that the proposed compression scheme provides a significant quality gain as compared with the original JPEG baseline coding method and another super-resolution directed down-sampling (SRDDS) based compression scheme.
Abstract: In this paper, we focus on the design of a new block-based image compression scheme using our proposed transform-domain downward conversion (TDDC). Applied directly to each 16×16 macro-block of pixels, this downward conversion is implemented through our proposed advanced padding technique so that a non-zero 8×8 coefficient block (thus down-sized) is generated only at the top-left corner in the transform domain, accompanied by zeros in the other 75% of positions. Consequently, a considerable bit-count saving can be achieved for the whole macro-block. Meanwhile, the 25% of pixels reserved during the TDDC may be directly reconstructed from the down-sized coefficient block, while the other 75% of pixels, not reserved during the TDDC, are reconstructed via interpolation. Finally, this TDDC-based compression is used in conjunction with the JPEG baseline coding method (i.e., a 1-out-of-2 selection) according to a rate-distortion-optimization (RDO) based criterion. Experimental results show that our proposed compression scheme provides a significant quality gain compared with the original JPEG baseline coding method and with another super-resolution-directed down-sampling (SRDDS) based compression scheme.
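The downward-conversion idea can be sketched in its simplest form: transform a 16×16 block, keep only the top-left 8×8 coefficients (rescaled for the orthonormal DCT), and inverse-transform with an 8×8 DCT to obtain the down-sized block. This is generic DCT-domain down-sampling and omits the paper's padding technique and RDO selection; a flat block is used so the result is easy to check:

```python
import numpy as np

def dct_mat(N):
    # orthonormal DCT-II matrix
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C

D16, D8 = dct_mat(16), dct_mat(8)

macro = np.full((16, 16), 100.0)   # flat macro-block, so the output is predictable
X = D16 @ macro @ D16.T            # 16x16 transform
X8 = 0.5 * X[:8, :8]               # keep top-left 8x8 coefficients; 1/sqrt(2) per axis
down = D8.T @ X8 @ D8              # inverse 8x8 transform -> down-sized pixel block
```

The 0.5 rescaling keeps intensities correct across block sizes: a flat block of value 100 comes out as an 8×8 block of value 100.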

2 citations

Proceedings ArticleDOI
01 Feb 2015
TL;DR: In this article, the authors propose a 2D-ADL scheme that incorporates directional spatial prediction into conventional lifting based on the 9/7 wavelet transform, forming a novel, efficient and flexible lifting structure with proposed scaling coefficients.
Abstract: Lifting is an efficient algorithm for implementing the discrete wavelet transform; it overcomes a drawback of the conventional wavelet transform, which does not provide a compact representation of edges that are not horizontal or vertical. The lifting scheme provides a general and flexible tool for the construction of wavelet decompositions and perfect-reconstruction filter banks, and it has been adopted in JPEG 2000. This paper follows that research line: an improved SPIHT based on adaptive coding is analyzed and tuned together with two-dimensional Adaptive Directional Lifting (2D-ADL) based on CDF 9/7, structured for lossy-to-lossless JPEG 2000 image coding. The proposed 2D-ADL scheme incorporates directional spatial prediction into the conventional lifting based on the 9/7 wavelet transform and forms a novel, efficient and flexible lifting structure with proposed scaling coefficients. To obtain better compression at image edges, an improved Set Partitioning In Hierarchical Trees (ASPIHT) algorithm, which gives priority to scanning coefficients surrounded by more significant coefficients, replaces the conventional SPIHT. The proposed 2D-ADL (CDF 9/7) scheme followed by the ASPIHT codec significantly reduces edge artifacts and ringing and outperforms the conventional 1-D lifting scheme followed by SPIHT by up to 12 dB, as reported.
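For reference, the reversible lifting structure that JPEG 2000 adopts for its lossless path is the LeGall 5/3 filter: one integer predict step and one integer update step, each exactly invertible. A simplified 1-D sketch; the edge handling here is a plain mirror rather than the standard's full symmetric extension:

```python
def fwd_53(x):
    # LeGall 5/3 lifting: predict odd samples from evens, then update evens
    s, d = list(x[0::2]), list(x[1::2])
    for i in range(len(d)):                               # predict step (integer)
        right = s[i + 1] if i + 1 < len(s) else s[i]      # mirrored edge
        d[i] -= (s[i] + right) >> 1
    for i in range(len(s)):                               # update step (integer)
        left = d[i - 1] if i > 0 else d[0]
        cur = d[i] if i < len(d) else d[-1]
        s[i] += (left + cur + 2) >> 2
    return s, d                                           # low-pass, high-pass

def inv_53(s, d):
    s, d = list(s), list(d)
    for i in range(len(s)):                               # undo update first
        left = d[i - 1] if i > 0 else d[0]
        cur = d[i] if i < len(d) else d[-1]
        s[i] -= (left + cur + 2) >> 2
    for i in range(len(d)):                               # then undo predict
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += (s[i] + right) >> 1
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d                               # re-interleave
    return x
```

Because the inverse applies the identical integer expressions in reverse order, the round trip is bit-exact, which is why this lifting structure supports the lossless end of lossy-to-lossless coding. (The 9/7 filter used for the lossy path has the same lifting form but irrational coefficients, so it is not reversible in integer arithmetic.)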

2 citations


Network Information
Related Topics (5)
- Image segmentation: 79.6K papers, 1.8M citations (82% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (82% related)
- Feature extraction: 111.8K papers, 2.1M citations (82% related)
- Image processing: 229.9K papers, 3.5M citations (80% related)
- Convolutional neural network: 74.7K papers, 2M citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year | Papers
2023 | 21
2022 | 40
2021 | 5
2020 | 2
2019 | 8
2018 | 15