Topic
Lossless JPEG
About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as Lossless JPEG and .jls.
Papers published on a yearly basis
Papers
09 Apr 2003
TL;DR: A lossless image compression method based on base switching transformation (BST) is proposed, together with a hierarchical scheme that effectively improves the compression ratio; in experiments the method proves superior.
Abstract: In this paper, we propose a lossless image compression method based on base switching transformation (BST). To reduce the difference between the minimum and maximum pixel values of each block used in BST, and thus the number of bits BST needs, the first step is to modify each pixel value of the original image by subtracting the value of a neighboring pixel from it; the rationale is that the values of two neighboring pixels are usually very close to each other. In addition, we provide a hierarchical concept for the BST method that effectively improves the compression ratio. With our experimental results, we compare our method against the Joint Photographic Experts Group (JPEG) and Joint Bi-level Image Experts Group (JBIG) compression methods, and our method proves superior.
5 citations
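The neighbor-subtraction preprocessing described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code, and the function names (`predict_differences`, `reconstruct`) are hypothetical; it shows why subtracting the left neighbor narrows the value range of each block, so BST needs fewer bits.

```python
import numpy as np

def predict_differences(image):
    """Replace each pixel with its difference from the left neighbor.

    Adjacent pixels are usually close in value, so the differences
    cluster near zero and each block spans a narrower min-max range
    (a hypothetical stand-in for the paper's preprocessing step).
    """
    img = np.asarray(image, dtype=np.int16)
    diffs = img.copy()
    diffs[:, 1:] = img[:, 1:] - img[:, :-1]  # first column kept verbatim
    return diffs

def reconstruct(diffs):
    """Invert the prediction losslessly via cumulative summation."""
    return np.cumsum(diffs, axis=1).astype(np.int16)
```

Because the first column is stored verbatim, a cumulative sum along each row recovers the original image exactly, so the step is fully lossless.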
26 Aug 2008
TL;DR: A Laplacian-based statistical model is proposed to predict zero-quantized DCT coefficients in JPEG and reduce the computation of the encoding process, achieving the best real-time performance at the expense of negligible visual degradation.
Abstract: Digital image/video coding standards such as JPEG and H.264 are becoming more and more important for multimedia applications. Because of the huge amount of computation involved, there have been significant efforts to speed up the encoding process. This paper proposes a Laplacian-based statistical model to predict zero-quantized DCT coefficients in JPEG and to reduce the computation of the encoding process. Compared with standard JPEG and a reference from the literature, the proposed model significantly reduces computational complexity and achieves the best real-time performance at the expense of negligible visual degradation. Moreover, it can be applied directly to other DCT-based image/video codecs. The computational reduction also implies longer battery lifetime and energy savings for digital applications.
5 citations
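One way to picture the Laplacian idea: a zero-mean Laplacian with variance s² has scale b = s/√2 and P(|X| ≤ t) = 1 − exp(−t/b), so if that probability at half the quantization step is high enough, the encoder can emit a zero without computing and quantizing the coefficient. The sketch below follows that reasoning only; the paper's actual decision rule is not given in the abstract, so `predict_zero_block` and its `confidence` parameter are assumptions.

```python
import math

def predict_zero_block(block_variance, quant_step, confidence=0.95):
    """Decide whether a DCT coefficient is likely to quantize to zero
    under a Laplacian model (illustrative sketch, not the paper's rule).

    A zero-mean Laplacian with variance s^2 has scale b = s / sqrt(2)
    and P(|X| <= t) = 1 - exp(-t / b).  If that probability at
    t = quant_step / 2 exceeds the confidence level, the encoder may
    skip the computation and emit a zero coefficient directly.
    """
    if block_variance == 0:
        return True  # flat block: every AC coefficient is exactly zero
    b = math.sqrt(block_variance / 2.0)
    p_zero = 1.0 - math.exp(-(quant_step / 2.0) / b)
    return p_zero >= confidence
```

Low-variance blocks pass the test and can be skipped; high-variance blocks fail it and are encoded normally, which is where the claimed computational saving would come from.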
01 Jan 2012
TL;DR: A set of new JPEG compression algorithms is presented that combines the K-Means clustering algorithm and the DCT to further reduce bandwidth requirements; it achieves almost the same Peak Signal-to-Noise Ratio (PSNR) as the standard JPEG algorithm.
Abstract: The use of digital image communication has increased exponentially in recent years. Joint Photographic Experts Group (JPEG) is the most successful still-image compression standard for bandwidth conservation. The JPEG compression system consists of a DCT transformation unit followed by a quantizer and an encoder unit; at the decoder end, the image is recreated by the inverse DCT. In this paper, we present a set of new JPEG compression algorithms that combine the K-Means clustering algorithm and the DCT to further reduce bandwidth requirements. Experiments were carried out with many standard still images. Our algorithms achieve almost the same Peak Signal-to-Noise Ratio (PSNR) as the standard JPEG algorithm.
5 citations
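The abstract does not spell out how K-Means is combined with the DCT, but one plausible reading is that similar blocks (or their DCT coefficients) are clustered and each block is represented by a centroid index, so only the centroids plus per-block labels need to be transmitted. The toy `kmeans_quantize` below illustrates that idea under those assumptions; it is not the paper's pipeline.

```python
import numpy as np

def kmeans_quantize(blocks, k=4, iters=10, seed=0):
    """Cluster flattened image/DCT blocks with a tiny k-means and map
    each block to a centroid index (illustrative assumption about how
    clustering could cut the data to transmit, not the paper's method).
    """
    rng = np.random.default_rng(seed)
    X = blocks.reshape(len(blocks), -1).astype(float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Squared Euclidean distance from every block to every centroid.
        d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # keep empty clusters unchanged
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Transmitting k centroids plus one small label per block is far cheaper than transmitting every block, which is the bandwidth-reduction intuition; the PSNR cost depends on how well the centroids represent the blocks.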
TL;DR: The results show that the performance of the standard JPEG technique can be improved by the proposed method; the new hybrid approach achieves about a 20% higher compression ratio than standard JPEG.
Abstract: The standard JPEG technique involves three processes: mapping, which reduces interpixel redundancy; quantization, which is lossy; and entropy encoding, which is considered lossless. Lossy JPEG compression is a commonly used image compression technique. In this paper, a new hybrid technique is introduced that combines the JPEG algorithm with a symbol reduction Huffman technique to achieve a higher compression ratio. In the symbol reduction method, the number of symbols is reduced by combining symbols together to form new ones; as a result, the number of Huffman codes to be generated is also reduced. The method is simple, fast, and easy to implement. The results show that the performance of the standard JPEG technique can be improved by the proposed method: the new hybrid approach achieves about a 20% higher compression ratio than standard JPEG.
5 citations
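A minimal sketch of the symbol reduction idea, assuming the simplest grouping rule (pairing adjacent symbols into combined symbols); the paper's exact rule may differ. Halving the token count also shrinks the set of Huffman codes that must be generated:

```python
from collections import Counter
import heapq

def reduce_symbols(data):
    """Combine adjacent symbols pairwise into new, larger symbols
    (a hypothetical grouping rule; the paper's rule may differ)."""
    if len(data) % 2:
        data = data + data[-1:]           # pad to an even length
    return [data[i] + data[i + 1] for i in range(0, len(data), 2)]

def huffman_code_lengths(symbols):
    """Return the Huffman code length assigned to each distinct symbol."""
    freq = Counter(symbols)
    if len(freq) == 1:                    # degenerate one-symbol alphabet
        return {next(iter(freq)): 1}
    # Heap entries: (total count, tie-breaker, {symbol: depth so far}).
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)
        c2, _, right = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**left, **right}.items()}
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return heap[0][2]
```

With pairing, `"aabbaabb"` becomes the four tokens `["aa", "bb", "aa", "bb"]` over a two-symbol alphabet, so the Huffman table holds two codes instead of the original alphabet's count, matching the abstract's claim that fewer codes need to be generated.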
08 May 2006
TL;DR: A low-complexity, efficient embedded hybrid-coding algorithm called embedded subband partitioning block arithmetic coding (ESPBA) is presented; experimental results show that ESPBA has better PSNR performance than IWT-based SPIHT for lossy compression.
Abstract: The integer wavelet transform (IWT) allows both lossy and lossless compression using a single bitstream. In this paper, a low-complexity, efficient embedded hybrid-coding algorithm called embedded subband partitioning block arithmetic coding (ESPBA) is presented. The new algorithm first selects a segmentation threshold from the integer powers of two. All image coefficients above this threshold are encoded using a simple quadtree partitioning scheme; the residual coefficients below the threshold are encoded using block arithmetic coding based on context modeling. Experimental results show that ESPBA has better PSNR performance than IWT-based SPIHT for lossy compression, and its lossless compression performance is comparable to JPEG-LS and SPIHT.
5 citations
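The "segmentation threshold based on the integer powers of two" can plausibly be read as the largest power of two not exceeding the peak coefficient magnitude. The sketch below makes that assumption explicit (function names are hypothetical) and shows the resulting split into the significant set, which the abstract routes to quadtree coding, and the residual set, which goes to context-modeled block arithmetic coding.

```python
import numpy as np

def power_of_two_threshold(coeffs):
    """Largest power of two not exceeding the peak coefficient
    magnitude (an assumed reading of the threshold rule)."""
    peak = int(np.max(np.abs(coeffs)))
    if peak == 0:
        return 0
    return 1 << (peak.bit_length() - 1)

def partition(coeffs, threshold):
    """Split coefficients into the significant set (|c| >= threshold)
    and the residual set (|c| < threshold), preserving order."""
    mask = np.abs(coeffs) >= threshold
    return coeffs[mask], coeffs[~mask]
```

For example, a coefficient list peaking at magnitude 9 yields a threshold of 8, leaving only the peak in the significant set and everything else in the cheaper residual pass.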