
Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as .jls.


Papers
Journal Article
TL;DR: A new PBO-based JPEG compression scheme is proposed to compress images; it reduces image size and obtains a higher PSNR value than standard JPEG, showing that PBO-based JPEG compression is the better technique for image compression.
Abstract: The basic goal of image data compression is to reduce the bit rate for transmission and storage while either maintaining the original quality or providing an acceptable fidelity. JPEG is one of the most prominent topics in image compression technology. JPEG is distinctive in that it is primarily a lossy compression method: it converts the image from the spatial domain into the frequency domain using the DCT, which yields highly compressed images. A new PBO-based JPEG compression scheme is proposed to compress images and reduce their size. Four metrics are used to measure performance and compare the results: peak signal-to-noise ratio (PSNR), mean squared error (MSE), entropy, and the EPI factor. The proposed model focuses on reducing the image size and the time elapsed in compression with minimum distortion in the reconstructed image, and is implemented in the MATLAB 7.11.0 environment. The aim of compression is to achieve a good-quality compressed image, making storage and transmission more efficient. The method is evaluated on a set of test images. The PBO-based JPEG implementation obtains a higher PSNR value, and the higher the PSNR, the higher the quality of the image. Because PBO-based JPEG compression yields a higher PSNR for the compressed image than plain JPEG compression, it is the better technique for image compression.
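The paper's PBO optimization step is not spelled out in this abstract, but the fidelity metrics it reports are standard. The following Python sketch (using NumPy and Pillow; the file name lena.png is a placeholder) shows how MSE and PSNR quantify the distortion a JPEG round trip introduces:

```python
# Sketch of the MSE/PSNR fidelity metrics used to compare codecs;
# the PBO-based optimization itself is not reproduced here.
import numpy as np
from PIL import Image

def mse(original: np.ndarray, compressed: np.ndarray) -> float:
    """Mean squared error between two same-sized grayscale images."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, compressed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means better fidelity."""
    err = mse(original, compressed)
    if err == 0.0:
        return float("inf")  # images are identical
    return 10.0 * np.log10(peak ** 2 / err)

# Measure the distortion introduced by a JPEG round trip at quality 75.
img = np.array(Image.open("lena.png").convert("L"))   # placeholder image
Image.fromarray(img).save("lena_q75.jpg", quality=75)
jpg = np.array(Image.open("lena_q75.jpg").convert("L"))
print(f"MSE:  {mse(img, jpg):.2f}")
print(f"PSNR: {psnr(img, jpg):.2f} dB")
```

A scheme that achieves a higher PSNR at the same file size, as the paper claims for its PBO variant, reconstructs the image more faithfully.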
Journal ArticleDOI
01 Jan 2013
TL;DR: Context switching is introduced into a lossless compression system; one of a small set of predictor models is chosen individually for each sample instead of each frame, so the system adjusts quickly to rapid signal changes.
Abstract: This paper describes the introduction of context switching into a lossless compression system. The context is determined from the features of the previous signal samples, and each context is associated with an individual predictor. Context switching allows one of a small set of predictor models to be chosen individually for each sample instead of each frame; consequently, the system adjusts quickly to rapid signal changes. The system was described in the ImpulseC hardware description language and realized on an FPGA platform.
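The abstract does not publish the context features or predictor coefficients, so both are illustrative assumptions in the Python sketch below; what it demonstrates is the structural idea of selecting a predictor per sample from the already-seen history:

```python
# Sketch of per-sample context switching between fixed linear
# predictors; the context features and coefficients are assumptions,
# not the paper's published design.
import numpy as np

# A few fixed linear predictors over the last three samples
# (newest sample first in the history vector).
PREDICTORS = [
    np.array([1.0, 0.0, 0.0]),   # repeat previous sample (flat signal)
    np.array([2.0, -1.0, 0.0]),  # linear extrapolation (steady ramps)
    np.array([3.0, -3.0, 1.0]),  # quadratic extrapolation (curves)
]

def context_id(history: np.ndarray) -> int:
    """Derive a context from the local activity of previous samples."""
    activity = np.abs(np.diff(history)).mean()
    if activity < 2.0:
        return 0  # quiet signal
    if activity < 16.0:
        return 1  # moderate slope
    return 2      # rapid change

def encode_residuals(signal: np.ndarray) -> np.ndarray:
    """Replace each sample by its prediction residual, choosing a
    predictor per sample rather than per frame."""
    residuals = signal.astype(np.int64)
    for n in range(3, len(signal)):
        history = signal[n-3:n].astype(np.float64)[::-1]  # newest first
        coeffs = PREDICTORS[context_id(history)]
        prediction = int(round(float(coeffs @ history)))
        residuals[n] = int(signal[n]) - prediction
    return residuals  # a real coder entropy-codes these residuals
```

Because the context is derived only from samples the decoder has already reconstructed, the decoder can repeat the same predictor choice without any side information, which is what makes per-sample switching cheap.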
Journal ArticleDOI
TL;DR: This work attempts to address the issue of reconstructing a high-quality image with the use of just one descriptor rather than the conventional two, and compares the use of a Type I quantizer and a Type II quantizer.
Abstract: The growing trend of online image sharing and downloading today mandates better encoding and decoding schemes. This paper looks into this issue of image coding. Multiple Description Coding is an encoding and decoding scheme specially designed to provide more error resilience for data transmission, targeting lossy transmission channels. This work attempts to address the issue of reconstructing a high-quality image with the use of just one descriptor rather than the conventional two, and compares the use of a Type I quantizer and a Type II quantizer. We propose and compare four coders by examining the quality of the reconstructed images: the JPEG HH (Horizontal Pixel Interleaving with Huffman Coding) model, the JPEG HA (Horizontal Pixel Interleaving with Arithmetic Encoding) model, the JPEG VH (Vertical Pixel Interleaving with Huffman Encoding) model, and the JPEG VA (Vertical Pixel Interleaving with Arithmetic Encoding) model. The findings suggest that the choice between horizontal and vertical pixel interleaving does not affect the results much, whereas the choice of quantizer greatly affects performance.
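The exact interleaving pattern is not given in this abstract; the Python sketch below assumes that horizontal pixel interleaving means even and odd columns form the two descriptions, and shows both the central decoder (both descriptions received) and a side decoder that estimates a lost description from its neighbours:

```python
# Sketch of horizontal pixel interleaving for Multiple Description
# Coding; the even/odd-column split is an assumption, not the paper's
# published pattern.
import numpy as np

def split_horizontal(img: np.ndarray):
    """Split an image into two descriptions by column interleaving."""
    return img[:, 0::2], img[:, 1::2]

def merge(d0: np.ndarray, d1: np.ndarray) -> np.ndarray:
    """Central decoder: both descriptions arrived, so re-interleave."""
    h, w = d0.shape[0], d0.shape[1] + d1.shape[1]
    out = np.empty((h, w), dtype=d0.dtype)
    out[:, 0::2], out[:, 1::2] = d0, d1
    return out

def reconstruct_from_one(d0: np.ndarray, width: int) -> np.ndarray:
    """Side decoder: only description 0 arrived; estimate each missing
    column as the average of its surviving horizontal neighbours."""
    out = np.zeros((d0.shape[0], width), dtype=np.float64)
    out[:, 0::2] = d0
    for j in range(1, width, 2):
        left = out[:, j - 1]
        right = out[:, j + 1] if j + 1 < width else left
        out[:, j] = (left + right) / 2.0
    return out

# Round trip: lossless with both descriptions, approximate with one.
img = np.arange(24, dtype=np.float64).reshape(4, 6)
d0, d1 = split_horizontal(img)
assert np.array_equal(merge(d0, d1), img)
estimate = reconstruct_from_one(d0, img.shape[1])  # degraded but usable
```

Vertical interleaving is the same idea applied to rows; in the full coders each description is then JPEG-coded with Huffman or arithmetic entropy coding.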
Journal Article
TL;DR: A new algorithm is proposed to embed a watermark in a digital image such that it can survive lossy JPEG (Joint Photographic Experts Group) compression.
Abstract: In this paper, a new algorithm is proposed to embed a watermark in a digital image such that it can survive lossy JPEG (Joint Photographic Experts Group) compression. Analysis operations are performed on the cover image before and after compression to determine strong locations that survive JPEG compression; these locations are used as hosts for the watermark. A map is used to embed the indices of these strong locations, and the receiver uses the map to extract them. Fidelity criteria evaluate the errors between the original and cover images; good results are achieved without perceptual degradation of the transparency of the cover image.
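The embedding rule itself is not described in this abstract, but the before/after analysis pass can be sketched: compress the cover at the expected JPEG quality, compare it with the original, and keep the least-changed pixel positions as watermark hosts. In the Python sketch below, cover.png, the quality setting, and the bit budget are all placeholder assumptions:

```python
# Sketch of the "analyze before and after compression" pass assumed
# from the abstract; the actual embedding algorithm is not reproduced.
import io
import numpy as np
from PIL import Image

def strong_locations(cover: np.ndarray, quality: int = 75, n_bits: int = 1024):
    """Return the (row, col) indices of the n_bits pixels that change
    least under a JPEG round trip at the given quality."""
    buf = io.BytesIO()
    Image.fromarray(cover).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    decoded = np.array(Image.open(buf).convert("L"))
    stability = np.abs(cover.astype(np.int64) - decoded.astype(np.int64))
    order = stability.ravel().argsort(kind="stable")  # most stable first
    return np.unravel_index(order[:n_bits], cover.shape)

cover = np.array(Image.open("cover.png").convert("L"))  # placeholder image
rows, cols = strong_locations(cover)
# These indices would populate the location map that the receiver
# uses to find and extract the embedded watermark bits.
```

Embedding in positions that JPEG already leaves nearly untouched is what lets the watermark survive recompression at or above the analyzed quality.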

Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15