Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2415 publications have been published within this topic, receiving 51110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results prove that secure watermark detection in the compressive-sensing (CS) domain is feasible, and the framework can be extended to other secure signal-processing and data-mining applications in the cloud.
Abstract: Privacy is a critical issue when data owners outsource data storage to a cloud, a third-party computing service. We identify a cloud computing application scenario that requires simultaneously performing secure watermark detection and privacy-preserving multimedia data storage. We then propose a compressive sensing (CS) based framework that uses secure multiparty computation (MPC) protocols to handle this scenario. To preserve privacy, the multimedia data and the secret watermark pattern are presented to the cloud for secure watermark detection in the CS domain. During the CS transformation, MPC protocols protect the privacy of the CS matrix and the watermark pattern under a semi-honest security model. Given the watermarked image, the watermark pattern, and the CS matrix size, we derive the expected watermark detection performance in the CS domain. Our theoretical analysis and experimental results prove that secure watermark detection in the CS domain is feasible, and the framework can be extended to other secure signal-processing and data-mining applications in the cloud.
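The core property the abstract relies on is that a random CS projection approximately preserves inner products, so a correlation detector can run on projected data without seeing the original image. A minimal sketch of that idea follows; it is an illustration only, not the paper's MPC protocol, and all dimensions, the watermark strength, and the detector are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 4096, 1024   # signal length and number of CS measurements (m << n)

# Random Gaussian CS matrix, scaled so projections preserve inner products
# in expectation; in the paper's setting this matrix is kept private via MPC.
phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

watermark = rng.choice([-1.0, 1.0], size=n)   # secret watermark pattern
host = rng.normal(size=n)                     # host image (flattened)
marked = host + 0.25 * watermark              # additively watermarked copy

def cs_detect(signal: np.ndarray) -> float:
    """Correlation detector computed entirely in the CS (projected) domain."""
    return float((phi @ signal) @ (phi @ watermark))

# The projected correlation approximates the plain-domain correlation,
# so the marked image yields a much larger detector response than the host.
print(cs_detect(marked), cs_detect(host))
```

The detector never touches the original pixels, only the m-dimensional projections, which is what makes outsourcing the detection to an untrusted party plausible once the projection itself is protected.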
Proceedings ArticleDOI
16 Dec 2007
TL;DR: A new adaptive image compression scheme based on JPEG is proposed to improve the image quality of the JPEG baseline sequential codec; simulation results show that its performance is comparable to that of other compared schemes.
Abstract: In this paper, a new adaptive image compression scheme based on JPEG is proposed to improve the image quality of the JPEG baseline sequential image compression codec. In this scheme, the Hough transform is employed as an edge-oriented classifier to categorize image blocks into four classes. Different types of image blocks are processed with different quantization and encoding methods to achieve effective compression performance. To preserve edge information, adaptive quantization tables based on the human visual system and adapted zig-zag scan sequences are used. Simulation results show that, in both objective and subjective visual image quality, the performance of the proposed scheme is comparable to that of other compared schemes.
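The block-classification step the abstract describes can be sketched with a simple gradient-energy classifier standing in for the paper's Hough-transform classifier; the four class names, the thresholds, and the per-class quantization scale factors below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def classify_block(block: np.ndarray) -> str:
    """Classify an 8x8 block into one of four edge classes.

    A gradient-energy stand-in for a Hough-based classifier: compare
    total horizontal vs vertical intensity variation within the block.
    """
    gx = np.abs(np.diff(block, axis=1)).sum()   # left-right variation
    gy = np.abs(np.diff(block, axis=0)).sum()   # top-bottom variation
    if gx + gy < 40:                            # hypothetical smoothness threshold
        return "smooth"
    if gx > 2 * gy:
        return "vertical_edge"
    if gy > 2 * gx:
        return "horizontal_edge"
    return "diagonal_or_texture"

# Hypothetical per-class scaling of a base quantization table: finer
# quantization (smaller scale) where edges must be preserved.
QSCALE = {"smooth": 1.0, "vertical_edge": 0.5,
          "horizontal_edge": 0.5, "diagonal_or_texture": 0.75}

flat = np.full((8, 8), 128.0)
edge = np.tile(np.r_[np.zeros(4), np.full(4, 255.0)], (8, 1))  # vertical edge
print(classify_block(flat), classify_block(edge))
```

Each block's class would then select its quantization table and scan order before the usual DCT/quantize/entropy-code pipeline, which is the adaptivity the scheme adds over baseline JPEG.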
Asaf Sofu
01 Jan 1995
TL;DR: Three lossless compression algorithms, Huffman, adaptive Huffman, and Lempel-Ziv-Welch coding, are discussed and analyzed to determine which is the most efficient and convenient lossless compression scheme for vector transform and vector subband coding algorithms.
Abstract: Lossless compression algorithms are utilized in image and video compression systems such as JPEG and MPEG. Three lossless compression algorithms, Huffman, adaptive Huffman, and Lempel-Ziv-Welch coding, are discussed and analyzed to determine which is the most efficient and convenient lossless compression scheme for vector transform and vector subband coding algorithms. After a general overview of each algorithm, the compression performance of the lossless algorithms and of the overall lossy system is compared and analyzed. To achieve the best possible compression, the coding algorithms and the source are modified. The maximum lossless compression achievable with these techniques is studied and analyzed.
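Of the three algorithms compared, Lempel-Ziv-Welch is the one that builds its model on the fly from the data itself. A minimal textbook sketch of LZW (not the paper's implementation; fixed-width codes and table-reset handling are omitted for brevity):

```python
def lzw_encode(data: bytes) -> list[int]:
    """Minimal LZW encoder: emit a code for the longest known phrase,
    then add that phrase extended by one byte to the dictionary."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)      # learn the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes: list[int]) -> bytes:
    """Inverse: rebuild the same dictionary while emitting phrases."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]  # KwKwK case
        out += entry
        table[len(table)] = w + entry[:1]
        w = entry
    return bytes(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(data)
print(len(data), len(codes))   # repeated phrases collapse to single codes
```

Unlike Huffman coding, no symbol statistics are transmitted: encoder and decoder grow identical dictionaries from the stream, which is why LZW is convenient when the source statistics are not known in advance.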
Posted Content
TL;DR: This work explores new photo storage management techniques that are fast so they do not adversely affect photo download latency, are complementary to existing distributed erasure coding techniques, can efficiently be converted to the standard JPEG user devices expect, and significantly increase compression.
Abstract: The popularity of photo sharing services has increased dramatically in recent years. Increases in users, quantity of photos, and quality/resolution of photos combined with the user expectation that photos are reliably stored indefinitely creates a growing burden on the storage backend of these services. We identify a new opportunity for storage savings with application-specific compression for photo sharing services: photo recompression. We explore new photo storage management techniques that are fast so they do not adversely affect photo download latency, are complementary to existing distributed erasure coding techniques, can efficiently be converted to the standard JPEG user devices expect, and significantly increase compression. We implement our photo recompression techniques in two novel codecs, ROMP and L-ROMP. ROMP is a lossless JPEG recompression codec that compresses typical photos 15% over standard JPEG. L-ROMP is a lossy JPEG recompression codec that distorts photos in a perceptually un-noticeable way and typically achieves 28% compression over standard JPEG. We estimate the benefits of our approach on Facebook's photo stack and find that our approaches can reduce the photo storage by 0.3-0.9x the logical size of the stored photos, and offer additional, collateral benefits to the photo caching stack, including 5-11% fewer requests to the backend storage, 15-31% reduction in wide-area bandwidth, and 16% reduction in external bandwidth.
Journal ArticleDOI
TL;DR: This work presents a new lossless color image compression algorithm based on pixel prediction, arithmetic coding, and the Reversible Color Transform, and shows that it reduces bit rates compared with JPEG 2000 and JPEG-XR.
Abstract: Lossless image compression is a class of image compression algorithms that allows the original image to be perfectly reconstructed from the compressed image. This work presents a new lossless color image compression algorithm based on pixel prediction and arithmetic coding. Lossless compression of a red, green, and blue (RGB) image is carried out by first decorrelating it with the Reversible Color Transform (RCT). The resulting Y component is then compressed by a conventional lossless grayscale image compression method. The chrominance components are encoded using arithmetic coding and a pixel prediction scheme: the prediction error is defined via the RCT, and arithmetic coding is applied to the error signal. The compressed components are combined to form a losslessly compressed RGB image. It is demonstrated that this method reduces bit rates compared with JPEG 2000 and JPEG-XR. To further reduce the bit rate, the compression and pixel prediction methods can be modified for better performance. Keywords: Lossless image compression, Pixel Prediction, Arithmetic Coding, RCT, Huffman coding
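The decorrelation step described above must be exactly invertible in integer arithmetic or losslessness is broken. A minimal sketch, assuming the RCT variant used in JPEG 2000 Part 1 (the abstract does not specify which reversible transform the authors use):

```python
def rct_forward(r: int, g: int, b: int) -> tuple[int, int, int]:
    """Forward reversible color transform (integer, lossless)."""
    y = (r + 2 * g + b) >> 2      # luma-like component (floor division)
    cb = b - g                    # chrominance differences
    cr = r - g
    return y, cb, cr

def rct_inverse(y: int, cb: int, cr: int) -> tuple[int, int, int]:
    """Exact inverse: the floor in the forward transform cancels here."""
    g = y - ((cb + cr) >> 2)      # Python's >> floors for negatives too
    return cr + g, g, cb + g

print(rct_forward(12, 200, 77))
print(rct_inverse(*rct_forward(12, 200, 77)))
```

Because only integer adds, subtracts, and floor shifts are used, every RGB triple round-trips exactly, while Y carries most of the energy and Cb/Cr become small, easily predicted difference signals for the arithmetic coder.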

Network Information
Related Topics (5)
Image segmentation
79.6K papers, 1.8M citations
82% related
Feature (computer vision)
128.2K papers, 1.7M citations
82% related
Feature extraction
111.8K papers, 2.1M citations
82% related
Image processing
229.9K papers, 3.5M citations
80% related
Convolutional neural network
74.7K papers, 2M citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  21
2022  40
2021  5
2020  2
2019  8
2018  15