Topic

Lossless JPEG

About: Lossless JPEG is a research topic. Over its lifetime, 2,415 publications have been published within this topic, receiving 51,110 citations. The topic is also known as: Lossless JPEG and .jls.


Papers
Proceedings ArticleDOI
16 May 2005
TL;DR: Simulation results demonstrate that the embedded watermarks can be almost fully extracted from images compressed with very high compression ratio.
Abstract: A watermarking technique for copyright protection is introduced. It achieves a good improvement in the robustness of protected images, especially against attacks using JPEG compression. Simulation results demonstrate that the embedded watermarks can be almost fully extracted from images compressed with very high compression ratio.

33 citations
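The robustness claim above can be checked with a generic test harness: embed a mark, re-encode the image at very low JPEG quality, and measure how much of the mark survives. The sketch below is not the authors' scheme; it uses a simple spread-spectrum watermark with non-blind correlation detection, and it assumes numpy and Pillow are available and that a grayscale test image exists on disk (the file name lena.png is hypothetical).

import io
import numpy as np
from PIL import Image

def embed(img, key, alpha=2.0):
    # Add a pseudorandom +/-1 pattern scaled by alpha (spread-spectrum style).
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)
    return np.clip(img + alpha * pattern, 0, 255), pattern

def jpeg_attack(img, quality=10):
    # Simulate the attack by round-tripping through heavy JPEG compression.
    buf = io.BytesIO()
    Image.fromarray(img.astype(np.uint8)).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf), dtype=np.float64)

def detect(attacked, original, pattern):
    # Non-blind detection: correlate the residual with the embedded pattern.
    residual = attacked - original
    return float(np.sum(residual * pattern) /
                 (np.linalg.norm(residual) * np.linalg.norm(pattern) + 1e-12))

host = np.asarray(Image.open("lena.png").convert("L"), dtype=np.float64)  # hypothetical test image
marked, pattern = embed(host, key=42)
print(detect(jpeg_attack(marked, quality=10), host, pattern))

Higher correlation values indicate that more of the watermark survived the compression; values near zero mean it was effectively destroyed.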

Journal ArticleDOI
01 Dec 2000
TL;DR: Wang et al. investigate the prediction scheme of JPEG-LS, the latest JPEG standard for lossless/near-lossless image compression, and propose an additional diagonal edge detection scheme that achieves better prediction accuracy and hence provides potential for further improvement.
Abstract: The authors investigate the prediction scheme of JPEG-LS, the latest JPEG standard for lossless/near lossless image compression. They show that it is not sufficient to consider only horizontal and vertical edges in constructing predictive values. As a result, they propose an additional diagonal edge detection scheme to achieve better prediction accuracy and hence provide potential for further improvement. Experiments show that, in terms of mean-square-error values, the proposed scheme outperforms the existing JPEG-LS prediction for all images tested, while the complexity of the overall algorithm is maintained at a similar level.

33 citations
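For context, the predictor the paper modifies is the JPEG-LS median edge detector (MED), which looks only at the west (a), north (b), and north-west (c) neighbours and therefore reacts only to horizontal and vertical edges. The first function below is the standard MED rule; the second is a purely hypothetical illustration of how a diagonal edge test could be added on top, not the authors' actual scheme.

def med_predictor(a, b, c):
    # Standard JPEG-LS / LOCO-I median edge detector.
    if c >= max(a, b):      # edge above-left is bright: predict the darker of a, b
        return min(a, b)
    if c <= min(a, b):      # edge above-left is dark: predict the brighter of a, b
        return max(a, b)
    return a + b - c        # smooth region: planar prediction

def med_with_diagonal(a, b, c, d, threshold=8):
    # Hypothetical extension: d is the north-east neighbour; if the local
    # gradients suggest a diagonal edge, fall back to a diagonal average.
    if abs(c - b) > threshold and abs(a - d) > threshold:
        return (a + b) // 2
    return med_predictor(a, b, c)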

Journal ArticleDOI
TL;DR: This study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG, and verifies other research studies showing that wavelet compression yields better compression quality than JPEG at constant compressed file sizes.
Abstract: This presentation focuses on the quantitative comparison of three lossy compression methods applied to a variety of 12-bit medical images. One Joint Photographic Experts Group (JPEG) and two wavelet algorithms were used on a population of 60 images. The medical images were obtained in Digital Imaging and Communications in Medicine (DICOM) file format and ranged in matrix size from 256 × 256 (magnetic resonance [MR]) to 2,560 × 2,048 (computed radiography [CR], digital radiography [DR], etc.). The algorithms were applied to each image at multiple levels of compression such that comparable compressed file sizes were obtained at each level. Each compressed image was then decompressed, and quantitative analysis was performed to compare each compressed-then-decompressed image with its corresponding original image. The statistical measures computed were sum of absolute differences, sum of squared differences, and peak signal-to-noise ratio (PSNR). Our results verify other research studies which show that wavelet compression yields better compression quality at constant compressed file sizes compared with JPEG. The DICOM standard does not yet include wavelet as a recognized lossy compression standard. For implementers and users to adopt wavelet technology as part of their image management and communication installations, there have to be significant differences in quality and compressibility compared with JPEG to justify expensive software licenses and the introduction of proprietary elements in the standard. Our study shows that different wavelet implementations vary in their capacity to differentiate themselves from the old, established lossy JPEG.

33 citations
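The three distortion measures named in the abstract are straightforward to compute; the sketch below shows them for 12-bit data, where the PSNR peak value is 2**12 - 1 = 4095. The synthetic image at the end is a hypothetical stand-in for a decoded DICOM image, assuming numpy is available.

import numpy as np

def distortion_metrics(original, decompressed, bit_depth=12):
    orig = original.astype(np.float64)
    rec = decompressed.astype(np.float64)
    diff = orig - rec
    sad = float(np.sum(np.abs(diff)))       # sum of absolute differences
    ssd = float(np.sum(diff ** 2))          # sum of squared differences
    mse = ssd / diff.size
    peak = 2 ** bit_depth - 1               # 4095 for 12-bit medical images
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return {"SAD": sad, "SSD": ssd, "PSNR_dB": psnr}

# Hypothetical example with synthetic 12-bit data:
rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(256, 256))
noisy = np.clip(img + rng.normal(0, 4, size=img.shape), 0, 4095)
print(distortion_metrics(img, noisy))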

Proceedings ArticleDOI
05 Jun 2016
TL;DR: This work analyzes the error propagation sensitivity in the DCT network and uses this information to model the impact of introduced errors on the output quality of JPEG, and formulate a novel optimization problem that maximizes power savings under an error budget.
Abstract: JPEG compression based on the discrete cosine transform (DCT) is a key building block in low-power multimedia applications. We use approximate computing to exploit the error tolerance of JPEG and formulate a novel optimization problem that maximizes power savings under an error budget. We analyze the error propagation sensitivity in the DCT network and use this information to model the impact of introduced errors on the output quality. Simulations show up to a 15% reduction in area and delay, which corresponds to 40% power savings at iso-delay.

33 citations
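A small numerical experiment makes the error-tolerance argument concrete: an arithmetic error injected into a DCT coefficient is partly or wholly absorbed by the quantization step that follows it, and how much is absorbed depends on that coefficient's quantization step size. The sketch below is not the authors' sensitivity model; it simply injects errors of different magnitudes into one coefficient of an 8x8 block and measures the extra reconstruction error, assuming numpy is available.

import numpy as np

N = 8
k = np.arange(N)
# Orthonormal 8-point DCT-II matrix.
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] *= 1.0 / np.sqrt(2.0)

def dct2(block):
    return C @ block @ C.T

def idct2(coef):
    return C.T @ coef @ C

# Standard JPEG luminance quantization table (quality 50).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(N, N)).astype(np.float64) - 128.0

def reconstruct(coef_error=None, pos=(7, 7)):
    coef = dct2(block)
    if coef_error is not None:
        coef[pos] += coef_error        # inject an arithmetic error at one coefficient
    quant = np.round(coef / Q) * Q     # quantize and dequantize
    return idct2(quant)

exact = reconstruct()
for err in (2.0, 20.0, 80.0):
    extra_mse = np.mean((reconstruct(coef_error=err) - exact) ** 2)
    print(f"error {err:5.1f} injected at (7,7): extra output MSE = {extra_mse:.3f}")

Errors smaller than roughly half the local quantization step usually vanish after rounding, which is the slack an approximate DCT implementation can trade for power.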

Journal ArticleDOI
TL;DR: This work investigates the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens, and suggests that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
Abstract: A major issue in telepathology is the extremely large and growing size of digitized “virtual” slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. “Visually lossless” compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.

33 citations
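The rate-control idea described above, compressing each region as far as possible while a perceptual metric stays above its visibility threshold, can be illustrated with a small search loop. The sketch below is only a stand-in: the paper uses JPEG 2000 guided by a visual discrimination model, whereas this example sweeps ordinary JPEG quality with Pillow and uses PSNR with an arbitrary 45 dB threshold as the placeholder metric; the file name tissue_region.png is hypothetical.

import io
import numpy as np
from PIL import Image

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def jpeg_roundtrip(img, quality):
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    return np.asarray(Image.open(buf)), buf.tell()

def max_compression_above_threshold(img, threshold_db=45.0):
    # Sweep from light to heavy compression and keep the last setting whose
    # reconstruction still meets the metric threshold.
    best = None
    for quality in range(95, 4, -5):
        rec, nbytes = jpeg_roundtrip(img, quality)
        if psnr(img, rec) >= threshold_db:
            best = (quality, nbytes)
        else:
            break
    return best

img = np.asarray(Image.open("tissue_region.png").convert("L"))  # hypothetical cropped slide region
print(max_compression_above_threshold(img))

In the paper's setting the placeholder metric would be replaced by the VDM's just-noticeable-difference prediction, which the study found to be far more stable across tissue content than PSNR or SSIM.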


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 82% related
Feature (computer vision): 128.2K papers, 1.7M citations, 82% related
Feature extraction: 111.8K papers, 2.1M citations, 82% related
Image processing: 229.9K papers, 3.5M citations, 80% related
Convolutional neural network: 74.7K papers, 2M citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    21
2022    40
2021    5
2020    2
2019    8
2018    15