Topic

Quantization (image processing)

About: Quantization (image processing) is a research topic. Over its lifetime, 7,977 publications have been published within this topic, receiving 126,632 citations.


Papers
Journal ArticleDOI
Graham Hudson, Alain Leger, Birger Niss, Istvan Sebestyen, Jørgen Vaaben
TL;DR: The JPEG standard has become one of the most successful standards in information and communication technologies (ICT) history and is used for image compression in applications as diverse as web pages, medical imaging, and public records.
Abstract: Digital image capture, processing, storage, transmission, and display are now taken for granted as part of the technology of modern everyday life. Digital image compression is one of the enabling technologies of the present multimedia world. The image compression technique used for applications as diverse as photography, web pages, medical imaging, and public records is JPEG, named after the ISO/CCITT “joint photographic experts group,” established in 1986, which developed the technique in the late 1980s and produced the international standard in the early ’90s. ITU-T T.81 | ISO/IEC 10918-1, also called “JPEG-1,” has become one of the most successful standards in information and communication technologies (ICT) history. The authors of this paper, all members of the original JPEG development team, were intimately involved in image-coding research and in JPEG in particular. The paper goes behind the scenes, explaining why and how JPEG came about, and looks under the bonnet of the technique, explaining the different components that give the standard its efficiency, versatility, and robustness and have made it a technique that has stood the test of time and evolved to cover applications beyond its original scope. In addition, the authors give a short outlook on the main milestones in coding schemes for still images since “JPEG-1.”

56 citations
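
The JPEG-1 pipeline described above centres on quantizing 8x8 DCT coefficients with a visually weighted table. Below is a minimal, self-contained sketch of that step in Python (NumPy/SciPy); the base table is the example luminance table from Annex K of the standard, and the quality scaling follows the common IJG convention rather than anything mandated by the standard itself.

```python
# Sketch: JPEG-style quantization of a single 8x8 luminance block.
import numpy as np
from scipy.fft import dctn, idctn

# Example luminance quantization table from JPEG-1, Annex K.
JPEG_LUMA_QTABLE = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def scaled_qtable(quality: int) -> np.ndarray:
    """IJG-style quality scaling of the base table (quality in 1..100)."""
    quality = max(1, min(100, quality))
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.clip(np.floor((JPEG_LUMA_QTABLE * scale + 50) / 100), 1, 255)

def quantize_block(block: np.ndarray, quality: int = 50) -> np.ndarray:
    """Level shift, 2-D DCT-II, then uniform quantization by the scaled table."""
    coeffs = dctn(block - 128.0, norm="ortho")
    return np.round(coeffs / scaled_qtable(quality)).astype(np.int32)

def dequantize_block(indices: np.ndarray, quality: int = 50) -> np.ndarray:
    """Inverse of quantize_block: rescale, inverse DCT, undo the level shift."""
    return idctn(indices * scaled_qtable(quality), norm="ortho") + 128.0

# Usage: round-trip a random block and inspect the reconstruction error.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.float64)
rec = dequantize_block(quantize_block(block, quality=75), quality=75)
print(np.abs(block - rec).max())
```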

Journal ArticleDOI
TL;DR: A novel perception-based quantization scheme removes nonvisible information in high dynamic range (HDR) color pixels by exploiting luminance masking, improving the performance of the High Efficiency Video Coding (HEVC) standard for HDR content.
Abstract: The human visual system (HVS) exhibits nonlinear sensitivity to the distortions introduced by lossy image and video coding. This effect is due to the luminance masking, contrast masking, and spatial and temporal frequency masking characteristics of the HVS. This paper proposes a novel perception-based quantization to remove nonvisible information in high dynamic range (HDR) color pixels by exploiting luminance masking so that the performance of the High Efficiency Video Coding (HEVC) standard is improved for HDR content. A profile scaling based on a tone-mapping curve computed for each HDR frame is introduced. The quantization step is then perceptually tuned on a transform unit basis. The proposed method has been integrated into the HEVC reference model for the HEVC range extensions (HM-RExt), and its performance was assessed by measuring the bitrate reduction against the HM-RExt. The results indicate that the proposed method achieves significant bitrate savings, up to 42.2%, with an average of 12.8%, compared with HEVC at the same quality (based on HDR-visible difference predictor-2 and subjective evaluations).

56 citations
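
The entry above tunes the HEVC quantization step per transform unit using luminance masking derived from a per-frame tone-mapping curve. The sketch below only illustrates the general idea: a toy, assumed masking weight maps each block's mean luminance to a QP offset, and the standard HEVC relation Qstep ~ 2^((QP-4)/6) converts QP to a step size. The paper's actual profile scaling and tone-mapping computation are not reproduced here.

```python
# Sketch: per-block QP adaptation driven by a toy luminance-masking model.
import numpy as np

def luminance_masking_offset(mean_luma: float, max_offset: int = 6) -> int:
    """Assumed toy model: blocks far from mid-gray get a positive QP offset,
    i.e. a coarser quantization step (not the paper's masking function)."""
    sensitivity = 1.0 - 2.0 * abs(mean_luma - 0.5)   # 1 at mid-gray, 0 at extremes
    return int(round((1.0 - sensitivity) * max_offset))

def qstep_from_qp(qp: float) -> float:
    """HEVC quantization step size as a function of QP: Qstep ~ 2**((QP-4)/6)."""
    return 2.0 ** ((qp - 4.0) / 6.0)

def per_block_qp(frame: np.ndarray, base_qp: int = 32, block: int = 32) -> np.ndarray:
    """QP map with one entry per block of a normalized luma frame in [0, 1]."""
    h, w = frame.shape
    qp_map = np.full((h // block, w // block), base_qp, dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            tile = frame[by*block:(by+1)*block, bx*block:(bx+1)*block]
            qp_map[by, bx] = base_qp + luminance_masking_offset(tile.mean())
    return qp_map

# Usage on a synthetic luma ramp in [0, 1]:
frame = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
print(per_block_qp(frame, base_qp=32, block=32))
print(qstep_from_qp(38) / qstep_from_qp(32))   # a +6 QP offset doubles the step
```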

Book ChapterDOI
03 Sep 2009
TL;DR: It is demonstrated that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations.
Abstract: Although widely used standards such as JPEG and JPEG 2000 exist in the literature, lossy image compression is still a subject of ongoing research. Galic et al. (2008) have shown that compression based on edge-enhancing anisotropic diffusion can outperform JPEG for medium to high compression ratios when the interpolation points are chosen as vertices of an adaptive triangulation. In this paper we demonstrate that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations. They include improved entropy coding, brightness rescaling, diffusivity optimisation, and interpolation swapping. Experiments on classical test images are presented that illustrate the potential of our approach.

56 citations
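
The codecs in this line of work store only a sparse set of pixels and let the decoder reconstruct the rest by diffusion. The sketch below uses homogeneous (linear) diffusion inpainting as a simpler stand-in for the edge-enhancing anisotropic diffusion (EED) used in the paper; the mask density, iteration count, and periodic boundary handling via np.roll are arbitrary choices made for brevity.

```python
# Sketch: reconstruct an image from sparse stored pixels by diffusion inpainting.
import numpy as np

def diffusion_inpaint(known: np.ndarray, mask: np.ndarray, iters: int = 2000) -> np.ndarray:
    """Fill unknown pixels (mask == False) by iterating a discrete Laplacian;
    known pixels (mask == True) are re-imposed after every step."""
    u = np.where(mask, known, known[mask].mean())   # initialize with the mean
    for _ in range(iters):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)  # periodic boundary
        u = u + 0.2 * lap            # explicit step, stable for step sizes <= 0.25
        u[mask] = known[mask]        # Dirichlet data at the stored pixels
    return u

# Usage: keep ~5% of the pixels of a smooth test image and reconstruct the rest.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.sin(4 * x) * np.cos(3 * y)
mask = rng.random(img.shape) < 0.05
rec = diffusion_inpaint(img, mask)
print(float(np.abs(rec - img).mean()))
```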

DOI
14 Jul 2004
TL;DR: An image encryption algorithm combined with JPEG encoding is proposed; it supports direct bit-rate control or recompression, which means that the encrypted image can still be decrypted correctly even if its compression ratio has been changed.
Abstract: Image encryption is a suitable method to protect image data. Encryption algorithms based on position confusion and pixel substitution change the compression ratio greatly. In this paper, an image encryption algorithm combined with JPEG encoding is proposed. In the luminance and chrominance planes, the DCT blocks are confused by pseudo-random SFCs (space-filling curves). In each DCT block, the DCT coefficients are confused according to different frequency bands, and their signs are encrypted by a chaotic stream cipher. The security of the cryptosystem against brute-force and known-plaintext attacks is also analyzed. Experimental results show that the algorithm offers high security at low cost. Moreover, it supports direct bit-rate control or recompression, which means that the encrypted image can still be decrypted correctly even if its compression ratio has been changed. These advantages make it suitable for image transmission over networks.

56 citations
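
Two ingredients of the scheme above can be illustrated on already-quantized 8x8 DCT blocks: confusing the block order with a keyed permutation (a plain stand-in for the space-filling-curve confusion) and flipping AC coefficient signs with a chaotic keystream, here a logistic map as a common choice of chaotic generator. The paper's exact cipher, key schedule, and per-band confusion are not reproduced.

```python
# Sketch: keyed block permutation plus chaotic sign encryption of AC coefficients.
import numpy as np

def logistic_keystream(x0: float, n: int, r: float = 3.99) -> np.ndarray:
    """Binary keystream from the logistic map x <- r*x*(1-x)."""
    x, bits = x0, np.empty(n, dtype=bool)
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = x > 0.5
    return bits

def encrypt_blocks(blocks: np.ndarray, perm_seed: int, x0: float) -> np.ndarray:
    """blocks: shape (n_blocks, 8, 8) of quantized DCT coefficients."""
    n = blocks.shape[0]
    perm = np.random.default_rng(perm_seed).permutation(n)   # block confusion
    out = blocks[perm].copy()
    flips = logistic_keystream(x0, n * 63).reshape(n, 63)     # one bit per AC coeff
    flat = out.reshape(n, 64)
    flat[:, 1:] *= np.where(flips, -1, 1)                     # encrypt AC signs, keep DC
    return flat.reshape(n, 8, 8)

# Usage with random stand-in coefficients; decryption would apply the inverse
# permutation and the same keystream.
blocks = np.random.default_rng(0).integers(-64, 64, (16, 8, 8))
cipher = encrypt_blocks(blocks, perm_seed=1234, x0=0.3141)
print(cipher.shape)
```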

Journal ArticleDOI
TL;DR: A novel algorithm estimates the quantization steps of the first compression in double JPEG compressed images by exploiting the effects of successive quantizations followed by dequantizations, supporting the reconstruction of the history of an image or video.
Abstract: One of the most common problems in the image forensics field is the reconstruction of the history of an image or a video. The data related to the characteristics of the camera that carried out the shooting, together with the reconstruction of the (possible) further processing, give useful hints about the originality of the visual document under analysis. For example, if an image has been subjected to more than one JPEG compression, we can state that the considered image is not the exact bitstream generated by the camera at the time of shooting. It is then useful to estimate the quantization steps of the first compression, which, in the case of JPEG images edited and then saved again in the same format, are no longer available in the embedded metadata. In this paper, we present a novel algorithm to achieve this goal in the case of double JPEG compressed images. The proposed approach copes with the case when the second quantization step is lower than the first one, exploiting the effects of successive quantizations followed by dequantizations. To improve the results of the estimation, a filtering strategy, together with a function devoted to finding the first quantization step, has been designed. Experimental results and comparisons with state-of-the-art methods confirm the effectiveness of the proposed approach.

56 citations
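
The estimation above rests on the observation that coefficients quantized with step q1, dequantized, and requantized with a smaller step q2 can occupy only certain bins. The brute-force scorer below illustrates that effect and is not the paper's algorithm (which adds a filtering strategy and a dedicated search function): it simply picks the largest candidate q1 whose reachable bins explain the observed data.

```python
# Sketch: estimating the first quantization step of a double-quantized signal.
import numpy as np

def double_quantize(coeffs: np.ndarray, q1: int, q2: int) -> np.ndarray:
    """Quantize with step q1, dequantize, then requantize with step q2."""
    return np.round(np.round(coeffs / q1) * q1 / q2).astype(np.int64)

def reachable_bins(q1: int, q2: int, kmax: int = 200) -> np.ndarray:
    """Bins a first step q1 can produce after requantization with q2:
    round(k * q1 / q2) over integer multiples k of the first step."""
    k = np.arange(-kmax, kmax + 1)
    return np.unique(np.round(k * q1 / q2).astype(np.int64))

def estimate_q1(observed: np.ndarray, q2: int, max_q1: int = 30) -> int:
    """Largest candidate q1 > q2 whose reachable bins cover the observed bins."""
    bins = np.unique(observed)
    covering = [q1 for q1 in range(q2 + 1, max_q1 + 1)
                if np.isin(bins, reachable_bins(q1, q2)).mean() > 0.99]
    return max(covering, default=q2)

# Usage: simulate coefficients first compressed with q1 = 12, then with q2 = 5.
rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=20.0, size=20000)
observed = double_quantize(coeffs, q1=12, q2=5)
print(estimate_q1(observed, q2=5))   # ideally recovers 12
```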


Network Information

Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Image segmentation: 79.6K papers, 1.8M citations (84% related)
Feature (computer vision): 128.2K papers, 1.7M citations (84% related)
Image processing: 229.9K papers, 3.5M citations (83% related)
Robustness (computer science): 94.7K papers, 1.6M citations (81% related)
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2022    8
2021    354
2020    283
2019    294
2018    259
2017    295