Topic
JPEG
About: JPEG is a research topic. Over its lifetime, 9,980 publications have been published within this topic, receiving 199,206 citations. The topic is also known as: continuous-tone still image encoding & continuous-tone still image decoding.
Papers
01 Jul 1999
TL;DR: The authors propose a fragile watermarking approach that embeds a watermark in the discrete wavelet domain of the image by quantizing the corresponding coefficients, which allows the user to make application-dependent decisions about whether an image that has, for instance, been JPEG compressed still has credibility.
Abstract: In this paper, we consider the problem of digital watermarking to ensure the credibility of multimedia. We specifically address the problem of fragile digital watermarking for the tamper proofing of still images. Applications of our problem include authentication for courtroom evidence, insurance claims, and journalistic photography. We present a novel fragile watermarking approach which embeds a watermark in the discrete wavelet domain of the image by quantizing the corresponding coefficients. Tamper detection is possible in localized spatial and frequency regions. Unlike previously proposed techniques, this novel approach provides information on specific frequencies of the image that have been modified. This allows the user to make application-dependent decisions concerning whether an image, which is JPEG compressed for instance, still has credibility. Analysis is provided to evaluate the performance of the technique to varying system parameters. In addition, we compare the performance of the proposed method to existing fragile watermarking techniques to demonstrate the success and potential of the method for practical multimedia tamper proofing and authentication.
554 citations
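The coefficient-quantization embedding described in the abstract above can be sketched with a quantization-index-modulation (QIM) style scheme: each watermark bit selects one of two interleaved quantization lattices. The step size `Q`, the use of plain floats in place of real wavelet coefficients, and the two-lattice construction are illustrative assumptions, not the paper's exact algorithm.

```python
# QIM-style sketch of embedding one bit per wavelet coefficient.
# Q is an assumed step size; real schemes tune it per subband.
Q = 8.0

def embed_bit(c, bit, q=Q):
    """Snap coefficient c onto the lattice of multiples of q (bit=0)
    or the lattice shifted by q/2 (bit=1)."""
    d = 0.0 if bit == 0 else q / 2
    return q * round((c - d) / q) + d

def extract_bit(c, q=Q):
    """Return the bit of the nearer lattice."""
    r = c % q
    d0 = min(r, q - r)        # distance to multiples of q
    d1 = abs(r - q / 2)       # distance to the shifted lattice
    return 0 if d0 <= d1 else 1

coeffs = [12.3, -7.8, 0.4, 55.1]
bits = [1, 0, 1, 1]
marked = [embed_bit(c, b) for c, b in zip(coeffs, bits)]
print([extract_bit(c) for c in marked])  # recovers the embedded bits
```

Because each bit only survives perturbations smaller than `Q/4`, heavier tampering flips the extracted bit; this is the fragile-watermark idea: mild processing (such as light JPEG compression) can be tolerated by choosing `Q` appropriately, while substantive edits are flagged in the affected region.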
TL;DR: Although the JPEG 2000 standard only specifies the decoder and the codestream syntax, the discussion spans both encoder and decoder issues to provide a better understanding of the standard in various applications.
Abstract: In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG 2000, has resulted in a comprehensive standard (ISO/IEC 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2–6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG 2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG 2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications.
528 citations
01 Jan 1995
TL;DR: A method using a JPEG-model-based, frequency-hopped, randomly sequenced pulse position modulated code (RSPPMC) is described, which makes embedded labels robust against several damaging possibilities such as lossy data compression, low-pass filtering, and/or color space conversion.
Abstract: This paper first presents a "hidden label" approach for identifying the ownership and distribution of multimedia information (image or video data) in a digital networked environment. Then it discusses criteria and difficulties in implementing the approach. Finally, a method using a JPEG-model-based, frequency-hopped, randomly sequenced pulse position modulated code (RSPPMC) is described. This method makes embedded labels robust against several damaging possibilities such as lossy data compression, low-pass filtering, and/or color space conversion.
528 citations
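The key-driven, randomly sequenced idea above can be sketched as a secret key seeding a PRNG that picks which mid-frequency DCT positions carry the label bits, so the embedding locations "hop" unpredictably. The position pool, the additive `+/- delta` modulation, and the non-blind extraction below (which compares against the cover, unlike a practical blind decoder) are illustrative assumptions, not the exact RSPPMC construction.

```python
import random

# Assumed pool of mid-frequency positions in an 8x8 DCT block;
# these survive moderate compression better than high frequencies.
MID_FREQ = [(1, 2), (2, 1), (2, 2), (1, 3), (3, 1), (2, 3), (3, 2)]

def pick_positions(key, n_bits):
    """Key-seeded PRNG selects the embedding positions ("hopping")."""
    rng = random.Random(key)
    return rng.sample(MID_FREQ, n_bits)

def embed(block, key, bits, delta=4.0):
    """Nudge each selected DCT coefficient up (bit=1) or down (bit=0).
    block is a sparse {position: coefficient} dict."""
    marked = dict(block)
    for pos, bit in zip(pick_positions(key, len(bits)), bits):
        marked[pos] = block.get(pos, 0.0) + (delta if bit else -delta)
    return marked

def extract(block, cover_block, key, n_bits):
    """Non-blind read-out: compare marked vs. cover coefficients."""
    return [1 if block[pos] > cover_block.get(pos, 0.0) else 0
            for pos in pick_positions(key, n_bits)]

marked = embed({}, "secret-key", [1, 0, 1])
print(extract(marked, {}, "secret-key", 3))  # recovers the label bits
```

Only a holder of the key can regenerate the position sequence, which is what makes the label hard to locate and strip without it.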
23 May 2004
TL;DR: A feature-based steganalytic method for JPEG images is proposed, where the features are calculated as an L1 norm of the difference between a specific macroscopic functional calculated from the stego image and the same functional obtained from a decompressed, cropped, and recompressed stego image.
Abstract: In this paper, we introduce a new feature-based steganalytic method for JPEG images and use it as a benchmark for comparing JPEG steganographic algorithms and evaluating their embedding mechanisms. The detection method is a linear classifier trained on feature vectors corresponding to cover and stego images. In contrast to previous blind approaches, the features are calculated as an L1 norm of the difference between a specific macroscopic functional calculated from the stego image and the same functional obtained from a decompressed, cropped, and recompressed stego image. The functionals are built from marginal and joint statistics of DCT coefficients. Because the features are calculated directly from DCT coefficients, conclusions can be drawn about the impact of embedding modifications on detectability. Three different steganographic paradigms are tested and compared. Experimental results reveal new facts about current steganographic methods for JPEGs and new design principles for more secure JPEG steganography.
508 citations
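The calibration idea behind these features can be sketched as: compute a macroscopic functional on the stego image and on the cropped-and-recompressed estimate of the cover, then take the L1 norm of the difference. Here the functional is a global histogram of quantized DCT coefficients, an illustrative choice; the helper names and the bin range are assumptions for illustration, not the paper's full feature set.

```python
from collections import Counter

def histogram(coeffs, bins=range(-5, 6)):
    """Normalized histogram of quantized DCT coefficients over
    an assumed bin range of [-5, 5]."""
    counts = Counter(coeffs)
    total = len(coeffs) or 1
    return [counts.get(b, 0) / total for b in bins]

def calibration_feature(stego_coeffs, calibrated_coeffs):
    """L1 norm between the stego histogram and the histogram of the
    decompressed-cropped-recompressed ("calibrated") image, which
    approximates the cover statistics."""
    h1 = histogram(stego_coeffs)
    h2 = histogram(calibrated_coeffs)
    return sum(abs(a - b) for a, b in zip(h1, h2))

# Toy data: embedding typically inflates the +/-1 coefficient counts.
stego = [0] * 80 + [1] * 15 + [-1] * 5
calibrated = [0] * 90 + [1] * 6 + [-1] * 4
print(calibration_feature(stego, calibrated))  # > 0 signals embedding
```

The point of calibration is that cropping by a few pixels and recompressing largely erases the embedding while preserving the cover's macroscopic statistics, so the difference isolates the stego signal.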
TL;DR: An image compression method consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation is described, with the entire model jointly optimized for rate-distortion performance over a database of training images.
Abstract: We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions, the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.
497 citations