
Showing papers on "JPEG published in 2002"


Proceedings ArticleDOI
10 Dec 2002
TL;DR: It is shown that Peak Signal-to-Noise Ratio (PSNR), which requires the reference images, is a poor indicator of subjective quality, and that tuning an NR measurement model towards PSNR is therefore not an appropriate approach to designing NR quality metrics.
Abstract: Human observers can easily assess the quality of a distorted image without examining the original image as a reference. By contrast, designing objective No-Reference (NR) quality measurement algorithms is a very difficult task. Currently, NR quality assessment is feasible only when prior knowledge about the types of image distortion is available. This research aims to develop NR quality measurement algorithms for JPEG compressed images. First, we established a JPEG image database and subjective experiments were conducted on the database. We show that Peak Signal-to-Noise Ratio (PSNR), which requires the reference images, is a poor indicator of subjective quality. Therefore, tuning an NR measurement model towards PSNR is not an appropriate approach in designing NR quality metrics. Furthermore, we propose a computational and memory efficient NR quality assessment model for JPEG images. Subjective test results are used to train the model, which achieves good quality prediction performance.
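
As a point of reference for the full-reference metric the paper argues against, the sketch below computes PSNR in its standard textbook form (this is not the paper's no-reference model; the toy images are synthetic):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: compare an image with a noisy copy of itself.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
dist = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dist):.2f} dB")
```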

913 citations


Journal ArticleDOI
TL;DR: This paper introduces a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted.
Abstract: One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.

702 citations


01 Jan 2002
TL;DR: The JPEG2000 standard as discussed by the authors is an International Standard (ISO 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000.
Abstract: In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG2000, has resulted in a comprehensive standard (ISO 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2–6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications.

664 citations


Journal ArticleDOI
TL;DR: Although the JPEG 2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications.
Abstract: In 1996, the JPEG committee began to investigate possibilities for a new still image compression standard to serve current and future applications. This initiative, which was named JPEG 2000, has resulted in a comprehensive standard (ISO 15444 | ITU-T Recommendation T.800) that is being issued in six parts. Part 1, in the same vein as the JPEG baseline system, is aimed at minimal complexity and maximal interchange and was issued as an International Standard at the end of 2000. Parts 2–6 define extensions to both the compression technology and the file format and are currently in various stages of development. In this paper, a technical description of Part 1 of the JPEG 2000 standard is provided, and the rationale behind the selected technologies is explained. Although the JPEG 2000 standard only specifies the decoder and the codestream syntax, the discussion will span both encoder and decoder issues to provide a better understanding of the standard in various applications.

528 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors classify and review current stego-detection algorithms that can be used to trace popular steganographic products and present some new results regarding their previously proposed detection of LSB embedding using sensitive dual statistics.
Abstract: Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.

369 citations


Journal ArticleDOI
01 Mar 2002
TL;DR: A novel steganographic method based on the Joint Photographic Experts Group (JPEG) standard that has a larger message capacity than Jpeg-Jsteg, while the quality of the stego-images remains acceptable.
Abstract: In this paper, a novel steganographic method based on the Joint Photographic Experts Group (JPEG) standard is proposed. The proposed method first modifies the quantization table. Next, the secret message is hidden in the cover-image by modifying the middle-frequency quantized DCT coefficients. Finally, a JPEG stego-image is generated. JPEG is a standard image format widely used on the Internet, so a stego-image in JPEG format is unlikely to attract suspicion. We compare our method with the JPEG hiding tool Jpeg-Jsteg. The experimental results show that the proposed method has a larger message capacity than Jpeg-Jsteg and that the quality of the stego-images is acceptable. In addition, our method offers the same level of security as Jpeg-Jsteg.
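
To make the general idea concrete, here is a minimal sketch of hiding bits in the least significant bits of mid-frequency quantized DCT coefficients of an 8×8 block. The uniform quantization step and the chosen coefficient positions are illustrative assumptions; the paper instead modifies the standard JPEG quantization table, and its exact embedding rule may differ:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative uniform quantization step and mid-frequency coefficient positions;
# the paper instead works with a modified JPEG quantization table.
Q_STEP = 16
MID_FREQ = [(1, 2), (2, 1), (2, 2), (1, 3), (3, 1), (2, 3), (3, 2), (3, 3)]

def embed_block(block, bits):
    """Quantize the 8x8 DCT of a block and hide bits in the LSBs of mid-frequency
    coefficients. Returns the quantized coefficients (what a JPEG file would store)."""
    coeffs = dctn(block.astype(np.float64) - 128.0, norm="ortho")
    q = np.round(coeffs / Q_STEP).astype(int)
    for (u, v), bit in zip(MID_FREQ, bits):
        q[u, v] = (q[u, v] & ~1) | bit          # replace the least significant bit
    return q

def extract_block(q, n_bits):
    """Read the hidden bits back from the stored quantized coefficients."""
    return [q[u, v] & 1 for (u, v) in MID_FREQ[:n_bits]]

def decode_block(q):
    """Reconstruct the stego pixel block, as a JPEG decoder would."""
    recon = idctn(q * Q_STEP, norm="ortho") + 128.0
    return np.clip(np.round(recon), 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego_coeffs = embed_block(cover, message)
print(extract_block(stego_coeffs, len(message)))   # -> the embedded bits
```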

366 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy.
Abstract: Lossless data embedding has the property that the distortion due to embedding can be completely removed from the watermarked image without accessing any side channel. This can be a very important property whenever serious concerns over the image quality and artifacts visibility arise, such as for medical images, due to legal reasons, for military images or images used as evidence in court that may be viewed after enhancement and zooming. We formulate two general methodologies for lossless embedding that can be applied to images as well as any other digital objects, including video, audio, and other structures with redundancy. We use the general principles as guidelines for designing efficient, simple, and high-capacity lossless embedding methods for three most common image format paradigms - raw, uncompressed formats (BMP), lossy or transform formats (JPEG), and palette formats (GIF, PNG). We close the paper with examples of how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of non-trivial tasks, including elegant lossless authentication using fragile watermarks. Note on terminology: some authors coined the terms erasable, removable, reversible, invertible, and distortion-free for the same concept.

338 citations


Journal ArticleDOI
07 Nov 2002
TL;DR: A tutorial-style review of the new JPEG2000 standard is provided, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards.
Abstract: JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG) working under the auspices of the International Standards Organization. Although the new standard does offer superior compression performance to JPEG, JPEG2000 provides a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work, exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet.

275 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed method of measuring blocking artifacts is effective and stable across a wide variety of images, and the proposed blocking-artifact reduction method exhibits satisfactory performance as compared to other post-processing techniques.
Abstract: Blocking artifacts continue to be among the most serious defects that occur in images and video streams compressed to low bit rates using block discrete cosine transform (DCT)-based compression standards (e.g., JPEG, MPEG, and H.263). It is of interest to be able to numerically assess the degree of blocking artifact in a visual signal, for example, in order to objectively determine the efficacy of a compression method, or to discover the quality of video content being delivered by a web server. We propose new methods for efficiently assessing, and subsequently reducing, the severity of blocking artifacts in compressed image bitstreams. The method is blind, and operates only in the DCT domain. Hence, it can be applied to unknown visual signals, and it is efficient since the signal need not be compressed or decompressed. In the algorithm, blocking artifacts are modeled as 2-D step functions. A fast DCT-domain algorithm extracts all parameters needed to detect the presence of, and estimate the amplitude of blocking artifacts, by exploiting several properties of the human vision system. Using the estimate of blockiness, a novel DCT-domain method is then developed which adaptively reduces detected blocking artifacts. Our experimental results show that the proposed method of measuring blocking artifacts is effective and stable across a wide variety of images. Moreover, the proposed blocking-artifact reduction method exhibits satisfactory performance as compared to other post-processing techniques. The proposed technique has a low computational cost hence can be used for real-time image/video quality monitoring and control, especially in applications where it is desired that the image/video data be processed directly in the DCT-domain.
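
For intuition about what such a metric measures, the sketch below computes a crude spatial-domain blockiness score (the ratio of pixel differences across block boundaries to those inside blocks, horizontal boundaries only for brevity). This is a simpler stand-in, not the paper's blind DCT-domain algorithm:

```python
import numpy as np

def blockiness(img: np.ndarray, block: int = 8) -> float:
    """Crude spatial-domain blockiness score: average absolute luminance jump
    across block boundaries divided by the average jump inside blocks."""
    img = img.astype(np.float64)
    diff_h = np.abs(np.diff(img, axis=1))            # horizontal neighbour differences
    cols = np.arange(1, img.shape[1]) % block == 0   # differences that straddle a boundary
    boundary = diff_h[:, cols].mean()
    interior = diff_h[:, ~cols].mean()
    return boundary / (interior + 1e-12)             # >> 1 suggests visible blocking

# Toy check: an image made of constant 8x8 tiles plus mild noise is highly "blocky".
rng = np.random.default_rng(0)
tiles = rng.integers(0, 256, size=(8, 8))
img = np.kron(tiles, np.ones((8, 8))) + rng.normal(0, 1, (64, 64))
print(f"blockiness score: {blockiness(img):.1f}")
```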

250 citations


01 Sep 2002
TL;DR: This paper describes several techniques to encrypt uncompressed and compressed images, including a simple technique for JPEG images that is proven not to interfere with the decoding process, in the sense that it achieves a constant bit rate and that bitstreams remain compliant with the JPEG specifications, and a scheme called multiple selective encryption.
Abstract: This paper describes several techniques to encrypt uncompressed and compressed images. We first present the aims of image encryption. In the usual approach to encryption, all of the information is encrypted, but this is not mandatory. In this paper we follow the principles of a technique initially proposed by MAPLES et al. [1] and encrypt only a part of the image content in order to be able to visualize the encrypted images, although not with full precision. This concept leads to techniques that can simultaneously provide security functions and an overall visual check, which might be suitable in some applications such as, for example, searching through a shared image database. The principle of selective encryption is first applied to uncompressed images. Then we propose a simple technique applicable to the particular case of JPEG images. This technique is proven not to interfere with the decoding process in the sense that it achieves a constant bit rate and that bitstreams remain compliant with the JPEG specifications. Finally, we develop a scheme called multiple selective encryption, discuss its properties and conclude.
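
As a conceptual illustration only (not the paper's bitstream-compliant scheme, which operates on the entropy-coded JPEG data), the toy sketch below scrambles the AC detail of each 8×8 block with a key-seeded sign mask while leaving the DC term in the clear, so the "encrypted" image remains viewable as a coarse, degraded picture:

```python
import numpy as np
from scipy.fft import dctn, idctn

def selective_encrypt(img, key, block=8):
    """Toy selective encryption: keep each block's DC term readable and flip the signs
    of AC coefficients with a key-seeded pseudo-random mask. Regenerating the same
    masks with the key undoes the scrambling (up to pixel rounding); this is a concept
    demo, not a JPEG-compliant cipher."""
    h, w = img.shape
    rng = np.random.default_rng(key)
    out = np.empty_like(img, dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i+block, j:j+block].astype(np.float64), norm="ortho")
            mask = rng.choice([-1.0, 1.0], size=(block, block))
            mask[0, 0] = 1.0                     # DC coefficient left in the clear
            out[i:i+block, j:j+block] = idctn(c * mask, norm="ortho")
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
scrambled = selective_encrypt(image, key=1234)
print(scrambled.shape, scrambled.dtype)
```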

216 citations


Journal ArticleDOI
TL;DR: In this paper, the authors have evaluated and adopted a spectral transform called the discrete cosine transform (DCT), which is widely used for the compression of digital images and video, as in the JPEG and MPEG standards, but whose use for atmospheric spectral analysis has not yet received widespread attention.
Abstract: For most atmospheric fields, the larger part of the spatial variance is contained in the planetary scales. When examined over a limited area, these atmospheric fields exhibit an aperiodic structure, with large trends across the domain. Trying to use a standard (periodic) Fourier transform on regional domains results in the aliasing of largescale variance into shorter scales, thus destroying all usefulness of spectra at large wavenumbers. With the objective of solving this particular problem, the authors have evaluated and adopted a spectral transform called the discrete cosine transform (DCT). The DCT is a transform widely used for the compression of digital images and video, as in the JPEG and MPEG standards, but its use for atmospheric spectral analysis has not yet received widespread attention. First, it is shown how the DCT can be employed for producing power spectra from two-dimensional atmospheric fields and how this technique compares favorably with the more conventional technique that consists of detrending the data before applying a periodic Fourier transform. Second, it is shown that the DCT can be used advantageously for extracting information at specific spatial scales by spectrally filtering the atmospheric fields. Examples of applications using data produced by a regional climate model are displayed. In particular, it is demonstrated how the 2D-DCT spectral decomposition is successfully used for calculating kinetic energy spectra and for separating mesoscale features from large scales.
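
A minimal sketch of the idea, under simplifying assumptions (orthonormal 2-D DCT and simple radial binning of squared coefficients, rather than the paper's exact variance-spectrum definition):

```python
import numpy as np
from scipy.fft import dctn

def dct_power_spectrum(field, n_bins=32):
    """Bin the squared 2-D DCT coefficients of a limited-area field by normalized
    radial wavenumber, giving an aperiodicity-tolerant power spectrum."""
    ni, nj = field.shape
    coeffs = dctn(field - field.mean(), norm="ortho")
    ki = np.arange(ni)[:, None] / ni
    kj = np.arange(nj)[None, :] / nj
    k = np.sqrt(ki**2 + kj**2)                       # normalized radial wavenumber
    bins = np.linspace(0, k.max() + 1e-9, n_bins + 1)
    power = np.histogram(k, bins=bins, weights=coeffs**2)[0]
    return 0.5 * (bins[:-1] + bins[1:]), power

# Example: a synthetic field with a strong aperiodic trend plus small-scale noise.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128), indexing="ij")
field = 10 * x + rng.normal(0, 0.5, x.shape)
k, power = dct_power_spectrum(field)
print(power[:4])                                      # variance concentrated at low wavenumbers
```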

Book ChapterDOI
07 Oct 2002
TL;DR: In this paper, the authors present a steganalytic method that can reliably detect messages (and estimate their size) hidden in JPEG images using the steganographic algorithm F5.
Abstract: In this paper, we present a steganalytic method that can reliably detect messages (and estimate their size) hidden in JPEG images using the steganographic algorithm F5. The key element of the method is estimation of the cover-image histogram from the stego-image. This is done by decompressing the stego-image, cropping it by four pixels in both directions to remove the quantization in the frequency domain, and recompressing it using the same quality factor as the stego-image. The number of relative changes introduced by F5 is determined using the least square fit by comparing the estimated histograms of selected DCT coefficients with those of the stego-image. Experimental results indicate that relative modifications as small as 10% of the usable DCT coefficients can be reliably detected. The method is tested on a diverse set of test images that include both raw and processed images in the JPEG and BMP formats.
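
A rough sketch of the calibration step described above follows. It assumes the quality factor is known, uses Pillow's JPEG encoder (whose quantization tables may not match the original exactly), and approximates the coefficient histogram by re-transforming decoded pixels rather than parsing the JPEG file; the file paths and the quantization step are hypothetical:

```python
import numpy as np
from PIL import Image
from scipy.fft import dctn

def dct_histogram(gray, q_step=8, value_range=(-20, 20)):
    """Histogram of coarsely re-quantized blockwise DCT coefficients of a decoded image.
    Real steganalysis would read the quantized coefficients from the JPEG file itself."""
    h, w = (gray.shape[0] // 8) * 8, (gray.shape[1] // 8) * 8
    vals = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = dctn(gray[i:i+8, j:j+8].astype(np.float64) - 128.0, norm="ortho")
            vals.append(np.round(c / q_step).astype(int).ravel())
    edges = np.arange(value_range[0], value_range[1] + 2)
    return np.histogram(np.concatenate(vals), bins=edges)[0]

def calibrated_histogram(stego_path, quality):
    """Estimate the cover histogram: decompress, crop 4 pixels in both directions to
    break the 8x8 grid alignment, recompress at the (assumed known) quality factor."""
    img = Image.open(stego_path).convert("L")
    cropped = img.crop((4, 4, img.width, img.height))
    cropped.save("calibrated.jpg", quality=quality)
    return dct_histogram(np.array(Image.open("calibrated.jpg")))

# Usage sketch (paths and quality factor are assumptions):
# h_stego = dct_histogram(np.array(Image.open("stego.jpg").convert("L")))
# h_cover_est = calibrated_histogram("stego.jpg", quality=75)
# Comparing h_stego with h_cover_est gives an estimate of the number of F5 changes.
```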

Journal ArticleDOI
TL;DR: An image protection method for image intellectual property that uses the visual secret sharing scheme to construct two shares, one generated from the host image and the other arbitrarily generated by the owner.

Journal ArticleDOI
TL;DR: The results show that the choice of the “best” standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.
Abstract: JPEG 2000, the new ISO/ITU-T standard for still image coding, has recently reached the International Standard (IS) status. Other new standards have been recently introduced, namely JPEG-LS and MPEG-4 VTC. This paper provides a comparison of JPEG 2000 with JPEG-LS and MPEG-4 VTC, in addition to older but widely used solutions, such as JPEG and PNG, and well established algorithms, such as SPIHT. Lossless compression efficiency, fixed and progressive lossy rate-distortion performance, as well as complexity and robustness to transmission errors, are evaluated. Region of Interest coding is also discussed and its behavior evaluated. Finally, the set of provided functionalities of each standard is also evaluated. In addition, the principles behind each algorithm are briefly described. The results show that the choice of the “best” standard depends strongly on the application at hand, but that JPEG 2000 supports the widest set of features among the evaluated standards, while providing superior rate-distortion performance in most cases.

Journal ArticleDOI
TL;DR: A generalized analysis of spatial relationships between the DCTs of any block and its sub-blocks reveals that the DCT coefficients of any block can be directly obtained from the DCT coefficients of its sub-blocks and that the interblock relationship remains linear.
Abstract: At present, almost all digital images are stored and transferred in their compressed format, in which discrete cosine transform (DCT)-based compression remains one of the most important data compression techniques due to the efforts from JPEG. In order to save the computation and memory cost, it is desirable to have image processing operations such as feature extraction, image indexing, and pattern classifications implemented directly in the DCT domain. To this end, we present in this paper a generalized analysis of spatial relationships between the DCTs of any block and its sub-blocks. The results reveal that DCT coefficients of any block can be directly obtained from the DCT coefficients of its sub-blocks and that the interblock relationship remains linear. It is useful in extracting global features in the compressed domain for general image processing tasks such as those widely used in pyramid algorithms and image indexing. In addition, due to the fact that the corresponding coefficient matrix of the linear combination is sparse, the computational complexity of the proposed algorithms is significantly lower than that of the existing methods.
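
The linearity of the block/sub-block relationship is easy to verify numerically. With T_n the orthonormal DCT matrix, the 8x8 DCT X of a block satisfies X = A C A^T, where C stacks the four 4x4 sub-block DCTs and A = T8 * blkdiag(T4, T4)^T. The sketch below checks this identity on a random block (a numerical illustration, not the paper's full derivation):

```python
import numpy as np
from scipy.fft import dctn

def dct_matrix(n):
    """Orthonormal DCT-II matrix T of size n, so that DCT(x) = T @ x @ T.T for 2-D x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    t = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    t[0, :] /= np.sqrt(2.0)
    return t

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))                      # an arbitrary 8x8 block
T4, T8 = dct_matrix(4), dct_matrix(8)

# DCTs of the four 4x4 sub-blocks, assembled into one 8x8 array C.
C = np.block([[T4 @ x[:4, :4] @ T4.T, T4 @ x[:4, 4:] @ T4.T],
              [T4 @ x[4:, :4] @ T4.T, T4 @ x[4:, 4:] @ T4.T]])

# Linear conversion: X = A @ C @ A.T with A = T8 @ blkdiag(T4, T4).T
A = T8 @ np.block([[T4.T, np.zeros((4, 4))], [np.zeros((4, 4)), T4.T]])
X_from_subblocks = A @ C @ A.T
X_direct = dctn(x, norm="ortho")
print(np.allclose(X_from_subblocks, X_direct))   # True: the relation is exactly linear
```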

Journal ArticleDOI
TL;DR: This paper describes the technology commonly used in irreversible compression of medical images, which is routinely applied in teleradiology and often in Picture Archiving and Communications Systems, and the artifacts that this compression produces.
Abstract: The volume of data from medical imaging is growing at exponential rates, matching or exceeding the decline in the costs of digital data storage. While methods to reversibly compress image data do exist, current methods only achieve modest reductions in storage requirements. Irreversible compression can achieve substantially higher compression ratios without perceptible image degradation. These techniques are routinely applied in teleradiology, and often in Picture Archiving and Communications Systems. The practicing radiologist needs to understand how these compression techniques work and the nature of the degradation that occurs in order to optimize their medical practice. This paper describes the technology and artifacts commonly used in irreversible compression of medical images.

Journal ArticleDOI
TL;DR: A single-ended blockiness measure is proposed, i.e., one that uses only the coded image, based on detecting the low-amplitude edges that result from blocking and estimating the edge amplitudes.

Journal ArticleDOI
TL;DR: It is shown that the visual tool sets in JPEG 2000 are much richer than what is achievable in JPEG, where only spatially invariant frequency weighting can be exploited.
Abstract: The human visual system plays a key role in the final perceived quality of the compressed images. It is therefore desirable to allow system designers and users to take advantage of the current knowledge of visual perception and models in a compression system. In this paper, we review the various tools in JPEG 2000 that allow its users to exploit many properties of the human visual system such as spatial frequency sensitivity, color sensitivity, and the visual masking effects. We show that the visual tool sets in JPEG 2000 are much richer than what is achievable in JPEG, where only spatially invariant frequency weighting can be exploited. As a result, the visually optimized JPEG 2000 images can usually have much better visual quality than the visually optimized JPEG images at the same bit rates. Some visual comparisons between different visual optimization tools, as well as some visual comparisons between JPEG 2000 and JPEG, will be shown.

Proceedings ArticleDOI
TL;DR: Two new semi-fragile authentication techniques robust against lossy compression are proposed, using random bias and nonuniform quantization, to improve the performance of the methods proposed by Lin and Chang.
Abstract: Semi-fragile watermarking methods aim at detecting unacceptable image manipulations, while allowing acceptable manipulations such as lossy compression. In this paper, we propose new semi-fragile authentication watermarking techniques using random bias and non-uniform quantization, to improve the performance of the methods proposed by Lin and Chang. Specifically, the objective is to improve the performance tradeoff between the alteration detection sensitivity and the false detection rate.

Proceedings ArticleDOI
26 Aug 2002
TL;DR: A semi-fragile watermarking technique based on block singular value decomposition (SVD) that embeds a pseudo-randomly permuted binary watermark image into the largest singular value of each block through a quantization process.
Abstract: With the popularity of digital images applied in journals, hospitals and courtrooms, it has become easier to modify or forge information using widely available editing software. Digital watermarking techniques have been proposed as an effective solution to the problem of image authentication. This paper presents a semi-fragile watermarking technique based on block singular value decomposition (SVD). It embeds a pseudo-randomly permuted binary watermark image into the largest singular value of each block through a quantization process. The scheme can extract the watermark without the original image. Experimental results show that the proposed scheme can prevent malicious attacks but allow JPEG lossy compression, and can locate alterations made to the image.
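
A simplified sketch of the embedding idea (quantization index modulation on the largest singular value of a block) is shown below; the quantization step, block size, and the omission of the pseudo-random permutation are assumptions, not the paper's exact parameters:

```python
import numpy as np

DELTA = 24.0   # quantization step; an assumption -- larger steps tolerate stronger compression

def embed_bit(block, bit, delta=DELTA):
    """Move the largest singular value of a block to the centre of an 'even' or 'odd'
    quantization cell to encode one watermark bit (quantization index modulation)."""
    u, s, vt = np.linalg.svd(block.astype(np.float64))
    cell = np.floor(s[0] / delta)
    s[0] = (cell + (0.25 if bit == 0 else 0.75)) * delta   # centre of the chosen half-cell
    marked = u @ np.diag(s) @ vt
    return np.clip(np.round(marked), 0, 255).astype(np.uint8)

def extract_bit(block, delta=DELTA):
    s = np.linalg.svd(block.astype(np.float64), compute_uv=False)
    return 0 if (s[0] % delta) < delta / 2 else 1

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)
marked = embed_bit(cover, bit=1)
print(extract_bit(marked))   # -> 1 (the delta/4 margin gives some tolerance to mild distortion)
```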

Proceedings ArticleDOI
TL;DR: Building on the information-theoretic model for steganography proposed by Cachin, two different schemes are investigated that aim to maximize the amount of hidden information while preserving security against detection by unauthorized parties.
Abstract: Steganography is the art of communicating a message by embedding it into multimedia data. It is desired to maximize the amount of hidden information (embedding rate) while preserving security against detection by unauthorized parties. An appropriate information-theoretic model for steganography has been proposed by Cachin. A steganographic system is perfectly secure when the statistics of the cover data and the stego data are identical, which means that the relative entropy between the cover data and the stego data is zero. For image data, another constraint is that the stego data must look like a typical image. A tractable objective measure for this property is the (weighted) mean squared error between the cover image and the stego image (embedding distortion). Two different schemes are investigated. The first one is derived from a blind watermarking scheme. The second scheme is designed specifically for steganography such that perfect security is achieved, which means that the relative entropy between cover data and stego data tends to zero. In this case, a noiseless communication channel is assumed. Both schemes store the stego image in the popular JPEG format. The performance of the schemes is compared with respect to security, embedding distortion and embedding rate.
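
Cachin's security measure is the relative entropy between cover and stego statistics; the sketch below estimates it from value histograms (empirical histograms with simple smoothing are an assumption, used here only to illustrate the quantity being driven to zero):

```python
import numpy as np

def kl_divergence(cover, stego, bins=256, eps=1e-12):
    """Empirical relative entropy D(P_cover || P_stego) between value histograms,
    the quantity Cachin's model requires to be (near) zero for a secure stegosystem."""
    p, _ = np.histogram(cover, bins=bins, range=(0, 256), density=True)
    q, _ = np.histogram(stego, bins=bins, range=(0, 256), density=True)
    p, q = p + eps, q + eps                     # avoid log(0); a simple smoothing choice
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))    # in bits

# Toy usage: LSB replacement slightly perturbs the histogram of a random cover.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=100_000).astype(np.uint8)
stego = (cover & 0xFE) | rng.integers(0, 2, cover.size).astype(np.uint8)
print(f"D(cover || stego) = {kl_divergence(cover, stego):.6f} bits")
```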

Journal ArticleDOI
TL;DR: A novel approach to the problem of compressing the significant quantity of data required to represent integral 3D images is presented; the proposed algorithm improves the rate-distortion performance compared to baseline JPEG and a previously reported 3D-DCT compression scheme with respect to compression ratio and subjective and objective image quality.
Abstract: Integral imaging is employed as part of a three-dimensional imaging system, allowing the display of full colour images with continuous parallax within a wide viewing zone. A significant quantity of data is required to represent a captured integral 3D image with high resolution. A lossy compression scheme has been developed based on the use of a 3D-DCT, which makes possible efficient storage and transmission of such images, while maintaining all information necessary to produce a high quality 3D display. In this paper, a novel approach to the problem of compressing the significant quantity of data required to represent integral 3D images is presented. The algorithm is based on using a variable number of microlens images (or sub-images) in the computation of the 3D-DCT. It involves segmentation of the planar mean image formed by the mean values of the microlens images and it takes advantage of the high cross-correlation between the sub-images generated by the microlens array. The algorithm has been simulated on several integral 3D images. It was found that the proposed algorithm improves the rate-distortion performance compared to baseline JPEG and a previously reported 3D-DCT compression scheme with respect to compression ratio and subjective and objective image quality.

Journal ArticleDOI
TL;DR: A novel content access and extraction algorithm for compressed-image browsing and indexing is proposed, based on analyzing the relationship between the DCT coefficients of one block of 8×8 pixels and those of its four 4×4-pixel sub-blocks.

Journal ArticleDOI
TL;DR: A class of robust weighted median (WM) sharpening algorithms is developed that can prove useful in the enhancement of compressed or noisy images posted on the World Wide Web as well as in other applications where the underlying images are unavoidably acquired with noise.
Abstract: A class of robust weighted median (WM) sharpening algorithms is developed in this paper. Unlike traditional linear sharpening methods, weighted median sharpeners are shown to be less sensitive to background random noise or to image artifacts introduced by JPEG and other compression algorithms. These concepts are extended to include data dependent weights under the framework of permutation weighted medians leading to tunable sharpeners that, in essence, are insensitive to noise and compression artifacts. Permutation WM sharpeners are subsequently generalized to smoother/sharpener structures that can sharpen edges and image details while simultaneously filter out background random noise. A statistical analysis of the various algorithms is presented, theoretically validating the characteristics of the proposed sharpening structures. A number of experiments are shown for the sharpening of JPEG compressed images and sharpening of images with background film-grain noise. These algorithms can prove useful in the enhancement of compressed or noisy images posted on the World Wide Web (WWW) as well as in other applications where the underlying images are unavoidably acquired with noise.
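
A much-simplified sketch of the idea: an unsharp-masking-style sharpener whose highpass component comes from a median filter rather than a linear filter. This uses a plain (unweighted) median; the paper's weighted and permutation-weighted medians generalize it:

```python
import numpy as np
from scipy.ndimage import median_filter

def median_sharpen(img, strength=0.8, size=3):
    """Sharpen by adding back a robust highpass component (image minus its median-filtered
    version), which reacts less to impulsive noise and JPEG artifacts than a linear highpass."""
    img = img.astype(np.float64)
    smoothed = median_filter(img, size=size)
    detail = img - smoothed                     # robust estimate of edges and fine detail
    return np.clip(img + strength * detail, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
print(median_sharpen(image).shape)
```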

Book
01 Jan 2002
TL;DR: A textbook treatment of data compression covering statistical methods (Huffman and arithmetic coding), dictionary methods (LZ77, LZSS, LZ78, LZW), image compression (including JPEG and JPEG-LS), wavelet methods (including SPIHT), and video and audio compression (including the MPEG-1 audio layers).
Abstract: Contents: 1. Statistical Methods (Entropy; Variable-Size Codes; Decoding; Huffman Coding; Adaptive Huffman Coding; Facsimile Compression; Arithmetic Coding; Adaptive Arithmetic Coding). 2. Dictionary Methods (LZ77 (Sliding Window); LZSS; LZ78; LZW; Summary). 3. Image Compression (Introduction; Image Types; Approaches to Image Compression; Intuitive Methods; Image Transforms; Progressive Image Compression; JPEG; JPEG-LS). 4. Wavelet Methods (Averaging and Differencing; The Haar Transform; Subband Transforms; Filter Banks; Deriving the Filter Coefficients; The DWT; Examples; The Daubechies Wavelets; SPIHT). 5. Video Compression (Basic Principles; Suboptimal Search Methods). 6. Audio Compression (Sound; Digital Audio; The Human Auditory System; Conventional Methods; MPEG-1 Audio Layers). Joining the Data Compression Community. Appendix of Algorithms.

Proceedings Article
01 Jan 2002
TL;DR: The experimental results show that there are a number of parameters that control the effectiveness of ROI coding, the most important being the size and number of regions of interest, code-block size, and target bit rate.
Abstract: This paper details work undertaken on the application of JPEG 2000, the recent ISO/ITU-T image compression standard based on wavelet technology, to region of interest (ROI) coding. The paper briefly outlines the JPEG 2000 encoding algorithm and explains how the packet structure of the JPEG 2000 bit-stream enables an encoded image to be decoded in a variety of ways dependent upon the application. The three methods by which ROI coding can be achieved in JPEG 2000 (tiling; coefficient scaling; and codeblock selection) are then outlined and their relative performance empirically investigated. The experimental results show that there are a number of parameters that control the effectiveness of ROI coding, the most important being the size and number of regions of interest, code-block size, and target bit rate. Finally, some initial results are presented on the application of ROI coding to face images.

Journal ArticleDOI
S. Lawson, J. Zhu
TL;DR: This paper aims in tutorial form to introduce the DWT, to illustrate its link with filters and filterbanks and to illustrate how it may be used as part of an image coding algorithm.
Abstract: The demand for higher and higher quality images transmitted quickly over the Internet has led to a strong need to develop better algorithms for the filtering and coding of such images. The introduction of the JPEG2000 compression standard has meant that for the first time the discrete wavelet transform (DWT) is to be used for the decomposition and reconstruction of images together with an efficient coding scheme. The use of wavelets implies the use of subband coding in which the image is iteratively decomposed into high- and low-frequency bands. Thus there is a need for filter pairs at both the analysis and synthesis stages. This paper aims in tutorial form to introduce the DWT, to illustrate its link with filters and filterbanks and to illustrate how it may be used as part of an image coding algorithm. It concludes with a look at the qualitative differences between images coded using JPEG2000 and those coded using the existing JPEG standard.
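
As a minimal illustration of the subband decomposition described above, the sketch below performs one level of the 2-D Haar DWT by averaging and differencing (the Haar filters are the simplest case; JPEG2000 itself uses the 5/3 and 9/7 wavelet filters):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: average and difference pixel pairs along rows,
    then along columns, giving one approximation and three detail subbands."""
    x = img.astype(np.float64)
    # Rows: averages and differences of horizontal pixel pairs (scaled for orthonormality).
    lo = (x[:, ::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, ::2] - x[:, 1::2]) / np.sqrt(2)
    # Columns: repeat on both results.
    ll = (lo[::2, :] + lo[1::2, :]) / np.sqrt(2)   # approximation (low-low)
    lh = (lo[::2, :] - lo[1::2, :]) / np.sqrt(2)   # detail subbands
    hl = (hi[::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8))
ll, lh, hl, hh = haar_dwt2(image)
print(ll.shape, hh.shape)      # each subband is 4x4; iterate on ll for further levels
```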

Proceedings ArticleDOI
09 Dec 2002
TL;DR: A new JPEG-compliant solution under the proposed framework but with different ECC and watermarking methods is introduced, to demonstrate the practicability of the method.
Abstract: We have introduced a robust and secure digital signature solution for multimedia content authentication, by integrating content feature extraction, error correction coding (ECC), watermarking, and cryptographic hashing into a unified framework. We have successfully applied it to JPEG2000 as well as generic wavelet transform based applications. In this paper, we shall introduce a new JPEG-compliant solution under our proposed framework but with different ECC and watermarking methods. System security analysis as well as system robustness evaluation will also be given to further demonstrate the practicability of our method.

Journal ArticleDOI
TL;DR: An online preprocessing technique is proposed, which, although very simple, is able to provide significant improvements in the compression ratio of the images that it targets and shows a good robustness on other images.
Abstract: This article addresses the problem of improving the efficiency of lossless compression of images with sparse histograms. An online preprocessing technique is proposed, which, although very simple, is able to provide significant improvements in the compression ratio of the images that it targets and shows a good robustness on other images.
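
The abstract does not spell out the preprocessing, but a well-known transformation for sparse-histogram images is histogram packing, sketched below under that assumption (the authors' online technique may differ); the mapping table must accompany the packed image so decoding remains lossless:

```python
import numpy as np

def histogram_pack(img):
    """Remap the gray levels that actually occur onto 0..k-1, removing the gaps of a
    sparse histogram so a generic lossless coder sees a denser alphabet."""
    values = np.unique(img)                      # the used gray levels, sorted
    lut = np.zeros(int(values.max()) + 1, dtype=np.int64)
    lut[values] = np.arange(values.size)
    return lut[img], values                      # packed image + inverse mapping table

def histogram_unpack(packed, values):
    return values[packed]

# Toy usage: an image that uses only a handful of scattered gray levels.
rng = np.random.default_rng(0)
sparse = rng.choice([0, 17, 130, 131, 250], size=(16, 16))
packed, table = histogram_pack(sparse)
assert np.array_equal(histogram_unpack(packed, table), sparse)
print(packed.max(), table)                       # packed values now span 0..4
```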

Proceedings ArticleDOI
E. A. de Kock
02 Oct 2002
TL;DR: The aim of the method is to improve the design time and design quality by providing a structured approach for implementing process networks by facilitating the cost-driven and constraint-driven source code transformation of process networks into architecture-specific implementations in the form of communicating tasks.
Abstract: We present a system-level design and programming method for embedded multiprocessor systems. The aim of the method is to improve the design time and design quality by providing a structured approach for implementing process networks. We use process networks as re-usable and architecture-independent functional specifications. The method facilitates the cost-driven and constraint-driven source code transformation of process networks into architecture-specific implementations in the form of communicating tasks. We apply the method to implement a JPEG decoding process network in software on a set of MIPS processors. We apply three transformations to optimize synchronization rates and data transfers and to exploit data parallelism for this target architecture. We evaluate the impact of the source code transformations and the performance of the resulting implementations in terms of design time, execution time, and code size. The results show that process networks can be implemented quickly and efficiently on embedded multiprocessor systems.