Scalable image coding using reversible integer wavelet transforms
TL;DR: A new fully scalable image coder built on reversible integer wavelet transforms is presented; its lossless compression performance is comparable to JPEG-LS and its lossy performance is competitive with other efficient lossy coders.
Abstract: Reversible integer wavelet transforms allow both lossless and lossy decoding using a single bitstream. We present a new fully scalable image coder and investigate the lossless and lossy performance of these transforms in the proposed coder. The lossless compression performance of the presented method is comparable to JPEG-LS. The lossy performance is quite competitive with other efficient lossy compression methods.
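To make the lossless/lossy duality concrete, below is a minimal sketch of one reversible integer wavelet, the S-transform (integer Haar). It illustrates the transform family rather than the coder from the paper: every output is an integer and the rounding is undone exactly, so a full embedded bitstream decodes losslessly while a truncated one yields a lossy image.

```python
import numpy as np

def s_forward(x):
    """One level of the reversible S-transform (integer Haar) on an
    even-length integer signal: integer averages and differences."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = b - a                # detail band
    s = a + (d >> 1)         # approximation band: floor((a + b) / 2)
    return s, d

def s_inverse(s, d):
    """Exact inverse: the rounding in the forward step is undone bit for bit."""
    a = s - (d >> 1)
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = a, a + d
    return x

x = np.array([12, 15, 200, 198, 7, 9, 64, 64])
s, d = s_forward(x)
assert np.array_equal(s_inverse(s, d), x)   # lossless round trip
```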
Citations
TL;DR: Some of the most significant features of the standard are presented, such as region-of-interest coding, scalability, visual weighting, error resilience and file format aspects, and some comparative results are reported.
Abstract: One of the aims of the standardization committee has been the development of Part I, which could be used on a royalty- and fee-free basis. This is important for the standard to become widely accepted. The standardization process, which is coordinated by the JTC1/SC29/WG1 of the ISO/IEC, has already produced the international standard (IS) for Part I. In this article the structure of Part I of the JPEG 2000 standard is presented and performance comparisons with established standards are reported. This article is intended to serve as a tutorial for the JPEG 2000 standard. The main application areas and their requirements are given. The architecture of the standard follows with the description of the tiling, multicomponent transformations, wavelet transforms, quantization and entropy coding. Some of the most significant features of the standard are presented, such as region-of-interest coding, scalability, visual weighting, error resilience and file format aspects. Finally, some comparative results are reported and the future parts of the standard are discussed.
1,842 citations
Additional excerpts
...8) [3], [45], [62], [64], [65]....
Journal Article
TL;DR: The aim of this paper is to propose a modified high-capacity image steganography technique that depends on the wavelet transform, with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security.
Abstract: Steganography is the art and science of concealing information in unremarkable cover media so as not to arouse an eavesdropper's suspicion. It is an application under the information security field. Being classified under information security, steganography will be characterized by having a set of measures that rely on strengths and counter measures (attacks) that are driven by weaknesses and vulnerabilities. Today, computer and network technologies provide easy-to-use communication channels for steganography. The aim of this paper is to propose a modified high-capacity image steganography technique that depends on wavelet transform with acceptable levels of imperceptibility and distortion in the cover image and high level of overall security.
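The paper's exact embedding procedure is not given here, so the following is only a hedged sketch of the general idea: hide message bits in the least significant bits of detail coefficients of a reversible integer wavelet, so that extraction is exact. All function names are illustrative, not from the cited work.

```python
import numpy as np

def haar_fwd(x):
    """Reversible integer Haar: approximation and detail bands."""
    a, b = x[0::2], x[1::2]
    d = b - a
    return a + (d >> 1), d

def haar_inv(s, d):
    a = s - (d >> 1)
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = a, a + d
    return x

def embed(cover, bits):
    """Hide bits in the LSBs of the detail band; reversibility of the
    integer transform guarantees exact recovery (clipping of the stego
    signal to [0, 255] is omitted for brevity)."""
    s, d = haar_fwd(cover.astype(np.int64))
    d[:bits.size] = (d[:bits.size] & ~1) | bits
    return haar_inv(s, d)

def extract(stego, n):
    return haar_fwd(stego)[1][:n] & 1

cover = np.random.randint(0, 256, 64).astype(np.int64)
bits = np.random.randint(0, 2, 16)
assert np.array_equal(extract(embed(cover, bits), 16), bits)
```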
128 citations
Cites background or methods from "Scalable image coding using reversible integer wavelet transforms"
...The technique that is followed in this paper will use a secret key to encrypt the hidden message that will be encapsulated inside a cover medium....
...Although such a system might work for a time, once it is known, it is simple enough to examine all received media (e.g., images) passing by to check for hidden messages; ultimately, such a steganographic system fails....
01 Nov 2000
TL;DR: In this paper, the authors survey some of the recent advances in lossless compression of continuous-tone images and discuss the modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design.
Abstract: In this paper, we survey some of the recent advances in lossless compression of continuous-tone images. The modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design, are discussed in a unified manner. The algorithms are described and experimentally compared.
111 citations
TL;DR: A VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity, and the effects of finite-precision representation of the lifting coefficients on the compression performance are analyzed.
Abstract: This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First of all, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The obtained results lead to the IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
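As a small illustration of the finite-precision analysis (not the paper's architecture), one can quantize a lifting coefficient to a b-bit fractional mantissa and watch the approximation error shrink; alpha below is the first lifting coefficient of the well-known CDF 9/7 factorization.

```python
def fixed_point(c, mantissa_bits):
    """Round a lifting coefficient to fixed point with the given number
    of fractional (mantissa) bits."""
    scale = 1 << mantissa_bits
    return round(c * scale) / scale

alpha = -1.586134342   # first lifting step of the CDF 9/7 factorization
for b in (4, 6, 8, 10):
    q = fixed_point(alpha, b)
    print(f"{b:2d} bits: {q:+.6f}  abs error {abs(q - alpha):.6f}")
```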
97 citations
TL;DR: This paper proposes a novel scheme of scalable coding for encrypted images: an encoder quantizes the downsampled subimage and the Hadamard coefficients of each data set to reduce the data amount, and the principal content can be reconstructed at progressively higher resolution as more bitstreams are received.
Abstract: This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. After decomposing the encrypted data into a downsampled subimage and several data sets with a multiple-resolution construction, an encoder quantizes the subimage and the Hadamard coefficients of each data set to reduce the data amount. Then, the data of quantized subimage and coefficients are regarded as a set of bitstreams. At the receiver side, while a subimage is decrypted to provide the rough information of the original content, the quantized coefficients can be used to reconstruct the detailed content with an iteratively updating procedure. Because of the hierarchical coding mechanism, the principal original content with higher resolution can be reconstructed when more bitstreams are received.
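A minimal sketch of the encryption phase as described, modulo-256 masking with key-derived pseudorandom numbers; numpy's seeded generator stands in for whatever keyed stream the scheme actually uses, and the multi-resolution decomposition and Hadamard coding are omitted.

```python
import numpy as np

def mask_mod256(image, key):
    """Mask pixels by modulo-256 addition with pseudorandom bytes derived
    from a secret key (uint8 arithmetic wraps modulo 256 by itself)."""
    mask = np.random.default_rng(key).integers(0, 256, image.shape, dtype=np.uint8)
    return image + mask

def unmask_mod256(encrypted, key):
    mask = np.random.default_rng(key).integers(0, 256, encrypted.shape, dtype=np.uint8)
    return encrypted - mask

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
assert np.array_equal(unmask_mod256(mask_mod256(img, key=42), key=42), img)
```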
80 citations
References
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, and (4) universal lossless data compression which is achieved via adaptive arithmetic coding.
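A toy sketch of concept (3), successive-approximation quantization, stripped of the zerotree prediction and the arithmetic coder: each pass halves the threshold, flags newly significant coefficients, and refines the magnitude bits of previously significant ones, so bits are emitted in rough order of importance.

```python
import numpy as np

def successive_approximation(coeffs, passes=4):
    """Toy successive-approximation quantizer: a dominant pass flags
    coefficients that become significant at the current threshold, a
    refinement pass sends the next magnitude bit of already-significant
    ones. Zerotree symbols and entropy coding are omitted."""
    c = coeffs.ravel()
    T = 1 << int(np.log2(np.abs(c).max()))      # largest power of two <= max |c|
    significant = np.zeros(c.size, dtype=bool)
    stream = []
    for _ in range(passes):
        old = significant.copy()
        newly = ~old & (np.abs(c) >= T)         # dominant pass
        stream.append(("significant@%d" % T, np.flatnonzero(newly).tolist()))
        significant |= newly
        bits = ((np.abs(c[old]) & T) > 0).astype(int)   # refinement pass
        stream.append(("refine@%d" % T, bits.tolist()))
        T >>= 1
    return stream

for item in successive_approximation(np.array([63, -34, 49, 10, 7, 13, -12, 7])):
    print(item)
```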
5,559 citations
TL;DR: A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed and it is shown that the wavelet transform is particularly well adapted to progressive transmission.
Abstract: A scheme for image compression that takes into account psychovisual features both in the space and frequency domains is proposed. This method involves two steps. First, a wavelet transform is used in order to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and maintains constant the number of pixels required to describe the image. Second, according to Shannon's rate distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise shaping bit allocation procedure which assumes that details at high resolution are less visible to the human eye is proposed. In order to allow the receiver to recognize a picture as quickly as possible at minimum cost, a progressive transmission scheme is presented. It is shown that the wavelet transform is particularly well adapted to progressive transmission.
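The multiresolution codebook and noise-shaping bit allocation are beyond a short sketch, but the core vector-quantization step reduces to a nearest-codeword search; the toy below assumes flattened 2x2 coefficient blocks and a randomly initialized codebook.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Assign each coefficient vector to its nearest codeword (squared error)."""
    dist = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dist.argmin(axis=1)

# Toy setup: 2x2 wavelet-coefficient blocks, flattened to length-4 vectors.
# In a multiresolution codebook each subband would get its own codebook,
# sized by the bit allocation.
blocks = np.random.randn(10, 4)
codebook = np.random.randn(8, 4)     # 8 codewords -> 3 bits per block
idx = vq_encode(blocks, codebook)
reconstruction = codebook[idx]       # decoder side: table lookup
```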
3,925 citations
TL;DR: Motion compensation is applied to the analysis and design of a hybrid coding scheme, and the results show a factor-of-two gain at low bit rates.
Abstract: A new technique for estimating interframe displacement of small blocks with minimum mean square error is presented. An efficient algorithm for searching the direction of displacement has been described. The results of applying the technique to two sets of images are presented, which show an 8-10 dB improvement in interframe variance reduction due to motion compensation. The motion compensation is applied for analysis and design of a hybrid coding scheme and the results show a factor of two gain at low bit rates.
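The paper's efficient search examines only a subset of candidate displacements; for clarity, the sketch below does the exhaustive minimum-MSE version of block matching within a +/-7 pixel window.

```python
import numpy as np

def block_match(ref, cur, top, left, block=8, search=7):
    """Exhaustive block matching: find the displacement of one block of
    `cur` within +/-`search` pixels of `ref` that minimizes MSE."""
    target = cur[top:top + block, left:left + block].astype(np.float64)
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and y + block <= ref.shape[0] and 0 <= x and x + block <= ref.shape[1]:
                cand = ref[y:y + block, x:x + block].astype(np.float64)
                mse = np.mean((target - cand) ** 2)
                if mse < best:
                    best, best_mv = mse, (dy, dx)
    return best_mv, best

ref = np.random.randint(0, 256, (64, 64))
cur = np.roll(ref, (2, -3), axis=(0, 1))   # simulate a known displacement
print(block_match(ref, cur, 16, 16))       # expect motion vector (-2, 3)
```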
1,883 citations
TL;DR: Two approaches to building integer-to-integer wavelet transforms are presented; in the first, the precoder of Laroia et al., used in information transmission, is adapted and combined with expansion factors for the high- and low-pass bands in subband filtering.
Abstract: Invertible wavelet transforms that map integers to integers have important applications in lossless coding. In this paper we present two approaches to build integer to integer wavelet transforms. The first approach is to adapt the precoder of Laroia et al., which is used in information transmission; we combine it with expansion factors for the high and low pass band in subband filtering. The second approach builds upon the idea of factoring wavelet transforms into so-called lifting steps. This allows the construction of an integer version of every wavelet transform. Finally, we use these approaches in a lossless image coder and compare the results to those given in the literature.
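The second (lifting) approach is easy to make concrete. Below is the integer 5/3 (LeGall) wavelet built from two lifting steps with floor rounding, a standard example of the construction rather than code from the paper: each step adds a rounded integer quantity that the inverse subtracts unchanged, so reconstruction is bit-exact (periodic boundary handling via np.roll is an arbitrary choice here).

```python
import numpy as np

def lift53_forward(x):
    """Integer 5/3 (LeGall) wavelet via lifting with rounding."""
    x = x.astype(np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    d -= (s + np.roll(s, -1)) >> 1          # predict: d[n] -= floor((s[n]+s[n+1])/2)
    s += (np.roll(d, 1) + d + 2) >> 2       # update:  s[n] += floor((d[n-1]+d[n]+2)/4)
    return s, d

def lift53_inverse(s, d):
    s = s - ((np.roll(d, 1) + d + 2) >> 2)  # undo update
    d = d + ((s + np.roll(s, -1)) >> 1)     # undo predict
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.random.randint(0, 256, 16)
s, d = lift53_forward(x)
assert np.array_equal(lift53_inverse(s, d), x)   # bit-exact reconstruction
```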
1,269 citations
TL;DR: The CALIC obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature, and can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics.
Abstract: We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts.
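For flavor, here is a sketch of a gradient-adjusted predictor in the spirit of CALIC's GAP: local horizontal and vertical activity decide how to blend the west and north neighbours. The thresholds follow the published description but should be treated as illustrative constants, not a verified reimplementation.

```python
def gap_predict(W, N, WW, NN, NW, NE, NNE):
    """Gradient-adjusted prediction in the spirit of CALIC's GAP:
    estimate local gradients and blend the west/north neighbours."""
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal activity
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical activity
    if dv - dh > 80:       # sharp horizontal edge: copy west neighbour
        return W
    if dh - dv > 80:       # sharp vertical edge: copy north neighbour
        return N
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred

# Smooth neighbourhood: prediction stays close to the local mean.
print(gap_predict(W=100, N=102, WW=101, NN=103, NW=101, NE=104, NNE=103))
```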
1,099 citations