
Showing papers on "JPEG 2000" published in 2006


Journal ArticleDOI
TL;DR: A new grayscale image quality measure that can be used as a graphical or a scalar measure to predict the distortion introduced by a wide range of noise sources based on singular value decomposition is presented.
Abstract: The important criteria used in subjective evaluation of distorted images include the amount of distortion, the type of distortion, and the distribution of error. An ideal image quality measure should, therefore, be able to mimic the human observer. We present a new grayscale image quality measure that can be used as a graphical or a scalar measure to predict the distortion introduced by a wide range of noise sources. Based on singular value decomposition, it reliably measures the distortion not only within a distortion type at different distortion levels, but also across different distortion types. The measure was applied to five test images (airplane, boat, Goldhill, Lena, and peppers) using six types of distortion (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening, and DC-shifting), each with five distortion levels. Its performance is compared with PSNR and two recent measures.
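A minimal numpy sketch of the measure's core idea, assuming 8x8 blocks and median-deviation pooling (illustrative choices, not necessarily the paper's exact definitions): the graphical measure is the per-block distance between the singular values of the reference and distorted blocks, and the scalar measure pools those distances.

```python
import numpy as np

def svd_distortion_map(ref, dist, bs=8):
    # Per-block Euclidean distance between singular value vectors.
    h, w = (s - s % bs for s in ref.shape)
    rows, cols = h // bs, w // bs
    dmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = ref[i*bs:(i+1)*bs, j*bs:(j+1)*bs].astype(float)
            b = dist[i*bs:(i+1)*bs, j*bs:(j+1)*bs].astype(float)
            sa = np.linalg.svd(a, compute_uv=False)
            sb = np.linalg.svd(b, compute_uv=False)
            dmap[i, j] = np.sqrt(np.sum((sa - sb) ** 2))
    return dmap                       # the "graphical" measure

def svd_scalar(ref, dist, bs=8):
    # Pool block distances by their deviation from the median (assumption).
    d = svd_distortion_map(ref, dist, bs).ravel()
    return np.abs(d - np.median(d)).mean()
```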

350 citations


Journal ArticleDOI
TL;DR: A randomized arithmetic coding paradigm is introduced, which achieves encryption by inserting some randomization in the arithmetic coding procedure, and unlike previous works on encryption by arithmetic coding, this is done at no expense in terms of coding efficiency.
Abstract: We propose a novel multimedia security framework based on a modification of the arithmetic coder, which is used by most international image and video coding standards as the entropy coding stage. In particular, we introduce a randomized arithmetic coding paradigm, which achieves encryption by inserting some randomization into the arithmetic coding procedure; notably, and unlike previous work on encryption by arithmetic coding, this is done at no expense in terms of coding efficiency. The proposed technique can be applied to any multimedia coder employing arithmetic coding; in this paper we describe an implementation tailored to the JPEG 2000 standard. The proposed approach turns out to be robust against attempts to estimate the image or discover the key, and allows very flexible protection procedures at the code-block level, enabling total and selective encryption as well as conditional access.
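The randomization idea can be illustrated with a toy binary arithmetic coder in which a key-seeded PRNG swaps the order of the two subintervals at every step, so only a decoder with the same key can track the intervals. The static symbol probability, exact rational arithmetic, and swap rule below are simplifications for clarity, not the paper's MQ-coder construction.

```python
from fractions import Fraction
import random

P0 = Fraction(3, 4)  # assumed static probability of symbol 0

def intervals(low, width, rng):
    # Key-driven swap: the PRNG decides which subinterval comes first.
    w0 = width * P0
    if rng.random() < 0.5:
        return {0: (low, w0), 1: (low + w0, width - w0)}
    return {0: (low + width - w0, w0), 1: (low, width - w0)}

def encode(bits, key):
    rng = random.Random(key)
    low, width = Fraction(0), Fraction(1)
    for b in bits:
        low, width = intervals(low, width, rng)[b]
    return low  # any number in [low, low + width) identifies the message

def decode(code, n, key):
    rng = random.Random(key)          # same key -> same swap sequence
    low, width = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        iv = intervals(low, width, rng)
        b = 0 if iv[0][0] <= code < iv[0][0] + iv[0][1] else 1
        low, width = iv[b]
        out.append(b)
    return out

msg = [0, 1, 1, 0, 0, 0, 1]
assert decode(encode(msg, key=42), len(msg), key=42) == msg
```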

230 citations


Journal ArticleDOI
TL;DR: A hybrid three-dimensional wavelet transform for spectral and spatial decorrelation in the framework of Part 2 of the JPEG 2000 standard is employed, which provides competitive performance with respect to state-of-the-art techniques.
Abstract: In this letter we propose a new technique for progressive coding of hyperspectral data. Specifically, we employ a hybrid three-dimensional wavelet transform for spectral and spatial decorrelation in the framework of Part 2 of the JPEG 2000 standard. Both onboard and on-the-ground compression are addressed. The resulting technique is compliant with the JPEG 2000 family of standards and provides competitive performance with respect to state-of-the-art techniques.

153 citations


Journal ArticleDOI
TL;DR: Two approaches are proposed that exploit a lattice structure in the discrete cosine transform (DCT) domain to solve the JPEG Compression History Estimation (CHEst) problem and provide robust JPEG CHEst performance in practice.
Abstract: We routinely encounter digital color images that were previously compressed using the Joint Photographic Experts Group (JPEG) standard. En route to the image's current representation, the previous JPEG compression's various settings, termed its JPEG compression history (CH), are often discarded after the JPEG decompression step. Given a JPEG-decompressed color image, this paper aims to estimate its lost JPEG CH. We observe that the previous JPEG compression's quantization step introduces a lattice structure in the discrete cosine transform (DCT) domain. This paper proposes two approaches that exploit this structure to solve the JPEG Compression History Estimation (CHEst) problem. First, we design a statistical dictionary-based CHEst algorithm that tests the various CHs in a dictionary and selects the maximum a posteriori estimate. Second, for cases where the DCT coefficients closely conform to a 3-D parallelepiped lattice, we design a blind lattice-based CHEst algorithm. The blind algorithm exploits the fact that the JPEG CH is encoded in the nearly orthogonal bases for the 3-D lattice and employs novel lattice algorithms and recent results on nearly orthogonal lattice bases to estimate the CH. Both algorithms provide robust JPEG CHEst performance in practice. Simulations demonstrate that JPEG CHEst can be useful in JPEG recompression; the estimated CH allows us to recompress a JPEG-decompressed image with minimal distortion (large signal-to-noise ratio) and simultaneously achieve a small file size.
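The lattice observation can be demonstrated on a single DCT frequency: after decompression, the coefficients cluster near integer multiples of the old quantization step, so a brute-force search can recover it. This is a hedged simplification of the paper's dictionary-based and lattice-based estimators; the hypothetical `estimate_quant_step` below prefers the largest step that fits, to avoid picking a divisor of the true step.

```python
import numpy as np

def estimate_quant_step(coeffs, max_q=64, tol=0.5):
    # Return the largest step q whose lattice fits the data; divisors of
    # the true step fit equally well, so we search from large to small.
    for q in range(max_q, 1, -1):
        err = np.mean(np.abs(coeffs - q * np.round(coeffs / q)))
        if err < tol:
            return q
    return 1

rng = np.random.default_rng(0)
true_q = 12
coeffs = true_q * rng.integers(-20, 21, 5000) + rng.normal(0, 0.4, 5000)
print(estimate_quant_step(coeffs))   # 12 on this synthetic lattice
```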

142 citations


Journal ArticleDOI
TL;DR: The work proposed in this paper provides an adaptive way of representing images as a sum of two-dimensional features by presenting a low bit-rate image coding method based on a matching pursuit (MP) expansion, over a dictionary built on anisotropic refinement and rotation of contour-like atoms.
Abstract: New breakthroughs in image coding possibly lie in signal decomposition through nonseparable basis functions that can efficiently capture the edge characteristics present in natural images. The work proposed in this paper provides an adaptive way of representing images as a sum of two-dimensional features. It presents a low bit-rate image coding method based on a matching pursuit (MP) expansion, over a dictionary built on anisotropic refinement and rotation of contour-like atoms. This method is shown to provide, at low bit rates, results comparable to the state of the art in image compression, represented here by JPEG2000 and SPIHT, with generally a better visual quality in the MP scheme. The coding artifacts are less annoying than the ringing introduced by wavelets at very low bit rate, due to the smoothing performed by the basis functions used in the MP algorithm. In addition to good compression performance at low bit rates, the new coder has the advantage of producing highly flexible streams. They can easily be decoded at any spatial resolution, different from the original image, and the bitstream can be truncated at any point to match diverse bandwidth requirements. The spatial adaptivity is shown to be more flexible and less complex than the transcoding operations generally applied to state-of-the-art codec bitstreams. Due to both its ability to capture the most important parts of multidimensional signals and its flexible stream structure, the image coder proposed in this paper represents an interesting solution for low to medium rate image coding in visual communication applications.
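A generic matching pursuit loop conveys the expansion the coder builds on; here an arbitrary unit-norm dictionary stands in for the paper's anisotropically refined, rotated contour-like atoms.

```python
import numpy as np

def matching_pursuit(signal, D, n_atoms):
    # D: (dim, n_dict) matrix with unit-norm columns.
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))    # best-matching atom
        atoms.append(k)
        coeffs.append(corr[k])
        residual -= corr[k] * D[:, k]       # peel it off
    return atoms, coeffs, residual

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 7] + 1.5 * D[:, 42]
atoms, coeffs, r = matching_pursuit(x, D, 5)
print(atoms[:2], np.linalg.norm(r))  # typically picks atoms 7 and 42 first
```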

141 citations


Journal ArticleDOI
01 Mar 2006
TL;DR: A systematic derivation of VLSI architectures and algorithms for efficient implementation of the lifting-based Discrete Wavelet Transform is provided, covering both 1-dimensional and 2-dimensional DWT.
Abstract: In this paper, we review recent developments in VLSI architectures and algorithms for efficient implementation of lifting based Discrete Wavelet Transform (DWT). The basic principle behind the lifting based scheme is to decompose the finite impulse response (FIR) filters in wavelet transform into a finite sequence of simple filtering steps. Lifting based DWT implementations have many advantages, and have recently been proposed for the JPEG2000 standard for image compression. Consequently, this has become an area of active research and several architectures have been proposed in recent years. In this paper, we provide a survey of these architectures for both 1-dimensional and 2-dimensional DWT. The architectures are representative of many design styles and range from highly parallel architectures to DSP-based architectures to folded architectures. We provide a systematic derivation of these architectures along with an analysis of their hardware and timing complexities.
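The predict/update factorization is easiest to see on the reversible 5/3 filter adopted by JPEG 2000 Part 1. The sketch below assumes an even-length signal and simple one-sample mirroring at the boundaries (the standard's full symmetric extension differs slightly).

```python
def fwd53(x):
    # Lazy split into even (s) and odd (d) samples, then lift in place.
    s, d = list(x[0::2]), list(x[1::2])
    for i in range(len(d)):                        # predict step
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] -= (s[i] + right) // 2
    for i in range(len(s)):                        # update step
        left = d[i - 1] if i > 0 else d[0]
        s[i] += (left + d[i] + 2) // 4
    return s, d                                    # lowpass, highpass

def inv53(s, d):
    s, d = list(s), list(d)
    for i in range(len(s)):                        # undo update
        left = d[i - 1] if i > 0 else d[0]
        s[i] -= (left + d[i] + 2) // 4
    for i in range(len(d)):                        # undo predict
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += (s[i] + right) // 2
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x

x = [57, 60, 58, 55, 50, 48, 49, 53]
s, d = fwd53(x)
assert inv53(s, d) == x                            # perfectly reversible
```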

126 citations


Journal ArticleDOI
TL;DR: A new semi-fragile lossless digital watermarking scheme based on integer wavelet transform is presented which takes special measures to prevent overflow/underflow and hence does not suffer from annoying salt-and-pepper noise.
Abstract: In this paper, a new semi-fragile lossless digital watermarking scheme based on integer wavelet transform is presented. The wavelet family applied is the 5/3 filter bank, which serves as the default transformation in the JPEG2000 standard for lossless image compression. As a result, the proposed scheme can be integrated into the JPEG2000 standard smoothly. Unlike the only existing semi-fragile lossless watermarking scheme, which uses modulo-256 addition, this method takes special measures to prevent overflow/underflow and hence does not suffer from annoying salt-and-pepper noise. The original cover image can be losslessly recovered if the stego-image has not been altered. Furthermore, the hidden data can be retrieved even after incidental alterations, including image compression, have been applied to the stego-image.

119 citations


Journal ArticleDOI
TL;DR: It is found that the performance of steganalysis techniques is affected by the JPEG quality factor, and JPEG recompression artifacts serve as a source of confusion for almost all steganalysis techniques.
Abstract: We investigate the performance of state-of-the-art universal steganalyzers proposed in the literature. These universal steganalyzers are tested against a number of well-known steganographic embedding techniques that operate in both the spatial and transform domains. Our experiments are performed using a large data set of JPEG images obtained by randomly crawling a set of publicly available websites. The image data set is categorized with respect to size, quality, and texture to determine their potential impact on steganalysis performance. To establish a comparative evaluation of techniques, undetectability results are obtained at various embedding rates. In addition to variation in cover image properties, our comparison also takes into consideration different message length definitions and computational complexity issues. Our results indicate that the performance of steganalysis techniques is affected by the JPEG quality factor, and JPEG recompression artifacts serve as a source of confusion for almost all steganalysis techniques.

113 citations


Patent
12 Jun 2006
TL;DR: In this article, a method of transmitting selected regions of interest of digital video data at selected resolutions was proposed, including: capturing digital video, compressing the digital video into a sequence of individual high resolution JPEG 2000 frames and simultaneously extracting from the compressed high resolution JPEG 2000 frames a lower resolution representation.
Abstract: A method of transmitting selected regions of interest of digital video data at selected resolutions, including: capturing digital video; compressing the digital video into a sequence of individual high resolution JPEG 2000 frames and simultaneously extracting from the compressed high resolution JPEG 2000 frames a lower resolution representation; storing the individual high resolution JPEG 2000 frames in a storage device; creating a video sequence of the lower resolution representation; transmitting the video sequence of the lower resolution representation to a user; creating one or more video sequences of selected regions of interest at selected resolutions greater than the resolution of the video sequence of the lower resolution representation; transmitting the one or more video sequences of selected regions of interest at selected resolutions; and repeating the prior two steps at incrementally increased resolutions until a desired resolution of the selected region of interest is reached according to a viewing objective.

102 citations


Journal ArticleDOI
TL;DR: This paper presents two JPEG steganographic methods using quantization index modulation (QIM) in the discrete cosine transform (DCT) domain that approximately preserve the histogram of quantized DCT coefficients, aiming at secure JPEG steganography against histogram-based attacks.
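Plain QIM embedding, without the histogram-preserving compensation the paper adds on top, looks like this: two quantizers offset by half a step carry one bit per coefficient.

```python
import numpy as np

def qim_embed(x, bit, delta=8.0):
    # Quantize x onto the lattice selected by the message bit.
    offset = bit * delta / 2.0
    return delta * np.round((x - offset) / delta) + offset

def qim_detect(y, delta=8.0):
    # Decode by choosing the nearer of the two lattices.
    d0 = np.abs(y - qim_embed(y, 0, delta))
    d1 = np.abs(y - qim_embed(y, 1, delta))
    return int(d1 < d0)

coeff = 13.7
marked = qim_embed(coeff, 1)        # -> 12.0, an odd multiple of delta/2
assert qim_detect(marked) == 1
```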

94 citations


01 Jan 2006
TL;DR: A new image retrieval scheme for JPEG formatted images is presented, which does not require decompressing the images but retrieves directly in the discrete cosine transform domain, so the computational complexity can be greatly reduced.
Abstract: Nowadays, a large number of images are compressed in JPEG (Joint Photographic Experts Group) format. Therefore, content-based image retrieval (CBIR) for JPEG images has attracted many people's attention, and a series of algorithms based directly on the discrete cosine transform (DCT) domain have been proposed. However, the existing methods are far from practical application. Thus, in this paper, a new image retrieval scheme for JPEG formatted images is presented. The color, spatial and frequency (texture) features based on the DCT domain are extracted for the later image retrieval. It does not require decompressing the images; retrieval is performed directly in the DCT domain. Thus, compared with spatial domain based retrieval methods for JPEG images, the computational complexity can be greatly reduced. In addition, this retrieval system is suitable for all color images with different sizes. Experimental results demonstrate the advantages of the proposed retrieval scheme.
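As an illustration of DCT-domain features, the sketch below builds a signature from the histogram of block DC coefficients. In a genuinely compressed-domain system these coefficients would come straight from the entropy-decoded JPEG stream rather than from a block DCT of the pixels; the helper names are hypothetical.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis as a matrix.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def dc_signature(img, nbins=32):
    # img: 2-D uint8 array; signature = histogram of block DC coefficients.
    D = dct_matrix()
    h, w = (s - s % 8 for s in img.shape)
    dcs = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            blk = img[i:i + 8, j:j + 8].astype(float) - 128.0
            dcs.append((D @ blk @ D.T)[0, 0])      # DC coefficient
    hist, _ = np.histogram(dcs, bins=nbins, range=(-1024, 1016), density=True)
    return hist

# Retrieval: rank database images by np.abs(sig_query - sig_db).sum()
```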

Journal ArticleDOI
TL;DR: This paper presents a novel, effective, and efficient characterization of wavelet subbands by bit-plane extractions, which can be extracted directly from the code-block code-stream, rather than from the de-quantized wavelet coefficients, making this method particularly adaptable for image retrieval in the compression domain such as JPEG2000 format images.
Abstract: This paper presents a novel, effective, and efficient characterization of wavelet subbands by bit-plane extractions. Each bit plane is associated with a probability that represents the frequency of 1-bit occurrence, and the concatenation of all the bit-plane probabilities forms our new image signature. Such a signature can be extracted directly from the code-block code-stream, rather than from the de-quantized wavelet coefficients, making our method particularly adaptable for image retrieval in the compression domain such as JPEG2000 format images. Our signatures have smaller storage requirement and lower computational complexity, and yet, experimental results on texture image retrieval show that our proposed signatures are much more cost effective than current state-of-the-art methods, including the generalized Gaussian density signatures and histogram signatures.
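A simplified version of the signature: per-bit-plane frequencies of 1-bits over a subband's coefficient magnitudes, compared with an L1 distance. Here the planes come from integer-rounded coefficients rather than from a JPEG2000 code-block stream, which is where the paper extracts them.

```python
import numpy as np

def bitplane_signature(coeffs, n_planes=8):
    # Probability of a 1-bit in each magnitude bit plane.
    mags = np.abs(np.rint(coeffs)).astype(np.int64).ravel()
    return np.array([np.mean((mags >> p) & 1) for p in range(n_planes)])

def signature_distance(s1, s2):
    return np.sum(np.abs(s1 - s2))   # simple L1 matching

rng = np.random.default_rng(3)
a = rng.laplace(scale=12.0, size=(64, 64))   # stand-in for a subband
b = rng.laplace(scale=3.0, size=(64, 64))
print(signature_distance(bitplane_signature(a), bitplane_signature(b)))
```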

Journal ArticleDOI
TL;DR: A novel perceptually lossless coder is presented for the compression of medical images, built on the JPEG 2000 coding framework and embedded with an advanced human vision model to identify and to remove visually insignificant/irrelevant information.
Abstract: A novel perceptually lossless coder is presented for the compression of medical images. Built on the JPEG 2000 coding framework, the heart of the proposed coder is a visual pruning function, embedded with an advanced human vision model to identify and to remove visually insignificant/irrelevant information. The proposed coder offers the advantages of simplicity and modularity with bit-stream compliance. Current results have shown superior compression ratio gains over that of its information lossless counterparts without any visible distortion. In addition, a case study consisting of 31 medical experts has shown that no perceivable difference of statistical significance exists between the original images and the images compressed by the proposed coder.

Journal ArticleDOI
TL;DR: A decoding scheme is proposed with two main characteristics: the complete scheme takes place in a field-programmable gate array without accessing any external memory, allowing integration in a secured system, and a customizable level of parallelization makes it possible to satisfy a broad range of constraints, depending on the signal resolution.
Abstract: The image compression standard JPEG 2000 proposes a large set of features that is useful for today's multimedia applications. Unfortunately, it is much more complex than older standards. Real-time applications, such as digital cinema, require a specific, secure, and scalable hardware implementation. In this paper, a decoding scheme is proposed with two main characteristics. First, the complete scheme takes place in a field-programmable gate array without accessing any external memory, allowing integration in a secured system. Second, a customizable level of parallelization makes it possible to satisfy a broad range of constraints, depending on the signal resolution. The resulting architecture is therefore ready to meet upcoming digital cinema specifications.

Journal ArticleDOI
TL;DR: A scheme that uses JPEG2000 and JPIP to transmit data in a multi-resolution and progressive fashion and a prioritization that enables the client to progressively visualize scene content from a compressed file is presented.
Abstract: One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, insofar as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

Proceedings ArticleDOI
TL;DR: The goal of the Secure JPEG is to allow the efficient integration and use of security tools enabling a variety of security services such as confidentiality, integrity verification, source authentication or conditional access.
Abstract: In this paper, we propose Secure JPEG, an open and flexible standardized framework to secure JPEG images. Its goal is to allow the efficient integration and use of security tools enabling a variety of security services, such as confidentiality, integrity verification, source authentication, or conditional access. In other words, Secure JPEG aims at accomplishing for JPEG what JPSEC enables for JPEG 2000. We describe in more detail three specific examples of security tools. The first one addresses integrity verification using a hash function to compute local digital signatures. The second one considers the use of encryption for confidentiality. Finally, the third describes a scrambling technique.
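The flavor of the first tool, integrity verification via local signatures, can be sketched as per-block hashes so that tampering is localized. A real deployment would sign the digests with a private key; the plain SHA-256 digests and the 16x16 block geometry below are assumptions for illustration.

```python
import hashlib

def block_digests(pixels, width, block=16):
    # pixels: bytes of an 8-bit grayscale image, row-major.
    rows = len(pixels) // width
    digests = {}
    for by in range(0, rows - rows % block, block):
        for bx in range(0, width - width % block, block):
            h = hashlib.sha256()
            for y in range(by, by + block):
                h.update(pixels[y * width + bx : y * width + bx + block])
            digests[(by, bx)] = h.hexdigest()
    return digests

def tampered_blocks(orig, received, width, block=16):
    # Compare digest maps; mismatching positions localize the tampering.
    d1 = block_digests(orig, width, block)
    d2 = block_digests(received, width, block)
    return [pos for pos in d1 if d1[pos] != d2[pos]]
```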

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results show that the proposed scheme can serve various selective encryption purposes, is computationally secure, and does not decrease the compressibility of the standard JPEG 2000 coding scheme.

Proceedings ArticleDOI
01 Jul 2006
TL;DR: This paper proposes a lossy compression scheme for hyperspectral data based on a new low-complexity version of the Karhunen-Loève transform, in which complexity and performance can be balanced in a scalable way, allowing one to choose the trade-off that best matches a specific application.
Abstract: Transform-based lossy compression has a huge potential for hyperspectral data reduction. In this paper we propose a lossy compression scheme for hyperspectral data based on a new low-complexity version of the Karhunen-Loève transform, in which complexity and performance can be balanced in a scalable way, allowing one to choose the trade-off that best matches a specific application. Moreover, we integrate this transform in the framework of Part 2 of the JPEG 2000 standard, taking advantage of the high coding efficiency of JPEG 2000 and exploiting the interoperability of an international standard. Hyperspectral imaging amounts to collecting the energy reflected or emitted by ground targets at a typically very high number of wavelengths, resulting in a data cube consisting of tens to hundreds of bands. These data have become increasingly popular, since they enable plenty of new applications, including detection and identification of surface and atmospheric constituents, analysis of soil type, agriculture and forest monitoring, environmental studies, and military surveillance. The data are usually acquired by a remote platform (a satellite or an aircraft), and then downlinked to a ground station. Due to the huge size of the datasets, compression is necessary to match the available transmission bandwidth. In the past, scientific data have been almost exclusively compressed by means of lossless methods, in order to preserve their full quality. However, more recently, there has been an increasing interest in their lossy compression. Many of these techniques are based on decorrelating transforms, in order to exploit spatial and inter-band (i.e., spectral) correlation, followed by a quantization stage and an entropy coder.
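The spectral KLT at the heart of the scheme is a PCA across bands. One simple way to cut its cost, assumed here purely for illustration, is to estimate the band covariance from a random subset of pixels; the paper's scalable complexity/performance trade-off is more elaborate.

```python
import numpy as np

def spectral_klt(cube, n_train=2000, seed=0):
    # cube: (bands, rows, cols) hyperspectral data.
    bands = cube.shape[0]
    X = cube.reshape(bands, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    idx = np.random.default_rng(seed).choice(X.shape[1], n_train, replace=False)
    cov = np.cov(X[:, idx] - mean)        # covariance from a pixel subset
    _, V = np.linalg.eigh(cov)
    V = V[:, ::-1]                        # strongest components first
    decorrelated = (V.T @ (X - mean)).reshape(cube.shape)
    return decorrelated, V, mean          # bands now spectrally decorrelated

cube = np.random.default_rng(1).normal(size=(32, 64, 64))
decorr, V, mean = spectral_klt(cube)
```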

Proceedings ArticleDOI
02 May 2006
TL;DR: Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing with a good level of security.
Abstract: In this paper, we address the problem of privacy in video surveillance. We propose an efficient solution based on transform-domain scrambling of regions of interest in a video sequence. More specifically, the sign of selected transform coefficients is flipped during encoding; we focus on the case of Motion JPEG 2000. Simulation results show that the technique can be successfully applied to conceal information in regions of interest in the scene while providing a good level of security. Furthermore, the scrambling is flexible and allows adjusting the amount of distortion introduced. This is achieved with a small impact on coding performance and negligible computational complexity increase. In the proposed video surveillance system, heterogeneous clients can remotely access the system through the Internet or 2G/3G mobile phone network. Thanks to the inherently scalable Motion JPEG 2000 codestream, the server is able to adapt the resolution and bandwidth of the delivered video depending on the usage environment of the client.
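The scrambling primitive itself is compact: a key-seeded PRNG chooses which coefficients in the region of interest get their sign flipped, and the same operation with the same key undoes it. A minimal sketch, with the flip probability as an assumed parameter:

```python
import numpy as np

def scramble_signs(coeffs, key, flip_prob=0.5):
    # Involutive: applying the same key twice restores the original.
    rng = np.random.default_rng(key)
    flips = np.where(rng.random(coeffs.shape) < flip_prob, -1.0, 1.0)
    return coeffs * flips

roi = np.random.default_rng(7).normal(size=(16, 16))
protected = scramble_signs(roi, key=1234)
restored = scramble_signs(protected, key=1234)
assert np.allclose(restored, roi)
```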

Journal ArticleDOI
TL;DR: This work examines the existing MQ arithmetic coder architectures and develops novel techniques capable of absorbing the high symbol rate from high performance bit-plane coders, as well as providing flexible design choices.
Abstract: JPEG2000 is a recently standardized image compression algorithm. The heart of this algorithm is the coding scheme known as embedded block coding with optimal truncation (EBCOT), which accounts for the majority of the processing time of the compression algorithm. The EBCOT scheme consists of a bit-plane coder coupled to an MQ arithmetic coder. Recent bit-plane coder architectures are capable of producing symbols at a higher rate than the existing MQ arithmetic coders can absorb. Thus, there is a requirement for a high-throughput MQ arithmetic coder. We examine the existing MQ arithmetic coder architectures and develop novel techniques capable of absorbing the high symbol rate from high-performance bit-plane coders, as well as providing flexible design choices.

Journal ArticleDOI
TL;DR: The results suggest that the addition of the nonlinearities to a channelized Hotelling model may add complexity to the model observers without great impact on rank order evaluation of image processing and/or acquisition algorithms.
Abstract: Linear model observers based on statistical decision theory have been used successfully to predict human visual detection of aperiodic signals in a variety of noisy backgrounds. However, some models have included nonlinearities such as a transducer or nonlinear decision rules to handle intrinsic uncertainty. In addition, masking models used to predict human visual detection of signals superimposed on one of two identical backgrounds (masks) usually include a number of nonlinear components in the channels that reflect properties of the firing of cells in the primary visual cortex (V1). The effect of these nonlinearities on the ability of linear model observers to predict human signal detection in real patient structured backgrounds is unknown. We evaluate the effect of including different nonlinear human visual system components into a linear channelized Hotelling observer (CHO) using a signal known exactly but variable (SKEV) task. In particular, we evaluate whether the rank order of two compression algorithms (JPEG versus JPEG 2000) and two compression encoder settings (JPEG 2000 default versus JPEG 2000 optimized) based on model observer signal detection performance in X-ray coronary angiograms is altered by inclusion of nonlinear components. The results show: 1) the simpler linear CHO model observer outperforms the CHO model with the nonlinear components; 2) the rank order of model observer performance for the compression algorithms/parameters does not change when the nonlinear components are included. For the present task and images, the results suggest that the addition of the nonlinearities to a channelized Hotelling model may add complexity to the model observers without great impact on rank order evaluation of image processing and/or acquisition algorithms.

Journal ArticleDOI
TL;DR: The aim of this work is to show that substantial complexity reduction with excellent performance can be achieved through the derivation of the 9/7 tap values.
Abstract: This brief proposes a novel low-complexity, efficient VLSI architecture of the 9/7 wavelet filters for image compression applications. The performance of a hardware implementation of the 9/7 filter bank depends on the accuracy of the coefficient representation. The aim of this work is to show that substantial complexity reduction with excellent performance can be achieved through the derivation of the 9/7 tap values.

Journal ArticleDOI
TL;DR: A new image compression algorithm is proposed based on the efficient construction of wavelet coefficient lower trees, which presents state-of-the-art compression performance, whereas its complexity is lower than the one presented in other wavelet coders, like SPIHT and JPEG 2000.
Abstract: In this paper, a new image compression algorithm is proposed based on the efficient construction of wavelet coefficient lower trees. The main contribution of the proposed lower-tree wavelet (LTW) encoder is the utilization of coefficient trees, not only as an efficient method of grouping coefficients, but also as a fast way of coding them. Thus, it presents state-of-the-art compression performance, whereas its complexity is lower than the one presented in other wavelet coders, like SPIHT and JPEG 2000. Fast execution is achieved by means of a simple two-pass coding and one-pass decoding algorithm. Moreover, its computation does not require additional lists or complex data structures, so there is no memory overhead. A formal description of the algorithm is provided, while reference software is also given. Numerical results show that our codec works faster than SPIHT and JPEG 2000 (up to three times faster than SPIHT and fifteen times faster than JPEG 2000), with similar coding efficiency.

Journal ArticleDOI
TL;DR: It is demonstrated that image degradation highly depends on image information, which indicates that the degree of image degradation cannot be guaranteed in CR but only in QF compression mode; CR is therefore not a measure of choice for expressing the degree of image degradation in medical image compression.
Abstract: The aim of the study was to demonstrate and critically discuss the influence of image information on compressibility and image degradation. The influence of image information on image compression was demonstrated on the axial computed tomography images of a head. The standard Joint Photographic Expert Group (JPEG) and JPEG 2000 compression methods were used in compression ratio (CR) and in quality factor (QF) compression modes. Image information was estimated by calculating image entropy, while the effects of image compression were evaluated quantitatively, by file size reduction and by local and global mean square error (MSE), and qualitatively, by visual perception of distortion in high and low contrast test patterns. In QF compression mode, a strong correlation between image entropy and file size was found for JPEG (r=0.87, p < 0.001) and JPEG 2000 (r=0.84, p < 0.001), while corresponding local MSE was constant (4.54) or nearly constant (2.36-2.37), respectively. For JPEG 2000 CR compression mode, CR was nearly constant (1:25), while local MSE varied considerably (2.26 and 10.09). The obtained qualitative and quantitative results clearly demonstrate that image degradation highly depends on image information, which indicates that the degree of image degradation cannot be guaranteed in CR but only in QF compression mode. CR is therefore not a measure of choice for expressing the degree of image degradation in medical image compression. Moreover, even when using QF compression modes, objective evaluation, and comparison of the compression methods within and between studies is often not possible due to the lack of standardization of compression quality scales.
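The two quantities the study correlates can be computed in a few lines: Shannon entropy of the 8-bit histogram as the image-information estimate, and mean square error as the degradation measure.

```python
import numpy as np

def entropy_bits(img):
    # Shannon entropy of an 8-bit grayscale image, in bits per pixel.
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist[hist > 0] / img.size
    return -np.sum(p * np.log2(p))

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

# Example: a uniform-noise image has entropy close to 8 bits/pixel.
img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
print(entropy_bits(img), mse(img, img))   # ~8.0, 0.0
```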

Journal ArticleDOI
TL;DR: A new framework of multitree dictionaries is developed, which includes some previously proposed dictionaries as special cases and shows how to efficiently find the best representation in a multitree dictionary using a recursive tree-pruning algorithm.
Abstract: We address the best basis problem - or, more generally, the best representation problem: Given a signal, a dictionary of representations, and an additive cost function, the aim is to select the representation from the dictionary which minimizes the cost for the given signal. We develop a new framework of multitree dictionaries, which includes some previously proposed dictionaries as special cases. We show how to efficiently find the best representation in a multitree dictionary using a recursive tree-pruning algorithm. We illustrate our framework through several examples, including a novel block image coder, which significantly outperforms both the standard JPEG and quadtree-based methods and is comparable to embedded coders such as JPEG2000 and SPIHT.
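The recursive tree-pruning idea can be sketched with a Haar split and an l1 cost: a node keeps its split only if the children's total cost beats its own. The paper's multitree dictionaries generalize exactly this kind of search; the cost function and transform below are illustrative choices, not the paper's.

```python
import numpy as np

def cost(x):
    return np.sum(np.abs(x))              # any additive cost works

def haar_split(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail
    return a, d

def best_tree(x, max_depth):
    # Bottom-up pruning: split only where it lowers the total cost.
    if max_depth == 0 or len(x) < 2:
        return cost(x), ("leaf", x)
    a, d = haar_split(x)
    ca, ta = best_tree(a, max_depth - 1)
    cd, td = best_tree(d, max_depth - 1)
    if ca + cd < cost(x):
        return ca + cd, ("split", ta, td)
    return cost(x), ("leaf", x)

x = np.repeat([4.0, -2.0, 7.0, 1.0], 16)  # piecewise constant: splits pay off
total, tree = best_tree(x, max_depth=4)
```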

Proceedings ArticleDOI
01 Dec 2006
TL;DR: A 4K SHD real-time video streaming system that can encode (JPEG 2000), transmit and display live images at up to 4096 × 2160 pixel resolution and 36-bit color makes it feasible to implement networked professional audio/video applications over long-distance IP networks even at SHD quality.
Abstract: This paper describes a 4K SHD (super high definition) real-time video streaming system that can encode (JPEG 2000), transmit and display live images at up to 4096 × 2160 pixel resolution and 36-bit color. The total bit rate of a 4K SHD video shown at 30 frames per second in 4:4:4 format is 9.5 Gbps. A JPEG 2000 parallel codec reduces this to 200-400 Mbps and allows live image distribution via common 1 Gbps links with the highest image quality and minimal delay. This system makes it feasible to implement networked professional audio/video applications over long-distance IP networks even at SHD quality.
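The quoted uncompressed rate checks out by direct arithmetic:

```python
# 4096 x 2160 pixels, 30 frames/s, 36 bits per pixel (12 bits per RGB
# component in 4:4:4 sampling).
bits_per_second = 4096 * 2160 * 30 * 36
print(bits_per_second / 1e9)   # ~9.56 Gbps, matching the paper's 9.5 Gbps
# At the codec's 200-400 Mbps output, that is roughly a 24x-48x reduction.
```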

Proceedings ArticleDOI
01 Oct 2006
TL;DR: A model for the electronic representation of bank checks based on the mixed raster content (MRC) model for compression of color and gray-scale images is proposed and a new watermarking technique to embed a digital signature into the check image for protection is proposed.
Abstract: The substitution of physical bank check exchange by electronic check image transfer brings agility, security and cost reduction to the clearing system. In this paper, we propose a model for the electronic representation of bank checks based on the mixed raster content (MRC) model for compression of color and gray-scale images. The binary image is sent first, and only if necessary are the other MRC image planes sent to reconstruct the color image of the check. Careful subjective evaluation helped us identify the best binarization technique, and our studies led us to use JPEG 2000 to compress the MRC image planes. Furthermore, we propose a new watermarking technique to embed a digital signature into the check image for protection.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper proposes a pre-processing and compression scheme that aims to enhance the compression efficiency of integral images, first transforming a still integral image into a pseudo video sequence consisting of sub-images, which is then compressed using an H.264 video encoder.
Abstract: The next evolutionary step in enhancing video communication fidelity is taken by adding scene depth. 3D video using integral imaging (II) is widely considered as the technique able to take this step. However, an increase in spatial resolution of several orders of magnitude over today's 2D video is required to provide sufficient depth fidelity, including motion parallax. In this paper we propose a pre-processing and compression scheme that aims to enhance the compression efficiency of integral images. We first transform a still integral image into a pseudo video sequence consisting of sub-images, which is then compressed using an H.264 video encoder. The improvement in compression efficiency of using this scheme is evaluated and presented. An average PSNR increase of 5.7 dB or more, compared to JPEG 2000, is observed on a set of reference images.
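One plausible reading of the sub-image transform (the exact ordering in the paper may differ): with a microlens pitch of p pixels, sampling every p-th pixel at a fixed offset yields one parallax view, and scanning all offsets yields the frames of the pseudo video sequence.

```python
import numpy as np

def to_pseudo_sequence(integral_img, pitch):
    # One frame per (u, v) offset within the microlens cell.
    frames = []
    for u in range(pitch):
        for v in range(pitch):
            frames.append(integral_img[u::pitch, v::pitch])
    return frames  # feed these to an H.264 encoder as consecutive frames

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
seq = to_pseudo_sequence(img, pitch=8)   # 64 sub-images, each 8x8
```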

Proceedings ArticleDOI
Xi-Xin Cao, Qingqing Xie, Chungan Peng, Qingchun Wang, Dunshan Yu
01 Oct 2006
TL;DR: The periodicity and symmetry of the DWT are exploited to optimize performance and reduce computational redundancy; the result is a low hardware complexity DWT processor for the 9/7 transform, which allows a clock two times faster than the direct implementation.
Abstract: This paper proposes an efficient and simple architecture for the 9/7 Discrete Wavelet Transform based on Distributed Arithmetic. To derive the proposed architecture, we exploit the periodicity and symmetry of the DWT to optimize performance and reduce computational redundancy. The inner product with the DWT coefficient matrix is distributed over the input through careful analysis of the input, output, and coefficient word lengths. In the coefficient matrix, linear maps are used to assign the necessary computation to processing elements in the space domain. Moreover, the proposed architecture has regular data flow and low control complexity. The result is a low hardware complexity DWT processor for the 9/7 transform, which allows a clock two times faster than the direct implementation. This design is very suitable for image compression systems, e.g., JPEG2000 and MPEG4.
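Distributed arithmetic replaces the multipliers in an inner product with a lookup table of precomputed partial sums, indexed by one bit-slice of the inputs at a time. A small software model of the idea, assuming unsigned inputs (names are hypothetical):

```python
def da_inner_product(coeffs, xs, bits=8):
    # LUT of all subset sums of the taps; entry i sums coeffs[k] for set bits k.
    K = len(coeffs)
    lut = [sum(c for k, c in enumerate(coeffs) if (i >> k) & 1)
           for i in range(1 << K)]
    y = 0
    for b in range(bits):                      # one bit-slice per cycle
        addr = 0
        for k, x in enumerate(xs):
            addr |= ((x >> b) & 1) << k        # gather bit b of every input
        y += lut[addr] << b                    # shift-accumulate
    return y

taps = [3, -1, 4, 2]                           # filter coefficients
xs = [250, 17, 99, 3]                          # unsigned 8-bit samples
assert da_inner_product(taps, xs) == sum(c * x for c, x in zip(taps, xs))
```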

Journal ArticleDOI
TL;DR: Extensive experimentation reported in this paper indicates that pipelines which employ a JPEG 2000 coding scheme achieve significant performance improvements compared to similar processing pipelines equipped with a JPEG coder.
Abstract: This paper presents digital camera image compression solutions suitable for the use in single-sensor consumer electronic devices equipped with the Bayer color filter array (CFA). The proposed solutions code camera images available either in the CFA format or as the full-color demosaicked data, thus offering different design characteristics, performance and computational efficiency. Extensive experimentation reported in this paper indicates that pipelines which employ a JPEG 2000 coding scheme achieve significant performance improvements compared to similar processing pipelines equipped with a JPEG coder. Other improvements, both objective and subjective, are observed in terms of color appearance, image sharpness and the presence of visual artifacts in the captured images.