
Showing papers on "JPEG 2000" published in 2012


Journal ArticleDOI
TL;DR: Experimental results show that this approach successfully draws a secret image imperceptibly and reconstructs the recovered secret image with high quality.

122 citations


Journal ArticleDOI
TL;DR: The lossless coding mode of the High Efficiency Video Coding (HEVC) main profile that bypasses transform, quantization, and in-loop filters is described and a sample-based angular intra prediction method is presented to improve the coding efficiency.
Abstract: The lossless coding mode of the High Efficiency Video Coding (HEVC) main profile that bypasses transform, quantization, and in-loop filters is described. Compared to the HEVC nonlossless coding mode with the smallest quantization parameter value (i.e., 0 for 8-b video and -12 for 10-b video), the HEVC lossless coding mode provides perfect fidelity and an average bit-rate reduction of 3.2%-13.2%. It also significantly outperforms the existing lossless compression solutions, such as JPEG2000 and JPEG-LS for images as well as 7-Zip and WinRAR for data archiving. To further improve the coding efficiency of the HEVC lossless mode, a sample-based angular intra prediction (SAP) method is presented. The SAP employs the same prediction mode signaling method and the sample interpolation method as the HEVC block-based angular prediction, but uses adjacent neighbors for better intra prediction accuracy and performs prediction sample by sample. The experimental results reveal that the SAP provides an additional bit-rate reduction of 1.8%-11.8% on top of the HEVC lossless coding mode.
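
As a rough illustration of the sample-based idea only (not the HEVC specification: mode signaling, boundary handling, and fractional-angle interpolation are omitted), the sketch below assumes an integer-slope direction so that each sample is predicted from the single adjacent, already reconstructed neighbour along that direction.

    import numpy as np

    def sap_predict(rec, y0, x0, n, dy=1, dx=1):
        # Hypothetical sketch of sample-based angular prediction for an
        # integer-slope direction (dy, dx). Each sample of the n x n block
        # at (y0, x0) is predicted from its immediately adjacent neighbour
        # one step back along the direction; in lossless coding the block
        # is processed sample by sample, so that neighbour has already been
        # reconstructed when the current sample is predicted.
        pred = np.zeros((n, n), dtype=rec.dtype)
        for i in range(n):
            for j in range(n):
                pred[i, j] = rec[y0 + i - dy, x0 + j - dx]
        return pred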

114 citations


Journal ArticleDOI
TL;DR: The compressive sensing (CS) principles are studied and an alternative coding paradigm with a number of descriptions is proposed based upon CS for high packet loss transmission; experimental results show that the proposed CS-based codec is much more robust against lossy channels, while achieving higher rate-distortion performance.
Abstract: Multiple description coding (MDC) is one of the widely used mechanisms to combat packet loss in non-feedback systems. However, the number of descriptions in the existing MDC schemes is very small (typically 2). As the number of descriptions increases, the coding complexity increases drastically and many decoders would be required. In this paper, the compressive sensing (CS) principles are studied and an alternative coding paradigm with a number of descriptions is proposed based upon CS for high packet loss transmission. A two-dimensional discrete wavelet transform (DWT) is applied for sparse representation. Unlike the typical wavelet coders (e.g., JPEG 2000), DWT coefficients here are not directly encoded, but re-sampled towards equal importance of information instead. At the decoder side, by fully exploiting the intra-scale and inter-scale correlation of the multiscale DWT, two different CS recovery algorithms are developed for the low-frequency subband and the high-frequency subbands, respectively. The recovery quality depends only on the number of received CS measurements (not on which of the measurements are received). Experimental results show that the proposed CS-based codec is much more robust against lossy channels, while achieving higher rate-distortion (R-D) performance compared with conventional wavelet-based MDC methods and relevant existing CS-based coding schemes.
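
A minimal sketch of the measurement side of such a scheme, assuming the PyWavelets package and Gaussian random projections; the intra-/inter-scale recovery algorithms described in the paper are not reproduced here, and the parameter names are illustrative.

    import numpy as np
    import pywt

    def cs_descriptions(image, m_per_desc=256, n_desc=8, seed=0):
        # Sparse representation via a 2-D DWT, then re-sampling of the detail
        # coefficients with random Gaussian projections so that every
        # measurement carries roughly equal information. Each description is
        # one batch of measurements; losing descriptions only lowers quality.
        coeffs = pywt.wavedec2(np.asarray(image, dtype=float), 'db4', level=3)
        lowpass, details = coeffs[0], coeffs[1:]
        x = np.concatenate([band.ravel() for level in details for band in level])
        rng = np.random.default_rng(seed)
        descriptions = []
        for _ in range(n_desc):
            phi = rng.standard_normal((m_per_desc, x.size)) / np.sqrt(m_per_desc)
            descriptions.append(phi @ x)   # in practice only the RNG seed of phi would be shared
        return lowpass, descriptions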

80 citations


Journal ArticleDOI
TL;DR: A novel architecture for JPEG XR encoding is proposed, with a discussion of image partitioning and windowing techniques as well as the frequency transform and quantization.
Abstract: JPEG XR is an emerging image coding standard based on HD Photo, developed by Microsoft. It supports compression performance about twice as high as that of the de facto image coding standard, JPEG, and also has an advantage over JPEG 2000 in terms of computational cost. JPEG XR is expected to become widespread in many devices, including embedded systems, in the near future. This review-based paper proposes a novel architecture for JPEG XR encoding. It discusses image partitioning and windowing techniques, and further addresses the frequency transform and quantization. A brief insight into prediction, adaptive encoding, and packetization is also provided.

66 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme is robust against most common attacks, including JPEG lossy compression, JPEG 2000 compression, low-pass filtering, cropping, noise addition, blurring, rotation, scaling, and sharpening.

65 citations


Proceedings ArticleDOI
07 May 2012
TL;DR: Evaluated the intra-only coding modes of HEVC and H.264/AVC together with popular image-coding standards like JPEG, JPEG 2000, and JPEG XR as well as the proprietary WebP scheme and obtained a clear ranking in terms of objective coding performance.
Abstract: Preliminary performance comparisons between the current test model encoder of the upcoming high efficiency video coding (HEVC) standard and an H.264/AVC High Profile conforming encoder report average bit-rate savings of about 40% and 25% for a random-access and an all-intra configuration, respectively. Even though H.264/AVC and HEVC are primarily designed for video coding applications, it might be interesting to find out how their corresponding intra-coding tools perform when applied to still images, especially when compared to current state-of-the-art still image coding standards. To this end, we evaluated the intra-only coding modes of HEVC and H.264/AVC together with popular image-coding standards like JPEG, JPEG 2000, and JPEG XR as well as the proprietary WebP scheme. By using a representative test set of photographic still images, we obtained a clear ranking in terms of objective coding performance with average bit-rate savings in the range of 17–44% for HEVC intra-only coding relative to all other competing coding schemes.

58 citations


Book
30 Oct 2012
TL;DR: This edition of Introduction to Data Compression provides an extensive introduction to the theory underlying today's compression techniques, with detailed instruction for their applications, using several examples to explain the concepts.
Abstract: Each edition of Introduction to Data Compression has widely been considered the best introduction and reference text on the art and science of data compression, and the fourth edition continues in this tradition. Data compression techniques and technology are ever-evolving, with new applications in image, speech, text, audio, and video. The fourth edition includes all the cutting-edge updates the reader will need during the work day and in class. Khalid Sayood provides an extensive introduction to the theory underlying today's compression techniques, with detailed instruction for their applications, using several examples to explain the concepts. Encompassing the entire field of data compression, Introduction to Data Compression covers lossless and lossy compression, Huffman coding, arithmetic coding, dictionary techniques, context-based compression, and scalar and vector quantization. Khalid Sayood provides a working knowledge of data compression, giving the reader the tools to develop a complete and concise compression package upon completion of the book. New content has been added, including a more detailed description of the JPEG 2000 standard and speech coding for Internet applications. Established and emerging standards are explained in depth, including JPEG 2000, JPEG-LS, MPEG-2, H.264, JBIG2, ADPCM, LPC, CELP, MELP, and iLBC. Source code is provided via a companion web site that gives readers the opportunity to build their own algorithms and to choose and implement techniques in their own applications.

49 citations


Proceedings Article
11 Apr 2012
TL;DR: This paper proposes an input-adaptive compression approach, which encodes each input image over a dictionary specifically trained for it, based on the sparse dictionary structure, whose compact representation allows relatively low-cost transmission of the dictionary along with the compressed data.
Abstract: Transform coding is a widely used image compression technique, where entropy reduction can be achieved by decomposing the image over a dictionary which provides compaction. Existing algorithms, such as JPEG and JPEG2000, utilize fixed dictionaries which are shared by the encoder and decoder. Recently, works utilizing content-specific dictionaries show promising results by focusing on specific classes of images and using highly specialized dictionaries. However, such approaches lose the ability to compress arbitrary images. In this paper we propose an input-adaptive compression approach, which encodes each input image over a dictionary specifically trained for it. The scheme is based on the sparse dictionary structure, whose compact representation allows relatively low-cost transmission of the dictionary along with the compressed data. In this way, the process achieves both adaptivity and generality. Our results show that although this method involves transmitting the dictionary, it remains competitive with the JPEG and JPEG2000 algorithms.
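
For illustration only, a bare-bones orthogonal matching pursuit routine of the kind used for sparse coding over a dictionary; the paper's sparse dictionary structure and its per-image training are not reproduced here, and dictionary atoms are assumed to be unit-norm columns.

    import numpy as np

    def omp_encode(patch, dictionary, n_nonzero):
        # Greedily select the atom most correlated with the residual, then
        # re-fit the coefficients on the selected support by least squares,
        # until n_nonzero atoms have been chosen.
        patch = np.asarray(patch, dtype=float).ravel()
        residual = patch.copy()
        support, coeffs = [], np.array([])
        for _ in range(n_nonzero):
            k = int(np.argmax(np.abs(dictionary.T @ residual)))
            support.append(k)
            sub = dictionary[:, support]
            coeffs, *_ = np.linalg.lstsq(sub, patch, rcond=None)
            residual = patch - sub @ coeffs
        return support, coeffs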

47 citations


Proceedings ArticleDOI
01 Dec 2012
TL;DR: Compared to forensic techniques aiming at the detection of resampling in JPEG images, the proposed approach moves a step further, since it also provides an estimation of both the resize factor and the quality factor of the previous JPEG compression.
Abstract: In this paper, we propose a forensic technique for the reverse engineering of double JPEG compression in the presence of image resizing between the two compressions. Our approach is based on the fact that previously JPEG compressed images tend to have a near lattice distribution property (NLDP), and that this property is usually maintained after a simple image processing step and subsequent recompression. The proposed approach represents an improvement with respect to existing techniques analyzing double JPEG compression. Moreover, compared to forensic techniques aiming at the detection of resampling in JPEG images, the proposed approach moves a step further, since it also provides an estimation of both the resize factor and the quality factor of the previous JPEG compression. Such additional information can be used to reconstruct the history of an image and perform more detailed forensic analyses.

46 citations


Journal ArticleDOI
TL;DR: A survey of image compression algorithms for visual sensor networks, ranging from the conventional standards such as JPEG and JPEG2000 to a new compression method, for example, compressive sensing, is provided.
Abstract: With the advent of visual sensor networks (VSNs), energy-aware compression algorithms have gained wide attention. That is, new strategies and mechanisms for power-efficient image compression algorithms are being developed, since the application of the conventional methods is not always energy-beneficial. In this paper, we provide a survey of image compression algorithms for visual sensor networks, ranging from conventional standards such as JPEG and JPEG2000 to newer compression methods such as compressive sensing. We present the advantages and shortcomings of applying these algorithms in VSNs, a literature review of their application in VSNs, as well as open research issues for each compression standard/method. Moreover, factors influencing the design of compression algorithms in the context of VSNs are presented. We conclude with some guidelines concerning the design of a compression method for VSNs.

44 citations


Proceedings ArticleDOI
10 Apr 2012
TL;DR: A low-complexity integer-reversible spectral-spatial transform that allows for efficient lossless and lossy compression of color-filter-array images, allowing for very high quality offline post-processing, but with camera-raw files that can be half the size of those of existing camera-raw formats.
Abstract: We present a low-complexity integer-reversible spectral-spatial transform that allows for efficient lossless and lossy compression of color-filter-array images (also referred to as camera-raw images). The main advantage of this new transform is that it maps the pixel array values into a format that can be directly compressed in a lossless, lossy, or progressive-to-lossless manner by an existing typical image coder such as JPEG 2000 or JPEG XR. Thus, no special codec design is needed for compressing the camera-raw data. Another advantage is that the new transform allows for mild compression of camera-raw data in a near-lossless format, allowing for very high quality offline post-processing, but with camera-raw files that can be half the size of those of existing camera-raw formats.
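
The paper's specific transform is not given here; purely as an illustration of what an integer-reversible (lifting-based) spectral transform looks like, the well-known YCoCg-R transform maps integer RGB samples to luma/chroma integers and back exactly.

    import numpy as np

    def ycocg_r_forward(r, g, b):
        # Integer-reversible YCoCg-R via lifting steps; inputs are integer
        # arrays (e.g. per-site CFA samples grouped into colour planes).
        co = r - b
        t = b + (co >> 1)
        cg = g - t
        y = t + (cg >> 1)
        return y, co, cg

    def ycocg_r_inverse(y, co, cg):
        # Exact inverse: undo the lifting steps in reverse order.
        t = y - (cg >> 1)
        g = cg + t
        b = t - (co >> 1)
        r = b + co
        return r, g, b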

Proceedings ArticleDOI
01 Sep 2012
TL;DR: The dithering operation inevitably destroys the statistical correlations among the 8 × 8 intra-block and inter-block DCT coefficients within an image, and the transition probability matrix of the DCT coefficients is employed to identify forged images from original JPEG decompressed images and uncompressed ones.
Abstract: Quantization artifacts and blocking artifacts are two significant properties of JPEG compressed images. Most related forensic techniques use these inherent properties to provide evidence on how image data was acquired and/or processed. A wise attacker, however, may perform post-operations that conceal the two artifacts and fool current forensic techniques. Recently, Stamm et al. [1] proposed a novel anti-JPEG-compression method that adds anti-forensic dither to the DCT coefficients and further reduces the blocking artifacts. In this paper, we find that the dithering operation inevitably destroys the statistical correlations among the 8 × 8 intra-block and inter-block DCT coefficients within an image. From the viewpoint of JPEG steganalysis, we employ the transition probability matrix of the DCT coefficients to measure such modifications and to identify the forged images from original JPEG decompressed images and uncompressed ones. On average, we obtain a detection accuracy as high as 99% on the UCID image database [2].
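
A simplified version of such an intra-block Markov feature is sketched below; the threshold, the difference directions, and the exact feature set used in the paper will differ.

    import numpy as np

    def dct_transition_matrix(dct_plane, T=4):
        # Horizontal differences of quantised DCT coefficients, clipped to
        # [-T, T], followed by the empirical Markov transition probability
        # matrix of horizontally neighbouring difference pairs.
        d = np.asarray(dct_plane, dtype=np.int64)
        diff = np.clip(d[:, :-1] - d[:, 1:], -T, T)
        a = diff[:, :-1].ravel() + T
        b = diff[:, 1:].ravel() + T
        size = 2 * T + 1
        counts = np.zeros((size, size))
        np.add.at(counts, (a, b), 1)
        rows = counts.sum(axis=1, keepdims=True)
        return counts / np.where(rows == 0, 1, rows)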

Proceedings ArticleDOI
01 Dec 2012
TL;DR: There exists a realistic chance to fool state-of-the-art image file forensic methods using available software tools; as a countermeasure, the analysis of ordered data structures is introduced, using the JPEG file formats and the EXIF metadata format as examples.
Abstract: JPEG file format standards define only a limited number of mandatory data structures and leave room for interpretation. Differences between implementations employed in digital cameras, image processing software, and software to edit metadata provide valuable clues for basic authentication of digital images. We show that there exists a realistic chance to fool state-of-the-art image file forensic methods using available software tools, and we introduce the analysis of ordered data structures, using the JPEG file formats and the EXIF metadata format as examples, as a countermeasure. The proposed analysis approach enables basic investigations of image authenticity and documents a much better trustworthiness of EXIF metadata than commonly accepted. Manipulations created with the renowned metadata editor ExifTool and various image processing software can be reliably detected. Analysing the sequence of elements in complex data structures is not limited to JPEG files and might be a general principle applicable to different multimedia formats.
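
A minimal sketch of extracting the marker order from a JPEG file, the kind of ordered structure such an analysis inspects; real tooling would also parse the nested EXIF/TIFF directories inside APP1, which is omitted here.

    import struct

    def marker_sequence(path):
        # List the order of JPEG segment markers up to the start of scan.
        # (Fill bytes and some corner cases are ignored in this sketch.)
        with open(path, 'rb') as f:
            data = f.read()
        markers = []
        i = 2                                    # skip SOI (0xFF 0xD8)
        while i + 4 <= len(data) and data[i] == 0xFF:
            marker = data[i + 1]
            markers.append('FF%02X' % marker)
            if marker == 0xDA:                   # SOS: entropy-coded data follows
                break
            (length,) = struct.unpack('>H', data[i + 2:i + 4])
            i += 2 + length
        return markers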

Proceedings ArticleDOI
01 Nov 2012
TL;DR: An effective algorithm to compress and to reconstruct digital imaging and communications in medicine (DICOM) images is presented, and a comparison of compression methods such as JPEG and JPEG 2000 with SPIHT encoding on the basis of compression ratio and compression quality is outlined.
Abstract: Image compression has become one of the most important disciplines in digital electronics because of the ever-growing popularity and usage of the Internet and multimedia systems, combined with high requirements on bandwidth and storage space. The increasing volume of data generated by some medical imaging modalities justifies the use of compression techniques to decrease the storage space and to make the transfer of images over the network for access to electronic patient records more efficient. This paper addresses the area of data compression as it is applicable to image processing. We present an effective algorithm to compress and to reconstruct digital imaging and communications in medicine (DICOM) images. Various image compression algorithms exist in today's commercial market. This paper outlines a comparison of compression methods such as JPEG and JPEG 2000 with SPIHT encoding on the basis of compression ratio and compression quality. The compared methods are classified according to different medical images such as MRI and CT. For JPEG-based image compression, RLE and Huffman encoding techniques are used while varying the bits per pixel. For JPEG 2000-based image compression, the SPIHT encoding method is used. The DCT and DWT methods are compared by varying the bits per pixel and measuring the performance parameters MSE, PSNR, and compression ratio. For the JPEG 2000 method, different wavelets such as Haar, CDF 9/7, and CDF 5/3 are compared, and the compression ratio and compression quality are evaluated. The decomposition levels of the wavelet transform are also varied for different images.
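
For reference, the objective measures used in such comparisons, assuming 8-bit images; these definitions are standard and not specific to this paper.

    import numpy as np

    def mse_psnr(original, reconstructed, peak=255.0):
        # Mean squared error and peak signal-to-noise ratio in dB.
        err = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
        mse = float(np.mean(err ** 2))
        psnr = float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
        return mse, psnr

    def compression_ratio(raw_size_bytes, compressed_size_bytes):
        # Ratio of uncompressed to compressed file size.
        return raw_size_bytes / compressed_size_bytes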

Journal ArticleDOI
TL;DR: The 5-level design has been successfully implemented on a Xilinx Spartan 3E FPGA, utilising only 1104 slices for a 512-by-512 pixel test image, the lowest hardware requirements for a 5-level discrete wavelet transform processor reported to date.

Journal ArticleDOI
TL;DR: The study reveals that GPUs represent a source of computational power that is both accessible and applicable to obtaining compression results in valid response times in information extraction applications from remotely sensed hyperspectral imagery.
Abstract: Hyperspectral image compression has received considerable interest in recent years due to the enormous data volumes collected by imaging spectrometers for Earth Observation. JPEG2000 is an important technique for data compression which has been successfully used in the context of hyperspectral image compression, in both lossless and lossy fashion. Due to the increasing spatial, spectral, and temporal resolution of remotely sensed hyperspectral data sets, fast (on-board) compression of hyperspectral data is becoming an important and challenging objective, with the potential to reduce the limitations in the downlink connection between the Earth Observation platform and the receiving ground stations on Earth. For this purpose, implementations of hyperspectral image compression algorithms on specialized hardware devices are currently being investigated. We have developed an implementation of the JPEG2000 compression standard on commodity graphics processing units (GPUs). These hardware accelerators are characterized by their low cost and weight and can bridge the gap toward on-board processing of remotely sensed hyperspectral data. Specifically, we develop GPU implementations of the lossless and lossy modes of JPEG2000. For the lossy mode, we investigate the utility of the compressed hyperspectral images for different compression ratios, using a standard technique for hyperspectral data exploitation such as spectral unmixing. Our study reveals that GPUs represent a source of computational power that is both accessible and applicable to obtaining compression results in valid response times in information extraction applications from remotely sensed hyperspectral imagery.

Proceedings ArticleDOI
01 Sep 2012
TL;DR: For each step of the image processing chain, a statistical study of pixel properties is performed to finally obtain a model of the Discrete Cosine Transform (DCT) coefficient distribution.
Abstract: We propose a statistical model of natural images in JPEG format. Image acquisition is composed of three principal stages. First, a RAW image is obtained from the sensor of a Digital Still Camera (DSC). Then, the RAW image is subjected to post-acquisition processes such as demosaicking, white balancing, and γ-correction to improve its visual quality. Finally, the processed image goes through the JPEG compression process. For each step of the image processing chain, a statistical study of pixel properties is performed to finally obtain a model of the Discrete Cosine Transform (DCT) coefficient distribution.
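
The paper derives its model by following the whole processing chain; purely as an illustration of modelling DCT coefficient statistics, AC coefficients of natural images are often approximated by a zero-mean Laplacian, whose scale can be fitted as follows.

    import numpy as np

    def fit_laplacian_scale(ac_coeffs):
        # Maximum-likelihood scale b of a zero-mean Laplacian,
        # p(x) = exp(-|x| / b) / (2 b), fitted to AC DCT coefficients.
        c = np.asarray(ac_coeffs, dtype=float).ravel()
        return float(np.mean(np.abs(c)))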

Journal ArticleDOI
TL;DR: Generalized block-lifting factorization of M-channel (M > 2) biorthogonal filter banks (BOFBs) for lossy-to-lossless image coding is presented in this paper; it achieves better results in both objective measures and perceptual visual quality for images with many high-frequency components.
Abstract: A generalized block-lifting factorization of M-channel (M > 2) biorthogonal filter banks (BOFBs) for lossy-to-lossless image coding is presented in this paper. Since the proposed block-lifting structure is more general than the conventional lifting factorizations and does NOT require many restrictions, such as paraunitariness, a particular number of channels, or a particular McMillan degree in each building block, its coding gain is higher than that of the previous methods. Several proposed BOFBs are designed and applied to image coding. Compared with conventional lossy-to-lossless image coding structures, including the 5/3- and 9/7-tap discrete wavelet transforms in JPEG 2000 and a 4 × 8 hierarchical lapped biorthogonal transform in JPEG XR, the proposed BOFBs achieve better results in both objective measures and perceptual visual quality for images with many high-frequency components.
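
For context, the reversible 5/3 lifting steps that JPEG 2000 uses for lossless coding, one of the baselines above; the proposed block-lifting factorization generalizes this two-channel structure to M channels, which is not shown here. Periodic boundary handling is used for brevity instead of the standard's symmetric extension.

    import numpy as np

    def lifting_53_forward(x):
        # One 1-D level of the integer 5/3 wavelet via lifting
        # (x is assumed to have even length).
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        odd -= (even + np.roll(even, -1)) >> 1    # d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
        even += (np.roll(odd, 1) + odd + 2) >> 2  # s[n] = x[2n] + floor((d[n-1] + d[n] + 2) / 4)
        return even, odd                          # lowpass (s) and highpass (d) coefficients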

Proceedings ArticleDOI
10 Dec 2012
TL;DR: The experimental results show that the improved double quantization detection method introduced in this paper can support a reliable large-scale digital image evidence authenticity verification with consistent good accuracy in practical applications.
Abstract: Double JPEG image compression detection, or more specifically, double quantization detection, is an important digital image forensic method to detect the presence of image forgery or tampering. In this paper, we introduce an improved double quantization detection method to improve the accuracy of JPEG image tampering detection. We evaluate our detection method using the publicly available CASIA authentic and tampered image data set of 9501 JPEG images. We carry out 20 rounds of experiments with stringent parameter settings placed on our detection method to demonstrate its robustness. Each round's classifier is generated from a unique, non-overlapping, and small subset consisting of 1/20 of the tampered and 1/72 of the authentic images, to obtain a training data set of about 100 images per class, with the remaining 19/20 of the tampered and 71/72 of the authentic images used for testing. Through the experiments, we show an average improvement of 40.31% and 44.85% in the true negative (TN) rate and true positive (TP) rate, respectively, when compared with the current state-of-the-art method. The average TN and TP rates obtained from the 20 rounds of experiments carried out using our detection method are 90.81% and 76.95%, respectively. The experimental results show that our JPEG image forensics method can support reliable large-scale digital image evidence authenticity verification with consistently good accuracy. The low training-to-testing data ratio also indicates that our method is robust in practical applications even with a relatively limited or small training data set available.

Journal ArticleDOI
TL;DR: The binary tree is proposed as a novel and robust way of coding remote sensing image in wavelet domain and an adaptive scanning order to traverse the binary tree level by level from the bottom to the top is developed so that better performance and visual effect are attained.
Abstract: Remote sensing images offer a large amount of information but require on-board compression because of the storage and transmission constraints of on-board equipment. JPEG2000 is too complex to become a recommended standard for the mission, and CCSDS-IDC fixes most of the parameters and only provides quality scalability. In this paper, we present a new, low-complexity, low-memory, and efficient embedded wavelet image codec for on-board compression. First, we propose the binary tree as a novel and robust way of coding remote sensing image in wavelet domain. Second, we develop an adaptive scanning order to traverse the binary tree level by level from the bottom to the top, so that better performance and visual effect are attained. Last, the proposed method is processed with a scan-based mode, which significantly reduces the memory requirement. The proposed method is very fast because it does not use any entropy coding and rate-distortion optimization, while it provides quality, position, and resolution scalability. Being less complex, it is very easy to implement in hardware and very suitable for on-board compression. Experimental results show that the proposed method can significantly improve peak signal-to-noise ratio compared with SPIHT without arithmetic coding and scan-based CCSDS-IDC, and is similar to scan-based JPEG2000.

Journal ArticleDOI
TL;DR: The proposed listless implementation of WBTC algorithm uses special markers instead of lists to reduce dynamic memory requirement and is combined with discrete cosine transform and discrete wavelet transform to show its superiority over DCT and DWT based embedded coders, including JPEG 2000 at lower rates.
Abstract: This paper presents a listless implementation of the wavelet-based block tree coding (WBTC) algorithm with varying root block sizes. The WBTC algorithm improves the image compression performance of set partitioning in hierarchical trees (SPIHT) at lower rates by efficiently encoding both inter- and intra-scale correlation using block trees. Though WBTC lowers the memory requirement by using block trees compared to SPIHT, it makes use of three ordered auxiliary lists. This feature makes WBTC undesirable for hardware implementation, as it needs a lot of memory management when the list nodes grow exponentially on each pass. The proposed listless implementation of the WBTC algorithm uses special markers instead of lists. This reduces the dynamic memory requirement by 88% with respect to WBTC and 89% with respect to SPIHT. The proposed algorithm is combined with the discrete cosine transform (DCT) and discrete wavelet transform (DWT) to show its superiority over DCT- and DWT-based embedded coders, including JPEG 2000, at lower rates. The compression performance on most of the standard test images is nearly the same as WBTC, and outperforms SPIHT by a wide margin, particularly at lower bit rates.

Proceedings Article
Tilo Strutz1
18 Oct 2012
TL;DR: It is shown that, for a reasonably large set of natural images, there is a colour transform which performs better in the context of lossless image compression than the reversible colour transform defined in the JPEG2000 standard, while having only slightly increased complexity.
Abstract: This paper presents and investigates a new family of reversible low-complexity colour transformations. It shows that, for a reasonably large set of natural images, there is a colour transform which performs better in the context of lossless image compression than the reversible colour transform defined in the JPEG2000 standard, while having only slightly increased complexity. The optimal selection of a colour space for each single image can distinctly decrease the bitrate of the compressed image. A novel approach is proposed, which automatically selects a suitable colour space with negligible loss of performance compared to the optimal selection.
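
The JPEG2000 Part 1 baseline referred to above is the reversible colour transform (RCT); the paper's new transform family is not reproduced here, but the RCT itself is shown below for integer inputs.

    import numpy as np

    def rct_forward(r, g, b):
        # Reversible colour transform of JPEG2000 Part 1 (integer samples).
        y = (r + 2 * g + b) >> 2          # floor((R + 2G + B) / 4)
        u = b - g
        v = r - g
        return y, u, v

    def rct_inverse(y, u, v):
        # Exact integer inverse of the RCT.
        g = y - ((u + v) >> 2)
        r = v + g
        b = u + g
        return r, g, b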

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed scheme satisfies the basic requirements of watermarking, such as robustness and imperceptibility, and can be used to resist JPEG attacks while avoiding some weaknesses of JPEG quantization.
Abstract: A watermarking technique based on the frequency domain is presented in this paper. One of the basic demands for robustness in a watermarking mechanism is the ability to withstand JPEG attacks, since JPEG is a common file format for transmitting digital content over the network. Thus, the proposed algorithm can be used to resist JPEG attacks and avoid some weaknesses of JPEG quantization. Moreover, the information of the original host image and the watermark is not needed in the extraction process. In addition, two important but conflicting parameters are adopted to trade off the quality of the watermarked image against that of the retrieved watermark. The experimental results demonstrate that the proposed scheme satisfies the basic requirements of watermarking, such as robustness and imperceptibility.

Book ChapterDOI
01 Jan 2012
TL;DR: This chapter examines a number of schemes used for lossless compression of images, highlighting schemes for compression of grayscale and color images as well as schemes for compressed binary images that are a part of international standards.
Abstract: This chapter examines a number of schemes used for lossless compression of images. It highlights schemes for compression of grayscale and color images as well as schemes for compression of binary images. Among these schemes are several that are a part of international standards. The joint photographic experts group (JPEG) is a joint ISO/ITU committee responsible for developing standards for continuous-tone still-picture coding. The more famous standard produced by this group is the lossy image compression standard. However, at the time of the creation of the famous JPEG standard, the committee also created a lossless standard. The old JPEG lossless still compression standard provides eight different predictive schemes from which the user can select. In addition, the context adaptive lossless image compression (CALIC) scheme, which came into being in response to a call for proposal for a new lossless image compression scheme in 1994, uses both context and prediction of the pixel values. The CALIC scheme actually functions in two modes, one for gray-scale images, and another for bi-level images. One of the approaches used by CALIC to reduce the size of its alphabet is to use a modification of a technique called recursive indexing. Recursive indexing is a technique for representing a large range of numbers using only a small set.
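
As a small worked example of recursive indexing (a sketch of the general idea rather than CALIC's exact use of it), a value is expressed over a small alphabet {0, ..., M-1} by repeating the escape symbol M-1 and ending with the remainder.

    def recursive_index(n, m):
        # Represent non-negative integer n using only symbols 0 .. m-1:
        # emit the top symbol m-1 as an "escape" until less than m-1 remains.
        symbols = []
        while n >= m - 1:
            symbols.append(m - 1)
            n -= m - 1
        symbols.append(n)
        return symbols

    # e.g. recursive_index(23, 8) -> [7, 7, 7, 2], since 7 + 7 + 7 + 2 = 23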

Journal Article
TL;DR: DWT and the SPIHT algorithm with a Huffman encoder for further compression, together with the Retinex algorithm to obtain an enhanced, quality-improved image, are proposed.
Abstract: Traditional image coding technology mainly exploits the statistical redundancy between pixels to achieve compression. Research on wavelet transform image coding technology has made rapid progress. Because of its high speed, low memory requirements, and complete reversibility, the integer wavelet transform (IWT) has been adopted by the new image coding standard, JPEG 2000. The embedded zerotree wavelet (EZW) algorithm has obtained good results in low bit-rate image compression. Set Partitioning in Hierarchical Trees (SPIHT) is an improved version of EZW and has become the general standard among EZW-based coders. In this paper, we propose DWT and the SPIHT algorithm with a Huffman encoder for further compression, and the Retinex algorithm to obtain an enhanced, quality-improved image.

Proceedings Article
11 Apr 2012
TL;DR: The recognition performance of two different feature extraction schemes applied to correspondingly compressed images is compared to the usage of the dyadic decomposition structure of JPEG2000 Part 1 in the compression stage.
Abstract: The impact of using wavelet subband structures as allowed in JPEG2000 Part 2 in polar iris image compression is investigated. The recognition performance of two different feature extraction schemes applied to correspondingly compressed images is compared to the usage of the dyadic decomposition structure of JPEG2000 Part 1 in the compression stage.

Journal ArticleDOI
TL;DR: These methods allow customizing the precision of a multilevel DWT to a given error tolerance requirement and ensuring an energy-minimal implementation, which increases the applicability of DWT-based algorithms such as JPEG 2000 to energy-constrained platforms and environments.
Abstract: This paper presents designs for both bit-parallel (BP) and digit-serial (DS) precision-optimized implementations of the discrete wavelet transform (DWT), with specific consideration given to the impact of depth (the number of levels of DWT) on the overall computational accuracy. These methods thus allow customizing the precision of a multilevel DWT to a given error tolerance requirement and ensuring an energy-minimal implementation, which increases the applicability of DWT-based algorithms such as JPEG 2000 to energy-constrained platforms and environments. Additionally, quantization of DWT coefficients to a specific target step size is performed as an inherent part of the DWT computation, thereby eliminating the need for a separate downstream quantization step in applications such as JPEG 2000. Experimental measurements of design performance in terms of area, speed, and power for a 90-nm complementary metal-oxide-semiconductor implementation are presented. Results indicate that while BP designs exhibit inherent speed advantages, DS designs require significantly fewer hardware resources with increasing precision and DWT level. For a four-level DWT with medium precision, for example, the BP design is four times faster than the DS design but occupies twice the area. In addition to the BP and DS designs, a novel flexible DWT processor is presented, which supports run-time configurable DWT parameters.

Journal ArticleDOI
TL;DR: A Joint Source Channel Coding solution optimized for a wireless JPEG 2000 (JPWL ISO/IEC 15444-11) image transmission scheme over a MIMO channel that allows flexible and reactive coding of a JPWL codestream adapted to the instantaneous channel status.
Abstract: This paper proposes a Joint Source Channel Coding solution optimized for a wireless JPEG 2000 (JPWL, ISO/IEC 15444-11) image transmission scheme over a MIMO channel. To ensure robustness of the transmission, channel diversity is exploited with a Closed-Loop MIMO-OFDM scheme. This relies on Channel State Information (CSI) knowledge at the transmitter side, which allows the MIMO channel to be decomposed into several hierarchical SISO subchannels. In the proposed scheme, the JPWL codestream is divided into hierarchical quality layers passing through the SISO subchannels. With the CSI, a global and optimal method for adjusting all the system parameters of each SISO subchannel is provided. Accordingly, adaptive modulation, Unequal Error Protection (UEP), Unequal Power Allocation (UPA), and source coding rates are provided for each quality layer. The major strength of this work is to provide an optimal method that parameterizes several variables affecting the rate-distortion trade-off under bitrate, Quality of Service (QoS), and power constraints. Finally, the proposed work allows flexible and reactive coding of a JPWL codestream adapted to the instantaneous channel status. The performance of this technique is evaluated over a realistic time-varying MIMO channel provided by a 3D ray-tracing propagation model. A significant improvement in the quality of the image is demonstrated.

Book ChapterDOI
04 Dec 2012
TL;DR: This paper introduces a JPEG compressed domain retrieval algorithm that is based not directly on DCT coefficients but on differences of these, which are readily available in a JPEG compression stream and builds histograms of these differences and utilise them as image features, thus eliminating the need to undo the differential coding as in other methods.
Abstract: The vast majority of images are stored in compressed JPEG format. When performing content-based image retrieval, faster feature extraction is possible when calculating them directly in the compressed domain, avoiding full decompression of the images. Algorithms that operate in this way calculate image features based on DCT coefficients and hence still require partial decoding of the image to arrive at these. In this paper, we introduce a JPEG compressed domain retrieval algorithm that is based not directly on DCT coefficients but on differences of these, which are readily available in a JPEG compression stream. In particular, we utilise solely the DC stream of JPEG files and make direct use of the fact that DC terms are differentially coded. We build histograms of these differences and utilise them as image features, thus eliminating the need to undo the differential coding as in other methods. In combination with a colour histogram, also extracted from DC data, we show our approach to give (to our knowledge) the best retrieval accuracy of a JPEG domain retrieval algorithm, outperforming other compressed domain methods and reaching a performance close to that of the best performing MPEG-7 descriptor.
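
A minimal sketch of the feature construction, assuming the DPCM-coded DC differences have already been entropy-decoded from the JPEG stream (e.g. by a Huffman decoder); the bin count and clipping range are illustrative, not the paper's exact settings.

    import numpy as np

    def dc_difference_histogram(dc_diffs, n_bins=64, clip=128):
        # Histogram of the differentially coded DC terms, used directly as
        # an image descriptor without undoing the differential coding.
        d = np.clip(np.asarray(dc_diffs, dtype=float), -clip, clip)
        hist, _ = np.histogram(d, bins=n_bins, range=(-clip, clip))
        return hist / max(hist.sum(), 1)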

Journal ArticleDOI
TL;DR: An object-oriented application for image analysis using color orthophotos and a Quickbird image and a hierarchical classification algorithm for segmentation-based classification results, which shows that this classification evaluation approach must be used with caution because it may underestimate the classification errors.