
Showing papers on "JPEG 2000 published in 2008"


Journal ArticleDOI
TL;DR: A method for the detection of double JPEG compression and a maximum-likelihood estimator of the primary quality factor are presented, essential for construction of accurate targeted and blind steganalysis methods for JPEG images.
Abstract: This paper presents a method for the detection of double JPEG compression and a maximum-likelihood estimator of the primary quality factor. These methods are essential for constructing accurate targeted and blind steganalysis methods for JPEG images. The proposed methods use support vector machine classifiers with feature vectors formed by histograms of low-frequency discrete cosine transform (DCT) coefficients. The performance of the algorithms is compared to selected prior art.
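
The feature-extraction step can be sketched in a few lines of Python. This is a simplified stand-in for the paper's features, assuming a grayscale image and pixel-domain block DCTs (in practice one would read the quantized coefficients directly from the JPEG bitstream); the choice of modes and histogram range below is illustrative.

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(gray, block=8):
    """2-D DCT of non-overlapping 8x8 blocks (pixel-domain approximation of
    the coefficients a JPEG decoder would expose)."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    g = gray[:h, :w].astype(np.float64) - 128.0
    blocks = g.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return dctn(blocks, axes=(2, 3), norm='ortho')

def low_freq_histogram_features(gray,
                                modes=((0, 1), (1, 0), (1, 1), (0, 2), (2, 0)),
                                bins=np.arange(-20.5, 21.5, 1.0)):
    """Histograms of a few low-frequency DCT modes, concatenated into one
    feature vector (a simplified stand-in for the paper's SVM features)."""
    coeffs = blockwise_dct(gray)
    feats = []
    for (u, v) in modes:
        c = coeffs[:, :, u, v].ravel()
        hist, _ = np.histogram(np.round(c), bins=bins, density=True)
        feats.append(hist)
    return np.concatenate(feats)
```

Feature vectors computed from singly and doubly compressed training images would then be fed to an SVM classifier (e.g., sklearn.svm.SVC).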

284 citations


Journal ArticleDOI
TL;DR: The experimental results show that the high visual quality of stego-images, the data embedding capacity, and the robustness of the proposed lossless data hiding scheme against compression are acceptable for many applications, including semi-fragile image authentication.
Abstract: Recently, among various data hiding techniques, a new subset, lossless data hiding, has received increasing interest. Most of the existing lossless data hiding algorithms are, however, fragile in the sense that the hidden data cannot be extracted correctly after compression or other incidental alteration has been applied to the stego-image. The only existing semi-fragile (referred to as robust in this paper) lossless data hiding technique, which is robust against high-quality JPEG compression, is based on modulo-256 addition to achieve losslessness. In this paper, we first point out that this technique suffers from annoying salt-and-pepper noise caused by using modulo-256 addition to prevent overflow/underflow. We then propose a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise. By identifying a robust statistical quantity based on the patchwork theory and employing it to embed data, differentiating the bit-embedding process based on the pixel group's distribution characteristics, and using error correction codes and a permutation scheme, this technique achieves both losslessness and robustness. It has been successfully applied to many images, thus demonstrating its generality. The experimental results show that the high visual quality of stego-images, the data embedding capacity, and the robustness of the proposed lossless data hiding scheme against compression are acceptable for many applications, including semi-fragile image authentication. Specifically, it has been successfully applied to authenticate losslessly compressed JPEG2000 images, followed by possible transcoding. It is expected that this new robust lossless data hiding algorithm can be readily applied in the medical field, law enforcement, remote sensing, and other areas where the recovery of original images is desired.

214 citations


Proceedings ArticleDOI
05 Nov 2008
TL;DR: The probabilities of the first digits of quantized DCT (discrete cosine transform) coefficients from individual AC (alternating current) modes are used to detect doubly compressed JPEG images, and combining the MBFDF with a multi-class classification strategy can identify the quality factor used in the primary JPEG compression.
Abstract: In this paper, we utilize the probabilities of the first digits of quantized DCT (discrete cosine transform) coefficients from individual AC (alternating current) modes to detect doubly compressed JPEG images. Our proposed features, named mode-based first digit features (MBFDF), have been shown to outperform all previous methods in discriminating doubly compressed JPEG images from singly compressed JPEG images. Furthermore, combining the MBFDF with a multi-class classification strategy can be exploited to identify the quality factor of the primary JPEG compression, thus successfully revealing the double JPEG compression history of a given JPEG image.
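
A minimal sketch of mode-based first-digit feature extraction, assuming the quantized DCT coefficients are already available as an array of 8×8 blocks; the number of AC modes used is an illustrative assumption.

```python
import numpy as np

def first_digit(x):
    """Most significant decimal digit of each nonzero integer."""
    x = np.abs(x[x != 0]).astype(np.float64)
    return (x // 10 ** np.floor(np.log10(x))).astype(int)

def mbfdf_features(qdct_blocks, n_modes=20):
    """Mode-based first-digit features: for each of the first few AC modes
    (standard JPEG zig-zag order, skipping the DC term), the empirical
    probability of first digits 1..9 of the quantized coefficients."""
    # qdct_blocks: array of shape (num_blocks, 8, 8) of quantized DCT coefficients
    zigzag = [(0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2), (2, 1), (3, 0),
              (4, 0), (3, 1), (2, 2), (1, 3), (0, 4), (0, 5), (1, 4), (2, 3), (3, 2),
              (4, 1), (5, 0)][:n_modes]
    feats = []
    for (u, v) in zigzag:
        d = first_digit(qdct_blocks[:, u, v])
        counts = np.bincount(d, minlength=10)[1:10]
        total = counts.sum()
        feats.append(counts / total if total else np.zeros(9))
    return np.concatenate(feats)   # n_modes * 9 features
```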

181 citations


Journal ArticleDOI
TL;DR: This paper introduces a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion, and shows that this anisotropic diffusion equation with a diffusion tensor outperforms many other PDEs when sparse scattered data must be interpolated.
Abstract: Compression is an important field of digital image processing where well-engineered methods with high performance exist. Partial differential equations (PDEs), however, have not been explored much in this context so far. In our paper we introduce a novel framework for image compression that makes use of the interpolation qualities of edge-enhancing diffusion. Although this anisotropic diffusion equation with a diffusion tensor was originally proposed for image denoising, we show that it outperforms many other PDEs when sparse scattered data must be interpolated. To exploit this property for image compression, we consider an adaptive triangulation method for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the diffusion process. They can be coded in a compact way that reflects the B-tree structure of the triangulation. We supplement the coding step with a number of amendments such as error threshold adaptation, diffusion-based point selection, and specific quantisation strategies. Our experiments illustrate the usefulness of each of these modifications. They demonstrate that for high compression rates, our PDE-based approach not only gives far better results than the widely used JPEG standard, but can even come close to the quality of the highly optimised JPEG2000 codec.
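
The interpolation idea can be illustrated in a few lines of Python. The sketch below uses plain homogeneous diffusion rather than the edge-enhancing anisotropic diffusion (with a diffusion tensor) that the paper relies on, so it only conveys the principle of PDE-based inpainting from a sparse pixel mask.

```python
import numpy as np

def diffusion_inpaint(known_values, mask, n_iter=2000, tau=0.2):
    """Fill in unknown pixels by iterating an explicit homogeneous diffusion
    scheme while keeping the stored pixels (mask == True) fixed.
    The paper uses edge-enhancing anisotropic diffusion; this homogeneous
    version is a simplified illustration of PDE-based interpolation."""
    u = np.where(mask, known_values, known_values[mask].mean()).astype(np.float64)
    for _ in range(n_iter):
        # discrete Laplacian with periodic boundaries (kept simple on purpose)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + tau * lap                 # tau <= 0.25 keeps the scheme stable
        u[mask] = known_values[mask]      # re-impose the interpolation data
    return u
```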

159 citations


Journal ArticleDOI
TL;DR: It is shown that it is possible to compress iris images to as little as 2000 bytes with minimal impact on recognition performance, approaching a convergence of image data size and template size.
Abstract: We investigate three schemes for severe compression of iris images in order to assess what their impact would be on recognition performance of the algorithms deployed today for identifying people by this biometric feature. Currently, standard iris images are 600 times larger than the IrisCode templates computed from them for database storage and search; but it is administratively desired that iris data should be stored, transmitted, and embedded in media in the form of images rather than as templates computed with proprietary algorithms. To reconcile that goal with its implications for bandwidth and storage, we present schemes that combine region-of-interest isolation with JPEG and JPEG2000 compression at severe levels, and we test them using a publicly available database of iris images. We show that it is possible to compress iris images to as little as 2000 bytes with minimal impact on recognition performance. Only some 2% to 3% of the bits in the IrisCode templates are changed by such severe image compression, and we calculate the entropy per code bit introduced by each compression scheme. Error tradeoff curve metrics document very good recognition performance despite this reduction in data size by a net factor of 150, approaching a convergence of image data size and template size.
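
A hedged sketch of the "compress to a fixed byte budget" step, using Pillow and plain JPEG with a binary search over the quality setting; the paper's region-of-interest isolation and its JPEG2000 variant are not reproduced here.

```python
import io
from PIL import Image

def compress_to_budget(img, budget_bytes=2000, fmt='JPEG'):
    """Binary-search the JPEG quality setting so the encoded iris image fits
    within roughly `budget_bytes` (the paper's severe 2000-byte target).
    Returns the encoded bytes, or None if even the lowest quality is too big.
    Region-of-interest cropping, which the paper applies first, is omitted."""
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format=fmt, quality=q)
        if buf.tell() <= budget_bytes:
            best, lo = buf.getvalue(), q + 1   # fits: try higher quality
        else:
            hi = q - 1                         # too big: lower the quality
    return best

# Usage: data = compress_to_budget(Image.open('iris.bmp').convert('L'))
```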

130 citations


Journal ArticleDOI
TL;DR: This paper focuses on the optimization of a full wavelet compression system for hyperspectral images and shows that a specific fixed decomposition significantly improves the classical isotropic decomposition.
Abstract: Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
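
One way to build such an anisotropic (spectrally deeper) 3-D decomposition is sketched below with PyWavelets; the exact fixed decomposition proposed in the paper may differ, and the wavelet and level counts here are assumptions.

```python
import pywt

def hybrid_3d_decomposition(cube, wavelet='bior4.4',
                            spectral_levels=4, spatial_levels=3):
    """Decompose a hyperspectral cube (bands, rows, cols) with a deeper 1-D
    wavelet transform along the spectral axis followed by a 2-D spatial
    transform of every resulting spectral subband. This mimics the kind of
    fixed anisotropic decomposition discussed in the paper (details differ)."""
    # 1-D multilevel transform along the spectral axis
    spectral = pywt.wavedec(cube, wavelet, level=spectral_levels, axis=0)
    decomposition = []
    for subband in spectral:
        # 2-D multilevel transform in the spatial plane of each spectral subband
        spatial = pywt.wavedecn(subband, wavelet, level=spatial_levels, axes=(1, 2))
        decomposition.append(spatial)
    return decomposition
```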

119 citations


Proceedings ArticleDOI
TL;DR: A set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic can improve coding gain by over 0.5 dB with respect to the popular YCrCb transform, while achieving much lower computational complexity.
Abstract: This paper reviews a set of color spaces that allow reversible mapping between red-green-blue and luma-chroma representations in integer arithmetic. The YCoCg transform and its reversible form YCoCg-R can improve coding gain by over 0.5 dB with respect to the popular YCrCb transform, while achieving much lower computational complexity. We also present extensions of the YCoCg transform for four-channel CMYK pixel data. Thanks to their reversibility under integer arithmetic, these transforms are useful for both lossy and lossless compression. Versions of these transforms are used in the HD Photo image coding technology (which is the basis for the upcoming JPEG XR standard) and in recent editions of the H.264/MPEG-4 AVC video coding standard.
Keywords: image coding, color transforms, lossless coding, YCoCg, JPEG, JPEG XR, HD Photo.
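
The YCoCg-R lifting steps are simple enough to state directly; the sketch below follows the standard published definition (each chroma channel needs one extra bit of dynamic range).

```python
import numpy as np

def rgb_to_ycocg_r(r, g, b):
    """Forward YCoCg-R lifting steps (integer, exactly reversible).
    Co and Cg need one extra bit of range compared to the RGB inputs."""
    r, g, b = np.int32(r), np.int32(g), np.int32(b)
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse lifting: recovers the original RGB values bit-exactly."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Because every step is an integer lifting operation, the inverse undoes the forward transform exactly, which is what makes the transform usable for both lossy and lossless coding.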

114 citations


Proceedings ArticleDOI
01 Dec 2008
TL;DR: Experiments demonstrate that the proposed machine-learning-based scheme for distinguishing between double and single JPEG compressed images outperforms the prior art.
Abstract: Double JPEG compression detection is of significance in digital forensics. We propose an effective machine learning based scheme to distinguish between double and single JPEG compressed images. Firstly, difference JPEG 2-D arrays, i.e., the differences between the magnitude of the JPEG coefficient 2-D array of a given JPEG image and its shifted versions along various directions, are used to enhance double JPEG compression artifacts. A Markov random process is then applied to model the difference 2-D arrays so as to utilize the second-order statistics. In addition, a thresholding technique is used to reduce the size of the transition probability matrices, which characterize the Markov random processes. All elements of these matrices are collected as features for double JPEG compression detection. The support vector machine is employed as the classifier. Experiments have demonstrated that our proposed scheme outperforms the prior art.
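
A minimal sketch of the thresholded transition-probability features for one difference direction, assuming the JPEG coefficient magnitudes are available as a 2-D array; the paper uses several directions and a specific threshold, which are not reproduced exactly here.

```python
import numpy as np

def transition_matrix_features(coef_2d, T=4):
    """Thresholded Markov transition-probability features of a JPEG-coefficient
    magnitude array, computed here for the horizontal difference array only
    (the paper also uses further shift directions)."""
    m = np.abs(coef_2d).astype(np.int64)
    diff = m[:, :-1] - m[:, 1:]                 # horizontal difference array
    diff = np.clip(diff, -T, T)                 # thresholding keeps the matrix small
    cur, nxt = diff[:, :-1].ravel(), diff[:, 1:].ravel()
    size = 2 * T + 1
    tpm = np.zeros((size, size))
    np.add.at(tpm, (cur + T, nxt + T), 1.0)     # joint counts of (current, next)
    row_sums = tpm.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return (tpm / row_sums).ravel()             # (2T+1)^2 features per direction
```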

103 citations


Journal ArticleDOI
TL;DR: Simulation results show that the compression algorithm in the proposed scheme greatly prolongs the lifetime of the network under a specific image quality requirement, and the scheme applies transmission range adjustment to save communication energy.

83 citations


Journal ArticleDOI
TL;DR: A Selective Coefficient Mask Shift (SCMShift) coding method, implemented over regions of interest (ROIs), is proposed, based on shifting the wavelet coefficients that belong to different subbands, depending on the coefficients relative to the original image.

63 citations


Journal ArticleDOI
TL;DR: The proposed compression algorithm is based on JPEG 2000 and provides better near-lossless compression performance than 3D-CALIC; its effect on selected applications is negligible and, in some cases, smaller than that of JPEG 2000.
Abstract: We propose a compression algorithm for hyperspectral images featuring both lossy and near-lossless compression. The algorithm is based on JPEG 2000 and provides better near-lossless compression performance than 3D-CALIC. We also show that its effect on the results of selected applications is negligible and, in some cases, better than JPEG 2000.

Journal ArticleDOI
TL;DR: An embedded lossy-to-lossless coder for hyperspectral images is presented; for lossless compression it consistently outperforms not only JPEG2000 but often also several prominent purely lossless methods.
Abstract: An embedded lossy-to-lossless coder for hyperspectral images is presented. The proposed coder couples a reversible integer-valued Karhunen-Loeve transform with an extension into 3-D of the tarp-based coding with classification for embedding (TCE) algorithm that was originally developed for lossy coding of 2-D images. The resulting coder obtains lossy-to-lossless operation while closely matching the lossy performance of JPEG2000. Additionally, for lossless compression, it consistently outperforms not only JPEG2000 but, often, several prominent purely lossless methods.
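
The spectral decorrelation step can be illustrated with a plain floating-point KLT (PCA across bands); the reversible integer-valued factorization used in the paper, which is what enables lossless operation, is not reproduced.

```python
import numpy as np

def spectral_klt(cube):
    """Floating-point Karhunen-Loeve transform across the spectral axis of a
    (bands, rows, cols) hyperspectral cube. The paper uses a *reversible
    integer* KLT; this only illustrates the decorrelation idea."""
    bands, rows, cols = cube.shape
    x = cube.reshape(bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    cov = xc @ xc.T / xc.shape[1]               # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    basis = eigvecs[:, np.argsort(eigvals)[::-1]]   # principal components first
    transformed = basis.T @ xc
    return transformed.reshape(bands, rows, cols), basis, mean
```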

Journal ArticleDOI
TL;DR: The test results validate the pixel design and demonstrate that lossy prediction based focal plane image compression can be realized inside the sensor pixel array to achieve a high frame rate with much lower data readout volume.
Abstract: An alternative image decomposition method that exploits prediction via nearby pixels has been integrated on the CMOS image sensor focal plane. The proposed focal plane decomposition is compared to the 2-D discrete wavelet transform (DWT) decomposition commonly used in state of the art compression schemes such as SPIHT and JPEG2000. The method achieves comparable compression performance with much lower computational complexity and allows image compression to be implemented directly on the sensor focal plane in a completely pixel parallel structure. A CMOS prototype chip has been fabricated and tested. The test results validate the pixel design and demonstrate that lossy prediction based focal plane image compression can be realized inside the sensor pixel array to achieve a high frame rate with much lower data readout volume. The features of the proposed decomposition scheme also benefit real-time, low rate and low power applications.

Proceedings ArticleDOI
TL;DR: A set of universal steganalytic features is proposed, extracted from the normalized histograms of the local linear transform coefficients of an image; experiments show that the proposed feature set is very effective on a hybrid image database.
Abstract: This paper takes the task of image steganalysis as a texture classification problem. The impact of steganography to an image is viewed as the alteration of image texture in a fine scale. Specifically, stochastic textures are more likely to appear in a stego image than in a cover image from our observation and analysis. By developing a feature extraction technique previously used in texture classification, we propose a set of universal steganalytic features, which are extracted from the normalized histograms of the local linear transform coefficients of an image. Extensive experiments are conducted to make comparison of our proposed feature set with some existing universal steganalytic feature sets on gray-scale images by using Fisher Linear Discriminant (FLD). Some classical non-adaptive spatial domain steganographic algorithms, as well as some newly presented adaptive spatial domain steganographic algorithms that have never been reported to be broken by any universal steganalytic algorithm, are used for benchmarking. We also report the detection performance on JPEG steganography and JPEG2000 steganography. The comparative experimental results show that our proposed feature set is very effective on a hybrid image database.
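
A sketch of the feature idea, assuming a small bank of 3×3 local linear filters (the paper's actual filter bank and histogram binning may differ).

```python
import numpy as np
from scipy.ndimage import convolve

# A few 3x3 local linear filters (derivative- and Laplacian-like masks).
# These stand in for the paper's filter bank, which may differ.
FILTERS = [
    np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]], float),     # horizontal difference
    np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]], float),     # vertical difference
    np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float),  # Laplacian
]

def local_transform_histograms(gray, bins=np.linspace(-32, 32, 65)):
    """Normalized histograms of local linear transform coefficients,
    concatenated as a universal steganalytic feature vector (sketch)."""
    g = gray.astype(np.float64)
    feats = []
    for k in FILTERS:
        resp = convolve(g, k, mode='reflect')
        hist, _ = np.histogram(resp, bins=bins)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)
```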

Proceedings ArticleDOI
TL;DR: An overview of the key ideas behind the transform design in JPEG XR is provided, and how the transform is constructed from simple building blocks is described.
Abstract: JPEG XR is a draft international standard undergoing standardization within the JPEG committee, based on a Microsoft technology known as HD Photo. One of the key innovations in the draft JPEG XR standard is its integer-reversible hierarchical lapped transform. The transform can provide both bit-exact lossless and lossy compression in the same signal flow path. The transform requires only a small memory footprint while providing the compression benefits of a larger block transform. The hierarchical nature of the transform naturally provides three levels of multi-resolution signal representation. Its small dynamic range expansion, use of only integer arithmetic and its amenability to parallelized implementation lead to reduced computational complexity. This paper provides an overview of the key ideas behind the transform design in JPEG XR, and describes how the transform is constructed from simple building blocks.

Journal ArticleDOI
TL;DR: A removable visible watermarking scheme, which operates in the discrete cosine transform (DCT) domain, is proposed for combating copyright piracy and test results show that the introduced scheme succeeds in preventing the embedded watermark from illegal removal.
Abstract: A removable visible watermarking scheme, which operates in the discrete cosine transform (DCT) domain, is proposed for combating copyright piracy. First, the original watermark image is divided into 16×16 blocks and the preprocessed watermark to be embedded is generated by performing element-by-element matrix multiplication on the DCT coefficient matrix of each block and a key-based matrix. The intention of generating the preprocessed watermark is to guarantee the infeasibility of the illegal removal of the embedded watermark by the unauthorized users. Then, adaptive scaling and embedding factors are computed for each block of the host image and the preprocessed watermark according to the features of the corresponding blocks to better match the human visual system characteristics. Finally, the significant DCT coefficients of the preprocessed watermark are adaptively added to those of the host image to yield the watermarked image. The watermarking system is robust against compression to some extent. The performance of the proposed method is verified, and the test results show that the introduced scheme succeeds in preventing the embedded watermark from illegal removal. Moreover, experimental results demonstrate that legally recovered images can achieve superior visual effects, and peak signal-to-noise ratio values of these images are >50 dB.

Proceedings ArticleDOI
01 Dec 2008
TL;DR: In this article, the authors proposed an improved method of the reversible data hiding for JPEG images proposed by Xuan et al. They found that blocks located in the noisy part of the image are not suitable for embedding data, and proposed a method to judge whether a block consisting of 8×8 DCT coefficients is located in a smooth part of the image by using the DC coefficients of neighboring blocks.
Abstract: This paper presents an improved method of the reversible data hiding for JPEG images proposed by Xuan et al. The conventional method embeds data into the JPEG quantized 8×8 block DCT coefficients. In this method, we found that blocks located in the noisy part of the image are not suitable for embedding data. The proposed method can judge whether a block consisting of 8×8 DCT coefficients is located in a smooth part of the image by using the DC coefficients of neighboring blocks. Our method can avoid the noisy parts when embedding. This results in better performance in terms of capacity-distortion behavior.
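
A possible reading of the block-selection rule is sketched below; the spread-based criterion and the threshold are illustrative assumptions, not the authors' exact decision rule.

```python
import numpy as np

def is_smooth_block(dc_map, i, j, threshold=8):
    """Decide whether block (i, j) lies in a smooth image region by looking at
    the DC coefficients of its 4-neighbouring blocks, as the paper suggests.
    The spread-based rule and `threshold` are illustrative assumptions."""
    neighbours = []
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < dc_map.shape[0] and 0 <= nj < dc_map.shape[1]:
            neighbours.append(dc_map[ni, nj])
    neighbours = np.asarray(neighbours, dtype=np.float64)
    return neighbours.max() - neighbours.min() < threshold
```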

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method for watermarking stereo images is a semifragile one that is robust toward JPEG and JPEG2000 compression and fragile with respect to other signal manipulations.
Abstract: We present an object-oriented method for watermarking stereo images. Since stereo images are characterized by the perception of depth, the watermarking scheme we propose relies on the extraction of a depth map from the stereo pairs to embed the mark. The watermark embedding is performed in the wavelet domain using the quantization index modulation method. Experimental results show that the proposed method is a semifragile one that is robust toward JPEG and JPEG2000 compression and fragile with respect to other signal manipulations.
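
The quantization index modulation step can be sketched directly, since QIM itself is standard; the wavelet decomposition, the depth-map-driven coefficient selection, and the step size used in the paper are not reproduced.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Quantization index modulation: quantize each coefficient onto one of
    two interleaved lattices depending on the bit to hide."""
    bits = np.asarray(bits, dtype=np.float64)
    d = bits * (delta / 2.0)                      # dither: 0 or delta/2
    return delta * np.round((coeffs - d) / delta) + d

def qim_extract(coeffs, delta=8.0):
    """Minimum-distance decoding: pick the lattice each coefficient is closer to."""
    r0 = np.abs(coeffs - delta * np.round(coeffs / delta))
    d = delta / 2.0
    r1 = np.abs(coeffs - (delta * np.round((coeffs - d) / delta) + d))
    return (r1 < r0).astype(int)
```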

Journal ArticleDOI
TL;DR: Results demonstrate that compression up to 50:1 can be used with minimal effect on the performance of an iris recognition system; the imagery used includes both the CASIA iris database and the database collected by the University of Bath.
Abstract: The human iris is perhaps the most accurate biometric for use in identification. Commercial iris recognition systems currently can be found in several types of settings where a person’s true identity is required: to allow passengers in some airports to be rapidly processed through security; for access to secure areas; and for secure access to computer networks. The growing employment of iris recognition systems and the associated research to develop new algorithms will require large databases of iris images. If the required storage space is not adequate for these databases, image compression is an alternative. Compression allows a reduction in the storage space needed to store these iris images. This may, however, come at a cost: some amount of information may be lost in the process. We investigate the effects of image compression on the performance of an iris recognition system. Compression is performed using JPEG-2000 and JPEG, and the iris recognition algorithm used is an implementation of the Daugman algorithm. The imagery used includes both the CASIA iris database as well as the iris database collected by the University of Bath. Results demonstrate that compression up to 50:1 can be used with minimal effects on recognition.

Proceedings ArticleDOI
TL;DR: This paper compares the performance of three existing alternatives for compression of digital pictures by using different objective full-reference metrics, also considering perceptual quality metrics that take into account the color information of the data under analysis.
Abstract: The task of comparing the performance of different codecs is strictly related to research in the field of objective quality metrics. Even if several objective quality metrics have been proposed in the literature, the lack of standardization in the field of objective quality assessment and the lack of extensive and reliable comparisons of the performance of the different state-of-the-art metrics often make the results obtained using objective metrics not very reliable. In this paper we aim at comparing the performance of three of the existing alternatives for compression of digital pictures, i.e. JPEG, JPEG 2000, and JPEG XR compression, by using different objective Full Reference metrics and also considering perceptual quality metrics which take into account the color information of the data under analysis.

Journal ArticleDOI
01 Jul 2008
TL;DR: Performance evaluations on real 4-D medical images of varying modalities show an improvement in compression efficiency of up to three times that of other state-of-the-art compression methods such as 3D-JPEG2000.
Abstract: This paper presents an efficient lossless compression method for 4-D medical images based on the advanced video coding scheme (H.264/AVC). The proposed method efficiently reduces data redundancies in all four dimensions by recursively applying multiframe motion compensation. Performance evaluations on real 4-D medical images of varying modalities including functional magnetic resonance show an improvement in compression efficiency of up to three times that of other state-of-the-art compression methods such as 3D-JPEG2000.

Proceedings ArticleDOI
05 Nov 2008
TL;DR: This paper proposes a new approach to analyse the blocking periodicity by developing a linear dependency model of pixel differences, constructing a probability map of each pixel's belonging to this model, and finally extracting a peak window from the Fourier spectrum of the probability map.
Abstract: Since the JPEG image format is a popularly used image compression standard, tampering detection in JPEG images now plays an important role. The artifacts introduced by lossy JPEG compression can be seen as an inherent signature for compressed images. In this paper, we propose a new approach to analyse the blocking periodicity by 1) developing a linear dependency model of pixel differences, 2) constructing a probability map of each pixel's belonging to this model, and 3) finally extracting a peak window from the Fourier spectrum of the probability map. We will show that, for single and double compressed images, their peaks' energy distributions behave very differently. We exploit this property and derive statistical features from peak windows to classify whether an image has been tampered with by cropping and recompression. Experimental results demonstrate the validity of the proposed approach.
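
A simplified illustration of reading blocking periodicity from the Fourier spectrum is given below; it averages pixel second differences into a 1-D profile instead of building the per-pixel probability map the paper proposes.

```python
import numpy as np

def blocking_periodicity_spectrum(gray):
    """Expose the 8-pixel blocking periodicity left by JPEG compression.
    This simplified check averages horizontal second differences down the
    columns and inspects the Fourier magnitude of that 1-D profile; the paper
    builds a per-pixel probability map instead, but the idea of reading off
    spectral peaks at multiples of 1/8 cycles/pixel is the same."""
    g = gray.astype(np.float64)
    second_diff = np.abs(g[:, :-2] - 2.0 * g[:, 1:-1] + g[:, 2:])
    profile = second_diff.mean(axis=0)            # 1-D blocking profile
    profile = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size)         # cycles per pixel
    return freqs, spectrum                        # expect peaks near k/8
```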

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed image MDC scheme can achieve good coding performance, and shows how this difficulty can be overcome by an adaptive directional lifting (ADL) transform that is particularly suitable for decorrelating samples on the quincunx lattice.
Abstract: This paper proposes an efficient two-description image coding technique. The two side descriptions of an image are generated by quincunx subsampling. The decoding from any side description is done by an interpolation process that exploits sample correlation. Although the quincunx subsampling is a natural choice for the best use of sample correlations in image multiple-description coding (MDC), each side description is not amenable to existing image coding techniques because the pixels are not aligned rectilinearly. We show how this difficulty can be overcome by an adaptive directional lifting (ADL) transform that is particularly suitable for decorrelating samples on the quincunx lattice. The ADL transform can be embedded into JPEG 2000 to construct a practical MD image encoder. Experimental results demonstrate that the proposed image MDC scheme can achieve good coding performance.
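
The quincunx splitting and a (much simpler) side-decoder interpolation can be sketched as follows; the adaptive directional lifting transform that makes the scheme practical is not reproduced.

```python
import numpy as np

def quincunx_split(img):
    """Split an image into two quincunx (checkerboard) descriptions.
    Missing samples are marked with NaN so each description keeps its geometry."""
    rows, cols = np.indices(img.shape[:2])
    mask = (rows + cols) % 2 == 0
    d0 = np.where(mask, img.astype(np.float64), np.nan)
    d1 = np.where(~mask, img.astype(np.float64), np.nan)
    return d0, d1

def side_decode(desc):
    """Reconstruct an image from one description by filling each missing sample
    with the average of its available 4-connected neighbours (a far simpler
    interpolator than the ADL-based one in the paper)."""
    padded = np.pad(desc, 1, mode='edge')
    valid = (~np.isnan(padded)).astype(np.float64)
    vals = np.nan_to_num(padded, nan=0.0)
    total = vals[:-2, 1:-1] + vals[2:, 1:-1] + vals[1:-1, :-2] + vals[1:-1, 2:]
    count = valid[:-2, 1:-1] + valid[2:, 1:-1] + valid[1:-1, :-2] + valid[1:-1, 2:]
    est = total / np.maximum(count, 1.0)
    out = desc.copy()
    missing = np.isnan(out)
    out[missing] = est[missing]
    return out
```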

Book ChapterDOI
17 Dec 2008
TL;DR: A print-scan resilient watermarking method which takes advantage of multiple watermarks, of which two are used for inverting the geometrical transformations and the third is the multibit message watermark.
Abstract: In this paper, we propose a print-scan resilient watermarking method which takes advantage of multiple watermarking. The method presented here consists of three separate watermarks, of which two are used for inverting the geometrical transformations and the third is the multibit message watermark. A circular template watermark is embedded in the magnitudes of the Fourier transform to invert rotation and scale after the print-scan process, and another template watermark is embedded in the spatial domain to invert translations. The message watermark is embedded in the wavelet domain, and watermark robustness in both the approximation coefficients and the detail coefficients is tested. Blind, cross-correlation based methods are utilized to extract the watermarks. The obtained success ratios were at least 91% with JPEG and JPEG2000 quality factors of 80-100 and a scanner resolution of 300 dpi. The BER obtained with the previous settings was less than 1.5%.

Proceedings ArticleDOI
04 Mar 2008
TL;DR: The experimental results show that the proposed approach can provide a higher information-hiding capacity than the Jpeg-Jsteg and Chang et al. methods based on the conventional blocks of 8×8 pixels.
Abstract: The two most important aspects of any image-based steganographic system are the quality of the stego-image and the capacity of the cover image. This paper proposes a novel and high capacity steganographic approach based on the Discrete Cosine Transform (DCT) and JPEG compression. The JPEG technique divides the input image into non-overlapping blocks of 8×8 pixels and uses the DCT transform. However, our proposed method divides the cover image into non-overlapping blocks of 16×16 pixels. For each quantized DCT block, the least two significant bits (2-LSBs) of each middle-frequency coefficient are modified to embed two secret bits. Our aim is to investigate the data hiding efficiency using larger blocks for JPEG compression. Our experimental results show that the proposed approach can provide a higher information-hiding capacity than the Jpeg-Jsteg and Chang et al. methods based on the conventional blocks of 8×8 pixels. Furthermore, the produced stego-images are almost identical to the original cover images.
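
A hedged sketch of the embedding step for one 16×16 block; a single quantization step stands in for the JPEG quantization table, the entropy-coding pipeline is omitted, and the "middle-frequency" positions used here are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative set of "middle-frequency" positions inside a 16x16 DCT block;
# the paper's exact coefficient selection may differ.
MID_FREQ = [(u, v) for u in range(3, 8) for v in range(3, 8)]

def embed_2lsb(block16, secret_bits, q_step=16):
    """Hide pairs of secret bits in the two least-significant bits of quantized
    middle-frequency DCT coefficients of one 16x16 pixel block (sketch)."""
    coeffs = dctn(block16.astype(np.float64) - 128.0, norm='ortho')
    q = np.round(coeffs / q_step).astype(np.int64)   # crude uniform quantization
    bits = iter(secret_bits)
    for (u, v) in MID_FREQ:
        try:
            b1, b0 = next(bits), next(bits)
        except StopIteration:
            break
        q[u, v] = (q[u, v] & ~3) | (b1 << 1) | b0    # replace the 2 LSBs
    stego = idctn(q * float(q_step), norm='ortho') + 128.0
    return np.clip(np.round(stego), 0, 255).astype(np.uint8), q
```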

Journal ArticleDOI
TL;DR: This letter proposes a new two-dimensional nonseparable adaptive interpolation filter, calculated for every fractional-pel direction, which enables coding gains of up to 0.98 dB compared to the ADL coder and up to 2.4 dB compared to JPEG 2000 for typical test images.
Abstract: The adaptive directional lifting-based wavelet transform (ADL) locally adapts the filtering directions to the local properties of the image. In this letter, instead of using the conventional interpolation filter for the directional prediction with fractional-pel accuracy, a new two-dimensional nonseparable adaptive interpolation filter is proposed. The adaptive filter is calculated for every fractional-pel direction so as to minimize the energy of the prediction error. The tradeoff between reducing the prediction error and the overhead to code the interpolation filter is discussed. This enables coding gains of up to 0.98 dB compared to the ADL coder, and up to 2.4 dB compared to JPEG 2000, for typical test images.
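
The core idea, estimating filter taps that minimize prediction-error energy by least squares, can be sketched as follows; the fractional-pel directional prediction inside the ADL lifting structure is omitted.

```python
import numpy as np

def fit_interpolation_filter(reference, target, support=2):
    """Least-squares estimate of a (2*support+1)^2 nonseparable filter that
    predicts `target` pixels from the co-located neighbourhood in `reference`,
    i.e. the filter minimizing the prediction-error energy (sketch of the
    adaptive-filter idea; the ADL/fractional-pel machinery is omitted)."""
    size = 2 * support + 1
    h, w = target.shape
    rows = []
    for dy in range(-support, support + 1):
        for dx in range(-support, support + 1):
            shifted = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
            rows.append(shifted[support:h - support, support:w - support].ravel())
    A = np.stack(rows, axis=1)                       # observations x filter taps
    b = target[support:h - support, support:w - support].ravel()
    taps, *_ = np.linalg.lstsq(A, b, rcond=None)
    return taps.reshape(size, size)
```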

DOI
21 Jan 2008
TL;DR: This paper presents a novel technique for the suspension and resumption of the decoder, making it possible to carve JPEG images in an acceptable time and develops an accurate, fully automated carver.
Abstract: Data carving is a digital forensic technique which aims to reconstitute a file from unstructured data sources with no knowledge of the previously stored file system. This paper presents an approach for the carving of JPEG files. Since JPEG is one of the most popular image formats in the storage and distribution of digital photographic imagery, it is frequently of great interest for certain types of forensic investigations. We apply the previously developed carving theory to the JPEG image format and develop an accurate, fully automated carver. We present a novel technique for the suspension and resumption of the decoder, making it possible to carve JPEG images in an acceptable time.
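
A minimal sketch of the first stage of JPEG carving, locating candidate start-of-image and end-of-image markers in a raw byte stream; the paper's contribution, validating candidates with a suspendable and resumable decoder, is not reproduced.

```python
def find_jpeg_candidates(blob):
    """Scan raw bytes for JPEG start-of-image (FF D8 FF) and end-of-image
    (FF D9) markers and return candidate (start, end) byte ranges. A real
    carver, like the one in the paper, must additionally validate candidates
    by (partially) decoding them, since FF D9 can also occur inside ordinary
    entropy-coded data."""
    candidates = []
    start = blob.find(b'\xff\xd8\xff')
    while start != -1:
        end = blob.find(b'\xff\xd9', start + 3)
        if end == -1:
            break
        candidates.append((start, end + 2))
        start = blob.find(b'\xff\xd8\xff', start + 1)
    return candidates

# Usage: with open('disk.img', 'rb') as f: spans = find_jpeg_candidates(f.read())
```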

Book ChapterDOI
01 Jan 2008
TL;DR: It is shown that most studies agree that compression does not significantly deteriorate recognition accuracy in either case, and that a lot of work is still to be done to reach the real-life implementation stage of compression in face recognition systems.
Abstract: In this chapter we give an up-to-date overview of papers that combine research in image compression and face recognition. In almost every imaginable real-life face recognition scenario, image compression seems unavoidable, and standard JPEG and JPEG2000 compression schemes seem to be the logical choice. As a prerequisite to implementing compression in face recognition systems, two important questions have to be answered. The first one is how the image degradation that naturally comes from a lossy compression scheme affects recognition accuracy. The second one is whether face recognition can be performed directly in the compressed domain, without fully decompressing the images, and how that kind of approach affects the recognition accuracy. This paper presents conclusions from papers published up to now that tried to answer those questions. We shall show that most studies agree that compression does not significantly deteriorate recognition accuracy in both cases. Nevertheless, as this survey will show, research on this subject is still at its pioneering stage and a lot of work is still to be done (especially in compressed-domain recognition) to reach the real-life implementation stage.

Journal ArticleDOI
TL;DR: Compression artifacts in thin sections are significantly attenuated in AIP images, and it is justifiable to compress them to a compression level currently accepted for thick sections.
Abstract: OBJECTIVE. The objective of our study was to assess the effects of compressing source thin-section abdominal CT images on final transverse average-intensity-projection (AIP) images.MATERIALS AND METHODS. At reversible, 4:1, 6:1, 8:1, 10:1, and 15:1 Joint Photographic Experts Group (JPEG) 2000 compressions, we compared the artifacts in 20 matching compressed thin sections (0.67 mm), compressed thick sections (5 mm), and AIP images (5 mm) reformatted from the compressed thin sections. The artifacts were quantitatively measured with peak signal-to-noise ratio (PSNR) and a perceptual quality metric (High Dynamic Range Visual Difference Predictor [HDR-VDP]). By comparing the compressed and original images, three radiologists independently graded the artifacts as 0 (none, indistinguishable), 1 (barely perceptible), 2 (subtle), or 3 (significant). Friedman tests and exact tests for paired proportions were used.RESULTS. At irreversible compressions, the artifacts tended to increase in the order of AIP, thick-sect...

Journal ArticleDOI
TL;DR: This paper presents a major extension of the 3D-SPIHT (set partitioning in hierarchical trees) image compression algorithm that enables random access decoding of any specified region of the image volume at a given spatial resolution and given bit rate from a single codestream.
Abstract: End users of large volume image datasets are often interested only in certain features that can be identified as quickly as possible. For hyperspectral data, these features could reside only in certain ranges of spectral bands and certain spatial areas of the target. The same holds true for volume medical images for a certain volume region of the subject's anatomy. High spatial resolution may be the ultimate requirement, but in many cases a lower resolution would suffice, especially when rapid acquisition and browsing are essential. This paper presents a major extension of the 3D-SPIHT (set partitioning in hierarchical trees) image compression algorithm that enables random access decoding of any specified region of the image volume at a given spatial resolution and given bit rate from a single codestream. Final spatial and spectral (or axial) resolutions are chosen independently. Because the image wavelet transform is encoded in tree blocks and the bit rates of these tree blocks are minimized through a rate-distortion optimization procedure, the various resolutions and qualities of the images can be extracted while reading a minimum amount of bits from the coded data. The attributes and efficiency of this 3D-SPIHT extension are demonstrated for several medical and hyperspectral images in comparison to the JPEG2000 Multicomponent algorithm.