
Showing papers on "Lossless compression published in 2003"


Journal ArticleDOI
TL;DR: Circular interpretation of bijective transformations is proposed to implement a method that fulfills all quality and functionality requirements of lossless watermarking.
Abstract: The need for reversible or lossless watermarking methods has been highlighted in the literature to associate subliminal management information with losslessly processed media and to enable their authentication. The paper first analyzes the specificity and the application scope of lossless watermarking methods. It explains why early attempts to achieve reversibility are not satisfactory: they are restricted to well-chosen images or to a strictly lossless context, and/or suffer from annoying visual artifacts. Circular interpretation of bijective transformations is proposed to implement a method that fulfills all quality and functionality requirements of lossless watermarking. Results of several bench tests demonstrate the validity of the approach.

438 citations


Proceedings ArticleDOI
05 May 2003
TL;DR: It is shown that with several typical compression tools, there is a net energy increase when compression is applied before transmission, and hardware-aware programming optimizations are demonstrated, which improve energy efficiency by 51%.
Abstract: Wireless transmission of a bit can require over 1000 times more energy than a single 32-bit computation. It would therefore seem desirable to perform significant computation to reduce the number of bits transmitted. If the energy required to compress data is less than the energy required to send it, there is a net energy savings and consequently, a longer battery life for portable computers. This paper reports on the energy of lossless data compressors as measured on a StrongARM SA-110 system. We show that with several typical compression tools, there is a net energy increase when compression is applied before transmission. Reasons for this increase are explained, and hardware-aware programming optimizations are demonstrated. When applied to Unix compress, these optimizations improve energy efficiency by 51%. We also explore the fact that, for many usage models, compression and decompression need not be performed by the same algorithm. By choosing the lowest-energy compressor and decompressor on the test platform, rather than using default levels of compression, overall energy to send compressible web data can be reduced 31%. Energy to send harder-to-compress English text can be reduced 57%. Compared with a system using a single optimized application for both compression and decompression, the asymmetric scheme saves 11% or 12% of the total energy depending on the dataset.
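
As a rough illustration of the trade-off described in this abstract (not the paper's measured energy model), the sketch below checks when compress-before-transmit pays off, using the abstract's figure that sending one bit can cost on the order of 1000 times a 32-bit operation; the per-byte operation count is an assumed value.

```python
# Back-of-envelope break-even check for compress-before-transmit.
# The constants are illustrative assumptions, not measured values from the paper.

E_OP = 1.0          # energy of one 32-bit operation (arbitrary units)
E_TX_BIT = 1000.0   # energy to transmit one bit (~1000x a 32-bit op, per the abstract)

def net_energy(original_bits, compressed_bits, ops_to_compress):
    """Energy of compress-then-send minus energy of sending the raw data.

    A negative result means compression saves energy overall.
    """
    send_raw = original_bits * E_TX_BIT
    compress_and_send = ops_to_compress * E_OP + compressed_bits * E_TX_BIT
    return compress_and_send - send_raw

# Example: 1 MB compressed 2:1, assuming roughly 200 operations per input byte.
n_bits = 8 * 2**20
print(net_energy(n_bits, n_bits // 2, ops_to_compress=200 * 2**20))
```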

269 citations


Book
03 Jan 2003
TL;DR: Part I Theory: Information Theory behind Source Coding Complexity Measures Part II Compression Techniques: Universal Codes Huffman Coding Arithmetic Coding Dictionary-Based Data Compression: An Algorithmic Perspective.
Abstract: Part I Theory: Information Theory behind Source Coding Complexity Measures Part II Compression Techniques: Universal Codes Huffman Coding Arithmetic Coding Dictionary-Based Data Compression: An Algorithmic Perspective Burrows-Wheeler Compression Symbol-Ranking and ACB Compression Part III Applications: Lossless Image Compression Text Compression Compression of Telemetry Lossless Compression of Audio Data Algorithms for Delta Compression and Remote File Synchronization Compression of Unicode Files Part IV Standards: JPEG-LS Lossless and Near Lossless Image Compression The CCSDS Lossless Data Compression Recommendation for Space Applications Lossless Bilevel Image Compression JPEG2000: Highly Scalable Image Compression PNG Lossless Image Compression Facsimile Compression Part V Hardware: Hardware Implementation of Data Compression

190 citations


Journal ArticleDOI
TL;DR: This paper presents a new compression method for embedded core-based system-on-a-chip test based on a new variable-length input Huffman coding scheme, which proves to be the key element that determines all the factors that influence the TDCE parameters.
Abstract: This paper presents a new compression method for embedded core-based system-on-a-chip test. In addition to the new compression method, this paper analyzes the three test data compression environment (TDCE) parameters: compression ratio, area overhead, and test application time, and explains the impact of the factors which influence these three parameters. The proposed method is based on a new variable-length input Huffman coding scheme, which proves to be the key element that determines all the factors that influence the TDCE parameters. Extensive experimental comparisons show that, when compared with three previous approaches, which reduce some test data compression environment's parameters at the expense of the others, the proposed method is capable of improving on all the three TDCE parameters simultaneously.
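
As background for the coding scheme discussed above, here is a minimal classic Huffman code construction in Python; it is a generic sketch only, not the paper's variable-length-input scheme or its on-chip decompression hardware.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, tree), where tree is a symbol or (left, right).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):          # internal node
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                # leaf: a symbol
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_code(b"compression of test data for system-on-a-chip cores")
encoded = "".join(codes[b] for b in b"test")
```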

178 citations


Journal ArticleDOI
TL;DR: An overview of state-of-the-art 3-D wavelet coders for medical volumetric data is given, and new quadtree- and block-based coding methods are proposed that combine lossless coding with quality and resolution scalability, providing competitive lossy and lossless compression results.
Abstract: Several techniques based on the three-dimensional (3-D) discrete cosine transform (DCT) have been proposed for volumetric data coding. These techniques fail to provide lossless coding coupled with quality and resolution scalability, which is a significant drawback for medical applications. This paper gives an overview of several state-of-the-art 3-D wavelet coders that do meet these requirements and proposes new compression methods exploiting the quadtree and block-based coding concepts, layered zero-coding principles, and context-based arithmetic coding. Additionally, a new 3-D DCT-based coding scheme is designed and used for benchmarking. The proposed wavelet-based coding algorithms produce embedded data streams that can be decoded up to the lossless level and support the desired set of functionality constraints. Moreover, objective and subjective quality evaluation on various medical volumetric datasets shows that the proposed algorithms provide competitive lossy and lossless compression results when compared with the state-of-the-art.

176 citations


Journal ArticleDOI
TL;DR: Two state-of-the-art 3-D wavelet video coding techniques are modified and applied to compression of medical volumetric data, achieving the best performance published so far in the literature-both in terms of lossy and lossless compression.
Abstract: We study lossy-to-lossless compression of medical volumetric data using three-dimensional (3-D) integer wavelet transforms. To achieve good lossy coding performance, it is important to have transforms that are unitary. In addition to the lifting approach, we first introduce a general 3-D integer wavelet packet transform structure that allows implicit bit shifting of wavelet coefficients to approximate a 3-D unitary transformation. We then focus on context modeling for efficient arithmetic coding of wavelet coefficients. Two state-of-the-art 3-D wavelet video coding techniques, namely, 3-D set partitioning in hierarchical trees (Kim et al., 2000) and 3-D embedded subband coding with optimal truncation (Xu et al., 2001), are modified and applied to compression of medical volumetric data, achieving the best performance published so far in the literature-both in terms of lossy and lossless compression.

151 citations


Journal ArticleDOI
R. Benzid1, Farhi Marir1, A. Boussaad1, M. Benyoucef1, D. Arar1 
TL;DR: Lossless Huffman coding is used to increase the compression ratio, and good quality preservation at high compression ratios is reported.
Abstract: A new method for ECG compression is presented. After the pyramidal wavelet decomposition, the resultant coefficients are subjected to iterative thresholding until a fixed target percentage of zeroed wavelet coefficients is reached. Next, lossless Huffman coding is used to increase the compression ratio. Good quality preservation at high compression ratios is reported.

128 citations


Journal ArticleDOI
TL;DR: A simple method for compressing very large and regularly sampled scalar fields based on the new Lorenzo predictor, which is well suited for out-of-core compression and decompression and often outperforms wavelet compression in an L∞ sense.
Abstract: We present a simple method for compressing very large and regularly sampled scalar fields. Our method is particularly attractive when the entire data set does not fit in memory and when the sampling rate is high relative to the feature size of the scalar field in all dimensions. Although we report results for R^3 and R^4 data sets, the proposed approach may be applied to higher dimensions. The method is based on the new Lorenzo predictor, introduced here, which estimates the value of the scalar field at each sample from the values at processed neighbors. The predicted values are exact when the n-dimensional scalar field is an implicit polynomial of degree n-1. Surprisingly, when the residuals (differences between the actual and predicted values) are encoded using arithmetic coding, the proposed method often outperforms wavelet compression in an L∞ sense. The proposed approach may be used both for lossy and lossless compression and is well suited for out-of-core compression and decompression, because a trivial implementation, which sweeps through the data set reading it once, requires maintaining only a small buffer in core memory, whose size barely exceeds a single (n-1)-dimensional slice of the data.
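
A minimal sketch of the 3-D case of the Lorenzo predictor described above, assuming zero padding at the volume boundaries; the out-of-core sweep and the arithmetic coder of the paper are omitted.

```python
import numpy as np

def lorenzo_residuals_3d(volume):
    """Residuals of the 3-D Lorenzo predictor over a regularly sampled field.

    Each sample is predicted from its 7 already-visited cube neighbours with
    alternating signs; boundary samples are handled here with zero padding,
    which is a simplification of the original scheme.
    """
    v = np.pad(volume.astype(np.int64), ((1, 0), (1, 0), (1, 0)))
    pred = (  v[:-1, 1:, 1:] + v[1:, :-1, 1:] + v[1:, 1:, :-1]
            - v[:-1, :-1, 1:] - v[:-1, 1:, :-1] - v[1:, :-1, :-1]
            + v[:-1, :-1, :-1])
    return volume.astype(np.int64) - pred     # feed these to an entropy coder

residuals = lorenzo_residuals_3d(np.random.randint(0, 256, (16, 16, 16)))
```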

124 citations


Journal ArticleDOI
TL;DR: The results show that the proposed lossless compression method for hyperspectral images outperforms previous methods.
Abstract: A clustered differential pulse code modulation lossless compression method for hyperspectral images is presented. The spectra of a hyperspectral image are clustered, and an optimized predictor is calculated for each cluster. Prediction is performed using a linear predictor. After prediction, the difference between the predicted and original values is computed. The difference is entropy-coded using an adaptive entropy coder for each cluster. The compression ratios achieved here are compared with those of existing methods. The results show that the proposed lossless compression method for hyperspectral images outperforms previous methods.
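
A hedged sketch of the clustered-prediction idea, under assumptions not taken from the paper: k-means for the clustering step, a second-order least-squares predictor per cluster, and no entropy coder.

```python
import numpy as np
from sklearn.cluster import KMeans   # clustering choice is an assumption

def clustered_dpcm_residuals(cube, n_clusters=8, order=2):
    """Per-cluster linear prediction along the spectral axis of a hyperspectral cube.

    cube: array of shape (rows, cols, bands). Returns prediction residuals to be
    entropy coded, plus the cluster label of each pixel (side information).
    """
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(np.float64)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(spectra)

    residuals = np.zeros_like(spectra)
    residuals[:, :order] = spectra[:, :order]          # first bands sent verbatim
    for c in range(n_clusters):
        sel = spectra[labels == c]
        if len(sel) == 0:
            continue
        # Least-squares predictor of band b from the previous `order` bands.
        for b in range(order, bands):
            coef, *_ = np.linalg.lstsq(sel[:, b - order:b], sel[:, b], rcond=None)
            pred = np.rint(sel[:, b - order:b] @ coef)
            residuals[labels == c, b] = sel[:, b] - pred
    return residuals.reshape(rows, cols, bands), labels.reshape(rows, cols)

cube = np.random.randint(0, 4096, (32, 32, 64))
res, labels = clustered_dpcm_residuals(cube)
```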

122 citations


Journal ArticleDOI
TL;DR: A speedup metric is defined that combines space gains due to compression with temporal overheads due to the compression routine and the transmission serialization and is empirically verified using a special-purpose Internet-based networking application.
Abstract: We compress phase-shift digital holograms (whole Fresnel fields) for the transmission of three-dimensional images. For real-time networking applications, the time required to compress can be as critical as the compression rate. We achieve lossy compression through quantization of both the real and imaginary streams, followed by a bit packing operation. Compression losses in the reconstructed objects were quantified. We define a speedup metric that combines space gains due to compression with temporal overheads due to the compression routine and the transmission serialization. We empirically verify transmission speedup due to compression using a special-purpose Internet-based networking application.
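
A minimal sketch of the quantize-and-pack step described above, assuming uniform scalar quantization of the real and imaginary streams to a few bits each; the quantizer design, hologram reconstruction, and networking layers are not reproduced.

```python
import numpy as np

def quantize_and_pack(field, bits=4):
    """Uniformly quantize the real and imaginary parts of a complex field to
    `bits` bits each and pack the resulting codes into a byte stream."""
    parts = np.stack([field.real, field.imag])
    lo, hi = parts.min(), parts.max()
    levels = (1 << bits) - 1
    codes = np.rint((parts - lo) / (hi - lo) * levels).astype(np.uint8)
    # Expand each code into its `bits` least-significant bits, then pack to bytes.
    bitplanes = ((codes[..., None] >> np.arange(bits)) & 1).astype(np.uint8)
    packed = np.packbits(bitplanes.ravel())
    return packed, (lo, hi, bits, field.shape)   # side info needed to dequantize

hologram = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
stream, meta = quantize_and_pack(hologram, bits=4)
```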

113 citations


Journal ArticleDOI
TL;DR: The dictionary-based approach not only reduces test data volume but it also eliminates the need for additional synchronization and handshaking between the SOC and the ATE, and generally provides higher compression for the same amount of hardware overhead.
Abstract: We present a dictionary-based test data compression approach for reducing test data volume in SOCs. The proposed method is based on the use of a small number of ATE channels to deliver compressed test patterns from the tester to the chip and to drive a large number of internal scan chains in the circuit under test. Therefore, it is especially suitable for a reduced pin-count and low-cost DFT test environment, where a narrow interface between the tester and the SOC is desirable. The dictionary-based approach not only reduces test data volume but it also eliminates the need for additional synchronization and handshaking between the SOC and the ATE. The dictionary entries are determined during the compression procedure by solving a variant of the well-known clique partitioning problem from graph theory. Experimental results for the ISCAS-89 benchmarks and representative test data from IBM show that the proposed method outperforms a number of recently-proposed test data compression techniques. Compared to the previously proposed test data compression approach based on selective Huffman coding with variable-length indices, the proposed approach generally provides higher compression for the same amount of hardware overhead.

Journal ArticleDOI
TL;DR: A compression technique is proposed which is based on motion compensation, optimal three-dimensional (3-D) linear prediction and context based Golomb-Rice entropy coding, which is compared with 3-D extensions of the JPEG-LS standard for still image compression.
Abstract: We consider the problem of lossless compression of video by taking into account temporal information. Lossless video compression is an interesting possibility for production and contribution applications. We propose a compression technique which is based on motion compensation, optimal three-dimensional (3-D) linear prediction and context-based Golomb-Rice (1966, 1979) entropy coding. The proposed technique is compared with 3-D extensions of the JPEG-LS standard for still image compression. A compression gain of about 0.8 bit/pel with respect to static JPEG-LS, applied on a frame-by-frame basis, is achievable at a reasonable computational complexity.
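
For readers unfamiliar with the entropy-coding building block, below is a minimal Golomb-Rice encoder (power-of-two divisor, zig-zag mapping of signed residuals); the context modelling, motion compensation, and 3-D prediction of the paper are not shown, and the mapping convention is an assumption.

```python
def rice_encode(residuals, k=2):
    """Golomb-Rice code with divisor m = 2**k, returned as a bit string.

    Signed prediction residuals are first mapped to non-negative integers
    with the usual zig-zag mapping: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    """
    bits = []
    for r in residuals:
        n = 2 * r if r >= 0 else -2 * r - 1
        q, rem = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")                 # quotient in unary
        if k:
            bits.append(format(rem, f"0{k}b"))     # remainder in k bits
    return "".join(bits)

print(rice_encode([0, -1, 3, 2, -4]))
```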

Journal ArticleDOI
TL;DR: A combined architecture for the 5-3 and 9-7 transforms with minimum area is presented, and compared to existing architectures, memory resource and area can be reduced thanks to the proposed solution.
Abstract: The wavelet transform is a very promising tool for image compression. In JPEG2000, two filter banks are used, one an integer lossless 5-3 filter, and one a lossy 9-7. A combined architecture for the 5-3 and 9-7 transforms with minimum area is presented. The lifting scheme is used to realize a fast wavelet transform. Two lines are processed at a time. This line-based architecture allows minimum memory requirement and fast calculation. The pipeline and the optimization of the operations provide speed, while the combination of the two transforms in one structure contributes to saving the area. On a VIRTEXE1000-8 FPGA implementation, decoding of 2 pixels per clock cycle can be performed at 110 MHz. Only 19% of the total area of the VIRTEXE1000 is needed. Compared to existing architectures, memory resource and area can be reduced thanks to the proposed solution.
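
To make the lifting scheme concrete, here is a software sketch of one level of the reversible 5-3 lifting transform in one dimension with simplified boundary handling; it illustrates the filter only, not the paper's combined 5-3/9-7 line-based hardware datapath.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the reversible 5/3 lifting transform (even-length 1-D signal).

    Returns (lowpass, highpass) integer subbands; symmetric boundary extension
    is approximated here by edge replication for brevity.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])         # right boundary handling
    d = odd - ((even + even_next) >> 1)               # predict step (highpass)
    d_prev = np.append(d[0], d[:-1])                  # left boundary handling
    s = even + ((d_prev + d + 2) >> 2)                # update step (lowpass)
    return s, d

def lift_53_inverse(s, d):
    """Exact inverse of lift_53_forward (lifting steps undone in reverse order)."""
    d_prev = np.append(d[0], d[:-1])
    even = s - ((d_prev + d + 2) >> 2)
    even_next = np.append(even[1:], even[-1])
    odd = d + ((even + even_next) >> 1)
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.random.randint(0, 256, 64)
assert np.array_equal(sig, lift_53_inverse(*lift_53_forward(sig)))
```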

Patent
Wei-ge Chen1, Chao He1
03 Sep 2003
TL;DR: A mixed lossless audio compression method combines lossy and lossless compression within the same audio signal, coding transition frames between the two modes and handling frames that exhibit poor lossy compression performance.
Abstract: A mixed lossless audio compression has application to a unified lossy and lossless audio compression scheme that combines lossy and lossless audio compression within the same audio signal. The mixed lossless compression codes a transition frame between lossy and lossless coding frames to produce seamless transitions. The mixed lossless coding performs a lapped transform and inverse lapped transform to produce an appropriately windowed and folded pseudo-time domain frame, which can then be losslessly coded. The mixed lossless coding also can be applied to frames that exhibit poor lossy compression performance.

Journal ArticleDOI
TL;DR: A new algorithm for electrocardiogram (ECG) compression based on the compression of the linearly predicted residuals of the wavelet coefficients of the signal, which reduces the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level.

Patent
Wei-ge Chen1, Chao He1
14 Jul 2003
TL;DR: In this paper, a unified lossy and lossless audio compression scheme is proposed that combines lossy and lossless audio compression within the same audio signal and employs mixed lossless coding of a transition frame between lossy and lossless coding frames to produce seamless transitions.
Abstract: A unified lossy and lossless audio compression scheme combines lossy and lossless audio compression within the same audio signal. This approach employs mixed lossless coding of a transition frame between lossy and lossless coding frames to produce seamless transitions. The mixed lossless coding performs a lapped transform and inverse lapped transform to produce an appropriately windowed and folded pseudo-time domain frame, which can then be losslessly coded. The mixed lossless coding also can be applied to frames that exhibit poor lossy compression performance.

Journal ArticleDOI
TL;DR: A “fast evolutionary algorithm” (FEA) that does not evaluate all new individuals, thus operating faster and finding on average 4% better fitness values or compression ratios using only 58% of the number of evaluations needed by an EA in lossless (lossy) compression mode.

Journal ArticleDOI
01 Jun 2003
TL;DR: This paper presents the X-MatchPRO high-speed lossless data compression algorithm and its hardware implementation, which enables data independent throughputs of 1.6 Gbit/s compression and decompression using contemporary low-cost reprogrammable field-programmable gate array technology.
Abstract: This paper presents the X-MatchPRO high-speed lossless data compression algorithm and its hardware implementation, which enables data independent throughputs of 1.6 Gbit/s compression and decompression using contemporary low-cost reprogrammable field-programmable gate array technology. A full-duplex implementation is presented that allows a combined compression and decompression performance of 3.2 Gbit/s. The features of the compression algorithm and architecture that have enabled the high throughputs are described in detail. A comparison between this device and other commercially available data compressors is made in terms of technology, compression ratio, and throughput. X-MatchPRO is a fully synchronous design proven in silicon specially targeted to improve the performance of Gbit/s storage and communication applications.

Proceedings ArticleDOI
25 Mar 2003
TL;DR: A locally adaptive partitioning algorithm is introduced that performs comparably in this application to a more expensive globally optimal one that employs dynamic programming.
Abstract: High dimensional source vectors, such as those that occur in hyperspectral imagery, are partitioned into a number of subvectors of different length and then each subvector is vector quantized (VQ) individually with an appropriate codebook. A locally adaptive partitioning algorithm is introduced that performs comparably in this application to a more expensive globally optimal one that employs dynamic programming. The VQ indices are entropy coded and used to condition the lossless or near-lossless coding of the residual error. Motivated by the need for maintaining uniform quality across all vector components, a percentage maximum absolute error distortion measure is employed. Experiments on the lossless and near-lossless compression of NASA AVIRIS images are presented. A key advantage of the approach is the use of independent small VQ codebooks that allow fast encoding and decoding.

01 Jan 2003
TL;DR: In this article, a linear-time approximation algorithm for the grammar-based compression problem is presented, which is an optimization problem to minimize the size of a context-free grammar deriving a given string.
Abstract: A linear-time approximation algorithm for the grammar-based compression problem is presented. This is an optimization problem to minimize the size of a context-free grammar deriving a given string. For each string of length n, the algorithm guarantees an O(log(n/g*)) approximation ratio, where g* is the size of a minimum grammar, without suffix tree construction.
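
To illustrate what a context-free grammar deriving a given string looks like, here is a toy Re-Pair-style compressor; it is not the paper's linear-time algorithm and carries no approximation guarantee.

```python
from collections import Counter

def repair_grammar(s):
    """Toy Re-Pair-style grammar compression: repeatedly replace the most
    frequent adjacent pair of symbols with a fresh nonterminal.

    Returns (final sequence, rules), where rules maps nonterminal -> (a, b).
    Illustrative only: this quadratic version is far from linear time.
    """
    seq = list(s)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):          # greedy left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

seq, rules = repair_grammar("abracadabra abracadabra")
```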

Proceedings ArticleDOI
Pengwei Hao1, Q. Shi
24 Nov 2003
TL;DR: Experiments with KLT and wavelet based JPEG-2000 show that reversible KLT (RKLT) outperforms other approaches for all of the test images in the case of both lossy and lossless compression.
Abstract: In this paper, we presented a method for integer reversible implementation of the KLT for multiple component image compression. The progressive-to-lossless compression algorithm employed the JPEG-2000 transform coding strategy, using the multiple component transform (MCT) across the components, followed by a 2-dimensional wavelet transform on individual eigen images. The linear MCTs we tested and compared are the KLT, the discrete wavelet transform (DWT), and, for TM satellite images only, a tasselled cap transform (TCT). The computational complexity of the reversible integer implementation is no more than that of the naive transformation, and the overhead data is very small. Its effectiveness was evaluated using two 6-band Landsat TM satellite images and an 80-component hyper-spectral remotely-sensed image. Experiments with KLT and wavelet based JPEG-2000 show that the reversible KLT (RKLT) outperforms the other approaches for all of the test images in the case of both lossy and lossless compression.

Proceedings ArticleDOI
06 Apr 2003
TL;DR: A new steganalytic technique based on the difference image histogram aimed at LSB steganography that can not only detect the existence of hidden messages embedded using sequential or random LSB replacement in images reliably, but also estimate the amount ofhidden messages exactly.
Abstract: A new steganalytic technique based on the difference image histogram aimed at LSB steganography is proposed. Translation coefficients between difference image histograms are defined as a measure of the weak correlation between the least significant bit (LSB) plane and the remaining bit planes, and then used to construct a classifier to discriminate the stego-image from the carrier-image. The algorithm can not only detect the existence of hidden messages embedded using sequential or random LSB replacement in images reliably, but also estimate the amount of hidden messages exactly. Experimental results show that for raw lossless compressed images the new algorithm has a better performance than the RS analysis method and improves the computation speed significantly.
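
A hedged sketch of the quantity the method starts from, the difference image histogram, together with a simulated random LSB replacement for comparison; the translation coefficients and the classifier described in the paper are not reproduced.

```python
import numpy as np

def difference_histogram(img, max_abs=8):
    """Histogram of horizontally adjacent pixel differences in [-max_abs, max_abs]."""
    d = img[:, 1:].astype(np.int32) - img[:, :-1].astype(np.int32)
    return np.array([(d == v).sum() for v in range(-max_abs, max_abs + 1)])

def embed_lsb_random(img, rate=0.5, seed=0):
    """Simulate LSB replacement in a random fraction `rate` of the pixels."""
    rng = np.random.default_rng(seed)
    stego = img.copy()
    mask = rng.random(img.shape) < rate
    stego[mask] = (stego[mask] & 0xFE) | rng.integers(0, 2, mask.sum(), dtype=img.dtype)
    return stego

# On real (correlated) images the two histograms differ measurably; random data is used
# here only to keep the example self-contained.
cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
h_cover = difference_histogram(cover)
h_stego = difference_histogram(embed_lsb_random(cover))
```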

Proceedings ArticleDOI
04 Aug 2003
TL;DR: In this article, a fixed-length asymptotically optimal scheme for lossless compression of stationary ergodic tree sources with memory is proposed based on the concatenation of the Burrows-Wheeler block sorting transform with the syndrome former of a linear error correcting code.
Abstract: A new fixed-length asymptotically optimal scheme for lossless compression of stationary ergodic tree sources with memory is proposed. Our scheme is based on the concatenation of the Burrows-Wheeler block sorting transform with the syndrome former of a linear error correcting code. Low-density parity-check (LDPC) codes together with belief propagation decoding lead to linear compression and decompression times, and to natural universal implementation of the algorithm.
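
For context on the first stage, a naive Burrows-Wheeler transform and its inverse are sketched below (quadratic-time, with an explicit end marker); the syndrome former and LDPC belief-propagation stages that perform the actual compression are not shown.

```python
def bwt(s, end="\0"):
    """Naive Burrows-Wheeler transform with an explicit end marker."""
    s = s + end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(t, end="\0"):
    """Invert the transform by repeatedly sorting prepended columns."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    row = next(r for r in table if r.endswith(end))
    return row[:-1]

assert ibwt(bwt("banana")) == "banana"
```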

Proceedings ArticleDOI
25 Mar 2003
TL;DR: It is remarkable that a simple model, which recursively updates a small number of parameters, is able to reach the state-of-the-art compression ratios for DNA sequences achieved by much more complex models.
Abstract: The use of the normalized maximum likelihood (NML) model for encoding sequences known to have regularities in the form of approximate repetitions was discussed. A particular version of the NML model was presented for discrete regression, which was shown to provide a very powerful yet simple model for encoding the approximate repeats in DNA sequences. Combining the model of repeats with a simple first-order Markov model, a fast lossless compression method was obtained that compares favorably with existing DNA compression programs. It is remarkable that a simple model, which recursively updates a small number of parameters, is able to reach the state-of-the-art compression ratios for DNA sequences achieved by much more complex models. Being a minimum description length (MDL) model, the NML model may later prove useful in studying global and local features of DNA or possibly of other biological sequences.

Proceedings ArticleDOI
19 May 2003
TL;DR: In this article, the authors investigated the use of data compression to reduce the battery energy consumed by handheld devices when downloading data from proxy servers over a wireless LAN and found the gzip compression software (based on LZ77) to be far superior to bzip2 (based on BWT).
Abstract: We investigate the use of data compression to reduce the battery energy consumed by handheld devices when downloading data from proxy servers over a wireless LAN. To make a careful trade-off between the communication energy and the overhead of performing decompression, we experiment with three universal lossless compression schemes, using a popular handheld device in a wireless LAN environment. The results show that, from the battery-saving perspective, the gzip compression software (based on LZ77) is far superior to bzip2 (based on BWT) and compress (based on LZW). We then present an energy model to estimate the energy consumption of compressed downloading. With this model, we further reduce the energy cost of gzip by interleaving communication with computation and by using a block-by-block selective scheme based on the compression factor of each block. We also use a threshold file size below which a file is not compressed before transfer.
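
A sketch of the block-by-block selective idea under stated assumptions: zlib stands in for gzip, and the block size and threshold are arbitrary choices; the paper's energy model is not reproduced.

```python
import zlib

BLOCK = 64 * 1024          # assumed block size
THRESHOLD = 0.9            # send compressed only if it is under 90% of the original

def selective_compress(data, block=BLOCK, threshold=THRESHOLD):
    """Split data into blocks and compress each block only when it pays off.

    Each output block is prefixed with a 1-byte flag (1 = deflated, 0 = raw)
    and a 4-byte big-endian length, so the receiver can undo the framing.
    """
    out = bytearray()
    for i in range(0, len(data), block):
        chunk = data[i:i + block]
        packed = zlib.compress(chunk, 6)
        if len(packed) < threshold * len(chunk):
            out += b"\x01" + len(packed).to_bytes(4, "big") + packed
        else:
            out += b"\x00" + len(chunk).to_bytes(4, "big") + chunk
    return bytes(out)

framed = selective_compress(b"some highly repetitive payload " * 4096)
```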

Proceedings ArticleDOI
06 Jul 2003
TL;DR: A novel approach for light field compression that incorporates disparity compensation into 4-D wavelet coding using disparity-compensated lifting is proposed, which solves the irreversibility limitations of previous wavelet coding approaches.
Abstract: We propose a novel approach for light field compression that incorporates disparity compensation into 4-D wavelet coding using disparity-compensated lifting. With this approach, we obtain the benefits of wavelet coding, including compression efficiency and scalability in all dimensions. Additionally, our proposed approach solves the irreversibility limitations of previous wavelet coding approaches. Experimental results show that the compression efficiency of the proposed technique outperforms current state-of-the-art wavelet coding techniques by a wide margin.

Patent
Chen Wei-Ge1, Chao He1
14 Jul 2003
TL;DR: In this paper, a lossless audio compression scheme is adapted for use in a unified lossy and lossless compression scheme, where the adaptive filter is varied based on transient detection, such as increasing the adaptation rate where a transient is detected.
Abstract: A lossless audio compression scheme is adapted for use in a unified lossy and lossless audio compression scheme. In the lossless compression, the adaptation rate of an adaptive filter is varied based on transient detection, such as increasing the adaptation rate where a transient is detected. A multi-channel lossless compression uses an adaptive filter that processes samples from multiple channels in predictive coding of a current sample in a current channel. The lossless compression also encodes using an adaptive filter and Golomb coding with a non-power-of-two divisor.

Proceedings ArticleDOI
17 Sep 2003
TL;DR: To preserve integrity, this paper presents a lossless data embedding scheme for medical images that can be used in e-diagnosis and provides a relatively high embedding rate while keeping distortion relatively low.
Abstract: With the development of digital techniques, traditional businesses are moving to a digital world for effectiveness, convenience and security. As a particular application, embedding patient information such as personal data, history, and test and diagnosis results into medical images before transmission and storage, and recovering the embedded information and the original images exactly after reception, is an efficient way to support correct medical practice and to reduce storage, memory requirements and transmission time. It provides integrity of medical images and the corresponding documentation, and protection of the information. However, existing medical systems handle them separately. To preserve this integrity, this paper presents a lossless embedding scheme for medical image processing, which can be used in e-diagnosis. The scheme provides a relatively high data embedding rate while keeping distortion relatively low; moreover, the method is distortion-tolerant in application, and the original image can be recovered distortion-free.

01 Jan 2003
TL;DR: This thesis studies universal lossless data compression, in particular an improved compression algorithm based on the Burrows-Wheeler transform, evaluated on files of different sizes and contents (the Silesia corpus).
Abstract: Contemporary computers process and store huge amounts of data. Some parts of these data are excessive. Data compression is a process that reduces the data size, removing the excessive information. Why is a shorter data sequence often more suitable? The answer is simple: it reduces the costs. A full-length movie of high quality could occupy a vast part of a hard disk. The compressed movie can be stored on a single CD-ROM. Large amounts of data are transmitted by telecommunication satellites. Without compression we would have to launch many more satellites than we do to transmit the same number of television programs. The capacity of Internet links is also limited and several methods reduce the immense amount of transmitted data. Some of them, such as mirror or proxy servers, are solutions that minimise the number of transmissions over long distances. The other methods reduce the size of data by compressing them. Multimedia is a field in which data of vast sizes are processed. The sizes of text documents and application files also grow rapidly. Another type of data for which compression is useful are database tables. Nowadays, the amount of information stored in databases grows fast, while their contents often exhibit much redundancy. Data compression methods can be classified in several ways. One of the most important criteria of classification is whether the compression algorithm removes some parts of data which cannot be recovered during the decompression. The algorithms that remove some parts of data irreversibly are called lossy, while the others are called lossless. The lossy algorithms are usually used when a perfect consistency with the original data is not necessary after the decompression. Such a situation occurs for example in compression of video or picture data. If the recipient of the video …

Journal ArticleDOI
TL;DR: The proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.
Abstract: We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The system allows a finely graded up to lossless quality scalability on any 2-D image of the dataset. Fast access to 2-D images is obtained by decoding only the corresponding information thus avoiding the reconstruction of the entire volume. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. Results show a substantial improvement in coding efficiency (up to 33%) on volumes featuring good correlation properties along the z axis. Even though we did not address the complexity issue, we expect a decoding time of the order of one second/image after optimization. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.