
Three-dimensional encoding/two-dimensional decoding of medical data

21 May 2003-IEEE Transactions on Medical Imaging (IEEE)-Vol. 22, Iss: 3, pp 424-440
TL;DR: The proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.
Abstract: We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The system allows a finely graded up to lossless quality scalability on any 2-D image of the dataset. Fast access to 2-D images is obtained by decoding only the corresponding information thus avoiding the reconstruction of the entire volume. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. Results show a substantial improvement in coding efficiency (up to 33%) on volumes featuring good correlation properties along the z axis. Even though we did not address the complexity issue, we expect a decoding time of the order of one second/image after optimization. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.

Summary (2 min read)

Introduction

  • This reduces the number of markers while preserving 2-D decoding capabilities, improving the compression efficiency at the expense of SNR scalability in the lossy regime.
  • The lifting scheme provides a way to perform any DWT with finite filters in a finite number of lifting steps.
  • The proposed solution exploits the inherent recursive nature of the wavelet transform.
  • Accordingly, the decoder identifies the z-coordinates of all the images in each subband that are necessary to recover the image of interest.

B. MLZC Coding Principle

  • MLZC applies the same quantization and entropy coding policy as LZC to a different subband structure.
  • The significance map consists of a sequence of binary symbols indicating, for each sample position, whether the coefficient quantized with the current step is significant or not.
  • Since the authors expect a more pronounced correlation among the significance states of adjacent samples within the same scan, they decided to give more degrees of freedom to the extension of the interscale conditioning term in the previous than in the next subband images.
  • The signal approximation is encoded first, and all the subbands at a given level are processed before any subband at the next finer level.
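To make the layered quantization concrete, here is a minimal sketch of how successive significance maps could be produced; this is our illustrative reconstruction, not the authors' code, and the power-of-two threshold schedule and function name are assumptions:

```python
import numpy as np

def significance_maps(coeffs, n_layers):
    """One significance map per quantization layer: the symbol for a sample
    is 1 iff its coefficient is significant w.r.t. the current step, which
    halves from layer to layer (successive approximation quantization)."""
    c = np.abs(np.asarray(coeffs, dtype=np.int64))
    t = 2 ** int(np.floor(np.log2(c.max())))  # initial threshold (assumed schedule)
    maps = []
    for _ in range(n_layers):
        maps.append((c >= t).astype(int).tolist())  # this layer's significance map
        t = max(t // 2, 1)
    return maps
```

For coefficients [9, 1, 4, 0] and three layers, the maps are [1,0,0,0], [1,0,1,0], [1,0,1,0]: samples become significant as the step shrinks, which is the behavior the context-adaptive arithmetic coder models.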

A. G-PROG Mode

  • The set of quantizers is applied to the whole set of subband images before passing to the next subband.
  • The scanning order follows the decomposition level: all subbands at a given level are scanned before passing to the next finer level.
  • This enables scalability on the whole volume: decoding can be stopped at any point in the bitstream.
  • The compression ratio is maximized, but the 3-D encoding/2-D decoding functionalities are not enabled.

B. LPL-PROG Mode

  • This scheme is derived from the G-PROG mode by adding a marker into the bitstream after encoding every quantization layer of every subband image (see Fig. 2).
  • Since the quantizers are successively applied – as in the G-PROG mode – subband-by-subband and, within each subband, image-by-image, progressiveness by quality is allowed on both the whole volume and any 2-D image, provided that 2-D local-scale conditioning is used.
  • The drawback of this solution is the overhead it adds to the encoded information.

C. LPL Mode

  • One way of reducing the overhead implied by the LPL-PROG mode is to apply the whole set of quantizers to each subband image at position j along the z axis before switching to the next one (j+1).
  • The 3-D/2-D MLZC system is a good trade-off between the gain in coding efficiency provided by fully 3-D algorithms and the fast access to data provided by 2-D coding systems, where each image is treated independently.
  • In the case of MRI [Fig. 16(b)], the shape of the curve reflects the trend of the number of “nonbackground” pixels of the images with their position along the z axis.
  • The need for markers affects the coding system performance both directly, as additional information to be written into the bitstream, and indirectly, by degrading the efficiency of the entropy coder through the increased number of information units associated with each symbol.
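The scan orders and marker placements of the three modes summarized above can be sketched schematically; the tuple/marker representation is purely our illustration (i indexes quantization layers, j subband-image positions along z, and "M" stands for a marker):

```python
def mlzc_order(n_layers, n_images, mode):
    """Emission order of (layer i, subband image j) segments for the three
    MLZC working modes; "M" marks a marker inserted into the bitstream."""
    out = []
    if mode == "G-PROG":      # layer-major scan, no markers: whole-volume scalability
        out = [(i, j) for i in range(n_layers) for j in range(n_images)]
    elif mode == "LPL":       # image-major: all layers of image j, then one marker
        for j in range(n_images):
            out += [(i, j) for i in range(n_layers)] + ["M"]
    elif mode == "LPL-PROG":  # layer-major with a marker after every segment
        for i in range(n_layers):
            for j in range(n_images):
                out += [(i, j), "M"]
    return out
```

The marker counts make the trade-off explicit: G-PROG emits none, LPL one per subband image, LPL-PROG one per layer of every image.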


424 IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 22, NO. 3, MARCH 2003
Three-Dimensional Encoding/Two-Dimensional
Decoding of Medical Data
Gloria Menegaz*, Member, IEEE, and Jean-Philippe Thiran, Member, IEEE
Abstract—We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The system allows a finely graded up to lossless quality scalability on any 2-D image of the dataset. Fast access to 2-D images is obtained by decoding only the corresponding information, thus avoiding the reconstruction of the entire volume. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. Results show a substantial improvement in coding efficiency (up to 33%) on volumes featuring good correlation properties along the z axis. Even though we did not address the complexity issue, we expect a decoding time of the order of one second/image after optimization. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.

Index Terms—3-D/2-D, compression, lossless, volumetric data, wavelets.
I. INTRODUCTION
Most of the current medical imaging techniques produce three-dimensional (3-D) data distributions. Some of them are intrinsically volumetric, like magnetic resonance (MR), computerized tomography (CT), positron emission tomography (PET), and 3-D ultrasound, while others describe the temporal evolution of a dynamic phenomenon as a sequence of two-dimensional (2-D) images, so that they are more properly labeled as 2-D+time. The huge amount of data generated every day in the clinical environment has triggered considerable research in the field of volumetric data compression for their efficient storage and transmission. The basic idea is to take advantage of the correlation among the data samples in the 3-D space to improve compression efficiency. The most widespread approach combines a 3-D decorrelating transform with the extension of a coding algorithm that has proved to be effective on 2-D images. In [1], the 3-D version of the set partitioning in hierarchical trees (SPIHT) [2] algorithm for image compression is applied to volumetric medical images. The same guideline is followed in [3], where the authors also address the
Manuscript received December 10, 2000; revised August 30, 2002. Asterisk
indicates corresponding author.
*G. Menegaz is with the Department of Computer Science, University of
Fribourg, CH-1700 Fribourg, Switzerland (e-mail: gloria@ieee.org).
J.- P. Thiran is with the Signal Processing Institute, School of Engineering
Techniques and Sciences, Swiss Federal Institute of Technology, CH-1015
Lausanne, Switzerland.
Digital Object Identifier 10.1109/TMI.2003.809689
problem of context modeling for efficient entropy coding. The performance of the 3-D extension of the embedded zerotree wavelet (EZW)-based coding algorithm [4] is analyzed in [5]–[7]. A slightly different approach is described in [8], where a 3-D-DCT is followed by quantization, adaptive bit allocation, and Huffman encoding. In [9]–[11], a 3-D separable wavelet transform is used to remove interslice redundancy, while in [12] different sets of wavelet filters are used in the slice plane and along the z direction, respectively, to account for the difference between the intraslice and interslice resolution.
This led to the common consensus that the exploitation of the full 3-D data correlation potentially improves compression. The main drawback of 3-D systems is computational complexity. While an increase in the encoding time might be tolerated, a swift decoding is of prime importance for the efficient access to the data. A possible solution has been proposed in [1] and [5]. It consists in splitting the volume into coding units of 8 or 16 images each and processing those independently in order to save memory and reduce the coding time. Coding units are fixed a priori, as well as the number of images which are decoded at one time.
Our solution is based on the observation that it is common practice to analyze 3-D data distributions one image at a time for medical examination. Accordingly, in order to be suitable
within a picture archiving and communication system (PACS), a coding system must provide fast access to the single 2-D images. In the proposed solution, the decoding time is kept low by minimizing the amount of information to be decoded to reconstruct any 2-D image (or, more generally, any subset of images) of the dataset. This is accomplished by independently encoding each subband image, and making the corresponding information accessible through the introduction of some special characters (i.e., markers) into the bitstream. Once the user has specified the position of the image of interest along the z axis, the set of subband images that are needed for its reconstruction is determined and the related information is decoded. The inverse discrete wavelet transform (IDWT) is performed locally and the single image is recovered. The coding scheme is based on the multirate 3-D subband coding of video described in [13]. What we retain is the strategy used for entropy coding, namely the multidimensional context-adaptive arithmetic coding [14]. The subtended subband structure is nevertheless different. We perform a 3-D-DWT on the volume instead of treating differently the spatial and temporal dimensions.
The paper is organized as follows. Section II gives an overview of the global system. In Section III, the lifting scheme and the integer wavelet transform are revisited. Section IV describes the procedure followed to determine the set of subband images needed to reconstruct a given image of interest.
0278-0062/03$17.00 © 2003 IEEE

MENEGAZ AND THIRAN: THREE-DIMENSIONAL ENCODING/TWO-DIMENSIONAL DECODING OF MEDICAL DATA 425
Fig. 1. Volumetric data. We call z the third dimension, and assume that the images are the intersections of the volume with a plane orthogonal to the z axis.
The coding principle is presented in Section V and Section VI
illustrates the different working modalities. The compression
performance is analyzed in Section VII, and Section VIII
derives conclusions.
II. THE 3-D/2-D MULTIDIMENSIONAL LAYERED ZERO CODING (MLZC) SYSTEM

The combination of the 3-D wavelet transform with an ad hoc coding strategy provides high coding efficiency and fast access to any 2-D image of the dataset. Given the index of the image of interest along the z axis (the z coordinate in Fig. 1), the corresponding portion of the bitstream is accessed and decoded to recover it at the desired quality. At the encoder, the data are first decorrelated by a 3-D DWT and then encoded via the MLZC technique. At the decoder, the set of wavelet coefficients necessary to reconstruct the image of interest is automatically determined and only the corresponding parts of the bitstream are decoded. The IDWT is performed locally, reducing the memory requirements and the computational cost.
The wavelet transform has many features that make it suitable for our application. Its approximation properties for reasonably smooth signals have determined the success of wavelet-based techniques for image compression; notably, the JPEG2000 standard [15] follows the same approach. The implementation via the lifting steps scheme [16] is particularly advantageous in this framework. First, it provides a very simple way of constructing nonlinear wavelet transforms mapping integer to integer values [17]. This is very important for medical applications because it enables lossless coding. Second, perfect reconstruction is guaranteed by construction for any kind of signal extension along the borders. This greatly simplifies the management of the boundary conditions and facilitates the selection of the coefficients needed to reconstruct an image. Third, it is computationally efficient. It can be shown that the lifting steps implementation asymptotically reduces the computational complexity by a factor 4 with respect to the classical filter-bank implementation [18]. Finally, the transformation can be implemented in-place, namely by progressively updating the values of the original samples, without allocating auxiliary memory.
The 3-D-DWT is followed by successive approximation
quantization and context adaptive arithmetic coding. Some
markers are placed in the codestream for the random access to
the encoded information. By combining the 3-D-DWT with 2-D
spatial neighborhoods for entropy coding, the resulting MLZC
algorithm features 3-D encoding/2-D decoding capabilities.
However, many degrees of freedom are left for the design of
the system. The shape of the spatial support of the neighborhood
defining the context and the placement rule of the markers in the
bitstream lead to different working modes. The global-progressive (G-PROG) mode is obtained by encoding the volume as a
whole and without putting any marker. This mode provides the
best compression efficiency. Both 2-D and 3-D contexts can be
used. The resulting bitstream is fully embedded, supporting a
finely-graded range of bit-rates ensuring scalable quality on the
volume, but 2-D decoding is not possible. The layer-per-layer
(LPL), and LPL progressive (LPL-PROG) modes are obtained
by adding some markers in order to enable random access to
the information of interest in the bitstream. More specifically,
the LPL mode provides random access to every subband image.
The idea is to decode the entire information concerning the set
of subband images needed to reconstruct the image of interest
at full quality (i.e., lossless). To achieve quality scalability on
the final 2-D image, other markers must be added, leading to
the LPL-PROG mode. Direct access is possible to every quantization layer of every subband image. Scalable quality is obtained by successively decoding the quantization layers, i.e., the bitplanes, of the concerned subband images. The drawback is the bitstream overhead due to the additional information needed for data addressing, which reduces compression efficiency. Fig. 2 summarizes the three working modalities and illustrates the position of the markers in the bitstream. In the figure, H stands for the header of the bitstream, and L_i^j represents the quantization layer i of the subband image at position j in a given 3-D subband. In the G-PROG mode, the whole information concerning the quantization layer i is encoded for all the subband images. In the LPL mode, all the quantization layers of a subband image are located in the same segment and markers are placed only between L_n^j and L_1^(j+1), n being the number of quantization steps. This reduces the number of markers while preserving 2-D decoding capabilities, improving the compression efficiency at the expense of SNR scalability in the lossy regime. Indeed, such a mode is intended for recovering the 2-D image of interest at full quality. Finally, in the LPL-PROG mode, the order is the same but markers are put between consecutive layers L_i^j and L_(i+1)^j, for all i, j.
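The marker mechanism amounts to keeping an index from each subband image to its segment of the bitstream. The toy model below is our illustration only (byte strings stand in for encoded segments, and the header content is a placeholder): it shows how a decoder can fetch just the segments a 2-D slice needs, without touching the rest of the stream:

```python
def index_bitstream(segments, header=b"HDR"):
    """Concatenate per-subband-image segments and record each one's
    (offset, length), mimicking marker-based random access."""
    stream, index, off = bytearray(header), {}, len(header)
    for j, payload in segments.items():
        index[j] = (off, len(payload))  # where subband image j lives
        stream += payload
        off += len(payload)
    return bytes(stream), index

def fetch(stream, index, needed):
    """Read back only the subband images required for one 2-D slice."""
    return {j: stream[o:o + n] for j, (o, n) in index.items() if j in needed}
```

Decoding a slice then costs only the bytes of its own dependency set, which is the source of the fast 2-D access claimed for the LPL and LPL-PROG modes.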
III. INTEGER WAVELET TRANSFORM VIA LIFTING
The spatial correlation among data samples is exploited by a fully 3-D separable wavelet transform. The signal is successively filtered and down-sampled in all spatial dimensions. The decomposition is iterated on the approximation low-pass band, which contains most of the energy [19]. Fig. 3 shows the classical filter-bank implementation of the DWT. The forward transform uses two analysis filters (low-pass and bandpass), followed by subsampling, while the inverse transform first up-samples and then applies two synthesis filters

Fig. 2. MLZC working modalities. H: bitstream header; L_i^j: quantization layer i of the subband image at position j in a given 3-D subband. In the G-PROG mode, the whole information concerning the quantization layer i is encoded for all the subband images. In the LPL mode, all the quantization layers are located in the same segment and markers are only between L_n^j and L_1^(j+1), n being the number of quantization steps. In the LPL-PROG mode, the order is the same but markers are put between consecutive layers L_i^j and L_(i+1)^j, for all i, j.

Fig. 3. DWT.

(low-pass and bandpass). Fig. 4 shows a two-level DWT of a natural image. The approximation subband is a coarser version of the original, while the other subbands represent the high frequencies (details) in the horizontal, vertical and diagonal directions, respectively.
In the proposed system, the DWT is implemented according to the recently developed lifting steps scheme [16]. The lifting scheme provides a way to perform any DWT with finite filters in a finite number of lifting steps. The lifting steps representation of a given filter is obtained by the Euclidean factorization of the polyphase matrix (see Fig. 5) of the filter bank into a sequence of 2 x 2 upper and lower triangular matrices. The polyphase matrix P(z) is defined as

P(z) = \begin{pmatrix} h_e(z) & g_e(z) \\ h_o(z) & g_o(z) \end{pmatrix}   (1)

where

h(z) = h_e(z^2) + z^{-1} h_o(z^2),  g(z) = g_e(z^2) + z^{-1} g_o(z^2)   (2)

and h_e, h_o (respectively g_e, g_o) are the even and odd polyphase components of the synthesis filters h and g. If the determinant of P(z) is equal to one, then the filter pair (h, g) is complementary. In this case, the following theorem holds [16]:

Theorem 1: Given a complementary filter pair (h, g), there always exist Laurent polynomials s_i(z) and t_i(z) for 1 <= i <= m and a nonzero constant K so that

P(z) = \left( \prod_{i=1}^{m} \begin{pmatrix} 1 & s_i(z) \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ t_i(z) & 1 \end{pmatrix} \right) \begin{pmatrix} K & 0 \\ 0 & 1/K \end{pmatrix}   (3)
The block diagrams for the forward and inverse transforms are illustrated in Figs. 6 and 7, respectively. Each triangular matrix corresponds to one lifting step. The number of lifting steps depends on both the length of the filters and the factorization. It is worth noticing that the result of the Euclidean factorization is not unique, so many lifting representations are possible for the same P(z). From Figs. 6 and 7, it is easy to realize that the synthesis chain can be obtained by mirroring the filter-bank from the analysis counterpart and changing the sign of the filters. The global system can be seen as a sequence of do/undo steps, for which the perfect reconstruction property is ensured by construction. This provides additional degrees of freedom in the design of the filters, allowing any nonlinear operations in the basic blocks and any kind of signal extension outside the borders. In particular, the integer DWT is obtained by introducing a rounding operation after each lifting step [17]. As mentioned in Section II, the availability of an integer version of the transform enables lossless coding and makes the algorithm suitable for implementation on a device. However, the integer coefficients are approximations of those that would be obtained by projecting the signal on the original wavelet basis. This can be modeled by an equivalent noise which becomes noticeable when the hypothesis of high-resolution quantization holds. It can be shown that it introduces an additional contribution to the quantization noise, which degrades the rate/distortion performance of the coding system [20]. Furthermore, it is responsible for an oscillatory trend of the PSNR along the z axis, making the quality of the reconstructed image dependent on its position within the volume. The analysis of such a phenomenon is out of the scope of this paper; we refer to [21] for more details. What is important to mention here is that the amount of such noise is proportional to the number of rounding operations, which in turn depends on the decomposition depth and the lifting chain length. Accordingly, we have restricted the choice of filters to the family of the interpolating filters [22] admitting a two-steps chain (4). As the choice of the filter-bank is not critical for compression performance, we chose the 5/3 filter [22]. Being extremely short (two steps of length two each), it minimizes the number of subband images to decode for recovering the 2-D image of interest, as will be discussed in Section IV.

Fig. 4. DWT of a natural image. (a) Original; (b) DWT subbands for a two-level decomposition. The approximation subband is a coarser version of the original, while the other subbands represent the high frequencies (details) in the horizontal, vertical and diagonal directions.

Fig. 5. Polyphase representation of the wavelet transform.
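As a sketch of such a two-step chain, here is the reversible integer 5/3 lifting pair in the spirit of [16], [17]. The clamp-style border rule is our illustrative choice, not the paper's: perfect reconstruction holds for any extension, provided the inverse undoes each step with the same rule:

```python
import numpy as np

def _at(v, i):
    # clamp extension at the borders; any deterministic rule works as long
    # as forward and inverse use the same one (PR holds by construction)
    return v[max(0, min(i, len(v) - 1))]

def fwd53(x):
    """One level of the reversible integer 5/3 lifting transform."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict step: detail = odd sample minus rounded average of even neighbours
    d = np.array([odd[n] - (_at(even, n) + _at(even, n + 1)) // 2
                  for n in range(len(odd))])
    # update step: approximation = even sample plus rounded average of details
    a = np.array([even[n] + (_at(d, n - 1) + _at(d, n) + 2) // 4
                  for n in range(len(even))])
    return a, d

def inv53(a, d):
    """Undo the lifting steps in reverse order with opposite signs."""
    even = np.array([a[n] - (_at(d, n - 1) + _at(d, n) + 2) // 4
                     for n in range(len(a))])
    odd = np.array([d[n] + (_at(even, n) + _at(even, n + 1)) // 2
                    for n in range(len(d))])
    x = np.empty(len(even) + len(odd), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because the inverse recomputes each rounded step from the same operands and subtracts it, reconstruction is exact on integers, which is the property that enables lossless coding.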
The particular structure of the lifting chain facilitates the determination of the set of subband images needed for the point-wise IDWT (PW-IDWT). The separability of the transform allows mapping such a task to the one-dimensional (1-D) case. The core of the problem consists in finding the set of subband coefficients needed to recover one signal sample. Then, results can be easily extended to intervals (i.e., signal segments) and, eventually, multiple dimensions.
IV. POINT-WISE IDWT

In this section, we formalize the PW-IDWT. It is basically a 1-D problem: each pixel of the image to recover is regarded as the sample in position k of the 1-D signal observed along the line parallel to the z axis passing through it. Correspondingly, the set of subband coefficients that are needed for its reconstruction by IDWT maps to the coordinates of the subband images along the z axis.
The proposed solution exploits the inherent recursive nature of the wavelet transform. The IDWT is an iterative process starting at the coarsest scale: the approximation subband at the finer level (l - 1) is reconstructed by filtering the set of coefficients at the coarser level l according to [19]

a_{l-1}[n] = \sum_k h[n - 2k] a_l[k] + \sum_k g[n - 2k] d_l[k]   (5)

Fig. 6. Forward wavelet transform using lifting.
Fig. 7. Inverse wavelet transform using lifting.
where a_l and d_l are the approximation and detail subbands, respectively, and l is the decomposition level, which increases with the depth of the decomposition. The signal is reconstructed by iterating such a procedure for l = L, ..., 1. The
number of coefficients taking part in the convolution in a given subband depends on the length of the filter and on the number of decomposition levels. The method used to determine the positions of the involved coefficients in each subband consists in climbing back the synthesis filter-bank and keeping track of the positions of the subband coefficients that get involved step by step. Given the position k of the sample of interest in the signal domain, we start by identifying the set of coefficients that are needed at the finest resolution level, where the subband index distinguishes the approximation from the detail subbands. For doing this, we look into the synthesis chain from its output, and follow it step by step, keeping track of the samples needed by the lifting step filters. Due to the recursiveness of the IDWT, the procedure is iterated from the sets obtained at one level to those at the next level. The only difference is that now there is a set of samples to be recovered instead of a single one. The iteration of such a procedure over all the decomposition levels results in the complete set of necessary subband coefficients.
The procedure can be easily generalized to sets {k_1, ..., k_N} of samples in the signal space. Let GP(l, j; k) denote the set of coefficients in subband (l, j) at level l needed to reconstruct the signal sample in position k. Then, the solution for the set of samples {k_1, ..., k_N} is

GP(l, j; {k_1, ..., k_N}) = \bigcup_{n=1}^{N} GP(l, j; k_n)   (6)

Formula (6) also applies to subband intervals. It is worth mentioning here that GP(l, j; k) depends on k being even or odd. In general, with the usual structure of the lifting scheme starting with a prediction step, odd indexed samples correspond to larger sets. We refer to Appendix A for the details.
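The climbing-back procedure and the union of (6) can be sketched for the 5/3 lifting chain. This is our reconstruction under an infinite-signal model (border effects ignored), with the dependency pattern taken from the 5/3 inverse steps: an odd sample x[2n+1] needs d[n] and the even samples e[n], e[n+1], and each even sample e[n] needs a[n], d[n-1], d[n]:

```python
def deps_1d(k, levels):
    """Indices of the subband coefficients needed to reconstruct sample k of
    a 1-D signal through `levels` levels of the inverse 5/3 lifting chain
    (infinite-signal model: no border clamping)."""
    need = {}   # (level, 'd') -> detail indices; (levels, 'a') -> approx indices
    pos = {k}   # signal positions to recover at the current level
    for lev in range(1, levels + 1):
        evens, details = set(), set()
        for p in pos:
            n = p // 2
            if p % 2:                  # x[2n+1] = d[n] + predict(e[n], e[n+1])
                details.add(n)
                evens.update((n, n + 1))
            else:                      # x[2n] = e[n]
                evens.add(n)
        for n in evens:                # e[n] = a[n] - update(d[n-1], d[n])
            details.update((n - 1, n))
        need[(lev, 'd')] = details
        pos = evens                    # the a[n] recurse to the next level
    need[(levels, 'a')] = pos
    return need
```

Away from the borders, the per-subband counts repeat with period 2^L in k, matching the periodicity reported for Tables I and II.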
TABLE I
NUMBER OF SAMPLES GP(l, j) IN SUBBAND (l, j) NEEDED TO RECOVER THE SAMPLE AT POSITION k, USING THE 5/3 FILTER AND FOR L = 3 LEVELS OF DECOMPOSITION.

TABLE II
NUMBER OF SAMPLES GP(l, j) IN SUBBAND (l, j) NEEDED TO RECOVER THE SAMPLE AT POSITION k, USING THE 9/7 FILTER AND FOR L = 3 LEVELS OF DECOMPOSITION.
Tables I and II give GP(l, j) as a function of the sample position k. The number of samples required in each subband turns out to be a periodic function of k with period 2^L. To outline the dependency of GP(l, j) on k, results are provided for 2^L successive values of k. As the filter used is very short, the number of wavelet coefficients involved in the PW-IDWT is very small. For comparison, Table II

Citations
Journal ArticleDOI
TL;DR: A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability.
Abstract: We propose a novel symmetry-based technique for scalable lossless compression of 3D medical image data. The proposed method employs the 2D integer wavelet transform to decorrelate the data and an intraband prediction method to reduce the energy of the sub-bands by exploiting the anatomical symmetries typically present in structural medical images. A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability. Performance evaluations on a wide range of real 3D medical images show an average improvement of 15% in lossless compression ratios when compared to other state-of-the art lossless compression methods that also provide resolution and quality scalability including 3D-JPEG2000, JPEG2000, and H.264/AVC intra-coding.

86 citations

Journal ArticleDOI
TL;DR: The proposed 3-D scalable compression method achieves a higher reconstruction quality, in terms of the peak signal-to-noise ratio, than that achieved by 3D-JPEG2000 with VOI coding, when using the MAXSHIFT and general scaling-based methods.
Abstract: We present a novel 3-D scalable compression method for medical images with optimized volume of interest (VOI) coding. The method is presented within the framework of interactive telemedicine applications, where different remote clients may access the compressed 3-D medical imaging data stored on a central server and request the transmission of different VOIs from an initial lossy to a final lossless representation. The method employs the 3-D integer wavelet transform and a modified EBCOT with 3-D contexts to create a scalable bit-stream. Optimized VOI coding is attained by an optimization technique that reorders the output bit-stream after encoding, so that those bits belonging to a VOI are decoded at the highest quality possible at any bit-rate, while allowing for the decoding of background information with peripherally increasing quality around the VOI. The bit-stream reordering procedure is based on a weighting model that incorporates the position of the VOI and the mean energy of the wavelet coefficients. The background information with peripherally increasing quality around the VOI allows for placement of the VOI into the context of the 3-D image. Performance evaluations based on real 3-D medical imaging data showed that the proposed method achieves a higher reconstruction quality, in terms of the peak signal-to-noise ratio, than that achieved by 3D-JPEG2000 with VOI coding, when using the MAXSHIFT and general scaling-based methods.

58 citations

Journal ArticleDOI
TL;DR: A scheme that uses JPEG2000 and JPIP to transmit data in a multi-resolution and progressive fashion and a prioritization that enables the client to progressively visualize scene content from a compressed file is presented.
Abstract: One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the clients query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available having quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint

57 citations


Cites background from "Three-dimensional encoding/two-dime..."

  • ...In [ 10 ], the authors present a 3-D wavelet based scheme combined with a 2-D context adaptive arithmetic encoder for compression....


Journal ArticleDOI
01 Mar 2005
TL;DR: The experimental results presented in this paper show that the proposed compression scheme achieves better lossy and lossless compression performance on 4-D medical images when compared with JPEG-2000 and volumetric compression based on 3-D SPIHT.
Abstract: This paper proposes a method for progressive lossy-to-lossless compression of four-dimensional (4-D) medical images (sequences of volumetric images over time) by using a combination of three-dimensional (3-D) integer wavelet transform (IWT) and 3-D motion compensation. A 3-D extension of the set-partitioning in hierarchical trees (SPIHT) algorithm is employed for coding the wavelet coefficients. To effectively exploit the redundancy between consecutive 3-D images, the concepts of key and residual frames from video coding is used. A fast 3-D cube matching algorithm is employed to do motion estimation. The key and the residual volumes are then coded using 3-D IWT and the modified 3-D SPIHT. The experimental results presented in this paper show that our proposed compression scheme achieves better lossy and lossless compression performance on 4-D medical images when compared with JPEG-2000 and volumetric compression based on 3-D SPIHT.

49 citations


Cites background from "Three-dimensional encoding/two-dime..."

  • ...A lossy-to-lossless compression scheme can offer better image quality with increasing bit rate until the original image is recovered [1], [9]....


References
Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function psi(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.
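The pyramidal decomposition described above can be sketched with the simplest orthonormal wavelet, the Haar basis. This is a minimal illustration, not the filters used in the paper; the function name `haar_pyramid` is hypothetical. At each level the approximation is split into a coarser approximation (low-pass) and a detail band (high-pass), so that the detail bands carry exactly the difference of information between successive resolutions.

```python
import numpy as np

def haar_pyramid(signal, levels):
    """Pyramidal multiresolution analysis with the orthonormal Haar
    wavelet: repeatedly split the approximation into a coarser
    approximation and a detail band."""
    approx, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # low-pass (approximation)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # high-pass (detail)
        details.append(d)
        approx = a
    return approx, details
```

Because the transform is orthonormal, the signal energy is preserved across the approximation and all detail bands, which is one way to check such a decomposition.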

20,028 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.

5,890 citations

Journal ArticleDOI
J.M. Shapiro1
TL;DR: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code.
Abstract: The embedded zerotree wavelet algorithm (EZW) is a simple, yet remarkably effective, image compression algorithm, having the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, the EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. Yet this performance is achieved with a technique that requires absolutely no training, no pre-stored tables or codebooks, and requires no prior knowledge of the image source. The EZW algorithm is based on four key concepts: (1) a discrete wavelet transform or hierarchical subband decomposition; (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images; (3) entropy-coded successive-approximation quantization; and (4) universal lossless data compression, which is achieved via adaptive arithmetic coding.
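The embedding property rests on successive-approximation quantization (concept 3 above): coefficients are refined bit-plane by bit-plane from the most significant threshold downward, so truncating the process early yields a coarser but valid reconstruction. Below is a deliberately simplified sketch of that refinement alone (zerotree prediction and entropy coding are omitted, and the function name `saq_refine` is hypothetical); it assumes at least one nonzero coefficient.

```python
def saq_refine(coeffs, passes):
    """Successive-approximation quantization: refine a reconstruction
    of `coeffs` one bit-plane (threshold halving) per pass, most
    significant plane first."""
    # initial threshold: largest power of two not exceeding max |coefficient|
    T = 1 << (max(abs(c) for c in coeffs).bit_length() - 1)
    recon = [0] * len(coeffs)
    for _ in range(passes):
        for i, c in enumerate(coeffs):
            # add one quantum of size T toward c if the residual is large enough
            if abs(c) - abs(recon[i]) >= T:
                recon[i] += T if c > 0 else -T
        T //= 2
    return recon
```

Stopping after any number of passes leaves every coefficient within the current threshold of its true value, which is exactly why an EZW bit stream can be truncated at any point.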

5,559 citations


"Three-dimensional encoding/two-dime..." refers methods in this paper

  • ...The benchmark for the 3-D case is the 3-D generalization of the well known EZW coding algorithm [4]....


  • ...performance of the 3-D extension of the embedded zerotree wavelet (EZW)-based coding algorithm [4] is analyzed in [5]–[7]....


Journal ArticleDOI
TL;DR: In this paper, it was shown that under fairly general conditions, exact reconstruction schemes with synthesis filters different from the analysis filters give rise to two dual Riesz bases of compactly supported wavelets.
Abstract: Orthonormal bases of compactly supported wavelet bases correspond to subband coding schemes with exact reconstruction in which the analysis and synthesis filters coincide. We show here that under fairly general conditions, exact reconstruction schemes with synthesis filters different from the analysis filters give rise to two dual Riesz bases of compactly supported wavelets. We give necessary and sufficient conditions for biorthogonality of the corresponding scaling functions, and we present a sufficient condition for the decay of their Fourier transforms. We study the regularity of these biorthogonal bases. We provide several families of examples, all symmetric (corresponding to "linear phase" filters). In particular we can construct symmetric biorthogonal wavelet bases with arbitrarily high preassigned regularity; we also show how to construct symmetric biorthogonal wavelet bases "close" to a (nonsymmetric) orthonormal basis.

2,854 citations

Book ChapterDOI
TL;DR: In this paper, the authors present a self-contained derivation from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering; the resulting factorization asymptotically reduces the computational complexity of the transform by a factor of two.
Abstract: This article is essentially tutorial in nature. We show how any discrete wavelet transform or two band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but that are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or subband filters into elementary matrices. That such a factorization is possible is well-known to algebraists (and expressed by the formula SL(n; R[z, z^-1]) = E(n; R[z, z^-1])); it is also used in linear systems theory in the electrical engineering community. We present here a self-contained derivation, building the decomposition from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering. This factorization provides an alternative for the lattice factorization, with the advantage that it can also be used in the biorthogonal, i.e., non-unitary case. Like the lattice factorization, the decomposition presented here asymptotically reduces the computational complexity of the transform by a factor two. It has other applications, such as the possibility of defining a wavelet-like transform that maps integers to integers.
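The integer-to-integer transform mentioned at the end of this abstract follows directly from the lifting structure: because each lifting step only adds a (rounded) function of one half of the samples to the other half, every step is exactly invertible even after integer rounding. The sketch below, with hypothetical function names, implements one level of the reversible LeGall 5/3 wavelet as a predict step followed by an update step, with simple boundary clamping for an even-length signal.

```python
def fwd_53(x):
    """One level of the integer 5/3 DWT via two lifting steps."""
    s, d = list(x[0::2]), list(x[1::2])  # split into even/odd samples
    # predict: detail = odd sample minus the average of its even neighbours
    d = [d[i] - (s[i] + s[min(i + 1, len(s) - 1)]) // 2 for i in range(len(d))]
    # update: smooth the even samples using the new details
    s = [s[i] + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4 for i in range(len(s))]
    return s, d

def inv_53(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    s = [s[i] - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4 for i in range(len(s))]
    d = [d[i] + (s[i] + s[min(i + 1, len(s) - 1)]) // 2 for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x
```

Because the inverse subtracts exactly what the forward transform added, reconstruction is lossless on integers, which is the property that lossy-to-lossless coders such as the one in the article above rely on.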

2,357 citations

Frequently Asked Questions (1)
Q1. What have the authors contributed in "Three-dimensional encoding/two-dimensional decoding of medical data" ?

The authors propose a fully three-dimensional ( 3-D ) wavelet-based coding system featuring 3-D encoding/two-dimensional ( 2-D ) decoding functionalities. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing the single images characteristic of 2-D ones.