
Showing papers on "Lossless compression published in 2004"


Journal ArticleDOI
TL;DR: In this article, the authors presented an intensive discussion on two distributed source coding (DSC) techniques, namely Slepian-Wolf coding and Wyner-Ziv coding, and noted that, by the Slepian-Wolf theorem, separate encoding is as efficient as joint encoding for lossless compression of correlated sources.
Abstract: In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies. It relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies in sensor networks is distributed source coding (DSC), which refers to the compression of multiple correlated sensor outputs from sensors that do not communicate with each other. DSC allows a many-to-one video coding paradigm that effectively swaps encoder-decoder complexity with respect to conventional video coding, thereby representing a fundamental concept shift in video processing. This article presents an intensive discussion of two DSC techniques, namely Slepian-Wolf coding and Wyner-Ziv coding. Slepian and Wolf theoretically showed that separate encoding is as efficient as joint encoding for lossless compression of correlated sources.
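The Slepian-Wolf result is usually made concrete through syndrome-based binning: the encoder sends only the syndrome of its block with respect to a channel code, and the decoder resolves the remaining ambiguity with the correlated side information. The toy Python sketch below (our own illustration using a (7,4) Hamming code, not a construction from the article) recovers a 7-bit block from a 3-bit syndrome plus side information that differs in at most one bit.

```python
# Minimal Slepian-Wolf style sketch: transmit a syndrome instead of the data,
# and let the decoder exploit correlated side information. Toy (7,4) Hamming code.
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)   # Hamming parity checks

def syndrome(v):
    return tuple(int(b) for b in (H @ v) % 2)

# Coset leader (minimum-weight error pattern) for every 3-bit syndrome.
coset_leader = {(0, 0, 0): np.zeros(7, dtype=np.uint8)}
for i in range(7):
    e = np.zeros(7, dtype=np.uint8); e[i] = 1
    coset_leader[syndrome(e)] = e

def sw_encode(x):
    """'Separate' encoder: send only the 3-bit syndrome of the 7-bit block."""
    return syndrome(x)

def sw_decode(s_x, y):
    """Decoder: combine the received syndrome with correlated side information y."""
    s = tuple((a + b) % 2 for a, b in zip(s_x, syndrome(y)))
    return (y + coset_leader[s]) % 2

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 7, dtype=np.uint8)
y = x.copy(); y[rng.integers(7)] ^= 1      # side info: x with (at most) one bit flipped
assert np.array_equal(sw_decode(sw_encode(x), y), x)
print("x recovered from 3 transmitted bits plus side information y")
```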

819 citations


Journal ArticleDOI
TL;DR: The resulting algorithm is suitable for compression of data in band-interleaved-by-line format and outperforms 3-D-CALIC as well as other state-of-the-art compression algorithms.
Abstract: We propose a new lossless and near-lossless compression algorithm for hyperspectral images based on context-based adaptive lossless image coding (CALIC). Specifically, we propose a novel multiband spectral predictor, along with optimized model parameters and optimization thresholds. The resulting algorithm is suitable for compression of data in band-interleaved-by-line format; its performance evaluation on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data shows that it outperforms 3-D-CALIC as well as other state-of-the-art compression algorithms.
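As a rough illustration of the kind of inter-band prediction such coders rely on, the sketch below fits a simple affine band-to-band predictor and forms integer residuals. The least-squares predictor and the synthetic cube are illustrative assumptions, not the multiband spectral predictor or parameters proposed in the paper; a real coder would transmit the predictor coefficients or estimate them causally.

```python
# Sketch of inter-band (spectral) prediction for lossless coding of a hyperspectral cube.
import numpy as np

def spectral_residuals(cube):
    """cube: int array of shape (bands, rows, cols). Returns integer residuals."""
    cube = cube.astype(np.int64)
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                       # first band is coded as-is
    for b in range(1, cube.shape[0]):
        prev, cur = cube[b - 1].ravel(), cube[b].ravel()
        # Fit cur ~ a * prev + c (here on the whole band, for brevity).
        A = np.stack([prev, np.ones_like(prev)], axis=1)
        a, c = np.linalg.lstsq(A.astype(np.float64), cur.astype(np.float64), rcond=None)[0]
        pred = np.rint(a * prev + c).astype(np.int64)
        residuals[b] = (cur - pred).reshape(cube[b].shape)
    return residuals

# Adjacent bands of hyperspectral imagery are highly correlated, so the residuals
# have far lower variance (and entropy) than the raw samples.
rng = np.random.default_rng(1)
base = rng.integers(0, 1024, (64, 64))
cube = np.stack([base * (1.0 + 0.01 * b) + rng.normal(0, 2, base.shape) for b in range(8)])
res = spectral_residuals(cube.astype(np.int64))
print("residual std per band:", res[1:].std(axis=(1, 2)).round(2))
```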

205 citations


Journal Article
TL;DR: The results show that some objective measures correlate well with the perceived picture quality for a given compression algorithm but are not reliable for an evaluation across different algorithms; objective measures are identified that serve well in all tested image compression systems.
Abstract: This paper investigates a set of objective picture quality measures for application in still image compression systems and emphasizes the correlation of these measures with subjective picture quality measures. Picture quality is measured using nine different objective picture quality measures and subjectively using the Mean Opinion Score (MOS) as a measure of perceived picture quality. The correlation between each objective measure and MOS is found. The effects of different image compression algorithms, image contents and compression ratios are assessed. Our results show that some objective measures correlate well with the perceived picture quality for a given compression algorithm but they are not reliable for an evaluation across different algorithms. We therefore compared objective picture quality measures across different algorithms and found measures which serve well in all tested image compression systems. Keywords: correlation, JPEG, JPEG2000, objective assessment, picture quality measures, SPIHT. With the increasing use of multimedia technologies, image compression requires higher performance. To address the needs and requirements of multimedia and Internet applications, many efficient image compression techniques, with considerably different features, have recently been developed. Image compression techniques exploit a common characteristic of most images: the neighboring picture elements (pixels, pels) are highly correlated [1]. This means that a typical still image contains a large amount of spatial redundancy in plain areas where adjacent pixels have almost the same values. In addition, a still image can contain subjective redundancy, which is determined by properties of the human visual system (HVS). The HVS exhibits some tolerance to distortion depending upon the image content and viewing conditions. Consequently, pixels need not always be reproduced exactly as in the original, and the HVS will not detect the difference between the original image and the reproduced image [2]. This redundancy (both statistical and subjective) can be removed to achieve compression of the image data. The basic measures of the performance of a compression system are picture quality and compression ratio (defined as the ratio between original data size and compressed data size). In a lossy compression scheme, the image compression algorithm should achieve a trade-off between compression ratio and picture quality: higher compression ratios produce lower picture quality and vice versa. The evaluation of lossless image compression techniques is a simple task, where compression ratio and execution time are employed as standard criteria; the picture quality before and after compression is unchanged. In contrast, the evaluation of lossy techniques is a difficult task because of inherent drawbacks associated with both objective and subjective measures of picture quality. Objective measures of picture quality do not correlate well with subjective quality measures [3], [4]. Subjective assessment of picture quality is a time-consuming process and the results of measurements must be processed very carefully. In many applications (photos, medical images where loss is tolerated, network applications, the World Wide Web, etc.) it is very important to choose the image compression system which gives the best subjective quality, but the quality has to be evaluated objectively. Therefore, it is important to use an objective picture quality measure which has a high correlation with subjective picture quality.
In this paper we attempt to evaluate and compare objective and subjective picture quality measures. As test images we used images with different spatial and frequency characteristics. Images are coded using the JPEG, JPEG2000 and SPIHT compression algorithms. The paper is structured as follows. In Section 2 we define the picture quality measures. In Section 3 we briefly present the image compression systems used in our experiment. In Section 4 we evaluate statistical and frequency properties of the test images. Section 5 contains numerical results of the picture quality measures. In this section we analyze the correlation of objective measures with subjective grades, and we propose objective measures which should be used in relation to each image compression system, as well as objective measures which are suitable for the comparison of picture quality between different compression systems.
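The evaluation loop described above can be summarized in a few lines: compute an objective measure for each degraded image and correlate it with the subjective MOS grades. The sketch below uses PSNR and made-up MOS values purely to show the mechanics; it does not reproduce the paper's nine measures or its data.

```python
# Correlating an objective quality measure (PSNR) with hypothetical MOS grades.
import numpy as np

def psnr(original, degraded, peak=255.0):
    mse = np.mean((original.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def pearson(a, b):
    return float(np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1])

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, (64, 64))
# Simulate increasing degradation (a stand-in for rising compression ratios).
degraded = [np.clip(reference + rng.normal(0, s, reference.shape), 0, 255) for s in (2, 5, 10, 20)]
objective = [psnr(reference, d) for d in degraded]
mos = [4.8, 4.1, 3.2, 2.0]   # hypothetical subjective grades on a 1-5 scale
print("PSNR:", [round(q, 1) for q in objective])
print("correlation with MOS:", round(pearson(objective, mos), 3))
```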

129 citations


Journal ArticleDOI
TL;DR: A new lossy to lossless progressive compression scheme for triangular meshes, based on a wavelet multiresolution theory for irregular 3D meshes, is proposed, where the algorithm performs better than previously published approaches for both lossless and progressive compression.
Abstract: We propose a new lossy to lossless progressive compression scheme for triangular meshes, based on a wavelet multiresolution theory for irregular 3D meshes. Although remeshing techniques obtain better compression ratios for geometric compression, this approach can be very effective when one wants to keep the connectivity and geometry of the processed mesh completely unchanged. The simplification is based on solving an inverse problem. Optimization of both the connectivity and geometry of the processed mesh improves the approximation quality and the compression ratio of the scheme at each resolution level. We show why this algorithm provides an efficient means of compression for both connectivity and geometry of 3D meshes, and this is illustrated by experimental results on various sets of reference meshes, where our algorithm performs better than previously published approaches for both lossless and progressive compression.

126 citations


01 Jan 2004
TL;DR: In this paper, an embedded block-based, image wavelet transform coding algorithm of low complexity, 3D-SPECK, has been proposed for 3D volumetric image data.
Abstract: We propose an embedded, block-based, image wavelet transform coding algorithm of low complexity. The embedded coding of the Set Partitioned Embedded bloCK (SPECK) algorithm is modified and extended to three dimensions. The resultant algorithm, three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), efficiently encodes 3D volumetric image data by exploiting the dependencies in all dimensions. 3D-SPECK generates an embedded bit stream and therefore provides progressive transmission. We describe the use of this coding algorithm in two implementations, using an integer wavelet transform and a floating point wavelet transform, where the former enables lossy and lossless decompression from the same bit stream, and the latter achieves better performance in lossy compression. A wavelet packet structure and coefficient scaling are used to make the integer filter transform approximately unitary. The structure of hyperspectral images reveals spectral responses that would seem an ideal candidate for compression by 3D-SPECK. We demonstrate that 3D-SPECK, a wavelet domain compression algorithm, can preserve spectral profiles well. Compared with the lossless version of the benchmark JPEG2000 (multi-component), the 3D-SPECK lossless algorithm produces an average decrease of 3.0% in compressed file size for Airborne Visible Infrared Imaging Spectrometer images, the typical hyperspectral imagery. We also conduct comparisons of the lossy implementation with other state-of-the-art algorithms such as three-Dimensional Set Partitioning In Hierarchical Trees (3D-SPIHT) and JPEG2000. We conclude that this algorithm, in addition to being very flexible, retains all the desirable features of these algorithms and is highly competitive with 3D-SPIHT and better than JPEG2000 in compression efficiency.

118 citations


Proceedings ArticleDOI
02 Jun 2004
TL;DR: With this algorithm design, compact codes are produced for point-based 3D models of surfaces that are scalable with respect to rate, quality, and resolution; competitive rate-distortion performance was achieved with excellent reconstruction quality.
Abstract: In order to efficiently archive and transmit large 3D models, lossy and lossless compression methods are needed. We propose a compression scheme for coordinate data of point-based 3D models of surfaces. A point-based model is processed for compression in a pipeline of three subsequent operations: partitioning, parameterization, and coding. First the point set is partitioned, yielding a suitable number of point clusters. Each cluster corresponds to a surface patch that can be parameterized as a height field and resampled on a regular grid. The domains of the height fields have irregular shapes that are encoded losslessly. The height fields themselves are encoded using a shape-adaptive wavelet coder, producing a progressive bitstream for each patch. A rate-distortion optimization provides an optimal bit allocation for the individual patch codes. With this algorithm design, compact codes are produced that are scalable with respect to rate, quality, and resolution. In our encodings of complex 3D models, competitive rate-distortion performances were achieved with excellent reconstruction quality at under 3 bits per point (bpp).

108 citations


Proceedings ArticleDOI
23 Mar 2004
TL;DR: This paper provides a brief overview of an emerging ISO/IEC standard for lossless audio coding, MPEG-4 ALS, and explains the choice of algorithms used in its design, and compares it to current state-of-the-art algorithms for Lossless audio compression.
Abstract: This paper provides a brief overview of an emerging ISO/IEC standard for lossless audio coding, MPEG-4 ALS, explains the choice of algorithms used in its design, and compares it to current state-of-the-art algorithms for lossless audio compression.
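Lossless audio coders of this kind typically pair short-term prediction with Rice/Golomb coding of the integer residual. The sketch below shows that generic pipeline with a fixed second-order predictor and a brute-force choice of the Rice parameter; it is not MPEG-4 ALS's adaptive predictor or bitstream format.

```python
# Generic lossless-audio pipeline: integer prediction followed by Rice coding of residuals.
import numpy as np

def predict_fixed_order2(x):
    """Second-order fixed predictor: x_hat[n] = 2*x[n-1] - x[n-2]."""
    x = np.asarray(x, dtype=np.int64)
    pred = np.zeros_like(x)
    pred[1] = x[0]
    pred[2:] = 2 * x[1:-1] - x[:-2]
    return x - pred                                # integer residual

def rice_length(residuals, k):
    """Bits needed to Rice-code the residuals with parameter k (zigzag-mapped)."""
    u = np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)  # map to >= 0
    return int(np.sum((u >> k) + 1 + k))           # unary quotient + k remainder bits

rng = np.random.default_rng(3)
t = np.arange(4096)
signal = np.rint(2000 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 8, t.size)).astype(np.int64)
res = predict_fixed_order2(signal)
best_k = min(range(12), key=lambda k: rice_length(res, k))
print("raw bits:", signal.size * 16)
print("coded bits (approx):", rice_length(res, best_k), "with k =", best_k)
```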

106 citations


Book ChapterDOI
31 Aug 2004
TL;DR: This work presents a lossless compression strategy to store and access large boolean matrices efficiently on disk, and adapts classical TSP heuristics by means of instance-partitioning and sampling.
Abstract: Large boolean matrices are a basic representational unit in a variety of applications, with some notable examples being interactive visualization systems, mining large graph structures, and association rule mining. Designing space and time efficient scalable storage and query mechanisms for such large matrices is a challenging problem. We present a lossless compression strategy to store and access such large matrices efficiently on disk. Our approach is based on viewing the columns of the matrix as points in a very high dimensional Hamming space, and then formulating an appropriate optimization problem that reduces to solving an instance of the Traveling Salesman Problem on this space. Finding good solutions to large TSPs in high dimensional Hamming spaces is itself a challenging and little-explored problem -- we cannot readily exploit geometry to avoid the need to examine all N² inter-city distances, and instances can be too large for standard TSP codes to run in main memory. Our multi-faceted approach adapts classical TSP heuristics by means of instance-partitioning and sampling, and may be of independent interest. For instances derived from interactive visualization and telephone call data we obtain significant improvement in access time over standard techniques, and for the visualization application we also make significant improvements in compression.
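The core idea, columns as points in Hamming space whose ordering determines how compressible the rows are, can be illustrated with a greedy nearest-neighbour tour and a crude run-length cost model, as in the sketch below. Both are simplifications introduced here for illustration; the paper's instance-partitioning and sampling heuristics are not shown.

```python
# Reorder boolean-matrix columns with a greedy nearest-neighbour tour in Hamming space.
import numpy as np

def nearest_neighbor_order(cols):
    """cols: (n_cols, n_rows) boolean array. Returns a column ordering."""
    remaining = list(range(len(cols)))
    order = [remaining.pop(0)]
    while remaining:
        last = cols[order[-1]]
        nxt = min(remaining, key=lambda j: np.count_nonzero(cols[j] != last))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def run_count(matrix):
    """Total number of horizontal runs: a crude proxy for RLE-compressed size."""
    return int(np.sum(matrix[:, 1:] != matrix[:, :-1]) + matrix.shape[0])

rng = np.random.default_rng(4)
# Build a matrix whose columns fall into a few similar groups, then shuffle them.
groups = rng.integers(0, 2, (4, 256)).astype(bool)
M = np.stack([groups[rng.integers(4)] ^ (rng.random(256) < 0.02) for _ in range(64)], axis=1)
order = nearest_neighbor_order(M.T)
print("runs before reordering:", run_count(M))
print("runs after reordering: ", run_count(M[:, order]))
```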

100 citations


Journal ArticleDOI
TL;DR: The techniques are suitable for a range of secure three-dimensional object storage and transmission applications; lossless compression alone achieves compression ratios lower than 1.05, rising to between 12 and 65 when combined with quantization, with good decryption and reconstruction quality.
Abstract: We present the results of applying data compression techniques to encrypted three-dimensional objects. The objects are captured using phase-shift digital holography and encrypted using a random phase mask in the Fresnel domain. Lossy quantization is combined with lossless coding techniques to quantify compression ratios. Lossless compression alone applied to the encrypted holographic data achieves compression ratios lower than 1.05. When combined with quantization and an integer encoding scheme, this rises to between 12 and 65 (depending on the hologram chosen and the method of measuring compression ratio), with good decryption and reconstruction quality. Our techniques are suitable for a range of secure three-dimensional object storage and transmission applications.
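The measurement itself is straightforward to reproduce in spirit: compress the raw complex field losslessly, then quantize and compress again, and compare sizes against the original. In the sketch below, random noise stands in for the encrypted hologram (which is statistically noise-like), zlib stands in for the lossless coder, and 4-bit quantization is an arbitrary illustrative choice.

```python
# Compare lossless-only versus quantize-then-lossless compression of noise-like complex data.
import numpy as np, zlib

rng = np.random.default_rng(5)
field = (rng.normal(0, 1, (256, 256)) + 1j * rng.normal(0, 1, (256, 256))).astype(np.complex64)
raw = field.tobytes()
print("lossless alone, ratio ~", round(len(raw) / len(zlib.compress(raw, 9)), 2))

# Quantize real/imag parts to 16 levels (an illustrative choice), store as int8.
parts = np.stack([field.real, field.imag])
q = np.round((parts - parts.min()) / (parts.max() - parts.min()) * 15).astype(np.int8)
print("quantized + lossless, ratio ~", round(len(raw) / len(zlib.compress(q.tobytes(), 9)), 2))
```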

96 citations


Proceedings ArticleDOI
28 Mar 2004
TL;DR: A new reversible (lossless) watermarking algorithm for digital images that exploits the inherent correlation among the adjacent pixels in an image region using a predictor to embed a large payload while keeping the distortion low.
Abstract: We propose a new reversible (lossless) watermarking algorithm for digital images. Being reversible, the algorithm enables the recovery of the original host information upon the extraction of the embedded information. The proposed technique exploits the inherent correlation among the adjacent pixels in an image region using a predictor. The information bits are embedded into the prediction errors, which enables us to embed a large payload while keeping the distortion low. A histogram shift at the encoder enables the decoder to identify the embedded location.
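The reversible mechanism can be shown in one dimension: predict a sample from an unmodified neighbour, expand the prediction error, and append the payload bit, so the decoder can peel the bit off and restore the sample exactly. The sketch below is our own simplified variant of that idea (no histogram shift, no overflow handling), not the authors' algorithm.

```python
# Reversible embedding into prediction errors (1-D toy example).
import numpy as np

def embed(pixels, bits):
    """Embed one bit into every odd-indexed sample, predicting from its left
    (even-indexed, never modified) neighbor via prediction-error expansion."""
    x = np.asarray(pixels, dtype=np.int64).copy()
    for i, b in zip(range(1, len(x), 2), bits):
        e = x[i] - x[i - 1]          # prediction error
        x[i] = x[i - 1] + 2 * e + b  # expand the error and append the bit
    return x

def extract(marked):
    x = np.asarray(marked, dtype=np.int64).copy()
    bits = []
    for i in range(1, len(x), 2):
        e2 = x[i] - x[i - 1]
        bits.append(int(e2 & 1))
        x[i] = x[i - 1] + (e2 >> 1)  # restore the original sample exactly
    return x, bits

host = np.array([100, 101, 103, 102, 99, 100, 98, 97])
payload = [1, 0, 1, 1]
marked = embed(host, payload)
recovered, bits = extract(marked)
assert np.array_equal(recovered, host) and bits == payload
print("marked:", marked.tolist(), "-> payload", bits, "and host recovered losslessly")
```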

89 citations


Patent
13 Feb 2004
TL;DR: In this article, an efficient method for compressing sampled analog signals in real-time, without loss, or at a user-specified rate or distortion level, is described, where the preprocessor apparatus measures one or more signal parameters and, under program control, appropriately modifies the pre-processor input signal to create the output signals that are more effectively compressed by a follow-on compressor.
Abstract: An efficient method for compressing sampled analog signals in real time, without loss, or at a user-specified rate or distortion level, is described. The present invention is particularly effective for compressing and decompressing high-speed, bandlimited analog signals that are not appropriately or effectively compressed by prior art speech, audio, image, and video compression algorithms due to various limitations of such prior art compression solutions. The present invention's preprocessor apparatus measures one or more signal parameters and, under program control, appropriately modifies the preprocessor input signal to create one or more preprocessor output signals that are more effectively compressed by a follow-on compressor. In many instances, the follow-on compressor operates most effectively when its input signal is at baseband. The compressor creates a stream of compressed data tokens and compression control parameters that represent the original sampled input signal using fewer bits. The decompression subsystem uses a decompressor to decompress the stream of compressed data tokens and compression control parameters. After decompression, the decompressor output signal is processed by a post-processor, which reverses the operations of the preprocessor during compression, generating a postprocessed signal that exactly matches (during lossless compression) or approximates (during lossy compression) the original sampled input signal. Parallel processing implementations of both the compression and decompression subsystems are described that can operate at higher sampling rates when compared to the sampling rates of a single compression or decompression subsystem. In addition to providing the benefits of real-time compression and decompression to a new, general class of sampled data users who previously could not obtain benefits from compression, the present invention also enhances the performance of test and measurement equipment (oscilloscopes, signal generators, spectrum analyzers, logic analyzers, etc.), busses and networks carrying sampled data, and data converters (A/D and D/A converters).

Proceedings ArticleDOI
27 Jun 2004
TL;DR: This work proposes a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise, and has been successfully applied to many commonly used images, thus demonstrating its generality.
Abstract: Recently, among various data hiding techniques, a new subset, lossless data hiding, has drawn tremendous interest. Most existing lossless data hiding algorithms are, however, fragile in the sense that they can be defeated when compression or other small alteration is applied to the marked image. The method of C. De Vleeschouwer et al. (see IEEE Trans. Multimedia, vol.5, p.97-105, 2003) is the only existing semi-fragile lossless data hiding technique (also referred to as robust lossless data hiding), which is robust against high quality JPEG compression. We first point out that this technique has a fatal problem: salt-and-pepper noise caused by using modulo 256 addition. We then propose a novel robust lossless data hiding technique, which does not generate salt-and-pepper noise. This technique has been successfully applied to many commonly used images (including medical images, more than 1000 images in the CorelDRAW database, and JPEG2000 test images), thus demonstrating its generality. The experimental results show that the visual quality, payload and robustness are acceptable. In addition to medical and law enforcement fields, it has been applied to authenticate losslessly compressed JPEG2000 images.

Patent
Kunal Mukerjee1
15 Apr 2004
TL;DR: Predictive lossless coding as discussed by the authors chooses and applies one of multiple available differential pulse-code modulation (DPCM) modes to individual macroblocks to produce DPCM residuals having a closer to optimal distribution for run-length Golomb-Rice (RLGR) entropy encoding.
Abstract: Predictive lossless coding provides effective lossless image compression of both photographic and graphics content in image and video media. Predictive lossless coding can operate on a macroblock basis for compatibility with existing image and video codecs. Predictive lossless coding chooses and applies one of multiple available differential pulse-code modulation (DPCM) modes to individual macroblocks to produce DPCM residuals having a closer to optimal distribution for run-length Golomb-Rice (RLGR) entropy encoding. This permits effective lossless entropy encoding despite the differing characteristics of photographic and graphics image content.
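A minimal sketch of the mode-selection idea follows: each block is tried against a few DPCM predictors and the one with the cheapest estimated Golomb-Rice cost is kept. The three predictors and the cost model are stand-ins chosen for illustration, not the patent's actual modes or its RLGR coder.

```python
# Choose a DPCM mode per block by estimated Golomb-Rice cost of the residuals.
import numpy as np

def residuals(block, mode):
    b = block.astype(np.int64)
    pred = np.zeros_like(b)
    if mode == "left":
        pred[:, 1:] = b[:, :-1]
    elif mode == "up":
        pred[1:, :] = b[:-1, :]
    elif mode == "avg":
        pred[1:, 1:] = (b[1:, :-1] + b[:-1, 1:]) // 2
    return b - pred

def rice_cost(res, k=2):
    u = np.where(res >= 0, 2 * res, -2 * res - 1)
    return int(np.sum((u >> k) + 1 + k))

def choose_mode(block):
    costs = {m: rice_cost(residuals(block, m)) for m in ("left", "up", "avg")}
    return min(costs, key=costs.get), costs

rng = np.random.default_rng(6)
# Rows of this block are (nearly) identical, so the "up" predictor should win.
block = np.tile(np.arange(16) * 8, (16, 1)) + rng.integers(0, 3, (16, 16))
mode, costs = choose_mode(block)
print("chosen DPCM mode:", mode, costs)
```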

Proceedings ArticleDOI
23 Mar 2004
TL;DR: In the proposed approach, only two consecutive frames are used to generate a small set of motion vectors that represent the motion from the previous frame to the current frame; the simple and efficient decompression makes it suitable for real-time applications.
Abstract: Geometry compression is the compression of the 3D geometric data that provides a computer graphics system with the scene description necessary to render images. Geometric data is quite large and, therefore, needs effective compression methods to decrease the transmission and storage bit requirements. A large amount of research has focused on static geometry compression, but only limited research has addressed animated geometry compression, the compression of temporal sequences of geometry data. This paper proposes an octree-based motion representation method that can be applied to compress animated geometric data. In our approach, 3D animated sequences can be represented with a compression factor of over 100, with slight losses in animation quality. The paper focuses on compressing vertex positions for all the frames. In the proposed approach, only two consecutive frames are used to generate a small set of motion vectors that represent the motion from the previous frame to the current frame. The motion vectors are used to predict the vertex positions for each frame except the first frame. The process generates a hierarchical octree motion representation for each frame. Quantization and an adaptive arithmetic coder are used to achieve further data reduction. The simple and efficient decompression of this approach makes it suitable for real-time applications.
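The following sketch illustrates cell-based motion prediction in its simplest form: vertices of the previous frame are bucketed into a coarse grid (one octree level), a mean motion vector is estimated per cell, and only the prediction residuals would need to be coded. The single-level grid and synthetic motion are simplifications of the paper's hierarchical octree.

```python
# Per-cell motion vectors predicting the current frame from the previous one.
import numpy as np

def cell_ids(vertices, origin, size, level=2):
    cells = np.floor((vertices - origin) / size * (2 ** level)).astype(int)
    cells = np.clip(cells, 0, 2 ** level - 1)
    return cells[:, 0] * 4 ** level + cells[:, 1] * 2 ** level + cells[:, 2]

rng = np.random.default_rng(7)
prev = rng.random((500, 3))
true_motion = np.array([0.02, -0.01, 0.015])
cur = prev + true_motion + rng.normal(0, 0.001, prev.shape)

ids = cell_ids(prev, prev.min(axis=0), np.ptp(prev, axis=0) + 1e-9)
motion = {c: (cur[ids == c] - prev[ids == c]).mean(axis=0) for c in np.unique(ids)}
pred = prev + np.stack([motion[c] for c in ids])
residual = cur - pred                      # only these small values need coding
print("mean |residual| per coordinate:", np.abs(residual).mean().round(5))
```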

Journal ArticleDOI
TL;DR: A new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation, with wavelet constructions for bilinear, bicubic, and biquintic B-spline subdivision, is presented.
Abstract: We present a new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation. Our method combines three approaches: subdivision surfaces of arbitrary topology, B-spline wavelets, and the lifting scheme for biorthogonal wavelet construction. The simple building blocks of our wavelet transform are local lifting operations performed on polygonal meshes with subdivision hierarchy. Starting with a coarse, irregular polyhedral base mesh, our transform creates a subdivision hierarchy of meshes converging to a smooth limit surface. At every subdivision level, geometric detail is expanded from wavelet coefficients and added to the surface. We present wavelet constructions for bilinear, bicubic, and biquintic B-spline subdivision. While the bilinear and bicubic constructions perform well in numerical experiments, the biquintic construction turns out to be unstable. For lossless compression, our transform is computed in integer arithmetic, mapping integer coordinates of control points to integer wavelet coefficients. Our approach provides a highly efficient and progressive representation for complex geometries of arbitrary topology.

Patent
15 Jan 2004
TL;DR: In this article, a block data compression system consisting of a Compression Unit and a Decompression Unit, together with an algorithm for fast block data compression using multi-byte search, is presented.
Abstract: A block data compression system comprising a Compression Unit and a Decompression Unit, and an algorithm for fast block data compression using multi-byte search. The objective of the invention is to develop a block data compression system and algorithm for fast block data compression with multi-byte search for optimal encoding during the learning phase of substitutional methods, allowing length-limited and relatively small blocks of input data symbols to be compressed independently, as required by random-access storage or telecommunication devices, and to reach high-performance characteristics through the employed accelerating architectures and highly pipelined data-flow principles. According to the present invention, these objectives are accomplished by a Compression Unit comprising an Input-FIFO (8) connected to a Modeling Unit (6), where said Modeling Unit (6) is connected to a multitude of memory locations represented by a Trie-Dictionary (4) memory, to a Zero-Finder Look-Up Table (3), to search means in the form of a Comparison Unit (5), to memory means in the form of a Literal-Dictionary (2), and also to an Encoder Unit (7); said Encoder Unit (7) is also connected through an Aligning Unit (7) to an Output-FIFO (12). The invention comprises a block data compression system, composed of a Compression Unit and a Decompression Unit, and an algorithm for fast block data compression using multi-byte search; it is related to the field of data compression, specifically to the implementation of lossless, adaptive, reversible and fast block data compression for storage and telecommunication devices.

Journal ArticleDOI
TL;DR: The methodology presented here is based on the known SCAN formal language for data accessing and processing; it produces a lossless compression ratio of 1.88 for the standard Lenna image, while the hiding part is able to embed digital information amounting to 12.5% of the size of the original image.

Proceedings ArticleDOI
01 Jun 2004
TL;DR: Vpc3 employs value predictors to bring out and amplify patterns in the traces so that conventional compressors can compress them more effectively, which not only results in much higher compression rates but also provides faster compression and decompression.
Abstract: Trace files are widely used in research and academia to study the behavior of programs. They are simple to process and guarantee repeatability. Unfortunately, they tend to be very large. This paper describes vpc3, a fundamentally new approach to compressing program traces. Vpc3 employs value predictors to bring out and amplify patterns in the traces so that conventional compressors can compress them more effectively. In fact, our approach not only results in much higher compression rates but also provides faster compression and decompression. For example, compared to bzip2, vpc3's geometric mean compression rate on SPECcpu2000 store address traces is 18.4 times higher, compression is ten times faster, and decompression is three times faster.
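The principle behind this family of trace compressors can be shown with a single stride predictor: replace each trace entry by a hit/miss flag and keep only the mispredicted values, then hand both streams to a conventional compressor. The sketch below uses that one predictor and zlib; vpc3 itself combines several cooperating predictors and has its own output format.

```python
# Stride-predictor front end that exposes trace patterns to a conventional compressor.
import zlib, struct

def stride_transform(trace):
    """Emit a hit/miss flag per element plus the values a stride predictor missed."""
    flags, misses = bytearray(), bytearray()
    prev, stride = None, 0
    for v in trace:
        pred = (prev + stride) if prev is not None else None
        if v == pred:
            flags.append(1)
        else:
            flags.append(0)
            misses += struct.pack("<Q", v)
        stride = (v - prev) if prev is not None else 0
        prev = v
    return bytes(flags), bytes(misses)

# Synthetic store-address trace: mostly strided accesses with occasional jumps.
trace, addr = [], 0x10000000
for i in range(100000):
    if i % 1000 == 0:
        addr = 0x10000000 + i * 4096      # occasional jump the predictor misses
    trace.append(addr)
    addr += 64                            # regular stride the predictor captures

raw = b"".join(struct.pack("<Q", v) for v in trace)
flags, misses = stride_transform(trace)
print("zlib on raw trace:   ", len(zlib.compress(raw, 9)), "bytes")
print("predictor + zlib:    ", len(zlib.compress(flags, 9)) + len(zlib.compress(misses, 9)), "bytes")
```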

Proceedings ArticleDOI
16 Feb 2004
TL;DR: A new variable-sized-block method for VLIW code compression is proposed, which is fully adaptive and generates the coding table on the fly during compression and decompression, and which has higher decoding bandwidth and a comparable compression ratio.
Abstract: We propose a new variable-sized-block method for VLIW code compression. Code compression traditionally works on fixed-sized blocks and its efficiency is limited by the small block size. Branch blocks -- instructions between two consecutive possible branch targets -- provide larger blocks for code compression. We propose LZW-based algorithms to compress branch blocks. Our approach is fully adaptive and generates the coding table on the fly during compression and decompression. When encountering a branch target, the coding table is cleared to ensure correctness. Decompression requires only a simple lookup and an update when necessary. Our method provides 8 bytes peak decompression bandwidth and 1.82 bytes on average. Compared to Huffman's 1-byte and V2F's 13-bit peak performance, our methods have higher decoding bandwidth and comparable compression ratios. Parallel decompression can also be applied to our methods, which makes them more suitable for VLIW architectures.
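The reset-at-boundary behaviour is easy to see in a plain LZW coder: the string table is rebuilt from scratch whenever a branch target is reached, so every branch block can be decompressed independently. The sketch below illustrates only that idea; a real VLIW scheme packs the codes into fixed-width fields and uses a hardware-oriented table.

```python
# LZW with the string table cleared at "branch targets" (block boundaries).
def lzw_compress(data: bytes, reset_points=frozenset()):
    codes, out, w = {bytes([i]): i for i in range(256)}, [], b""
    for pos, byte in enumerate(data):
        if pos in reset_points:                      # branch target: flush and reset
            if w:
                out.append(codes[w])
            codes, w = {bytes([i]): i for i in range(256)}, b""
        wc = w + bytes([byte])
        if wc in codes:
            w = wc
        else:
            out.append(codes[w])
            codes[wc] = len(codes)
            w = bytes([byte])
    if w:
        out.append(codes[w])
    return out

def lzw_decompress(codes_out):  # valid for a single block (no resets inside)
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes_out[0]]
    result = bytearray(prev)
    for c in codes_out[1:]:
        entry = table[c] if c in table else prev + prev[:1]
        result += entry
        table[len(table)] = prev + entry[:1]
        prev = entry
    return bytes(result)

block = bytes([0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x56] * 8)
codes = lzw_compress(block)
assert lzw_decompress(codes) == block
print(f"{len(block)} instruction bytes -> {len(codes)} LZW codes")
```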

01 May 2004
TL;DR: This document describes an additional compression method associated with a lossless data compression algorithm for use with TLS, and it describes a method for the specification of additional TLS compression methods.
Abstract: The Transport Layer Security (TLS) protocol (RFC 2246) includes features to negotiate selection of a lossless data compression method as part of the TLS Handshake Protocol and to then apply the algorithm associated with the selected method as part of the TLS Record Protocol. TLS defines one standard compression method which specifies that data exchanged via the record protocol will not be compressed. This document describes an additional compression method associated with a lossless data compression algorithm for use with TLS, and it describes a method for the specification of additional TLS compression methods.
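Conceptually, the negotiated method compresses each record's plaintext with a lossless algorithm (DEFLATE in this document's case) while keeping the compression history across records. The sketch below mimics that behaviour with zlib sync flushes; it is an illustration of the idea, not an implementation of the TLS record layer.

```python
# Per-record DEFLATE compression with shared history, in the spirit of TLS compression.
import zlib

class RecordCompressor:
    def __init__(self):
        # One long-lived DEFLATE stream, flushed at record boundaries.
        self._c = zlib.compressobj(9)
        self._d = zlib.decompressobj()

    def compress_record(self, plaintext: bytes) -> bytes:
        return self._c.compress(plaintext) + self._c.flush(zlib.Z_SYNC_FLUSH)

    def decompress_record(self, compressed: bytes) -> bytes:
        return self._d.decompress(compressed)

rc = RecordCompressor()
records = [b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n"] * 3
for r in records:
    c = rc.compress_record(r)
    assert rc.decompress_record(c) == r
    print(len(r), "->", len(c), "bytes")   # later records shrink thanks to shared history
```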

Patent
02 Jul 2004
TL;DR: In this paper, a method and apparatus for encoding high dynamic range video by means of video compression is presented, which comprises the steps of providing high dynamic range (HDR) tristimulus color data (XYZ) for each frame of the video and threshold versus intensity data for a human observer, and constructing a perceptually conservative luminance transformation from continuous luminance data (Y) to discrete values (Lp) using that threshold versus intensity data.
Abstract: A method and apparatus for encoding high dynamic range video by means of video compression is shown. The method comprises the steps of providing high dynamic range (HDR) tristimulus color data (XYZ) for each frame of the video and threshold versus intensity data for a human observer; constructing a perceptually conservative luminance transformation from continuous luminance data (Y) to discrete values (Lp) using said threshold versus intensity data for the human observer; transforming the HDR tristimulus color data into perceptually linear color data of three color channels (Lp, u′, v′) for obtaining visually lossless compressed frames; estimating motion vectors between consecutive frames of the video and compensating the differences of the tristimulus color data for performing inter-frame encoding and inter-frame compression; transforming the compensated differences of the tristimulus color data to frequency space data; quantizing said frequency space data; variable-length encoding of the quantized frequency space data; and storing or transmitting a stream of visual data resulting from the encoded quantized frequency space data.

Journal ArticleDOI
TL;DR: In this paper, a method to generate an accurate rational model of lossy systems from either measurements or an electromagnetic analysis is presented, which is valid for either lossless or lossy system responses.
Abstract: A method to generate an accurate rational model of lossy systems from either measurements or an electromagnetic analysis is presented. The Cauchy method has been used to achieve this goal. This formulation is valid for either lossless or lossy system responses. Thus, it provides an improvement over the conventional Cauchy method and takes into account the relationship between the transmission and reflection coefficients of the system, which in our case is a filter. The resulting model can be used to extract the coupling structure of the filter. Two examples are presented: one deals with measured data and the other uses numerical simulation data from an electromagnetic analysis.
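The Cauchy method boils down to a linear least-squares problem: writing H ≈ N/D and enforcing N(f_k) − H(f_k)·D(f_k) = 0 at the sample points gives a homogeneous system whose null vector holds the polynomial coefficients. The sketch below shows that numerically on synthetic data; the orders and test response are arbitrary choices, and the paper's joint treatment of transmission and reflection coefficients is not included.

```python
# Cauchy-style rational fit: solve N(f_k) - H(f_k)*D(f_k) = 0 via the SVD null vector.
import numpy as np

def cauchy_fit(f, H, num_order, den_order):
    VN = np.vander(f, num_order + 1, increasing=True)
    VD = np.vander(f, den_order + 1, increasing=True)
    A = np.hstack([VN, -(H[:, None] * VD)])
    _, _, Vh = np.linalg.svd(A)
    coeffs = Vh[-1].conj()                 # right singular vector of smallest singular value
    return coeffs[:num_order + 1], coeffs[num_order + 1:]   # numerator, denominator

def rational_eval(f, num, den):
    return np.polyval(num[::-1], f) / np.polyval(den[::-1], f)

# Synthetic "measured" response of a simple rational (filter-like) system.
f = np.linspace(0.1, 2.0, 40)
H_true = (1 + 0.5 * f) / (1 + 0.2 * f + 1.5 * f ** 2)
num, den = cauchy_fit(f, H_true.astype(complex), num_order=1, den_order=2)
print("max fit error:", np.max(np.abs(rational_eval(f, num, den) - H_true)))
```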

Journal ArticleDOI
TL;DR: Inspired by several generalization bounds, "compression coefficients" for SVMs are constructed which measure the amount by which the training labels can be compressed by a code built from the separating hyperplane and can fairly accurately predict the parameters for which the test error is minimized.
Abstract: In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds we construct "compression coefficients" for SVMs which measure the amount by which the training labels can be compressed by a code built from the separating hyperplane. The main idea is to relate the coding precision to geometrical concepts such as the width of the margin or the shape of the data in the feature space. The so derived compression coefficients combine well known quantities such as the radius-margin term R²/ρ², the eigenvalues of the kernel matrix, and the number of support vectors. To test whether they are useful in practice we ran model selection experiments on benchmark data sets. As a result we found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.

Patent
17 Dec 2004
TL;DR: In this paper, an efficient lapped transform is realized using pre- and post-filters (or reversible overlap operators) that are structured of unit determinant component matrices, which can be implemented using planar shears or lifting steps.
Abstract: An efficient lapped transform is realized using pre- and post-filters (or reversible overlap operators) that are structured of unit determinant component matrices. The pre- and post-filters are realized as a succession of planar rotational transforms and unit determinant planar scaling transforms. The planar scaling transforms can be implemented using planar shears or lifting steps. Further, the planar rotations and planar shears have an implementation as reversible/lossless operations, giving as a result, a reversible overlap operator.
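The reversibility claim rests on a standard trick: a planar rotation factors into three shears (lifting steps), and rounding each shear keeps every step exactly invertible on integers. The sketch below demonstrates that factorization for a single 2x2 rotation; it shows the generic lifting construction, not the patent's specific operator structure.

```python
# A planar rotation as three rounded shears: lossless on integers despite rounding.
import math

def lift_rotate(x, y, theta):
    p = -math.tan(theta / 2.0)        # shear parameter for the outer steps
    s = math.sin(theta)               # shear parameter for the middle step
    x = x + round(p * y)
    y = y + round(s * x)
    x = x + round(p * y)
    return x, y

def lift_unrotate(x, y, theta):
    p = -math.tan(theta / 2.0)
    s = math.sin(theta)
    x = x - round(p * y)              # undo the lifting steps in reverse order
    y = y - round(s * x)
    x = x - round(p * y)
    return x, y

theta = math.pi / 5
for sample in [(7, -3), (120, 45), (-999, 1001)]:
    rotated = lift_rotate(*sample, theta)
    assert lift_unrotate(*rotated, theta) == sample   # exactly reversible
print("integer rotation by", round(math.degrees(theta), 1), "degrees is exactly reversible")
```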

Proceedings ArticleDOI
16 Aug 2004
TL;DR: Lossless and lossy compression algorithms for microarray images originally digitized at 16 bpp (bits per pixel) are proposed that achieve an average of 9.5-11.5 bpp (lossless) and 4.6-6.7 bpp (lossy); the methods are based on a completely automatic gridding procedure of the image.
Abstract: With the recent explosion of interest in microarray technology, massive amounts of microarray images are currently being produced. The storage and the transmission of this type of data are becoming increasingly challenging. Here we propose lossless and lossy compression algorithms for microarray images originally digitized at 16 bpp (bits per pixel) that achieve an average of 9.5-11.5 bpp (lossless) and 4.6-6.7 bpp (lossy, with a PSNR of 63 dB). The lossy compression is applied only to the background of the image, thereby preserving the regions of interest. The methods are based on a completely automatic gridding procedure of the image.

Proceedings ArticleDOI
11 Oct 2004
TL;DR: This paper introduces the use of transfer functions at decompression time to guide a level-of-detail selection scheme and demonstrates a significant reduction in the required amount of data while maintaining rendering quality.
Abstract: The size of standard volumetric data sets in medical imaging is rapidly increasing causing severe performance limitations in direct volume rendering pipelines. The methods presented in this paper exploit the medical knowledge embedded in the transfer function to reduce the required bandwidth in the pipeline. Typically, medical transfer functions cause large subsets of the volume to give little or no contribution to the rendered image. Thus, parts of the volume can be represented at low resolution while retaining overall visual quality. This paper introduces the use of transfer functions at decompression time to guide a level-of-detail selection scheme. The method may be used in combination with traditional lossy or lossless compression schemes. We base our current implementation on a multi-resolution data representation using compressed wavelet transformed blocks. The presented results using the adaptive decompression demonstrate a significant reduction in the required amount of data while maintaining rendering quality. Even though the focus of this paper is medical imaging, the results are applicable to volume rendering in many other domains.
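A minimal version of the transfer-function-guided selection is sketched below: a block is kept at full resolution only if the transfer function assigns noticeable opacity somewhere in the block's value range (min/max being cheap per-block metadata). The block size, opacity threshold, and the monotone example transfer function are all illustrative assumptions, not the paper's implementation.

```python
# Transfer-function-guided level-of-detail selection over fixed-size volume blocks.
import numpy as np

def transfer_function_opacity(values):
    # Hypothetical medical-style TF: transparent below 300, ramping up to opaque.
    return np.clip((np.asarray(values, float) - 300.0) / 200.0, 0.0, 1.0)

def select_lod(volume, block=16, threshold=0.01):
    """Return a dict mapping block index -> 'full' or 'low' resolution."""
    decisions = {}
    bz, by, bx = (s // block for s in volume.shape)
    for k in range(bz):
        for j in range(by):
            for i in range(bx):
                sub = volume[k*block:(k+1)*block, j*block:(j+1)*block, i*block:(i+1)*block]
                # Endpoint test on the block's value range (enough here because the
                # example TF is monotone); min/max can be stored per compressed block.
                max_opacity = transfer_function_opacity([sub.min(), sub.max()]).max()
                decisions[(k, j, i)] = "full" if max_opacity > threshold else "low"
    return decisions

rng = np.random.default_rng(8)
volume = rng.integers(0, 280, (64, 64, 64))           # mostly TF-transparent values
volume[24:40, 24:40, 24:40] += 400                     # an embedded dense structure
lod = select_lod(volume)
full = sum(1 for v in lod.values() if v == "full")
print(f"{full} of {len(lod)} blocks kept at full resolution")
```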

Journal ArticleDOI
TL;DR: A survey of palette reordering methods is provided, and it is concluded that the pairwise merging heuristic proposed by Memon et al. is the most effective, but also the most computationally demanding.
Abstract: Palette reordering is a well-known and very effective approach for improving the compression of color-indexed images. In this paper, we provide a survey of palette reordering methods, and we give experimental results comparing the ability of seven of them in improving the compression efficiency of JPEG-LS and lossless JPEG 2000. We concluded that the pairwise merging heuristic proposed by Memon et al. is the most effective, but also the most computationally demanding. Moreover, we found that the second most effective method is a modified version of Zeng's reordering technique, which was 3%-5% worse than pairwise merging, but much faster.

Proceedings ArticleDOI
30 Mar 2004
TL;DR: A semantic compression algorithm called ItCompress (ITerative Compression) is proposed, which achieves good compression while permitting access even at the attribute level without requiring the decompression of a larger unit, and is a cost-effective compression technique.
Abstract: Real datasets are often large enough to necessitate data compression. Traditional 'syntactic' data compression methods treat the table as a large byte string and operate at the byte level. The tradeoff in such cases is usually between the ease of retrieval (the ease with which one can retrieve a single tuple or attribute value without decompressing a much larger unit) and the effectiveness of the compression. In this regard, the use of semantic compression has generated considerable interest and motivated certain recent works. We propose a semantic compression algorithm called ItCompress (ITerative Compression), which achieves good compression while permitting access even at the attribute level without requiring the decompression of a larger unit. ItCompress iteratively improves the compression ratio of the compressed output during each scan of the table. The amount of compression can be tuned based on the number of iterations. Moreover, the initial iterations provide significant compression, thereby making it a cost-effective compression technique. Extensive experiments were conducted and the results indicate the superiority of ItCompress with respect to previously known techniques, such as 'SPARTAN' and 'fascicles'.

Journal ArticleDOI
23 May 2004
TL;DR: It is found that, on average, pixel matching errors with similar magnitudes tend to appear in clusters for natural video sequences, and the proposed approach is much better than other pixel-gradient-based adaptive PDS algorithms.
Abstract: In order to reduce the computational load, many conventional fast block-matching algorithms have been developed that reduce the set of possible search points in the search window. All of these algorithms produce some quality degradation of the predicted image. Alternatively, another class of fast block-matching algorithms, which do not introduce any prediction error compared with the full-search algorithm, reduces the number of necessary matching evaluations for every search point in the search window. The partial distortion search (PDS) is a well-known technique of this second kind. In the literature, many studies have tried to improve both lossy and lossless block-matching algorithms by making use of the assumption that pixels with larger gradient magnitudes have larger matching errors on average. Based on a simple analysis, it is found that, on average, pixel matching errors with similar magnitudes tend to appear in clusters for natural video sequences. By using this clustering characteristic, we propose an adaptive PDS algorithm which significantly improves the computational efficiency of the original PDS. This approach is much better than other algorithms which make use of pixel gradients. Furthermore, the proposed algorithm is most suitable for motion estimation of both opaque and boundary macroblocks of an arbitrarily-shaped object in MPEG-4 coding.
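The lossless early-termination trick at the heart of PDS is compact enough to sketch directly: accumulate the SAD row by row and abandon a candidate as soon as the partial sum exceeds the best SAD so far, which never changes the final motion vector. The sketch below shows plain PDS; the paper's adaptive ordering of clustered high-error pixels is not included.

```python
# Partial distortion search: row-wise SAD accumulation with early termination.
import numpy as np

def pds_block_match(cur_block, ref, top, left, search=8):
    best_sad, best_mv = None, (0, 0)
    n = cur_block.shape[0]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue
            sad = 0
            for row in range(n):                      # partial distortion: row by row
                sad += int(np.abs(cur_block[row].astype(np.int64)
                                  - ref[y + row, x:x + n].astype(np.int64)).sum())
                if best_sad is not None and sad >= best_sad:
                    break                             # early termination, no quality loss
            else:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(9)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
cur_block = ref[20:36, 22:38]                          # true motion: (+4, +6) from (16, 16)
print(pds_block_match(cur_block, ref, top=16, left=16))
```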

Proceedings ArticleDOI
01 Dec 2004
TL;DR: Instead of using modulo 256 addition, this method takes special measures that prevent overflow/underflow (one of the critical issues for lossless watermarking) and hence does not suffer from annoying salt-and-pepper noise.
Abstract: In this paper, a new semi-fragile lossless digital watermarking scheme based on the integer wavelet transform (IWT) is presented. Data are embedded into some IWT coefficients. The wavelet family applied is the 5/3 filter bank, which serves as the default transformation in the JPEG2000 standard for lossless image compression. As a result, the proposed scheme can be integrated into the JPEG2000 standard smoothly. Different from the only existing semi-fragile lossless watermarking scheme, instead of using modulo 256 addition, this method takes special measures that prevent overflow/underflow (one of the critical issues for lossless watermarking) and hence does not suffer from annoying salt-and-pepper noise. The exact cover media can be losslessly recovered if the stego-image has not been altered. Furthermore, the hidden data can be extracted with no error even after incidental alterations, including compression, have been applied to the stego-image (hence the name "semi-fragile").
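The 5/3 filter bank mentioned above is the JPEG2000 reversible transform, which can be written as two integer lifting steps with rounding, so the inverse reproduces the samples bit-exactly. The sketch below is a one-level, 1-D version with simple symmetric boundary handling; it illustrates the transform the scheme embeds into, not the embedding algorithm itself.

```python
# Reversible 5/3 integer lifting transform (one level, 1-D, even-length input).
def iwt53_forward(x):
    """Returns (lowpass, highpass) integer sub-bands, using symmetric extension."""
    n = len(x)
    d = [x[2*i + 1] - ((x[2*i] + x[min(2*i + 2, n - 2)]) >> 1) for i in range(n // 2)]
    s = [x[2*i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n // 2)]
    return s, d

def iwt53_inverse(s, d):
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):                       # undo the update step
        x[2*i] = s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
    for i in range(len(d)):                       # undo the predict step
        x[2*i + 1] = d[i] + ((x[2*i] + x[min(2*i + 2, n - 2)]) >> 1)
    return x

samples = [87, 90, 92, 95, 93, 91, 120, 60, 61, 63]
low, high = iwt53_forward(samples)
assert iwt53_inverse(low, high) == samples        # bit-exact reconstruction
print("lowpass :", low)
print("highpass:", high)
```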