
Showing papers on "Lossless compression published in 1999"


Proceedings ArticleDOI
01 Aug 1999
TL;DR: By building and maintaining a dictionary of individual users' path updates, the proposed adaptive on-line algorithm can learn subscribers' profiles; the compressibility of the variable-to-fixed-length encoding of the acclaimed Lempel-Ziv family of algorithms reduces the update cost.
Abstract: The complexity of the mobility tracking problem in a cellular environment has been characterized under an information-theoretic framework. Shannon's entropy measure is identified as a basis for comparing user mobility models. By building and maintaining a dictionary of individual users' path updates (as opposed to the widely used location updates), the proposed adaptive on-line algorithm can learn subscribers' profiles. This technique evolves out of the concepts of lossless compression. The compressibility of the variable-to-fixed-length encoding of the acclaimed Lempel-Ziv family of algorithms reduces the update cost, whereas their built-in predictive power can be effectively used to reduce paging cost.
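For readers unfamiliar with the variable-to-fixed-length parsing the abstract refers to, the following is a minimal LZ78-style sketch (a generic illustration, not the paper's exact update scheme) of how a per-user dictionary of movement phrases could be built incrementally, with each cell crossing treated as a symbol:

```python
# Minimal LZ78-style sketch (not the paper's exact scheme) of building a
# per-user dictionary of movement-path phrases: each cell crossing is a symbol,
# and every new phrase extends a previously seen one by a single symbol.
def lz78_parse(path):
    """Return (phrase_index, new_symbol) pairs for a symbol sequence."""
    dictionary = {(): 0}          # phrase -> index; index 0 is the empty phrase
    output, phrase = [], ()
    for symbol in path:
        candidate = phrase + (symbol,)
        if candidate in dictionary:
            phrase = candidate    # keep extending the longest known phrase
        else:
            output.append((dictionary[phrase], symbol))
            dictionary[candidate] = len(dictionary)
            phrase = ()
    if phrase:                    # flush an unfinished phrase at the end
        output.append((dictionary[phrase[:-1]], phrase[-1]))
    return output

# Example: a hypothetical user's cell-crossing history, one letter per cell.
print(lz78_parse("ababababcababc"))
```

Each emitted pair references a previously seen phrase plus one new symbol, which is what gives the dictionary its built-in predictive power.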

346 citations


Book
10 Dec 1999
TL;DR: Part 1 Fractal Image Compression: Iterated Function Systems; Fractal Encoding of Grayscale Images; Speeding Up Fractal Encoding. Part 2 Wavelet Image Compression: Simple Wavelets; Daubechies Wavelets; Wavelet Image Compression Techniques; Comparison of Fractal and Wavelet Image Compression.
Abstract: Part 1 Fractal Image Compression: Iterated Function Systems; Fractal Encoding of Grayscale Images; Speeding Up Fractal Encoding. Part 2 Wavelet Image Compression: Simple Wavelets; Daubechies Wavelets; Wavelet Image Compression Techniques; Comparison of Fractal and Wavelet Image Compression. Appendix A - Using the Accompanying Software. Appendix B - Utility Windows Library (UWL). Appendix C - Organization of the Accompanying Software Source Code.

276 citations


Patent
29 Jan 1999
TL;DR: MemoryF/X as discussed by the authors is an integrated memory controller (IMC) including data compression and decompression engines for improved performance, which includes multiple novel techniques such as: 1) parallel lossless compression/decompression; 2) selectable compression modes such as lossless, lossy or no compression; 3) priority compression mode; 4) data cache techniques; 5) variable compression block sizes; 6) compression reordering; and 7) unique address translation, attribute, and address caches.
Abstract: An integrated memory controller (IMC) including MemoryF/X Technology which includes data compression and decompression engines for improved performance. The memory controller (IMC) of the present invention preferably selectively uses a combination of lossless, lossy, and no compression modes. Data transfers to and from the integrated memory controller of the present invention can thus be in a plurality of formats, these being compressed or normal (non-compressed), compressed lossy or lossless, or compressed with a combination of lossy and lossless. The invention also indicates preferred methods for specific compression and decompression of particular data formats such as digital video, 3D textures and image data using a combination of novel lossy and lossless compression algorithms in block or span addressable formats. To improve latency and reduce performance degradations normally associated with compression and decompression techniques, the MemoryF/X Technology encompasses multiple novel techniques such as: 1) parallel lossless compression/decompression; 2) selectable compression modes such as lossless, lossy or no compression; 3) priority compression mode; 4) data cache techniques; 5) variable compression block sizes; 6) compression reordering; and 7) unique address translation, attribute, and address caches. The parallel compression and decompression algorithm allows high-speed parallel compression and high-speed parallel decompression operation. The IMC also preferably uses a special memory allocation and directory technique for reduction of table size and low latency operation. The integrated data compression and decompression capabilities of the IMC remove system bottlenecks and increase performance. This allows lower cost systems due to smaller data storage, reduced bandwidth requirements, reduced power and noise.

127 citations


Journal ArticleDOI
01 Sep 1999
TL;DR: A new wavelet-based embedded compression technique that efficiently exploits the intraband dependencies and uses a quadtree-based approach to encode the significance maps is proposed, which produces a losslessly compressed embedded data stream, supports quality scalability and permits region-of-interest coding.
Abstract: Perfect reconstruction, quality scalability and region-of-interest coding are basic features needed for the image compression schemes used in telemedicine applications. This paper proposes a new wavelet-based embedded compression technique that efficiently exploits the intraband dependencies and uses a quadtree-based approach to encode the significance maps. The algorithm produces a losslessly compressed embedded data stream, supports quality scalability and permits region-of-interest coding. Moreover, experimental results obtained on various images show that the proposed algorithm provides competitive lossless/lossy compression results. The proposed technique is well-suited for telemedicine applications that require fast interactive handling of large image sets over networks with limited and/or variable bandwidth.
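As a rough illustration of quadtree-coded significance maps (a generic sketch under simple assumptions — square, power-of-two blocks — not the paper's exact coder), a block emits 0 if none of its coefficients reaches the current threshold, and otherwise emits 1 and recurses into its quadrants:

```python
import numpy as np

# Generic quadtree significance-map sketch (not the paper's exact coder):
# a block emits 0 if it holds no coefficient with magnitude >= threshold,
# otherwise it emits 1 and recurses into its four quadrants.
# Assumes a square block whose side is a power of two.
def quadtree_significance(coeffs, threshold, bits=None):
    if bits is None:
        bits = []
    significant = bool(np.any(np.abs(coeffs) >= threshold))
    bits.append(int(significant))
    if significant and coeffs.size > 1:
        h, w = coeffs.shape
        for block in (coeffs[:h // 2, :w // 2], coeffs[:h // 2, w // 2:],
                      coeffs[h // 2:, :w // 2], coeffs[h // 2:, w // 2:]):
            quadtree_significance(block, threshold, bits)
    return bits

demo = np.zeros((8, 8)); demo[1, 6] = 9.0   # one significant coefficient
print(quadtree_significance(demo, threshold=8))
```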

121 citations


Journal ArticleDOI
TL;DR: The method consists of a space spectral varying prediction followed by context-based classification and arithmetic coding of the outcome residuals and exhibits impressive results, thanks to the skill of predictors in fitting multispectral data patterns, regardless of differences in sensor responses.
Abstract: This paper describes an original application of fuzzy logic to the reversible compression of multispectral data. The method consists of a space spectral varying prediction followed by context-based classification and arithmetic coding of the outcome residuals. Prediction of a pixel to be encoded is obtained from the fuzzy-switching of a set of linear regression predictors. Pixels both on the current band and on previously encoded bands may be used to define a causal neighborhood. The coefficients of each predictor are calculated so as to minimize the mean-squared error for those pixels whose intensity-level patterns, lying on the causal neighborhood, belong in a fuzzy sense to a predefined cluster. The size and shape of the causal neighborhood, as well as the number of predictors to be switched, may be chosen by the user and determine the tradeoff between coding performance and computational cost. The method exhibits impressive results, thanks to the skill of predictors in fitting multispectral data patterns, regardless of differences in sensor responses.
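The fuzzy switching of predictors can be pictured with the following toy sketch, in which each linear predictor is paired with a cluster prototype and the final prediction is a membership-weighted blend of the predictor outputs; the membership function and all numbers are illustrative, not the paper's trained parameters:

```python
import numpy as np

# Toy sketch of fuzzy-switched prediction (not the paper's trained system):
# each linear predictor is paired with a cluster prototype; the prediction is a
# membership-weighted blend of the individual predictor outputs.
def fuzzy_predict(neighborhood, predictors, prototypes, fuzziness=2.0):
    x = np.asarray(neighborhood, dtype=float)
    preds = np.array([w @ x for w in predictors])               # one output per predictor
    dists = np.array([np.linalg.norm(x - p) for p in prototypes]) + 1e-9
    memberships = dists ** (-2.0 / (fuzziness - 1.0))            # fuzzy-c-means style weights
    memberships /= memberships.sum()
    return float(memberships @ preds)

# Hypothetical causal neighborhood of 3 pixels and two predefined clusters.
neigh = [100.0, 104.0, 98.0]
predictors = [np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0])]
prototypes = [np.array([100.0, 100.0, 100.0]), np.array([10.0, 10.0, 10.0])]
print(fuzzy_predict(neigh, predictors, prototypes))
```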

116 citations


Proceedings ArticleDOI
24 Oct 1999
TL;DR: It is demonstrated how the RLS-based adaptation can produce a predictor with support ideally aligned along an arbitrarily-oriented edge, and therefore it is called "Edge Directed Prediction", which substantially outperforms former context-based prediction schemes for natural images.
Abstract: Natural images are populated with edges characterized by abrupt changes of local statistics. They put severe challenges on probability modeling of image sources. This paper proposes to employ recursive least square (RLS)-based predictive modeling to characterize local statistics for edges. It can be viewed as estimating the covariance matrix from a local causal neighborhood and selecting the MMSE-optimal predictor for the local covariance estimate. We demonstrate how the RLS-based adaptation can produce a predictor with support ideally aligned along an arbitrarily-oriented edge, and therefore we call it "Edge Directed Prediction" (EDP). When applied to lossless image compression, the EDP substantially outperforms former context-based prediction schemes for natural images. Based on our high-level understanding of EDP, we dramatically reduce its complexity with little sacrifice in performance, thus facilitating its application in practice.
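A hedged sketch of the underlying idea: estimate an MMSE-optimal linear predictor from a local causal training window by solving a least-squares problem. The paper uses an RLS recursion; this batch analogue, with an assumed 4-pixel support and window size, only illustrates the principle:

```python
import numpy as np

# Batch least-squares analogue of the RLS adaptation described above: an
# MMSE-style linear predictor is fitted to a local causal training window.
# The 4-pixel support and window size are illustrative assumptions.
def local_lsq_predictor(image, y, x, window=6):
    rows, targets = [], []
    for j in range(max(4, y - window), y + 1):
        for i in range(max(4, x - window), min(image.shape[1] - 2, x + window) + 1):
            if j == y and i >= x:        # causal: only already-decoded pixels
                break
            # support: west, north, north-west, north-east neighbours
            rows.append([image[j, i - 1], image[j - 1, i],
                         image[j - 1, i - 1], image[j - 1, i + 1]])
            targets.append(image[j, i])
    A, b = np.asarray(rows, float), np.asarray(targets, float)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    support = np.array([image[y, x - 1], image[y - 1, x],
                        image[y - 1, x - 1], image[y - 1, x + 1]], float)
    return float(support @ coeffs)

img = np.tile(np.arange(16, dtype=float), (16, 1))   # simple horizontal ramp image
print(local_lsq_predictor(img, y=8, x=8))            # should be close to 8
```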

114 citations


Patent
13 May 1999
TL;DR: In this paper, the authors proposed a coding method and apparatus that include splitting raw image data (200) into a plurality of channels (R, G1, G2, B) including color plane difference channels (R-G1, B-G2), and then compressing separately each of these channels using a two-dimensional discrete wavelet transform (210, 212, 214, 216), the compression utilizing quantization, whereby the recovery of the compressed channel data yields a perceptually lossless image.
Abstract: A coding method and apparatus that include splitting raw image data (200) into a plurality of channels (R, G1, G2, B) including color plane difference channels (R-G1, B-G2), and then compressing separately each of these channels using a two-dimensional discrete wavelet transform (210, 212, 214, 216), the compression utilizing quantization, whereby the recovery of the compressed channel data yields a perceptually lossless image. The method and apparatus operate on images directly in their Bayer pattern form. Quantization thresholds are defined for the quantizing, which may vary depending upon the channel and DWT sub-band being processed.
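A small sketch of the channel split described above, assuming an RGGB layout with R at (0,0), G1 at (0,1), G2 at (1,0) and B at (1,1); the layout and integer types are illustrative assumptions, and each resulting plane would then be fed to the 2D DWT:

```python
import numpy as np

# Hedged sketch of the channel split, assuming an RGGB Bayer layout with
# R at (0,0), G1 at (0,1), G2 at (1,0) and B at (1,1). The exact layout is an
# illustrative assumption, not fixed by the abstract.
def split_bayer(raw):
    r  = raw[0::2, 0::2].astype(np.int32)
    g1 = raw[0::2, 1::2].astype(np.int32)
    g2 = raw[1::2, 0::2].astype(np.int32)
    b  = raw[1::2, 1::2].astype(np.int32)
    # Colour-difference channels decorrelate R and B from the green planes.
    return {"G1": g1, "G2": g2, "R-G1": r - g1, "B-G2": b - g2}

raw = np.random.randint(0, 1024, size=(8, 8), dtype=np.uint16)   # toy 10-bit mosaic
channels = split_bayer(raw)
print({name: plane.shape for name, plane in channels.items()})
```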

106 citations


Proceedings ArticleDOI
15 Mar 1999
TL;DR: This paper investigates the use of energy histograms of the low-frequency DCT coefficients as features for the retrieval of DCT compressed images and proposes a feature set that is able to identify similarities across changes of image representation due to several lossless DCT transformations.
Abstract: With the increasing popularity of the use of compressed images, an intuitive approach for lowering computational complexity towards a practically efficient image retrieval system is to propose a scheme that is able to perform retrieval computation directly in the compressed domain. In this paper, we investigate the use of energy histograms of the low-frequency DCT coefficients as features for the retrieval of DCT compressed images. We propose a feature set that is able to identify similarities across changes of image representation due to several lossless DCT transformations. We then use the features to construct an image retrieval system based on the real-time image retrieval model. We observe that the proposed features are sufficient for performing high-level retrieval on medium-sized image databases. By introducing transpositional symmetry, the features can be brought to accommodate several lossless DCT transformations such as horizontal and vertical mirroring, rotating, transposing, and transversing.
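The feature extraction can be sketched as follows; the choice of which low-frequency coefficients to keep, the binning, and the assumed dynamic range are illustrative rather than the paper's exact definition, and scipy is assumed for the block DCT:

```python
import numpy as np
from scipy.fft import dctn

# Sketch of an energy-histogram feature over low-frequency block-DCT
# coefficients. The 3x3 low-frequency corner, bin count and energy range are
# illustrative choices, not the paper's exact feature definition.
def dct_energy_histogram(image, low_freq=3, bins=16, max_energy=4e6):
    h, w = image.shape
    energies = []
    for y in range(0, h - 7, 8):
        for x in range(0, w - 7, 8):
            block = dctn(image[y:y + 8, x:x + 8].astype(float), norm="ortho")
            energies.append(np.sum(block[:low_freq, :low_freq] ** 2))
    hist, _ = np.histogram(energies, bins=bins, range=(0.0, max_energy))
    return hist / max(len(energies), 1)      # normalised so it is size-invariant

img = np.random.randint(0, 256, size=(64, 64))   # toy 8-bit image
print(dct_energy_histogram(img))
```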

103 citations


BookDOI
01 Apr 1999
TL;DR: Part 1 System applications: multimedia systems overview; video compression; audio compression; system synchronization approaches; digital versatile disk; VLSI signal processing for very high speed digital subscriber loops (VDSL); cable modems; wireless communication systems.
Abstract: Part 1 System applications: multimedia systems overview; video compression; audio compression; system synchronization approaches; digital versatile disk; VLSI signal processing for very high speed digital subscriber loops (VDSL); cable modems; wireless communication systems. Part 2 Programmable and custom architectures and algorithms: programmable DSPs; RISC, video and media DSPs; wireless DSPs; motion estimation system design; wavelet VLSI architectures; DCT architectures; lossless coders; Viterbi decoders - algorithms and high performance architectures; watermarking for multimedia; systolic RLS adaptive filtering; STAR-RLS filtering. Part 3 Advanced arithmetic architectures and design methodologies: division and square root; finite field arithmetic; cordic algorithms and architectures for fast and efficient vector-rotation implementation; advanced systolic design; low power design; power estimation approaches; system exploration for custom low power data storage and transfer; hardware description and synthesis of DSP systems.

98 citations


Proceedings ArticleDOI
29 Mar 1999
TL;DR: The result is a theoretical validation and quantification of the earlier experimental observation that BWT-based lossless source codes give performance better than that of Ziv-Lempel-style codes and almost as good as that of prediction by partial matching (PPM) algorithms.
Abstract: Here we consider a theoretical evaluation of data compression algorithms based on the Burrows-Wheeler transform (BWT). The main contributions include a variety of very simple new techniques for BWT-based universal lossless source coding on finite-memory sources and a set of new rate of convergence results for BWT-based source codes. The result is a theoretical validation and quantification of the earlier experimental observation that BWT-based lossless source codes give performance better than that of Ziv-Lempel-style codes and almost as good as that of prediction by partial matching (PPM) algorithms.
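For concreteness, a minimal Burrows-Wheeler transform sketch via sorted rotations; practical coders build the transform with suffix arrays and follow it with move-to-front and entropy coding stages:

```python
# Minimal Burrows-Wheeler transform sketch via sorted rotations. Real coders use
# suffix arrays, and the BWT output then feeds a move-to-front/entropy stage.
def bwt(text, sentinel="\0"):
    assert sentinel not in text
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

def inverse_bwt(last_column, sentinel="\0"):
    # O(n^2) textbook inversion: repeatedly prepend the last column and re-sort.
    table = [""] * len(last_column)
    for _ in range(len(last_column)):
        table = sorted(last_column[i] + table[i] for i in range(len(last_column)))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)

original = "banana"
transformed = bwt(original)
assert inverse_bwt(transformed) == original
print(repr(transformed))
```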

97 citations


Patent
01 Feb 1999
TL;DR: An integrated memory controller (IMC) which includes data compression and decompression engines for improved performance is presented in this article. But it is limited to the use of a single memory controller.
Abstract: An integrated memory controller (IMC) which includes data compression and decompression engines for improved performance. The memory controller (IMC) of the present invention preferably sits on the main CPU bus or a high speed system peripheral bus such as the PCI bus and couples to system memory. The IMC preferably uses a lossless data compression and decompression scheme. Data transfers to and from the integrated memory controller of the present invention can thus be in either of two formats, these being compressed or normal (non-compressed). The IMC also preferably includes microcode for specific decompression of particular data formats such as digital video and digital audio. Compressed data from system I/O peripherals such as the hard drive, floppy drive, or local area network (LAN) are decompressed in the IMC and stored into system memory or saved in the system memory in compressed format. Thus, data can be saved in either a normal or compressed format, retrieved from the system memory for CPU usage in a normal or compressed format, or transmitted and stored on a medium in a normal or compressed format. Internal memory mapping allows for format definition spaces which define the format of the data and the data type to be read or written. Software overrides may be placed in applications software in systems that desire to control data decompression at the software application level. The integrated data compression and decompression capabilities of the IMC remove system bottlenecks and increase performance. This allows lower cost systems due to smaller data storage requirements and reduced bandwidth requirements. This also increases system bandwidth and hence increases system performance. Thus the IMC of the present invention is a significant advance over the operation of current memory controllers.

Patent
12 Oct 1999
TL;DR: In this article, a lossless bandwidth compression method for use in a distributed processor system for communicating graphical text data from a remote application server to a user workstation over a low bandwidth transport mechanism enables the workstation display to support the illusion that the application program is running locally rather than at the remote application servers.
Abstract: A lossless bandwidth compression method for use in a distributed processor system for communicating graphical text data from a remote application server to a user workstation over a low bandwidth transport mechanism enables the workstation display to support the illusion that the application program is running locally rather than at the remote application server. At the application server, the graphical text data is represented by a string of glyphs, each glyph being a bit mask representing the foreground/background state of the graphical text data pixels. Each unique glyph is encoded by assigning a unique identification code (IDC). Each IDC is compared with the previous IDCs in the string and, if a match is found, the IDC is transmitted to the workstation. If a match with a prior IDC is not found, the IDC and the corresponding glyph pattern are transmitted to the workstation. At the workstation, the IDCs are queued in the order received while the glyph patterns are cached using the corresponding IDCs as addresses. The string of glyphs is reconstructed by using the queued IDCs in their natural order for accessing the cached glyph patterns as required to reproduce the original string of glyphs.
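A toy sketch of the caching idea (the message format here is hypothetical, not the patent's wire format): the encoder sends an identification code together with the glyph pattern the first time a glyph appears and the code alone thereafter, while the decoder caches patterns by code:

```python
# Toy sketch of the glyph-caching idea (the message format is hypothetical):
# the encoder sends ("new", id, pattern) for a first-seen glyph bit mask and
# ("ref", id) for repeats; the decoder caches patterns keyed by id.
def encode_glyph_string(glyphs):
    ids, messages = {}, []
    for pattern in glyphs:
        if pattern in ids:
            messages.append(("ref", ids[pattern]))
        else:
            idc = len(ids)
            ids[pattern] = idc
            messages.append(("new", idc, pattern))
    return messages

def decode_glyph_string(messages):
    cache, out = {}, []
    for message in messages:
        if message[0] == "new":
            _, idc, pattern = message
            cache[idc] = pattern
        else:
            idc = message[1]
        out.append(cache[idc])
    return out

glyphs = ["0110", "1001", "0110", "0110", "1111"]   # tiny stand-in bit masks
stream = encode_glyph_string(glyphs)
assert decode_glyph_string(stream) == glyphs
print(stream)
```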

Proceedings ArticleDOI
F.F. Rodler1
05 Oct 1999
TL;DR: Experimental results on the CT dataset of the Visible Human have shown that the proposed wavelet-based method for compressing volumetric data with little loss in quality provides very high compression rates with fairly fast random access.
Abstract: In this paper we propose a wavelet-based method for compressing volumetric data with little loss in quality while allowing fast random access to individual voxels within the volume. Such a method is important since storing and visualising very large volumes impose heavy demands on internal memory and external storage facilities, a problem that is not likely to become less severe in the future and that otherwise restricts such data to users with huge and expensive computers. Experimental results on the CT dataset of the Visible Human have shown that our method provides very high compression rates with fairly fast random access.

Patent
25 Mar 1999
TL;DR: An image compression system for implementing a zerotree wavelet compression algorithm is described in this paper.The compression system uses a wavelet based coding system which takes advantage of the correlation between insignificant coefficients at different scales.
Abstract: An image compression system for implementing a zerotree wavelet compression algorithm. The compression system uses a wavelet based coding system which takes advantage of the correlation between insignificant coefficients at different scales.

Journal ArticleDOI
01 Jun 1999
TL;DR: In this two-part paper, the major building blocks of image coding schemes are overviewed and coding results are presented which compare state-of-the-art techniques for lossy and lossless compression.
Abstract: Digital images have become an important source of information in the modern world of communication systems. In their raw form, digital images require a tremendous amount of memory. Many research efforts have been devoted to the problem of image compression in the last two decades. Two different compression categories must be distinguished: lossless and lossy. Lossless compression is achieved if no distortion is introduced in the coded image. Applications requiring this type of compression include medical imaging and satellite photography. For applications such as video telephony or multimedia applications, some loss of information is usually tolerated in exchange for a high compression ratio. In this two-part paper, the major building blocks of image coding schemes are overviewed. Part I covers still image coding, and Part II covers motion picture sequences. In this first part, still image coding schemes have been classified into predictive, block transform, and multiresolution approaches. Predictive methods are suited to lossless and low-compression applications. Transform-based coding schemes achieve higher compression ratios for lossy compression but suffer from blocking artifacts at high compression ratios. Multiresolution approaches are suited for lossy as well as for lossless compression. For lossy coding at high compression ratios, the typical artifact visible in the reconstructed images is the ringing effect. New applications in a multimedia environment drove the need for new functionalities of the image coding schemes. For that purpose, second-generation coding techniques segment the image into semantically meaningful parts. Therefore, parts of these methods have been adapted to work for arbitrarily shaped regions. In order to add another functionality, such as progressive transmission of the information, specific quantization algorithms must be defined. A final step in the compression scheme is achieved by the codeword assignment. Finally, coding results are presented which compare state-of-the-art techniques for lossy and lossless compression. The different artifacts of each technique are highlighted and discussed. Also, the possibility of progressive transmission is illustrated.

Patent
Edward L. Schwartz1, Ahmad Zandi1
20 Aug 1999
TL;DR: A reversible Discrete Cosine Transform (DCT) is described in this paper, where the reversible DCT may be part of a compressor in a system and the system may include a decompressor with a reversible inverse DCT for lossless decompression or a legacy decompressor for lossy decompression.
Abstract: A reversible Discrete Cosine Transform (DCT) is described. The reversible DCT may be part of a compressor in a system. The system may include a decompressor with a reversible inverse DCT for lossless decompression or a legacy decompressor with an inverse DCT for lossy decompression.

Journal ArticleDOI
01 Sep 1999
TL;DR: This work investigates a near-lossless compression technique that gives quantitative bounds on the errors introduced during compression and finds that such a technique gives significantly higher compression ratios than lossless compression.
Abstract: We study compression techniques for electroencephalograph (EEG) signals. A variety of lossless compression techniques, including compress, gzip, bzip, shorten, and several predictive coding methods, are investigated and compared. The methods range from simple dictionary based approaches to more sophisticated context modeling techniques. It is seen that compression ratios obtained by lossless compression are limited even with sophisticated context-based bias cancellation and activity-based conditional coding. Though lossy compression can yield significantly higher compression ratios while potentially preserving diagnostic accuracy, it is not usually employed due to legal concerns. Hence, we investigate a near-lossless compression technique that gives quantitative bounds on the errors introduced during compression. It is observed that such a technique gives significantly higher compression ratios (up to 3-bit/sample saving with less than 1% error). Compression results are reported for EEGs recorded under various clinical conditions.
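A generic near-lossless DPCM sketch (not the paper's coder) showing how the quantitative error bound arises: quantizing the prediction residual with step 2·delta + 1 guarantees that no reconstructed sample deviates from the original by more than delta:

```python
import numpy as np

# Generic near-lossless DPCM sketch (not the paper's exact coder): quantising the
# prediction residual with step 2*delta + 1 guarantees that the reconstruction
# error never exceeds delta for integer-valued samples.
def near_lossless_dpcm(signal, delta=2):
    step = 2 * delta + 1
    residual_indices, reconstruction = [], []
    previous = 0                                 # trivial "previous sample" predictor
    for sample in signal:
        residual = int(sample) - previous
        q = int(np.round(residual / step))       # this index is what gets entropy coded
        residual_indices.append(q)
        previous = previous + q * step           # decoder-side reconstruction
        reconstruction.append(previous)
    return residual_indices, np.array(reconstruction)

eeg = np.random.randint(-200, 200, size=32)      # toy integer EEG samples
indices, recon = near_lossless_dpcm(eeg, delta=2)
assert np.max(np.abs(recon - eeg)) <= 2
print(indices[:10])
```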

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A new lossless compression scheme for the connectivity of tetrahedral meshes is introduced which can handle all tetrahedral meshes in three-dimensional Euclidean space, even with non-manifold border, together with solutions for the compression of vertex coordinates and additional attributes that might be attached to the mesh.
Abstract: In recent years, substantial progress has been achieved in the area of volume visualization on irregular grids, which is mainly based on tetrahedral meshes. Even moderately fine tetrahedral meshes consume several megabytes of storage. For archival and transmission, compression algorithms are essential. In scientific applications lossless compression schemes are of primary interest. This paper introduces a new lossless compression scheme for the connectivity of tetrahedral meshes. Our technique can handle all tetrahedral meshes in three-dimensional Euclidean space, even with non-manifold border. We present compression and decompression algorithms which, for reasonable meshes, consume linear time in the number of tetrahedra. The connectivity is compressed to less than 2.4 bits per tetrahedron for all measured meshes. Thus a tetrahedral mesh can almost be reduced to the vertex coordinates, which consume in a common representation about one quarter of the total storage space. We complete our work with solutions for the compression of vertex coordinates and additional attributes which might be attached to the mesh.

Journal ArticleDOI
TL;DR: In a method and a device for transmission of S+P transform coded digitized images, a mask is calculated by means of which a region of interest (ROI) can be transmitted losslessly, so that the ROI can be transmitted and received without loss while still maintaining a good compression ratio for the image as a whole.

Journal ArticleDOI
TL;DR: A wavelet-based compression scheme that is able to operate in the lossless mode is proposed; it implements a new way of coding the wavelet coefficients that is more effective than the classical zerotree coding.
Abstract: The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding of the wavelet coefficients that is more effective than the classical zerotree coding. The experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder, combined with the integer wavelet transform, by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communications systems (PACS).

Journal ArticleDOI
TL;DR: The quantization module implements a new technique for the coding of the wavelet coefficients that is more effective than the classical zerotree coding, and produces a losslessly compressed embedded data stream that supports progressive refinement of the decompressed images.
Abstract: Lossless image compression with progressive transmission capabilities plays a key role in measurement applications, requiring quantitative analysis and involving large sets of images. This work proposes a wavelet‐based compression scheme that is able to operate in the lossless mode. The quantization module implements a new technique for the coding of the wavelet coefficients that is more effective than the classical zerotree coding. The experimental results obtained on a set of multimodal medical images show that the proposed algorithm outperforms the embedded zerotree coder combined with the integer wavelet transform by 0.28 bpp, the set‐partitioning coder by 0.1 bpp, and the lossless JPEG coder by 0.6 bpp. The scheme produces a losslessly compressed embedded data stream; hence, it supports progressive refinement of the decompressed images. Therefore, it is a good candidate for telematics applications requiring fast user interaction with the image data, retaining the option of lossless transmission and archiving of the images. © 1999 John Wiley & Sons, Inc. Int J Imaging Syst Technol 10: 76–85, 1999

Patent
10 Sep 1999
TL;DR: A block based hybrid compression method where the input page is classified as SOLID, TEXT, SATURATED TEXT or IMAGE type, and the compression method most appropriate for each class is chosen on a block by block basis is presented in this article.
Abstract: A block-based hybrid compression method where the input page is classified as SOLID, TEXT, SATURATED TEXT or IMAGE type, and the compression method most appropriate for each class is chosen on a block-by-block basis. Blocks classified as IMAGE may be compressed using Parallel Differential Pulse Code Modulation. This method allows the decompression algorithm to decode multiple pixels in parallel, thus making real-time decompression significantly easier to implement. The methods shown will execute very efficiently on a Texas Instruments TMS320C82 multiprocessing Digital Signal Processor.

Patent
Ahmad Zandi1, Edward L. Schwartz1
30 Jul 1999
TL;DR: A number of reversible wavelet transforms have been identified which allow for exact reconstruction in integer arithmetic; different transforms vary in how rounding is performed. The transform presented here is linear except for rounding with non-linear operations, which is what makes a reversible implementation possible.
Abstract: Recently, a number of reversible wavelet transforms have been identified which allow for exact reconstruction in integer arithmetic. Different transforms vary in how rounding is performed. The present invention provides a transform which is linear except for the rounding with non-linear operations in order to create a reversible implementation. The present invention also provides transforms which are decomposed into all finite impulse response parts.
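One well-known example of such a transform is the S transform (integer Haar), shown below; it is linear except for the floor in the averaging step, and that rounding is exactly what permits an integer-to-integer inverse. This is a standard illustration, not the specific transform claimed in the patent:

```python
# The S transform (integer Haar), a standard reversible integer-to-integer
# transform: linear except for the floor in the average, and exactly invertible.
def s_transform_forward(a, b):
    s = (a + b) // 2          # rounded (floored) average
    d = a - b                 # difference; together with s it determines a and b exactly
    return s, d

def s_transform_inverse(s, d):
    a = s + (d + 1) // 2      # undo the floor using the parity carried by d
    b = a - d
    return a, b

# Exhaustive check over a small integer range.
for a in range(-4, 5):
    for b in range(-4, 5):
        assert s_transform_inverse(*s_transform_forward(a, b)) == (a, b)
print("exact integer reconstruction verified")
```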

Proceedings ArticleDOI
24 Oct 1999
TL;DR: LOCO-I/JPEG-LS attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding, within a few percentage points of the best available compression ratios.
Abstract: LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. The algorithm was conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit based on Golomb codes. The JPEG-LS standard evolved after successive refinements of the core algorithm, and a description of its design principles and main algorithmic components is presented in this paper. LOCO-I/JPEG-LS attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level.
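Two documented building blocks of LOCO-I/JPEG-LS are its fixed median edge detector (MED) predictor and Golomb-style coding of mapped residuals; the sketch below shows both in isolation and omits the standard's context modelling, bias cancellation and run mode:

```python
# The median edge detector (MED) fixed predictor of LOCO-I/JPEG-LS, plus a
# generic Golomb-Rice encoder for a mapped residual. The full standard adds
# context modelling, bias cancellation and a run mode, which are omitted here.
def med_predict(a, b, c):
    """a = west, b = north, c = north-west neighbour of the current pixel."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def map_residual(e):
    """Fold signed residuals onto non-negative integers: 0,-1,1,-2,2,... -> 0,1,2,3,4,..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice(value, k):
    """Unary-coded quotient followed by k remainder bits (k >= 1 assumed)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k)

a, b, c, actual = 100, 90, 95, 97
residual = actual - med_predict(a, b, c)
print(med_predict(a, b, c), golomb_rice(map_residual(residual), k=2))
```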

Proceedings ArticleDOI
18 Oct 1999
TL;DR: An application of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm to volumetric medical images, using a 3D wavelet decomposition and a 3D spatial dependence tree.
Abstract: This paper focuses on lossless medical image compression methods for 3D volumetric medical images that operate on three-dimensional (3D) reversible integer wavelet transforms. We offer an application of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm to volumetric medical images, using a 3D wavelet decomposition and a 3D spatial dependence tree. The wavelet decomposition is accomplished with integer wavelet filters implemented with the lifting method, where careful scaling and truncations keep the integer precision small and the transform unitary. We have tested our encoder on volumetric medical images using different integer filters and different coding unit sizes. The coding unit sizes of 16 and 8 slices save considerable memory and coding delay compared with the full-sequence coding units used in previous works. Results show that, even with these small coding units, our algorithm with certain filters performs as well as, and sometimes better than, previous coding systems using 3D integer wavelet transforms on volumetric medical images in lossless coding.

Proceedings ArticleDOI
05 Oct 1999
TL;DR: This paper presents an efficient wavelet-based compression method providing fast visualization of large volume data, which is divided into individual blocks with regular resolution, resulting in a fairly good compression ratio and fast reconstruction.
Abstract: Since volume rendering needs a lot of computation time and memory space, many techniques have been suggested for accelerating rendering or reducing data size using compression. However, there has been little progress in research that accomplishes both goals. This paper presents an efficient wavelet-based compression method providing fast visualization of large volume data, which is divided into individual blocks with regular resolution. Each wavelet-transformed block is run-length encoded in accordance with the reconstruction order, resulting in a fairly good compression ratio and fast reconstruction. A cache data structure is designed to speed up the reconstruction, and an adaptive compression scheme is proposed to produce a higher-quality rendered image. The compression method proposed here is combined with several accelerated volume rendering algorithms, such as brute-force volume rendering with a min-max table and Lacroute's shear-warp factorization. Experimental results have shown the space requirement to be about 1/27 and the rendering time to be about 3 seconds for 512×512×512 data sets while preserving the quality of an image much like using the original data.

Patent
03 May 1999
TL;DR: In this paper, a method and apparatus for high-performance lossless data compression implemented in hardware for improving network communications is presented, where instructions for a compression task are assigned to the compression module by a microprocessor writing a control block to a queue in stored local memory.
Abstract: A method and apparatus is presented providing high-performance lossless data compression implemented in hardware for improving network communications. A compression module useful in a switching platform is also presented capable of compressing data stored in buffer memory. Instructions for a compression task are assigned to the compression module by a microprocessor writing a control block to a queue in stored local memory. The control block informs the compression module of the size and location of the unprocessed data, as well as a location in the buffer memory for storing the processed data and the maximum allowed size for the compressed data. Using this technique, the microprocessor can limit the compression of data to those data streams allowing compression, to those segments that are susceptible to compression, and to those segments that are large enough to show a transmission speed improvement via compression.
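A hypothetical sketch of the kind of control block the abstract describes; the field names and example values are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical sketch of a compression-task control block; field names and
# example values are illustrative, not taken from the patent.
@dataclass
class CompressionControlBlock:
    source_address: int          # location of the unprocessed data in buffer memory
    source_length: int           # size of the unprocessed data
    destination_address: int     # where the compressed output should be written
    max_compressed_length: int   # skip/store raw if the output would exceed this

# The microprocessor would append such blocks to a queue in local memory;
# the hardware compression module polls the queue and processes them in order.
queue = [CompressionControlBlock(0x1000, 4096, 0x8000, 2048)]
print(queue[0])
```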

Journal ArticleDOI
TL;DR: A new lossy variant of the fixed-database Lempel-Ziv coding algorithm for encoding at a fixed distortion level is proposed, and its asymptotic optimality and universality for memoryless sources is demonstrated.
Abstract: A new lossy variant of the fixed-database Lempel-Ziv coding algorithm for encoding at a fixed distortion level is proposed, and its asymptotic optimality and universality for memoryless sources (with respect to bounded single-letter distortion measures) is demonstrated: as the database size m increases to infinity, the expected compression ratio approaches the rate-distortion function. The complexity and redundancy characteristics of the algorithm are comparable to those of its lossless counterpart. A heuristic argument suggests that the redundancy is of order (log log m)/log m, and this is also confirmed experimentally; simulation results are presented that agree well with this rate. Also, the complexity of the algorithm is seen to be comparable to that of the corresponding lossless scheme. We show that there is a tradeoff between compression performance and encoding complexity, and we discuss how the relevant parameters can be chosen to balance this tradeoff in practice. We also discuss the performance of the algorithm when applied to sources with memory, and extensions to the cases of unbounded distortion measures and infinite reproduction alphabets.

Proceedings ArticleDOI
07 Jun 1999
TL;DR: A new joint compression and encryption method is presented that uses high-order conditional entropy coding of wavelet coefficients to facilitate encryption, and state-of-the-art compression and significantly enhanced security are achieved, with no extra computational complexity.
Abstract: As the Internet and multimedia systems grow in size and popularity, compression and encryption of image and video data are becoming increasingly important. However, independent compression and encryption is too slow for many multimedia applications. This paper presents a new joint compression and encryption method for images and videos that uses high-order conditional entropy coding of wavelet coefficients to facilitate encryption. As a result, state-of-the-art compression and significantly enhanced security are achieved, with no extra computational complexity.

Journal ArticleDOI
TL;DR: This paper investigates the application of a radial basis function network (RBFN) to hierarchical image coding for progressive transmission and develops an efficient method of computing the network parameters to reduce computational and memory requirements.
Abstract: This paper investigates the application of a radial basis function network (RBFN) to hierarchical image coding for progressive transmission. The RBFN is used to generate an interpolated image from a subsampled version of the original. An efficient method of computing the network parameters is developed to reduce computational and memory requirements. The coding method does not suffer from the blocking effect and can produce the coarsest image quickly. Quantization error effects introduced at one stage are taken into account when decoding images at the following stages, thus allowing lossless progressive transmission.
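A toy Gaussian-RBF interpolation sketch of the reconstruction step: a subsampled image is interpolated back to the finer grid, and the residual between the interpolation and the original is what later stages would refine. The kernel, its width and the direct linear solve are stand-ins for the paper's RBFN and its efficient parameter computation:

```python
import numpy as np

# Toy Gaussian-RBF interpolation of a subsampled image back to the fine grid.
# The kernel choice, width and direct solve are illustrative stand-ins for the
# paper's RBFN and its efficient parameter-computation method.
def rbf_interpolate(coarse, factor=2, width=1.5):
    cy, cx = coarse.shape
    centres = np.array([(y, x) for y in range(cy) for x in range(cx)], float) * factor
    values = coarse.astype(float).ravel()

    def kernel(p, q):
        d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2 * factor ** 2))

    # Solve for RBF weights (small ridge term keeps the system well conditioned).
    weights = np.linalg.solve(kernel(centres, centres) + 1e-8 * np.eye(len(centres)), values)
    targets = np.array([(y, x) for y in range(cy * factor) for x in range(cx * factor)], float)
    return (kernel(targets, centres) @ weights).reshape(cy * factor, cx * factor)

fine = np.add.outer(np.arange(8.0), np.arange(8.0))      # smooth toy image
coarse = fine[::2, ::2]                                   # subsampled version
approx = rbf_interpolate(coarse)
print(np.abs(approx - fine).mean())                       # residual later stages would refine
```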