
Showing papers on "Lossless compression published in 2001"


Journal ArticleDOI
TL;DR: In this article, it was shown that the compression ratio of two Burrows-Wheeler-based (block-sorting) compression algorithms can be bounded in terms of the kth order empirical entropy of the input string for any k ≥ 0.
Abstract: The Burrows-Wheeler Transform (also known as Block-Sorting) is at the base of compression algorithms that are the state of the art in lossless data compression. In this paper, we analyze two algorithms that use this technique. The first one is the original algorithm described by Burrows and Wheeler, which, despite its simplicity, outperforms the Gzip compressor. The second one uses an additional run-length encoding step to improve compression. We prove that the compression ratio of both algorithms can be bounded in terms of the kth order empirical entropy of the input string for any k ≥ 0. We make no assumptions on the input, and we obtain bounds which hold in the worst case, that is, for every possible input string. All previous results for Block-Sorting algorithms were concerned with the average compression ratio and have been established assuming that the input comes from a finite-order Markov source.

387 citations
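
The pipeline analyzed above is easy to picture in miniature: a Burrows-Wheeler transform followed by a locality-exploiting recoding such as move-to-front, with run-length and order-0 entropy coding at the end. The sketch below shows only the first two stages, using a naive rotation sort and an assumed unused sentinel byte; it illustrates the structure, not the authors' implementation.

# Minimal block-sorting sketch: BWT followed by move-to-front (MTF).
# A real coder would use a suffix array and finish with run-length and
# order-0 entropy coding, both omitted here.

def bwt(s: bytes, sentinel: bytes = b"\x00") -> bytes:
    assert sentinel not in s              # assumed: sentinel byte unused in input
    s = s + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

def mtf(data: bytes) -> list[int]:
    table = list(range(256))              # MTF turns local repetition into
    out = []                              # runs of small integers
    for b in data:
        i = table.index(b)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

if __name__ == "__main__":
    text = b"banana_bandana"
    symbols = mtf(bwt(text))
    print(symbols)                        # many zeros and small values, cheap to entropy-code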


Journal ArticleDOI
TL;DR: The binDCT can be tuned to cover the gap between the Walsh-Hadamard transform and the DCT; it allows a 16-bit implementation, enables lossless compression, and maintains satisfactory compatibility with the floating-point DCT.
Abstract: We present the design, implementation, and application of several families of fast multiplierless approximations of the discrete cosine transform (DCT) with the lifting scheme called the binDCT. These binDCT families are derived from Chen's (1977) and Loeffler's (1989) plane rotation-based factorizations of the DCT matrix, respectively, and the design approach can also be applied to a DCT of arbitrary size. Two design approaches are presented. In the first method, an optimization program is defined, and the multiplierless transform is obtained by approximating its solution with dyadic values. In the second method, a general lifting-based scaled DCT structure is obtained, and the analytical values of all lifting parameters are derived, enabling dyadic approximations with different accuracies. Therefore, the binDCT can be tuned to cover the gap between the Walsh-Hadamard transform and the DCT. The corresponding two-dimensional (2-D) binDCT allows a 16-bit implementation, enables lossless compression, and maintains satisfactory compatibility with the floating-point DCT. The performance of the binDCT in JPEG, H.263+, and lossless compression is also demonstrated.

342 citations
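
The lifting structure that makes such transforms multiplierless and invertible can be illustrated on a single butterfly: a plane rotation factored into three lifting steps whose coefficients are replaced by dyadic fractions implemented with shifts, so each step rounds yet the chain inverts exactly. The coefficients below are arbitrary dyadic values chosen for illustration, not the paper's.

# Integer lifting "rotation" with dyadic coefficients, sketching the kind
# of building block a lifting-based DCT approximation uses in place of a
# floating-point butterfly.  Coefficients here are illustrative only.

def lift_rotate(x: int, y: int, p_num=3, u_num=11, shift_p=3, shift_u=4):
    # three lifting (shear) steps approximating a plane rotation;
    # each step rounds, yet the whole chain stays exactly invertible
    x += (p_num * y) >> shift_p
    y -= (u_num * x) >> shift_u
    x += (p_num * y) >> shift_p
    return x, y

def lift_unrotate(x: int, y: int, p_num=3, u_num=11, shift_p=3, shift_u=4):
    # undo the lifting steps in reverse order
    x -= (p_num * y) >> shift_p
    y += (u_num * x) >> shift_u
    x -= (p_num * y) >> shift_p
    return x, y

if __name__ == "__main__":
    for a, b in [(7, -3), (120, 255), (-64, 31)]:
        assert lift_unrotate(*lift_rotate(a, b)) == (a, b)   # lossless round-trip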


Proceedings ArticleDOI
01 Aug 2001
TL;DR: This paper presents a novel, fully progressive encoding approach for lossless transmission of triangle meshes with a very fine granularity, using a new valence-driven decimating conquest, combined with patch tiling and an original strategic retriangulation to maintain the regularity of valence.
Abstract: Lossless transmission of 3D meshes is a very challenging and timely problem for many applications, ranging from collaborative design to engineering. Additionally, frequent delays in transmissions call for progressive transmission in order for the end user to receive useful successive refinements of the final mesh. In this paper, we present a novel, fully progressive encoding approach for lossless transmission of triangle meshes with a very fine granularity. A new valence-driven decimating conquest, combined with patch tiling and an original strategic retriangulation, is used to maintain the regularity of valence. We demonstrate that this technique leads to good mesh quality, near-optimal connectivity encoding, and therefore a good rate-distortion ratio throughout the transmission. We also improve upon previous lossless geometry encoding by decorrelating the normal and tangential components of the surface. For typical meshes, our method compresses connectivity down to less than 3.7 bits per vertex, 40% better on average than the best methods previously reported [5, 18]; we further reduce the usual geometry bit rates by 20% on average by exploiting the smoothness of meshes. Concretely, our technique can reduce an ASCII VRML 3D model down to 1.7% of its size for a 10-bit quantization (2.3% for a 12-bit quantization) while providing a very progressive reconstruction.

290 citations


Book ChapterDOI
25 Apr 2001
TL;DR: This paper introduces a general approach for high-capacity data embedding that is distortion-free (or lossless) in the sense that after the embedded information is extracted from the stego-image, one can revert to the exact copy of the original image before the embedding occurred.
Abstract: One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted by some small amount of noise due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small, it may not be acceptable for medical imagery (for legal reasons) or for military images inspected under unusual viewing conditions (after filtering or extreme zoom). In this paper, we introduce a general approach for high-capacity data embedding that is distortion-free (or lossless) in the sense that after the embedded information is extracted from the stego-image, we can revert to the exact copy of the original image before the embedding occurred. The new method can be used as a powerful tool to achieve a variety of non-trivial tasks, including distortion-free robust watermarking, distortion-free authentication using fragile watermarks, and steganalysis. The proposed concepts are also extended to lossy image formats, such as the JPEG format.

269 citations


Journal ArticleDOI
TL;DR: This analysis shows that the superiority of the LS-based adaptation is due to its edge-directed property, which enables the predictor to adapt reasonably well from smooth regions to edge areas.
Abstract: This paper sheds light on the least-square (LS)-based adaptive prediction schemes for lossless compression of natural images. Our analysis shows that the superiority of the LS-based adaptation is due to its edge-directed property, which enables the predictor to adapt reasonably well from smooth regions to edge areas. Recognizing that LS-based adaptation improves the prediction mainly around the edge areas, we propose a novel approach to reduce its computational complexity with negligible performance sacrifice. The lossless image coder built upon the new prediction scheme has achieved noticeably better performance than the state-of-the-art coder CALIC with moderately increased computational complexity.

259 citations
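
The mechanism behind the edge-directed behaviour is straightforward to sketch: for each pixel, refit the coefficients of a linear predictor over a small causal neighbourhood by least squares on a causal training window, then predict. The sketch below uses a hypothetical 4-neighbour support and window size, not the paper's coder; the paper's contribution includes restricting this costly refit mainly to edge regions.

import numpy as np

# Least-squares adaptive prediction sketch: refit a linear predictor over
# causal neighbours using a causal training window, then predict one pixel.
# Support and window size are illustrative, not the paper's.

def causal_neighbours(img, r, c):
    # west, north, north-west, north-east neighbours
    return np.array([img[r, c-1], img[r-1, c], img[r-1, c-1], img[r-1, c+1]],
                    dtype=float)

def ls_predict(img, r, c, win=6):
    rows, targets = [], []
    for i in range(max(1, r - win), r + 1):
        for j in range(1, img.shape[1] - 1):
            if i == r and j >= c:          # keep the training window strictly causal
                break
            rows.append(causal_neighbours(img, i, j))
            targets.append(img[i, j])
    A, t = np.array(rows), np.array(targets, dtype=float)
    coeff, *_ = np.linalg.lstsq(A, t, rcond=None)
    return float(causal_neighbours(img, r, c) @ coeff)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32)).astype(float)
    print(ls_predict(img, 10, 10))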


Journal ArticleDOI
TL;DR: A new valence‐driven conquest for arbitrary meshes that always guarantees smaller compression rates than the original method, resulting in the lowest compression ratios published so far, for both irregular and regular meshes, small or large.
Abstract: In this paper, we propose a valence-driven, single-resolution encoding technique for lossless compression of triangle mesh connectivity. Building upon a valence-based approach pioneered by Touma and Gotsman [22], we design a new valence-driven conquest for arbitrary meshes that always guarantees smaller compression rates than the original method. Furthermore, we provide a novel theoretical entropy study of our technique, hinting at the optimality of the valence-driven approach. Finally, we demonstrate the practical efficiency of this approach (in agreement with the theoretical prediction) on a series of test meshes, resulting in the lowest compression ratios published so far, for both irregular and regular meshes, small or large.

243 citations


Proceedings ArticleDOI
02 Apr 2001
TL;DR: Two new invertible watermarking methods for authentication of digital images in the JPEG format are presented, providing new information assurance tools for integrity protection of sensitive imagery, such as medical images or high-importance military images viewed under non-standard conditions when usual criteria for visibility do not apply.
Abstract: We present two new invertible watermarking methods for authentication of digital images in the JPEG format. While virtually all previous authentication watermarking schemes introduced some small amount of non-invertible distortion in the image, the new methods are invertible in the sense that, if the image is deemed authentic, the distortion due to authentication can be completely removed to obtain the original image data. The first technique is based on lossless compression of biased bit-streams derived from the quantized JPEG coefficients. The second technique modifies the quantization matrix to enable lossless embedding of one bit per DCT coefficient. Both techniques are fast and can be used for general distortion-free (invertible) data embedding. The new methods provide new information assurance tools for integrity protection of sensitive imagery, such as medical images or high-importance military images viewed under non-standard conditions when usual criteria for visibility do not apply.

207 citations
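
The first technique's core idea, compressing a biased carrier bit-stream losslessly so the freed space can hold a payload while the original bits remain exactly recoverable, can be sketched as follows. The carrier here is the LSB plane of a list of quantized coefficients stored one bit per byte for clarity, the framing is a hypothetical 4-byte length header, and the payload length is passed explicitly; none of this reproduces the paper's actual JPEG-domain scheme.

import zlib

# Invertible-embedding sketch: losslessly compress a biased LSB stream to
# make room for a payload; extraction restores the original LSBs exactly.
# Framing and capacity handling are simplified and hypothetical.

def embed(coeffs: list[int], payload: bytes) -> list[int]:
    lsbs = bytes(c & 1 for c in coeffs)            # one bit per byte, for clarity
    packed = zlib.compress(lsbs, 9)                # biased stream compresses well
    blob = len(packed).to_bytes(4, "big") + packed + payload
    bits = [(byte >> k) & 1 for byte in blob for k in range(8)]
    if len(bits) > len(coeffs):
        raise ValueError("payload too large for this carrier")
    bits += [0] * (len(coeffs) - len(bits))
    return [(c & ~1) | b for c, b in zip(coeffs, bits)]

def extract(stego: list[int], payload_len: int) -> tuple[list[int], bytes]:
    bits = [c & 1 for c in stego]
    blob = bytes(sum(bits[i + k] << k for k in range(8))
                 for i in range(0, len(bits) - 7, 8))
    n = int.from_bytes(blob[:4], "big")
    original_lsbs = zlib.decompress(blob[4:4 + n])
    payload = blob[4 + n:4 + n + payload_len]
    restored = [(c & ~1) | b for c, b in zip(stego, original_lsbs)]
    return restored, payload

if __name__ == "__main__":
    coeffs = [(i % 7) * 2 for i in range(4000)]    # biased carrier: all LSBs are 0
    stego = embed(coeffs, b"hidden payload")
    restored, msg = extract(stego, len(b"hidden payload"))
    assert restored == coeffs and msg == b"hidden payload"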


Patent
02 Feb 2001
TL;DR: In this article, a composite disk controller provides data storage and retrieval acceleration using multiple caches for data pipelining and increased throughput, and the disk controller with acceleration is embedded in the storage device.
Abstract: Data storage controllers and data storage devices employing lossless or lossy data compression and decompression to provide accelerated data storage and retrieval bandwidth. In one embodiment of the invention, a composite disk controller provides data storage and retrieval acceleration using multiple caches for data pipelining and increased throughput. In another embodiment of the invention, the disk controller with acceleration is embedded in the storage device and utilized for data storage and retrieval acceleration.

185 citations


Journal ArticleDOI
TL;DR: It is found that lossless audio coders have reached a limit in what can be achieved for lossless compression of audio, and a new lossless audio coder called AudioPak is described, which has low algorithmic complexity and performs as well as or better than most of the lossless audio coders that have been described in the literature.
Abstract: Lossless audio compression is likely to play an important part in music distribution over the Internet, DVD audio, digital audio archiving, and mixing. The article is a survey and a classification of the current state-of-the-art lossless audio compression algorithms. This study finds that lossless audio coders have reached a limit in what can be achieved for lossless compression of audio. It also describes a new lossless audio coder called AudioPak, which has low algorithmic complexity and performs as well as or better than most of the lossless audio coders that have been described in the literature.

181 citations
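
Coders in this family typically pair a cheap fixed (polynomial) predictor with Golomb-Rice coding of the residuals; the sketch below shows that generic combination with an illustrative predictor order and Rice parameter, not AudioPak's exact framing or parameter-selection rules.

# Generic lossless audio coding sketch: fixed polynomial prediction plus
# Rice coding of residuals.  Parameters are illustrative only.

def predict_order2(samples):
    # second-order fixed predictor: x_hat[n] = 2*x[n-1] - x[n-2]
    res = []
    for n, x in enumerate(samples):
        xhat = 0 if n < 2 else 2 * samples[n - 1] - samples[n - 2]
        res.append(x - xhat)
    return res

def rice_encode(value: int, k: int) -> str:
    u = 2 * value if value >= 0 else -2 * value - 1   # fold sign into magnitude
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")        # unary quotient, k-bit remainder

if __name__ == "__main__":
    pcm = [0, 3, 8, 15, 24, 34, 43, 50]
    residuals = predict_order2(pcm)                   # small values for smooth signals
    bitstream = "".join(rice_encode(e, k=2) for e in residuals)
    print(residuals, len(bitstream), "bits")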


Journal ArticleDOI
TL;DR: A new methodology is presented which performs both lossless compression and encryption of binary and gray-scale images, based on SCAN patterns generated by the SCAN methodology.

154 citations


Journal ArticleDOI
TL;DR: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling, and the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
Abstract: The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.

Patent
17 Apr 2001
TL;DR: In this article, the authors propose a lossless image streaming system for the transmission of images over a communication network, which eliminates the necessity to store a compressed version of the original image, by losslessly streaming ROI data using the original stored image.
Abstract: A lossless image streaming system for the transmission of images over a communication network. The system eliminates the necessity to store a compressed version of the original image, by losslessly streaming ROI data using the original stored image. The imaging system also avoids the computationally intensive task of compression of the full image. When a user wishes to interact with a remote image, the imaging client generates and sends a ROI request list to the imaging server. The request list can be ordered according to the particular progressive mode selected (e.g., progressive by quality, resolution or spatial order). The imaging server performs a fast preprocessing step in near real time after which it can respond to any ROI requests in near real time. When a ROI request arrives at the server, a progressive image encoding algorithm is performed, but not for the full image. Instead, the encoding algorithm is performed only for the ROI. Since the size of the ROI is bounded by the size and resolution of the viewing device at the client and not by the size of the image, only a small portion of the full progressive coding computation is performed for a local area of the original image.

Journal ArticleDOI
TL;DR: A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented and predicts accurately the results obtained using images compressed by the well-known EZW algorithm.
Abstract: The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is granted by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. This is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise. The noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It predicts accurately the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
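
To make the rounding-as-noise model concrete, the sketch below implements an integer 5/3 lifting step: the floor operations inside the lifting branches are exactly the rounding terms that can be modeled as additive noise for lossy use, yet the transform remains perfectly invertible for lossless use. Even-length signals and simple edge clamping are assumed; this is not the paper's code.

# Integer 5/3 lifting sketch (even-length signals, simple edge clamping).
# The floor divisions act like rounding noise, yet the forward/inverse
# pair reconstructs the input exactly.

def fwd_53(x: list[int]):
    n = len(x)
    ev = lambda i: x[i] if i < n else x[n - 2]        # clamp even neighbour at the edge
    d = [x[2*i + 1] - (ev(2*i) + ev(2*i + 2)) // 2 for i in range(n // 2)]
    dd = lambda i: d[i] if i >= 0 else d[0]           # clamp detail neighbour
    s = [x[2*i] + (dd(i - 1) + d[i] + 2) // 4 for i in range(n // 2)]
    return s, d

def inv_53(s: list[int], d: list[int]):
    n = 2 * len(s)
    dd = lambda i: d[i] if i >= 0 else d[0]
    x = [0] * n
    for i in range(len(s)):                           # even samples first
        x[2*i] = s[i] - (dd(i - 1) + d[i] + 2) // 4
    ev = lambda i: x[i] if i < n else x[n - 2]
    for i in range(len(d)):                           # then odd samples
        x[2*i + 1] = d[i] + (ev(2*i) + ev(2*i + 2)) // 2
    return x

if __name__ == "__main__":
    sig = [12, 15, 14, 30, 98, 97, 50, 51, 3, 7]
    assert inv_53(*fwd_53(sig)) == sig                # lossless round-trip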

Proceedings ArticleDOI
07 Oct 2001
TL;DR: An algorithm for image compression, in which the order of the compression and interpolation stages is reversed to avoid increasing redundancy before compression, is proposed, and an image transform is introduced to compress uninterpolated images with JPEG.
Abstract: We propose a new approach for image compression in digital cameras, where the goal is to achieve better quality at a given rate by using the characteristics of a Bayer color filter array. Most digital cameras produce color images by using one CCD plate, and each pixel in an image has only one color component, so an interpolation method is needed to produce a full color image. After finishing an image processing stage, in order to reduce the memory requirements of the camera, a lossless or lossy compression stage, using a coder such as JPEG, often follows. Before redundancy is decreased in the compression stage, it is increased in the interpolation stage. We propose an algorithm for image compression in which the order of the compression and interpolation stages is reversed to avoid increasing redundancy before compression. We introduce an image transform to compress uninterpolated images with JPEG. Our simulations show that the result of our algorithm is better than conventional methods for all compression ranges. The proposed algorithm provides not only better quality but also lower complexity, because the amount of luminance data in our method is only half that of conventional methods.
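
The compress-before-interpolate idea starts by keeping the Bayer mosaic as its four sub-sampled colour planes instead of demosaicking to a full-colour image; the sketch below shows only that plane separation for an assumed RGGB layout, not the transform the paper introduces to make the planes JPEG-friendly.

import numpy as np

# Split an RGGB Bayer mosaic into its four sub-sampled planes so they can
# be compressed before any interpolation.

def split_bayer_rggb(raw: np.ndarray):
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

if __name__ == "__main__":
    raw = np.arange(8 * 8, dtype=np.uint16).reshape(8, 8)   # stand-in sensor data
    planes = split_bayer_rggb(raw)
    print({k: v.shape for k, v in planes.items()})          # each plane is 4x4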

Patent
07 May 2001
TL;DR: In this article, a dictionary-based data compression apparatus is proposed to provide efficient compression of relatively short data packets having undefined contents as may be expected in a network switch by using a library of static dictionaries each optimized for a different data type, a data type determiner operable to scan incoming data and determine a data types thereof, a selector for selecting a static dictionary corresponding to a determined data type and a compressor for compressing said incoming data using said selected dictionary.
Abstract: Dictionary based data compression apparatus comprising: a library of static dictionaries each optimized for a different data type, a data type determiner operable to scan incoming data and determine a data type thereof, a selector for selecting a static dictionary corresponding to said determined data type and a compressor for compressing said incoming data using said selected dictionary. The apparatus is useful in providing efficient compression of relatively short data packets having undefined contents as may be expected in a network switch.
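
The select-a-static-dictionary-per-data-type idea maps naturally onto deflate's preset-dictionary feature; the sketch below illustrates it with Python's zlib. The dictionaries and the crude type detector are placeholders, not those of the patent.

import zlib

# Per-data-type static dictionaries using deflate's preset-dictionary
# feature.  Dictionaries and the type detector are illustrative only.

DICTIONARIES = {
    "http": b"GET POST HTTP/1.1 Host: Content-Type: Content-Length: Accept: ",
    "json": b'{"": "", "id": , "name": , "value": , true, false, null}',
}

def detect_type(packet: bytes) -> str:
    return "json" if packet.lstrip().startswith(b"{") else "http"

def compress_packet(packet: bytes) -> tuple[str, bytes]:
    kind = detect_type(packet)
    comp = zlib.compressobj(9, zlib.DEFLATED, zlib.MAX_WBITS,
                            zdict=DICTIONARIES[kind])
    return kind, comp.compress(packet) + comp.flush()

def decompress_packet(kind: str, blob: bytes) -> bytes:
    decomp = zlib.decompressobj(zlib.MAX_WBITS, zdict=DICTIONARIES[kind])
    return decomp.decompress(blob) + decomp.flush()

if __name__ == "__main__":
    pkt = b'{"id": 7, "name": "switch", "value": true}'
    kind, blob = compress_packet(pkt)
    assert decompress_packet(kind, blob) == pkt
    print(kind, len(pkt), "->", len(blob), "bytes")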

Proceedings ArticleDOI
21 Oct 2001
TL;DR: It is demonstrated that a specific type of regular lattice is able to represent the same data set as a Cartesian grid to the same accuracy but with 29.3% fewer samples, which speeds up traditional volume rendering algorithms by the same ratio.
Abstract: The classification of volumetric data sets as well as their rendering algorithms are typically based on the representation of the underlying grid. Grid structures based on a Cartesian lattice are the de-facto standard for regular representations of volumetric data. In this paper we introduce a more general concept of regular grids for the representation of volumetric data. We demonstrate that a specific type of regular lattice --- the so-called body-centered cubic --- is able to represent the same data set as a Cartesian grid to the same accuracy but with 29.3% fewer samples. This speeds up traditional volume rendering algorithms by the same ratio, which we demonstrate by adapting a splatting implementation for these new lattices. We investigate different filtering methods required for computing the normals on this lattice. The lattice representation also results in lossless compression ratios that are better than previously reported. Although other regular grid structures achieve the same sample efficiency, the body-centered cubic is particularly easy to use. The only assumption necessary is that the underlying volume is isotropic and band-limited, an assumption that is valid for most practical data sets.
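
The 29.3% figure can be traced to a standard sampling-density argument (recalled here, not quoted from the paper): for an isotropically band-limited volume, sampling on a body-centered cubic lattice requires only $N_{\mathrm{BCC}}/N_{\mathrm{CC}} = 1/\sqrt{2} \approx 0.707$ of the Cartesian samples, because the reciprocal (FCC) lattice packs the spherical spectral support more densely; hence $1 - 1/\sqrt{2} \approx 29.3\%$ fewer samples.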

Proceedings ArticleDOI
25 Oct 2001
TL;DR: A hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions is discussed, and it is shown that it outperforms other common compression schemes, such as discrete cosine transform, vector quantization, and principal component analysis.
Abstract: CT or MRI medical imaging produce human body pictures in digital form. Since these imaging techniques produce prohibitive amounts of data, compression is necessary for storage and communication purposes. Many current compression schemes provide a very high compression rate but with considerable loss of quality. On the other hand, in some areas in medicine, it may be sufficient to maintain high image quality only in the region of interest, i.e., in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions. We evaluate our method on medical CT images, and show that it outperforms other common compression schemes, such as discrete cosine transform, vector quantization, and principal component analysis. In our experiments, we emphasize CT imaging of the human colon.

Patent
18 May 2001
TL;DR: In this article, a low-cost camera by implementing the major functions in host software is provided, which is accomplished by sending raw, digitized data from the camera directly to the host, where the increased volume of raw data is handled by either an improved compression/decompression scheme using lossless compression, using lossy compression or using a shared bus with higher bandwidth.
Abstract: A low cost camera by implementing the major functions in host software is provided. This is accomplished by sending raw, digitized data from the camera directly to the host. The increased volume of raw data is handled by either an improved compression/decompression scheme using lossless compression, using lossy compression or using a shared bus with higher bandwidth. By moving such functions as color processing and scaling to the host, the pixel correction can also be moved to the host. This in turn allows the elimination of the frame buffer memory from the camera. Finally, the camera can use a low cost lens by implementing vignetting, distortion, gamma or aliasing correction with a correction value stored in a register of the camera for later access by the host to perform corrections.

Posted Content
TL;DR: A development of parts of rate-distortion theory and pattern-matching algorithms for lossy data compression, centered around a lossy version of the asymptotic equipartition property (AEP), is presented.
Abstract: We present a development of parts of rate-distortion theory and pattern-matching algorithms for lossy data compression, centered around a lossy version of the Asymptotic Equipartition Property (AEP). This treatment closely parallels the corresponding development in lossless compression, a point of view that was advanced in an important paper of Wyner and Ziv in 1989. In the lossless case we review how the AEP underlies the analysis of the Lempel-Ziv algorithm by viewing it as a random code and reducing it to the idealized Shannon code. This also provides information about the redundancy of the Lempel-Ziv algorithm and about the asymptotic behavior of several relevant quantities. In the lossy case we give various versions of the statement of the generalized AEP and we outline the general methodology of its proof via large deviations. Its relationship with Barron's generalized AEP is also discussed. The lossy AEP is applied to: (i) prove strengthened versions of Shannon's source coding theorem and universal coding theorems; (ii) characterize the performance of mismatched codebooks; (iii) analyze the performance of pattern-matching algorithms for lossy compression; (iv) determine the first order asymptotics of waiting times (with distortion) between stationary processes; (v) characterize the best achievable rate of weighted codebooks as an optimal sphere-covering exponent. We then present a refinement to the lossy AEP and use it to: (i) prove second order coding theorems; (ii) characterize which sources are easier to compress; (iii) determine the second order asymptotics of waiting times; (iv) determine the precise asymptotic behavior of longest match-lengths. Extensions to random fields are also given.
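
For orientation, the lossless AEP that this development parallels can be stated, for a stationary ergodic source with entropy rate $H$, as $-\frac{1}{n}\log P(X_1^n) \to H$ almost surely (a standard statement, not quoted from the paper). The generalized (lossy) AEP described in the abstract replaces the probability of the observed string by the measure of a distortion ball, roughly $-\frac{1}{n}\log Q^n\big(B(X_1^n, D)\big) \to R_1(D; P, Q)$, where $B(x_1^n, D)$ is the set of reproduction strings within distortion $D$ of $x_1^n$, $Q^n$ is a product codebook distribution, and $R_1$ is the associated rate function; the precise conditions and the form of $R_1$ are as given in the paper.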

Journal ArticleDOI
TL;DR: The PNG format provides a network-friendly, patent-free, lossless compression scheme that is truly cross-platform and has many new features that are useful for multimedia and Web-based radiologic teaching.
Abstract: Despite the rapid growth of the Internet for storage and display of World Wide Web-based teaching files, the available image file formats have remained relatively limited. The recently developed portable networks graphics (PNG) format is versatile and offers several advantages over the older Internet standard image file formats that make it an attractive option for digital teaching files. With the PNG format, it is possible to repeatedly open, edit, and save files with lossless compression along with gamma and chromicity correction. The two-dimensional interlacing capabilities of PNG allow an image to fill in from top to bottom and from right to left, making retrieval faster than with other formats. In addition, images can be viewed closer to the original settings, and metadata (ie, information about data) can be incorporated into files. The PNG format provides a network-friendly, patent-free, lossless compression scheme that is truly cross-platform and has many new features that are useful for multimedia and Web-based radiologic teaching. The widespread acceptance of PNG by the World Wide Web Consortium and by the most popular Web browsers and graphic manipulation software companies suggests an expanding role in the future of multimedia teaching file development.

Journal ArticleDOI
TL;DR: In this paper, a classified causal differential pulse code modulation scheme is proposed for optical data, either multi/hyperspectral three-dimensional (3D) or panchromatic two-dimensional (2D) observations, based on a classified linear regression prediction, followed by context-based arithmetic coding of the outcome prediction errors.
Abstract: Near-lossless compression yielding strictly bounded reconstruction error is proposed for high-quality compression of remote sensing images. A classified causal differential pulse code modulation scheme is presented for optical data, either multi/hyperspectral three-dimensional (3-D) or panchromatic two-dimensional (2-D) observations. It is based on a classified linear-regression prediction, followed by context-based arithmetic coding of the outcome prediction errors, and provides excellent performance, both for reversible and for irreversible (near-lossless) compression. Coding times are affordable thanks to fast convergence of training. Decoding is always real time. If the reconstruction errors fall within the boundaries of the noise distributions, the decoded images will be virtually lossless even though encoding was not strictly reversible.
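
The strict error bound in near-lossless coding comes from uniformly quantizing the prediction residuals with step 2*delta + 1, which guarantees a maximum reconstruction error of delta for a user-chosen delta; a standard sketch of that step (not the paper's classified predictor or its arithmetic coder) follows.

# Standard near-lossless residual quantization: with step 2*delta + 1 the
# reconstruction error is guaranteed to satisfy |x - x_rec| <= delta.

def quantize_residual(e: int, delta: int) -> int:
    step = 2 * delta + 1
    return (e + delta) // step if e >= 0 else -((-e + delta) // step)

def dequantize_residual(q: int, delta: int) -> int:
    return q * (2 * delta + 1)

if __name__ == "__main__":
    delta = 2
    for e in range(-12, 13):                       # residual = sample - prediction
        err = abs(e - dequantize_residual(quantize_residual(e, delta), delta))
        assert err <= delta
    print("max reconstruction error bounded by", delta)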

Journal ArticleDOI
TL;DR: The experimental results suggest that the compression method can be used effectively in developing real-time applications that must handle large volume data, made of color samples taken in three- or higher-dimensional space.
Abstract: This paper presents a new 3D RGB image compression scheme designed for interactive real-time applications. In designing our compression method, we have compromised between two important goals: high compression ratio and fast random access ability, and have tried to minimize the overhead caused during run-time reconstruction. Our compression technique is suitable for applications wherein data are accessed in a somewhat unpredictable fashion, and real-time performance of decompression is necessary. The experimental results on three different kinds of 3D images from medical imaging, image-based rendering, and solid texture mapping suggest that the compression method can be used effectively in developing real-time applications that must handle large volume data, made of color samples taken in three- or higher-dimensional space.

Patent
13 Nov 2001
TL;DR: In this article, a data encoding scheme maps a set of data to a number of spectral components, each component having an amplitude, a phase and a unique frequency, from these mapped tones, an analog baseband signal can be formed, which, when implemented in a data transmission scheme, can realize much higher throughput per available bandwidth than conventional techniques such as those employing binary baseband signals.
Abstract: A data encoding scheme maps a set of data to a number of spectral components, each component having an amplitude, a phase and a unique frequency. From these mapped tones, an analog baseband signal can be formed, which, when implemented in a data transmission scheme, can realize much higher throughput per available bandwidth than conventional techniques such as those employing binary baseband signals. The encoding scheme can also be implemented in data compression schemes and can realize lossless compression ratios exponentially superior to conventional compression schemes.

Patent
19 Mar 2001
TL;DR: In this article, a client-transparent method for compressing and transmitting requested network server data and uncompressing this data on client browsers is presented, where compression is performed dynamically or statically and in either a centralized or distributed manner.
Abstract: Client-transparent methods and apparatus are taught for compressing and transmitting requested network server data and uncompressing this data on client browsers. A network request for a file from a typical client specifies a list of acceptable encoding schemes. In response, the file is compressed using a substantially lossless encoding format or codec that is one of the acceptable encoding schemes listed. In some embodiments, compression is performed dynamically in response to requests. A particular content delivery server may be chosen to handle each network request for a file at least partly based upon one or more criteria indicating a relative quality of connectivity between the selected server and the requesting client. Compression is performed as a further element of a content delivery business service, and may be performed either dynamically or statically and in either a centralized or distributed manner. A proxy-server may be used to intercept and automatically modify client requests, in order to facilitate compression, transmittal, and decompression of network data to requesting clients in a client-transparent manner.
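
The negotiation described, in which the client lists acceptable encodings and the server returns a losslessly compressed body, follows the standard HTTP content-coding pattern; below is a minimal, hypothetical handler illustrating that pattern, not the patented system.

import gzip

# Minimal sketch of client-transparent response compression: honour the
# client's Accept-Encoding list and gzip the body losslessly when allowed.

def respond(body: bytes, accept_encoding: str) -> tuple[dict, bytes]:
    acceptable = {enc.split(";")[0].strip() for enc in accept_encoding.split(",")}
    if "gzip" in acceptable:
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body

if __name__ == "__main__":
    html = b"<html>" + b"lossless " * 200 + b"</html>"
    headers, payload = respond(html, "gzip, deflate, br")
    print(headers, len(html), "->", len(payload), "bytes")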

Proceedings Article
27 Mar 2001
TL;DR: Glicbawls (Grey Level Image Compression By Adaptive Weighted Least Squares) is presented, an algorithm that combines the compression rates of the impractical, high-complexity methods with the moderate computational requirements of the practical ones for natural images; it was originally developed as an entry for the International Obfuscated C Contest.
Abstract: Glicbawls: Grey Level Image Compression By Adaptive Weighted Least Squares. Bernd Meyer, Peter Tischer, School of Computer Science and Software Engineering, Monash University, Clayton, Victoria, Australia. Introduction: In recent years, most research into lossless and near-lossless compression of greyscale images could be characterized as belonging to either of two distinct groups. The first group, which is concerned with so-called practical algorithms, encompasses research into methods that allow compression and decompression with low to moderate computational complexity while still obtaining impressive compression ratios. Some well-known algorithms coming from this group are LOCO, CALIC and PAR. The other group is mainly concerned with determining what is theoretically possible. Algorithms coming from this group are usually characterized by extreme computational complexity and/or huge memory requirements. While their practical applicability is low, they generally achieve better compression than the best practical algorithm of the same time, thus proving beyond a doubt that the practical algorithms fail to exploit some redundancy inherent in the images. Well-known examples are UCM and TMW. What has been largely missing so far is an algorithm that combines the compression rates of the impractical algorithms with the moderate computational requirements of the practical ones. In this paper we present Glicbawls, an algorithm that achieves that goal for natural images. The current implementation can compress and decompress greyscale images with … to … bits per pixel, using raw or ASCII PGM files, in both lossless and near-lossless mode. Colour images (raw and ASCII PPM files) are also supported; while compression rates for them are not world-class, they are usually better than PNG's. Due to the simplicity of the Glicbawls algorithm, a full-featured encoder/decoder can be implemented in … bytes of C code. Even when including the decoder with each compressed image, the compression rates achieved are still extremely competitive. This is also due to some rather atrocious abuses of the C language. Glicbawls was originally developed as an entry for the International Obfuscated C Contest.


Journal ArticleDOI
TL;DR: The authors propose a compression scheme driven by texture analysis, homogeneity mapping and speckle noise reduction within the wavelet framework, and their results compare favorably with the conventional SPIHT wavelet and the JPEG compression methods.
Abstract: SAR image compression is very important in reducing the costs of data storage and transmission in relatively slow channels. The authors propose a compression scheme driven by texture analysis, homogeneity mapping and speckle noise reduction within the wavelet framework. The image compressibility and interpretability are improved by incorporating speckle reduction into the compression scheme. The authors begin with the classical set partitioning in hierarchical trees (SPIHT) wavelet compression scheme, and modify it to control the amount of speckle reduction, applying different encoding schemes to homogeneous and nonhomogeneous areas of the scene. The results compare favorably with the conventional SPIHT wavelet and the JPEG compression methods.

Journal ArticleDOI
TL;DR: A range of information-lossless address and instruction trace compression schemes that can reduce both storage space and access time by an order of magnitude or more, without discarding either references or interreference timing information from the original trace are discussed.
Abstract: The tremendous storage space required for a useful data base of program traces has prompted a search for trace reduction techniques. In this paper, we discuss a range of information-lossless address and instruction trace compression schemes that can reduce both storage space and access time by an order of magnitude or more, without discarding either references or interreference timing information from the original trace. The PDATS family of trace compression techniques achieves trace coding densities of about six references per byte. This family of techniques is now in use as the standard in the NMSU TraceBase, an extensive trace archive that has been established for use by the international research and teaching community.
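
Densities of several references per byte come from two generic ideas, delta-encoding successive addresses and storing each delta in as few bytes as it needs; the sketch below shows that generic scheme (zigzag mapping plus variable-length integers), not the exact PDATS record format.

# Generic address-trace compaction sketch: delta-encode successive
# references and store each delta as a variable-length integer.

def zigzag(v: int) -> int:
    return (v << 1) ^ (v >> 63)            # fold sign so small |deltas| stay small

def varint(u: int) -> bytes:
    out = bytearray()
    while True:
        byte, u = u & 0x7F, u >> 7
        out.append(byte | (0x80 if u else 0))
        if not u:
            return bytes(out)

def encode_trace(addresses: list[int]) -> bytes:
    out, prev = bytearray(), 0
    for a in addresses:
        out += varint(zigzag(a - prev))
        prev = a
    return bytes(out)

if __name__ == "__main__":
    trace = [0x40001000 + 4 * i for i in range(1000)]      # sequential fetches
    blob = encode_trace(trace)
    print(len(trace) * 8, "raw bytes ->", len(blob), "encoded bytes")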

Journal ArticleDOI
TL;DR: This work focuses on estimating the information conveyed to a user by hyperspectral image data, establishing the extent to which an increase in spectral resolution enhances the amount of usable information.
Abstract: This work focuses on estimating the information conveyed to a user by hyperspectral image data. The goal is establishing the extent to which an increase in spectral resolution enhances the amount of usable information. Indeed, a tradeoff exists between spatial and spectral resolution due to physical constraints of multi-band sensors imaging with a prefixed SNR. After describing an original method developed for the automatic estimation of variance and correlation of the noise introduced by hyperspectral imagers, lossless interband data compression is exploited to measure the useful information content of hyperspectral data. In fact, the bit rate achieved by the reversible compression process takes into account both the contribution of the "observation" noise (i.e., information regarded as statistical uncertainty, but whose relevance to a user is null) and the intrinsic information of radiance sampled and digitized through an ideally noise-free process. An entropic model of the decorrelated image source is defined and, once the parameters of the noise, assumed to be Gaussian and stationary, have been measured, such a model is inverted to yield an estimate of the information content of the noise-free source from the code rate. Results are reported and discussed on both simulated and AVIRIS data.
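
A hedged sketch of the correction involved: for stationary Gaussian noise of variance $\sigma^2$, the differential entropy is $h_{\mathrm{noise}} = \tfrac{1}{2}\log_2(2\pi e\,\sigma^2)$ bits per sample, so to first order the useful information rate can be read as the lossless code rate minus this noise contribution, $\hat{I} \approx R_{\mathrm{lossless}} - h_{\mathrm{noise}}$. The paper's actual procedure inverts a parametric entropy model of the decorrelated source rather than performing this bare subtraction.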

Journal ArticleDOI
TL;DR: This paper addresses the problem of compressing text images with JBIG2 by proposing two symbol dictionary design techniques: the class-based and tree-based techniques and comparing their coding efficiency, reconstructed image quality and system complexity.
Abstract: The JBIG2 standard for lossy and lossless bilevel image coding is a very flexible encoding strategy based on pattern matching techniques. This paper addresses the problem of compressing text images with JBIG2. For text image compression, JBIG2 allows two encoding strategies: SPM and PM&S. We compare in detail the lossless and lossy coding performance using the SPM-based and PM&S-based JBIG2, including their coding efficiency, reconstructed image quality and system complexity. For the SPM-based JBIG2, we discuss the bit rate tradeoff associated with symbol dictionary design. We propose two symbol dictionary design techniques: the class-based and tree-based techniques. Experiments show that the SPM-based JBIG2 is a more efficient lossless system, leading to 8% higher compression ratios on average. It also provides better control over the reconstructed image quality in lossy compression. However, SPM's advantages come at the price of higher encoder complexity. The proposed class-based and tree-based symbol dictionary designs outperform simpler dictionary formation techniques by 8% for lossless and 16-18% for lossy compression.