
Showing papers on "Image compression published in 1994"


01 Jan 1994
TL;DR: A block-sorting, lossless data compression algorithm is described, together with an implementation of that algorithm; the performance of the implementation is compared with that of widely available data compressors running on the same hardware.
Abstract: The charter of SRC is to advance both the state of knowledge and the state of the art in computer systems. From our establishment in 1984, we have performed basic and applied research to support Digital's business objectives. Our current work includes exploring distributed personal computing on multiple platforms, networking, programming technology, system modelling and management techniques, and selected applications. Our strategy is to test the technical and practical value of our ideas by building hardware and software prototypes and using them as daily tools. Interesting systems are too complex to be evaluated solely in the abstract; extended use allows us to investigate their properties in depth. This experience is useful in the short term in refining our designs, and invaluable in the long term in advancing our knowledge. Most of the major advances in information systems have come through this strategy, including personal computing, distributed systems, and the Internet. We also perform complementary work of a more mathematical flavor. Some of it is in established fields of theoretical computer science, such as the analysis of algorithms, computational geometry, and logics of programming. Other work explores new ground motivated by problems that arise in our systems research. We have a strong commitment to communicating our results; exposing and testing our ideas in the research and development communities leads to improved understanding. Our research report series supplements publication in professional journals and conferences. We seek users for our prototype systems among those with whom we have common interests, and we encourage collaboration with university researchers. Authors' abstract: We describe a block-sorting, lossless data compression algorithm, and our implementation of that algorithm. We compare the performance of our implementation with widely available data compressors running on the same hardware. The algorithm works by applying a reversible transformation to a block of input …

2,753 citations


Journal ArticleDOI
TL;DR: There has been a resurgence of interest in the properties of natural images, as discussed in this paper; their statistics matter not only for image compression but also for the study of sensory processing in biology, which can be viewed as satisfying certain "design criteria", and perhaps their most notable property is an invariance to scale.
Abstract: Recently there has been a resurgence of interest in the properties of natural images. Their statistics are important not only in image compression but also for the study of sensory processing in biology, which can be viewed as satisfying certain ‘design criteria’. This review summarizes previous work on image statistics and presents our own data. Perhaps the most notable property of natural images is an invariance to scale. We present data to support this claim as well as evidence for a hierarchical invariance in natural scenes. These symmetries provide a powerful description of natural images as they greatly restrict the class of allowed distributions.

956 citations


Proceedings ArticleDOI
01 Apr 1994
TL;DR: The Photobook system is described, which is a set of interactive tools for browsing and searching images and image sequences that differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations.
Abstract: We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually significant coefficients. We describe three Photobook tools in particular: one that allows search based on gray-level appearance, one that uses 2-D shape, and a third that allows search based on textural properties.

941 citations


Journal ArticleDOI
TL;DR: A full color video compression strategy, based on 3-D subband coding with camera pan compensation, to generate a single embedded bit stream supporting multiple decoder display formats and a wide, finely gradated range of bit rates is proposed.
Abstract: We propose a full color video compression strategy, based on 3-D subband coding with camera pan compensation, to generate a single embedded bit stream supporting multiple decoder display formats and a wide, finely gradated range of bit rates. An experimental implementation of our algorithm produces a single bit stream, from which suitable subsets are extracted to be compatible with many decoder frame sizes and frame rates and to satisfy transmission bandwidth constraints ranging from several tens of kilobits per second to several megabits per second. Reconstructed video quality from any of these bit stream subsets is often found to exceed that obtained from an MPEG-1 implementation, operated with equivalent bit rate constraints, in both perceptual quality and mean squared error. In addition, when restricted to 2-D, the algorithm produces some of the best results available in still image compression.

688 citations


Journal Article
TL;DR: This article describes a simple general-purpose data compression algorithm, called Byte Pair Encoding (BPE), which provides almost as much compression as the popular Lempel, Ziv, and Welch method.
Abstract: Data compression is becoming increasingly important as a way to stretch disk space and speed up data transfers. This article describes a simple general-purpose data compression algorithm, called Byte Pair Encoding (BPE), which provides almost as much compression as the popular Lempel, Ziv, and Welch (LZW) method [3, 2]. (I mention the LZW method in particular because it delivers good overall performance and is widely used.) BPE’s compression speed is somewhat slower than LZW’s, but BPE’s expansion is faster. The main advantage of BPE is the small, fast expansion routine, ideal for applications with limited memory. The accompanying C code provides an efficient implementation of the algorithm.
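
The pair-substitution loop at the heart of BPE is easy to sketch. Below is a minimal Python illustration of the idea (not Gage's C implementation; the pass limit and the plain-dict substitution table are simplifications made here for clarity):

```python
from collections import Counter

def bpe_compress(data: bytes, max_passes: int = 255):
    """Repeatedly replace the most frequent adjacent byte pair with an
    unused byte value, recording each substitution in a table.

    Sketch only: Gage's BPE works block-wise and packs the pair table
    compactly; here the table is a plain dict {new_byte: (left, right)}.
    """
    data = bytearray(data)
    table = {}
    for _ in range(max_passes):
        used = set(data)
        spare = [b for b in range(256) if b not in used and b not in table]
        if not spare or len(data) < 2:
            break  # no free byte value, or nothing left to pair
        pairs = Counter(zip(data, data[1:]))
        (left, right), count = pairs.most_common(1)[0]
        if count < 2:
            break  # no pair repeats, so another pass cannot help
        new = spare[0]
        out = bytearray()
        i = 0
        while i < len(data):
            if i + 1 < len(data) and data[i] == left and data[i + 1] == right:
                out.append(new)
                i += 2
            else:
                out.append(data[i])
                i += 1
        data = out
        table[new] = (left, right)
    return bytes(data), table

def bpe_expand(packed: bytes, table: dict) -> bytes:
    """Expansion is simple recursive substitution, which is why it is fast."""
    def expand(b: int) -> bytes:
        if b in table:
            left, right = table[b]
            return expand(left) + expand(right)
        return bytes([b])
    return b"".join(expand(b) for b in packed)

if __name__ == "__main__":
    text = b"abcabcabcabc"
    packed, table = bpe_compress(text)
    assert bpe_expand(packed, table) == text
```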

657 citations


01 Jan 1994
TL;DR: Some simple functions to compute the discrete cosine transform and to compress images are developed, illustrating the use of Mathematica in image processing and providing the reader with the basic tools for further exploration of this subject.
Abstract: The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components. It is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. These functions illustrate the power of Mathematica in the prototyping of image processing algorithms. The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high-definition television (HDTV) has increased the need for effective and standardized image compression techniques. Among the emerging standards are JPEG, for compression of still images [Wallace 1991]; MPEG, for compression of motion video [Puri 1992]; and CCITT H.261 (also known as Px64), for compression of video telephony and teleconferencing. All three of these standards employ a basic technique known as the discrete cosine transform (DCT). Developed by Ahmed, Natarajan, and Rao [1974], the DCT is a close relative of the discrete Fourier transform (DFT). Its application to image compression was pioneered by Chen and Pratt [1984]. In this article, I will develop some simple functions to compute the DCT and show how it is used for image compression. We have used these functions in our laboratory to explore methods of optimizing image compression for the human viewer, using information about the human visual system [Watson 1993]. The goal of this paper is to illustrate the use of Mathematica in image processing and to provide the reader with the basic tools for further exploration of this subject.
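
The block-DCT-plus-quantization pipeline the article builds in Mathematica can also be sketched in a few lines of Python; the 8x8 block size follows the JPEG convention, but the single uniform quantization step used here is an illustrative assumption, not the article's code:

```python
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8   # JPEG-style 8x8 blocks
Q = 16.0    # one uniform quantization step, purely for illustration

def compress_block(block: np.ndarray) -> np.ndarray:
    """Forward 2-D DCT of one block followed by uniform quantization."""
    return np.round(dctn(block.astype(float), norm="ortho") / Q)

def decompress_block(coeffs: np.ndarray) -> np.ndarray:
    """Dequantize and apply the inverse 2-D DCT."""
    return idctn(coeffs * Q, norm="ortho")

if __name__ == "__main__":
    x = np.arange(BLOCK)
    block = 32 + 2 * x[None, :] + 3 * x[:, None]   # a smooth ramp, typical of image blocks
    coeffs = compress_block(block)
    rec = decompress_block(coeffs)
    # Smooth blocks compact into a few low-frequency coefficients; the
    # many zeros are what the later entropy-coding stage exploits.
    print("nonzero coefficients:", np.count_nonzero(coeffs), "of", BLOCK * BLOCK)
    print("max reconstruction error:", float(np.abs(rec - block).max()))
```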

364 citations


01 Jan 1994
TL;DR: Three approaches to the measurement of medical image quality are described: signal-to-noise ratio (SNR), subjective rating, and diagnostic accuracy; these approaches are compared and contrasted in a particular application, and recently developed methods for determining the diagnostic accuracy of lossy compressed medical images are considered.
Abstract: Compressing a digital image can facilitate its transmission, storage, and processing. As radiology departments become increasingly digital, the quantities of their imaging data are forcing consideration of compression in picture archiving and communication systems. Significant compression is achievable only by lossy algorithms, which do not permit the exact recovery of the original images.

284 citations


Proceedings ArticleDOI
27 Jun 1994
TL;DR: A subband coding scheme based on estimation and exploitation of a just-noticeable-distortion (JND) profile is presented to maintain high image quality at low bit-rates.
Abstract: To maintain high image quality with low bit-rates, an effective coding algorithm should remove not only statistical correlation but also perceptual redundancy from image signals. A subband coding scheme based on estimation and exploitation of a just-noticeable-distortion (JND) profile is presented.

211 citations


Journal ArticleDOI
TL;DR: A simple way to get better compression performance (in the MSE sense) via quadtree decomposition, using a near-optimal choice of the quadtree decomposition threshold and a bit allocation procedure based on equations derived from rate-distortion theory.
Abstract: Quadtree decomposition is a simple technique used to obtain an image representation at different resolution levels. This representation can be useful for a variety of image processing and image compression algorithms. This paper presents a simple way to get better compression performance (in the MSE sense) via quadtree decomposition, by using a near-optimal choice of the threshold for the quadtree decomposition and a bit allocation procedure based on equations derived from rate-distortion theory. The rate-distortion performance of the improved algorithm is calculated for a Gaussian field and examined via simulation over benchmark gray-level images. In both cases, significant improvement in compression performance is shown.
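
A hedged sketch of the threshold-driven quadtree decomposition the paper optimizes; the variance test, the threshold value, and the minimum block size below are illustrative assumptions, not the paper's near-optimal choices:

```python
import numpy as np

def quadtree_leaves(img: np.ndarray, threshold: float, min_size: int = 4):
    """Recursively split a square (power-of-two) image into blocks until each
    block's variance falls below `threshold` or the block reaches `min_size`.

    Returns a list of (row, col, size, mean) leaves; coding the leaves
    instead of the raw pixels is the basis of quadtree compression.
    """
    leaves = []

    def recurse(r: int, c: int, size: int) -> None:
        block = img[r:r + size, c:c + size]
        if size <= min_size or block.var() <= threshold:
            leaves.append((r, c, size, float(block.mean())))
            return
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                recurse(r + dr, c + dc, half)

    recurse(0, 0, img.shape[0])
    return leaves

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    img[:32, :32] = 128.0   # one flat quadrant that collapses into a single leaf
    leaves = quadtree_leaves(img, threshold=50.0)
    print(len(leaves), "leaves instead of", img.size, "pixels")
```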

176 citations


Journal ArticleDOI
TL;DR: Explains what wavelets are and gives a practical, nuts-and-bolts tutorial on wavelet-based compression that will help readers to understand and experiment with this important new technology.
Abstract: The wavelet transform has become a cutting-edge technology in image compression research. This article explains what wavelets are and provides a practical, nuts-and-bolts tutorial on wavelet-based compression that will help readers to understand and experiment with this important new technology.

171 citations


Patent
29 Nov 1994
TL;DR: A negotiation handshake protocol is described which enables two sites to negotiate the compression rate based on factors such as the speed or data bandwidth of the communications connection between the sites, the data demand between the sites, and the amount of silence detected in the speech signal.
Abstract: The present invention includes software and hardware components to enable digital data communication over standard telephone lines. The present invention converts analog voice signals to digital data, compresses that data and places the compressed speech data into packets for transfer over the telephone lines to a remote site. A voice control digital signal processor (DSP) operates to use one of a plurality of speech compression algorithms which produce a scaleable amount of compression. The rate of compression is inversely proportional to the quality of the speech the compression algorithm is able to reproduce. The higher the compression, the lower the reproduction quality. The selection of the rate of compression is dependent on such factors as the speed or data bandwidth of the communications connection between the two sites, the data demand between the sites, and the amount of silence detected in the speech signal. The voice compression rate is dynamically changed as the aforementioned factors change. A negotiation handshake protocol is described which enables the two sites to negotiate the compression rate based on such factors.

Journal ArticleDOI
01 Jun 1994
TL;DR: This overview focuses on a comparison of the lossless compression capabilities of the international standard algorithms for still image compression known as MH, MR, MMR, JBIG, and JPEG.
Abstract: This overview focuses on a comparison of the lossless compression capabilities of the international standard algorithms for still image compression known as MH, MR, MMR, JBIG, and JPEG. Where the algorithms have parameters to select, these parameters have been carefully set to achieve maximal compression. Compression variations due to differences in data are illustrated, and the scaling of these compression results with spatial resolution or amplitude precision is explored. These algorithms are also summarized in terms of the compression technology they utilize, with further references given for precise technical details and the specific international standards involved.

Patent
25 Jan 1994
TL;DR: A method for performing image compression that eliminates redundant and invisible image components using a Discrete Cosine Transform (DCT); each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed.
Abstract: A method for performing image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
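
The mechanism being adapted, each DCT coefficient divided by its own quantization-matrix entry, can be sketched as follows; the ramp-shaped matrix below is purely illustrative, standing in for the perceptually derived matrix the patent computes from masking and error-pooling models:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative 8x8 quantization matrix: the step size grows with spatial
# frequency, so less visible high-frequency coefficients are coded coarsely.
# The patent instead derives each entry from luminance masking, contrast
# masking, and error pooling for the particular image; this ramp is a stand-in.
U, V = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
Q_BASE = 8.0 + 4.0 * (U + V)

def quantize(block: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Quantize each DCT coefficient by its own matrix entry.

    A larger `scale` raises every step size, lowering the bit rate at the
    cost of larger quantization errors.
    """
    return np.round(dctn(block.astype(float), norm="ortho") / (Q_BASE * scale))

def dequantize(coeffs: np.ndarray, scale: float = 1.0) -> np.ndarray:
    return idctn(coeffs * (Q_BASE * scale), norm="ortho")
```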

Proceedings ArticleDOI
13 Nov 1994
TL;DR: A method is described, called DCTune, for the design of color quantization matrices that is based on a model of the visibility of quantization artifacts that describes the artifact visibility as a function of DCT frequency, color channel, and display resolution and brightness.
Abstract: The JPEG baseline standard for image compression employs a block discrete cosine transform (DCT) and uniform quantization. For a monochrome image, a single quantization matrix is allowed, while for a color image, distinct matrices are allowed for each color channel. We describe a method, called DCTune, for the design of color quantization matrices that is based on a model of the visibility of quantization artifacts. The model describes the artifact visibility as a function of DCT frequency, color channel, and display resolution and brightness. The model also describes the summation of artifacts over space and frequency, and masking of artifacts by the image itself. The DCTune matrices are different from the de facto JPEG matrices, and appear to provide superior visual quality at equal bit-rates.

Patent
Bryan J. Dawson1
01 Sep 1994
TL;DR: A method and apparatus which dynamically selects an image compression process for an image to be transferred from a first agent to a second agent, based on the image's size (the amount of storage space required to store it) and its color resolution (the number of different colors it may contain).
Abstract: A method and apparatus which dynamically selects an image compression process for an image to be transferred from a first agent to a second agent. The image being compressed has a particular size associated with it, which indicates the amount of storage space required to store the image, such as in the system memory or a mass storage device. The image also has a particular color resolution associated with it, which indicates the number of different colors which the image may contain. A particular image compression process is selected for the image based on its size and color resolution. In one embodiment, the present invention produces one of three possible outcomes. First, the image may remain uncompressed. Second, the image may be compressed using a lossless compression process, which reduces the size of the image while retaining all data for the image. Third, the image may be compressed using a lossy compression process, which reduces the size of the image by losing a small amount of data for the image.
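
A toy sketch of the selection logic described above; the abstract does not give numeric thresholds, so the cutoff values here are invented purely for illustration:

```python
def choose_compression(size_bytes: int, num_colors: int) -> str:
    """Return one of the three outcomes the patent describes: leave the image
    uncompressed, compress it losslessly, or compress it lossily.

    The cutoff values are illustrative assumptions; the patent only states
    that the choice is driven by image size and color resolution.
    """
    if size_bytes < 4 * 1024:
        return "none"       # tiny images: compression overhead is not worth it
    if num_colors <= 256:
        return "lossless"   # palettized images: keep every pixel value exact
    return "lossy"          # large true-color images: accept a small data loss
```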

Patent
02 Dec 1994
TL;DR: The objective of compression is to reduce the number of bits as much as possible, while keeping the resolution and the visual quality of the reconstructed image as close to the original image as possible.
Abstract: An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
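
A rough sketch of the fill-and-subtract step: edge pixels keep their image values and the remaining pixels are obtained by relaxing Laplace's equation. A plain Jacobi iteration stands in for the patent's multi-grid solver, and the border handling is simplified:

```python
import numpy as np

def fill_from_edges(edge_mask: np.ndarray, image: np.ndarray,
                    iterations: int = 500) -> np.ndarray:
    """Build the 'filled edge array': edge pixels keep their image values and
    every other pixel is smoothly interpolated by relaxing Laplace's equation.

    A plain Jacobi iteration stands in for the patent's multi-grid solver
    (same fixed point, slower), and borders are treated as wrapping around
    for brevity.
    """
    filled = np.where(edge_mask, image.astype(float), float(image.mean()))
    for _ in range(iterations):
        neighbor_avg = 0.25 * (np.roll(filled, 1, axis=0) + np.roll(filled, -1, axis=0)
                               + np.roll(filled, 1, axis=1) + np.roll(filled, -1, axis=1))
        filled = np.where(edge_mask, filled, neighbor_avg)
    return filled

def split_for_coding(image: np.ndarray, edge_mask: np.ndarray):
    """Return the two parts that are coded separately: the filled edge array
    and the low-energy difference array (image minus filled edge array)."""
    filled = fill_from_edges(edge_mask, image)
    return filled, image.astype(float) - filled
```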

Patent
26 Apr 1994
TL;DR: An image compression/decompression coprocessor (410) integrated on a single chip, in which a control unit (418) is connected by an internal, global bus (416) to a number of different, special-purpose processing units (414, 420, 422, 424, 426 and 428).
Abstract: The present invention provides an image compression/decompression coprocessor (410) which is integrated on a single chip. The coprocessor has a control unit (418) which is connected by an internal, global bus (416) to a number of different, special-purpose processing units (414, 420, 422, 424, 426 and 428). Each of the processing units is specifically designed to handle only certain steps in compression and decompression processes.

Journal ArticleDOI
TL;DR: Simulation results show that for high spectral resolution images, significant savings can be made by using spectral correlations in addition to spatial correlations, and the increase in complexity incurred in order to make these gains is minimal.
Abstract: While spatial correlations are adequately exploited by standard lossless image compression techniques, little success has been attained in exploiting spectral correlations when dealing with multispectral image data. The authors present some new lossless image compression techniques that capture spectral correlations as well as spatial correlation in a simple and elegant manner. The schemes are based on the notion of a prediction tree, which defines a noncausal prediction model for an image. The authors present a backward adaptive technique and a forward adaptive technique. They then give a computationally efficient way of approximating the backward adaptive technique. The approximation gives good results and is extremely easy to compute. Simulation results show that for high spectral resolution images, significant savings can be made by using spectral correlations in addition to spatial correlations. Furthermore, the increase in complexity incurred in order to make these gains is minimal.
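
To see why spectral prediction helps, compare a purely spatial predictor with a co-located previous-band predictor on synthetic, spectrally correlated bands. This fixed-predictor sketch is far simpler than the paper's prediction-tree model and is meant only to illustrate the mechanism:

```python
import numpy as np

def entropy_bits(residuals: np.ndarray) -> float:
    """Empirical zeroth-order entropy of the residuals, in bits per pixel."""
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def spatial_residual(band: np.ndarray) -> np.ndarray:
    """Predict each pixel from its left neighbour (a purely spatial predictor)."""
    pred = np.zeros_like(band)
    pred[:, 1:] = band[:, :-1]
    return band - pred

def spectral_residual(band: np.ndarray, prev_band: np.ndarray) -> np.ndarray:
    """Predict each pixel from the co-located pixel of the previous band."""
    return band - prev_band

if __name__ == "__main__":
    # Synthetic bands built to be spectrally correlated (shared structure) but
    # spatially uncorrelated, so only the cross-band predictor can win here.
    rng = np.random.default_rng(2)
    shared = rng.integers(0, 200, size=(64, 64)).astype(np.int64)
    band1 = shared + rng.integers(0, 8, size=shared.shape)
    band2 = shared + rng.integers(0, 8, size=shared.shape)
    print(f"spatial prediction : {entropy_bits(spatial_residual(band2)):.2f} b/pixel")
    print(f"spectral prediction: {entropy_bits(spectral_residual(band2, band1)):.2f} b/pixel")
```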

Patent
25 Jul 1994
TL;DR: In this article, the color image compression and decompression is achieved by either spatially or chromatically multiplexing three digitized color planes, into a digital array representative of a single digitized spatially- and chromatically-multiplexed plane, or, by use of a color imaging device, capturing an image directly into a single spatially multiplexed image plane, for further compression, transmission and/or storage.
Abstract: Color image compression and decompression are achieved by either spatially or chromatically multiplexing three digitized color planes into a digital array representative of a single digitized spatially- and chromatically-multiplexed plane, or, by use of a color imaging device, capturing an image directly into a single spatially-multiplexed image plane, for further compression, transmission and/or storage (40). At the point of decompression, a demultiplexer (50) separately extracts, from the stored or transmitted image, data to restore each of the color planes. Specific demultiplexing techniques involve correlating information of other planes with the color plane to be demultiplexed. Various techniques of entropy reduction, smoothing and speckle reduction may be used together with standard digital color compression techniques, such as JPEG. Using lossless JPEG, about 6:1 data compression is achieved with no losses in subsequent processing after initial compression. Using lossy JPEG, substantially higher data compression is achievable, but with proportional loss in perceived image quality.
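
Only the spatial multiplexing step is easy to show compactly; the sketch below builds a Bayer-style mosaic from three planes, one illustrative pattern among those the patent covers. Demultiplexing, which interpolates the missing samples using correlations with the other planes, is where the patent's specific techniques lie and is omitted here:

```python
import numpy as np

def mux_bayer(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spatially multiplex three color planes into a single plane using a
    Bayer-style sampling pattern (an illustrative choice; the patent also
    covers chromatic multiplexing and direct single-plane capture).

    The three planes must share the same even-sized shape.
    """
    h, w = g.shape
    mosaic = np.empty((h, w), dtype=g.dtype)
    mosaic[0::2, 0::2] = g[0::2, 0::2]   # green kept on two corners of each 2x2 cell
    mosaic[1::2, 1::2] = g[1::2, 1::2]
    mosaic[0::2, 1::2] = r[0::2, 1::2]   # red and blue each sampled once per cell
    mosaic[1::2, 0::2] = b[1::2, 0::2]
    return mosaic
```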

Proceedings ArticleDOI
17 Oct 1994
TL;DR: This work presents a lossless data-compression algorithm which, being oriented specifically for volume data, achieves greater compression performance than generic compression algorithms that are typically available on modern computer systems.

Proceedings ArticleDOI
29 Mar 1994
TL;DR: The authors use the amount of uncertainty or entropy between marks as the criterion for the matching process and present a novel method of screening which uses a quad-tree decomposition and finds local centroids at each tree level.
Abstract: Textual image compression is a method of both lossy and lossless image compression that is particularly effective for images containing repeated sub-images, notably pages of text. This paper addresses the problem of pattern comparison by using an information or compression based approach. Following Mohiuddin et al. (1984), the authors use the amount of uncertainty or entropy between marks as the criterion for the matching process. The entropy model they use is the context-based compression model proposed by Langdon and Rissanen (1981) and further developed by Moffat (1991). There are two principal issues to investigate when studying template matching methods: their susceptibility to different kinds of noise, and how they respond to errors in the initial registration. Because of the computation-intensive nature of the comparison operation, many schemes have been devised to pre-filter or screen the marks in advance to determine those that will surely fail the match. They present a novel method of screening which uses a quad-tree decomposition and finds local centroids at each tree level.

Patent
05 Jul 1994
TL;DR: In this paper, a media editing system for editing source material comprising digitizing apparatus for receiving and digitizing video and audio source material, the video source material including a sequence of images, each spanning both the horizontal and vertical display axes of the video sources material.
Abstract: A media editing system for editing source material comprising digitizing apparatus for receiving and digitizing video and audio source material, the video source material including a sequence of images, each spanning both the horizontal and vertical display axes of the video source material. The editing system also includes computing apparatus including compression apparatus responsive to the digitizing apparatus. The compression apparatus compresses the images from the video source material. The computing apparatus determines if at least one of the compressed images occupies more than a target amount of storage and provides an indication if the at least one of the compressed images does occupy more than the target amount of storage. The compression apparatus is responsive to this indication to adjust its level of compression. The computing apparatus is also for manipulating the stored source material. The editing system further comprises a mass storage responsive to the computing apparatus to receive the compressed video source material and the audio source material, and output apparatus communicating with the computing apparatus to display the manipulated source material. In another general aspect, the invention provides a data buffer that compensates for differences in data rates between a storage device and an image compression processor, together with a method and apparatus for the real-time indexing of frames in a video data sequence.

Proceedings ArticleDOI
16 Sep 1994
TL;DR: Both waveform and description coding techniques are presented and applied to a dense motion field; a reduction factor between 3:1 and 9:1 of the raw bit rate of the motion vectors can be reached.
Abstract: This paper deals with the comparison of different motion vector coding methods suitable for motion compensated image communication systems. Both waveform and description coding techniques are presented and applied to a dense motion field. Only lossless compression is considered, and a reduction factor between 3:1 and 9:1 of the raw bit rate of the motion vectors can be reached. Although some of these techniques have already been used for the coding of motion information, it is of great interest to compare them under the same conditions, with the intent of applying them to professional motion compensated applications as opposed to the particular case of motion compensated picture coding.

Journal ArticleDOI
TL;DR: The authors consider data compression of binary error-diffused images, using nonlinear filters to decode the error-diffused images so that they can be compressed in the gray-scale domain; this gives better image quality than directly compressing the binary images.
Abstract: The authors consider data compression of binary error-diffused images. The original contribution is using nonlinear filters to decode error-diffused images to compress them in the gray-scale domain; this gives better image quality than directly compressing the binary images. Their method is of low computational complexity and can work with any halftoning algorithm.

Patent
02 May 1994
TL;DR: In this paper, a method of producing a video image from a compressed version of a source video image which has been compressed by transforming to a transform domain and quantizing the source image in accordance with quantization constraints is presented.
Abstract: A method of producing a video image from a compressed version of a source video image which has been compressed by transforming to a transform domain and quantizing the source video image in accordance with quantization constraints, including back-transforming from the transform domain and dequantizing the compressed version to produce a decompressed video image, reducing total variation in the first decompressed video image to produce a reduced variation image, transforming the reduced variation image to produce a revised transform and conforming the revised transform with the quantization constraints of the compressed version so as to produce a constrained transform, and back-transforming the constrained transform so as to produce a replica of the source video image.

Proceedings ArticleDOI
16 Sep 1994
TL;DR: Results of a scheme to encode video sequences of digital image data based on a quadtree still-image fractal method showing near real-time software-only decoding; resolution independence; high compression ratios; and low compression times as compared with standard fixed image fractal schemes.
Abstract: We present results of a scheme to encode video sequences of digital image data based on a quadtree still-image fractal method. The scheme encodes each frame using image pieces, or vectors, from its predecessor; hence it can be thought of as a VQ scheme in which the code book is derived from the previous image. We present results showing: near real-time (5 - 12 frames/sec) software-only decoding; resolution independence; high compression ratios (25 - 244:1); and low compression times (2.4 - 66 sec/frame) as compared with standard fixed image fractal schemes.

Journal ArticleDOI
TL;DR: The model predicts that an 8-b single-band image subject to noise with unit standard deviation can be compressed reversibly to no less than 2.0 b/pixel, equivalent to a maximum compression ratio of about 4:1, and has been extended to multispectral imagery.
Abstract: Reversible image compression rarely achieves compression ratios larger than about 3:1. An explanation of this limit is offered, which hinges upon the additive noise the sensor introduces into the image. Simple models of this noise allow lower bounds on the bit rate to be estimated from sensor noise parameters rather than from ensembles of typical images. The model predicts that an 8-b single-band image subject to noise with unit standard deviation can be compressed reversibly to no less than 2.0 b/pixel, equivalent to a maximum compression ratio of about 4:1. The model has been extended to multispectral imagery. The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) is used as an example, as the noise in its 224 bands is well characterized. The model predicts a lower bound on the bit rate for the compressed data of about 5.5 b/pixel when a single codebook is used to encode all the bands. A separate codebook for each band (i.e., 224 codebooks) reduces this bound by 0.5 b/pixel to about 5.0 b/pixel, but 90% of this reduction is provided by only four codebooks. Empirical results corroborate these theoretical predictions.
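
The 2.0 b/pixel figure is essentially the entropy of the sensor noise itself. Assuming the simple noise model is Gaussian noise quantized at unit steps (an assumption on our part, not spelled out in the abstract), the arithmetic checks out:

```python
import math

# Differential entropy of Gaussian noise with standard deviation sigma is
# 0.5 * log2(2 * pi * e * sigma**2) bits; with sigma = 1 and unit quantization
# steps this is the per-pixel cost that no reversible coder can remove.
sigma = 1.0
noise_bits = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)
print(f"noise floor: {noise_bits:.2f} b/pixel")                      # about 2.05
print(f"max reversible ratio for 8-b data: {8 / noise_bits:.1f}:1")  # about 3.9, i.e. roughly 4:1
```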

Journal ArticleDOI
TL;DR: A complete biorthogonal basis of QMF is implemented by a fast wavelet transform chip designed with Verilog HDL and the image processing is demonstrated numerically.
Abstract: Both the discrete wavelet transform (DWT) and the inverse DWT are implemented using the lossless quadrature mirror filter (QMF) bank. The image passing through the finite impulse response QMF filtering becomes blurred and thus requires fewer pixels. Such a decimation amounts to the critical sampling that leads to complexity O(N) for N data. The data compression comes from the permissible bits-per-pixel dynamic range compression of those filtered images having fewer details. The image reconstruction at a telereceiving station is accomplished by means of the inverse DWT. Thus, a complete biorthogonal basis of QMF is implemented by a fast wavelet transform chip designed with Verilog HDL, and the image processing is demonstrated numerically. Adaptive DWT is sketched.
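
The subband split and critical sampling described above can be illustrated with the simplest QMF pair, the Haar filters; the chip itself uses longer biorthogonal QMF filters, so this is only a sketch of the structure:

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One level of a 2-D Haar DWT, the simplest QMF pair, shown only to
    illustrate the subband split and critical sampling."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 2.0   # blurred approximation (kept, or split again)
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2; the pair reconstructs the input exactly."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```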

Journal ArticleDOI
TL;DR: This preliminary study suggests that digitized mammograms are very amenable to compression by techniques compatible with the JPEG standard.
Abstract: We have developed a Joint Photographic Experts Group (JPEG) compatible image compression scheme tailored to the compression of digitized mammographic images. This includes a preprocessing step that segments the tissue area from the background, replaces the background pixels with a constant value, and applies a noise-removal filter to the tissue area. The process was tested by performing a just-noticeable difference (JND) study to determine the relationship between compression ratio and a reader's ability to discriminate between compressed and noncompressed versions of digitized mammograms. We found that at compression ratios of 15∶1 and below, image-processing experts are unable to detect a difference, whereas at ratios of 60∶1 and above they can identify the compressed image nearly 100% of the time. The performance of less specialized viewers was significantly lower because these viewers seemed to have difficulty in differentiating between artifact and real information at the lower and middle compression ratios. This preliminary study suggests that digitized mammograms are very amenable to compression by techniques compatible with the JPEG standard. However, this study was not designed to address the efficacy of image compression process for mammography, but is a necessary first step in optimizing the compression in anticipation of more elaborate reader performance (ROC) studies.
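
A hedged sketch of the preprocessing step described above, which segments the tissue, replaces background pixels with a constant, and noise-filters the tissue area; the intensity threshold and the 3x3 median filter are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_mammogram(img: np.ndarray, background_thresh: float = 20.0,
                         background_value: float = 0.0) -> np.ndarray:
    """Segment tissue from background, replace background pixels with a
    constant, and noise-filter the tissue area before JPEG compression.

    The threshold and filter size are illustrative; the filter is applied to
    the whole image and then masked, a simplification of filtering only the
    tissue region.
    """
    tissue = img > background_thresh
    filtered = median_filter(img, size=3)
    return np.where(tissue, filtered, background_value)
```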

01 Dec 1994
TL;DR: The methods that have been developed to reduce the time complexity of fractal image compression encoding are reviewed; a new taxonomy of the methods is presented, an evaluation is provided, and two new techniques are proposed.
Abstract: Fractal image compression allows fast decoding but suffers from long encoding times. During the encoding, a large number of sequential searches through a list of domains (portions of the image) are carried out while trying to find a best match for another image portion called a range. In this article we review and extend the methods that have been developed to reduce the time complexity of this searching. We also present a new taxonomy of the methods, provide an evaluation, and propose two new techniques.