
Showing papers on "Image compression published in 1992"


Journal ArticleDOI
TL;DR: If pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression.
Abstract: A novel theory is introduced for analyzing image compression methods that are based on compression of wavelet decompositions. This theory precisely relates (a) the rate of decay in the error between the original image and the compressed image as the size of the compressed image representation increases (i.e., as the amount of compression decreases) to (b) the smoothness of the image in certain smoothness classes called Besov spaces. Within this theory, the error incurred by the quantization of wavelet transform coefficients is explained. Several compression algorithms based on piecewise constant approximations are analyzed in some detail. It is shown that, if pictures can be characterized by their membership in the smoothness classes considered, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression. Based on previous experimental research it is argued that in most instances the error incurred in image compression should be measured in the integral sense instead of the mean-square sense.
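The relation between error decay and smoothness can be illustrated with a small numerical experiment. The following sketch (an illustration only, not the paper's analysis; the Haar transform, image size, and error surrogate are assumptions) keeps only the N largest-magnitude wavelet coefficients and reports how the discarded mass shrinks as N grows:

```python
import numpy as np

def haar2d(img, levels=3):
    """Orthonormal 2-D Haar decomposition (square, power-of-two sized input assumed)."""
    a = img.astype(float).copy()
    n = a.shape[0]
    for _ in range(levels):
        h = a[:n, :n]
        lo = (h[:, 0::2] + h[:, 1::2]) / np.sqrt(2)   # row transform
        hi = (h[:, 0::2] - h[:, 1::2]) / np.sqrt(2)
        h = np.hstack([lo, hi])
        lo = (h[0::2, :] + h[1::2, :]) / np.sqrt(2)   # column transform
        hi = (h[0::2, :] - h[1::2, :]) / np.sqrt(2)
        a[:n, :n] = np.vstack([lo, hi])
        n //= 2                                        # recurse on the low-low band
    return a

def discarded_mass(coeffs, keep):
    """Sum of |coefficients| thrown away when only the `keep` largest are retained."""
    mags = np.sort(np.abs(coeffs).ravel())
    return mags[:-keep].sum()

img = np.random.rand(256, 256)          # stand-in for a test image
c = haar2d(img)
for keep in (256, 1024, 4096, 16384):
    print(keep, round(discarded_mass(c, keep), 2))
```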

1,038 citations


Journal ArticleDOI
TL;DR: High-quality variable-rate image compression is achieved by segmenting an image into regions of different sizes, classifying each region into one of several perceptually distinct categories, and using a distinct coding procedure for each category.
Abstract: High-quality variable-rate image compression is achieved by segmenting an image into regions of different sizes, classifying each region into one of several perceptually distinct categories, and using a distinct coding procedure for each category. Segmentation is performed with a quadtree data structure by isolating the perceptually more important areas of the image into small regions and separately identifying larger random texture blocks. Since the important regions have been isolated, the remaining parts of the image can be coded at a lower rate than would be otherwise possible. High-quality coding results are achieved at rates between 0.35 and 0.7 b/p depending on the nature of the original image, and satisfactory results have been obtained at 0.25 b/p.
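A minimal sketch of the segmentation step (an illustration, not the paper's coder; the variance criterion, threshold, and minimum block size are assumptions) shows how a quadtree can isolate active regions into small blocks:

```python
import numpy as np

def quadtree(img, x, y, size, var_thresh, min_size, leaves):
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        leaves.append((x, y, size))            # one leaf = one coding region
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree(img, x + dx, y + dy, half, var_thresh, min_size, leaves)

img = np.random.rand(256, 256)                  # stand-in for an input image
leaves = []
quadtree(img, 0, 0, 256, var_thresh=0.02, min_size=8, leaves=leaves)
print(len(leaves), "regions")
```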

253 citations


Journal ArticleDOI
TL;DR: Results from an image compression scheme based on iterated transforms are presented as a function of several encoding parameters including maximum allowed scale factor, number of domains, resolution of scale and offset values, minimum range size, and target fidelity.
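A minimal sketch of iterated-transform (fractal) encoding, under assumed block sizes and a least-squares fit for the scale and offset values mentioned in the summary; the maximum allowed scale factor appears as the clamp s_max:

```python
import numpy as np

def decimate(block):                    # 8x8 -> 4x4 by 2x2 averaging
    return block.reshape(4, 2, 4, 2).mean(axis=(1, 3))

def encode_range(r, domains, s_max=1.0):
    best = None
    for idx, d in enumerate(domains):
        dm, rm = d.mean(), r.mean()
        denom = ((d - dm) ** 2).sum() or 1e-12
        s = np.clip(((d - dm) * (r - rm)).sum() / denom, -s_max, s_max)
        o = rm - s * dm
        err = ((s * d + o - r) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best                          # (error, domain index, scale, offset)

img = np.random.rand(64, 64)             # stand-in image
domains = [decimate(img[y:y + 8, x:x + 8])
           for y in range(0, 64, 8) for x in range(0, 64, 8)]
r = img[0:4, 0:4]                         # one range block
print(encode_range(r, domains))
```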

231 citations


Patent
07 Aug 1992
TL;DR: In this article, the authors present a method for image compression in a filmless digital camera, where each image is individually evaluated and the compression applied in such manner as to retain maximum quality while fitting the data into the pre-assigned memory.
Abstract: In a filmless digital camera, each image is individually evaluated and the compression applied in such manner as to retain maximum quality while fitting the data into the pre-assigned memory. For example, such a camera may have a stated image storage capacity for a designated number of images, for example, thirty-two black and white images. In one embodiment of this invention, the image data is generated as analog data and converted into digital data. These data, representing one complete image, are divided into small discrete blocks. Each of these blocks is compressed using one of the standard compression methods such as the discrete cosine transform (DCT). Each block of compressed data is then examined and a determination made as to the quality of an image resulting from such compression. If the quality falls below a pre-set standard, the block of data is compressed by an alternate method, for example, by differential coding. The blocks of data that meet the quality requirement, without use of the alternate compression method, are recorded by the first compression method without further processing. Each block of data is coded to indicate the method by which it is compressed. After compression of the entire image, a computation is made of the memory storage capacity required for the image. If the required memory is appreciably less than the amount of memory allocated for each image, the compression parameters are adjusted accordingly and the image compressed again. When the memory requirement falls within the established tolerance, the image is recorded. Each block of data is encoded to indicate the method by which it was compressed prior to recording.
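A minimal sketch of the per-block method selection described above, with an assumed block size, quality metric, and fallback coder (SciPy's DCT stands in for the camera's compression hardware):

```python
import numpy as np
from scipy.fftpack import dctn, idctn   # SciPy assumed available

def dct_code(block, q=16):
    coefs = np.round(dctn(block, norm='ortho') / q)        # coarse scalar quantization
    recon = idctn(coefs * q, norm='ortho')
    return ('DCT', coefs), recon

def diff_code(block):
    d = np.diff(block, axis=1, prepend=0.0)                 # horizontal differential coding
    recon = np.cumsum(d, axis=1)                            # lossless reconstruction
    return ('DIFF', d), recon

def encode_block(block, max_mse=25.0):
    code, recon = dct_code(block)
    if np.mean((recon - block) ** 2) <= max_mse:
        return code                                         # quality met: keep DCT version
    return diff_code(block)[0]                              # otherwise use the alternate method

block = (np.random.rand(8, 8) * 255).round()
method, payload = encode_block(block)
print(method, payload.shape)                                # the tag records the chosen method
```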

190 citations


Patent
30 Nov 1992
TL;DR: In this article, a multichannel image compression system uses a plurality of encoders to compress image data, and a coding level command is provided to each of the encoder to specify a level of quality provided by each encoder.
Abstract: A multichannel image compression system uses a plurality of encoders to compress image data. A coding level command is provided to each of the encoders to specify a level of quality to be provided by each encoder. Encoded image data, provided by the encoders in response to the coding level command, is multiplexed into a combined signal for transmission. The coding level command is adjusted in response to an accumulated amount of data from the combined signal, to maintain the accumulated data within a throughput capability of a communication channel. Although the coding level command may specify a global coding level that is the same for all of the encoders, the encoders can derive local coding levels from the global coding level to provide different encoding qualities. Decoder apparatus is provided to recover an image from the compressed image data.

189 citations


Proceedings ArticleDOI
12 May 1992
TL;DR: A high-performance robot vision system that performs real-time tracking of moving objects, real- time optical flow computation, and high-speed depth map generation is described.
Abstract: The authors describe a high-performance robot vision system that performs real-time tracking of moving objects, real-time optical flow computation, and high-speed depth map generation. The system was implemented as a transputer-based vision system augmented with a high-speed correlation processor. The transputer vision board was equipped with three image frame memories, each of which could be used simultaneously for image input, image processing, and image display. Thus, the system could devote all its computation power to image processing without waiting for image input or display. The vision board was also equipped with a standard image compression chip, used as a correlation processor. Using the chip, a very fast correlation-based robot vision system was developed. This system can also be used in a multiprocessor configuration to greatly improve performance.

167 citations


Journal ArticleDOI
TL;DR: Three fast search routines to be used in the encoding phase of vector quantization (VQ) image compression systems are presented and show that the proposed algorithms need only 3-20% of the number of mathematical operations required by a full search.
Abstract: Three fast search routines to be used in the encoding phase of vector quantization (VQ) image compression systems are presented. These routines, which are based on geometric considerations, provide the same results as an exhaustive (or full) search. Examples show that the proposed algorithms need only 3-20% of the number of mathematical operations required by a full search and fewer than 50% of the operations required by recently proposed alternatives.
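One standard geometric trick of this kind, partial-distance elimination, is sketched below as an illustration; it is not necessarily one of the paper's three routines, but like them it returns exactly the full-search winner while skipping most of the arithmetic:

```python
import numpy as np

def pde_search(x, codebook):
    best_i, best_d = 0, float('inf')
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:          # early exit: this code vector cannot beat the best
                break
        else:
            best_i, best_d = i, d    # completed the sum, so it is the new best match
    return best_i, best_d

codebook = np.random.rand(256, 16)    # 256 code vectors of dimension 16
x = np.random.rand(16)
print(pde_search(x, codebook))
```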

154 citations


Patent
08 Apr 1992
TL;DR: In this paper, a method and apparatus for image compression suitable for personal computer applications, which compresses and stores data in two steps, is presented, where an image is captured in real-time and compressed using an efficient method and stored to a hard disk.
Abstract: A method and apparatus for image compression suitable for personal computer applications, which compresses and stores data in two steps. An image is captured in real-time and compressed using an efficient method and stored to a hard-disk. At some later time, the data is further compressed in non-real-time using a computationally more intense algorithm that results in a higher compression ratio. The two-step approach allows the storage reduction benefits of a highly sophisticated compression algorithm to be achieved without requiring the computational resources to perform this algorithm in real-time. A compression algorithm suitable for performing the first compression step on a host processor in a personal computer is also described. The first compression step accepts 4:2:2 YCrCb data from the video digitizer. The two chrominance components are averaged and a pseudo-random number is added to all components. The resulting values are quantized and packed into a single 32-bit word representing a 2×2 array of pixels. The seed value for the pseudo-random number is remembered so that the pseudo-random noise can be removed before performing the second compression step.
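A minimal sketch of the first compression step under assumed bit allocations (6 bits per Y sample and 4 bits per averaged chroma component; the patent text here does not fix the split): dither with a seeded pseudo-random generator, quantize, and pack a 2x2 pixel group into one 32-bit word:

```python
import numpy as np

def pack_2x2(y4, cr2, cb2, rng):
    cr = np.mean(cr2)                       # average the two chrominance samples
    cb = np.mean(cb2)
    dither = rng.integers(0, 4, size=6)     # pseudo-random values, reproducible by seed
    vals = np.concatenate([y4, [cr, cb]]) + dither
    yq = np.clip(vals[:4] / 4, 0, 63).astype(np.uint32)    # 8 -> 6 bits per Y sample
    crq = np.uint32(np.clip(vals[4] / 16, 0, 15))          # 8 -> 4 bits per chroma
    cbq = np.uint32(np.clip(vals[5] / 16, 0, 15))
    word = (yq[0] << 26) | (yq[1] << 20) | (yq[2] << 14) | (yq[3] << 8) | (crq << 4) | cbq
    return np.uint32(word)

seed = 1234                                  # stored so the dither can be removed later
rng = np.random.default_rng(seed)
y = np.array([100, 102, 98, 101], float)
print(hex(pack_2x2(y, [60, 62], [120, 118], rng)))
```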

154 citations


Patent
03 Sep 1992
TL;DR: In this article, the authors present a method for compressing portions of the input data flow that includes the steps of: allocating the random access memory to portions of input data flows; determining when an insufficient amount of random Access Memory is available for such allocation; employing a first data compression procedure on the input Data Flow portions to produce a compressed data portion; testing the compressed data portions to determine if a level of compression has been achieved that exceeds a threshold and, if not, employing succeeding data compression procedures and repeating the test for each procedure against a threshold, whereby the compression procedure
Abstract: A peripheral unit converts an input data flow to page-arranged outputs and includes a random access memory capacity that is insufficient in size to accommodate an entire page of raster data. The peripheral unit also includes a processor and a control memory that holds a plurality of data compression procedures, each procedure exhibiting a different performance characteristic. The peripheral unit performs a method for compressing portions of the input data flow that includes the steps of: allocating the random access memory to portions of the input data flow; determining when an insufficient amount of random access memory is available for such allocation; employing a first data compression procedure on the input data flow portions to produce a compressed data portion; testing the compressed data portion to determine if a level of compression has been achieved that exceeds a threshold and, if not, employing succeeding data compression procedures and repeating the test for each procedure against a threshold, whereby the compression procedure that first enables a threshold level of compression to be achieved is the compression procedure employed to compress the data flow portion. Improved compression methods and techniques for handling input data flows with both integral and independent image descriptors are also described.
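A minimal sketch of the cascade idea (procedure names and the threshold are illustrative; general-purpose compressors stand in for the printer's stored procedures): try each procedure in order and keep the first one that reaches the threshold compression level:

```python
import bz2, lzma, zlib

PROCEDURES = [("zlib-fast", lambda d: zlib.compress(d, 1)),
              ("zlib-best", lambda d: zlib.compress(d, 9)),
              ("bz2",       bz2.compress),
              ("lzma",      lzma.compress)]

def compress_portion(data, ratio_threshold=2.0):
    for name, proc in PROCEDURES:
        out = proc(data)
        if len(data) / len(out) >= ratio_threshold:
            return name, out                 # first procedure that meets the threshold wins
    return name, out                         # fall back to the last (strongest) procedure

band = bytes(range(256)) * 64                # stand-in for a portion of raster data
print(compress_portion(band)[0])
```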

144 citations


Patent
17 Nov 1992
TL;DR: In this article, the authors propose an enhancement to a standard lossy image compression technique wherein a single set of side information is provided to allow decompression of the compressed file, where certain portions of the image are selected (either by the user or automatically) for more compression than other portions.
Abstract: An enhancement to a standard lossy image compression technique wherein a single set of side information is provided to allow decompression of the compressed file. Certain portions of the image are selected (either by the user or automatically) for more compression than other portions of the image. A particular embodiment is implemented for use with the JPEG image compression technique. JPEG calls for subdividing the image into blocks, transforming the array of pixel values in each block according to a discrete cosine transform (DCT) so as to generate a plurality of coefficients, quantizing the coefficients for each block, and entropy encoding the quantized coefficients for each block. Techniques for increasing the compression ratio include subjecting each selected block to a low pass filtering operation prior to the transform, subjecting the coefficients for each selected block to a thresholding operation before the quantizing step, subjecting the coefficients for each selected block to a downward weighting operation before encoding them, or, where the entropy encoding uses Huffman codes, mapping coefficients to adjacent shorter codes.
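A minimal sketch of one of the listed options, coefficient thresholding before quantization; the block size, threshold, and flat quantization matrix are illustrative, and a plain DCT stands in for the full JPEG pipeline:

```python
import numpy as np
from scipy.fftpack import dctn

def jpeg_like_block(block, q, selected, threshold=20.0):
    coefs = dctn(block - 128.0, norm='ortho')
    if selected:                               # extra compression for selected blocks:
        coefs[np.abs(coefs) < threshold] = 0.0 # zero small coefficients before quantizing
    return np.round(coefs / q)                 # standard quantization follows unchanged

q = np.full((8, 8), 16.0)                      # illustrative flat quantization matrix
block = (np.random.rand(8, 8) * 255).round()
plain = jpeg_like_block(block, q, selected=False)
thresh = jpeg_like_block(block, q, selected=True)
print(int((plain != 0).sum()), "vs", int((thresh != 0).sum()), "nonzero coefficients")
```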

143 citations


Book ChapterDOI
01 Jan 1992
TL;DR: In this article, the authors present background, theory and specific implementation notes for an image compression scheme based on fractal transforms and compare the results from various implementations with standard image compression techniques.
Abstract: This article presents background, theory, and specific implementation notes for an image compression scheme based on fractal transforms. Results from various implementations are presented and compared to standard image compression techniques.

Proceedings ArticleDOI
J.M. Shapiro1
23 Mar 1992
TL;DR: A simple, yet remarkably effective, image compression algorithm that has the property that the bits in the bit stream are generated in order of importance, yielding fully hierarchical image compression suitable for embedded coding or progressive transmission is described.
Abstract: A simple, yet remarkably effective, image compression algorithm that has the property that the bits in the bit stream are generated in order of importance, yielding fully hierarchical image compression suitable for embedded coding or progressive transmission, is described. Given an image bit stream, the decoder can cease decoding at any point and still reconstruct the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. The compression algorithm is based on three key concepts: (1) wavelet transform or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, and (3) hierarchical entropy-coded quantization.

Journal ArticleDOI
TL;DR: An adaptive electronic neural network processor has been developed for high-speed image compression based on a frequency-sensitive self-organization algorithm that is quite efficient and can achieve near-optimal results.
Abstract: An adaptive electronic neural network processor has been developed for high-speed image compression based on a frequency-sensitive self-organization algorithm. The performance of this self-organization network and that of a conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results. The neural network processor includes a pipelined codebook generator and a paralleled vector quantizer, which obtains a time complexity O(1) for each quantization vector. A mixed-signal design technique with analog circuitry to perform neural computation and digital circuitry to process multiple-bit address information is used. A prototype chip for a 25-D adaptive vector quantizer of 64 code words was designed, fabricated, and tested. It occupies a silicon area of 4.6 mm * 6.8 mm in a 2.0 µm scalable CMOS technology and provides a computing capability as high as 3.2 billion connections/s. The experimental results for the chip and the winner-take-all circuit test structure are presented.
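A software analogue of frequency-sensitive competitive learning for codebook design is sketched below (learning rate, epoch count, and the usage-count fairness term are illustrative choices, not the chip's exact algorithm):

```python
import numpy as np

def fscl_train(data, n_codes=64, lr=0.05, epochs=5, seed=0):
    rng = np.random.default_rng(seed)
    codes = data[rng.choice(len(data), n_codes, replace=False)].astype(float)
    counts = np.ones(n_codes)                             # usage counts bias the competition
    for _ in range(epochs):
        for x in data:
            d = ((codes - x) ** 2).sum(axis=1) * counts   # frequency-sensitive distortion
            w = int(np.argmin(d))                         # winner-take-all selection
            codes[w] += lr * (x - codes[w])               # move the winner toward the input
            counts[w] += 1
    return codes

vectors = np.random.rand(2000, 25)   # 25-D training vectors, as in the described quantizer
codebook = fscl_train(vectors)
print(codebook.shape)
```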

Proceedings ArticleDOI
11 Oct 1992
TL;DR: The authors present a unified mathematical approach that allows one to formulate both linear and nonlinear algorithms in terms of minimization problems related to the so-called K-functionals of harmonic analysis.
Abstract: The authors present a unified mathematical approach that allows one to formulate both linear and nonlinear algorithms in terms of minimization problems related to the so-called K-functionals of harmonic analysis. They then summarize the previously developed mathematics that analyzes the image compression and Gaussian noise removal algorithms.

Journal ArticleDOI
TL;DR: It is shown that the best transforms for transform image coding, namely, the scrambled real discrete Fourier transform, the discrete cosine transform, and the discrete Cosine-III transform are also the best for image enhancement.
Abstract: Blockwise transform image enhancement techniques are discussed. Previously, transform image enhancement has usually been based on the discrete Fourier transform (DFT) applied to the whole image. Two major drawbacks with the DFT are high complexity of implementation involving complex multiplications and additions, with intermediate results being complex numbers, and the creation of severe block effects if image enhancement is done blockwise. In addition, the quality of enhancement is not very satisfactory. It is shown that the best transforms for transform image coding, namely, the scrambled real discrete Fourier transform, the discrete cosine transform, and the discrete cosine-III transform, are also the best for image enhancement. Three techniques of enhancement discussed in detail are alpha-rooting, modified unsharp masking, and filtering motivated by the human visual system response (HVS). With proper modifications, it is observed that unsharp masking and HVS-motivated filtering without nonlinearities are basically equivalent. Block effects are completely removed by using an overlap-save technique in addition to the best transform.
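A minimal sketch of blockwise alpha-rooting with the DCT, one of the three techniques discussed; the block size, alpha value, and DC normalization are illustrative choices:

```python
import numpy as np
from scipy.fftpack import dctn, idctn

def alpha_root_block(block, alpha=0.85):
    c = dctn(block, norm='ortho')
    dc = abs(c[0, 0]) or 1.0
    mag = np.abs(c) / dc                              # coefficient magnitudes relative to DC
    # |X|^(alpha-1) boost, leaving zero coefficients untouched
    gain = np.power(mag, alpha - 1.0, where=mag > 0, out=np.ones_like(mag))
    return idctn(c * gain, norm='ortho')

block = (np.random.rand(8, 8) * 255).round()
print(np.round(alpha_root_block(block), 1))
```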

Patent
Ke-Chiang Chu1
18 Dec 1992
TL;DR: In this paper, a data compression process and system that identifies the data type of an input data stream and then selects in response to the identified data type at least one data compression method from a set of data compression methods that provides an optimal compression ratio for that particular data type, thus maximizing the compression ratio of that data stream.
Abstract: A data compression process and system that identifies the data type of an input data stream and then selects, in response to the identified data type, at least one data compression method from a set of data compression methods that provides an optimal compression ratio for that particular data type, thus maximizing the compression ratio for that input data stream. Moreover, the data compression process also provides means to alter the rate of compression during data compression for added flexibility and data compression efficiency. Furthermore, a system memory allocation process is also provided to allow system or user control over the amount of system memory to be allocated for the memory-intensive data compression process. The system memory allocation process estimates the memory requirement to compress the input data stream and allocates only that amount of system memory needed by the data compression, for memory allocation efficiency.

Journal ArticleDOI
01 Jul 1992
TL;DR: An intelligent forms processing system (IFPS) which provides capabilities for automatically indexing form documents for storage/retrieval to/from a document library and for capturing information from scanned form images using intelligent character recognition (ICR).
Abstract: This paper describes an intelligent forms processing system (IFPS) which provides capabilities for automatically indexing form documents for storage/retrieval to/from a document library and for capturing information from scanned form images using intelligent character recognition (ICR). The system also provides capabilities for efficiently storing form images. IFPS consists of five major processing components: (1) An interactive document analysis stage that analyzes a blank form in order to define a model of each type of form to be accepted by the system; the parameters of each model are stored in a form library. (2) A form recognition module that collects features of an input form in order to match it against one represented in the form library; the primary feature used in this step is the pattern of lines defining data areas on the form. (3) A data extraction component that registers the selected model to the input form, locates data added to the form in fields of interest, and removes the data image to a separate image area. A simple mask defining the center of the data region suffices to initiate the extraction process; search routines are invoked to track data that extends beyond the masks. Other special processing is called on to detect lines that intersect the data image and to delete the lines with minimum distortion to the rest of the image. (4) An ICR unit that converts the extracted image data to symbol code for input to data base or other conventional processing systems. Three types of ICR logic have been implemented in order to accommodate monospace typing, proportionally spaced machine text, and handprinted alphanumerics. (5) A forms dropout module that removes the fixed part of a form and retains only the data filled in for storage. The stored data can be later combined with the fixed form to reconstruct the original form. This provides for extremely efficient storage of form images, thus making possible the storage of a very large number of forms in the system. IFPS is implemented as part of a larger image management system called Image and Records Management system (IRM). It is being applied in forms data management in several state government applications.

Proceedings ArticleDOI
27 Aug 1992
TL;DR: In this article, the authors used Minkowski-metric as a combination rule for small impairments like those usually encountered in digitally coded images, which can be represented by a set of orthogonal vectors along the axes of a multidimensional Euclidean space.
Abstract: The urge to compress the amount of information needed to represent digitized images while preserving perceptual image quality has led to a plethora of image-coding algorithms. At high data compression ratios, these algorithms usually introduce several coding artifacts, each impairing image quality to a greater or lesser extent. These impairments often occur simultaneously. For the evaluation of image-coding algorithms, it is important to find out how these impairments combine and how this can be described. The objective of the present study is to show that Minkowski-metrics can be used as a combination rule for small impairments like those usually encountered in digitally coded images. To this end, an experiment has been conducted in which subjects assessed the perceptual quality of scale-space-coded color images comprising three kinds of impairment, viz., 'unsharpness', 'phantoms' (dark/bright patches within bright/dark homogeneous regions) and 'color desaturation'. The results show an accumulation of these impairments that is efficiently described by a Minkowski-metric with an exponent of about two. The latter suggests that digital-image-coding impairments may be represented by a set of orthogonal vectors along the axes of a multidimensional Euclidean space. An extension of Minkowski-metrics is presented to generalize the proposed combination rule to large impairments.
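The proposed combination rule is easy to state directly; in the sketch below (the impairment strengths are made-up numbers), an exponent near two makes the impairments accumulate like orthogonal vector components:

```python
def minkowski_combine(impairments, p=2.0):
    """Combine individual impairment strengths with a Minkowski metric of exponent p."""
    return sum(abs(d) ** p for d in impairments) ** (1.0 / p)

# e.g. perceived strengths of unsharpness, phantoms, and color desaturation
print(minkowski_combine([0.8, 0.5, 0.3]))        # p = 2: Euclidean-like accumulation
print(minkowski_combine([0.8, 0.5, 0.3], p=1))   # p = 1: simple additive combination
```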

Journal ArticleDOI
TL;DR: A compression method for multispectral data sets is proposed where a small subset of image bands is initially vector quantized and the remaining bands are predicted from the quantized images.
Abstract: A compression method for multispectral data sets is proposed where a small subset of image bands is initially vector quantized. The remaining bands are predicted from the quantized images. Two different types of predictors are examined, an affine predictor and a new nonlinear predictor. The residual (error) images are encoded at a second stage based on the magnitude of the errors. This scheme simultaneously exploits both spatial and spectral correlation inherent in multispectral images. Simulation results on an image set from the Thematic Mapper with seven spectral bands provide a comparison of the affine predictor with the nonlinear predictor. It is shown that the nonlinear predictor provides significantly improved performance compared to the affine predictor. Image compression ratios between 15 and 25 are achieved with remarkably good image quality.
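A minimal sketch of the affine predictor stage (the nonlinear predictor and the magnitude-based residual coder are omitted, and the band data are synthetic): each remaining band is predicted as a least-squares affine combination of the quantized reference bands:

```python
import numpy as np

def affine_predict(ref_bands, target):
    # ref_bands: (k, H, W) quantized bands; target: (H, W) band to predict
    X = np.column_stack([b.ravel() for b in ref_bands] + [np.ones(target.size)])
    coef, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)
    pred = (X @ coef).reshape(target.shape)
    return pred, target - pred                   # prediction and residual (error) image

rng = np.random.default_rng(0)
base = rng.random((64, 64))
refs = np.stack([base, base ** 2])               # stand-ins for two quantized bands
band = 0.6 * base + 0.3 * base ** 2 + 0.05 * rng.random((64, 64))
pred, resid = affine_predict(refs, band)
print(float(np.abs(resid).mean()))
```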

Patent
23 Dec 1992
TL;DR: In this article, multiple hash tables are used based on different subblock sizes for string matching, and this improves the compression ratio and rate of compression, while using multiple hashing tables with a recoverable hashing method further improves compression ratio.
Abstract: Compressing a sequence of characters drawn from an alphabet uses string substitution with no a priori information. An input data block is processed into an output data block comprised of variable length incompressible data sections and variable length compressed token sections. Multiple hash tables are used based on different subblock sizes for string matching, and this improves the compression ratio and rate of compression. The plurality of uses of the multiple hash tables allows for selection of an appropriate compression data rate and/or compression factor in relation to the input data. Using multiple hashing tables with a recoverable hashing method further improves compression ratio and compression rate. Each incompressible data section contains means to distinguish it from compressed token sections.

Patent
23 Mar 1992
TL;DR: In this article, an edge table is provided to hold values where each value, when combined with the differential value for a block on the edge of the virtual image, provides an absolute value for the block without reference to blocks beyond the edge.
Abstract: In an image compression system using a typical image compression scheme, a pointer array is provided to point to each of the many MCUs in a compressed image file. From all the blocks of an image, a subset of the blocks is selected as a virtual image. The virtual image is edited, and each edited block is compressed into an edited MCU and placed in an edited block region; the pointer to the original MCU is changed to point to the new MCU. In this way, the pointer array can be modified to perform an Undo operation. An edge table is provided to hold values where each value, when combined with the differential value for a block on the edge of the virtual image, provides an absolute value for the block without reference to blocks beyond the edge of the virtual image. The entries in the edge table are determined from the compressed MCUs without the blocks being fully decompressed. More than one edge table can be provided. In an image editor, a virtual image is decompressed from a compressed image, the virtual image is processed, and then recompressed. The recompressed, edited blocks are then placed in an edited block memory. In an alternate embodiment, values to be combined with a differential value are held in an offset table for all the selectable blocks.

Patent
23 Nov 1992
TL;DR: In this article, an image compression method based on symbol matching is disclosed, where a voting scheme is used in conjunction with a plurality of novel similarity tests to improve symbol matching accuracy.
Abstract: An image compression method based on symbol matching is disclosed. Precompression of the image is performed prior to symbol matching to improve efficiency. A voting scheme is used in conjunction with a plurality of novel similarity tests to improve symbol matching accuracy. A template composition scheme achieves image enhancement. Other disclosed features provide further advantages. Apparatus for implementing the image compression method for image transmission, storage, and enhancement are included.

Journal ArticleDOI
TL;DR: This work presents two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions.
Abstract: We give a new paradigm for lossless image compression, with four modular components: pixel sequence, prediction, error modeling and coding. We present two new methods (called MLP and PPPM) for lossless compression, both involving linear prediction, modeling prediction errors by estimating the variance of a Laplace distribution, and coding using arithmetic coding applied to precomputed distributions. The MLP method is both progressive and parallelizable. We give results showing that our methods perform significantly better than other currently used methods for lossless compression of high resolution images, including the proposed JPEG standard. We express our results both in terms of the compression ratio and in terms of a useful new measure of compression efficiency, which we call compression gain.
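A minimal sketch of the prediction and error-modeling stages (the pixel sequence and arithmetic coder are omitted; the causal predictor and the continuous Laplace code-length approximation are illustrative choices, not the exact MLP/PPPM design):

```python
import numpy as np

def predict_and_model(img):
    img = img.astype(float)
    pred = np.zeros_like(img)
    # simple causal linear predictor: average of the left and upper neighbors
    pred[1:, 1:] = 0.5 * (img[1:, :-1] + img[:-1, 1:])
    err = (img - pred)[1:, 1:].ravel()
    b = np.mean(np.abs(err)) + 1e-9                 # Laplace scale estimated from the errors
    # idealized code length in bits under a continuous Laplace density
    p = np.exp(-np.abs(err) / b) / (2 * b)
    bits = -np.log2(np.maximum(p, 1e-12)).sum()
    return bits / err.size                          # estimated bits per pixel

img = (np.random.rand(128, 128) * 255).round()
print(round(predict_and_model(img), 2), "bits/pixel (noise image: near-incompressible)")
```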

Patent
19 Nov 1992
TL;DR: In this article, a discrete transform image data compression system was proposed, in which frequency transform coefficients are modified in accordance with a matrix of quantizer values, employing a predefined plurality of quantization matrices to adaptively select, on a document-by-document basis, an approximate memory packet size for each document's compressed image data storage.
Abstract: A discrete transform image data compression system in which frequency transform coefficients are modified in accordance with a matrix of quantizer values employs a predefined plurality of quantization matrices to adaptively select, on a document-by-document basis, an approximate memory packet size for each document's compressed image data storage by selecting one of the plurality of quantization matrices in accordance with the packet size estimate obtained for each document image. Additionally, the system employs generation of contrast reduction and gray level stretch remapping curves as a function of global image data characteristics, such as a gray level histogram of the document image data. The remapping curves are utilized to preprocess the image data for more effective data compression.

Patent
28 Aug 1992
TL;DR: In this paper, a method and apparatus for storing compressed bit map images in a laser printer is described, where bit maps representing a page of data are divided into bands and compressed into the printer memory and when needed by the interpreter/rasterizer, they are decompressed into another portion of that memory, or when desired to print those bands they are directly transmitted to a decompression engine.
Abstract: A method and apparatus for storing compressed bit map images in a laser printer. Bit map images representing a page of data are divided into bands and compressed into the printer memory. Then, when needed by the interpreter/rasterizer, they are decompressed into another portion of that memory, or when it is desired to print those bands they are transmitted directly to a decompression engine. The bands of the bit map image are compressed using a Lempel-Ziv algorithm that contains improvements allowing compression towards the end of the band and improving the compression speed at the beginning of the band by initializing a hash table. Further, the interpreter/rasterizer switches between compression routines depending on the available memory and the desired speed of compression. The compression routine requests supplemental destination buffers when it needs additional memory in which to compress data. Finally, the compression continues to add margin white space during the compression of the uncompressed bit images so that margin white space need not be stored in the uncompressed bit image.

Proceedings ArticleDOI
01 Nov 1992
TL;DR: The results of recent work on a specific adaptive algorithm that provides excellent robustness properties for MPEG-1 video transmitted on either one- or two-tier transmission media are reported.
Abstract: This paper presents an adaptive error concealment technique for MPEG (Moving Picture Experts Group) compressed video. Error concealment algorithms are essential for many practical video transmission scenarios characterized by occasional data loss due to thermal noise, channel impairments, network congestion, etc. Such scenarios of current importance include terrestrial (simulcast) HDTV, teleconferencing via packet networks, and TV/HDTV over fiber-optic ATM (asynchronous transfer mode) systems. In view of the increasing importance of MPEG video for many of these applications, a number of error concealment approaches for MPEG have been developed, and are currently being evaluated in terms of their complexity vs. performance trade-offs. Here, we report the results of recent work on a specific adaptive algorithm that provides excellent robustness properties for MPEG-1 video transmitted on either one- or two-tier transmission media. Receiver error concealment is intended to ameliorate the impact of lost video data by exploiting available redundancy in the decoded picture. The concealment process must be supported by an appropriate transport format which helps to identify the image pixel regions that correspond to lost video data. Once the image regions (i.e., macroblocks, slices, etc.) to be concealed are identified, a combination of temporal and spatial replacement techniques may be applied to fill in the lost picture elements. The specific details of the concealment procedure depend upon the compression algorithm being used and on the level of algorithmic complexity permissible within the decoder. Simulation results obtained from a detailed end-to-end model that incorporates MPEG compression/decompression and a custom cell-relay (ATM type) transport format are reported briefly.
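A minimal sketch of the general temporal/spatial replacement idea (not the paper's specific adaptive algorithm; the macroblock size and the interpolation rule are assumptions):

```python
import numpy as np

def conceal(frame, lost_blocks, prev_frame=None, mb=16):
    out = frame.copy()
    for (by, bx) in lost_blocks:                       # macroblock coordinates
        ys, xs = slice(by * mb, (by + 1) * mb), slice(bx * mb, (bx + 1) * mb)
        if prev_frame is not None:
            out[ys, xs] = prev_frame[ys, xs]           # temporal replacement
        else:
            top = out[by * mb - 1, xs] if by > 0 else out[(by + 1) * mb, xs]
            bot = out[(by + 1) * mb, xs] if (by + 1) * mb < out.shape[0] else top
            w = np.linspace(0, 1, mb)[:, None]
            out[ys, xs] = (1 - w) * top + w * bot      # simple spatial interpolation
    return out

frame = (np.random.rand(64, 64) * 255).round()
prev = (np.random.rand(64, 64) * 255).round()
print(conceal(frame, [(1, 1), (2, 3)], prev_frame=prev).shape)
```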

Proceedings ArticleDOI
24 Mar 1992
TL;DR: The authors show that compression can be efficiently parallelized and a computational advantage is obtained when the dictionary has the prefix property and can be generalized to the sliding window method where the dictionary is a window that passes continuously over the input string.
Abstract: The authors study parallel algorithms for lossless data compression via textual substitution. Dynamic dictionary compression is known to be P-complete; however, if the dictionary is given in advance, they show that compression can be efficiently parallelized and a computational advantage is obtained when the dictionary has the prefix property. The approach can be generalized to the sliding window method, where the dictionary is a window that passes continuously from left to right over the input string.

Patent
Ku-man Park1
05 Oct 1992
TL;DR: In this paper, an image compression system using the setting of fixed bit rates for compressing an image wherein image blocks are sorted into classes in accordance with the activities of the blocks, the activities being obtained by dividing an original image by a predetermined unit.
Abstract: An image compression system using the setting of fixed bit rates for compressing an image, wherein image blocks are sorted into classes in accordance with the activities of the blocks, the activities being obtained by dividing an original image by a predetermined unit. Individual scale factors are given for each block, so that coding is carried out by fixing the amount of bits allocated to the blocks to be coded. The system is operated by a method which detects activities according to a visual characteristic for each predetermined block unit, classifies blocks into corresponding classes based on the detected activities, sets a quantization scale factor corresponding to a sorted class by the average activity of the activities, controls the quantization by determining a quantization step size according to a predetermined value of a quantization table and a quantization scale factor, determines whether the bit count of the quantized coefficients is appropriate with respect to the allocated bit number per block, and repeatedly adjusts the block bit number to be output to carry out entropy coding. Thus, the quality of an image can be stabilized.

Book ChapterDOI
01 Jan 1992