
Showing papers on "Image compression published in 1998"


Journal ArticleDOI
C.I. Podilchuk, Wenjun Zeng
TL;DR: This work proposes perceptually based watermarking schemes in two frameworks, the block-based discrete cosine transform and the multiresolution wavelet transform, and discusses the merits of each; both are shown to provide very good results in terms of image transparency and robustness.
Abstract: The huge success of the Internet allows for the transmission, wide distribution, and access of electronic data in an effortless manner. Content providers are faced with the challenge of how to protect their electronic data. This problem has generated a flurry of research activity in the area of digital watermarking of electronic content for copyright protection. The challenge here is to introduce a digital watermark that does not alter the perceived quality of the electronic content, while being extremely robust to attack. For instance, in the case of image data, editing the picture or illegal tampering should not destroy or transform the watermark into another valid signature. Equally important, the watermark should not alter the perceived visual quality of the image. From a signal processing perspective, the two basic requirements for an effective watermarking scheme, robustness and transparency, conflict with each other. We propose two watermarking techniques for digital images that are based on utilizing visual models which have been developed in the context of image compression. Specifically, we propose watermarking schemes where visual models are used to determine image dependent upper bounds on watermark insertion. This allows us to provide the maximum strength transparent watermark which, in turn, is extremely robust to common image processing and editing such as JPEG compression, rescaling, and cropping. We propose perceptually based watermarking schemes in two frameworks: the block-based discrete cosine transform and multiresolution wavelet framework and discuss the merits of each one. Our schemes are shown to provide very good results both in terms of image transparency and robustness.

962 citations
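The block-DCT branch of the scheme above can be pictured with a short sketch: a pseudo-random watermark is shaped by a per-frequency visibility bound before being added to each 8x8 DCT block. The JND table, the `strength` parameter, and the use of SciPy's DCT are illustrative assumptions, not the image-dependent visual model of the paper.

```python
# Sketch of block-DCT watermarking with perceptual upper bounds.
# Assumptions: the JND table below is a simplified stand-in for the
# image-dependent visual model in the paper; `image` is a float64
# grayscale array whose sides are multiples of 8.
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(image, seed=0, strength=1.0):
    rng = np.random.default_rng(seed)
    # Hypothetical base sensitivity per DCT frequency (coarser = more visible).
    jnd = strength * (1.0 + np.add.outer(np.arange(8), np.arange(8)))
    out = image.copy()
    for y in range(0, image.shape[0], 8):
        for x in range(0, image.shape[1], 8):
            block = dctn(image[y:y+8, x:x+8], norm="ortho")
            w = rng.standard_normal((8, 8))
            w[0, 0] = 0.0                     # leave the DC term untouched
            block += jnd * w                  # embed up to the perceptual bound
            out[y:y+8, x:x+8] = idctn(block, norm="ortho")
    return out
```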


Journal ArticleDOI
TL;DR: Extensive computations are presented that support the hypothesis that near-optimal shrinkage parameters can be derived if one knows (or can estimate) only two parameters about an image F: the largest α for which F ∈ B^α_q(L_q(I)), 1/q = α/2 + 1/2, and the norm |F|_{B^α_q(L_q(I))}.
Abstract: This paper examines the relationship between wavelet-based image processing algorithms and variational problems. Algorithms are derived as exact or approximate minimizers of variational problems; in particular, we show that wavelet shrinkage can be considered the exact minimizer of the following problem: given an image F defined on a square I, minimize over all g in the Besov space B^1_1(L_1(I)) the functional ‖F - g‖^2_{L_2(I)} + λ|g|_{B^1_1(L_1(I))}. We use the theory of nonlinear wavelet image compression in L_2(I) to derive accurate error bounds for noise removal through wavelet shrinkage applied to images corrupted with i.i.d., mean zero, Gaussian noise. A new signal-to-noise ratio (SNR), which we claim more accurately reflects the visual perception of noise in images, arises in this derivation. We present extensive computations that support the hypothesis that near-optimal shrinkage parameters can be derived if one knows (or can estimate) only two parameters about an image F: the largest α for which F ∈ B^α_q(L_q(I)), 1/q = α/2 + 1/2, and the norm |F|_{B^α_q(L_q(I))}. Both theoretical and experimental results indicate that our choice of shrinkage parameters yields uniformly better results than Donoho and Johnstone's VisuShrink procedure; an example suggests, however, that Donoho and Johnstone's (1994, 1995, 1996) SureShrink method, which uses a different shrinkage parameter for each dyadic level, achieves a lower error than our procedure.

810 citations
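For readers who want to experiment, the following is a minimal wavelet soft-shrinkage sketch using PyWavelets; the single global threshold `lam` and the median-based noise estimate are common heuristics standing in for the Besov-derived near-optimal parameters discussed above.

```python
# Minimal wavelet soft-shrinkage denoising; the global threshold `lam` stands
# in for the Besov-derived near-optimal parameter discussed in the paper.
import numpy as np
import pywt

def wavelet_shrink(noisy, wavelet="db4", level=4, lam=None):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    if lam is None:
        # Robust noise estimate from the finest diagonal subband (a common choice).
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
        lam = 3.0 * sigma                      # assumed heuristic, not the paper's rule
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(c, lam, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)
```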


Proceedings ArticleDOI
04 Oct 1998
TL;DR: Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than optimal uniform thresholding.
Abstract: The method of wavelet thresholding for removing noise, or denoising, has been researched extensively due to its effectiveness and simplicity. Much of the work has been concentrated on finding the best uniform threshold or best basis. However, not much has been done to make this method adaptive to spatially changing statistics which is typical of a large class of images. This work proposes a spatially adaptive wavelet thresholding method based on context modeling, a common technique used in image compression to adapt the coder to the non-stationarity of images. We model each coefficient as a random variable with the generalized Gaussian prior with unknown parameters. Context modeling is used to estimate the parameters for each coefficient, which are then used to adapt the thresholding strategy. Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than optimal uniform thresholding.

635 citations
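A simplified sketch of the idea: estimate each coefficient's signal variance from a local window (a crude stand-in for the paper's context model with a generalized Gaussian prior) and derive a per-coefficient threshold. The window size, the BayesShrink-style rule, and the use of PyWavelets/SciPy are assumptions made for illustration.

```python
# Spatially adaptive soft thresholding: per-coefficient thresholds from a
# local-variance "context"; a crude stand-in for the paper's GGD context model.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def adaptive_shrink_subband(band, sigma_noise, window=7):
    local_energy = uniform_filter(band * band, size=window)
    sigma_x = np.sqrt(np.maximum(local_energy - sigma_noise**2, 1e-12))
    thresh = sigma_noise**2 / sigma_x            # BayesShrink-style rule (assumed)
    return np.sign(band) * np.maximum(np.abs(band) - thresh, 0.0)

def denoise(noisy, wavelet="db8", level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    sigma_noise = np.median(np.abs(coeffs[-1][2])) / 0.6745
    out = [coeffs[0]] + [
        tuple(adaptive_shrink_subband(c, sigma_noise) for c in d) for d in coeffs[1:]
    ]
    return pywt.waverec2(out, wavelet)
```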


Journal ArticleDOI
TL;DR: Why harmonic analysis has interacted with data compression is explained, and some interesting recent ideas in the field that may affect data compression in the future are described.
Abstract: In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression have been inspired by this result. In this paper we also discuss connections perhaps less familiar to the information theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the "sampling theorem", harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future.

479 citations


Proceedings ArticleDOI
24 Jul 1998
TL;DR: The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms.
Abstract: In this paper we develop a computational model of adaptation and spatial vision for realistic tone reproduction. The model is based on a multiscale representation of pattern, luminance, and color processing in the human visual system. We incorporate the model into a tone reproduction operator that maps the vast ranges of radiances found in real and synthetic scenes into the small fixed ranges available on conventional display devices such as CRT’s and printers. The model allows the operator to address the two major problems in realistic tone reproduction: wide absolute range and high dynamic range scenes can be displayed; and the displayed images match our perceptions of the scenes at both threshold and suprathreshold levels to the degree possible given a particular display device. Although in this paper we apply our visual model to the tone reproduction problem, the model is general and can be usefully applied to image quality metrics, image compression methods, and perceptually-based image synthesis algorithms. CR Categories: I.3.0 [Computer Graphics]: General;

458 citations


Journal ArticleDOI
TL;DR: This article reviews perceptual image quality metrics and their application to still image compression, examining a broad range of metrics from simple mathematical measures to those that incorporate full perceptual models.

383 citations
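At the simplest end of the range of metrics such a review covers sits PSNR; a reference implementation is included below for orientation.

```python
# PSNR, the simplest of the "mathematical" quality measures surveyed in reviews
# of this kind; `peak` is the maximum pixel value (255 for 8-bit images).
import numpy as np

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```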


Book
01 Mar 1998
TL;DR: In this article, the authors present a practical, plain-English guide to image compression that reviews JPEG, MPEG-1, and MPEG-2 and offers an intriguing glimpse of other systems currently in development.
Abstract: From the Publisher: This practical, plain-English guide reviews JPEG, MPEG-1, and MPEG-2, today's most widely used image compression standards, and presents an intriguing glimpse of other systems currently in development. From the fundamentals of the sampled images that form the actual input to any compression system to the available compression tools and performance considerations, the material is clear, concise, and richly relevant. Each chapter covers the basics first and then goes into greater detail, making the book easily accessible to readers at all levels of familiarity with the topic. DVD, MPEG transport and switching, and audio compression schemes are also covered. Digital television, Internet video, DVD, and videoconferencing all require a solid practical and theoretical understanding of video compression options, both for storage and transmission; written by a video engineer for video engineers, this guide is recommended reading for any video, audio, or broadcast engineer interested in maintaining transmission/storage quality or in more reliably diagnosing compression-related problems.

319 citations


Journal ArticleDOI
TL;DR: A new image compression technique called DjVu is presented that enables fast transmission of document images over low-speed connections, while faithfully reproducing the visual aspect of the document, including color, fonts, pictures, and paper texture.

312 citations


Journal ArticleDOI
doi:10.1117/1.482648 (no abstract available in this record)

270 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed watermarking technique results in an almost invisible difference between the watermarked image and the original image, and is robust to common image processing operations and JPEG lossy compression.
Abstract: In this paper, a multiresolution-based technique for embedding digital "watermarks" into images is proposed. Watermarking has been proposed as a method of hiding secret information in images so as to discourage unauthorized copying or to attest the origin of the images. In our method, we take advantage of multiresolution signal decomposition. Both the watermark and the host image are decomposed into multiresolution representations with different structures, and the decomposed watermark at each resolution is embedded into the corresponding resolution of the decomposed image. In the case of image quality degradation, the low-resolution rendition of the watermark will still be preserved within the corresponding low-resolution components of the image. The experimental results show that the proposed watermarking technique results in an almost invisible difference between the watermarked image and the original image, and is robust to common image processing operations and JPEG lossy compression.

261 citations
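A minimal sketch of the resolution-matched embedding idea, assuming the watermark is an image of the same size as the host and using PyWavelets; the wavelet, the level count, and the strength `alpha` are illustrative choices, not the paper's.

```python
# Decompose both host and watermark, then add scaled watermark detail bands
# to the corresponding host bands. The coarse approximation is left untouched.
import pywt

def embed(host, watermark, wavelet="haar", level=3, alpha=0.05):
    hc = pywt.wavedec2(host, wavelet, level=level)
    wc = pywt.wavedec2(watermark, wavelet, level=level)
    mixed = [hc[0]] + [
        tuple(h + alpha * w for h, w in zip(hd, wd))
        for hd, wd in zip(hc[1:], wc[1:])
    ]
    return pywt.waverec2(mixed, wavelet)
```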


Proceedings ArticleDOI
28 Dec 1998
TL;DR: The image coding algorithm developed here, apart from being embedded and of low complexity, is very efficient and is comparable to the best known low-complexity image coding schemes available today.
Abstract: We propose an embedded hierarchical image coding algorithm of low complexity. It exploits two fundamental characteristics of an image transform: the well-defined hierarchical structure, and energy clustering in frequency and in space. The image coding algorithm developed here, apart from being embedded and of low complexity, is very efficient and is comparable to the best known low-complexity image coding schemes available today.

Proceedings ArticleDOI
04 Oct 1998
TL;DR: A scheme for authenticating the visual content of digital images is proposed; it is robust to compression noise but detects deliberate manipulation of the image data.
Abstract: It is straightforward to apply general schemes for authenticating digital data to the problem of authenticating digital images. However, such a scheme would not authenticate images that have undergone lossy compression, even though they may not have been manipulated otherwise. We propose a scheme for authenticating the visual content of digital images. This scheme is robust to compression noise, but will detect deliberate manipulation of the image data. The proposed scheme is based on the extraction of feature points from the image. These feature points are defined so as to be relatively unaffected by lossy compression. The set of feature points from a given image is encrypted using public key encryption to generate the digital signature of the image. Authenticity is verified by comparing the feature points of the image in question with those recovered from the previously computed digital signature.
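A hedged sketch of the pipeline: detect coarse feature points, quantize their positions so mild compression does not move them, and hash the result. The Harris-like detector, the quantization step, and SHA-256 are illustrative substitutes for the paper's exact feature definition, and the public-key signing step is only indicated in a comment.

```python
# Sketch of content authentication: extract coarse, compression-tolerant
# feature points, serialize them, and hash the result. Signing the digest
# with a public-key library is left as a comment; the corner detector and
# the quantization step are illustrative choices, not the paper's features.
import hashlib
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def feature_digest(image, n_points=64, quant=8):
    img = gaussian_filter(image.astype(np.float64), 2.0)
    gy, gx = np.gradient(img)
    corner = gaussian_filter(gx * gx, 2.0) * gaussian_filter(gy * gy, 2.0) \
             - gaussian_filter(gx * gy, 2.0) ** 2          # Harris-like response
    peaks = (corner == maximum_filter(corner, size=9))
    ys, xs = np.nonzero(peaks)
    order = np.argsort(corner[ys, xs])[::-1][:n_points]
    # Quantize coordinates so mild compression does not move them.
    pts = sorted(zip((ys[order] // quant).tolist(), (xs[order] // quant).tolist()))
    digest = hashlib.sha256(repr(pts).encode()).digest()
    # In the full scheme this digest would be signed with the owner's private key.
    return digest
```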

Journal ArticleDOI
TL;DR: It is shown that this multiresolution watermarking method is more robust than previously proposed methods to some common image distortions, such as wavelet-transform-based image compression, image rescaling/stretching, and image halftoning.
Abstract: In this paper, we introduce a new multiresolution watermarking method for digital images. The method is based on the discrete wavelet transform (DWT). Pseudo-random codes are added to the large coefficients at the high and middle frequency bands of the DWT of an image. It is shown that this method is more robust than previously proposed methods to some common image distortions, such as wavelet-transform-based image compression, image rescaling/stretching, and image halftoning. Moreover, the method is hierarchical.
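The embedding step can be sketched as follows, assuming PyWavelets and illustrative values for the strength `alpha` and the significance threshold `T`; detection (correlating the marked coefficients with the key-seeded sequence) is omitted.

```python
# Sketch: add a pseudo-random sequence to the large detail coefficients of a
# DWT; `alpha` and the significance threshold `T` are assumed parameters.
import numpy as np
import pywt

def embed(image, key=1234, wavelet="db2", level=2, alpha=0.1, T=30.0):
    rng = np.random.default_rng(key)
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    marked = [coeffs[0]]                        # leave the coarse approximation alone
    for detail in coeffs[1:]:                   # middle and high frequency bands
        new_bands = []
        for band in detail:
            pn = rng.standard_normal(band.shape)
            large = np.abs(band) > T
            new_bands.append(band + alpha * np.abs(band) * pn * large)
        marked.append(tuple(new_bands))
    return pywt.waverec2(marked, wavelet)
```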

Journal ArticleDOI
TL;DR: A review and analysis of recent developments in postprocessing techniques is presented, addressing various types of compression artifacts, two classes of postprocessing algorithms based on image enhancement and restoration principles, and current bottlenecks.

Proceedings ArticleDOI
28 Dec 1998
TL;DR: This paper first models unreliable channels, such as lossy packet networks, as erasure channels and then presents an MDC system that uses a polyphase transform and selective quantization to recover from channel erasures, achieving robust communication over such channels.
Abstract: In this paper, we present an efficient Multiple Description Coding (MDC) technique to achieve robust communication over unreliable channels such as a lossy packet network. We first model such unreliable channels as erasure channels, and then we present an MDC system using a polyphase transform and selective quantization to recover from channel erasures. Different from previous MDC work, our system explicitly separates description generation and redundancy addition, which greatly reduces the implementation complexity, especially for systems with more than two descriptions. Our system also realizes a Balanced Multiple Description Coding (BMDC) framework which can generate descriptions of statistically equal rate and importance. This property is well matched to communication systems with no priority mechanisms for data delivery, such as today's Internet. We then study, for a given total coding rate, the problem of optimal bit allocation between source coding and redundancy coding to achieve the minimum average distortion for different channel failure rates. Under a high-resolution quantization assumption, we give optimal redundancy bit rate allocations for both scalar i.i.d. sources and vector i.i.d. sources under independent channel failures. To evaluate the performance of our system, we provide an image coding application with two descriptions, and our simulation results are better than the best MDC image coding results reported to date. We also provide image coding examples with 16 descriptions to illustrate the simplicity and effectiveness of our proposed MDC system.
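A toy two-description version of the polyphase idea is sketched below, with fixed quantizer step sizes standing in for the paper's optimized redundancy allocation.

```python
# Two-description sketch of the polyphase idea: each description carries one
# polyphase component at full precision plus a coarsely quantized copy of the
# other as redundancy; the step sizes are assumed, not optimized as in the paper.
import numpy as np

def make_descriptions(x, fine=1.0, coarse=16.0):
    even, odd = x[0::2], x[1::2]
    q = lambda v, step: np.round(v / step) * step
    d0 = (q(even, fine), q(odd, coarse))        # description 0
    d1 = (q(odd, fine), q(even, coarse))        # description 1
    return d0, d1

def reconstruct(d0=None, d1=None, n=None):
    out = np.zeros(n)
    if d0 is not None and d1 is not None:       # both arrive: use the fine copies
        out[0::2], out[1::2] = d0[0], d1[0]
    elif d0 is not None:                        # description 1 erased
        out[0::2], out[1::2] = d0[0], d0[1]
    elif d1 is not None:                        # description 0 erased
        out[1::2], out[0::2] = d1[0], d1[1]
    return out
```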

Proceedings ArticleDOI
28 Dec 1998
TL;DR: The rationale for the Mixed Raster Content approach is developed by describing the multi-layered imaging model in light of a rate-distortion trade-off, and results are presented comparing images compressed using MRC, JPEG, and state-of-the-art wavelet algorithms such as SPIHT.
Abstract: This paper will describe the Mixed Raster Content (MRC) method for compressing compound images, containing both binary text and continuous-tone images. A single compression algorithm that simultaneously meets the requirements for both text and image compression has been elusive. MRC takes a different approach. Rather than using a single algorithm, MRC uses a multi-layered imaging model for representing the results of multiple compression algorithms, including ones developed specifically for text and for images. As a result, MRC can combine the best of existing or new compression algorithms and offer different quality-compression ratio tradeoffs. The algorithms used by MRC set the lower bound on its compression performance. Compared to existing algorithms, MRC has some image-processing overhead to manage multiple algorithms and the imaging model. This paper will develop the rationale for the MRC approach by describing the multilayered imaging model in light of a rate-distortion trade-off. Results will be presented comparing images compressed using MRC, JPEG and state-of-the-art wavelet algorithms such as SPIHT. MRC has been approved or proposed as an architectural model for several standards, including ITU Color Fax, IETF Internet Fax, and JPEG 2000.
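A toy illustration of the layered model for a grayscale page, with a simple intensity threshold standing in for a real text/image segmenter; the actual per-layer codecs (a binary coder for the mask, JPEG or wavelets for the continuous-tone layers) are only noted in comments.

```python
# Toy MRC-style decomposition of a grayscale compound page into a binary text
# mask plus smooth foreground/background layers. Real MRC would compress the
# mask with a binary coder (e.g. JBIG) and the other layers with JPEG/wavelets;
# here the split itself is the point, and the threshold is an assumption.
import numpy as np
from scipy.ndimage import median_filter

def mrc_layers(page, text_threshold=96):
    mask = page < text_threshold                      # dark marks treated as text
    background = page.astype(np.float64).copy()
    background[mask] = np.nan                         # punch holes at text pixels
    background = np.where(mask, np.nanmean(background), background)
    background = median_filter(background, size=5)    # keep the layer smooth
    foreground = np.full_like(background, page[mask].mean() if mask.any() else 0.0)
    return mask, foreground, background               # each compressed separately
```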

Journal ArticleDOI
TL;DR: To reduce the effect of noise on compression, the distortion is measured with respect to the original image rather than the input of the coder, and results from noisy source coding are used to design the optimal coder.
Abstract: Noise degrades the performance of any image compression algorithm. This paper studies the effect of noise on lossy image compression. The effect of Gaussian, Poisson, and film-grain noise on compression is studied. To reduce the effect of the noise on compression, the distortion is measured with respect to the original image, not the input of the coder. Results of noisy source coding are then used to design the optimal coder. In the minimum-mean-square-error (MMSE) sense, this is equivalent to an MMSE estimator followed by an MMSE coder. The coders for the Poisson noise and film-grain noise cases are derived and their performance is studied. The effect of this preprocessing step is also studied using standard coders such as JPEG. As is demonstrated, higher quality is achieved at lower bit rates.
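The estimator-followed-by-coder structure can be sketched as below for the Gaussian case, using SciPy's Wiener filter as an approximate MMSE estimator; the `coder` callable is a placeholder for any standard codec such as a JPEG round trip.

```python
# Sketch of the "estimate, then code" structure for Gaussian noise: a Wiener
# (MMSE-style) filter as the estimator, followed by any standard coder, with
# distortion measured against the clean original rather than the coder input.
import numpy as np
from scipy.signal import wiener

def denoise_then_code(noisy, noise_var, coder):
    estimate = wiener(noisy, mysize=5, noise=noise_var)   # approximate MMSE estimate
    return coder(estimate)                                # e.g. a JPEG round trip

def distortion_vs_original(original, decoded):
    # Distortion is measured against the clean original, not the coder input.
    return np.mean((original - decoded) ** 2)
```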

Patent
28 Sep 1998
TL;DR: In this paper, a method for encoding an original image and decoding the encoded image to generate a representation of the original image is also disclosed, where the comparator and the decoder units determine the quantized colors for each encoded image block and map each pixel to one of the derived quantized colors.
Abstract: An image processing system (205) includes an image encoder system (220) and an image decoder system (230) that are coupled together. The image encoder system (220) includes an image decomposer (315) and a block encoder (318) that are coupled together. The block encoder (318) includes a color quantizer (335) and a bitmap construction module (340). The image decomposer (315) breaks an original image into blocks. Each block (260) is then processed by the block encoder (318a-nth). Specifically, the color quantizer (335) selects some number of base points, or codewords, that serve as reference pixel values, such as colors, from which quantized pixel values are derived. The bitmap construction module (340) then maps each pixel color to one of the derived quantized colors. The codewords and bitmap are output as encoded image blocks (320). The decoder system (230) includes a block decoder (505a-mth). The block decoder (505a-mth) includes a block type detector (520), one or more decoder units, and an output selector (523). Using the codewords of the encoded data blocks, the comparator and the decoder units determine the quantized colors for each encoded image block and map each pixel to one of the quantized colors. The output selector (523) outputs the appropriate color, which is ordered in an image composer with the other decoded blocks to output an image representative of the original image. A method for encoding an original image and for decoding the encoded image to generate a representation of the original image is also disclosed.
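The codeword-plus-bitmap idea in the claims resembles fixed-rate block truncation coding; a sketch with two endpoint colors and a 2-bit index per pixel is shown below, where the endpoint selection and four-color palette are illustrative choices rather than the patent's exact procedure.

```python
# Sketch of the block-codeword idea: for each 4x4 block keep two endpoint
# colors plus a 2-bit index per pixel into four colors interpolated between
# them (the specific endpoint/palette choice here is illustrative).
import numpy as np

def encode_block(block):                    # block: (4, 4, 3) RGB, float
    pixels = block.reshape(-1, 3)
    lum = pixels @ np.array([0.299, 0.587, 0.114])
    c0, c1 = pixels[lum.argmin()], pixels[lum.argmax()]   # codewords
    palette = np.array([c0, (2*c0 + c1) / 3, (c0 + 2*c1) / 3, c1])
    dists = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1).astype(np.uint8)       # the 2-bit bitmap
    return c0, c1, indices

def decode_block(c0, c1, indices):
    palette = np.array([c0, (2*c0 + c1) / 3, (c0 + 2*c1) / 3, c1])
    return palette[indices].reshape(4, 4, 3)
```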

Proceedings ArticleDOI
04 Oct 1998
TL;DR: Two digital watermarking methods for image signals based on the wavelet transform are proposed, one of which gives a watermarked image of better quality and is robust against JPEG compression.
Abstract: We propose two digital watermarking methods for image signals based on the wavelet transform. We classify wavelet coefficients as insignificant or significant by using the zerotree defined in the embedded zerotree wavelet (EZW) algorithm. In the first method, information data are embedded as a watermark in the locations of insignificant coefficients. In the second method, information data are embedded by thresholding and modifying significant coefficients at the coarser scales, in perceptually important spectral components of the image signal. Information data are detected by using the positions of the zerotree roots and the threshold value after wavelet decomposition of the image in which the data are hidden. Experimental results show that the proposed method gives a watermarked image of better quality and is robust against JPEG compression.

Journal ArticleDOI
TL;DR: This paper presents a compression scheme for digital still images that uses Kohonen's neural network algorithm, not only for its vector quantization feature, but also for its topological property, which allows an increase of about 80% in the compression rate.
Abstract: Presents a compression scheme for digital still images, using Kohonen's neural network algorithm not only for its vector quantization feature, but also for its topological property. This property allows an increase of about 80% in the compression rate. Compared to the JPEG standard, this compression scheme shows better performance (in terms of PSNR) for compression rates higher than 30.
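A compact illustration of the approach: train a small one-dimensional Kohonen map on image blocks and quantize each block to its best-matching unit. Because neighboring units hold similar blocks, the index stream is smooth and compresses further, which is the topological property the paper exploits; the map size and training schedule below are assumed values.

```python
# Tiny 1-D Kohonen map trained on flattened image blocks (e.g. 4x4 -> 16-dim).
import numpy as np

def train_som(blocks, n_units=64, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    codebook = rng.standard_normal((n_units, blocks.shape[1])) * blocks.std() + blocks.mean()
    for t in range(iters):
        x = blocks[rng.integers(len(blocks))]
        winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
        lr = 0.5 * (1.0 - t / iters)                       # decaying learning rate
        radius = max(1.0, n_units / 4 * (1.0 - t / iters)) # shrinking neighborhood
        influence = np.exp(-((np.arange(n_units) - winner) ** 2) / (2 * radius**2))
        codebook += lr * influence[:, None] * (x - codebook)
    return codebook

def quantize(blocks, codebook):
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)      # indices; delta-code them to exploit the topology
```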

Proceedings ArticleDOI
04 Jan 1998
TL;DR: An effective stochastic gradient descent algorithm is introduced that automatically matches a model to a novel image by finding the parameters that minimize the error between the image generated by the model and the novel image.
Abstract: We describe a flexible model for representing images of objects of a certain class, known a priori, such as faces, and introduce a new algorithm for matching it to a novel image and thereby performing image analysis. We call this model a multidimensional morphable model, or just a morphable model. The morphable model is learned from example images (called prototypes) of objects of a class. In this paper we introduce an effective stochastic gradient descent algorithm that automatically matches a model to a novel image by finding the parameters that minimize the error between the image generated by the model and the novel image. Two examples demonstrate the robustness and the broad range of applicability of the matching algorithm and the underlying morphable model. Our approach can provide novel solutions to several vision tasks, including the computation of image correspondence, object verification, image synthesis, and image compression.

Proceedings ArticleDOI
17 Jul 1998
TL;DR: A new video quality metric is described that extends these still-image metrics into the time domain; it is based on the Discrete Cosine Transform and is designed to require little memory and computation so that it might be applied in the widest range of applications.
Abstract: The advent of widespread distribution of digital video creates a need for automated methods for evaluating the visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics, and the economic need to reduce bit-rate to the lowest level that yields acceptable quality. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. Here I describe a new video quality metric that is an extension of these still image metrics into the time domain. Like the still image metrics, it is based on the Discrete Cosine Transform. An effort has been made to minimize the amount of memory and computation required by the metric, in order that it might be applied in the widest range of applications. To calibrate the basic sensitivity of this metric to spatial and temporal signals we have made measurements of visual thresholds for temporally varying samples of DCT quantization noise.
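A stripped-down, still-image version of such a DCT-domain metric is sketched below; the JPEG luminance quantization table is used as a stand-in for the calibrated spatiotemporal thresholds the paper measures, and temporal filtering is omitted.

```python
# Sketch of a DCT-domain quality metric: transform 8x8 blocks of the error,
# divide by a frequency-visibility table (the JPEG luminance quantization
# matrix is used here as a stand-in for calibrated thresholds), and pool.
# Inputs are float grayscale arrays of equal size.
import numpy as np
from scipy.fft import dctn

JPEG_Q = np.array([
    [16,11,10,16,24,40,51,61],[12,12,14,19,26,58,60,55],
    [14,13,16,24,40,57,69,56],[14,17,22,29,51,87,80,62],
    [18,22,37,56,68,109,103,77],[24,35,55,64,81,104,113,92],
    [49,64,78,87,103,121,120,101],[72,92,95,98,112,100,103,99]], float)

def dct_metric(reference, test):
    scores = []
    for y in range(0, reference.shape[0] - 7, 8):
        for x in range(0, reference.shape[1] - 7, 8):
            err = reference[y:y+8, x:x+8] - test[y:y+8, x:x+8]
            visible = dctn(err, norm="ortho") / JPEG_Q     # threshold-normalized error
            scores.append(np.sqrt((visible ** 2).mean()))
    return float(np.mean(scores))        # lower means fewer visible differences
```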

Journal ArticleDOI
TL;DR: An optimized spatial-domain implementation of the Gabor transform, using one-dimensional 11-tap filter masks, that is faster and more flexible than Fourier implementations, and two ways to incorporate a high-pass residual, which permits a visually complete representation of the image.
Abstract: Gabor schemes of multiscale image representation are useful in many computer vision applications. However, the classic Gabor expansion is computationally expensive due to the lack of orthogonality of Gabor functions. Some alternative schemes, based on the application of a bank of Gabor filters, have important advantages such as computational efficiency and robustness, at the cost of redundancy and lack of completeness. In a previous work we proposed a quasicomplete Gabor transform, suitable for fast implementations in either space or frequency domains. Reconstruction was achieved by simply adding together the even Gabor channels. We develop an optimized spatial-domain implementation, using one-dimensional 11-tap filter masks, that is faster and more flexible than Fourier implementations. The reconstruction method is improved by applying fixed and independent weights to the Gabor channels before adding them together. Finally, we analyze and implement, in the spatial domain, two ways to incorporate a high-pass residual, which permits a visually complete representation of the image. © 1998 SPIE and IS&T.
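A rough sketch of the separable spatial-domain idea follows, with only a few even channels and a plain sum for reconstruction; the tap values, frequencies, and the omission of the per-channel weights make this an approximation of the scheme, not a reproduction of it.

```python
# Sketch of separable spatial-domain Gabor channels built from 11-tap 1-D
# masks. Only a few even (cosine-phase) channels plus a low-pass residual are
# shown, and reconstruction is the plain (unweighted) sum of channels, so it is
# only approximate; the paper adds per-channel weights for a quasicomplete set.
import numpy as np
from scipy.ndimage import convolve1d

def gabor_mask(freq, sigma=2.0, taps=11):
    x = np.arange(taps) - taps // 2
    g = np.exp(-(x**2) / (2 * sigma**2))
    return g * np.cos(2 * np.pi * freq * x), g / g.sum()   # band-pass, low-pass

def gabor_channels(image, freqs=(0.25, 0.125)):
    channels = []
    for f in freqs:
        bp, lp = gabor_mask(f)
        horiz = convolve1d(convolve1d(image, bp, axis=1), lp, axis=0)
        vert = convolve1d(convolve1d(image, bp, axis=0), lp, axis=1)
        channels += [horiz, vert]
    _, lp = gabor_mask(0.0)
    residual = convolve1d(convolve1d(image, lp, axis=0), lp, axis=1)  # low-pass residual
    return channels, residual

def reconstruct(channels, residual):
    return residual + sum(channels)      # approximate reconstruction by summation
```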

Book
01 Jan 1998
TL;DR: This thesis provides new representations of audio signals that allow for both very low bit rate audio data compression and high quality compressed-domain processing and modifications.
Abstract: In the world of digital audio processing, one usually has the choice of performing modifications on the raw audio signal or performing data compression on the audio signal. But performing modifications on a data-compressed audio signal has proved difficult in the past. This thesis provides new representations of audio signals that allow for both very low bit rate audio data compression and high quality compressed-domain processing and modifications. In this system, two compressed-domain processing algorithms are available: timescale and pitch-scale modifications. Timescale modifications alter the playback speed of audio without changing the pitch. Similarly, pitch-scale modifications alter the pitch of the audio without changing the playback speed. The algorithms presented in this thesis segment the input audio signal into separate sinusoidal, transient, and noise signals. During attack-transient regions of the audio signal, the audio is modeled by transform coding techniques. During the remaining non-transient regions, the audio is modeled by a mixture of multiresolution sinusoidal modeling and noise modeling. Careful phase matching techniques at the time boundaries between the sines and transients allow for seamless transitions between the two representations. By separating the audio into three individual representations, each can be efficiently and perceptually quantized. In addition, by segmenting the audio into transient and non-transient regions, high quality timescale modifications that stretch only the non-transient portions are possible.

Journal ArticleDOI
TL;DR: This work introduces a highly scalable video compression system for very low bit-rate videoconferencing and telephony applications around 10-30 kbits/s and incorporates a high degree of video scalability into the codec by combining the layered/progressive coding strategy with the concept of embedded resolution block coding.
Abstract: We introduce a highly scalable video compression system for very low bit-rate videoconferencing and telephony applications around 10-30 kbits/s. The video codec first performs a motion-compensated three-dimensional (3-D) wavelet (packet) decomposition of a group of video frames, and then encodes the important wavelet coefficients using a new data structure called tri-zerotrees (TRI-ZTR). Together, the proposed video coding framework forms an extension of the original zero tree idea of Shapiro (1992) for still image compression. In addition, we also incorporate a high degree of video scalability into the codec by combining the layered/progressive coding strategy with the concept of embedded resolution block coding. With scalable algorithms, only one original compressed video bit stream is generated. Different subsets of the bit stream can then be selected at the decoder to support a multitude of display specifications such as bit rate, quality level, spatial resolution, frame rate, decoding hardware complexity, and end-to-end coding delay. The proposed video codec also allows precise bit rate control at both the encoder and decoder, and this can be achieved independently of the other video scaling parameters. Such a scheme is very useful for both constant and variable bit rate transmission over mobile communication channels, as well as video distribution over heterogeneous multicast networks. Finally, our simulations demonstrated comparable objective and subjective performance when compared to the ITU-T H.263 video coding standard, while providing both multirate and multiresolution video scalability.

Journal ArticleDOI
TL;DR: A spatial subband image-compression method well suited to the local nature of the CNNUM is presented; it performs especially well with radiographical images (mammograms) and is suggested for use as part of a cellular neural/nonlinear (CNN)-based mammogram-analysis system.
Abstract: This paper demonstrates how the cellular neural-network universal machine (CNNUM) architecture can be applied to image compression. We present a spatial subband image-compression method well suited to the local nature of the CNNUM. In the case of lossless image compression, it outperforms the JPEG image-compression standard both in terms of compression efficiency and speed. It performs especially well with radiographical images (mammograms); it is therefore suggested for use as part of a cellular neural/nonlinear (CNN)-based mammogram-analysis system. This paper also gives a CNN-based method for the fast implementation of the moving pictures experts group (MPEG) and joint photographic experts group (JPEG) moving and still image-compression standards.

Journal ArticleDOI
TL;DR: A combined wavelet zerotree coding and packetization method that provides excellent image compression and graceful degradation against packet erasure and, for example, compresses the 512×512 gray-scale Lena image to 0.2 b/pixel.
Abstract: We describe a combined wavelet zerotree coding and packetization method that provides excellent image compression and graceful degradation against packet erasure. For example, using 53-byte packets (48-byte payload), the algorithm compresses the 512×512 gray-scale Lena image to 0.2 b/pixel with a peak signal-to-noise ratio (PSNR) of 32.2 dB with no packet erasure, and 26.3 dB on average for 10% of packets erased.

Patent
25 Mar 1998
TL;DR: In this article, a random walk in Pascal's hypervolume (a multi-dimensional generalization of Pascal's triangle) is proposed for multilevel digital source compression in both lossless and lossy modes.
Abstract: A method and apparatus for data, image, video, acoustic, multimedia and general multilevel digital source compression in both lossless and lossy modes is described. The method is universal (no knowledge of source statistics required) and asymptotically optimal in terms of Shannon's noiseless coding theorem. The method utilizes a random walk in Pascal's hypervolume (a multi-dimensional generalization of Pascal's triangle) starting at the apex and proceeding downward, which is directed by the incoming source sequence according to an algorithm, until it terminates at a boundary which has been constructed in such a way that the encoding of each variable length source sequence can be accomplished in a fixed number of bits. Codewords and decoded source sequences can either be computed at the encoder and decoder, respectively, or precomputed and stored at those respective locations. A preprocessing module is used to set up the data for lossless data or image compression. Another preprocessing module is used for lossy compression, and video compression can vary seamlessly between lossless and lossy modes depending on the requirements of the transmission rate.
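The core idea, stripped to a binary, fixed-length special case, is classical enumerative coding: the source sequence steers a walk through Pascal's triangle and the accumulated binomial coefficients form the codeword. The sketch below illustrates that principle only; the patent's multilevel alphabet, boundary construction, and lossy modes are not modeled.

```python
# A binary instance of the "walk in Pascal's triangle" idea (enumerative
# coding): a length-n word with k ones is mapped to its lexicographic index
# among the C(n, k) such words, which needs only about log2(C(n, k)) bits
# plus the weight k.
from math import comb

def encode(bits):
    n, k, index = len(bits), sum(bits), 0
    ones_left = k
    for i, b in enumerate(bits):
        if b:                                   # count the words with 0 at this step
            index += comb(n - i - 1, ones_left)
            ones_left -= 1
    return n, k, index

def decode(n, k, index):
    bits, ones_left = [], k
    for i in range(n):
        skip = comb(n - i - 1, ones_left)
        if ones_left > 0 and index >= skip:
            bits.append(1); index -= skip; ones_left -= 1
        else:
            bits.append(0)
    return bits

# Example: decode(*encode([0, 1, 1, 0])) == [0, 1, 1, 0]
```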

Journal ArticleDOI
TL;DR: In this article, a new method for fractal image compression is proposed using a genetic algorithm (GA) with an elitist model, which greatly decreases the search space for finding the self-similarities in the given image.
Abstract: A new method for fractal image compression is proposed using a genetic algorithm (GA) with an elitist model. The self-transformation property of images is assumed and exploited in the fractal image compression technique. The technique described utilizes the GA, which greatly decreases the search space for finding the self-similarities in the given image. This article presents theory, implementation, and an analytical study of the proposed method, along with a simple classification scheme. A comparison with other fractal-based image compression methods is also reported.

Patent
29 May 1998
TL;DR: In this article, an authentication and security system for medical image management systems is described, which includes an authentication server for maintaining and storing hashes and timestamps, and for providing hash, timestamp pairs in encrypted form in response to requests from display stations including an identifier.
Abstract: A medical image management system of the type including an image archive server for receiving image datasets from image acquisition computers closely associated with medical imaging devices and maintaining a central store for the image datasets, and a plurality of remote display stations for displaying images from requested image datasets which are retrieved by the image archive server from the image data store is provided with an authentication and security system which includes an authentication server for maintaining and storing hashes and timestamps, and for providing hash, timestamp pairs in encrypted form in response to requests from display stations including an identifier. The image acquisition computers are configured for pre-processing the image datasets received from these devices, including performing any required image compression, encrypting at least a portion of the image datasets, computing hashes and providing them and identifiers to the authentication server, receiving timestamps from the authentication server which are then inserted in the pre-processed image datasets, and sending the pre-processed image datasets to the image archive server for storage in the image data store. The display stations are configured for decrypting and performing any required data decompression on the pre-processed image datasets sent them by the image archive server, computing hashes from the image datasets, requesting and decrypting hash/timestamp pairs received from the authentication server, and comparing the hashes, and optionally the timestamps, obtained from the authentication server with those computed or extracted from the image datasets received from the image archive server.