
Showing papers on "JPEG published in 1996"


Proceedings ArticleDOI
16 Sep 1996
TL;DR: A watermarking scheme to hide copyright information in an image by filtering a pseudo-noise sequence with a filter that approximates the frequency masking characteristics of the visual system to guarantee that the embedded watermark is invisible and to maximize the robustness of the hidden data.
Abstract: We propose a watermarking scheme to hide copyright information in an image. The scheme employs visual masking to guarantee that the embedded watermark is invisible and to maximize the robustness of the hidden data. The watermark is constructed for arbitrary image blocks by filtering a pseudo-noise sequence (author id) with a filter that approximates the frequency masking characteristics of the visual system. The noise-like watermark is statistically invisible to deter unauthorized removal. Experimental results show that the watermark is robust to several distortions including white and colored noises, JPEG coding at different qualities, and cropping.

416 citations
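A minimal sketch of the block-wise embedding idea in the abstract above, assuming an orthonormal 8×8 DCT and using each block's own coefficient magnitudes as a crude stand-in for the paper's frequency-masking filter (which is not reproduced here); the function names, seed handling, and the strength parameter alpha are illustrative only.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(B): return idct(idct(B, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_block(block, author_id, alpha=0.1):
    """Add a noise-like watermark to one 8x8 block, shaped so that more energy
    goes into frequencies where the block itself is already active."""
    rng = np.random.default_rng(author_id)            # pseudo-noise keyed by author id
    pn = rng.standard_normal((8, 8))                  # white pseudo-noise sequence
    B = dct2(block.astype(float))
    shaping = np.abs(B) / (np.abs(B).max() + 1e-9)    # crude frequency-masking proxy
    return idct2(B + alpha * shaping * pn)            # watermarked pixel block

marked = embed_block(np.full((8, 8), 128.0), author_id=1234)
```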


Journal ArticleDOI
TL;DR: This work points out that the wavelet transform is just one member in a family of linear transformations, and the discrete cosine transform (DCT) can be coupled with an embedded zerotree quantizer, and presents an image coder that outperforms any other DCT-based coder published in the literature.
Abstract: Since Shapiro (see ibid., vol.41, no.12, p. 3445, 1993) published his work on embedded zerotree wavelet (EZW) image coding, there has been increased research activity in image coding centered around wavelets. We first point out that the wavelet transform is just one member in a family of linear transformations, and the discrete cosine transform (DCT) can also be coupled with an embedded zerotree quantizer. We then present such an image coder that outperforms any other DCT-based coder published in the literature, including that of the Joint Photographic Experts Group (JPEG). Moreover, our DCT-based embedded image coder gives higher peak signal-to-noise ratios (PSNR) than the quoted results of Shapiro's EZW coder.

225 citations
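One way to couple a zerotree quantizer with the block DCT, shown here as an illustration of the general idea rather than as the letter's exact construction, is to regroup block-DCT coefficients into a subband-style layout: coefficient (u, v) from every 8×8 block is gathered into its own (H/8)×(W/8) subband, on which parent-child trees can then be defined as in EZW. The helper below assumes the per-block DCT has already been computed in place.

```python
import numpy as np

def blocks_to_subbands(coeffs, bs=8):
    """Rearrange block-DCT coefficients (an H x W array holding the DCT of each
    8x8 block) into 64 subbands: subband (u, v) collects coefficient (u, v)
    of every block."""
    H, W = coeffs.shape
    by, bx = H // bs, W // bs
    out = np.empty_like(coeffs)
    for u in range(bs):
        for v in range(bs):
            # one (by x bx) subband per DCT frequency
            out[u * by:(u + 1) * by, v * bx:(v + 1) * bx] = coeffs[u::bs, v::bs]
    return out
```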


Proceedings ArticleDOI
01 Sep 1996
TL;DR: Two schemes for hiding data in images are introduced that exploit perceptual masking properties to embed the data in an invisible manner and increase the robustness of the hidden information.
Abstract: Data hiding is the process of encoding extra information in an image by making small modifications to its pixels. To be practical, the hidden data must be perceptually invisible yet robust to common signal processing operations. This paper introduces two schemes for hiding data in images. The techniques exploit perceptual masking properties to embed the data in an invisible manner. The first method employs spatial masking and data spreading to hide information by modifying the image coefficients. The second method uses frequency masking to modify the image spectral components. By using perceptual masking, we also increase the robustness of the hidden information. Experimental results of data recovery after applying noise and JPEG coding to the hidden data are included.

190 citations
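A toy illustration of the first (spatial spreading) scheme under simplifying assumptions: one bit is spread over a block with a keyed pseudo-noise pattern and recovered by correlation, which is what lets the payload survive moderate noise. The perceptual-masking scaling described in the abstract is omitted, and blind detection works here only because the host block is flat; names and the strength value are illustrative.

```python
import numpy as np

def embed_bit(block, bit, key, strength=2.0):
    """Spread one bit over a pixel block with a keyed +/-1 pseudo-noise pattern."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=block.shape)
    return block + strength * (1.0 if bit else -1.0) * pn, pn

def detect_bit(received, pn):
    # Blind correlation detector; subtracting the mean removes the (flat) host.
    r = received - received.mean()
    return float(np.sum(r * pn)) > 0.0

block = np.full((16, 16), 128.0)                      # flat host block
marked, pn = embed_bit(block, bit=1, key=42)
noisy = marked + np.random.default_rng(0).normal(0.0, 5.0, marked.shape)
print(detect_bit(noisy, pn))                          # True with high probability
```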


Journal ArticleDOI
TL;DR: This work addresses the problem of retrieving images from a large database using an image as a query, specifically aimed at databases that store images in JPEG format, and works in the compressed domain to create index keys.
Abstract: We address the problem of retrieving images from a large database using an image as a query. The method is specifically aimed at databases that store images in JPEG format, and works in the compressed domain to create index keys. A key is generated for each image in the database and is matched with the key generated for the query image. The keys are independent of the size of the image. Images that have similar keys are assumed to be similar, but there is no semantic meaning to the similarity.

152 citations
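The abstract does not spell out how the keys are built, so the sketch below is an illustration of the general compressed-domain approach rather than the paper's method: derive a fixed-length key from block DC coefficients, which are cheap to obtain from a JPEG stream, and compare keys with a simple distance.

```python
import numpy as np

def index_key(dc_coeffs, bins=32):
    """Build a size-independent key from an image's block DC coefficients:
    a normalized coarse-luminance histogram, so images of any size compare."""
    hist, _ = np.histogram(dc_coeffs, bins=bins, range=(-1024, 1024))
    return hist / max(hist.sum(), 1)

def key_distance(k1, k2):
    return float(np.abs(k1 - k2).sum())       # L1 distance between keys

def retrieve(query_key, db_keys, top=5):
    """Rank database images (name -> key) by key distance to the query key."""
    order = sorted(db_keys, key=lambda name: key_distance(query_key, db_keys[name]))
    return order[:top]
```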


Proceedings ArticleDOI
16 Sep 1996
TL;DR: An image authentication technique that embeds a signature in each image to discourage unauthorized copying is proposed; the embedded signature survives several kinds of image processing as well as JPEG lossy compression.
Abstract: An image authentication technique that embeds a signature in each image so as to discourage unauthorized copying is proposed. The proposed technique survives several kinds of image processing as well as JPEG lossy compression.

143 citations


Patent
19 Sep 1996
TL;DR: In this paper, a watermark is embedded into video/image/multimedia data using spread spectrum methodology, which is extracted from watermarked data without the use of an original or unwatermarked version of the data by using MPEG/JPEG coefficients.
Abstract: A watermark is embedded into video/image/multimedia data using spread spectrum methodology. The watermark is extracted from watermarked data without the use of an original or unwatermarked version of the data by using MPEG/JPEG coefficients. The image to be watermarked is divided into subimages. Each subimage is embedded with a watermark. When extracting the watermark, the result from each subimage is combined to determine the originally embedded watermark.

109 citations


Patent
26 Aug 1996
TL;DR: In this article, index keys are extracted from JPEG encoded still images based on their gray-scale, luminance, and/or chrominance values and stored in a database along with corresponding location and size information.
Abstract: Index keys from JPEG encoded still images are extracted based on the gray-scale, luminance, and/or chrominance values. The index keys are stored in a database, along with corresponding location and size information. The index key of a query image, also encoded in the JPEG format, is extracted and compared with the index keys stored in the database, and still images whose index keys are similar to that of the query image are identified. The identified still images can then be retrieved and displayed at the user's selection.

86 citations


Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work computes the perceptual error for each block based upon the DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and picks the set of multipliers that yields maximally flat perceptual error over the blocks of the image.
Abstract: An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8×8 block, which multiplies the quantization matrix, yielding the new matrix for that block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon the DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bit rate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.

82 citations
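A sketch of the multiplier search under stated assumptions: perceptual_error(block, m) stands in for the paper's masking-adjusted DCT quantization-error model (not reproduced here) and is assumed monotone in m, and the loop simply picks, per block, the largest candidate multiplier whose modelled error stays at or below a common target, which is one way to flatten perceptual error across blocks.

```python
CANDIDATE_MULTIPLIERS = [0.5, 0.7, 1.0, 1.4, 2.0, 2.8, 4.0]   # illustrative grid

def pick_multipliers(blocks, target_error, perceptual_error):
    """blocks: iterable of 8x8 pixel blocks.
    perceptual_error(block, m): modelled visibility of the quantization error
    when the base quantization matrix is scaled by m (assumed monotone in m)."""
    multipliers = []
    for block in blocks:
        chosen = CANDIDATE_MULTIPLIERS[0]
        for m in CANDIDATE_MULTIPLIERS:
            if perceptual_error(block, m) <= target_error:
                chosen = m                  # coarser quantization is still invisible
            else:
                break                       # error grows with m, so stop early
        multipliers.append(chosen)
    return multipliers
```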


Journal ArticleDOI
TL;DR: This paper investigates a modified DCT computation scheme, to be called the subband DCT (SB-DCT), that provides a simple, efficient solution to the reduction of the block artifacts while achieving faster computation.
Abstract: The discrete cosine transform (DCT) is well known for its highly efficient coding performance and is widely used in many image compression applications. However, in low bit rate coding, it produces undesirable block artifacts that are visually not pleasing. In addition, in many practical applications, faster computation and easier VLSI implementation of DCT coefficients are also important issues. The removal of the block artifacts and faster DCT computation are therefore of practical interest. In this paper, we investigate a modified DCT computation scheme, to be called the subband DCT (SB-DCT), that provides a simple, efficient solution to the reduction of the block artifacts while achieving faster computation. We have applied the new approach for the low bit rate coding and decoding of images. Simulation results on real images have verified the improved performance obtained using the proposed method over the standard JPEG method.

74 citations
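One way to picture the subband-DCT idea: the low-frequency quarter of an 8×8 DCT can be approximated from the 4×4 DCT of the 2×2-averaged block, which both reduces computation and suppresses the high-frequency terms that produce block artifacts at low rates. The sketch below omits the correction factors the paper derives and is exact only at DC; it is an illustration, not the SB-DCT algorithm itself.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(b):
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def subband_dct_approx(block8):
    """Approximate the 8x8 DCT of a block from its 2x2-averaged low-low
    subband; high-frequency coefficients are simply left at zero."""
    low = block8.reshape(4, 2, 4, 2).mean(axis=(1, 3))   # 2x2 averaging -> 4x4
    X = np.zeros((8, 8))
    X[:4, :4] = 2.0 * dct2(low)     # factor 2 matches the orthonormal scaling at DC
    return X
```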


Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work proposes a new approach that compresses image blocks using a layered representation, derived from progressive JPEG, and has been combined with CR and optimized for efficient software implementation to provide an improved solution for Internet packet video.
Abstract: Several compression schemes for Internet video utilize block-based conditional replenishment (CR) where block updates are coded independently of the past. In the current Internet video tools, blocks are compressed with a single-layer representation. We propose a new approach that compresses image blocks using a layered representation. Our layered-DCT (LDCT) compression algorithm, derived from progressive JPEG, has been combined with CR and optimized for efficient software implementation to provide an improved solution for Internet packet video. Although LDCT is constrained to a layered representation, its compression performance is as good or better than the single layer Intra-H.261 and baseline JPEG coding schemes.

67 citations
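The layering can be pictured as a spectral-selection split of each block's zigzag-ordered DCT coefficients, in the spirit of progressive JPEG: a base layer carries the first few coefficients and enhancement layers carry the rest. The split points and helper names below are illustrative, not the paper's configuration.

```python
import numpy as np

def zigzag_order(n=8):
    # Standard JPEG zigzag scan order over an n x n block.
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[1] if (p[0] + p[1]) % 2 == 0 else p[0]))

def split_layers(dct_block, breaks=(1, 6, 28, 64)):
    """Split one 8x8 DCT block into layers holding zigzag coefficients
    [0,1), [1,6), [6,28), and [28,64); decoding successive layers of every
    updated block progressively refines the image."""
    zz = [dct_block[u, v] for u, v in zigzag_order()]
    return [np.array(zz[a:b]) for a, b in zip((0,) + breaks[:-1], breaks)]
```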


Patent
17 Jun 1996
TL;DR: In this article, a memory efficient system of storing color correction information for liquid crystal tuning filters when used with electronic imaging cameras to produce color images is presented, which is stored with maximum possible gain to optimize accuracy prior to compression.
Abstract: A memory efficient system of storing color correction information for liquid crystal tuning filters when used with electronic imaging cameras to produce color images, which color correction information is stored with maximum possible gain to optimize accuracy preparatory to compression. The system bins the color correction image, for example, from a 4K×4K CCD sensor into a 500×500 or 1K×1K file, and then applies the JPEG and/or wavelet compression algorithm with a default configuration and/or a custom quantization table that emphasizes low frequency changes with more bits than high frequency changes with less bits. At the end of the compression, the compressed R, G, B files and an n-point correction executable algorithm are stored on floppy disk or CD ROM and are used to automatically take control of image enhancement when invoked by the photographer.
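A small sketch of the binning step under assumed array sizes (the 4:1 factor and dtype handling are illustrative): a 4K×4K sensor-sized correction image is averaged down to 1K×1K before the JPEG or wavelet stage, which is where much of the storage saving comes from.

```python
import numpy as np

def bin_image(img, factor=4):
    """Average non-overlapping factor x factor neighbourhoods,
    e.g. 4096x4096 -> 1024x1024 for a correction image."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of factor
    view = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return view.mean(axis=(1, 3))

binned = bin_image(np.ones((4096, 4096), dtype=np.float32), factor=4)
print(binned.shape)      # (1024, 1024)
```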

01 Oct 1996
TL;DR: It is shown that, despite errors caused by compression, information can be steganographically encoded into pixel data so that it is recoverable after JPEG processing, though not with perfect accuracy.
Abstract: Steganographic techniques can be used to hide data within digital images with little or no visible change in the perceived appearance of the image and can be exploited to export sensitive information. Since images are frequently compressed for storage or transmission, effective steganography must employ coding techniques to counter the errors caused by lossy compression algorithms. The Joint Photographic Experts Group (JPEG) compression algorithm, while producing only a small amount of visual distortion, introduces a relatively large number of errors in the bitmap data. It is shown that, despite errors caused by compression, information can be steganographically encoded into pixel data so that it is recoverable after JPEG processing, though not with perfect accuracy.

Patent
07 Mar 1996
TL;DR: In this paper, a quantization table with a "supra-threshold" term is formed that gives a larger weight to table elements corresponding to important image elements, such as text, and a smaller weight to elements corresponding to less important ones.
Abstract: A method of compressing color source image data includes forming a quantization table with a "supra-threshold" term. The method includes selecting a set of target images, where each target image includes one or more image elements such as text. These image elements are analyzed to identify those that are most important for visual quality. Supra-threshold terms are then selected that give a larger weight to the quantization table elements corresponding to important image elements and a smaller weight to the table elements corresponding to less important ones. This process selectively weights the characteristics of each DCT basis vector. By giving larger weights to the table elements that capture the "up-downness" of the image (the vertical attributes of the image elements) and the "left-rightness" of the image (the horizontal attributes), and smaller weights to the table elements corresponding to the "criss-crossedness" of the image (the diagonal attributes), the visual quality of an image that includes text can be preserved while significantly increasing the compression ratio.
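A hedged sketch of the table-weighting idea: start from a base luminance quantization table and apply smaller multipliers (finer quantization) to entries tied to purely horizontal or vertical basis vectors and larger multipliers to roughly diagonal ones. The base table, the weights, and the test for "diagonal" entries are illustrative choices, not the patent's.

```python
import numpy as np

def supra_threshold_table(base_q, w_axis=0.6, w_diag=1.6):
    """Reweight an 8x8 quantization table: preserve 'up-down' and 'left-right'
    detail (row 0 / column 0 basis vectors), spend less on 'criss-cross' detail."""
    q = base_q.astype(float).copy()
    for u in range(8):
        for v in range(8):
            if u == 0 or v == 0:          # purely horizontal or vertical structure
                q[u, v] *= w_axis
            elif abs(u - v) <= 1:         # predominantly diagonal structure
                q[u, v] *= w_diag
    return np.clip(np.round(q), 1, 255).astype(np.uint8)
```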

01 Oct 1996
TL;DR: This memo describes the RTP payload format for JPEG video streams, which is optimized for real-time video streams where codec parameters change rarely from frame to frame.
Abstract: This memo describes the RTP payload format for JPEG video streams. The packet format is optimized for real-time video streams where codec parameters change rarely from frame to frame.
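For orientation, the memo's main JPEG header is a fixed 8 bytes that follows the RTP header: a type-specific byte, a 24-bit fragment offset, then one byte each for type, Q, width, and height, with width and height in 8-pixel units. The packing sketch below reflects that layout; consult the memo itself (and its successor, RFC 2435) for the authoritative definition.

```python
import struct

def jpeg_payload_header(frag_offset, jtype, q, width_px, height_px, type_specific=0):
    """Pack the 8-byte RTP/JPEG main header: type-specific (8 bits),
    fragment offset (24 bits), then type, Q, width/8, height/8 (8 bits each)."""
    assert frag_offset < (1 << 24) and width_px % 8 == 0 and height_px % 8 == 0
    first_word = (type_specific << 24) | frag_offset
    return struct.pack('!IBBBB', first_word, jtype, q, width_px // 8, height_px // 8)

hdr = jpeg_payload_header(frag_offset=0, jtype=1, q=80, width_px=352, height_px=288)
print(len(hdr))          # 8 bytes
```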

Journal ArticleDOI
TL;DR: Findings show that digital compression may be used routinely in echocardiography, resulting in improved image and diagnostic quality over present standards.
Abstract: A large interobserver and intraobserver variability study was performed comparing both digitally compressed and uncompressed echocardiographic images with the same images recorded onto super-VHS video-cassette tape (the current standard). In a blinded, randomized fashion, 179 observers scored the diagnostic and image quality of 20 pairs of echocardiographic loops representing various pathologic conditions. Overall, the digital images were preferred to the S-VHS images both for image quality and diagnostic content (p < 0.0001) regardless of the background or experience level of the observer. Furthermore, uncompressed digital images and those compressed by the Joint Photographic Experts Group (JPEG) algorithm at ratios of 20:1 were judged equivalent. These findings show that digital compression may be used routinely in echocardiography, resulting in improved image and diagnostic quality over present standards.

Patent
Felice A. Micco1, Martin E. Banton1
26 Sep 1996
TL;DR: In this article, a method and apparatus for the rotation of images in conjunction with a block-wise, variable-length data compression operation is presented, where the rotated image is produced upon decompression of the stored rotated data.
Abstract: The present invention is a method and apparatus for the rotation of images in conjunction with a block-wise, variable-length data compression operation. In a preferred embodiment, the rotation of image blocks on a microscopic level is accomplished independently from the rotation of the blocks themselves (macroscopic); the blocks are stored in electronic precollation memory and the rotated image is produced upon decompression of the stored rotated data. The two-stage process allows the use of standardized JPEG or similar variable-length compression schemes, thereby accomplishing the rotation in conjunction with compression and minimizing the need for large memory buffers to accomplish image rotation.

Patent
Ricardo L. de Queiroz1
26 Sep 1996
TL;DR: In this paper, a method and apparatus for the processing of images that have been compressed using a discrete cosine transform operation, particularly JPEG compressed images, is presented; the two-stage approach minimizes the need for large memory buffers.
Abstract: The present invention is a method and apparatus for the processing of images that have been compressed using a discrete cosine transform operation, and particularly JPEG compressed images. In a preferred embodiment, the rotation of image blocks is accomplished by sign inversion and transposition operations to accomplish intrablock operations. Subsequently, one of a number of alternative methods is employed to accomplish the same image processing on an interblock level, thereby enabling the rotation or mirroring of compressed images. The two stage process allows the use of either a standardized JPEG system with enhancements or a hybrid processing method, thereby accomplishing the image processing in conjunction with compression or decompression operations and minimizing the need for large memory buffers to accomplish the image processing. Accordingly, the technique has application to any number of systems, including digital printers and copiers where there may be a necessity to orthogonally rotate or mirror the digital image.
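The intrablock step can be made concrete for the orthonormal 8×8 DCT: a 90-degree rotation of the pixel block corresponds to transposing the coefficient block and negating its odd-indexed columns (which direction of rotation this yields depends on the row/column convention, and a mirror corresponds to the sign inversion alone). The self-check below covers only this intrablock step; the interblock reordering of whole blocks described in the abstract is left out.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(B): return idct(idct(B, axis=0, norm='ortho'), axis=1, norm='ortho')

def rotate_block_dct(X):
    """Intrablock 90-degree rotation in the DCT domain:
    transpose, then invert the sign of odd-indexed columns."""
    Y = X.T.copy()
    Y[:, 1::2] *= -1.0
    return Y

x = np.random.default_rng(1).random((8, 8))
assert np.allclose(idct2(rotate_block_dct(dct2(x))), np.rot90(x, -1))
```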

01 Aug 1996
TL;DR: A video encoder control scheme which maintains the quality of the encoded video at a constant level is proposed, referred to as Constant Quality VBR (CQ-VBR), based on a quantitative video quality metric which is used in a feedback control mechanism to adjust the encoder parameters.
Abstract: Lossy video compression algorithms, such as those used in the H.261, MPEG, and JPEG standards, result in quality degradation seen in the form of digital tiling, edge busyness, and mosquito noise. The encoder parameters (typically, the so-called quantizer scale) can be adjusted to trade off encoded video quality and bit rate. Clearly, when more bits are used to represent a given scene, the quality gets better. However, for a given set of encoder parameter values, both the generated traffic and the resulting quality depend on the scene content. Therefore, in order to achieve certain quality and traffic objectives at all times, the encoder parameters must be appropriately adjusted according to the scene content. Currently, two schemes exist for setting the encoder parameters. The most commonly used scheme today is called Constant Bit Rate (CBR), where the encoder parameters are controlled to achieve a target bit rate over time by considering a hypothetical rate control buffer at the encoder's output which is drained at the target bit rate; the buffer occupancy level is used as feedback to control the quantizer scale. In a CBR encoded video stream, the quality varies in time, since the quantizer scale is controlled to achieve a constant bit rate regardless of the scene complexity. In the other existing scheme, called Open-Loop Variable Bit Rate (OL-VBR), all encoder parameters are simply kept fixed at all times. The motivation behind this scheme is presumably to provide a more consistent video quality compared to CBR encoding. In this report, we characterize the traffic and quality for the CBR and OL-VBR schemes by using several video sequences of different spatial and temporal characteristics, encoded using the H.261, MPEG, and motion-JPEG standards. We investigate the effect of the controller parameters (i.e., for CBR, the target bit rate and rate control buffer size, and for OL-VBR, the fixed quantizer scale) and video content on the resulting traffic and quality. We show that with the CBR and OL-VBR schemes, the encoder control parameters can be chosen so as to achieve or exceed a given quality objective at all times; however, this can only be done by producing more bits than needed during some of the scenes. In order to produce only as many bits as needed to achieve a given quality objective, we propose a video encoder control scheme which maintains the quality of the encoded video at a constant level, referred to as Constant Quality VBR (CQ-VBR). This scheme is based on a quantitative video quality metric which is used in a feedback control mechanism to adjust the encoder parameters. We determine the appropriate feedback functions for the H.261, MPEG, and motion-JPEG standards. We show that this scheme is indeed able to achieve constant quality at all times; however, the resulting traffic occasionally contains bursts of relatively high magnitude (5-10 times the average) but short duration (5-15 frames). We then introduce a modification to this scheme ...
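A schematic of the constant-quality feedback loop under stated assumptions: measure_quality stands in for the report's quantitative quality metric and encode_frame for an H.261/MPEG/motion-JPEG encoder, and the proportional update rule and gain are illustrative rather than the feedback functions the report derives.

```python
def constant_quality_control(frames, encode_frame, measure_quality,
                             target_quality, q_init=16, gain=4.0,
                             q_min=1, q_max=31):
    """Adjust the quantizer scale after each frame so measured quality tracks
    the target (CQ-VBR), instead of tracking a target bit rate (CBR)."""
    q = q_init
    bits_out = []
    for frame in frames:
        coded, nbits = encode_frame(frame, q)
        quality = measure_quality(frame, coded)
        # proportional feedback: too good -> coarser quantizer, too poor -> finer
        q = min(q_max, max(q_min, q + gain * (quality - target_quality)))
        bits_out.append(nbits)
    return bits_out
```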

Journal ArticleDOI
TL;DR: This paper further develops an approximation technique called condensation to improve performance and evaluates condensation in terms of processing speed and image quality.
Abstract: This paper addresses the problem of processing motion-JPEG video data in the compressed domain. The operations covered are those where a pixel in the output image is an arbitrary linear combination of pixels in the input image, which includes convolution, scaling, rotation, translation, morphing, de-interlacing, image composition, and transcoding. This paper further develops an approximation technique called condensation to improve performance and evaluates condensation in terms of processing speed and image quality. Using condensation, motion-JPEG video can be processed at near real-time rates on current generation workstations.
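For the separable case the starting point can be written down directly: if a spatial block operation is y = P x Q (every output pixel a linear combination of input pixels), then with an orthonormal DCT matrix D the same operation on coefficients is Y = (D P D^T) X (D Q D^T). Condensation, loosely, approximates such operators by dropping small terms; the simple threshold below is an illustrative stand-in for the paper's procedure.

```python
import numpy as np
from scipy.fftpack import dct

D = dct(np.eye(8), axis=0, norm='ortho')        # orthonormal 8-point DCT matrix

def compressed_domain_op(X, P, Q, condense_tol=0.0):
    """Apply the spatial operation y = P @ x @ Q directly to DCT coefficients X."""
    Pc = D @ P @ D.T
    Qc = D @ Q @ D.T
    if condense_tol > 0.0:                       # 'condensation': drop small terms
        Pc = np.where(np.abs(Pc) < condense_tol, 0.0, Pc)
        Qc = np.where(np.abs(Qc) < condense_tol, 0.0, Qc)
    return Pc @ X @ Qc
```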

Proceedings ArticleDOI
15 Apr 1996
TL;DR: A novel scheme for encoding wavelet coefficients, termed set partitioning in hierarchical trees, has recently been proposed and yields significantly better compression than more standard methods.
Abstract: Wavelet-based image compression is proving to be a very effective technique for medical images, giving significantly better results than the JPEG algorithm. A novel scheme for encoding wavelet coefficients, termed set partitioning in hierarchical trees, has recently been proposed and yields significantly better compression than more standard methods. We report the results of experiments comparing such coding to more conventional wavelet compression and to JPEG compression on several types of medical images.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: The authors' experiments indicate that the PIC metric provides the best correlation with subjective evaluations; it predicts that at very low bit rates the Said-Pearlman algorithm and the 8×8 subband PIC coder perform best, while at high bit rates the 4×4 subband PIC coder dominates.
Abstract: We investigate different algorithms and performance criteria for supra-threshold image compression. The algorithms include JPEG and perceptual JPEG, the Safranek-Johnston perceptual subband image coder (PIC), and the Said-Pearlman algorithm, which is based on Shapiro's embedded zerotree wavelet algorithm. We also consider a number of performance criteria. These include mean-squared error, Watson's perceptual metric, a metric based on the PIC coder, as well as an eye-filter weighted mean-squared-error metric. Our experiments indicate that the PIC metric provides the best correlation with subjective evaluations. The metric predicts that at very low bit rates the Said-Pearlman algorithm and the 8×8 subband PIC coder perform the best, while at high bit rates the 4×4 subband PIC coder dominates.

Patent
07 Mar 1996
TL;DR: The text and image enhancing technique according to the invention is integrated into the decoding or inverse quantization step that is already required by the JPEG standard, so the method requires no additional computations beyond those already needed for compression and decompression.
Abstract: The text and image enhancing technique according to the invention is integrated into the decoding or inverse quantization step that is necessarily required by the JPEG standard. The invention integrates the two by using two different quantization tables: a first quantization table (Q_E) used to quantize the image data during the compression step, and a second quantization table (Q_D) used during the decode or inverse quantization step of the decompression process. The second quantization table Q_D is related to the first according to a predetermined function of the energy in a reference image and the energy in a scanned image. The energy of the reference image lost during the scanning process, as represented by the energy in the scanned image, is restored during decompression by appropriately scaling the second quantization table according to the predetermined function. The difference between the two tables, in particular their ratio, determines the amount of image enhancement performed in the two steps. By integrating the image enhancing and inverse quantization steps, the method requires no additional computations beyond those already required for the compression and decompression processes.

Journal ArticleDOI
TL;DR: For caries diagnosis, compression ratios of 1:12 can be justified before accuracy and image quality are significantly affected.
Abstract: Image compression may reduce storage needs, whether in the lossless (reversible) or lossy (irreversible) form. The aims of the study were to evaluate (1) storage needs, (2) subjective image quality, and (3) accuracy of caries detection in digital radiographs compressed to various levels by a lossy compression method. The material consisted of 116 extracted human premolars and molars. The teeth were mounted three in a line and radiographed with the Digora system (Sorodex Medical Systems, Helsinki, Finland). The images were exported in tagged image file format and compressed with the Lempel-Ziv-Welch reversible algorithm and with the Joint Photographic Experts Group irreversible compression algorithm at four levels. The total of 580 images were assessed by five observers on a 5-rank confidence scale for caries diagnosis. The observers also subjectively judged image quality on an 11-point rank scale. With the reversible compression, images could be compressed to less than 50% of the original storage needs, whereas the four irreversible compression factors compressed to 20%, 8%, 5%, and 3%, respectively. For occlusal surfaces, there was no relationship between accuracy and image compression (p > 0.3); for approximal surfaces, receiver operating characteristic curve areas became increasingly smaller as the compression rate increased. The difference between the original and the most compressed images was 14% (p = 0.1). The median quality score was above the middle of the 11-point rank scale for all except the most compressed images (median score = 1). In conclusion, for caries diagnosis, compression rates of 1:12 can be justified before accuracy and image quality are significantly affected.

Proceedings ArticleDOI
01 Sep 1996
TL;DR: In this article, an image is modified by a pseudo-noise signature which is shaped by the perceptual thresholds from masking effects, which is used to check image integrity and measure its distortion.
Abstract: We propose a novel scheme to embed an invisible signature into an image to check image integrity and measure its distortion. The technique is based on the pseudo-noise sequences and visual masking effects. The values of an image are modified by a pseudo-noise signature which is shaped by the perceptual thresholds from masking effects. The method is robust and can gauge errors accurately up to half of the perceptual thresholds. It also readily identifies large image distortion. Experimental results after applying JPEG and white noise to the image are also reported.

Proceedings ArticleDOI
07 May 1996
TL;DR: A locally adaptive perceptual masking threshold model that computes, based on the contents of the original images, the maximum amount of noise energy that can be injected at each transform coefficient that results in perceptually distortion-free still images or sequences of images.
Abstract: This paper involves designing, implementing, and testing of a locally adaptive perceptual masking threshold model for image compression. This model computes, based on the contents of the original images, the maximum amount of noise energy that can be injected at each transform coefficient that results in perceptually distortion-free still images or sequences of images. The adaptive perceptual masking threshold model can be used as a pre-processor to a JPEG compression standard image coder. DCT coefficients less than their corresponding perceptual thresholds can be set to zero before the normal JPEG quantization and Huffman coding steps. The result is an image-dependent gain in the bit rate needed for transparent coding. In an informal subjective test involving 318 still images in the AT&T Bell Laboratory image database, this model provided a gain in bit-rate saving on the order of 10 to 30%.
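The pre-processing step itself is simple once the threshold model has produced a per-coefficient threshold matrix for a block; a minimal sketch follows, with the threshold computation (the substance of the paper) assumed to be given.

```python
import numpy as np

def perceptual_prefilter(dct_block, thresholds):
    """Zero every DCT coefficient whose magnitude falls below its perceptual
    threshold; the block then enters the normal JPEG quantization and
    Huffman-coding stages, typically producing a shorter code stream."""
    out = dct_block.copy()
    out[np.abs(out) < thresholds] = 0.0
    return out
```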

Proceedings ArticleDOI
12 May 1996
TL;DR: This paper investigates the efficacy of the different prediction schemes that were proposed for a new lossless/nearly lossless compression standard for continuous-tone still images and discusses their computational complexity and the price/performance trade-offs.
Abstract: It has long been realized that the current JPEG standard does not provide state-of-the-art performance in its lossless mode. In view of this, the International Standards Organization (ISO) recently solicited proposals for a new lossless/nearly lossless compression standard for continuous-tone still images. A total of nine proposals were submitted in the summer of 1995. Seven of these used a prediction step for 'decorrelating' the image prior to modelling and encoding. In this paper we investigate the efficacy of the different prediction schemes that were proposed. We also discuss their computational complexity and the price/performance trade-offs that emerge from our study.
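As a concrete example of the kind of predictor studied, the median edge detector (MED) used by the LOCO-I proposal (later the basis of JPEG-LS) predicts each pixel from its left (a), upper (b), and upper-left (c) neighbours:

```python
def med_predict(a, b, c):
    """Median edge detector: picks min(a, b) or max(a, b) near a detected
    horizontal/vertical edge, and the planar predictor a + b - c otherwise."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

# example: a flat-ish neighbourhood falls through to the planar predictor
print(med_predict(100, 102, 101))    # -> 101
```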

Book ChapterDOI
01 Jan 1996
TL;DR: In this chapter the important developments which have led to the third generation in quantitative coronary arteriographic (QCA) analytical software are presented, as well as current developments in the fields of image compression and storage.
Abstract: In this chapter the important developments which have led to the third generation of quantitative coronary arteriographic (QCA) analytical software are presented, as well as current developments in the fields of image compression and storage. The conventional QCA approaches with automated contour detection techniques based on Minimum Cost contour detection Algorithms (MCA) have been well established and validated. However, further improvements in the calculations of the diameter and reference diameter functions were needed, especially for complex morphology and for stent applications. The development of the Gradient Field Transform (GFTR) approach for the quantitation of complex lesions represents a major step forward in QCA. With the advent of the cineless catheterization laboratory, the issue of image compression has become of major relevance. Phantom studies with lossy JPEG image compression at a 512 × 512 matrix size demonstrate that the compression factor (CF) should not exceed 10. On the other hand, if JPEG and LOT lossy compression schemes (CFs of 5, 8, and 12) are applied to routinely acquired coronary angiographic images, QCA measurements demonstrate that all three compression factors lead to significantly increased random differences in the measurements. These results suggest that even the JPEG and LOT compression ratio of 5 is not acceptable for QCA. Finally, an extensive QCA study has demonstrated that S-VHS video tape is unacceptable for QCA and should be excluded from quantitative angiographic clinical trials.

Journal ArticleDOI
TL;DR: Fifty image sequences from 31 interventional procedures were viewed both in the original (uncompressed) state and after 15:1 lossy JPEG compression, and experienced angiographers identified dissections, suspected thrombi, and coronary stents.
Abstract: Background Development of the "all-digital" cardiac catheterization laboratory has been slowed by substantial computer archival and transfer requirements. Lossy data compression reduces this burden but creates irreversible changes in images, potentially impairing detection of clinically important angiographic features. Methods and Results Fifty image sequences from 31 interventional procedures were viewed both in the original (uncompressed) state and after 15:1 lossy Joint Photographic Experts Group (JPEG) compression. Experienced angiographers identified dissections, suspected thrombi, and coronary stents, and their results were compared with those from a consensus panel that served as a "gold standard." The panel and the individual observers reviewed the same image sequences 4 months after the first session to determine intraobserver variability. Intraobserver agreement for original images was not significantly different from that for compressed images (89.8% versus 89.5% for 600 pairs of observations ...

Proceedings ArticleDOI
25 Aug 1996
TL;DR: This paper focuses on a lossy compression technique for a gray image using the Hilbert curve, and confirms that in spite of the simple computation in comparison to JPEG, acceptable quality images can be obtained at bit-rates above 0.6 bit/pixel.
Abstract: The Hilbert curve is one of the family of space-filling curves first described by Peano. There are several applications that use this curve, such as image processing and computer holograms. In this paper, we concentrate on a lossy compression technique for gray-scale images using the Hilbert curve. The merit of this curve is that it passes through all points in a quadrant and always moves on to a neighboring quadrant. Our method is based on this neighborhood property and uses a simple segmentation of the scanned one-dimensional data with zero-order interpolation. From our experiments, we have confirmed that, in spite of the simple computation compared to JPEG, images of acceptable quality can be obtained at bit rates above 0.6 bit/pixel.
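A compact sketch of the two ingredients, with parameter choices that are illustrative only: a standard Hilbert index-to-coordinate mapping to produce the one-dimensional scan, and a zero-order (piecewise-constant) segmentation that starts a new run whenever the scanned value drifts outside a tolerance band.

```python
import numpy as np

def hilbert_d2xy(n, d):
    """Map curve index d to (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                      # rotate the quadrant
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan_segments(img, tol=8):
    """Scan a square grayscale image along the Hilbert curve and return
    (run_length, value) pairs of a zero-order approximation."""
    n = img.shape[0]
    seq = [int(img[hilbert_d2xy(n, d)]) for d in range(n * n)]
    segments, start, base = [], 0, seq[0]
    for i, v in enumerate(seq[1:], 1):
        if abs(v - base) > tol:              # value left the tolerance band
            segments.append((i - start, base))
            start, base = i, v
    segments.append((len(seq) - start, base))
    return segments
```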

Proceedings ArticleDOI
23 Sep 1996
TL;DR: Techniques for compression of laser line scan and camera images, as well as format-specific data compression for quick-look sonar mapping data, are presented for the autonomous minehunting and mapping technology demonstration.
Abstract: Efficient use of the bandwidth-limited acoustic communications link requires that considerable attention be given to both removing redundancy in data and minimizing superfluous resolution. In this paper we present techniques for compression of laser line scan and camera images, as well as format-specific data compression for quick-look sonar mapping data. For image compression, JPEG and a wavelet-based technique (EPIC) are examined. JPEG is found to be less efficient than the wavelet transform but has the advantage of being robust with respect to lost data packets. The wavelet-based transform is more efficient at high compression rates, though below a certain rate both offer similar performance. The specific context for this work is the autonomous minehunting and mapping technology (AMMT) demonstration, which utilizes the DARPA large diameter underwater vehicle. The ultimate goal of the project is identification and imaging of mine-like objects. The algorithms presented here were implemented in real time and operated in the field for this program. Details of specific issues encountered during development of the system are also described.