Book Chapter (DOI)

A memory-efficient image compression method using DWT applied to histogram-based block optimization

TL;DR: An improved and highly memory-efficient block optimization technique is presented that incorporates byte compression and the discrete wavelet transform; all DWT coefficients are stored, yielding a substantial improvement in compression and a reduction in image storage space.
Abstract: Image compression is an essential task for storing images in digital format. In this communication, an improved and highly memory-efficient block optimization technique is presented that incorporates byte compression and the discrete wavelet transform (DWT). Instead of the common approach of nulling insignificant DWT coefficients, all DWT coefficients are stored. The only lossy step comes from block optimization, which causes no noticeable degradation in the decompressed images. The method shows a large improvement in compression and reduces image storage space. The results obtained with this technique are compared against the JPEG and JPEG2000 standards, showing that it can be a fast alternative to other compression methods.
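As a rough illustration of the pipeline the abstract describes (a DWT in which all sub-band coefficients are kept, a lossy block-wise optimization, and a final byte-compression stage), the sketch below uses PyWavelets and zlib. The per-block quantizer and all parameters here are assumptions for illustration, not the authors' histogram-based algorithm.

```python
import numpy as np
import pywt   # PyWavelets
import zlib

def compress_sketch(img, block=8, levels=32):
    """Illustrative pipeline: DWT -> coarse per-block quantization -> byte compression."""
    # Single-level 2-D Haar DWT; all four sub-bands are kept (no coefficient nulling).
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), 'haar')

    def quantize_blocks(band):
        # Lossy stage: requantize each block to a small number of levels spanning
        # the block's own value range (a stand-in for the paper's histogram-based
        # block optimization, not the authors' method).
        out = np.empty_like(band)
        for i in range(0, band.shape[0], block):
            for j in range(0, band.shape[1], block):
                b = band[i:i+block, j:j+block]
                lo, hi = b.min(), b.max()
                step = (hi - lo) / levels or 1.0   # avoid a zero step on flat blocks
                out[i:i+block, j:j+block] = np.round((b - lo) / step) * step + lo
        return out

    bands = [quantize_blocks(b) for b in (cA, cH, cV, cD)]
    payload = np.concatenate([b.astype(np.float32).ravel() for b in bands])
    return zlib.compress(payload.tobytes())        # byte-compression stage
```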
Citations
Proceedings Article (DOI)
01 Aug 2020
TL;DR: In this paper, a VLSI implementation of Haar-wavelet-based image compression is proposed and designed, providing a low-cost, hardware-free architecture.
Abstract: The discrete wavelet transform is one of the best tools for signal and data analysis, and it requires efficient hardware implementation in real-time applications. Applications in the field of imaging demand compact architectures. In the DWT, discrete sampling is applied to the wavelets. In this paper, a VLSI implementation of Haar-wavelet-based image compression is proposed and designed. The Haar wavelet transform is one of the simplest methods for image compression because its filter coefficients are either 1 or −1. In this work, software alone is used for the compression, together with a continuous optimization algorithm, providing a low-cost, hardware-free architecture. The VHDL work is carried out on the Xilinx platform and provides a low-power architecture for a concrete application. The same VHDL architecture can also be implemented on an FPGA, which will yield hardware-efficient results.
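The claim that the Haar wavelet is easy to implement can be seen from a single analysis step, which reduces to pairwise sums and differences (filter taps of +1 and −1, up to scaling). A minimal integer-only sketch:

```python
import numpy as np

def haar_step(signal):
    """One level of the (unnormalized) Haar transform: the filter taps are +1/-1,
    so the whole step needs only additions, subtractions and a shift."""
    x = np.asarray(signal, dtype=np.int64)
    approx = (x[0::2] + x[1::2]) // 2   # low-pass: pairwise averages
    detail = x[0::2] - x[1::2]          # high-pass: pairwise differences
    return approx, detail

# Example: integer-only decomposition of a short row of pixel values
approx, detail = haar_step([52, 54, 60, 58, 61, 65, 70, 68])
print(approx, detail)   # [53 59 63 69] [-2  2 -4  2]
```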

47 citations


Cites methods from "A memory-efficient image compression method using DWT applied to histogram-based block optimization"

  • ...The image was decomposed using the horizontal and vertical details obtained [5]....


Proceedings Article (DOI)
05 Mar 2020
TL;DR: A higher compression ratio is obtained after three levels of decomposition, and the decomposed image can be reconstructed without appreciable loss relative to the original.
Abstract: Image compression aims to minimize storage and ease transmission without affecting picture quality. In this paper, a Haar-wavelet-based discrete wavelet transform (DWT) is used for effective and efficient image compression. The Haar DWT provides an easy route to compression because its filter coefficients are either 1 or −1. Wavelet transforms are used for joint time and frequency analysis. In this paper, a higher compression ratio is obtained after three levels of decomposition, and the decomposed image can be reconstructed without appreciable loss relative to the original.
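A three-level decomposition of the kind described here can be reproduced with PyWavelets. The snippet below is only a sketch showing that the multi-level Haar transform itself is perfectly invertible; in a real codec, the compression gain comes from quantizing or thresholding the detail sub-bands before entropy coding.

```python
import numpy as np
import pywt

# Three-level Haar decomposition and reconstruction (illustrative only).
img = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.float64)

coeffs = pywt.wavedec2(img, 'haar', level=3)   # 3-level 2-D DWT
rec = pywt.waverec2(coeffs, 'haar')            # inverse transform

print(np.allclose(img, rec))                   # True: the transform itself is invertible
# Compression would come from quantizing/thresholding the detail sub-bands in `coeffs`.
```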

29 citations

References
Journal Article (DOI)
TL;DR: The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Abstract: For the past few years, a joint ISO/CCITT committee known as JPEG (Joint Photographic Experts Group) has been working to establish the first international compression standard for continuous-tone still images, both grayscale and color. JPEG's proposed standard aims to be generic, to support a wide variety of applications for continuous-tone images. To meet the differing needs of many applications, the JPEG standard includes two basic compression methods, each with various modes of operation. A DCT-based method is specified for "lossy" compression, and a predictive method for "lossless" compression. JPEG features a simple lossy technique known as the Baseline method, a subset of the other DCT-based modes of operation. The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications. This article provides an overview of the JPEG standard, and focuses in detail on the Baseline method.
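For concreteness, the lossy Baseline path described in the overview (8×8 block, level shift, forward DCT, quantization against a table, then entropy coding) can be sketched for a single block as below. The quantization matrix is the example luminance table from the standard's informative annex; the scaling and clipping details are simplifications, and entropy coding is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Example luminance quantization table from the JPEG standard's informative annex.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def baseline_block_roundtrip(block, scale=1.0):
    """Baseline-style handling of one 8x8 block: level shift, 2-D DCT,
    quantization, then the matching decode path (entropy coding omitted)."""
    shifted = block.astype(np.float64) - 128.0      # level shift to signed range
    coeffs = dctn(shifted, norm='ortho')            # forward 8x8 DCT
    quantized = np.round(coeffs / (Q * scale))      # the lossy step
    rec = idctn(quantized * Q * scale, norm='ortho') + 128.0
    return quantized, np.clip(np.round(rec), 0, 255).astype(np.uint8)
```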

3,944 citations

Journal Article (DOI)
TL;DR: In this paper, a new technique for image compression called block truncation coding (BTC) is presented and compared with transform and other techniques, which uses a two-level (one-bit) nonparametric quantizer that adapts to local properties of the image.
Abstract: A new technique for image compression called Block Truncation Coding (BTC) is presented and compared with transform and other techniques. The BTC algorithm uses a two-level (one-bit) nonparametric quantizer that adapts to local properties of the image. The quantizer that shows great promise is one which preserves the local sample moments. This quantizer produces good quality images that appear to be enhanced at data rates of 1.5 bits/picture element. No large data storage is required, and the computation is small. The quantizer is compared with standard (minimum mean-square error and mean absolute error) one-bit quantizers. Modifications of the basic BTC algorithm are discussed along with the performance of BTC in the presence of channel errors.
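The moment-preserving quantizer mentioned in the abstract picks, for each block, a one-bit map thresholded at the block mean plus two output levels chosen so that the block's mean and variance are preserved. A minimal sketch (block partitioning and bit packing are left out):

```python
import numpy as np

def btc_encode_block(block):
    """Moment-preserving BTC of one block: a one-bit map thresholded at the
    block mean, plus two output levels that preserve the mean and variance."""
    x = block.astype(np.float64)
    n = x.size
    mean, std = x.mean(), x.std()
    bitmap = x >= mean
    q = int(bitmap.sum())              # number of pixels at or above the mean
    if q == 0 or q == n:               # flat block: a single output level suffices
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return bitmap, low, high

def btc_decode_block(bitmap, low, high):
    return np.where(bitmap, high, low)
```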

823 citations

01 Jan 2011
TL;DR: The results show that the AMBTC algorithm outperforms BTC and provides better image quality than compression using BTC at the same bit rate.
Abstract: The present work investigates image compression using block truncation coding. Two algorithms were selected, namely the original Block Truncation Coding (BTC) and Absolute Moment Block Truncation Coding (AMBTC), and a comparative study was performed. Both techniques rely on dividing the image into non-overlapping blocks; they differ in how the quantization levels are selected in order to remove redundancy. Objective measures were used to evaluate image quality, such as Peak Signal to Noise Ratio (PSNR), Weighted Peak Signal to Noise Ratio (WPSNR), Bit Rate (BR) and Structural Similarity Index (SSIM). The results show that the AMBTC algorithm outperforms BTC: image compression using AMBTC provides better image quality than compression using BTC at the same bit rate. Moreover, AMBTC is considerably faster than BTC. Index Terms: BTC, AMBTC, WPSNR, SSIM.
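AMBTC differs from the original BTC mainly in how the two output levels are chosen: instead of matching the variance, it uses the means of the pixels below and at-or-above the block mean, which preserves the first absolute central moment and avoids square roots. A sketch, with PSNR shown as one of the objective measures used in the comparison:

```python
import numpy as np

def ambtc_encode_block(block):
    """AMBTC: the two output levels are the means of the pixels below and
    at-or-above the block mean, preserving the mean and the first absolute
    central moment without any square roots."""
    x = block.astype(np.float64)
    mean = x.mean()
    bitmap = x >= mean
    if bitmap.all() or not bitmap.any():
        return bitmap, mean, mean
    return bitmap, x[~bitmap].mean(), x[bitmap].mean()

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal to Noise Ratio, one of the objective measures listed above."""
    mse = np.mean((np.asarray(original, dtype=np.float64) - reconstructed) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```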

423 citations

Journal Article (DOI)
TL;DR: This paper presents an extensive survey on the development of neural networks for image compression, covering three categories: direct image compression by neural networks; neural-network implementations of existing techniques; and neural-network-based technology that provides improvements over traditional algorithms.
Abstract: Apart from the existing technology for image compression represented by the series of JPEG, MPEG and H.26x standards, new technologies such as neural networks and genetic algorithms are being developed to explore the future of image coding. Successful applications of neural networks to vector quantization have now become well established, and other aspects of neural network involvement in this area are stepping up to play significant roles in assisting those traditional technologies. This paper presents an extensive survey on the development of neural networks for image compression, covering three categories: direct image compression by neural networks; neural-network implementations of existing techniques; and neural-network-based technology that provides improvements over traditional algorithms.

187 citations

Journal Article (DOI)
TL;DR: The higher the compression ratio and the smoother the original image, the better the quality of the reconstructed image.
Abstract: This work proposes a novel scheme for lossy compression of an encrypted image with a flexible compression ratio. A pseudorandom permutation is used to encrypt the original image, and the encrypted data are efficiently compressed by discarding the excessively rough and fine information of coefficients generated from an orthogonal transform. After receiving the compressed data, and with the aid of spatial correlation in natural images, a receiver can reconstruct the principal content of the original image by iteratively updating the values of the coefficients. This way, the higher the compression ratio and the smoother the original image, the better the quality of the reconstructed image.
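Only the first stage of this scheme, the pseudorandom permutation used for encryption, is easy to show compactly. The sketch below illustrates just that step under simple assumptions (a key-seeded NumPy permutation of pixel positions); the coefficient discarding and the iterative, correlation-driven reconstruction are not shown.

```python
import numpy as np

def permute_encrypt(img, key):
    """Encryption stage only: a key-seeded pseudorandom permutation of pixel
    positions (compression and iterative reconstruction are not shown here)."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)
    return img.ravel()[perm].reshape(img.shape), perm

def permute_decrypt(enc, perm):
    flat = np.empty(enc.size, dtype=enc.dtype)
    flat[perm] = enc.ravel()           # undo the permutation
    return flat.reshape(enc.shape)
```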

172 citations