Journal ArticleDOI

Various Image Compression Techniques: Lossy and Lossless

17 May 2016-International Journal of Computer Applications (Foundation of Computer Science (FCS), NY, USA)-Vol. 142, Iss: 6, pp 23-26
TL;DR: The purpose of image compression is to reduce the redundancy and irrelevance of image data so that the data can be stored or transmitted efficiently, which shortens transmission time over the network and raises the effective transmission speed.
Abstract: Image compression is an application of data compression that encodes the original image with fewer bits. The purpose of image compression is to reduce the redundancy and irrelevance of image data so that the data can be stored or transmitted efficiently. Image compression therefore shortens transmission time over the network and raises the transmission speed. In the lossless technique of image compression, no data are lost during compression. Various techniques are used for image compression, which raises two questions: how is image compression performed, and which type of technique should be used? For this reason, the two common classes of approaches, lossless and lossy image compression, are explained. These techniques are simple to apply and consume very little memory. An algorithm is also introduced and applied to compress images and decompress them back using Huffman encoding.
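The paper's Huffman-based algorithm is not reproduced in this listing; the following is a minimal, generic Python sketch of Huffman coding (build a prefix-code table from symbol frequencies, then encode and decode), offered only as an illustration of the technique the abstract names:

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table (symbol -> bit string) from byte frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap items are (frequency, unique tiebreaker, subtree); a subtree is
    # either a symbol (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):            # internal node: recurse
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                  # leaf: assign its code
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

def huffman_encode(data: bytes, codes: dict) -> str:
    return "".join(codes[b] for b in data)

def huffman_decode(bits: str, codes: dict) -> bytes:
    inverse = {v: k for k, v in codes.items()}
    out, current = [], ""
    for bit in bits:
        current += bit
        if current in inverse:                 # prefix property: unique match
            out.append(inverse[current])
            current = ""
    return bytes(out)
```

Frequent symbols receive shorter codes, so data with a skewed symbol distribution compresses well; the prefix property guarantees unambiguous decoding.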


Citations
Proceedings ArticleDOI
23 Feb 2019
TL;DR: Haar proved the most accurate wavelet for iris recognition, while Reverse-Biorthogonal was the better wavelet for image compression.
Abstract: Biometrics is popular nowadays because of its very useful security applications. There are different biometric technologies, but the iris recognition system is considered the most reliable, since human irises are unique and cannot be forged easily. The study aims to segment ideal and non-ideal iris images with the help of Zuo and Xin Li's algorithm and to determine the most accurate wavelet family and its coefficient for encoding the iris templates, using Haar, Daubechies, Biorthogonal, and Reverse-Biorthogonal wavelets. Test metrics such as False Rejection Rate (FRR), False Acceptance Rate (FAR), Compression Rate (CR), and Degrees-of-Freedom (DOF) were used in evaluating the performance of the system. Based on the results, the algorithm was able to segment ideal and non-ideal iris images, encode the irises, and match the irises accurately. Haar proved the most accurate wavelet for iris recognition, while Reverse-Biorthogonal was the better wavelet for image compression. Different wavelets would give better results in the recognition process depending on the algorithm or system used. The metrics suggested that the developed algorithm is sufficient for iris recognition.

12 citations

Journal ArticleDOI
TL;DR: This paper shows the benefits of a DWT-based approach that uses canonical Huffman coding as the entropy encoder, improving on the Wavelet Scalar Quantization scheme often used for fingerprint image compression.
Abstract: The explosive growth of digital imaging, especially in the fields of medicine, education, and e-commerce, has made data maintenance and transmission over networks a daunting task. Therefore, the development and use of image compression techniques have become vital for overcoming the problems of storage and transmission of digital image data. Two methods that are extensively used for data compression are the Discrete Cosine Transform and the Discrete Wavelet Transform (DWT). In the present study, we show the benefits of a DWT-based approach that utilizes canonical Huffman coding as the entropy encoder. DWT decomposes the image into different sub-bands, known as the approximate image and the detail images. The approximate image is normalized to the range (0, 1) for obtaining the canonical Huffman coding bit stream. In a similar way, the detail coefficients are normalized to the range (0, 1) for obtaining the canonical Huffman coding bit streams of the detail images. Hard thresholding is often used to discard insignificant coefficients of the detail images. Our proposed method takes less computing time and has a smaller codebook size than conventional Huffman coding. Moreover, the results show an improvement over the Wavelet Scalar Quantization scheme often used for fingerprint image compression. We have applied our method to various popular images and obtained promising PSNR, CR, and BPP values that highlight the advantages of our approach and the efficiency of our algorithms.
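The smaller codebook claimed above is the defining property of canonical Huffman coding: codes are derived from code lengths alone, so only the lengths need to be stored or transmitted. A minimal, generic sketch of the canonical assignment step (the symbol-to-length map is assumed to come from an ordinary Huffman pass; this is not the paper's implementation):

```python
def canonical_codes(lengths: dict) -> dict:
    """Assign canonical Huffman codes from a symbol -> code-length map.

    Symbols are sorted by (length, symbol); each code is the previous code
    plus one, left-shifted whenever the length increases. The result is a
    valid prefix code that is fully determined by the lengths.
    """
    code = 0
    prev_len = 0
    codes = {}
    for sym, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)   # pad with zeros when length grows
        codes[sym] = format(code, f"0{length}b")
        code += 1
        prev_len = length
    return codes
```

Because the assignment rule is fixed, a decoder can rebuild the entire code table from the length list alone, which is what shrinks the codebook relative to storing explicit codes.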

12 citations

Journal ArticleDOI
TL;DR: The block-based lossless coding technique presented in this paper targets compression of volumetric medical images of 8-bit and 16-bit depth and is capable of effectively reducing inter-pixel and coding redundancy.
Abstract: The block-based lossless coding technique presented in this paper targets compression of volumetric medical images of 8-bit and 16-bit depth. The novelty of the proposed technique lies in its threshold selection for prediction and its optimal block size for encoding. A resolution-independent gradient edge detector is used along with a block-adaptive arithmetic encoding algorithm, and extensive experimental tests were run to find a universal threshold value and an optimal block size independent of image resolution and modality. The performance of the proposed technique is demonstrated and compared with benchmark lossless compression algorithms. BPP values obtained from the proposed algorithm show that it effectively reduces inter-pixel and coding redundancy. In terms of coding efficiency, the proposed technique outperforms CALIC and JPEG-LS on volumetric medical images by 0.70% and 4.62%, respectively.

8 citations


Cites background from "Various Image Compression Technique..."

  • ...Lossy compression techniques provide high compression at the expense of image quality due to loss of information [4]....


Journal ArticleDOI
TL;DR: A lightweight data compression algorithm for image encryption is proposed in this paper; it utilizes scan-based block compression and a selective pixel encryption approach to encrypt the image data in only one round, resulting in low computational complexity and reduced data volume.
Abstract: Devices in the Internet of Things (IoT) have resource constraints in terms of energy, computing power, and memory that make them vulnerable to some security attacks. Due to the increasing volume of multimedia content, lightweight encryption algorithms have been developed to allow IoT nodes to communicate securely with the least computational complexity and bandwidth usage. To accommodate the low data rate of IoT devices, a lightweight data compression algorithm for image encryption is proposed in this paper; it utilizes scan-based block compression and a selective pixel encryption approach to encrypt the image data in only one round, resulting in low computational complexity and reduced data volume. The results of implementing the proposed approach in an IoT testbed show that, on average, the power consumption of the devices and the packet rate are decreased by 15% and 26%, respectively, compared to existing algorithms.

7 citations

Journal ArticleDOI
TL;DR: A grey-image compression algorithm based on variational partial differential equations is proposed that obtains a higher compression ratio and peak signal-to-noise ratio, especially for images with larger size and less texture detail, and better preserves large grayscale variations in the original image.
Abstract: Compression is a key technology for the rapid development of multimedia, and images are an important part of multimedia information. For image compression, a grey-image compression algorithm based on variational partial differential equations is proposed. On the encoding side, a quad-tree is first used to segment the image, and then some of the pixels are encoded and transmitted. Secondly, an image interpolation algorithm based on variational partial differential equations is used at the decoding end to regenerate the image, effectively eliminating the block effect in the decoded image. Experiments show that this method obtains a higher compression ratio and peak signal-to-noise ratio, especially for images with larger size and less texture detail, and better preserves large grayscale variations and details in the original image. Because it readily removes the block effect, the method has high practical value.

6 citations

References
01 Jan 2010
TL;DR: The Huffman algorithm is analyzed and compared with other common compression techniques, such as Arithmetic coding, LZW, and Run Length Encoding, that make storing large amounts of data easier.
Abstract: Data compression is also called source coding. It is the process of encoding information using fewer bits than an uncoded representation would use, by means of specific encoding schemes. Compression is a technology for reducing the quantity of data used to represent content without excessively reducing the quality of the picture. It also reduces the number of bits required to store and/or transmit digital media, making it easier to store large amounts of data. Various compression techniques are available; in this paper, the Huffman algorithm is analyzed and compared with other common compression techniques such as Arithmetic coding, LZW, and Run Length Encoding.
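Of the baselines compared against, Run Length Encoding is the simplest: each run of identical symbols is replaced by the symbol and its count. A minimal Python sketch (a generic illustration, not the paper's code):

```python
from itertools import groupby

def rle_encode(data: bytes) -> list:
    """Run-length encode: a list of (symbol, run length) pairs."""
    return [(sym, sum(1 for _ in run)) for sym, run in groupby(data)]

def rle_decode(pairs: list) -> bytes:
    """Expand each (symbol, run length) pair back into a run of bytes."""
    return bytes(sym for sym, count in pairs for _ in range(count))
```

RLE only pays off when the input contains long runs (e.g. flat image regions or bi-level scans); on data without runs it can expand rather than compress, which is why it is typically combined with, or outperformed by, entropy coders such as Huffman coding.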

166 citations

Journal ArticleDOI
TL;DR: The efficiency of the proposed scheme is demonstrated by the results, especially when compared with a recently published method based on block truncation coding using the pattern-fitting principle.
Abstract: This paper considers the design of a lossy image compression algorithm dedicated to color still images. After a preprocessing step (mean removal and RGB-to-YCbCr transformation), the DCT is applied, followed by an iterative phase (using the bisection method) comprising thresholding, quantization, dequantization, the inverse DCT, the YCbCr-to-RGB transformation, and mean recovery. This guarantees that a desired quality, fixed in advance using the well-known PSNR metric, is met. To obtain the best possible compression ratio (CR), the next step applies a proposed adaptive scanning that provides, for each (n × n) DCT block, a corresponding (n × n) vector containing the maximum possible run of zeros at its end. The last step is the application of a modified systematic lossless encoder. The efficiency of the proposed scheme is demonstrated by the results, especially when compared with a recently published method based on block truncation coding using the pattern-fitting principle.
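The paper's adaptive scan chooses a per-block ordering that maximizes the trailing run of zeros; the fixed JPEG-style zigzag scan below is the classic baseline such schemes improve on, shown here as a generic sketch (not the paper's adaptive method):

```python
def zigzag_order(n: int) -> list:
    """Return the (row, col) visit order for an n x n block, JPEG-style.

    Cells are grouped by anti-diagonal (r + c); odd diagonals are walked
    top-right to bottom-left (row ascending), even ones the other way,
    so low-frequency DCT coefficients come first and the high-frequency
    zeros cluster at the end of the scanned vector.
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def scan(block: list) -> list:
    """Flatten an n x n block (list of rows) into zigzag order."""
    n = len(block)
    return [block[r][c] for r, c in zigzag_order(n)]
```

After quantization, most nonzero DCT coefficients sit in the top-left corner of the block, so any scan that visits that corner first ends with a long zero run that a run-length or systematic encoder can store cheaply.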

76 citations

Journal Article
TL;DR: Two image compression techniques, based on the Discrete Cosine Transform and the Discrete Wavelet Transform, are simulated; the results are shown and their quality parameters are compared by applying the techniques to various images.
Abstract: Image compression is a method through which we can reduce the storage space of images and videos, which helps to improve storage and transmission performance. In image compression, we do not only concentrate on reducing size but also on doing so without losing the quality and information of the image. In this paper, two image compression techniques are simulated. The first technique is based on the Discrete Cosine Transform (DCT) and the second on the Discrete Wavelet Transform (DWT). The simulation results are shown, and different quality parameters are compared by applying the techniques to various images.

Keywords: DCT, DWT, Image compression, Image processing

64 citations

Posted Content
TL;DR: In this paper, a new method called Five Modulus Method (FMM) was proposed for image compression which consists of converting each pixel value in an 8-by-8 block into a multiple of 5 for each of the R, G and B arrays.
Abstract: Data is compressed by reducing its redundancy, but this also makes the data less reliable and more prone to errors. This paper presents a novel approach to image compression called the Five Modulus Method (FMM). The method converts each pixel value in an 8-by-8 block into a multiple of 5 for each of the R, G, and B arrays. The new values can then be divided by 5, yielding values that fit in 6 bits per pixel and thus require less storage space than the original 8-bit values. A new protocol for transmitting the compressed values as a stream of bits is also presented, which makes it easy to store and transfer the compressed image.
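The per-pixel arithmetic of FMM is easy to sketch: round each 8-bit value to the nearest multiple of 5 and store the quotient, which fits in 6 bits because 255 / 5 = 51 < 64. A minimal Python illustration of just this step (the paper's 8-by-8 block handling and bit-stream protocol are omitted):

```python
def fmm_compress(pixels: list) -> list:
    """Map each 8-bit pixel to the nearest multiple of 5, stored as value // 5.

    The result is in 0..51, so it fits in 6 bits. The step is lossy:
    reconstruction error is at most 2 grey levels per pixel.
    """
    return [round(p / 5) for p in pixels]

def fmm_decompress(codes: list) -> list:
    """Recover the approximated pixel values (multiples of 5)."""
    return [c * 5 for c in codes]
```

The 8-to-6-bit reduction gives a fixed 25% saving per channel before the bit-stream protocol is applied, at the cost of quantizing every grey level to the nearest multiple of 5.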

33 citations

Journal ArticleDOI
TL;DR: The experimental results show that the proposed algorithm could achieve an excellent compression ratio without losing data when compared to the standard compression algorithms.
Abstract: The development of multimedia and digital imaging has led to a high quantity of data being required to represent modern imagery. This requires large disk space for storage and long transmission times over computer networks, both of which are relatively expensive. These factors prove the need for image compression. Image compression addresses the problem of reducing the amount of space required to represent a digital image, yielding a compact representation and thereby reducing the storage and transmission time requirements. The key idea is to remove the redundancy present within an image to reduce its size without affecting its essential information. This paper is concerned with lossless image compression. Our proposed approach is a combination of a number of existing techniques and works as follows: first, we apply the well-known Lempel-Ziv-Welch (LZW) algorithm to the image in hand. The output of the first step is forwarded to the second step, where the Bose-Chaudhuri-Hocquenghem (BCH) error detection and correction algorithm is used. To improve the compression ratio, the proposed approach applies the BCH algorithm repeatedly until "inflation" is detected. The experimental results show that the proposed algorithm achieves an excellent compression ratio without losing data when compared to standard compression algorithms.
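The LZW stage of the pipeline above can be sketched generically (the BCH stage, which requires a full error-correction codec, is omitted). A minimal Python illustration of textbook LZW, not the authors' implementation:

```python
def lzw_encode(data: bytes) -> list:
    """LZW: emit dictionary indices; the dictionary grows with each new phrase."""
    table = {bytes([i]): i for i in range(256)}   # seed with all single bytes
    out, phrase = [], b""
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate                    # extend the current phrase
        else:
            out.append(table[phrase])             # emit longest known phrase
            table[candidate] = len(table)         # learn the new phrase
            phrase = bytes([byte])
    if phrase:
        out.append(table[phrase])
    return out

def lzw_decode(codes: list) -> bytes:
    """Rebuild the dictionary on the fly while expanding the index stream."""
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # A code may refer to the entry being built right now (the classic
        # "cScSc" case); it must then be prev + its own first byte.
        entry = table[code] if code in table else prev + prev[:1]
        out.append(entry)
        table[len(table)] = prev + entry[:1]
        prev = entry
    return b"".join(out)
```

Because the decoder reconstructs the same dictionary from the index stream itself, no codebook needs to be transmitted, which is what makes LZW attractive as the first, fully lossless stage of a pipeline like the one described.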

29 citations