Proceedings ArticleDOI

A proposed algorithm for image compression using clustering approach

01 Aug 2017, pp. 322-325
TL;DR: A new compression algorithm performs lossy image compression by implementing a clustering approach based on the Euclidean distance method; it has low complexity and is best suited to compressing large image files with spatial redundancy.
Abstract: Image compression means reducing the file size in bytes, which allows the user to store more data within a fixed amount of memory. Our paper discusses a newly proposed compression algorithm that performs lossy image compression by implementing a clustering approach based on the Euclidean distance method. The main objective is to minimize storage needs by representing each cluster by its closest mean pixel value. The algorithm clusters the fragments (the bit-stream data) of the image using the Euclidean distance method. It has low complexity and is best suited for compressing large image files with spatial redundancy.
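
The paper itself gives no code; the following is a minimal Python sketch of the general idea, namely lossy compression by clustering pixel values with Euclidean distance and replacing each pixel with the mean of its cluster. The function names, the choice of k = 16 clusters, and the assumption of an 8-bit RGB image are illustrative rather than taken from the paper.

import numpy as np

def kmeans_compress(image, k=16, iters=20, seed=0):
    """Cluster pixel values with Euclidean distance and replace each pixel
    by its cluster mean, so only k colours plus a label map are stored."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)   # (N, 3) for RGB
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every pixel to every cluster centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    # The "compressed" form is k centres plus one small integer label per pixel.
    return centers.astype(np.uint8), labels.astype(np.uint8).reshape(image.shape[:-1])

def kmeans_decompress(centers, labels):
    # Reconstruction maps every label back to its cluster mean (lossy).
    return centers[labels]

Storing 16 centres plus one small label per pixel instead of 24 bits per pixel is where the size reduction comes from; images with strong spatial redundancy cluster well and therefore degrade least.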
Citations
Journal ArticleDOI
TL;DR: The simulations and obtained results demonstrate that the performance of the proposed CFA-Clustering method is superior to its counterpart algorithms in most cases; therefore, the CFA can be considered an alternative stochastic method for solving clustering problems.

13 citations


Additional excerpts

  • ...…recent years, data clustering has been employed in many areas, such as data mining (Peters, 2006), (Shanghooshabad & Abadeh, 2016), image compression (Gupta & Sinha, 2017), (Ryu, Lee, & Lee, 2014), image segmentation (Ray & Turi, 2000), machine learning (Al-Omary & Jamil, 2006), (Min et al., 2018)....

    [...]

Proceedings ArticleDOI
01 Aug 2018
TL;DR: An efficient algorithm is proposed using Euclidean distance, color histograms with k-means, HOG, and KNN to extract features of vehicle logos and to classify each brand of logo; it is best suited for classifying complex and similar logos of different vehicle brands.
Abstract: VLR plays a vital role in vehicle identification for intelligent traffic systems. The recognition approach typically gives much importance to the training dataset. In this paper an efficient algorithm is proposed using Euclidean distance, color histograms with k-means, HOG, and KNN to extract features of vehicle logos and to classify each brand of logo. Here the training dataset is shaped by applying Euclidean distance and k-means to the raw dataset. Testing data samples were also taken from the standardized datasets. At the final stage, the accuracy of the logo classification is calculated based on CCA and MCCA: CCA is class-wise accuracy, whereas MCCA is the overall accuracy of the VLR system. This algorithm is best suited for classifying complex and similar logos of different vehicle brands.
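
The excerpted abstract gives only the outline of the pipeline; the sketch below shows one plausible HOG-plus-KNN arrangement using scikit-image and scikit-learn. The variables train_images, train_labels and test_images are hypothetical placeholders, and the HOG settings and neighbour count are illustrative, not the paper's values.

import numpy as np
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def hog_features(images, size=(64, 64)):
    """Resize each logo image and extract a HOG descriptor from it."""
    feats = []
    for img in images:
        if img.ndim == 3:
            img = rgb2gray(img)                 # HOG here works on grayscale
        img = resize(img, size, anti_aliasing=True)
        feats.append(hog(img, orientations=9,
                         pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return np.array(feats)

# Hypothetical usage; train_images, train_labels and test_images are placeholders.
# clf = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
# clf.fit(hog_features(train_images), train_labels)
# predicted = clf.predict(hog_features(test_images))

Class-wise accuracy (CCA) would then be computed per logo brand from the predictions, and MCCA as the overall accuracy across all brands.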

1 citation

Journal ArticleDOI
TL;DR: An innovative image compression scheme by utilizing the Adaptive Discrete Wavelet Transform-based Lifting Scheme (ADWT-LS) with a single objective function that relates multi-constraints like the Peak Signal-to-Noise Ratio (PSNR) as well as Compression Ratio (CR).
Abstract: This paper proposes an innovative image compression scheme by utilizing the Adaptive Discrete Wavelet Transform-based Lifting Scheme (ADWT-LS). The most important feature of the proposed DWT lifting method is splitting the low-pass and high-pass filters into upper and lower triangular matrices. It also converts the filter execution into banded matrix multiplications with an innovative lifting factorization presented with fine-tuned parameters. Further, the most important contribution is the optimal tuning, which is achieved via a new hybrid algorithm known as the Lioness-Integrated Whale Optimization Algorithm (LI-WOA). The proposed algorithm hybridizes the concepts of both the Lion Algorithm (LA) and the Whale Optimization Algorithm (WOA). In addition, an innovative cosine evaluation is introduced in this work under the CORDIC algorithm. This paper also defines a single objective function that relates multiple constraints such as the Peak Signal-to-Noise Ratio (PSNR) and the Compression Ratio (CR). Finally, the performance of the proposed work is compared with other conventional models on several performance measures.
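
The exact objective function is not reproduced in the excerpt above; the sketch below merely illustrates how a single fitness score can relate PSNR and CR so that an optimizer such as LI-WOA has one value to maximize. The equal weights and helper names are assumptions, not the paper's formulation.

import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB for 8-bit images.
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

def objective(original, reconstructed, original_bytes, compressed_bytes,
              w_psnr=0.5, w_cr=0.5):
    """Single score combining quality (PSNR) and size reduction (CR); an
    optimizer tuning the lifting parameters would try to maximize it."""
    return (w_psnr * psnr(original, reconstructed)
            + w_cr * compression_ratio(original_bytes, compressed_bytes))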

1 citation

Proceedings ArticleDOI
02 Dec 2022
TL;DR: In this article, a comparative analysis of conventional and contemporary lossy image compression techniques on the Kodak Dataset, including Autoencoders, Principal Component Analysis (PCA), K-Means, and Discrete Wavelet Transform (DWT), was conducted.
Abstract: For many years, lossless image compression has been a promising topic of study. Various techniques have been created over time to obtain an approximation of the reduced data size. While discrete wavelet transform (DWT) and discrete cosine transform (DCT) have historically been employed for the purpose of compressing images, various machine learning methods and deep learning networks are now being offered. In this research, we conduct a comparative analysis of conventional and contemporary lossy image compression techniques on the Kodak Dataset, including Autoencoders, Principal Component Analysis (PCA), K-Means, and Discrete Wavelet Transform (DWT). The metrics used for the evaluation of the proposed study are Mean Squared Error (MSE), Peak Signal to Noise Ratio (PSNR), Compression Ratio (CR), and Structural Similarity Index (SSIM).
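
As a concrete example of one of the compared techniques, the sketch below implements PCA-based compression of a grayscale image via a truncated SVD; the number of retained components and function names are illustrative, not values from the study.

import numpy as np

def pca_compress(gray_image, n_components=50):
    """Keep only the top principal components of the (mean-centred) rows."""
    X = gray_image.astype(np.float64)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    # The compressed form is the scores, the reduced basis and the row mean.
    return scores, Vt[:n_components], mean

def pca_reconstruct(scores, basis, mean):
    return scores @ basis + mean

MSE, PSNR, CR and SSIM can then be computed between gray_image and the reconstruction to fill in one row of such a comparison.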
References
Journal ArticleDOI
TL;DR: Some of the most significant features of the standard are presented, such as region-of-interest coding, scalability, visual weighting, error resilience and file format aspects, and some comparative results are reported.
Abstract: One of the aims of the standardization committee has been the development of Part I, which could be used on a royalty- and fee-free basis. This is important for the standard to become widely accepted. The standardization process, which is coordinated by the JTC1/SC29/WG1 of the ISO/IEC, has already produced the international standard (IS) for Part I. In this article the structure of Part I of the JPEG 2000 standard is presented and performance comparisons with established standards are reported. This article is intended to serve as a tutorial for the JPEG 2000 standard. The main application areas and their requirements are given. The architecture of the standard follows with the description of the tiling, multicomponent transformations, wavelet transforms, quantization and entropy coding. Some of the most significant features of the standard are presented, such as region-of-interest coding, scalability, visual weighting, error resilience and file format aspects. Finally, some comparative results are reported and the future parts of the standard are discussed.
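
For readers who want to exercise Part I coding directly, the snippet below is a usage sketch that encodes an image as lossy JPEG 2000 with Pillow, assuming a Pillow build with OpenJPEG support; the file names and the target rate are placeholders.

from PIL import Image

img = Image.open("input.png").convert("RGB")   # placeholder input file
# "rates" quality mode with a layer value of 40 requests roughly 40:1 compression;
# irreversible=True selects the lossy wavelet path rather than the reversible one.
img.save("output.jp2", quality_mode="rates", quality_layers=[40], irreversible=True)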

1,842 citations


"A proposed algorithm for image comp..." refers background in this paper

  • ...Al-Shaykh, Iole Moccagatta, and Homer Chen discuss the JPEG-2000 image compression coding standard in their paper [13], where they intend to create a unified standard for various images....

    [...]

Journal ArticleDOI
TL;DR: LOCO-I, as discussed by the authors, is a low-complexity projection of the universal context modeling paradigm, matching its modeling unit to a simple coding unit; it is based on a simple fixed context model that approaches the capability of more complex universal techniques for capturing high-order dependencies.
Abstract: LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb (1966) type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.
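
As a small illustration of the Golomb-type codes that JPEG-LS builds on, the sketch below is a textbook Golomb-Rice encoder/decoder for non-negative residuals; it is not the standard's bitstream or its adaptive parameter-selection rule.

def rice_encode(value, k):
    """Unary-coded quotient followed by a k-bit remainder (power-of-two Golomb code)."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    q = bits.index("0")                      # length of the unary run
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

assert rice_decode(rice_encode(19, 2), 2) == 19   # "1111011" round-trips

In JPEG-LS, prediction residuals are first mapped to non-negative integers and the code parameter is chosen adaptively per context, which is what keeps the coder simple yet close to arithmetic-coding performance.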

1,668 citations


"A proposed algorithm for image comp..." refers background in this paper

  • ...In this work, [12] the authors discuss LOCO-I, the LOw COmplexity LOssless COmpression algorithm at the core of the ISO/ITU standard JPEG-LS, which achieves its compression performance with a simple fixed context model and an adaptively chosen extended family of Golomb-type codes....

    [...]

Journal ArticleDOI
TL;DR: Algorithms are described for computing various functions on a digital picture that depend on the distance to a given subset of the picture; they involve local operations performed repeatedly, "in parallel", on every picture element and its immediate neighbors.
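
As an illustration of such distance functions, the sketch below computes a city-block distance transform of a binary picture with two sequential passes of local operations; the classic treatments also describe a fully parallel iterative variant, and this version is only a common textbook rendering.

import numpy as np

def distance_transform(mask):
    """City-block distance from each pixel to the nearest True pixel in mask."""
    h, w = mask.shape
    d = np.where(mask, 0, h + w).astype(np.int64)    # h + w acts as "infinity"
    for y in range(h):                               # forward pass (top-left neighbours)
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):                   # backward pass (bottom-right neighbours)
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d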

855 citations


"A proposed algorithm for image comp..." refers background in this paper

  • ...Distance maps can be used for several purposes [9, 10]....

    [...]

Journal ArticleDOI
TL;DR: A new image multiresolution transform is suited for both lossless (reversible) and lossy compression; the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity.
Abstract: We propose a new image multiresolution transform that is suited for both lossless (reversible) and lossy compression. The new transformation is similar to the subband decomposition, but can be computed with only integer addition and bit-shift operations. During its calculation, the number of bits required to represent the transformed image is kept small through careful scaling and truncations. Numerical results show that the entropy obtained with the new transform is smaller than that obtained with predictive coding of similar complexity. In addition, we propose entropy-coding methods that exploit the multiresolution structure, and can efficiently compress the transformed image for progressive transmission (up to exact recovery). The lossless compression ratios are among the best in the literature, and simultaneously the rate versus distortion performance is comparable to those of the most efficient lossy compression methods.
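
As a concrete example of a reversible transform that uses only integer addition and bit shifts, the sketch below applies the classical S-transform to pairs of samples in a 1-D integer signal; the paper's actual multiresolution transform, scaling and truncation steps differ, so this only shows why such transforms are exactly invertible in integer arithmetic.

import numpy as np

def s_transform(row):
    """Split an even-length integer signal into low (averages) and high (differences) bands."""
    a, b = row[0::2].astype(np.int64), row[1::2].astype(np.int64)
    low = (a + b) >> 1            # truncated integer average
    high = a - b                  # difference
    return low, high

def inverse_s_transform(low, high):
    a = low + ((high + 1) >> 1)   # recovers a exactly despite the truncation
    b = a - high
    out = np.empty(low.size + high.size, dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

row = np.array([5, 2, 2, 5, 100, 101])
assert np.array_equal(inverse_s_transform(*s_transform(row)), row)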

738 citations


"A proposed algorithm for image comp..." refers background in this paper

  • ...While lossy compression is irreversible compression as the reconstructed image contains degradations compared to the original image.[8] Distance mapping is an essential concept in image computing domain....

    [...]

Proceedings ArticleDOI
28 Mar 2000
TL;DR: JPEG-2000, as discussed by the authors, is an emerging standard for still image compression; Part I defines the minimum compliant decoder and bitstream syntax, while Part II describes optional, value-added extensions.
Abstract: JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities provided by the standard. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach, as we believe it is more amenable to a compact description more easily understood by most readers.

391 citations