Journal ArticleDOI

A New Compression Technique Using an Artificial Neural Network

TL;DR: The proposed technique includes steps to break down large images into smaller windows and eliminate redundant information and employs a neural network trained by a non-iterative, direct solution method for image compression.
Abstract: In this paper, we present a direct solution method based neural network for image compression. The proposed technique includes steps to break down large images into smaller windows and eliminate redundant information. Furthermore, the technique employs a neural network trained by a non-iterative, direct solution method. An error backpropagation algorithm is also used to train the neural network, and the two training algorithms are compared. The proposed technique has been implemented in C on the SP2 Supercomputer, and a number of experiments have been conducted. The results obtained, such as the compression ratio and the transfer time of the compressed images, are presented in this paper.
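The window-based preprocessing the abstract describes can be sketched as below. The 8×8 window size and the NumPy block layout are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def image_to_windows(image, win=8):
    """Split a 2-D image into non-overlapping win x win blocks, flattened
    to rows so each block can be fed to a neural network independently."""
    h, w = image.shape
    # Trim the edges so the image tiles evenly into windows.
    h, w = h - h % win, w - w % win
    return (image[:h, :w]
            .reshape(h // win, win, w // win, win)
            .swapaxes(1, 2)
            .reshape(-1, win * win))

def windows_to_image(blocks, shape, win=8):
    """Inverse of image_to_windows, for a shape that tiles evenly."""
    h, w = shape
    return (blocks.reshape(h // win, w // win, win, win)
                  .swapaxes(1, 2)
                  .reshape(h, w))
```

Breaking the image into fixed windows keeps the network's input dimension small and constant regardless of the image size, which is what makes a single trained network reusable across large images.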
Citations
Journal Article
TL;DR: Image Compression using Artificial Neural Networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network.
Abstract: Image compression using artificial neural networks is a topic where research is being carried out in various directions towards achieving a generalized and economical network. Feedforward networks trained with the backpropagation algorithm, which uses steepest descent for error minimization, are popular, widely adopted, and directly applicable to image compression. Various research works are directed towards achieving quick convergence of the network without loss of quality in the restored image. In general, the images used for compression are of different types, such as dark images and high-intensity images. When these images are compressed using a backpropagation network, convergence takes a long time, because a given image may contain many distinct gray levels that differ only slightly from those of neighboring pixels. If the gray levels of the pixels and their neighbors are mapped so that the difference between them is minimized, both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped pixels are used, the backpropagation neural network yields a high compression ratio and converges quickly. Keywords—Back-propagation Neural Network, Cumulative Distribution Function, Correlation, Convergence.
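The CDF-based pixel remapping this abstract describes amounts to histogram equalization. A minimal sketch, where the 256-level assumption and the function name are mine:

```python
import numpy as np

def cdf_map(image, levels=256):
    """Remap integer gray levels through the image's empirical CDF
    (histogram equalization), as the abstract describes."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / image.size            # empirical CDF in [0, 1]
    # Index the CDF by pixel value and rescale back to the gray range.
    return np.round(cdf[image] * (levels - 1)).astype(np.uint8)
```

The mapping is monotone in the original gray level, so it is invertible wherever the CDF is strictly increasing, which is what allows the decompressed image to be mapped back.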

76 citations

Journal ArticleDOI
Udo Seiffert1
TL;DR: A technical system is developed that incorporates a priori information on typical image contents in image compression on the basis of artificial neural networks and thus increases compression performance for larger image data sets with frequently recurring image contents.

13 citations


Cites background from "A New Compression Technique Using a..."

  • ...Artificial neural networks [8] are an innovative tool for smart, adaptive, and trainable image and signal processing [10,2,16]....


Journal ArticleDOI
TL;DR: An investigation of the performance of ANN with GA in the application of image compression, for obtaining an optimal set of weights, reveals that the common assumption that GA outperforms gradient-descent-based learning does not hold for image compression.
Abstract: It is well known that classic image compression techniques such as JPEG and MPEG have serious limitations at high compression rates: the decompressed image gets blurry or indistinguishable. To overcome the problems associated with conventional methods, artificial neural network based methods can be used. The genetic algorithm (GA) is a powerful method for solving real-life problems, as has been proven by its application to a number of different problems, and there is considerable interest in combining GA with ANN for various reasons and at various levels. Trapping in local minima is one of the well-known problems of gradient-descent-based learning in ANN, and it can be addressed using GA. However, no work has been done to evaluate both learning methods from the image compression point of view. In this paper, we investigate the performance of ANN with GA in the application of image compression for obtaining an optimal set of weights. The direct method of compression has been applied with the neural network to gain the additional advantage of security for the compressed data. The experiments reveal that standard BP with proper parameters provides good generalization capability for compression and is much faster than earlier work in the literature based on the cumulative distribution function. Further, the results show that the general assumption that GA performs better than gradient-descent-based learning does not hold for image compression.
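The GA-over-weights idea can be sketched on a deliberately tiny model: a one-unit linear autoencoder standing in for the paper's full compression network. The population size, mutation scale, and truncation-selection scheme are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, X):
    """Negative reconstruction MSE of a one-unit linear autoencoder:
    each row x is compressed to the scalar w.x and rebuilt as (w.x) * w."""
    code = X @ w
    recon = np.outer(code, w)
    return -np.mean((X - recon) ** 2)

def ga_weights(X, dim, pop=40, gens=150, sigma=0.05):
    """Toy GA: truncation selection plus Gaussian mutation over the weight
    vector; the best quarter of each generation survives unchanged."""
    P = rng.normal(size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(w, X) for w in P])
        elite = P[np.argsort(scores)[-pop // 4:]]
        parents = elite[rng.integers(len(elite), size=pop - len(elite))]
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        P = np.vstack([elite, children])
    scores = np.array([fitness(w, X) for w in P])
    return P[int(np.argmax(scores))]
```

Because selection only needs a fitness value, not a gradient, this kind of search cannot get stuck at a gradient's local minimum; the abstract's finding is that, for this task, that robustness does not outweigh backpropagation's speed.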

9 citations


Cites methods from "A New Compression Technique Using a..."

  • ...Firstly, develop the existing method of compression using ANN technology so that improvements in the design of the existing method can be achieved....


01 Jan 1991

9 citations

Proceedings ArticleDOI
01 Sep 2020
TL;DR: Results are presented for image identification in the presence of noise, for optimization based on filtering out systematic error, and for NN extrapolation of the trend of the contour curves of pollen-grain images.
Abstract: A methodology has been developed for optimizing the identification of micro-objects based on neural networks (NN) of various topologies. It combines the synthesis of image-processing mechanisms; the extraction of statistical, dynamic, and specific characteristics; contour selection and segmentation; the selection of reference points and reduction of redundant points; accounting for systematic error factors; and the choice of an adequate model with tuning of its variables. Methods and algorithms are proposed for deterministic and multivariate analysis, for obtaining the coefficients of influence and elasticity of factors, and for approximating contours represented by time series. Modified NN component schemes and training algorithms were developed, along with a software package (SP) for the visualization, recognition, and classification of pollen-grain images; a hybrid identification model was implemented that takes into account the nonlinearity of factor effects under a priori insufficiency and uncertainty of parameters. The efficiency of the SP was studied using a three-layer NN with forward and backward error propagation, supervised and unsupervised learning algorithms, and a Kohonen network with procedures for vector quantization, clustering, segmentation, and the formation of sliding windows. Results were obtained for image identification in the presence of noise, for optimization based on filtering out systematic error, and for NN extrapolation of the trend of the contour curves of pollen-grain images.

7 citations

References
Book
01 Jan 1991
TL;DR: In this article, the authors present a guide to data compression techniques, including Shannon-Fano and Huffman coding techniques, lossy compression, JPEG compression algorithm, and fractal compression.
Abstract: From the Publisher: Topics in this guide to data compression techniques include the Shannon-Fano and Huffman coding techniques, Lossy compression, the JPEG compression algorithm, and fractal compression. Readers also study adaptive Huffman coding, arithmetic coding, dictionary compression methods, and learn to write C programs for nearly any environment. The disk illustrates each learned technique and demonstrates how data compression works.
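The Huffman coding this book covers can be sketched with Python's heapq; the dictionary-of-partial-codes representation (rather than an explicit tree) is an illustrative choice of mine:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-free Huffman code table for the symbols in data:
    repeatedly merge the two least frequent subtrees."""
    freq = Counter(data)
    if len(freq) == 1:                         # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreak, {symbol: partial code}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

Frequent symbols end up near the root and so get short codes, which is the whole source of the compression gain.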

618 citations

Book
01 Jul 1991
TL;DR: In this paper, the authors present a guide to data compression techniques, including Shannon-Fano and Huffman coding techniques, lossy compression, JPEG compression algorithm, and fractal compression.
Abstract: From the Publisher: Topics in this guide to data compression techniques include the Shannon-Fano and Huffman coding techniques, Lossy compression, the JPEG compression algorithm, and fractal compression. Readers also study adaptive Huffman coding, arithmetic coding, dictionary compression methods, and learn to write C programs for nearly any environment. The disk illustrates each learned technique and demonstrates how data compression works.

548 citations

Journal ArticleDOI
TL;DR: A new method called Dynamic Node Creation (DNC) automatically grows BP networks until the target problem is solved; it yielded a solution for every problem tried.
Abstract: This paper introduces a new method called Dynamic Node Creation (DNC) which automatically grows BP networks until the target problem is solved. DNC sequentially adds nodes one at a time to the hidden layer(s) of the network until the desired approximation accuracy is achieved. Simulation results for parity, symmetry, binary addition, and the encoder problem are presented. The procedure was capable of finding known minimal topologies in many cases, and was always within three nodes of the minimum. Computational expense for finding the solutions was comparable to training normal BP networks with the same final topologies. Starting out with fewer nodes than needed to solve the problem actually seems to help find a solution. The method yielded a solution for every problem tried.
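The DNC loop described above can be sketched as follows. The training hyperparameters are illustrative assumptions; keeping previously trained weights while adding a freshly initialized node follows the paper's sequential-growth idea:

```python
import numpy as np

rng = np.random.default_rng(1)

def train(X, y, W1, W2, epochs=4000, lr=0.3):
    """Plain batch backprop on a one-hidden-layer tanh network (linear output)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # inputs with bias column
    for _ in range(epochs):
        H = np.tanh(Xb @ W1)
        Hb = np.hstack([H, np.ones((len(X), 1))])
        err = Hb @ W2 - y
        gH = (err @ W2[:-1].T) * (1 - H ** 2)        # backprop through tanh
        W2 -= lr * Hb.T @ err / len(X)
        W1 -= lr * Xb.T @ gH / len(X)
    H = np.tanh(Xb @ W1)
    out = np.hstack([H, np.ones((len(X), 1))]) @ W2
    return W1, W2, float(np.mean((out - y) ** 2))

def dynamic_node_creation(X, y, target_mse=0.01, max_hidden=8):
    """Grow the hidden layer one node at a time: train, check the error,
    and add a new node until the target accuracy (or max size) is reached."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in + 1, 1))
    W2 = rng.normal(scale=0.5, size=(2, 1))
    for hidden in range(1, max_hidden + 1):
        W1, W2, mse = train(X, y, W1, W2)
        if mse <= target_mse or hidden == max_hidden:
            return hidden, mse
        # Add one hidden node, keeping all previously trained weights.
        W1 = np.hstack([W1, rng.normal(scale=0.5, size=(n_in + 1, 1))])
        W2 = np.vstack([W2[:-1], rng.normal(scale=0.5, size=(1, 1)), W2[-1:]])
```

Starting small and growing matches the paper's observation that beginning with fewer nodes than ultimately needed can actually help the search find a solution.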

448 citations

Journal Article
TL;DR: The applications of digital data compression and the major components of compression systems are described and data modeling is discussed, and the role of entropy and data statistics is examined.
Abstract: The applications of digital data compression and the major components of compression systems are described. Data modeling is discussed, and the role of entropy and data statistics is examined. Gray-scale image modeling is used to illustrate some of these mechanisms. The coding mechanisms are examined, and prefix codes are explained. Arithmetic coding is considered.
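The role of entropy examined here is that it lower-bounds the average code length any lossless coder can achieve. A minimal computation over the empirical symbol distribution:

```python
import math
from collections import Counter

def entropy_bits(data):
    """Shannon entropy of the symbol distribution in data, in bits per
    symbol: a lower bound on the average lossless code length."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

Comparing a coder's actual bits-per-symbol against this bound is a standard way to judge how close a prefix or arithmetic code comes to optimal.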

440 citations

Journal ArticleDOI
01 Feb 1995
TL;DR: This paper presents a tutorial overview of neural networks as signal processing tools for image compression, to which they are well suited due to their massively parallel and distributed architecture.
Abstract: This paper presents a tutorial overview of neural networks as signal processing tools for image compression. They are well suited to the problem of image compression due to their massively parallel and distributed architecture. Their characteristics are analogous to some of the features of our own visual system, which allow us to process visual information with much ease. For example, multilayer perceptrons can be used as nonlinear predictors in differential pulse-code modulation (DPCM). Such predictors have been shown to increase the predictive gain relative to a linear predictor. Another active area of research is the application of Hebbian learning to the extraction of principal components, which are the basis vectors for the optimal linear Karhunen-Loeve transform (KLT). These learning algorithms are iterative, have some computational advantages over standard eigendecomposition techniques, and can be made to adapt to changes in the input signal. Yet another model, the self-organizing feature map (SOFM), has been used with a great deal of success in the design of codebooks for vector quantization (VQ). The resulting codebooks are less sensitive to initial conditions than those produced by the standard LBG algorithm, and the topological ordering of the entries can be exploited to further increase coding efficiency and reduce computational complexity.
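The Hebbian principal-component extraction mentioned above can be sketched with Oja's rule; the learning rate and epoch count here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def oja_first_pc(X, lr=0.01, epochs=200):
    """Oja's Hebbian rule, w += lr * y * (x - y * w) with y = w.x,
    converges to the first principal component of zero-mean data."""
    X = X - X.mean(axis=0)                 # principal components need zero mean
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                      # neuron output (Hebbian activity)
            w += lr * y * (x - y * w)      # Hebbian growth with decay term
    return w / np.linalg.norm(w)
```

Unlike a batch eigendecomposition, each update touches only one sample, which is what lets the network track slow changes in the input statistics, as the abstract notes.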

283 citations


"A New Compression Technique Using a..." refers background or methods in this paper

  • ...There have already been an exhaustive number of papers published applying ANNs to image compression [15-19]....


  • ...Following the removal of redundant data, a more compressed image or signal may be transmitted [15]....


  • ...Traditional techniques that have already been identified for data compression include predictive coding, transform coding, and vector quantization [15], [19]....


  • ...Some of the more notable in the literature are: nested training algorithms used with symmetrical multilayer neural networks [19], Self organising maps, for codebook generation [15], principal component analysis networks [14], backpropagation networks [18], and the adaptive principal component extraction algorithm [17]....
