Proceedings ArticleDOI

Two-value image data compressing and recovering using improved neural network

Dai Kui, +2 more
- pp 491-494
TLDR
This paper presents the hybrid network architecture, its learning process, and the improved learning algorithm; the applied neural network models are the improved ART1 and feedforward types.
Abstract
Data compression and generalization capability are important characteristics of a neural network model. From this point of view, two-value (binary) image data compression and recovery with a hybrid neural network are examined experimentally. The applied neural network models are the improved ART1 and feedforward types. The hybrid network architecture, its learning process, and the improved learning algorithm are presented in this paper. The whole work was carried out on GKD-N²S², a large-scale general-purpose neural network simulation system, running on a SUN3 workstation. Some experimental results are also given and discussed.
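The page only summarizes the hybrid scheme, so the following is a minimal sketch of how ART1-style clustering can compress and recover a binary image: tiles of the image are clustered into binary prototypes under a vigilance test, each tile is stored as a prototype index (compression), and the image is rebuilt by pasting prototypes back (recovery). The block size, vigilance value, and simplified choice/learning rules are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def art1_fit(patterns, vigilance=0.7):
    """Cluster binary vectors with a simplified ART1 scheme.

    A pattern joins the best-matching category that passes the
    vigilance test; otherwise a new category is created.
    Returns (prototypes, assignments).
    """
    prototypes = []      # binary prototype vectors (category weights)
    assignments = []     # winning category index per input pattern
    for p in patterns:
        placed = False
        # rank candidate categories by overlap with the input (choice function)
        order = sorted(range(len(prototypes)),
                       key=lambda j: -np.sum(np.minimum(p, prototypes[j])))
        for j in order:
            match = np.sum(np.minimum(p, prototypes[j]))
            if match / max(np.sum(p), 1) >= vigilance:        # vigilance test
                prototypes[j] = np.minimum(p, prototypes[j])  # fast learning (AND)
                assignments.append(j)
                placed = True
                break
        if not placed:
            prototypes.append(p.copy())
            assignments.append(len(prototypes) - 1)
    return prototypes, assignments

def compress(image, block=4, vigilance=0.7):
    """Split a binary image into block x block tiles; keep one index per tile."""
    h, w = image.shape
    tiles = [image[i:i + block, j:j + block].ravel()
             for i in range(0, h, block) for j in range(0, w, block)]
    protos, idx = art1_fit(tiles, vigilance)
    return protos, idx, (h, w, block)

def recover(protos, idx, shape):
    """Rebuild the image by pasting each tile's prototype back in place."""
    h, w, block = shape
    out = np.zeros((h, w), dtype=int)
    k = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = protos[idx[k]].reshape(block, block)
            k += 1
    return out
```

The compression ratio comes from storing a small prototype table plus one index per tile instead of the raw bitmap; raising the vigilance trades compression for reconstruction fidelity.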


Citations
Patent

Apparatus and method for compressing data, apparatus and method for analyzing data, and data management system

TL;DR: In this article, the authors provide an apparatus and method for compressing data, an apparatus and method for analyzing data, and a data management system, which are capable of compressing large volumes of data and accurately reproducing the characteristics of the original data from the compressed data.
References
Journal ArticleDOI

Discrete Cosine Transform

TL;DR: In this article, a discrete cosine transform (DCT) is defined and an algorithm to compute it using the fast Fourier transform is developed, which can be used in the area of digital processing for the purposes of pattern recognition and Wiener filtering.
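The FFT-based computation mentioned in this TLDR can be sketched as follows: mirror the length-N signal into a length-2N even-symmetric sequence, take one FFT, and rotate each bin by a half-sample phase; the real part gives the (unnormalized) DCT-II coefficients. This is a standard identity, shown here alongside the direct O(N²) definition for checking; it is not code from the cited paper.

```python
import numpy as np

def dct2_via_fft(x):
    """Unnormalized DCT-II of a real signal via one length-2N FFT."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    y = np.concatenate([x, x[::-1]])                 # even-symmetric extension
    Y = np.fft.fft(y)[:N]                            # one FFT of length 2N
    phase = np.exp(-1j * np.pi * np.arange(N) / (2 * N))
    return 0.5 * (phase * Y).real                    # half-sample phase rotation

def dct2_direct(x):
    """Direct O(N^2) DCT-II definition, for verification."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
                     for k in range(N)])
```

The FFT route reduces the cost from O(N²) to O(N log N), which is what makes the DCT practical for block-based image coding.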
Journal ArticleDOI

A massively parallel architecture for a self-organizing neural pattern recognition machine

TL;DR: A neural network architecture for the learning of recognition categories is derived which circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
Journal ArticleDOI

Unified approach to quadratically convergent algorithms for function minimization

TL;DR: With this unified method, a generalized algorithm is derived and it is shown that all the existing conjugate-gradient algorithms and variable-metric algorithms can be obtained as particular cases.
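As a concrete instance of the conjugate-gradient family covered by this unified treatment, here is linear CG minimizing the quadratic ½xᵀAx − bᵀx for symmetric positive-definite A; successive search directions are made A-conjugate, so the method converges in at most n steps on an n-dimensional quadratic. The Fletcher-Reeves β update is one choice among the algorithms the paper unifies; this sketch is illustrative, not the paper's generalized algorithm.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize 0.5 x^T A x - b^T x, i.e. solve A x = b, for SPD A."""
    x = x0.astype(float)
    r = b - A @ x                          # residual = negative gradient
    d = r.copy()                           # initial search direction
    for _ in range(len(b)):
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (d @ A @ d)      # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves update
        d = r_new + beta * d               # new A-conjugate direction
        r = r_new
    return x
```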
Proceedings ArticleDOI

Image data compression using a neural network model

TL;DR: The applied network model is a feedforward-type, three-layered network with the backpropagation learning algorithm, and the implementation of this model on a hypercube parallel computer and its computation performance are described.
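The three-layered feedforward scheme described here can be sketched as a tiny autoencoder trained by backpropagation: a narrow hidden layer forms the compressed code, and the output layer reconstructs the input block. The layer sizes, learning rate, and training data below are illustrative assumptions, not the cited paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three-layer autoencoder: 16 inputs -> 4 hidden units -> 16 outputs,
# i.e. 4:1 compression of 4x4 binary blocks (sizes are illustrative).
n_in, n_hid = 16, 4
W1 = rng.normal(0, 0.5, (n_hid, n_in))    # encoder weights
W2 = rng.normal(0, 0.5, (n_in, n_hid))    # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.integers(0, 2, (32, n_in)).astype(float)  # random binary blocks

def mse():
    return np.mean((sigmoid(W2 @ sigmoid(W1 @ X.T)) - X.T) ** 2)

err_before = mse()
lr = 0.5
for _ in range(2000):
    H = sigmoid(W1 @ X.T)                  # hidden code (compressed form)
    Y = sigmoid(W2 @ H)                    # reconstruction
    E = Y - X.T                            # reconstruction error
    dY = E * Y * (1 - Y)                   # backprop through output sigmoid
    dH = (W2.T @ dY) * H * (1 - H)         # backprop through hidden sigmoid
    W2 -= lr * dY @ H.T / X.shape[0]       # gradient step, decoder
    W1 -= lr * dH @ X / X.shape[0]         # gradient step, encoder
err_after = mse()
```

After training, transmitting the hidden activations H instead of the raw blocks is what yields the compression; the decoder half recovers an approximation of the original image.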