
Showing papers by "Teuvo Kohonen" published in 1988


Journal ArticleDOI
TL;DR: In this paper, the authors describe a self-organizing system in which the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events.
Abstract: This work contains a theoretical study and computer simulations of a new self-organizing process. The principal discovery is that in a simple network of adaptive physical elements which receives signals from a primary event space, the signal representations are automatically mapped onto a set of output responses in such a way that the responses acquire the same topological order as that of the primary events. In other words, a principle has been discovered which facilitates the automatic formation of topologically correct maps of features of observable events. The basic self-organizing system is a one- or two-dimensional array of processing units resembling a network of threshold-logic units, and characterized by short-range lateral feedback between neighbouring units. Several types of computer simulations are used to demonstrate the ordering process as well as the conditions under which it fails.
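As a concrete illustration, the following is a minimal sketch of such an ordering process in Python. It uses the common algorithmic shortcut (an explicit neighbourhood function) rather than the lateral-feedback network simulated in the paper, and the array size, decay schedules and Gaussian kernel are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the ordering process in its common algorithmic form:
# a 1-D array of units whose reference vectors are pulled toward each input,
# with an update strength that decays with array distance from the winner.
rng = np.random.default_rng(0)
n_units, dim = 20, 2
weights = rng.random((n_units, dim))            # random initial reference vectors

def train(data, epochs=50, lr0=0.5, radius0=5.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)             # decaying learning rate
        radius = 1 + radius0 * (1 - t / epochs) # shrinking neighbourhood
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            d = np.abs(np.arange(n_units) - winner)   # distance along the array
            h = np.exp(-d**2 / (2 * radius**2))       # neighbourhood kernel
            weights[:] += lr * h[:, None] * (x - weights)

train(rng.random((500, 2)))
# After training, units adjacent in the array respond to nearby inputs:
# the responses have acquired the topological order of the input space.
```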

8,247 citations


Journal ArticleDOI
TL;DR: A brief survey of the motivations, fundamentals, and applications of artificial neural networks, as well as some detailed analytical expressions for their theory.

1,418 citations


Book
01 Jan 1988
TL;DR: In this article, a new model for associative memory based on a correlation matrix is proposed, which is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories.
Abstract: A new model for associative memory, based on a correlation matrix, is suggested. In this model information is accumulated on memory elements as products of component data. Denoting a key vector by $q^{(p)}$, and the data associated with it by another vector $x^{(p)}$, the pairs $(q^{(p)}, x^{(p)})$ are memorized in the form of a matrix $M_{xq} = c \sum_p x^{(p)} (q^{(p)})^T$, where c is a constant. A randomly selected subset of the elements of $M_{xq}$ can also be used for memorizing. The recalling of a particular datum $x^{(r)}$ is made by the transformation $x^{(r)} = M_{xq} q^{(r)}$. This model is failure tolerant and facilitates associative search of information; these are properties that are usually assigned to holographic memories. Two classes of memories are discussed: the complete correlation matrix memory (CCMM), and randomly organized incomplete correlation matrix memories (ICMM). The data recalled from the latter are stochastic variables, but the fidelity of recall is shown to have a deterministic limit if the number of memory elements grows without limit. A special case of correlation matrix memories is the auto-associative memory, in which any part of the memorized information can be used as a key. The memories are selective with respect to accumulated data. The ICMM exhibits adaptive improvement under certain circumstances. It is also suggested that correlation matrix memories could be applied for the classification of data.
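A minimal sketch of the complete correlation matrix memory in Python, assuming random, nearly orthogonal keys; the dimensions and the value of the constant c below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of a complete correlation matrix memory (CCMM): store the pairs
# (q_p, x_p) as M = c * sum_p x_p q_p^T and recall with x_hat = M q.
rng = np.random.default_rng(1)
dim_q, dim_x, n_pairs = 64, 32, 5               # illustrative dimensions
Q = rng.standard_normal((n_pairs, dim_q))       # keys: random, nearly orthogonal
X = rng.standard_normal((n_pairs, dim_x))       # data associated with the keys

c = 1.0 / dim_q                                 # the constant c (assumed value)
M = c * sum(np.outer(x, q) for x, q in zip(X, Q))   # accumulate outer products

x_hat = M @ Q[0]                                # recall datum stored under key 0
print(np.corrcoef(x_hat, X[0])[0, 1])           # near 1: crosstalk is small
```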

800 citations


Journal ArticleDOI
TL;DR: A speaker-adaptive system that transcribes dictation using an unlimited vocabulary is presented, based on a neural network processor for the recognition of phonetic units of speech.

Abstract: The factors that make speech recognition difficult are examined, and the potential of neural computers for this purpose is discussed. A speaker-adaptive system that transcribes dictation using an unlimited vocabulary is presented; it is based on a neural network processor for the recognition of phonetic units of speech. The acoustic preprocessing, vector quantization, neural network model, and shortcut learning algorithm used are described. The utilization of phonotopic maps and of postprocessing in symbolic form are discussed. Hardware implementations and performance of the neural networks are considered.
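As a rough illustration, the classification step on such a phonotopic map can be sketched as a nearest-unit lookup; everything below (map size, channel count, toy labels) is a hypothetical stand-in for the system's actual components.

```python
import numpy as np

# Hypothetical recognition step on a phonotopic map: each map unit holds a
# reference spectral vector and a phoneme label; a speech frame gets the
# label of the best-matching unit. Map size, labels and the 15-channel
# frames are illustrative assumptions, not the system's actual parameters.
rng = np.random.default_rng(4)
units = rng.random((8, 8, 15))                  # 8x8 map of 15-channel spectra
phoneme = np.array(list("aeiounmkpstlrhj" * 5))[:64].reshape(8, 8)  # toy labels

def classify(frame):
    dist = np.linalg.norm(units - frame, axis=2)     # distance to every unit
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    return phoneme[i, j]                             # label of the winning unit

print(classify(rng.random(15)))                 # one frame -> one phoneme symbol
```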

647 citations


01 Jan 1988
TL;DR: In this article, three basic types of neural-like networks (Backpropagation network, Boltzmann machine, and Learning Vector Quantization) were applied to two representative artificial statistical pattern recognition tasks, each with varying dimensionality.
Abstract: Successful recognition of natural signals, e.g., speech recognition, requires substantial statistical pattern recognition capabilities. This is at odds with the fact that the bulk of work on applying neural networks to pattern recognition has concentrated on non-statistical problems. Three basic types of neural-like networks (Backpropagation network, Boltzmann machine, and Learning Vector Quantization) were applied in this work to two representative artificial statistical pattern recognition tasks, each with varying dimensionality. The performance of each network's different approach to solving the tasks was evaluated and compared, both to the performance of the other two networks and to the theoretical limit. The Learning Vector Quantization was further benchmarked against the parametric Bayes classifier and the k-nearest-neighbor classifier using natural speech data. A novel Learning Vector Quantization classifier (LVQ2) is introduced for the first time in this work.
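For orientation, here is a sketch of the basic LVQ1 update rule on synthetic data; the class distributions, codebook size and schedules are illustrative assumptions, and the LVQ2 window criterion is only noted in the comments.

```python
import numpy as np

# Sketch of the basic LVQ1 rule: move the nearest codebook vector toward the
# input if its class matches, away if it does not. (LVQ2 refines this by also
# adjusting the runner-up near the decision border.) Data are illustrative.
rng = np.random.default_rng(2)

def lvq1(X, y, codebook, labels, epochs=30, lr0=0.1):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decreasing learning rate
        for x, cls in zip(X, y):
            c = np.argmin(np.linalg.norm(codebook - x, axis=1))  # nearest vector
            sign = 1.0 if labels[c] == cls else -1.0 # attract or repel
            codebook[c] += sign * lr * (x - codebook[c])
    return codebook

# Two Gaussian classes; four codebook vectors initialized from the data.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
idx = rng.choice(len(X), 4, replace=False)
codebook = lvq1(X, y, X[idx].copy(), y[idx])
pred = y[idx][np.argmin(np.linalg.norm(codebook - X[:, None], axis=2), axis=1)]
print((pred == y).mean())              # classification accuracy on the data
```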

413 citations


Book ChapterDOI
01 Jan 1988
TL;DR: A property which is commonplace in the brain but which has always been ignored in the “learning machines” is a meaningful order of their processing units.
Abstract: A property which is commonplace in the brain but which has always been ignored in "learning machines" is a meaningful order of their processing units. Here "ordering" usually does not mean physically moving units to new places. The units may even be structurally identical; the specialized role of each is determined by its internal parameters, which are made to change in certain processes. It then appears as if specialized units with a meaningful organization had been produced.

114 citations



Proceedings ArticleDOI
11 Apr 1988
TL;DR: A microprocessor-based real-time speech recognition system that is able to produce orthographic transcriptions for arbitrary words or phrases uttered in Finnish or Japanese and can also be used as a large-vocabulary isolated word recognizer.
Abstract: A microprocessor-based real-time speech recognition system is described. It is able to produce orthographic transcriptions for arbitrary words or phrases uttered in Finnish or Japanese. It can also be used as a large-vocabulary isolated word recognizer. The acoustic processor of the system, which transcribes speech into phonemes, is based on neural network principles. The so-called phonotopic maps constructed by a self-organizing process are employed. The coarticulation effects in phonetic transcriptions are compensated by means of automatically derived rules which describe the morphology of errors at the acoustic processor output. Without applying any language model, the recognition result is 92 to 97 per cent correct at the level of individual letters.

44 citations



Book ChapterDOI
01 Jan 1988
TL;DR: This chapter concentrates on optical associative memories, which are of the distributed type and can be divided into two categories; the highly parallel optical computers (with a two-dimensional topology of their distributed "processing elements") fall outside the scope of this book.
Abstract: Development of optical computing has been particularly rapid in recent years; cf., e.g., an extensive survey in a special issue of IEEE Spectrum, August 1986 ("Optical Computing: A Field in Flux"). Of course, optical fibers have been adopted in communication technology, and optical archival storages are spreading to various fields, but in addition, the development of new optically active materials has opened completely new possibilities for performing bulk computations in distributed media. There exist two lines of development: optical associative memories, and the highly parallel optical computers (with a two-dimensional topology of their distributed "processing elements"). As the latter fall outside the scope of this book, we may concentrate on the associative memories. Being of the distributed type, they can be divided into two categories. In the first of them, the matrix operations for associative recall are performed by multiplying light intensities using some kind of light-modulating matrix arrays, and summing up convergent light beams with discrete photosensitive elements. The second type is based on holography.

15 citations


Book ChapterDOI
01 Jan 1988
TL;DR: "Neural computing" has recently become a popular science, following the linguistically and computationally oriented Artificial Intelligence research; in addition to enthusiasm, however, one needs knowledge of the facts and ideas that have been known for some time, and a general philosophy of what this new line of thinking is and what "neural computers" actually can do.
Abstract: "Neural computing" has recently become a popular science, following the linguistically and computationally oriented Artificial Intelligence research. It has aroused many expectations of intriguing new types of applications. In addition to great enthusiasm, however, one also needs some knowledge of the facts and ideas that have already been known for some time, and a general philosophy telling what this new line of thinking is and what "neural computers" actually can do.

Book ChapterDOI
01 Jan 1988
TL;DR: The fact that biological memory closely interacts with mental processes is an old notion: "Thus memory belongs to the faculty of the soul to which imagination belongs; all objects which are imaginable are essentially objects of memory; all those that necessarily involve images are objects of memory incidentally," as mentioned in this paper.
Abstract: In order to control behaviour, biological brains must be able to form internal models of the sensory environment and its history. Such a "miniature environment", to which all decisions are related, is provided by memory. The recollections from memory occur as operands in thinking and problem-solving operations. In an exact scientific approach to these phenomena, a somewhat confusing aspect is that thinking and reminiscence are mental operations associated with the cognitive ability of living organisms. The fact that biological memory closely interacts with mental processes is an old notion: "Thus memory belongs to the faculty of the soul to which imagination belongs; all objects which are imaginable are essentially objects of memory; all those that necessarily involve images are objects of memory incidentally." (Aristotle, 384-322 B.C.)

Book ChapterDOI
01 Jan 1988
TL;DR: This chapter aims at a critical analysis of the early ideas of implementing artificial intelligence with formal models of neurons and Perceptron networks, mainly with an objective to find amendments to those ideas.
Abstract: The early works around 1960 on learning machines may be characterized as attempts to implement artificial intelligence using formal models of neurons and Perceptron networks, obviously in the hope that more and more complex functions would gradually evolve from such structures. There is no doubt that biological organisms have that fundamental organization. Why, then, was success with artificial constructs not as straightforward as expected? Below I am aiming at a critical analysis, mainly with an objective to find amendments to the early ideas.

Book ChapterDOI
01 Jan 1988
TL;DR: Apparently the memory functions of biological organisms have been implemented in the neural realms; but in spite of extensive experimental research pursued on biological memory, it seems that many central questions concerning its functional and organizational principles have remained unanswered.
Abstract: Apparently the memory functions of biological organisms have been implemented in the neural realms; but in spite of extensive experimental research pursued on biological memory, it seems that many central questions concerning its functional and organizational principles have remained unanswered. In view of the theoretical knowledge recently acquired about the information processing principles of adaptive networks, it seems that the experimental results need a new theory to which they can be related.

Book ChapterDOI
01 Jan 1988
TL;DR: This paper focuses on the implementation of classical learning networks, which are signal-transforming systems whose parameters are slowly changed by the effect of signal energy.
Abstract: For the implementation of information processes, some kind of fundamental physical systems which transform signals and patterns are needed. In the simplest cases the relationship between the representations of information at input and output, respectively, can be described in terms of a transformation function. Such systems are often named filters. The classical learning networks are signal-transforming systems the parameters of which are slowly changed by the effect of signal energy. Such systems are also named adaptive, and they may automatically adjust themselves to become optimally selective with respect to certain signal or pattern statistics.
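A minimal sketch of such an adaptive, signal-transforming filter, using the classic LMS rule as an example of parameters that change slowly under the effect of the signal; the unknown target system, step size and signal statistics are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an adaptive filter in the above sense: the parameters
# drift slowly under the influence of the signal itself, here via the classic
# LMS rule. The unknown system, step size and signals are illustrative.
rng = np.random.default_rng(3)
n_taps, mu = 4, 0.01
w = np.zeros(n_taps)                       # filter parameters, adapted online
target = np.array([0.5, -0.3, 0.2, 0.1])   # unknown system the filter tracks

for _ in range(5000):
    x = rng.standard_normal(n_taps)        # window of the input signal
    d = target @ x                         # desired response
    e = d - w @ x                          # instantaneous error
    w += mu * e * x                        # LMS step toward the signal statistics

print(np.round(w, 2))                      # w has converged near the target
```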