
Showing papers by "Teuvo Kohonen" published in 2009


01 Jan 2009
TL;DR: The present addendum contains 2333 new articles on the SOM from the years 2002–2005, and provides a keyword index to help find the articles of interest.
Abstract: Two comprehensive lists of articles on the Self-Organizing Map (SOM) have been published earlier in the Neural Computing Surveys. They contain references to scientific papers that deal with analyses and applications of the SOM, or have essentially benefited from the SOM. The previous lists together contained 5384 papers from the years 1981–2001. The present addendum contains 2333 new articles on the SOM from the years 2002–2005. We have also provided a keyword index to help find the articles of interest.

71 citations


Book ChapterDOI
04 Jun 2009
TL;DR: The self-organizing map (SOM) is related to classical vector quantization (VQ), and the rms quantization error (QE) of a SOM can be smaller than that of a VQ with the same number of models and the same input data.
Abstract: The self-organizing map (SOM) is related to the classical vector quantization (VQ). As in the VQ, the SOM represents a distribution of input data vectors using a finite set of models. In both methods, the quantization error (QE) of an input vector can be expressed, e.g., as the Euclidean norm of the difference of the input vector and the best-matching model. Since the models in the VQ are usually optimized so that the sum of the squared QEs is minimized for the given set of training vectors, a common notion is that it should be impossible to find models that produce a smaller rms QE. It has therefore come as a surprise that in some cases the rms QE of a SOM can be smaller than that of a VQ with the same number of models and the same input data. This effect may manifest itself if the number of training vectors per model is on the order of small integers and the testing is made with an independent set of test vectors. An explanation seems to follow from statistics. Each model vector in the VQ is determined as the average of those training vectors that are mapped into the same Voronoi domain as the model vector. By contrast, each model vector of the SOM is determined as a weighted average of all of the training vectors that are mapped into the "topological" neighborhood around the corresponding model. The number of training vectors mapped into the neighborhood of a SOM model is generally much larger than the number mapped into a Voronoi domain around a model in the VQ. Since the SOM model vectors are then determined with a significantly higher statistical accuracy, the Voronoi domains of the SOM are significantly more regular, and the resulting rms QE may then be smaller than in the VQ. However, the effective dimensionality of the vectors must also be sufficiently high.

28 citations
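
The effect described in the abstract can be sketched numerically. The following Python snippet is an illustrative sketch only: the data distribution, model count, training-set size, and neighborhood schedule (names such as rms_qe, kmeans, and batch_som included) are assumptions for demonstration, not the chapter's experimental setup. It trains a plain VQ via k-means and a batch SOM on a small training set with only a few vectors per model, then compares their rms quantization errors on an independent test set.

import numpy as np

rng = np.random.default_rng(0)

def rms_qe(models, data):
    # Root-mean-square Euclidean distance from each vector to its best-matching model.
    d2 = ((data[:, None, :] - models[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1).mean())

def kmeans(data, k, iters=50):
    # Plain VQ: Lloyd's algorithm, minimizing the sum of squared QEs on the training set.
    models = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(iters):
        winners = ((data[:, None, :] - models[None, :, :]) ** 2).sum(2).argmin(1)
        for j in range(k):
            pts = data[winners == j]
            if len(pts):
                models[j] = pts.mean(axis=0)   # average over one Voronoi domain
    return models

def batch_som(data, rows, cols, iters=50, sigma0=2.0):
    # Batch SOM: each model is a neighborhood-weighted average of the training vectors.
    k = rows * cols
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    models = data[rng.choice(len(data), k, replace=False)].copy()
    for t in range(iters):
        sigma = sigma0 * 0.5 ** (t / iters) + 0.5           # shrinking neighborhood radius
        winners = ((data[:, None, :] - models[None, :, :]) ** 2).sum(2).argmin(1)
        gdist2 = ((grid[:, None, :] - grid[None, :, :]) ** 2).sum(2)
        h = np.exp(-gdist2 / (2 * sigma ** 2))              # Gaussian neighborhood function
        w = h[:, winners]                                   # (k, n) weights per model and sample
        models = (w @ data) / w.sum(axis=1, keepdims=True)  # weighted average over the neighborhood
    return models

# High-dimensional data with few training vectors per model (here ~3 per model).
dim, k, n_train, n_test = 20, 25, 75, 5000
train = rng.normal(size=(n_train, dim))
test = rng.normal(size=(n_test, dim))

vq = kmeans(train, k)
som = batch_som(train, 5, 5)
print(f"VQ  rms QE on independent test set: {rms_qe(vq, test):.4f}")
print(f"SOM rms QE on independent test set: {rms_qe(som, test):.4f}")

With few training vectors per model and a sufficiently high input dimensionality, the SOM line may print the smaller value, matching the counterintuitive effect the chapter explains; with abundant training data per model, the VQ typically wins, as the classical argument predicts.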


Journal ArticleDOI
TL;DR: It seems that both functions, viz. the formation of internal representations and their storage, can be implemented simultaneously by an adaptive, self-organizing neural structure consisting of a large number of neural units arranged into a two-dimensional network.
Abstract: The main purpose of thinking is to forecast phenomena that take place in the environment. To this end, humans and animals must refer to a complicated knowledge base which is somewhat vaguely called memory. One has to recognize two main problem areas in a discussion of memory: (1) the internal representations of sensory information in the brain networks, and (2) the memory mechanism itself. Most of the experimental and theoretical work has concentrated on the latter. Although it has been extremely difficult to detect memory traces experimentally, the storage mechanism is theoretically still the easier part of the problem. In contrast, it has remained almost a mystery how a physical system can automatically extract various kinds of abstractions from the vast number of vague sensory signals. This article contains some views and results about the formation of such internal representations in idealized neural networks, and their memorization. It seems that both of the above functions, viz. formation of the internal representations and their storage, can be implemented simultaneously by an adaptive, self-organizing neural structure which consists of a large number of neural units arranged into a two-dimensional network. All units of this network receive sensory signals through many input channels, and they are further provided with abundant mutual feedback in the lateral direction through adaptive neural fibers. If the spatial distribution of the feedback connections is distance-dependent in a particular way, and if the strengths of the connectivities change in proportion to the signals according to certain simple nonlinear laws, different neurons will become sensitized to different features of the input signals in an orderly fashion, making up various feature maps of the types found in the brain. On the other hand, activity patterns over such feature maps will be memorized by the same network in spatially distributed form and recalled associatively, in relation to an incomplete cue or key pattern.

7 citations
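
The storage-and-recall function described at the end of the abstract can be illustrated with a minimal sketch. The following Python snippet is not the laterally connected network analyzed in the article; it is a generic correlation-matrix (Hopfield-style) autoassociative memory, used here only to show distributed storage of activity patterns and their recall from an incomplete cue. All sizes and names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_units, n_patterns = 64, 4

# Random bipolar (+1/-1) activity patterns to be memorized.
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))

# Distributed storage: each pattern adds its outer product to one shared
# weight matrix, so every connection holds a superposition of all traces.
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0.0)

# Incomplete cue: the second half of one stored pattern is zeroed out.
cue = patterns[0].copy()
cue[n_units // 2:] = 0.0

# Associative recall: iterate the network dynamics until the state is stable.
state = cue
for _ in range(10):
    new_state = np.sign(W @ state)
    new_state[new_state == 0] = 1.0
    if np.array_equal(new_state, state):
        break
    state = new_state

overlap = (state @ patterns[0]) / n_units
print(f"overlap of recalled state with stored pattern: {overlap:+.2f}")

Every stored pattern is superimposed on the same weight matrix, yet iterating the network from a half-erased cue typically converges back to the complete pattern (overlap +1.00), which is the essence of associative recall from a key pattern.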