Author

K. P. Unnikrishnan

Bio: K. P. Unnikrishnan is an academic researcher from General Motors. The author has contributed to research in topics including spike trains and artificial neural networks, has an h-index of 19, and has co-authored 41 publications receiving 1582 citations. Previous affiliations of K. P. Unnikrishnan include Wayne State University and NorthShore University HealthSystem.

Papers
Journal ArticleDOI
TL;DR: It is argued that, for satisfactory modeling of dynamical systems, neural networks should be endowed with internal memory, which allows them to identify systems whose order or delay is unknown.
Abstract: This paper discusses memory neuron networks as models for identification and adaptive control of nonlinear dynamical systems. These are a class of recurrent networks obtained by adding trainable temporal elements to feedforward networks, which makes the output history-sensitive. By virtue of this capability, these networks can identify dynamical systems without having to be explicitly fed with past inputs and outputs. Thus, they can identify systems whose order is unknown or systems with unknown delay. It is argued that for satisfactory modeling of dynamical systems, neural networks should be endowed with such internal memory. The paper presents a preliminary analysis of the learning algorithm, providing theoretical justification for the identification method. Methods for adaptive control of nonlinear systems using these networks are presented. Through extensive simulations, these models are shown to be effective both for identification and model reference adaptive control of nonlinear systems.
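
The abstract describes the memory-neuron architecture only in words. As a rough illustration, here is a minimal sketch, assuming one exponentially weighted memory trace per input unit with a trainable coefficient; this reading and all names are ours, not the paper's equations:

```python
# Minimal sketch (not the authors' code) of a "memory neuron" layer: each unit
# keeps an exponentially weighted trace of its past activation,
#   v(t) = alpha * x(t-1) + (1 - alpha) * v(t-1),
# with alpha trainable, so the layer output depends on input history without
# explicit tapped delay lines.
import numpy as np

class MemoryNeuronLayer:
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((n_out, n_in))    # weights on current inputs
        self.Wm = 0.1 * rng.standard_normal((n_out, n_in))   # weights on memory traces
        self.alpha = np.full(n_in, 0.5)                      # trainable memory coefficients
        self.v = np.zeros(n_in)                              # memory state
        self.x_prev = np.zeros(n_in)

    def step(self, x):
        # update the traces from the previous inputs, then compute the output
        self.v = self.alpha * self.x_prev + (1.0 - self.alpha) * self.v
        self.x_prev = x.copy()
        return np.tanh(self.W @ x + self.Wm @ self.v)

# usage: feed one input vector per time step; the response is history-sensitive
layer = MemoryNeuronLayer(n_in=2, n_out=3)
for t in range(5):
    y = layer.step(np.array([np.sin(0.3 * t), np.cos(0.3 * t)]))
```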

355 citations

Proceedings ArticleDOI
12 Aug 2007
TL;DR: This paper presents new algorithms for frequent episode discovery under the non-overlapped-occurrences-based frequency definition, and shows through simulation experiments that these algorithms are very efficient.
Abstract: Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has some important theoretical properties connecting frequent episode discovery with HMM learning. This paper presents some new algorithms for frequent episode discovery under this non-overlapped occurrences-based frequency definition. The algorithms presented here are better (by a factor of N, where N denotes the size of episodes being discovered) in terms of both time and space complexities when compared to existing methods for frequent episode discovery. We show through simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexities for the task of frequent episode discovery.
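
The frequency definition in the abstract has a simple operational reading: count occurrences greedily from left to right, never letting two counted occurrences share an event. Below is a minimal sketch of that counting idea for a serial episode; it is illustrative only, not the paper's algorithms or their data structures:

```python
# Hedged sketch: count non-overlapped occurrences of a serial episode
# (e.g. A -> B -> C) in a single event sequence with one greedy left-to-right
# pass. When the episode completes, the count is incremented and the matcher
# resets, so counted occurrences never share events.
def count_non_overlapped(episode, events):
    count, pos = 0, 0                  # pos = index of the next episode event needed
    for e in events:
        if e == episode[pos]:
            pos += 1
            if pos == len(episode):    # one full occurrence found
                count += 1
                pos = 0                # reset: the next occurrence reuses no events
    return count

# usage
print(count_non_overlapped(("A", "B", "C"),
                           ["A", "X", "B", "C", "A", "B", "D", "C"]))  # prints 2
```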

146 citations

Journal ArticleDOI
TL;DR: A special class of discrete hidden Markov models, called episode generating HMMs (EGHs), is introduced, and each episode is associated with a unique EGH; it is proved that, given any two episodes, the EGH more likely to generate a given data sequence is the one associated with the more frequent episode.
Abstract: This paper establishes a formal connection between two common, but previously unconnected methods for analyzing data streams: discovering frequent episodes in a computer science framework and learning generative models in a statistics framework. We introduce a special class of discrete hidden Markov models (HMMs), called episode generating HMMs (EGHs), and associate each episode with a unique EGH. We prove that, given any two episodes, the EGH that is more likely to generate a given data sequence is the one associated with the more frequent episode. To be able to establish such a relationship, we define a new measure of frequency of an episode, based on what we call nonoverlapping occurrences of the episode in the data. An efficient algorithm is proposed for counting the frequencies for a set of episodes. Through extensive simulations, we show that our algorithm is both effective and more efficient than current methods for frequent episode discovery. We also show how the association between frequent episodes and EGHs can be exploited to assess the significance of frequent episodes discovered and illustrate empirically how this idea may be used to improve the efficiency of the frequent episode discovery.

142 citations

Journal ArticleDOI
TL;DR: This paper presents a learning algorithm for neural networks, called Alopex, which uses local correlations between changes in individual weights and changes in the global error measure, and shows that learning times are comparable to those for standard gradient descent methods.
Abstract: We present a learning algorithm for neural networks, called Alopex. Instead of error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about transfer functions of individual neurons, and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feedforward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and it uses a “temperature” parameter in a manner similar to that in simulated annealing. A heuristic “annealing schedule” is presented that is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feedforward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes and advantages of appropriate error measures are illustrated using a variety of problems.
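
As a concrete illustration of the correlation-based update described above, here is a minimal sketch; the sign conventions, step size, toy error surface, and fixed temperature are our illustrative choices, not the paper's exact formulation or its annealing schedule:

```python
# Hedged sketch of the Alopex idea: gradient-free, per-weight steps of fixed
# size +/-delta, biased toward the direction whose recent change correlated
# with a decrease in the global error; a temperature T controls randomness.
import numpy as np

def alopex_step(w, w_prev, err, err_prev, delta, T, rng):
    corr = (w - w_prev) * (err - err_prev)     # per-weight correlation with error change
    p_down = 1.0 / (1.0 + np.exp(-corr / T))   # high if that direction raised the error
    step = np.where(rng.random(w.shape) < p_down, -delta, +delta)
    return w + step

# usage on a toy quadratic error surface E(w) = ||w - target||^2
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
w_prev = np.zeros(3)
w = w_prev + 0.01
err_prev = np.sum((w_prev - target) ** 2)
for _ in range(5000):
    err = np.sum((w - target) ** 2)
    w, w_prev, err_prev = alopex_step(w, w_prev, err, err_prev,
                                      delta=0.01, T=0.001, rng=rng), w, err
print(np.round(w, 2))   # drifts toward target as error-reducing moves are reinforced
```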

117 citations

Journal ArticleDOI
10 Jul 1987 - Science
TL;DR: A model is proposed in which the feedback pathways serve to modify afferent sensory stimuli in ways that enhance and complete sensory input patterns, suppress irrelevant features, and generate quasi-sensory patterns when afferent stimulation is weak or absent.
Abstract: The mammalian visual system has a hierarchic structure with extensive reciprocal connections. A model is proposed in which the feedback pathways serve to modify afferent sensory stimuli in ways that enhance and complete sensory input patterns, suppress irrelevant features, and generate quasi-sensory patterns when afferent stimulation is weak or absent. Such inversion of sensory coding and feature extraction can be achieved by optimization processes in which scalar responses derived from high-level neural analyzers are used as cost functions to modify the filter properties of more peripheral sensory relays. An optimization algorithm, Alopex, which is used in the model, is readily implemented with known neural circuitry. The functioning of the system is investigated by computer simulations.

97 citations


Cited by
Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
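
The abstract describes the book's methods only at a high level. As a concrete taste of the temporal-difference idea it covers, here is a minimal tabular Q-learning sketch on a hypothetical five-state chain; this is an illustration of ours, not an example from the book:

```python
# Hedged sketch: tabular Q-learning on a toy chain of states 0..4, where
# reaching state 4 gives reward 1. The agent learns action values with the
# one-step temporal-difference (bootstrapped) update.
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                                   # step left or right along the chain
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1                    # learning rate, discount, exploration

for _ in range(2000):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda b: Q[(s, b)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # TD update: move Q(s, a) toward the bootstrapped one-step return
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, b)] for b in ACTIONS) - Q[(s, a)])
        s = s_next

print(max(ACTIONS, key=lambda b: Q[(0, b)]))         # greedy action at state 0: +1
```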

37,989 citations

Journal ArticleDOI
TL;DR: This paper attempts to summarise and review the recent research and developments in diagnostics and prognostics of mechanical systems implementing condition-based maintenance (CBM), with emphasis on models, algorithms and technologies for data processing and maintenance decision-making.

3,848 citations

Journal ArticleDOI
TL;DR: The aims of this article are to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective and to provide a principled way to understand many aspects of cortical organization and responses.
Abstract: This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. By formulating the original ideas of Helmholtz on perception, in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. It turns out that the problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain’s free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain’s attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses. The aim of this article is to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective. In terms of cortical architectures, the theoretical treatment predicts that sensory cortex should be arranged hierarchically, that connections should be reciprocal and that forward and backward connections should show a functional asymmetry (forward connections are driving, whereas backward connections are both driving and modulatory). In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology, it accounts for classical and extra classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena such as repetition suppression, mismatch negativity (MMN) and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, for example, priming and global precedence. The final focus of this article is on perceptual learning as measured with the MMN and the implications for empirical studies of coupling among cortical areas using evoked sensory responses.
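
For readers who want the quantity the abstract keeps referring to, one standard way to write the variational free energy for sensory data y and hidden causes θ under an approximate recognition density q is sketched below; the notation is ours and simplified, not Friston's exact hierarchical formulation:

```latex
% Variational free energy; minimizing over q performs perceptual inference
% (q approaches the posterior), and minimizing its average over stimuli with
% respect to model parameters performs perceptual learning.
F[q] = \mathbb{E}_{q(\theta)}\big[\ln q(\theta) - \ln p(y,\theta)\big]
     = -\ln p(y) + \mathrm{KL}\big[q(\theta)\,\|\,p(\theta \mid y)\big]
```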

3,569 citations

Book
01 Jan 2000
TL;DR: This book examines the relationship between the structural and physiological mechanisms of the brain and nervous system, from the molecular level up to that of human consciousness, and its contributions cover one of the most fascinating areas of science.

Abstract: This volume shows the many advances in the field of the cognitive neurosciences. From the molecular level up to that of human consciousness, the contributions cover one of the most fascinating areas of science: the relationship between the structural and physiological mechanisms of the brain and nervous system.

1,531 citations