
Showing papers on "Models of neural computation" published in 2009


Journal ArticleDOI
TL;DR: This study describes a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings.
Abstract: For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic, or adapting responses, depolarizing or hyperpolarizing afterpotentials, and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from the complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms, for which the firing times are obtained as the solution of a polynomial equation.
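
The structure is easy to illustrate: linear subthreshold dynamics that can be integrated in closed form, plus update rules applied whenever the potential reaches a moving threshold. Below is a minimal Python sketch with one firing-induced current; all parameter values are illustrative choices, not the study's.

```python
import numpy as np

# Minimal sketch of a generalized leaky integrate-and-fire neuron:
# linear dynamics for the potential V, a variable threshold Theta, and
# one firing-induced current I1, with update rules applied whenever V
# crosses Theta. Parameter values are illustrative, not from the paper.

dt, T = 0.1, 200.0              # time step and duration (ms)
EL, Vr = -70.0, -70.0           # resting and reset potential (mV)
k, a, kI = 0.05, 0.01, 0.1      # decay rates of V, Theta, I1 (1/ms)
Theta_inf = -50.0               # equilibrium threshold (mV)
R, A1, dTheta = 1.0, 5.0, 2.0   # update-rule constants
Iext = 1.2                      # constant external drive

V, Theta, I1 = EL, Theta_inf, 0.0
spikes = []
for step in range(int(T / dt)):
    # linear (hence analytically solvable) subthreshold dynamics
    V += dt * (-k * (V - EL) + Iext + I1)
    Theta += dt * (-a * (Theta - Theta_inf))
    I1 += dt * (-kI * I1)
    if V >= Theta:              # threshold reached: apply update rules
        spikes.append(step * dt)
        V = Vr                  # reset potential
        Theta += dTheta         # threshold jumps up (adaptation)
        I1 = R * I1 + A1        # firing-induced current is reloaded

print(f"{len(spikes)} spikes; first spike times (ms):",
      [round(s, 1) for s in spikes[:6]])
```

In this sketch, varying R, A1, and dTheta moves the same linear equations between tonic, adapting, and burst-like firing, mirroring the paper's point that the richness comes from the update rules, not the differential equations.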

147 citations


Journal ArticleDOI
TL;DR: Simulation results show that picking stimuli by maximizing the mutual information can speed up convergence to the optimal values of the parameters by an order of magnitude compared to using random (nonadaptive) stimuli.
Abstract: Adaptively optimizing experiments has the potential to significantly reduce the number of trials needed to build parametric statistical models of neural systems. However, application of adaptive methods to neurophysiology has been limited by severe computational challenges. Since most neurons are high-dimensional systems, optimizing neurophysiology experiments requires computing high-dimensional integrations and optimizations in real time. Here we present a fast algorithm for choosing the most informative stimulus by maximizing the mutual information between the data and the unknown parameters of a generalized linear model (GLM) that we want to fit to the neuron's activity. We rely on important log concavity and asymptotic normality properties of the posterior to facilitate the required computations. Our algorithm requires only low-rank matrix manipulations and a two-dimensional search to choose the optimal stimulus. The average running time of these operations scales quadratically with the dimensionality of the GLM, making real-time adaptive experimental design feasible even for high-dimensional stimulus and parameter spaces. For example, we require roughly 10 milliseconds on a desktop computer to optimize a 100-dimensional stimulus. Despite using some approximations to make the algorithm efficient, our algorithm asymptotically decreases the uncertainty about the model parameters at a rate equal to the maximum rate predicted by an asymptotic analysis. Simulation results show that picking stimuli by maximizing the mutual information can speed up convergence to the optimal values of the parameters by an order of magnitude compared to using random (nonadaptive) stimuli. Finally, applying our design procedure to real neurophysiology experiments requires addressing the nonstationarities that we would expect to see in neural responses; our algorithm can efficiently handle both fast adaptation due to spike history effects and slow, nonsystematic drifts in a neuron's activity.
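
As a toy illustration of the infomax idea (not the paper's fast low-rank algorithm), the sketch below greedily picks, from a random candidate set, the stimulus with the largest expected information gain under a Gaussian posterior for a Poisson GLM; the scoring and update formulas are standard Laplace-style approximations, and all values are illustrative.

```python
import numpy as np

# Toy sketch of information-maximizing stimulus selection for a Poisson
# GLM with exponential nonlinearity, under a Gaussian posterior N(mu, C).
# Each candidate stimulus is scored by the expected log-determinant gain
# of the posterior precision (a rank-1 Fisher information update). The
# paper's low-rank and 2D-search machinery is not reproduced here.

rng = np.random.default_rng(0)
d = 20                                       # stimulus dimensionality
theta_true = rng.normal(size=d) / np.sqrt(d)
mu, C = np.zeros(d), np.eye(d)               # Gaussian posterior N(mu, C)

def expected_info(x):
    lam = np.exp(x @ mu + 0.5 * x @ C @ x)   # expected rate under posterior
    return np.log1p(lam * (x @ C @ x))       # log-det gain of rank-1 update

for trial in range(50):
    cand = rng.normal(size=(200, d))
    cand /= np.linalg.norm(cand, axis=1, keepdims=True)  # unit-power stimuli
    x = max(cand, key=expected_info)         # most informative candidate
    y = rng.poisson(np.exp(x @ theta_true))  # observed spike count
    # Laplace-style rank-1 posterior update (Sherman-Morrison on precision)
    lam, Cx = np.exp(x @ mu), C @ x
    C = C - np.outer(Cx, Cx) * (lam / (1.0 + lam * (x @ Cx)))
    mu = mu + C @ x * (y - lam)

print("parameter error after 50 trials:", np.linalg.norm(mu - theta_true))
```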

141 citations


Journal ArticleDOI
TL;DR: This work extends previous theoretical results showing that a WTA recurrent network receiving regular spike inputs can select the correct winner within one interspike interval, and uses a simplified Markov model of the spiking network to examine analytically the ability of a spike-based WTA network to discriminate the statistics of inputs ranging from stationary regular to nonstationary Poisson events.
Abstract: The winner-take-all (WTA) computation in networks of recurrently connected neurons is an important decision element of many models of cortical processing. However, analytical studies of WTA performance in recurrent networks have generally addressed rate-based models. Very few have addressed networks of spiking neurons, which are relevant for understanding the biological networks themselves and also for the development of neuromorphic electronic neurons that communicate by action-potential-like address-events. Here, we make steps in that direction by using a simplified Markov model of the spiking network to examine analytically the ability of a spike-based WTA network to discriminate the statistics of inputs ranging from stationary regular to nonstationary Poisson events. Our work extends previous theoretical results showing that a WTA recurrent network receiving regular spike inputs can select the correct winner within one interspike interval. We show first, for the case of spike rate inputs, that input discrimination and the effects of self-excitation and inhibition on this discrimination are consistent with results obtained from the standard rate-based WTA models. We also extend this discrimination analysis of spiking WTAs to nonstationary inputs with time-varying spike rates resembling the statistics of real-world sensory stimuli. We conclude that spiking WTAs are consistent with their continuous counterparts for steady-state inputs, but they also exhibit high discrimination performance with nonstationary inputs.
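
A minimal simulation conveys the architecture under analysis: excitatory units with self-excitation compete through shared inhibition, and the unit receiving the higher input rate should win most runs. The sketch below uses crude discrete-time LIF units with illustrative parameters; it is not the paper's Markov model.

```python
import numpy as np

# Minimal spiking WTA sketch: two excitatory LIF units with
# self-excitation compete through a global inhibitory unit; the unit
# with the higher input spike rate should win. All parameters are
# illustrative and unrelated to the paper's Markov analysis.

rng = np.random.default_rng(1)
dt, T = 0.5, 1000.0                        # ms
rates = np.array([0.06, 0.08])             # input spike prob per time step
tau, v_th = 20.0, 1.0
w_in, w_self, w_inh = 0.35, 0.25, 0.5

v = np.zeros(2)                            # excitatory membrane potentials
v_i = 0.0                                  # inhibitory unit
counts = np.zeros(2, dtype=int)
for _ in range(int(T / dt)):
    v += dt / tau * (-v) + w_in * (rng.random(2) < rates)
    spk = v >= v_th
    v[spk] = 0.0                           # reset after a spike
    counts += spk
    v += w_self * spk                      # recurrent self-excitation
    v_i += dt / tau * (-v_i) + 0.4 * spk.sum()
    if v_i >= v_th:                        # shared inhibition fires
        v_i = 0.0
        v = np.maximum(v - w_inh, 0.0)     # inhibit all competitors

print("spike counts:", counts, "-> winner: unit", counts.argmax())
```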

107 citations


Journal ArticleDOI
TL;DR: A genetic algorithm-based ANN model is proposed for the turning process in the manufacturing industry that satisfies all the accuracy requirements and is found to be a time-saving model.
Abstract: Artificial intelligence tools like genetic algorithms, artificial neural networks (ANNs), and fuzzy logic are found to be extremely useful in modeling reliable processes in the field of computer-integrated manufacturing (for example, selecting optimal parameters during process planning and design, and implementing adaptive control systems). When knowledge about the relationships among the various manufacturing parameters is lacking, ANNs are used as process models, because they can handle strong nonlinearities, a large number of parameters, and missing information. When the dependencies between parameters become noninvertible, the input and output configurations used in the ANN strongly influence the accuracy. However, running a neural network is found to be time-consuming. If genetic algorithm-based ANNs are used to construct models, they can provide more accurate results in less time. This article proposes a genetic algorithm-based ANN model for the turning process in the manufacturing industry. This model is found to be a time-saving model that satisfies all the accuracy requirements.
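
The general recipe, a genetic algorithm searching the weight space of a small feedforward network, can be sketched compactly. In the toy below, a synthetic function stands in for measured machining data (e.g., scaled cutting speed and feed mapped to surface roughness), and the GA uses only elitist selection and Gaussian mutation, so it is a caricature of the article's method rather than a reproduction.

```python
import numpy as np

# Toy sketch of a GA-trained feedforward ANN. The target function
# stands in for measured turning-process data; sizes and rates are
# illustrative, not the article's settings.

rng = np.random.default_rng(10)
H = 8                                         # hidden units
n_w = 2 * H + H + H + 1                       # total weights, 2-in/1-out net

def predict(w, X):
    W1, b1 = w[:2 * H].reshape(2, H), w[2 * H:3 * H]
    W2, b2 = w[3 * H:4 * H], w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

X = rng.uniform(-1, 1, size=(200, 2))         # training inputs
y = X[:, 0] ** 2 + 0.5 * np.sin(3 * X[:, 1])  # stand-in process response

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

pop = rng.normal(scale=0.5, size=(60, n_w))   # initial population
for gen in range(300):
    elite = pop[np.argsort([mse(w) for w in pop])[:20]]   # keep best third
    kids = elite[rng.integers(0, 20, size=40)]            # clone parents
    kids = kids + rng.normal(scale=0.1, size=kids.shape)  # mutate
    pop = np.vstack([elite, kids])

print("best training MSE:", min(mse(w) for w in pop))
```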

89 citations


Book
01 Jan 2009
TL;DR: Signal processing and neural computation have separately and significantly influenced many disciplines, but the cross-fertilization of the two fields has begun only recently, as we see highly sophisticated kinds of signal processing and elaborate hierarchical levels of neural computation performed side by side in the brain.
Abstract: Signal processing and neural computation have separately and significantly influenced many disciplines, but the cross-fertilization of the two fields has begun only recently. Research now shows that each has much to teach the other, as we see highly sophisticated kinds of signal processing and elaborate hierarchical levels of neural computation performed side by side in the brain. In New Directions in Statistical Signal Processing, leading researchers from both signal processing and neural computation present new work that aims to promote interaction between the two disciplines. The book's 14 chapters, almost evenly divided between signal processing and neural computation, begin with the brain and move on to communication, signal processing, and learning systems. They examine such topics as how computational models help us understand the brain's information processing, how an intelligent machine could solve the "cocktail party problem" with "active audition" in a noisy environment, graphical and network structure modeling approaches, uncertainty in network communications, the geometric approach to blind signal processing, game-theoretic learning algorithms, and observable operator models (OOMs) as an alternative to hidden Markov models (HMMs).

85 citations


Journal ArticleDOI
02 Jun 2009
TL;DR: The results of an extensive study on the performance of neural networks as compared to other modeling techniques in the context of active learning are presented, and the scalability and accuracy as a function of the number of design variables and the number of data points are investigated.
Abstract: The use of global surrogate models has become commonplace as a cost-effective alternative for performing complex high-fidelity computer simulations. Due to their compact formulation and negligible evaluation time, global surrogate models are very useful tools for exploring the design space, what-if analysis, optimization, prototyping, visualization, and sensitivity analysis. Neural networks have proven particularly useful in this respect due to their ability to model high-dimensional, nonlinear responses accurately. In this article, we present the results of an extensive study on the performance of neural networks as compared to other modeling techniques in the context of active learning. We investigate the scalability and accuracy as a function of the number of design variables and the number of data points. The case study under consideration is a high-dimensional, parameterized low-noise amplifier RF circuit block.

79 citations


Journal ArticleDOI
TL;DR: Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, this work focuses on the cortical microcircuitry and proposes how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation.
Abstract: In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
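
The underlying computation the circuit approximates, sums and maximizations in the log domain, is ordinary max-sum message passing. Below it is run directly on a small chain-structured model with random, illustrative potentials; the spiking-population implementation with its inhibitory cell types is not reproduced.

```python
import numpy as np

# Direct max-sum (sum and maximization in the log domain) on a small
# chain-structured graphical model: forward messages, then backtracking
# to recover the MAP assignment.

rng = np.random.default_rng(2)
K, T = 3, 4                                # states per variable, chain length
log_phi = rng.normal(size=(T, K))          # unary log-potentials
log_psi = rng.normal(size=(K, K))          # shared pairwise log-potential

m = np.zeros((T, K))                       # forward max-sum messages
for t in range(1, T):
    # m[t][j] = max_i ( log_phi[t-1][i] + m[t-1][i] + log_psi[i][j] )
    m[t] = np.max(log_phi[t - 1] + m[t - 1] + log_psi.T, axis=1)

x = np.zeros(T, dtype=int)                 # backtrack the MAP assignment
x[-1] = np.argmax(log_phi[-1] + m[-1])
for t in range(T - 2, -1, -1):
    x[t] = np.argmax(log_phi[t] + m[t] + log_psi[:, x[t + 1]])

print("MAP assignment:", x)
```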

59 citations


Journal ArticleDOI
TL;DR: This work shows in detailed simulations how the belief propagation algorithm on a factor graph can be embedded in a network of spiking neurons, and demonstrates good agreement between the performance of the networks and the direct numerical evaluation of belief propagation.
Abstract: From a theoretical point of view, statistical inference is an attractive model of brain operation. However, it is unclear how to implement these inferential processes in neuronal networks. We offer a solution to this problem by showing in detailed simulations how the belief propagation algorithm on a factor graph can be embedded in a network of spiking neurons. We use pools of spiking neurons as the function nodes of the factor graph. Each pool gathers “messages” in the form of population activities from its input nodes and combines them through its network dynamics. Each of the various output messages to be transmitted over the edges of the graph is computed by a group of readout neurons that feed into their respective destination pools. We use this approach to implement two examples of factor graphs. The first example, drawn from coding theory, models the transmission of signals through an unreliable channel and demonstrates the principles and generality of our network approach. The second, more applied example is of a psychophysical mechanism in which visual cues are used to resolve hypotheses about the interpretation of an object's shape and illumination. These two examples, and also a statistical analysis, demonstrate good agreement between the performance of our networks and the direct numerical evaluation of belief propagation.

58 citations


Journal ArticleDOI
TL;DR: This work makes use of rigorous mathematical results from the theory of continuous time point process filtering and shows how optimal real-time state estimation and prediction may be implemented in a general setting using simple recurrent neural networks.
Abstract: A key requirement facing organisms acting in uncertain dynamic environments is the real-time estimation and prediction of environmental states, based on which effective actions can be selected. While it is becoming evident that organisms employ exact or approximate Bayesian statistical calculations for these purposes, it is far less clear how these putative computations are implemented by neural networks in a strictly dynamic setting. In this work, we make use of rigorous mathematical results from the theory of continuous time point process filtering and show how optimal real-time state estimation and prediction may be implemented in a general setting using simple recurrent neural networks. The framework is applicable to many situations of common interest, including noisy observations, non-Poisson spike trains (incorporating adaptation), multisensory integration, and state prediction. The optimal network properties are shown to relate to the statistical structure of the environment, and the benefits of adaptation are studied and explicitly demonstrated. Finally, we recover several existing results as appropriate limits of our general setting.
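
A discrete-time caricature of the filtering problem makes the structure concrete: a hidden two-state Markov environment modulates the rate of an observed spike train, the posterior is propagated by the master equation between spikes, and it is multiplied by the spike likelihood at each step. The recurrent-network implementation is not reproduced, and all values are illustrative.

```python
import numpy as np

# Discrete-time sketch of continuous-time point-process filtering for
# a hidden two-state Markov environment observed through a modulated
# Poisson spike train.

rng = np.random.default_rng(3)
dt, T = 0.001, 20.0                        # s
q01, q10 = 0.5, 0.5                        # state transition rates (1/s)
lam = np.array([5.0, 30.0])                # firing rate in each state (Hz)

x, p = 0, np.array([0.5, 0.5])             # true state, posterior P(state)
wrong, n = 0, int(T / dt)
for _ in range(n):
    if rng.random() < (q01 if x == 0 else q10) * dt:
        x = 1 - x                          # hidden state switches
    spike = rng.random() < lam[x] * dt     # observed point process
    # prediction: master equation for the two-state chain
    p = p + dt * np.array([-q01 * p[0] + q10 * p[1],
                            q01 * p[0] - q10 * p[1]])
    # correction: multiply by the likelihood of spike / no spike
    p = p * (lam * dt if spike else 1.0 - lam * dt)
    p = p / p.sum()
    wrong += p.argmax() != x

print("fraction of time misclassified:", wrong / n)
```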

54 citations


Journal ArticleDOI
TL;DR: This work develops mean-field methods for approximating the stimulus-driven firing rates, auto- and cross-correlations, and stimulus-dependent filtering properties of these networks, and introduces a model that captures strong refractoriness, retains all of the easy fitting properties of the standard generalized linear model, and leads to much more accurate approximations of mean firing rates and cross-correlations.
Abstract: There has recently been a great deal of interest in inferring network connectivity from the spike trains in populations of neurons. One class of useful models that can be fit easily to spiking data is based on generalized linear point process models from statistics. Once the parameters for these models are fit, the analyst is left with a nonlinear spiking network model with delays, which in general may be very difficult to understand analytically. Here we develop mean-field methods for approximating the stimulus-driven firing rates (in both the time-varying and steady-state cases), auto- and cross-correlations, and stimulus-dependent filtering properties of these networks. These approximations are valid when the contributions of individual network coupling terms are small and, hence, the total input to a neuron is approximately gaussian. These approximations lead to deterministic ordinary differential equations that are much easier to solve and analyze than direct Monte Carlo simulation of the network activity. These approximations also provide an analytical way to evaluate the linear input-output filter of neurons and how the filters are modulated by network interactions and some stimulus feature. Finally, in the case of strong refractory effects, the mean-field approximations in the generalized linear model become inaccurate; therefore, we introduce a model that captures strong refractoriness, retains all of the easy fitting properties of the standard generalized linear model, and leads to much more accurate approximations of mean firing rates and cross-correlations that retain fine temporal behaviors.
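
The flavor of the mean-field reduction shows up already in the simplest steady-state case: the network's rates solve a deterministic self-consistency equation rather than requiring Monte Carlo simulation. The sketch below iterates such a fixed point for a small, weakly coupled network; the coupling, drive, and softplus nonlinearity are illustrative.

```python
import numpy as np

# Mean-field steady-state rates for a coupled point-process network:
# each neuron's rate is a nonlinearity applied to its stimulus drive
# plus rate-weighted coupling, solved as a fixed point.

rng = np.random.default_rng(4)
N = 10
W = 0.05 * rng.normal(size=(N, N))          # weak coupling (mean-field regime)
np.fill_diagonal(W, 0.0)
b = rng.normal(loc=1.0, scale=0.2, size=N)  # stimulus drive per neuron

f = lambda u: np.log1p(np.exp(u))           # softplus rate nonlinearity

r = np.ones(N)
for _ in range(200):                        # damped fixed-point iteration
    r_new = f(b + W @ r)
    if np.max(np.abs(r_new - r)) < 1e-10:
        break
    r = 0.5 * (r + r_new)

print("self-consistent steady-state rates:", np.round(r, 3))
```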

53 citations


Journal ArticleDOI
TL;DR: This article shows how stochastically spiking neurons with refractoriness could in principle learn in an unsupervised manner to carry out both information bottleneck optimization and the extraction of independent components.
Abstract: Independent component analysis (or blind source separation) is assumed to be an essential component of sensory processing in the brain and could provide a less redundant representation of the external world. Another powerful processing strategy is the optimization of internal representations according to the information bottleneck method. This method would allow extracting preferentially those components from high-dimensional sensory input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. However, models that could explain how spiking neurons learn to execute either of these two processing strategies have been lacking. We show in this article how stochastically spiking neurons with refractoriness could in principle learn in an unsupervised manner to carry out both information bottleneck optimization and the extraction of independent components. We derive suitable learning rules, which extend the well-known BCM rule, from abstract information optimization principles. These rules simultaneously keep the firing rate of the neuron within a biologically realistic range.
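
For orientation, the classical BCM rule that the derived rules extend fits in a few lines: a Hebbian weight change gated by a sliding threshold tracking the neuron's average squared activity, which is what keeps the rate bounded. Rates stand in for spikes in this sketch, and all constants are illustrative.

```python
import numpy as np

# Sketch of the classical BCM rule: LTD when the postsynaptic rate is
# below the sliding threshold, LTP when above, with the threshold
# tracking the running average of the squared rate.

rng = np.random.default_rng(5)
d, eta, tau = 5, 0.01, 100.0
w = 0.5 + 0.1 * rng.normal(size=d)           # initial weights
theta = 1.0                                  # sliding modification threshold

for t in range(5000):
    x = np.abs(rng.normal(size=d))           # presynaptic input pattern
    y = max(w @ x, 0.0)                      # postsynaptic rate (rectified)
    w += eta * y * (y - theta) * x           # BCM: LTD below, LTP above theta
    theta += (y ** 2 - theta) / tau          # threshold tracks <y^2>

print("weights:", np.round(w, 3), " threshold:", round(theta, 3))
```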

Book ChapterDOI
01 Jan 2009
TL;DR: This article reviews experimental and theoretical work related to the idea that recurrent synaptic or cellular mechanisms can instantiate an integration time much longer than intrinsic biophysical time constants of the system.
Abstract: Integration of information across time is a neural computation of critical importance to a variety of brain functions. Examples include oculomotor neural integrators and head direction cells that integrate velocity signals into positional or directional signals, parametric working memory circuits which convert transient input pulses into self-sustained persistent neural activity patterns, and linear ramping neural activity underlying the accumulation of information during decision making. How is integration over long timescales realized in neural circuits? This article reviews experimental and theoretical work related to this fundamental question, with a focus on the idea that recurrent synaptic or cellular mechanisms can instantiate an integration time much longer than intrinsic biophysical time constants of the system. We first introduce some basic concepts and present two types of codes used by neural integrators – the location code and the rate code. Then we summarize models that implement a variety of candidate mechanisms for neural integration in the brain, and we discuss the problem of fine-tuning of model parameters and possible solutions to this problem. Finally, we outline challenges for future research.
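
The recurrent mechanism at the center of this discussion is compact enough to state directly: with feedback gain w, a unit with intrinsic time constant tau integrates with effective time constant tau/(1 - w), so gains tuned near 1 give integration far beyond the biophysical timescale, and mistuning produces leak or runaway (the fine-tuning problem). A minimal sketch with illustrative values:

```python
import numpy as np

# Rate-code recurrent integrator: a leaky unit with time constant tau,
# fed back with gain w, integrates a 10 ms input pulse; the effective
# time constant is tau / (1 - w).

dt, tau = 0.1, 10.0                         # ms
pulse = np.zeros(3000)
pulse[100:200] = 1.0                        # input on for t in [10, 20) ms

for w in (0.0, 0.9, 0.99):                  # recurrent feedback gain
    x, trace = 0.0, []
    for I in pulse:
        x += dt / tau * (-x + w * x + I)    # leak partly canceled by w*x
        trace.append(x)
    print(f"w={w:5}: value 100 ms after the pulse = {trace[1200]:.3f}, "
          f"effective tau = {tau / (1 - w):.0f} ms")
```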

Journal ArticleDOI
TL;DR: A two-stage computational framework that uses point process filters to simultaneously estimate the animal's location and predict future behavior from ensemble neural spiking activity is presented and provides a reliable approach for characterizing and extracting information from ensembles of neurons with spatially specific context or task-dependent firing activity.
Abstract: Firing activity from neural ensembles in rat hippocampus has been previously used to determine an animal's position in an open environment and separately to predict future behavioral decisions. However, a unified statistical procedure to combine information about position and behavior in environments with complex topological features from ensemble hippocampal activity has yet to be described. Here we present a two-stage computational framework that uses point process filters to simultaneously estimate the animal's location and predict future behavior from ensemble neural spiking activity. First, in the encoding stage, we linearized a two-dimensional T-maze, and used spline-based generalized linear models to characterize the place-field structure of different neurons. All of these neurons displayed highly specific position-dependent firing, which frequently had several peaks at multiple locations along the maze. When the rat was at the stem of the T-maze, the firing activity of several of these neurons also varied significantly as a function of the direction it would turn at the decision point, as detected by ANOVA. Second, in the decoding stage, we developed a state-space model for the animal's movement along a T-maze and used point process filters to accurately reconstruct both the location of the animal and the probability of the next decision. The filter yielded exact full posterior densities that were highly nongaussian and often multimodal. Our computational framework provides a reliable approach for characterizing and extracting information from ensembles of neurons with spatially specific context or task-dependent firing activity.

Journal ArticleDOI
TL;DR: The derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model, shows that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate-and-fire neurons.
Abstract: The main contribution of this letter is the derivation of a steepest gradient descent learning rule for a multilayer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from other related models such as SpikeProp and Tempotron learning. Our results clearly show that it is possible to perform complex computations by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate-and-fire neurons. Networks trained with our multilayer training rule are shown to have generalization abilities for spike latency pattern classification similar to those of Tempotron learning. The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appears capable of.
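
The underlying neuron model is compact: a phase variable obeying dtheta/dt = (1 - cos theta) + I(1 + cos theta), with a spike registered when the phase passes pi. The sketch below shows quiescence below the saddle-node bifurcation and tonic firing above it; the multilayer gradient rule itself is not reproduced.

```python
import numpy as np

# Theta neuron: a phase variable on the circle. Below the bifurcation
# (I < 0) the neuron is quiescent; above it (I > 0) it fires tonically
# with period pi / sqrt(I).

dt, T = 0.01, 100.0
for I in (-0.05, 0.01, 0.1):
    theta, spikes = -1.0, 0
    for _ in range(int(T / dt)):
        theta += dt * ((1 - np.cos(theta)) + I * (1 + np.cos(theta)))
        if theta > np.pi:                   # phase passed pi: emit a spike
            spikes += 1
            theta -= 2 * np.pi              # wrap the phase
    print(f"I = {I:+.2f}: {spikes} spikes in {T:.0f} time units")
```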

Journal ArticleDOI
TL;DR: The approach provides a statistical characterization of UP-DOWN state dynamics that can serve as a basis for verifying and refining mechanistic descriptions of this process.
Abstract: UP and DOWN states, the periodic fluctuations between increased and decreased spiking activity of a neuronal population, are a fundamental feature of cortical circuits. Understanding UP-DOWN state dynamics is important for understanding how these circuits represent and transmit information in the brain. To date, limited work has been done on characterizing the stochastic properties of UP-DOWN state dynamics. We present a set of Markov and semi-Markov discrete- and continuous-time probability models for estimating UP and DOWN states from multiunit neural spiking activity. We model multiunit neural spiking activity as a stochastic point process, modulated by the hidden (UP and DOWN) states and the ensemble spiking history. We estimate jointly the hidden states and the model parameters by maximum likelihood using an expectation-maximization (EM) algorithm and a Monte Carlo EM algorithm that uses reversible-jump Markov chain Monte Carlo sampling in the E-step. We apply our models and algorithms in the analysis of both simulated multiunit spiking activity and actual multiunit spiking activity recorded from primary somatosensory cortex in a behaving rat during slow-wave sleep. Our approach provides a statistical characterization of UP-DOWN state dynamics that can serve as a basis for verifying and refining mechanistic descriptions of this process.
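
A discrete-time caricature of the model class is easy to write down: a hidden two-state chain modulates Poisson spike counts, and forward filtering yields the posterior over UP and DOWN. All parameters below are illustrative, and the EM and reversible-jump fitting machinery is omitted.

```python
import numpy as np
from scipy.stats import poisson

# Two-state hidden Markov model with Poisson emissions: simulate
# DOWN/UP states and multiunit counts, then run the forward filter.

rng = np.random.default_rng(6)
T = 500
A = np.array([[0.98, 0.02],                 # DOWN -> {DOWN, UP}
              [0.03, 0.97]])                # UP   -> {DOWN, UP}
lam = np.array([0.2, 3.0])                  # mean multiunit count per state

z = np.zeros(T, dtype=int)                  # simulate hidden states
for t in range(1, T):
    z[t] = rng.choice(2, p=A[z[t - 1]])
y = rng.poisson(lam[z])                     # simulated multiunit counts

alpha = np.array([0.5, 0.5])                # forward filter p(z_t | y_1..t)
hits = 0
for t in range(T):
    alpha = (alpha @ A) * poisson.pmf(y[t], lam)
    alpha /= alpha.sum()
    hits += alpha.argmax() == z[t]

print("filtered state accuracy:", hits / T)
```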

Journal ArticleDOI
TL;DR: An elegant algorithm is proposed for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which is able to combine these algorithmic techniques efficiently, resulting in very high simulation speed.
Abstract: The simulation of spiking neural networks (SNNs) is known to be a very time-consuming task. This limits the size of SNN that can be simulated in reasonable time or forces users to overly limit the complexity of the neuron models. This is one of the driving forces behind much of the recent research on event-driven simulation strategies. Although event-driven simulation allows precise and efficient simulation of certain spiking neuron models, it is not straightforward to generalize the technique to more complex neuron models, mostly because the firing time of these neuron models is computationally expensive to evaluate. Most solutions proposed in the literature concentrate on algorithms that can solve this problem efficiently. However, these solutions do not scale well when more state variables are involved in the neuron model, which is, for example, the case when multiple synaptic time constants for each neuron are used. In this letter, we show that an exact prediction of the firing time is not required in order to guarantee exact simulation results. Several techniques are presented that try to do the least possible amount of work to predict the firing times. We propose an elegant algorithm for the simulation of leaky integrate-and-fire (LIF) neurons with an arbitrary number of (unconstrained) synaptic time constants, which is able to combine these algorithmic techniques efficiently, resulting in very high simulation speed. Moreover, our algorithm is largely independent of the complexity (i.e., number of synaptic time constants) of the underlying neuron model.
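
The event-driven principle is easy to see with a single exponential synaptic current: between events the membrane potential has a closed form, so the next firing time is a root of V(t) - v_th = 0 rather than the product of fixed-step integration. The sketch below finds it by brute-force bracketing plus a standard root finder, which is exactly the expensive prediction the letter's techniques try to avoid or defer; state values are illustrative, in normalized units.

```python
import numpy as np
from scipy.optimize import brentq

# Event-driven firing-time prediction for a LIF neuron with one
# exponential excitatory synaptic current: V(t) is known in closed
# form between events, so the spike time is a root of V(t) - v_th.

tau_m, tau_s, v_th = 20.0, 5.0, 1.0   # membrane/synaptic tau (ms), threshold
v0, i0 = 0.2, 6.0                     # state just after the last event

def V(t):
    # exact solution of tau_m dV/dt = -V + I, with I(t) = i0*exp(-t/tau_s)
    a = tau_s / (tau_s - tau_m)       # coefficient of the synaptic mode
    return (v0 - a * i0) * np.exp(-t / tau_m) + a * i0 * np.exp(-t / tau_s)

grid = np.linspace(0.0, 100.0, 2001)
above = np.nonzero(V(grid) >= v_th)[0]
if above.size:                        # threshold is crossed: find the root
    t_spike = brentq(lambda t: V(t) - v_th, 0.0, grid[above[0]])
    print(f"predicted firing time: {t_spike:.4f} ms")
else:
    print("no threshold crossing: no spike event to schedule")
```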

Journal ArticleDOI
TL;DR: Using a population density approach for integrate-and-fire neurons with dynamic and temporally rich inputs, it is found that the same fluctuation-induced divisive gain modulation is operative for dynamic inputs driving nonequilibrium responses.
Abstract: The modulation of the sensitivity, or gain, of neural responses to input is an important component of neural computation. It has been shown that divisive gain modulation of neural responses can result from stochastic shunting by balanced (mixed excitation and inhibition) background activity. This gain control scheme was developed and explored with static inputs, where the membrane and spike train statistics were stationary in time. However, input statistics, such as the firing rates of presynaptic neurons, are often dynamic, varying on timescales comparable to typical membrane time constants. Using a population density approach for integrate-and-fire neurons with dynamic and temporally rich inputs, we find that the same fluctuation-induced divisive gain modulation is operative for dynamic inputs driving nonequilibrium responses. Moreover, the degree of divisive scaling of the dynamic response is quantitatively the same as for the steady-state responses; thus, gain modulation via balanced conductance fluctuations generalizes in a straightforward way to a dynamic setting.

Journal ArticleDOI
TL;DR: In this article, a gradient-based sequential RBFNN (GS-RBFNN) model is proposed to improve the approximation ability with samples as few as possible, so as to limit the network complexity.
Abstract: Radial basis function neural network (RBFNN) is widely used in nonlinear function approximation. One of the key issues in RBFNN modeling is to improve the approximation ability with samples as few as possible, so as to limit the network’s complexity. To solve this problem, a gradient-based sequential RBFNN modeling method is proposed. This method can utilize the gradient information of the present model to expand the sample set and refine the model sequentially, so as to improve the approximation accuracy effectively. Two mathematical examples and one practical problem are tested to verify the efficiency of this method.
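
One plausible reading of the gradient-based sequential idea (the paper's exact criterion may differ) is: fit an RBF model to the current samples, then sample next where the model's gradient magnitude is largest, since steep regions are where new samples buy the most accuracy. A 1D sketch with Gaussian RBFs of fixed, illustrative width:

```python
import numpy as np

# Sequential 1D RBF modeling: refit after each new sample, choosing
# the candidate point where the current model's gradient is steepest.

rng = np.random.default_rng(11)
f = lambda x: np.sin(3 * x) + 0.5 * x          # function to be approximated
width = 0.5                                    # fixed Gaussian RBF width

def phi(x, centers):
    return np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)

def fit(X):                                    # interpolation weights
    return np.linalg.solve(phi(X, X) + 1e-8 * np.eye(len(X)), f(X))

def grad(x, X, w):                             # d/dx of the RBF model
    P = phi(x, X)
    return (P * (-2.0 * (x[:, None] - X[None, :]) / width ** 2)) @ w

X = np.array([-2.0, 0.0, 2.0])                 # initial design
for _ in range(10):                            # sequential refinement
    w = fit(X)
    cand = rng.uniform(-2, 2, size=200)
    cand = cand[np.all(np.abs(cand[:, None] - X[None, :]) > 0.05, axis=1)]
    X = np.append(X, cand[np.argmax(np.abs(grad(cand, X, w)))])

w, grid = fit(X), np.linspace(-2, 2, 400)
print(f"{len(X)} samples, max abs error:",
      np.max(np.abs(phi(grid, X) @ w - f(grid))))
```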

Journal ArticleDOI
TL;DR: The dynamics of the Bayesian models are analyzed by considering simplified, approximate systems that are linear and decoupled, and it is demonstrated that Bayesian updating is closely related to a drift-diffusion process, whose implementation in neural network models has been extensively studied.
Abstract: The Eriksen task is a classical paradigm that explores the effects of competing sensory inputs on response tendencies and the nature of selective attention in controlling these processes. In this task, conflicting flanker stimuli interfere with the processing of a central target, especially on short reaction time trials. This task has been modeled by neural networks and more recently by a normative Bayesian account. Here, we analyze the dynamics of the Bayesian models, which are nonlinear, coupled, discrete-time dynamical systems, by considering simplified, approximate systems that are linear and decoupled. Analytical solutions of these allow us to describe how posterior probabilities and psychometric functions depend on model parameters. We compare our results with numerical simulations of the original models and derive fits to experimental data, showing that the agreement is rather good. We also investigate continuum limits of these simplified dynamical systems and demonstrate that Bayesian updating is closely related to a drift-diffusion process, whose implementation in neural network models has been extensively studied. This provides insight into how neural substrates can implement Bayesian computations.
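
The drift-diffusion connection is easy to exhibit in the simplest two-alternative setting: with Gaussian observations, the Bayesian log posterior odds accumulate i.i.d. increments with constant mean and variance, which is precisely a discrete drift-diffusion process. The numbers below are illustrative.

```python
import numpy as np

# Bayesian updating as drift-diffusion: for x ~ N(+mu, sigma) vs
# N(-mu, sigma), each sample adds 2*mu*x/sigma^2 to the log posterior
# odds, i.e., the log odds perform a biased random walk.

rng = np.random.default_rng(7)
mu, sigma = 0.3, 1.0                       # evidence strength and noise

x = rng.normal(mu, sigma, size=2000)       # observations under H1
steps = 2 * mu * x / sigma ** 2            # log-likelihood-ratio increments
L = np.cumsum(steps)                       # log-odds: a drift-diffusion path

print("empirical drift per step :", steps.mean())
print("theoretical 2*mu^2/sigma^2:", 2 * mu ** 2 / sigma ** 2)
print("empirical step std        :", steps.std())
print("theoretical 2*mu/sigma    :", 2 * mu / sigma)
```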

Journal ArticleDOI
TL;DR: Basic biological characteristics of neural activity are demonstrated, such as a decrease in the spontaneous rate at higher brain levels and improved signal-to-noise ratio for harmonic input signals.
Abstract: Neural information is characterized by sets of spiking events that travel within the brain through neuron junctions that receive, transmit, and process streams of spikes. Coincidence detection is one of the ways to describe the functionality of a single neural cell. This letter presents an analytical derivation of the output stochastic behavior of a coincidence detector (CD) cell whose stochastic inputs behave as a nonhomogeneous Poisson process (NHPP) with both excitatory and inhibitory inputs. The derivation, which is based on an efficient breakdown of the cell into basic functional elements, results in an output process whose behavior can be approximated as an NHPP as long as the coincidence interval is much smaller than the refractory period of the cell's inputs. Intuitively, the approximation is valid as long as the processing rate is much faster than the incoming information rate. This type of modeling is a simplified but very useful description of neurons since it enables analytical derivations. The statistical properties of single CD cell's output make it possible to integrate and analyze complex neural cells in a feedforward network using the methodology presented here. Accordingly, basic biological characteristics of neural activity are demonstrated, such as a decrease in the spontaneous rate at higher brain levels and improved signal-to-noise ratio for harmonic input signals.
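
The ingredients are straightforward to simulate: NHPP inputs can be drawn by thinning a homogeneous Poisson process, and the CD cell fires when its two inputs spike within a coincidence interval. The sinusoidal intensity and constants below are illustrative, and the letter's analytical output statistics are not reproduced.

```python
import numpy as np

# Coincidence detector driven by two nonhomogeneous Poisson inputs,
# each sampled on [0, T] by thinning a homogeneous candidate process.

rng = np.random.default_rng(9)
T, lam_max, delta = 10.0, 50.0, 0.002      # s, Hz, coincidence window (s)
rate = lambda t: lam_max * 0.5 * (1 + np.sin(2 * np.pi * t))  # NHPP intensity

def nhpp():
    """Sample one NHPP realization on [0, T] by thinning."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)   # homogeneous candidate
        if t > T:
            return np.array(events)
        if rng.random() < rate(t) / lam_max:  # accept w.p. rate/lam_max
            events.append(t)

a, b = nhpp(), nhpp()
out = sum(np.any(np.abs(a - tb) <= delta) for tb in b)  # coincidences
print(f"input rates ~{len(a) / T:.0f} and {len(b) / T:.0f} Hz; "
      f"CD output: {out} spikes in {T:.0f} s")
```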

Journal ArticleDOI
TL;DR: An inverse approach is utilized in which model neurons with realistic morphologies and ion channel distributions are optimized to perform a computational function, and their thorough analysis provides insights into the relationship between the neurons’ functions, morphologies, ion channel distribution, and electrophysiological dynamics.
Abstract: For many classes of neurons, the relationship between computational function and dendritic morphology remains unclear. To gain insights into this relationship, we utilize an inverse approach in which we optimize model neurons with realistic morphologies and ion channel distributions (of I_KA and I_CaT) to perform a computational function. In this study, the desired function is input-order detection: neurons have to respond differentially to the arrival of two inputs in a different temporal order. There is a single free parameter in this function, namely, the time lag between the arrivals of the two inputs. Systematically varying this parameter allowed us to map one axis of function space to structure space. Because the function of the optimized model neurons is known with certainty, their thorough analysis provides insights into the relationship between the neurons' functions, morphologies, ion channel distributions, and electrophysiological dynamics. Finally, we discuss issues of optimality in nervous systems.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed ARNNC system with structure adaptation algorithm can achieve favorable tracking performance even when the controlled system's dynamics are unknown.
Abstract: This paper proposes an adaptive recurrent neural network control (ARNNC) system with a structure adaptation algorithm for uncertain nonlinear systems. The developed ARNNC system is composed of a neural controller and a robust controller. The neural controller, which uses a self-structuring recurrent neural network (SRNN), is the principal controller, and the robust controller is designed to achieve L2 tracking performance with a desired attenuation level. The SRNN approximator is used to estimate an ideal tracking controller online, with online structuring and parameter learning algorithms. The structure learning possesses the ability of both adding and pruning hidden neurons, and the parameter learning adjusts the interconnection weights of the neural network to achieve favorable approximation performance. By the L2 control design technique, the worst-case effect of the approximation error on the tracking error can be attenuated to less than or equal to a specified level. Finally, the proposed ARNNC system with structure adaptation algorithm is applied to control two nonlinear dynamic systems. Simulation results show that the proposed ARNNC system can achieve favorable tracking performance even when the controlled system's dynamics are unknown.

Journal ArticleDOI
TL;DR: Computer simulations and mathematical proofs are presented that provide more rigorous comparisons among one-dimensional stochastic differential equation models and show that for high signal-to-noise ratios, drift-diffusion models with constant and time-varying drift rates can be distinguished from Ornstein-Uhlenbeck processes, but not necessarily from each other.
Abstract: Several integrate-to-threshold models with differing temporal integration mechanisms have been proposed to describe the accumulation of sensory evidence to a prescribed level prior to motor response in perceptual decision-making tasks. An experiment and simulation studies have shown that the introduction of time-varying perturbations during integration may distinguish among some of these models. Here, we present computer simulations and mathematical proofs that provide more rigorous comparisons among one-dimensional stochastic differential equation models. Using two perturbation protocols and focusing on the resulting changes in the means and standard deviations of decision times, we show that for high signal-to-noise ratios, drift-diffusion models with constant and time-varying drift rates can be distinguished from Ornstein-Uhlenbeck processes, but not necessarily from each other. The protocols can also distinguish stable from unstable Ornstein-Uhlenbeck processes, and we show that a nonlinear integrator can be distinguished from these linear models by changes in standard deviations. The protocols can be implemented in behavioral experiments.
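
A simulation version of the protocol is straightforward: draw first-passage times of the one-dimensional SDE dx = (a + lambda x) dt + sigma dW to a bound, with and without a brief pulse added to the drive, and compare the shifts in decision-time means and standard deviations; lambda = 0 gives the drift-diffusion model and lambda < 0 a stable Ornstein-Uhlenbeck process. All parameter values are illustrative.

```python
import numpy as np

# First-passage times for a drift-diffusion model (lam = 0) and a
# stable Ornstein-Uhlenbeck process (lam < 0), with an optional brief
# perturbing pulse added to the input.

rng = np.random.default_rng(12)
dt, a, z, sigma = 0.001, 1.0, 0.15, 0.3     # step, drift, bound, noise

def decision_times(lam, pulse, n=500):
    out = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < z:                   # integrate until a bound is hit
            drive = a + lam * x
            if 0.05 <= t < 0.06:            # 10 ms perturbing pulse
                drive += pulse
            x += drive * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        out.append(t)
    return np.array(out)

for name, lam in (("DDM (lam = 0)", 0.0), ("stable OU    ", -5.0)):
    base, pert = decision_times(lam, 0.0), decision_times(lam, 2.0)
    print(f"{name}: mean RT {base.mean():.3f} -> {pert.mean():.3f} s, "
          f"std {base.std():.3f} -> {pert.std():.3f}")
```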

Journal ArticleDOI
TL;DR: Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, it is analytically shown that the corrected first- and second-order information-geometric measures provide robust and consistent approximations of the external inputs and connection strengths, respectively.
Abstract: Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximations of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
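
For two binary neurons, the log-linear model and its coordinates are fully explicit: p(x1, x2) is proportional to exp(theta1 x1 + theta2 x2 + theta12 x1 x2), so theta12 is the log odds ratio of the four joint probabilities. The sketch below recovers these coordinates from simulated counts; the joint distribution is arbitrary.

```python
import numpy as np

# Two-neuron log-linear model: estimate the first- and second-order
# information-geometric coordinates from empirical joint probabilities.

rng = np.random.default_rng(8)
p = np.array([[0.50, 0.15],                # p[x1][x2] for x1, x2 in {0, 1}
              [0.15, 0.20]])
draws = rng.choice(4, size=100_000, p=p.ravel())
q = np.bincount(draws, minlength=4).reshape(2, 2) / draws.size

t1 = np.log(q[1, 0] / q[0, 0])             # first-order measures
t2 = np.log(q[0, 1] / q[0, 0])
t12 = np.log(q[1, 1] * q[0, 0] / (q[1, 0] * q[0, 1]))  # second-order

print(f"theta1 = {t1:.3f}, theta2 = {t2:.3f}, theta12 = {t12:.3f}")
# theta12 = 0 would mean the two neurons fire independently
```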

Journal ArticleDOI
TL;DR: A realistic neural model designed to explore the computational power of pulsed coding at the level of small cognitive systems is presented, with synaptic adaptation accomplished through a set of non-linear dynamic weights and on-line, life-long modulation.
Abstract: Designing a biologically inspired neural architecture as a controller for a complete animat or physical robot environment, to test the hypotheses on intelligence or cognition is non-trivial, particularly, if the controller is a network of spiking neurons. As a result, simulators that integrate spike coding and artificial or real-world platforms are scarce. In this paper, we present artificial intelligence simulator of cognition, a software simulator designed to explore the computational power of pulsed coding at the level of small cognitive systems. Our focus is on convivial graphical user interface, real-time operation and multilevel Hebbian synaptic adaptation, accomplished through a set of non-linear dynamic weights and on-line, life-long modulation. Inclusions of transducer and hormone components, intrinsic oscillator and several learning functions in a discrete spiking neural algorithm are distinctive features of the software. Additional features are the easy link between the production of specific neural architectures and an artificial 2D-world simulator, where one or more animats implement an input–output transfer function in real-time, as do robots in the real world. As a result, the simulator code is exportable to a robot’s microprocessor. This realistic neural model is thus amenable to investigate several time related cognitive problems.

Journal ArticleDOI
TL;DR: It is shown that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
Abstract: An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.

Journal ArticleDOI
Meiqin Liu
TL;DR: By combining a Lyapunov functional with the S-procedure, some useful criteria of global asymptotic stability for the discrete-time SNNMs are derived, whose conditions are formulated as linear matrix inequalities.
Abstract: In order to conveniently analyze the stability of various discrete-time recurrent neural networks (RNNs), including bidirectional associative memory, Hopfield, cellular, and Cohen-Grossberg neural networks, recurrent multilayer perceptrons, etc., a novel neural network model, named the standard neural network model (SNNM), is advanced to describe this class of discrete-time RNNs. The SNNM is the interconnection of a linear dynamic system and a bounded static nonlinear operator. By combining a Lyapunov functional with the S-procedure, some useful criteria of global asymptotic stability for the discrete-time SNNMs are derived, whose conditions are formulated as linear matrix inequalities. Most delayed (or nondelayed) RNNs can be transformed into SNNMs so that their stability can be analyzed in a unified way. Some application examples of the SNNMs to the stability analysis of discrete-time RNNs show that the SNNMs make the stability conditions of the RNNs easy to verify.
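
As a degenerate illustration (the linear dynamic part alone, with the nonlinear operator and its S-procedure sector terms removed), global asymptotic stability of x_{k+1} = A x_k can be certified numerically by solving the discrete Lyapunov equation and checking positive definiteness; the matrix below is illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Lyapunov certificate for the linear part of an SNNM-style model:
# solve A' P A - P = -Q with Q > 0 and verify P > 0. The full SNNM
# criterion adds LMI terms for the bounded static nonlinearity.

A = np.array([[0.5, 0.2],
              [-0.1, 0.6]])
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)        # solves A' P A - P + Q = 0
eigs = np.linalg.eigvalsh(P)
print("P =\n", np.round(P, 3))
print("P positive definite:", bool(np.all(eigs > 0)),
      "-> linear part globally asymptotically stable")
```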

Journal ArticleDOI
TL;DR: This paper extends the original backpropagation algorithm to a K nearest neighbors algorithm (K-NARX), where the number K is determined genetically along with a set of key parameters, and focuses on a flexible structure allowing addition of new minimization algorithms and activation functions in the future.
Abstract: In this paper, we focus on the use of the three-layer backpropagation network in vector-valued time series estimation problems. The neural network provides a framework for noncomplex calculations to solve the estimation problem, yet the search for optimal or even feasible neural networks for stochastic processes is both time-consuming and uncertain. The backpropagation algorithm—written in strict ANSI C—has been implemented as a standalone support library for the genetic hybrid algorithm (GHA), running on any sequential or parallel mainframe computer. In order to cope with ill-conditioned time series problems, we extended the original backpropagation algorithm to a K nearest neighbors algorithm (K-NARX), where the number K is determined genetically along with a set of key parameters. In the K-NARX algorithm, the terminal solution at instant t can be used as a starting point for the next t, which tends to stabilize the optimization process when dealing with autocorrelated time series vectors. This possibility has proved to be especially useful in difficult time series problems. Following the prevailing research directions, we use a genetic algorithm to determine optimal parameterizations for the network, including the lag structure for the nonlinear vector time series system, the net structure with one or two hidden layers and the corresponding number of nodes, the type of activation function (currently the standard logistic sigmoid, a bipolar transformation, the hyperbolic tangent, an exponential function, and the sine function), the type of minimization algorithm, the number K of nearest neighbors in the K-NARX procedure, the initial value of the Levenberg–Marquardt damping parameter, and the value of the neural learning (stabilization) coefficient α. We have focused on a flexible structure allowing addition of, e.g., new minimization algorithms and activation functions in the future. We demonstrate the power of the genetically trimmed K-NARX algorithm on a representative data set.

Journal ArticleDOI
TL;DR: This work uses an unsupervised neural spike-timing-based learning rule combined with Hebbian learning to train an algorithm that improves on existing algorithms by not assuming a known topography of the target map and includes a novel method for automatically detecting edge elements.
Abstract: Biological neural systems must grow their own connections and maintain topological relations between elements that are related to the sensory input surface. Artificial systems have traditionally prewired such maps, but the sensor arrangement is not always known and can be expensive to specify before run time. Here we present a method for learning and updating topographic maps in systems comprising modular, event-based elements. Using an unsupervised neural spike-timing-based learning rule combined with Hebbian learning, our algorithm uses the spatiotemporal coherence of the external world to train its network. It improves on existing algorithms by not assuming a known topography of the target map and includes a novel method for automatically detecting edge elements. We show how, for stimuli that are small relative to the sensor resolution, the temporal learning window parameters can be determined without using any user-specified constants. For stimuli that are larger relative to the sensor resolution, we provide a parameter extraction method that generally outperforms the small-stimulus method but requires one user-specified constant. The algorithm was tested on real data from a 64 × 64-pixel section of an event-based temporal contrast silicon retina and a 360-tile tactile luminous floor. It learned 95.8% of the correct neighborhood relations for the silicon retina within about 400 seconds of real-world input from a driving scene and 98.1% correct for the sensory floor after about 160 minutes of human pedestrian traffic. Residual errors occurred in regions receiving little or ambiguous input, and the learned topological representations were able to update automatically in response to simulated damage. Our algorithm has applications in the design of modular autonomous systems in which the interfaces between components are learned during operation rather than at design time.

Journal ArticleDOI
TL;DR: This work proposes an implementation that exploits the synchronization of neural activities within a recurrent network that can be interpreted as a self-stabilizing mechanism for spike-timing-dependent plasticity (STDP).
Abstract: Predictive learning rules, where synaptic changes are driven by the difference between a random input and its reconstruction derived from internal variables, have proven to be very stable and efficient. However, it is not clear how such learning rules could take place in biological synapses. Here we propose an implementation that exploits the synchronization of neural activities within a recurrent network. In this framework, the asymmetric shape of spike-timing-dependent plasticity (STDP) can be interpreted as a self-stabilizing mechanism. Our results suggest a novel hypothesis concerning the computational role of neural synchrony and oscillations.
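
The asymmetric window in question has the standard two-exponential form: potentiation when the presynaptic spike precedes the postsynaptic spike, depression when it follows. The amplitudes and time constants below are illustrative.

```python
import numpy as np

# Standard asymmetric STDP window: the weight change depends on the
# spike time difference dt = t_post - t_pre, with exponential decay on
# either side of coincidence.

A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0         # ms

def stdp(dt):
    """Weight change for a spike time difference dt = t_post - t_pre."""
    if dt >= 0:                          # pre before post: potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)   # post before pre: depression

for dt in (-40, -10, 0, 10, 40):
    print(f"dt = {dt:+4d} ms  ->  dw = {stdp(dt):+.5f}")
```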