Journal ArticleDOI

Spiking neural P systems with request rules

TL;DR: SN P systems with request rules are shown to be Turing universal even with a small number of neurons; with 47 neurons, such systems can compute any Turing computable function.
About: This article was published in Neurocomputing on 2016-06-12 and has received 108 citations to date. The article focuses on the topics: Membrane computing & Computable function.
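The request-rule mechanism is easiest to see in code. The sketch below is a minimal, hypothetical model, not the paper's construction: it assumes spikes of a single type a, a firing rule (E, c, p) that consumes c spikes and emits p when the neuron's content matches the regular expression E, and a request rule (E, s) that brings s spikes into the neuron from the environment under the same kind of check. The rule syntax and semantics are simplified assumptions.

```python
import re

class Neuron:
    """Toy SN P neuron with firing rules and request rules (simplified)."""
    def __init__(self, spikes, fire_rules=(), request_rules=()):
        self.spikes = spikes                 # current number of spikes
        self.fire_rules = fire_rules         # (regex E, consumed c, produced p)
        self.request_rules = request_rules   # (regex E, requested s)

    def step(self):
        word = "a" * self.spikes             # spike content as a unary word
        for E, c, p in self.fire_rules:
            if re.fullmatch(E, word):        # applicability: content matches E
                self.spikes -= c
                return p                     # spikes sent to neighbours
        for E, s in self.request_rules:
            if re.fullmatch(E, word):
                self.spikes += s             # spikes requested from outside
        return 0

# A neuron that fires on two spikes and otherwise requests one more spike.
n = Neuron(spikes=0, fire_rules=[("aa", 2, 1)], request_rules=[("a?", 1)])
for t in range(5):
    print(f"t={t}: emitted {n.step()}, holds {n.spikes} spike(s)")
```

Running the loop shows the neuron alternating between requesting spikes until it holds two and then firing, which is the kind of spike-economy that request rules add to the classical consume-and-produce model.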
Citations
Journal ArticleDOI
TL;DR: Spiking Neural P Systems with Communication on Request are proved to be computationally universal, that is, equivalent to Turing machines, provided that two types of spikes are used.
Abstract: Spiking Neural P Systems are neural system models in which each neuron mimics a biological cell and communication between neurons is based on spikes. In the Spiking Neural P systems investigated so far, the application of evolution rules depends on the contents of a neuron (checked by means of a regular expression). In these P systems, a specified number of spikes are consumed and a specified number of spikes are produced, and then sent to each of the neurons linked by a synapse to the evolving neuron.

In the present work, a novel communication strategy among neurons of Spiking Neural P Systems is proposed. In the resulting models, called Spiking Neural P Systems with Communication on Request, spikes are requested from neighboring neurons, depending on the contents of the neuron (still checked by means of a regular expression). Unlike the traditional Spiking Neural P systems, no spikes are consumed or created: the spikes are only moved along synapses and replicated (when two or more neurons request the contents of the same neuron).

The Spiking Neural P Systems with Communication on Request are proved to be computationally universal, that is, equivalent to Turing machines, provided that two types of spikes are used. The paper closes with a list of open problems for further research.
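The communication-on-request step described in this abstract can be sketched directly: spikes are neither created nor consumed, only moved and replicated. The snippet below is a simplified illustration; the rule format (regex E, source neuron, spike type, count) and the one-rule-per-neuron tie-breaking are assumptions, and only the replication behavior from the abstract is shown.

```python
import re

def contents_word(neuron):
    # Encode the multiset over spike types {a, b} as a word for regex checks.
    return "a" * neuron["a"] + "b" * neuron["b"]

def step(neurons, rules):
    # Collect applicable requests: (requester, source, spike type, count).
    requests = []
    for name, rule_list in rules.items():
        for E, src, typ, k in rule_list:
            if re.fullmatch(E, contents_word(neurons[name])):
                requests.append((name, src, typ, k))
                break                        # at most one rule per neuron
    # Deliver a copy to every requester, then remove the spikes from each
    # source once (replication when several neurons query the same source).
    removed = {}
    for requester, src, typ, k in requests:
        k = min(k, neurons[src][typ])        # cannot take more than is there
        neurons[requester][typ] += k
        removed[(src, typ)] = max(removed.get((src, typ), 0), k)
    for (src, typ), k in removed.items():
        neurons[src][typ] -= k
    return neurons

neurons = {"n1": {"a": 3, "b": 0}, "n2": {"a": 0, "b": 0}, "n3": {"a": 0, "b": 1}}
rules = {"n2": [("", "n1", "a", 2)],         # empty n2 asks n1 for two a-spikes
         "n3": [("b", "n1", "a", 2)]}        # n3, holding one b, asks as well
print(step(neurons, rules))                  # both get copies; n1 keeps one a
```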

152 citations

Journal ArticleDOI
03 Jan 2020 - PLOS ONE
TL;DR: Results from experiments on the S&P 500 and DJIA datasets show that the coefficient of determination of the attention-based LSTM model is higher than 0.94 on both datasets, and its mean square error is lower than 0.05 on both.
Abstract: The stock market is known for its extreme complexity and volatility, and people are always looking for an accurate and effective way to guide stock trading. Long short-term memory (LSTM) neural networks, a development of recurrent neural networks (RNN), have significant application value in many fields; in addition, LSTM avoids long-term dependence issues thanks to its unique storage unit structure, which helps in predicting financial time series. Building on LSTM and an attention mechanism, a wavelet transform is used to denoise historical stock data, features are extracted and trained, and a prediction model of the stock price is established. We compared the results with three other models: the LSTM model, the LSTM model with wavelet denoising, and the gated recurrent unit (GRU) neural network model, on the S&P 500, DJIA, and HSI datasets. Results from experiments on the S&P 500 and DJIA datasets show that the coefficient of determination of the attention-based LSTM model is higher than 0.94 on both, and the mean square error of our model is lower than 0.05 on both.
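The pipeline outlined in this abstract, wavelet denoising followed by an attention-based LSTM, can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the soft-thresholding denoiser, the dot-product attention over hidden states, the window length of 30, and all hyperparameters are assumptions.

```python
import numpy as np
import pywt                                  # PyWavelets, for the denoising step
import torch
import torch.nn as nn

def wavelet_denoise(x, wavelet="db4", level=2):
    """Soft-threshold the detail coefficients of a 1-D series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

class AttentionLSTM(nn.Module):
    """LSTM whose hidden states are pooled by learned attention weights."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)    # attention score per time step
        self.head = nn.Linear(hidden, 1)     # next-value regression head

    def forward(self, x):                    # x: (batch, time, features)
        h, _ = self.lstm(x)                  # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)
        context = (w * h).sum(dim=1)         # attention-weighted summary
        return self.head(context).squeeze(-1)

# Toy usage on a synthetic noisy series (a stand-in for closing prices).
series = np.sin(np.linspace(0, 20, 500)) + 0.3 * np.random.randn(500)
clean = wavelet_denoise(series)
windows = np.lib.stride_tricks.sliding_window_view(clean, 30)[:-1]
targets = clean[30:]
x = torch.tensor(windows[..., None], dtype=torch.float32)
y = torch.tensor(targets, dtype=torch.float32)

model = AttentionLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                           # a few illustrative epochs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```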

151 citations

Journal ArticleDOI
TL;DR: A novel method of constructing logic circuits that work in a neural-like manner is demonstrated, shedding some light on potential directions for designing neural circuits theoretically.

121 citations

Journal ArticleDOI
TL;DR: It is shown that SN P systems with colored spikes having three neurons are sufficient to compute Turing computable sets of numbers, and that such systems having two neurons are able to compute the family of recursive functions.
Abstract: Spiking neural P systems (SN P systems) are bio-inspired neural-like computing models, obtained by abstracting the way biological neurons spike and communicate by means of spikes in the central nervous system. SN P systems perform well in describing and modeling behaviors that occur simultaneously, yet are weak at modeling complex systems given the limits of using a single type of spike. In this paper, drawing on ideas from colored Petri nets, SN P systems with colored spikes are proposed, where a finite set of colors is introduced to mark the spikes such that each spike is associated with a unique color. The updated spiking rule is applied by consuming and emitting a number of colored spikes (of the same or different colors). The computational power of the systems is investigated. Specifically, it is shown that SN P systems with colored spikes having three neurons are sufficient to compute Turing computable sets of numbers, and that such systems having two neurons are able to compute the family of recursive functions. These results improve the corresponding bounds on the number of neurons needed to construct universal SN P systems that recently appeared in [Neurocomputing, 2016, 193(12): 193–200]. To the best of our knowledge, this is the smallest number of neurons used so far to construct Turing universal SN P systems as number generators and function computing devices.
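A colored-spike rule can be sketched as a multiset rewrite. The snippet below is a simplification: the paper checks rule applicability with regular expressions over the neuron's contents, whereas plain multiset inclusion is used here, and the colors and rule are made up for illustration.

```python
from collections import Counter

def apply_rule(contents, consumed, produced):
    """Apply a colored spiking rule if the consumed spikes are present."""
    if all(contents[c] >= n for c, n in consumed.items()):
        contents = contents - Counter(consumed)   # consume colored spikes
        return contents, Counter(produced)        # emit spikes to neighbours
    return contents, Counter()                    # rule not applicable

neuron = Counter({"red": 2, "blue": 1})
rule_in = Counter({"red": 1, "blue": 1})          # consume one red, one blue
rule_out = Counter({"green": 2})                  # emit two green spikes

neuron, emitted = apply_rule(neuron, rule_in, rule_out)
print(neuron)    # Counter({'red': 1})
print(emitted)   # Counter({'green': 2})
```

Marking each spike with a color is what lets a single rule carry more information than a unary spike count, which is the source of the smaller universal constructions reported in the abstract.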

104 citations

Journal ArticleDOI
TL;DR: Experimental results on an independent dataset show that iDNA-KACC-EL outperforms all the other state-of-the-art predictors, indicating that it would be a useful computational tool for DNA-binding protein identification.
Abstract: DNA-binding proteins play a pivotal role in various intra- and extra-cellular activities ranging from DNA replication to gene expression control. With the rapid development of next-generation sequencing techniques, the number of protein sequences is increasing at an unprecedented rate, so it is necessary to develop computational methods that identify DNA-binding proteins from protein sequence information alone. In this study, a novel method called iDNA-KACC is presented, which combines the support vector machine (SVM) with the auto-cross covariance transformation. The protein sequences are first converted into a profile-based representation, and then into a series of fixed-length vectors by the auto-cross covariance transformation with Kmer composition; the sequence-order effect can be effectively captured by this scheme. These vectors are then fed into the SVM to discriminate DNA-binding proteins from non-DNA-binding ones. iDNA-KACC achieves an overall accuracy of 75.16% and a Matthews correlation coefficient of 0.5 under a rigorous jackknife test. Its performance is further improved by employing an ensemble learning approach; the improved predictor is called iDNA-KACC-EL. Experimental results on an independent dataset show that iDNA-KACC-EL outperforms all the other state-of-the-art predictors, indicating that it would be a useful computational tool for DNA-binding protein identification.
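The feature pipeline described here, k-mer composition plus covariance-based sequence-order features fed to an SVM, can be sketched as below. This is a loose illustration: the paper uses profile-based (PSSM) representations and the full auto-cross covariance between profile columns, whereas this sketch substitutes a single made-up hydrophobicity scale and plain auto-covariance, and the sequences and labels are invented.

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
# Toy per-residue property values (a stand-in for the paper's profile-based
# representation, which requires an alignment tool to compute).
HYDRO = dict(zip(AA, np.linspace(-4.5, 4.5, len(AA))))

def kmer_composition(seq, k=2):
    """Normalized k-mer counts: a fixed-length 20**k vector."""
    kmers = ["".join(p) for p in product(AA, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        v[index[seq[i : i + k]]] += 1
    return v / max(1, len(seq) - k + 1)

def auto_covariance(seq, max_lag=4):
    """Auto-covariance of a property signal; captures sequence-order effects."""
    x = np.array([HYDRO[a] for a in seq])
    x = x - x.mean()
    return np.array([(x[:-lag] * x[lag:]).mean() for lag in range(1, max_lag + 1)])

def features(seq):
    return np.concatenate([kmer_composition(seq), auto_covariance(seq)])

# Toy training run on made-up sequences (labels are illustrative only).
seqs = ["ACDKRWYLMA", "KRKRHHDEAC", "LLVVIIFFMM", "GGSSTTNNQQ"]
labels = [1, 1, 0, 0]                        # 1 = DNA-binding
X = np.vstack([features(s) for s in seqs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))
```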

72 citations


Cites methods from "Spiking neural P systems with request rules"

  • ...Many machine learning techniques have been applied to solve this important task [7], [19]–[25]....


References
Book
29 Dec 1995
TL;DR: This book, by the authors of the Neural Network Toolbox for MATLAB, provides clear and detailed coverage of fundamental neural network architectures and learning rules, as well as methods for training them and their applications to practical problems.
Abstract: This book, by the authors of the Neural Network Toolbox for MATLAB, provides clear and detailed coverage of fundamental neural network architectures and learning rules. In it, the authors emphasize a coherent presentation of the principal neural networks, methods for training them, and their applications to practical problems. Features:

  • Extensive coverage of training methods for both feedforward networks (including multilayer and radial basis networks) and recurrent networks. In addition to conjugate gradient and Levenberg-Marquardt variations of the backpropagation algorithm, the text also covers Bayesian regularization and early stopping, which ensure the generalization ability of trained networks.

  • Associative and competitive networks, including feature maps and learning vector quantization, explained with simple building blocks.

  • A chapter of practical training tips for function approximation, pattern recognition, clustering and prediction, along with five chapters presenting detailed real-world case studies.

  • Detailed examples and numerous solved problems. Slides and comprehensive demonstration software can be downloaded from hagan.okstate.edu/nnd.html.

6,463 citations

Book
15 Aug 2002
TL;DR: An introduction to spiking neuron models, covering single-neuron and two-dimensional neuron models, models of synaptic plasticity, and population models.
Abstract: Neurons in the brain communicate by short electrical pulses, the so-called action potentials or spikes. How can we understand the process of spike generation? How can we understand information transmission by neurons? What happens if thousands of neurons are coupled together in a seemingly random network? How does the network connectivity determine the activity patterns? And, vice versa, how does the spike activity influence the connectivity pattern? These questions are addressed in this 2002 introduction to spiking neurons aimed at those taking courses in computational neuroscience, theoretical biology, biophysics, or neural networks. The approach will suit students of physics, mathematics, or computer science; it will also be useful for biologists who are interested in mathematical modelling. The text is enhanced by many worked examples and illustrations. There are no mathematical prerequisites beyond what the audience would meet as undergraduates: more advanced techniques are introduced in an elementary, concrete fashion when needed.

2,814 citations

Book
01 Jan 1967
TL;DR: In this book, the authors present an abstract theory that categorically and systematically describes what all these machines can do and what they cannot do, giving sound theoretical or practical grounds for each judgment; the theory tells us in no uncertain terms that the machines' potential range is enormous and that their theoretical limitations are of the subtlest and most elusive sort.
Abstract: From the Preface (see Front Matter for full Preface): Man has within a single generation found himself sharing the world with a strange new species: the computers and computer-like machines. Neither history, nor philosophy, nor common sense will tell us how these machines will affect us, for they do not do "work" as did the machines of the Industrial Revolution. Instead of dealing with materials or energy, we are told that they handle "control" and "information" and even "intellectual processes." There are very few individuals today who doubt that the computer and its relatives are developing rapidly in capability and complexity, and that these machines are destined to play important (though not as yet fully understood) roles in society's future. Though only some of us deal directly with computers, all of us are falling under the shadow of their ever-growing sphere of influence, and thus we all need to understand their capabilities and their limitations. It would indeed be reassuring to have a book that categorically and systematically described what all these machines can do and what they cannot do, giving sound theoretical or practical grounds for each judgment. However, although some books have purported to do this, it cannot be done, for the following reasons: a) computer-like devices are utterly unlike anything which science has ever considered, and we still lack the tools necessary to fully analyze, synthesize, or even think about them; and b) the methods discovered so far are effective in certain areas, but are developing much too rapidly to allow a useful interpretation and interpolation of results. The abstract theory, as described in this book, tells us in no uncertain terms that the machines' potential range is enormous and that their theoretical limitations are of the subtlest and most elusive sort. There is no reason to suppose machines have any limitations not shared by man.

2,219 citations

BookDOI
01 Apr 1997
TL;DR: This first handbook of formal languages gives comprehensive, up-to-date coverage of all important aspects and subareas of the field.
Abstract: The theory of formal languages is the oldest and most fundamental area of theoretical computer science. It has served as a basis of formal modeling from the early stages of programming languages to the recent beginnings of DNA computing. This first handbook of formal languages gives comprehensive, up-to-date coverage of all important aspects and subareas of the field. The authors include the best specialists of the various subareas, 50 altogether. The maturity of the field makes it possible to include a historical perspective in many presentations. The individual chapters can be studied independently, both as a text and as a source of reference. The Handbook is an invaluable aid for advanced students and specialists in theoretical computer science and related areas of mathematics, linguistics, and biology.

1,915 citations


"Spiking neural P systems with reque..." refers background in this paper

  • ..., from [32], and some basic notions in SN P systems [3,33]....


Journal ArticleDOI
TL;DR: It is shown that networks of spiking neurons are, with regard to the number of neurons needed, computationally more powerful than other neural network models based on McCulloch-Pitts neurons and sigmoidal gates.

1,731 citations


"Spiking neural P systems with reque..." refers background in this paper

  • ...In terms of motivation of models, SN P systems fall into the third generation of neural network models [4]....
