Journal Article

Design of logic gates using spiking neural P systems with homogeneous neurons and astrocytes-like control

TL;DR: A novel method of constructing logic circuits that work in a neural-like manner is demonstrated, shedding light on potential directions for designing neural circuits theoretically.
About: This article was published in Information Sciences on 2016-12-01 and has received 121 citations to date. The article focuses on the topics: NAND logic and three-input universal logic gate.
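This page gives no construction details, but the flavor of a neural-like gate can be sketched: each input line carrying a logical 1 delivers one spike to a gate neuron, and the neuron's firing rule maps the spike count to the Boolean output. The Python sketch below is an illustrative assumption (the function sn_gate and the rule encoding are hypothetical), not the SN P system construction from the article.

    # Hypothetical sketch: a gate neuron counts incoming spikes and fires
    # according to a simple rule, mimicking how a spiking-neural gate might
    # map spikes to Boolean outputs. Not the article's construction.
    def sn_gate(inputs, kind="AND"):
        spikes = sum(inputs)              # each logical 1 sends one spike
        fire_all = spikes == len(inputs)  # rule applies only on a full spike count
        if kind == "AND":
            return 1 if fire_all else 0
        if kind == "NAND":                # NAND inverts the firing behaviour
            return 0 if fire_all else 1
        raise ValueError(f"unknown gate kind: {kind}")

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", sn_gate((a, b), "NAND"))

Composing such gates would then correspond to wiring one neuron's emitted spikes to the inputs of other neurons.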
Citations
Journal Article
03 Jan 2020 - PLOS ONE
TL;DR: Results from experiments on the S&P 500 and DJIA datasets show that the coefficient of determination of the attention-based LSTM model is higher than 0.94 on both datasets and the mean square error of the model is lower than 0.05 on both.
Abstract: The stock market is known for its extreme complexity and volatility, and people are always looking for an accurate and effective way to guide stock trading. Long short-term memory (LSTM) neural networks, developed from recurrent neural networks (RNN), have significant application value in many fields. In addition, LSTM avoids long-term dependence issues due to its unique storage-unit structure, which helps in predicting financial time series. Building on LSTM and an attention mechanism, a wavelet transform is used to denoise historical stock data, its features are extracted and trained, and a stock-price prediction model is established. We compared the results with three other models, namely the LSTM model, the LSTM model with wavelet denoising, and the gated recurrent unit (GRU) neural network model, on the S&P 500, DJIA, and HSI datasets. Results from experiments on the S&P 500 and DJIA datasets show that the coefficient of determination of the attention-based LSTM model is higher than 0.94 on both datasets, and the mean square error of our model is lower than 0.05 on both.
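As a rough sketch of the architecture the abstract describes (attention over LSTM hidden states to regress a price), a minimal PyTorch version might look as follows; the layer sizes, attention form, and window length are assumptions, not the paper's configuration, and the wavelet denoising step (e.g., with a library such as PyWavelets) would run on the price series before windowing.

    # Minimal attention-based LSTM sketch (PyTorch); hyperparameters are
    # illustrative assumptions, not the paper's configuration.
    import torch
    import torch.nn as nn

    class AttnLSTM(nn.Module):
        def __init__(self, n_features=1, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, 1)   # one attention score per time step
            self.head = nn.Linear(hidden, 1)   # regresses the next price

        def forward(self, x):                  # x: (batch, time, features)
            h, _ = self.lstm(x)                # h: (batch, time, hidden)
            w = torch.softmax(self.attn(h), dim=1)  # weights over time steps
            context = (w * h).sum(dim=1)       # attention-weighted summary
            return self.head(context).squeeze(-1)

    model = AttnLSTM()
    x = torch.randn(8, 30, 1)                  # 8 windows of 30 (denoised) prices
    print(model(x).shape)                      # -> torch.Size([8])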

151 citations

Journal Article
TL;DR: It is proved that such P systems with one cell, using evolutional symport rules of length at most 3 or evolutional antiport rules of length 4, are Turing universal (only the family of all finite sets of positive integers can be generated by such P systems if standard symport/antiport rules are used).

107 citations

Journal Article
TL;DR: It is shown that SN P systems with colored spikes having three neurons are sufficient to compute Turing computable sets of numbers, and that such a system with two neurons is able to compute the family of recursive functions.
Abstract: Spiking neural P systems (SN P systems) are bio-inspired neural-like computing models, obtained by abstracting the way biological neurons spike and communicate by means of spikes in central nervous systems. SN P systems perform well in describing and modeling behaviors that occur simultaneously, yet are weak at modeling complex systems because of the limitation of using a single type of spike. In this paper, drawing on the idea of colored Petri nets, SN P systems with colored spikes are proposed, in which a finite set of colors is introduced to mark the spikes such that each spike is associated with a unique color. The updated spiking rule is applied by consuming and emitting a number of colored spikes (of the same or different colors). The computational power of the systems is investigated. Specifically, it is shown that SN P systems with colored spikes having three neurons are sufficient to compute Turing computable sets of numbers, and that such a system with two neurons is able to compute the family of recursive functions. These results improve on the corresponding results for the number of neurons needed to construct universal SN P systems that recently appeared in [Neurocomputing, 2016, 193(12): 193–200]. To the best of our knowledge, this is so far the smallest number of neurons used to construct Turing universal SN P systems as number generators and function computing devices.
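A toy reading of the colored-spike idea: a neuron's contents become a multiset of spikes keyed by color, and a rule consumes and emits counted, colored spikes. The Python sketch below is an assumption for illustration (the rule format is invented); the paper defines the semantics formally.

    # Toy colored-spike firing: a rule fires only if the neuron holds at
    # least the required multiset of colored spikes, consuming them and
    # emitting new colored spikes. Illustrative only.
    from collections import Counter

    def fire(neuron, consume, emit):
        if all(neuron[color] >= n for color, n in consume.items()):
            neuron.subtract(consume)     # remove the consumed colored spikes
            return Counter(emit)         # spikes sent along the synapses
        return Counter()                 # rule not applicable, nothing emitted

    neuron = Counter({"red": 2, "blue": 1})
    out = fire(neuron, consume={"red": 2}, emit={"green": 1})
    print(+neuron, dict(out))            # Counter({'blue': 1}) {'green': 1}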

104 citations


Cites background from "Design of logic gates using spiking..."

  • ...In terms of applications, SN P systems have been applied to design neural-like logic gates and circuits [45]–[47], and operating systems [48], [49]....


Journal Article
TL;DR: It is proved that one type of spike is enough to guarantee the Turing universality of SNQ P systems, which have previously been proved to be universal when two types of spikes are considered.
Abstract: Spiking neural P systems are a class of third generation neural networks belonging to the framework of membrane computing. Spiking neural P systems with communication on request (SNQ P systems) are...

101 citations

Journal Article
TL;DR: The result of this paper is promising in that it is the first attempt to use SN P systems in pattern recognition after many theoretical advancements of SN P systems, and SN P systems exhibit the feasibility of tackling pattern recognition problems.
Abstract: Spiking neural P systems (SN P systems) are a class of distributed and parallel neural-like computing models, inspired by the way neurons communicate by means of spikes. In this paper, a new variant of the systems, called SN P systems with learning functions, is introduced. Such systems can dynamically strengthen and weaken connections among neurons during the computation. A class of specific SN P systems with a simple Hebbian learning function is constructed to recognize English letters. The experimental results show that the SN P systems achieve an average accuracy rate of 98.76% in the test case without noise. In the test cases with low, medium, and high noise, the SN P systems outperform back propagation neural networks and probabilistic neural networks. Moreover, compared with spiking neural networks, SN P systems perform a little better in recognizing letters with noise. The result of this paper is promising in that it is the first attempt to use SN P systems in pattern recognition after many theoretical advancements of SN P systems, and SN P systems exhibit the feasibility of tackling pattern recognition problems.
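The "simple Hebbian learning function" the abstract mentions can be caricatured as: strengthen a synapse whose pre- and post-synaptic neurons spike together, weaken one whose pre-synaptic neuron spikes alone. The NumPy sketch below is a generic Hebbian step under that assumption, not the paper's exact rule.

    # Generic Hebbian weight update on binary spike vectors; the rule and
    # learning rate are assumptions for illustration.
    import numpy as np

    def hebbian_step(w, pre, post, lr=0.1):
        pre, post = np.asarray(pre, float), np.asarray(post, float)
        strengthen = np.outer(post, pre)      # both sides spiked together
        weaken = np.outer(1.0 - post, pre)    # pre spiked, post stayed silent
        return w + lr * (strengthen - weaken)

    w = np.zeros((2, 3))                      # 3 inputs feeding 2 outputs
    w = hebbian_step(w, pre=[1, 0, 1], post=[1, 0])
    print(w)   # row 0 strengthened on active inputs, row 1 weakened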

78 citations


Cites background from "Design of logic gates using spiking..."

  • ...Inspired by different biological phenomena and mathematical motivations, lots of variants of SN P systems have been proposed, such as SN P systems with anti-spikes [23], [24], asynchronous SN P systems [25], asynchronous SN P systems with local synchronization [26], SN P systems with weight [27], SN P systems with astrocyte [28], homogeneous SN P systems [29], [30], sequential SN P systems [31], SN P systems with rules on synapses [32]–[34]....


References
Journal Article
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Book
01 Jan 2010
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.
Abstract: For graduate-level neural network courses offered in departments of Computer Engineering, Electrical Engineering, and Computer Science. Neural Networks and Learning Machines, Third Edition is renowned for its thoroughness and readability. This well-organized and completely up-to-date text remains the most comprehensive treatment of neural networks from an engineering perspective, ideal for professional engineers and research scientists. Matlab codes used for the computer experiments in the text are available for download at: http://www.pearsonhighered.com/haykin/ Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together. Ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either independently.

4,943 citations

Journal Article
TL;DR: In this paper, the authors empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples, and they suggest that unsupervised pretraining guides the learning towards basins of attraction of minima that support better generalization.
Abstract: Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The main question investigated here is the following: how does unsupervised pre-training work? Answering this question is important if learning in deep architectures is to be further improved. We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.

2,036 citations

Book
01 Apr 1997
TL;DR: This first handbook of formal languages gives comprehensive, up-to-date coverage of all important aspects and subareas of the field.
Abstract: The theory of formal languages is the oldest and most fundamental area of theoretical computer science. It has served as a basis of formal modeling from the early stages of programming languages to the recent beginnings of DNA computing. This first handbook of formal languages gives comprehensive, up-to-date coverage of all important aspects and subareas of the field. The best specialists of the various subareas, 50 altogether, are among the authors. The maturity of the field makes it possible to include a historical perspective in many presentations. The individual chapters can be studied independently, both as a text and as a source of reference. The Handbook is an invaluable aid for advanced students and specialists in theoretical computer science and related areas in mathematics, linguistics, and biology.

1,915 citations

Book
01 Jan 2002
TL;DR: This book introduces membrane computing, what it is and what it is not, and closes with attempts to get back to reality, open problems, and universality results.
Abstract: Contents: Preface; 1. Introduction: Membrane Computing, What It Is and What It Is Not; 2. Prerequisites; 3. Membrane Systems with Symbol-Objects; 4. Trading Evolution for Communication; 5. Structuring Objects; 6. Networks of Membranes; 7. Trading Space for Time; 8. Further Technical Results; 9. (Attempts to Get) Back to Reality; Open Problems; Universality Results; Bibliography; Index.

1,760 citations