Journal ArticleDOI

Spiking Neural P Systems With Colored Spikes

TL;DR: It is shown that SN P systems with colored spikes having three neurons are sufficient to compute Turing computable sets of numbers, and that such systems with two neurons can compute the family of recursive functions.
Abstract: Spiking neural P systems (SN P systems) are bio-inspired neural-like computing models, obtained by abstracting the way biological neurons spike and communicate by means of spikes in central nervous systems. SN P systems perform well in describing and modeling behaviors that occur simultaneously, yet are weak at modeling complex systems owing to the limitation of using a single type of spike. In this paper, drawing on the idea of colored Petri nets, SN P systems with colored spikes are proposed, where a finite set of colors is introduced to mark the spikes such that each spike is associated with a unique color. The updated spiking rule is applied by consuming and emitting a number of colored spikes (of the same or different colors). The computation power of the systems is investigated. Specifically, it is shown that SN P systems with colored spikes having three neurons are sufficient to compute Turing computable sets of numbers, and that such systems with two neurons can compute the family of recursive functions. These results improve the corresponding bounds on the number of neurons needed to construct universal SN P systems that recently appeared in [Neurocomputing, 2016, 193(12): 193–200]. To the best of our knowledge, this is the smallest number of neurons used to date to construct Turing universal SN P systems as number generators and function computing devices.
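As a rough illustration of the colored-spike mechanism described in the abstract, the toy simulation below steps a network of neurons whose rules consume spikes of one color and emit spikes of another to connected neurons. This is only a sketch of the idea, not the paper's formal semantics: regular-expression guards, delays, and forgetting rules are omitted, and all names are hypothetical.

```python
from collections import Counter

class Neuron:
    def __init__(self, spikes, rules, targets):
        # spikes: mapping color -> count currently held in the neuron
        # rules: list of (consume_color, consume_n, emit_color, emit_n)
        # targets: ids of neurons connected by outgoing synapses
        self.spikes = Counter(spikes)
        self.rules = rules
        self.targets = targets

def step(neurons):
    """One synchronous step: each neuron applies its first enabled rule."""
    outgoing = []  # (target_id, color, count), delivered after all firings
    for n in neurons.values():
        for c_col, c_n, e_col, e_n in n.rules:
            if n.spikes[c_col] >= c_n:      # rule is enabled
                n.spikes[c_col] -= c_n      # consume colored spikes
                for t in n.targets:
                    outgoing.append((t, e_col, e_n))
                break
    for t, col, k in outgoing:              # deliver emitted spikes
        neurons[t].spikes[col] += k

# Two neurons: neuron 1 turns one red spike into two blue spikes for neuron 2.
net = {
    1: Neuron({"red": 3}, [("red", 1, "blue", 2)], targets=[2]),
    2: Neuron({}, [], targets=[]),
}
step(net)
```

After one step, neuron 1 holds two remaining red spikes and neuron 2 has received two blue spikes, showing how color lets a single rule carry more information than a plain spike count.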
Citations
Journal ArticleDOI
TL;DR: This study summarizes the application of ML in microbiology and shows that ML can be used in many aspects of microbiology research, especially classification problems, and for exploring the interaction between microorganisms and the surrounding environment.
Abstract: Microorganisms are ubiquitous and closely related to people's daily lives. Since they were first discovered in the 19th century, researchers have shown great interest in microorganisms. Microorganisms have traditionally been studied through cultivation, but this method is expensive and time-consuming, and it cannot keep pace with the development of high-throughput sequencing technology. To deal with this problem, machine learning (ML) methods have been widely applied to the field of microbiology. Literature reviews have shown that ML can be used in many aspects of microbiology research, especially classification problems, and for exploring the interaction between microorganisms and the surrounding environment. In this study, we summarize the application of ML in microbiology.

118 citations


Cites background from "Spiking Neural P Systems With Color..."

  • ..., 2017b) and computational intelligence methods (Cabarle et al., 2017; Song et al., 2018), can be promising in discovering the relationship between diseases and microbes....


Journal ArticleDOI
TL;DR: This work provides a comprehensive review of the biological importance of CPPs, CPP databases and existing ML-based methods for CPP prediction, and finds that existing prediction tools tend to predict CPPs and non-CPPs of 20–25 residues more accurately than peptides in other length ranges.
Abstract: Cell-penetrating peptides (CPPs) facilitate the delivery of therapeutically relevant molecules, including DNA, proteins and oligonucleotides, into cells both in vitro and in vivo. This unique ability opens up the possibility of using CPPs for therapeutic delivery and suggests potential applications in clinical therapy. Over the last few decades, a number of machine learning (ML)-based prediction tools have been developed, and some of them are freely available as web portals. However, the predictions produced by various tools are difficult to quantify and compare. In particular, there is no systematic comparison of the web-based prediction tools in performance, especially in practical applications. In this work, we provide a comprehensive review of the biological importance of CPPs, CPP databases and existing ML-based methods for CPP prediction. To evaluate current prediction tools, we conducted a comparative study and analyzed a total of 12 models from 6 publicly available CPP prediction tools on 2 benchmark validation sets of CPPs and non-CPPs. Our benchmarking results demonstrated that a model from KELM-CPPpred, namely KELM-hybrid-AAC, showed a significant improvement in overall performance compared to the other 11 prediction models. Moreover, through a length-dependency analysis, we find that existing prediction tools tend to predict CPPs and non-CPPs of 20–25 residues more accurately than peptides in other length ranges.

114 citations

Journal ArticleDOI
Bing Rao, Chen Zhou, Guoying Zhang, Ran Su, Leyi Wei
TL;DR: This study establishes a feature representation learning model that can explore class and probabilistic information embedded in anticancer peptides (ACPs) by integrating a total of 29 different sequence-based feature descriptors, and demonstrates that the fused multiview features have more discriminative ability to capture the characteristics of ACPs.
Abstract: Fast and accurate identification of the peptides with anticancer activity potential from large-scale proteins is currently a challenging task. In this study, we propose a new machine learning predictor, namely, ACPred-Fuse, that can automatically and accurately predict protein sequences with or without anticancer activity in peptide form. Specifically, we establish a feature representation learning model that can explore class and probabilistic information embedded in anticancer peptides (ACPs) by integrating a total of 29 different sequence-based feature descriptors. In order to make full use of various multiview information, we further fused the class and probabilistic features with handcrafted sequential features and then optimized the representation ability of the multiview features, which are ultimately used as input for training our prediction model. By comparing the multiview features and existing feature descriptors, we demonstrate that the fused multiview features have more discriminative ability to capture the characteristics of ACPs. In addition, the information from different views is complementary for the performance improvement. Finally, our benchmarking comparison results showed that the proposed ACPred-Fuse is more precise and promising in the identification of ACPs than existing predictors. To facilitate the use of the proposed predictor, we built a web server, which is now freely available via http://server.malab.cn/ACPred-Fuse.

78 citations

Journal ArticleDOI
TL;DR: A parallel image skeletonizing method based on SN P systems with weights is proposed; it can process a number of pixels of an image in parallel by spiking multiple neurons simultaneously at any computation step.
Abstract: Spiking neural P systems (SN P systems, for short) are bio-inspired neural-like computing models under the framework of membrane computing, which are also known as a candidate for the third generation of neural networks. In this work, a parallel image skeletonizing method is proposed based on SN P systems with weights. Specifically, an SN P system with weights is constructed to implement the Zhang–Suen image skeletonizing algorithm. Instead of the serial calculation of the original Zhang–Suen algorithm, the proposed method processes a number of pixels of an image in parallel by spiking multiple neurons simultaneously at any computation step. As demonstrated by the experimental results, our method achieves higher data-reduction efficiency and produces simpler skeletons with fewer noise spurs than the method developed in Díaz-Pernil et al. (Neurocomputing 115:81–91, 2013) when skeletonizing images such as handwritten words.
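For reference, the serial Zhang–Suen thinning algorithm that the SN P system above parallelizes can be sketched as follows. This is a standard textbook formulation on a 0/1 pixel grid, not the paper's SN P system implementation; each of the two sub-iterations already deletes pixels based on a snapshot of the image, which is exactly the per-step parallelism the neurons exploit.

```python
def zhang_suen(img):
    """Zhang-Suen thinning on a 0/1 grid (list of lists), in place."""
    h, w = len(img), len(img[0])
    changed = True
    while changed:
        changed = False
        for phase in (0, 1):
            to_clear = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    # neighbours P2..P9, clockwise from north
                    p = [img[y-1][x], img[y-1][x+1], img[y][x+1],
                         img[y+1][x+1], img[y+1][x], img[y+1][x-1],
                         img[y][x-1], img[y-1][x-1]]
                    b = sum(p)                      # black neighbours B(p)
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1
                            for i in range(8))      # 0->1 transitions A(p)
                    if phase == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((y, x))
            for y, x in to_clear:                   # delete after the scan
                img[y][x] = 0
            changed = changed or bool(to_clear)
    return img
```

Thinning a solid 3x3 block, for instance, reduces it to a single central pixel while preserving connectivity.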

76 citations

Journal ArticleDOI
TL;DR: Under 10-fold cross-validation, the model constructed in this study achieved sensitivity, specificity, and accuracy rates surpassing 85%, 80%, and 82%, respectively, indicating that the classification model is an effective tool for identifying electron transport proteins.
Abstract: Cellular respiration provides direct energy substances for living organisms. Electron storage and transportation should be completed through electron transport chains during the cellular respiratio...
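The sensitivity, specificity, and accuracy figures quoted in the TL;DR are the standard binary-classification metrics, computed from confusion-matrix counts as below (a generic illustration, not the study's own code; labels and data are hypothetical):

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity (true-positive rate), specificity (true-negative rate),
    and accuracy from binary labels (1 = positive class)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in pairs)  # correctly found positives
    tn = sum(t == 0 and p == 0 for t, p in pairs)  # correctly found negatives
    fp = sum(t == 0 and p == 1 for t, p in pairs)
    fn = sum(t == 1 and p == 0 for t, p in pairs)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(pairs),
    }

m = classification_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

In 10-fold cross-validation these metrics are computed on each held-out fold and then averaged, which is what the reported thresholds (85%, 80%, 82%) refer to.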

66 citations

References
Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Journal ArticleDOI
01 Mar 1996
TL;DR: The article discusses the motivations behind the development of ANNs and describes the basic biological neuron and the artificial computational model, and outlines network architectures and learning processes, and presents some of the most commonly used ANN models.
Abstract: Artificial neural nets (ANNs) are massively parallel systems with large numbers of interconnected simple processors. The article discusses the motivations behind the development of ANNs and describes the basic biological neuron and the artificial computational model. It outlines network architectures and learning processes, and presents some of the most commonly used ANN models. It concludes with character recognition, a successful ANN application.

4,281 citations

Journal ArticleDOI
TL;DR: A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons and combines the biological plausibility of Hodgkin-Huxley-type dynamics with the computational efficiency of integrate-and-fire neurons.
Abstract: A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biological plausibility of Hodgkin-Huxley-type dynamics with the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC.
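The model referenced here is the well-known two-variable Izhikevich neuron, v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with reset v <- c, u <- u + d when v reaches 30 mV. The sketch below simulates one neuron with 1 ms Euler steps; parameter values are the published "regular spiking" defaults, and the constant-current drive is an illustrative choice.

```python
def izhikevich(I, T=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron for T ms with constant input current I;
    returns spike times in ms. Defaults give a regular-spiking cortical cell."""
    v, u = c, b * c                 # start at the reset potential
    spikes = []
    for t in range(T):
        # two 0.5 ms sub-steps for v improve numerical stability
        for _ in range(2):
            v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:               # spike: record, then reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

# With ~10 units of constant drive the neuron fires tonically;
# with no input it settles to rest and stays silent.
tonic = izhikevich(I=10.0)
silent = izhikevich(I=0.0)
```

The real-time claim in the abstract follows from this cheapness: each neuron costs only a handful of multiply-adds per millisecond.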

4,082 citations


"Spiking Neural P Systems With Color..." refers to methods in this paper

  • ...like computing models have been proposed, such as artificial neural networks [1]–[3] and spiking neural networks [4], [5]....


Proceedings Article
07 Dec 2015
TL;DR: In this paper, the authors propose a three-step method that reduces the storage and computation required by neural networks by an order of magnitude, without affecting their accuracy, by learning only the important connections.
Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.

3,967 citations