Author

Abbas Nowzari-Dalini

Bio: Abbas Nowzari-Dalini is an academic researcher from the University of Tehran. The author has contributed to research in topics including Time complexity and Spiking neural network. The author has an h-index of 14 and has co-authored 58 publications receiving 823 citations.


Papers
Journal ArticleDOI
TL;DR: A new algorithm, Kavosh, is presented for finding k-size network motifs with less memory and CPU time than existing algorithms; it is based on counting all k-size sub-graphs of a given graph (directed or undirected).
Abstract: Background: Complex networks are studied across many fields of science and are particularly important for understanding biological processes. Motifs in networks are small connected sub-graphs that occur at significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept for uncovering the structural design principles of complex networks. Existing algorithms for finding network motifs are extremely costly in CPU time and memory consumption and have practical restrictions on the size of motifs. Results: We present a new algorithm (Kavosh) for finding k-size network motifs with less memory and CPU time than other existing algorithms. Our algorithm is based on counting all k-size sub-graphs of a given graph (directed or undirected). We evaluated our algorithm on biological networks of E. coli and S. cerevisiae, and also on non-biological networks: a social and an electronic network. Conclusion: The efficiency of our algorithm is demonstrated by comparing the obtained results with three well-known motif finding tools. For comparison, the CPU time, memory usage and the similarities of obtained motifs are considered. Moreover, Kavosh can be employed for finding motifs of size greater than eight, whereas most other algorithms are restricted to motifs of size at most eight. The Kavosh source code and help files are freely available at: http://Lbb.ut.ac.ir/Download/LBBsoft/Kavosh/.
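The abstract does not spell out Kavosh's own tree-based enumeration scheme, but the core task it describes, enumerating every connected k-node sub-graph of a graph exactly once, can be illustrated with the well-known ESU-style extension scheme (a different but related algorithm). This is a plain-Python sketch for undirected graphs given as adjacency dicts, not Kavosh's actual implementation:

```python
def enumerate_k_subgraphs(adj, k):
    """Enumerate all connected k-node sub-graphs (as frozensets of nodes)
    of an undirected graph, using an ESU-style extension scheme.
    `adj` maps each node to the set of its neighbors; nodes must be
    comparable (e.g., integers) so each sub-graph is found exactly once."""
    results = []

    def extend(sub, extension, v):
        if len(sub) == k:
            results.append(frozenset(sub))
            return
        ext = set(extension)
        while ext:
            w = ext.pop()
            # Grow the extension with "exclusive" neighbors of w: nodes
            # larger than the root v that are neither in the current
            # sub-graph nor adjacent to it (this prevents duplicates).
            new_ext = ext | {
                u for u in adj[w]
                if u > v and u not in sub and all(u not in adj[s] for s in sub)
            }
            extend(sub | {w}, new_ext, v)

    for v in adj:
        # Start from each node, extending only toward larger-labeled nodes.
        extend({v}, {u for u in adj[v] if u > v}, v)
    return results
```

For example, on a triangle 1-2-3 with a pendant node 4 attached to 3, the three connected 3-node sub-graphs {1,2,3}, {1,3,4}, and {2,3,4} are each enumerated once. A motif finder like Kavosh would then classify each sub-graph by its isomorphism type and compare counts against randomized networks.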

221 citations

Journal ArticleDOI
TL;DR: For the first time, it is shown that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier.
Abstract: Reinforcement learning (RL) has recently regained popularity with major achievements such as beating the European Go champion. Here, for the first time, we show that RL can be used efficiently to train a spiking neural network (SNN) to perform object recognition in natural images without using an external classifier. We used a feedforward convolutional SNN and a temporal coding scheme where the most strongly activated neurons fire first, while less activated ones fire later, or not at all. In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire. If this assumption was correct, the neuron was rewarded, i.e., spike-timing-dependent plasticity (STDP) was applied, which reinforced the neuron’s selectivity. Otherwise, anti-STDP was applied, which encouraged the neuron to learn something else. As demonstrated on various image data sets (Caltech, ETH-80, and NORB), this reward-modulated STDP (R-STDP) approach extracted particularly discriminative visual features, whereas classic unsupervised STDP extracts any feature that consistently repeats. As a result, R-STDP outperformed STDP on these data sets. Furthermore, R-STDP is suitable for online learning and can adapt to drastic changes such as label permutations. Finally, it is worth mentioning that both feature extraction and classification were done with spikes, using at most one spike per neuron. Thus, the network is hardware friendly and energy efficient.
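The reward/anti-STDP decision described in the abstract can be sketched as a single weight update for one post-synaptic neuron. This is an illustrative reconstruction, not the paper's exact rule: the learning rates and the multiplicative soft-bound term `w * (1 - w)` are assumptions chosen to keep weights in [0, 1].

```python
def r_stdp_update(w, pre_spiked, reward, a_plus=0.004, a_minus=0.003):
    """One reward-modulated STDP step for a single post-synaptic neuron.
    If the first neuron to fire predicted the correct category (reward=True),
    ordinary STDP is applied; otherwise anti-STDP (the sign is flipped).
    `w` is the list of synaptic weights in [0, 1]; `pre_spiked[i]` says
    whether pre-synaptic input i fired before the post-synaptic spike.
    Learning rates a_plus/a_minus are illustrative, not the paper's values."""
    sign = 1.0 if reward else -1.0
    new_w = []
    for wi, pre in zip(w, pre_spiked):
        if pre:
            # Pre fired before post: potentiate under reward, depress under punishment.
            dw = sign * a_plus * wi * (1 - wi)
        else:
            # Pre fired late or not at all: the opposite adjustment.
            dw = -sign * a_minus * wi * (1 - wi)
        new_w.append(min(1.0, max(0.0, wi + dw)))
    return new_w
```

Under reward, weights of early-firing inputs grow and the neuron becomes more selective for that input pattern; under punishment the update reverses, nudging the neuron to "learn something else," as the abstract puts it.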

131 citations

Journal ArticleDOI
TL;DR: In this article, a deep convolutional spiking neural network (DCSNN) and a latency-coding scheme are proposed to address the limitations of the deep artificial neural networks that have revolutionized the computer vision domain.

120 citations

Journal ArticleDOI
11 Jul 2013-PLOS ONE
TL;DR: By analyzing the modules, a number of genes are identified for the first time as implicated in lung adenocarcinoma; these genes have an over-expression pattern similar to that of EGFR.
Abstract: The goal of this study was to reconstruct a “genome-scale co-expression network” and find important modules in lung adenocarcinoma so that we could identify the genes involved in lung adenocarcinoma. We integrated gene mutation, GWAS, CGH, array-CGH and SNP array data in order to identify important genes and loci on a genome scale. Afterwards, on the basis of the identified genes, a co-expression network was reconstructed from the co-expression data. The reconstructed network was named the “genome-scale co-expression network”. As the next step, 23 key modules were disclosed through clustering. In this study a number of genes have been identified for the first time to be implicated in lung adenocarcinoma by analyzing the modules. The genes EGFR, PIK3CA, TAF15, XIAP, VAPB, Appl1, Rab5a, ARF4, CLPTM1L, SP4, ZNF124, LPP, FOXP1, SOX18, MSX2, NFE2L2, SMARCC1, TRA2B, CBX3, PRPF6, ATP6V1C1, MYBBP1A, MACF1, GRM2, TBXA2R, PRKAR2A, PTK2, PGF and MYO10 are among the genes that belong to modules 1 and 22. All these genes, being implicated in at least one of the phenomena of cell survival, proliferation and metastasis, have an over-expression pattern similar to that of EGFR. In a few modules, genes such as CCNA2 (Cyclin A2), CCNB2 (Cyclin B2), CDK1, CDK5, CDC27, CDCA5, CDCA8, ASPM, BUB1, KIF15, KIF2C, NEK2, NUSAP1, PRC1, SMC4, SYCE2, TFDP1, CDC42 and ARHGEF9 are present, which play a crucial role in cell cycle progression. In addition to the mentioned genes, there are some other genes (e.g., DLGAP5, BIRC5, PSMD2, Src, TTK, SENP2, DOK2, FUS, etc.) in the modules.
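The abstract does not give the details of how the co-expression network was reconstructed from expression data. A common minimal approach, shown here purely as an illustrative sketch (the correlation cutoff of 0.9 is an assumption, not a value from the paper), is to connect two genes when the absolute Pearson correlation of their expression profiles exceeds a threshold:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpression_edges(expr, threshold=0.9):
    """Build co-expression edges: connect two genes whenever the absolute
    Pearson correlation of their expression profiles across samples is at
    least `threshold`. `expr` maps gene name -> list of expression values.
    The threshold is illustrative; the paper does not state its cutoff here."""
    genes = sorted(expr)
    edges = []
    for i, g in enumerate(genes):
        for h in genes[i + 1:]:
            if abs(pearson(expr[g], expr[h])) >= threshold:
                edges.append((g, h))
    return edges
```

The resulting graph would then be clustered (the paper reports 23 key modules) so that tightly co-expressed gene groups can be inspected for disease relevance.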

92 citations

Journal ArticleDOI
TL;DR: SpykeTorch, an open-source high-speed simulation framework based on PyTorch, simulates convolutional SNNs with at most one spike per neuron and the rank-order encoding scheme.
Abstract: Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained a lot of interest, since SNNs are hardware-friendly and energy-efficient. Unlike their non-spiking counterparts, most of the existing SNN simulation frameworks are not efficient enough in practice for large-scale AI tasks. In this paper, we introduce SpykeTorch, an open-source high-speed simulation framework based on PyTorch. This framework simulates convolutional SNNs with at most one spike per neuron and the rank-order encoding scheme. In terms of learning rules, both spike-timing-dependent plasticity (STDP) and reward-modulated STDP (R-STDP) are implemented, but other rules could be implemented easily. Apart from the aforementioned properties, SpykeTorch is highly generic and capable of reproducing the results of various studies. Computations in the proposed framework are tensor-based and performed entirely by PyTorch functions, which in turn enables just-in-time optimization for running on CPUs, GPUs, or multi-GPU platforms.
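The rank-order (intensity-to-latency) encoding that the abstract mentions can be sketched in plain Python: stronger inputs fire earlier, weaker ones later, and each input fires at most once. This is a rough illustration of the coding scheme only, not SpykeTorch's actual API or tensor implementation:

```python
def rank_order_encode(values, n_steps):
    """Rank-order (intensity-to-latency) encoding: the most strongly
    activated inputs spike in the earliest time steps, weaker ones later,
    and each input spikes at most once. Returns, for each of `n_steps`
    time steps, the set of input indices that spike at that step.
    Plain-Python sketch, not SpykeTorch's actual interface."""
    # Sort input indices from strongest to weakest activation.
    order = sorted(range(len(values)), key=lambda i: -values[i])
    spikes = [set() for _ in range(n_steps)]
    for rank, idx in enumerate(order):
        # Spread the ranks evenly across the available time window.
        step = rank * n_steps // len(values)
        spikes[step].add(idx)
    return spikes
```

In SpykeTorch these per-step spike maps are represented as binary tensors so that all downstream convolution and plasticity operations can run as ordinary PyTorch tensor operations on CPU or GPU.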

40 citations


Cited by
Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal Article

1,091 citations

Journal ArticleDOI
TL;DR: The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.

756 citations

Book ChapterDOI
01 Jan 2014
TL;DR: This Springer Briefs volume is dedicated to IDPs and IDPRs, and an attempt is made to compress a massive amount of knowledge into a digest that aims to be of use to those wishing a fast entry into this promising field of structural biology.
Abstract: Nothing is solid about proteins. Governing rules and established laws are constantly broken. As an example, the last decade and a half have witnessed the fall of one of the major paradigms in structural biology. Contrary to the more than century-old belief that the unique function of a protein is determined by its unique structure, which, in its turn, is defined by the unique amino acid sequence, many biologically active proteins lack stable tertiary and/or secondary structure either entirely or in significant parts. Such intrinsically disordered proteins (IDPs) and hybrid proteins containing ordered domains and functional IDP regions (IDPRs) are highly abundant in nature, and many of them are associated with various human diseases. Such disordered proteins and regions are very different from ordered and well-structured proteins and domains at a variety of levels and possess well-recognizable biases in their amino acid compositions and amino acid sequences. A characteristic feature of these proteins is their exceptional structural heterogeneity, where different parts of a given polypeptide chain can be ordered (or disordered) to different degrees. As a result, a typical IDP/IDPR contains a multitude of potentially foldable, partially foldable, differently foldable or non-foldable structural segments. This distribution of conformers is constantly changing in time, where a given segment of a protein molecule has different structures at different time points. The distribution is also constantly changing in response to changes in the environment. This mosaic structural organization is crucial for their functions, and many IDPs are engaged in biological functions that rely on high conformational flexibility and that are not accessible to proteins with unique and fixed structures. As a result, the functional repertoire of IDPs complements that of ordered proteins, with IDPs/IDPRs often being involved in regulation, signaling and control.
This Springer Briefs volume is dedicated to IDPs and IDPRs, and an attempt is made to compress a massive amount of knowledge into a digest that aims to be of use to those wishing a fast entry into this promising field of structural biology.

624 citations