Proceedings ArticleDOI

An improved spiking network conversion for image classification

TL;DR: In this article, an indirect training approach is proposed to avoid the difficulty of direct SNN training: a CNN model is first trained with the RMSprop algorithm, then the optimised weights and biases are mapped to a Spiking Neural Network model converted from the proposed CNN.

Abstract: Image classification remains an interesting problem due to its practical applications in real life. With the capability to learn features automatically, modern Convolutional Neural Network (CNN) models can achieve high accuracy on large and complex benchmark datasets. However, their high computation cost leads to energy consumption problems during training and hardware implementation, which limits their use in mobile and embedded applications. Recently, the Spiking Neural Network (SNN) has been proposed to overcome these drawbacks of CNN models. Like the biological nervous system, an SNN's neurons communicate with each other by sending spike trains, and a neuron performs computation only when a new input spike arrives. As a result, the network operates in an energy-saving mode, which makes it suitable for implementation on hardware devices. To avoid the difficulty of direct SNN training, an indirect training approach is proposed in this work: a CNN model is first trained with the RMSprop algorithm, then the optimised weights and biases are mapped to the SNN model converted from the proposed CNN model. Experimental results confirm that our model achieves the best accuracy, 93.5%, when compared to state-of-the-art SNN approaches on the Fashion-MNIST dataset.
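The indirect pipeline the abstract describes (train a conventional ANN, then copy its parameters into spiking neurons) can be sketched as below. The layer sizes, random weights, Bernoulli rate coding, and integrate-and-fire dynamics are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained parameters for one fully connected layer
# (in the paper these would come from a CNN trained with RMSprop).
W = rng.normal(0, 0.1, size=(784, 10))
b = np.zeros(10)

def snn_forward(x, W, b, T=100, threshold=1.0):
    """Run T timesteps of integrate-and-fire neurons whose weights are
    copied from the trained ANN layer. Input pixels in [0, 1] are
    rate-coded as Bernoulli spike trains."""
    v = np.zeros(W.shape[1])             # membrane potentials
    spike_counts = np.zeros(W.shape[1])
    for _ in range(T):
        in_spikes = (rng.random(x.shape) < x).astype(float)  # rate coding
        v += in_spikes @ W + b / T       # integrate weighted input spikes
        fired = v >= threshold
        spike_counts += fired
        v[fired] = 0.0                   # reset after firing
    return spike_counts                  # predicted class = argmax of counts

x = rng.random(784)                      # a dummy "image"
counts = snn_forward(x, W, b)
print(counts.argmax())
```

The predicted class is the neuron that fires most often over the simulation window, which is how rate-coded converted SNNs typically read out a classification.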
Citations
Proceedings ArticleDOI
20 Oct 2022
TL;DR: A novel approach combining a preprocessing technique with an ensemble model built on the neuromorphic computing architecture RANC, achieving 99.99% and 92.4% accuracy in Leave-One-Subject-Out (LOSO) validation for 3 and 17 sleeping postures, respectively.
Abstract: Sleeping posture recognition plays a vital role in various clinical applications. Many studies show that pressure sensor-based solutions work well for assessing in-bed positions. In recent years, Neuromorphic Computing has attracted many researchers' attention due to its energy efficiency. Surprisingly, applications of Neuromorphic Computing to sleeping posture classification are still lacking. This study proposes a novel approach that combines a preprocessing technique and an ensemble model based on a neuromorphic computing architecture called RANC. Experimental results confirm that the proposed method achieves 99.99% and 92.4% accuracy in Leave-One-Subject-Out (LOSO) validation for 3 and 17 sleeping postures, respectively. This result greatly surpasses the previous SNN-based sleeping posture classification method.

1 citation

Journal ArticleDOI
TL;DR: In this article, a ternary data scheme is employed to take advantage of ternary content addressable memory (TCAM) to reduce the computation time of the DPIM.
Abstract: In this paper, we present a digital processing in memory (DPIM) configured as a stride edge-detection search frequency neural network (SE-SFNN) which is trained through spike location dependent plasticity (SLDP), a learning mechanism reminiscent of spike timing dependent plasticity (STDP). This mechanism allows for rapid online learning as well as a simple memory-based implementation. In particular, we employ a ternary data scheme to take advantage of ternary content addressable memory (TCAM). The scheme utilizes a ternary representation of the image pixels, and the TCAMs are used in a two-layer format to significantly reduce the computation time. The first layer applies several filtering kernels, followed by the second layer, which reorders the pattern dictionaries of TCAMs to place the most frequent patterns at the top of each supervised TCAM dictionary. Numerous TCAM blocks in both layers operate in a massively parallel fashion using digital ternary values. There are no complicated multiply operations performed, and learning is performed in a feedforward scheme. This allows rapid and robust learning as a trade-off with the parallel memory block size. Furthermore, we propose a method to reduce the TCAM memory size using a two-tiered minor to major promotion (M2MP) of frequently occurring patterns. This reduction scheme is performed concurrently during the learning operation without incurring a preconditioning overhead. We show that with minimal circuit overhead, the required memory size is reduced by 84.4%, and the total clock cycles required for learning also decrease by 97.31 % while the accuracy decreases only by 1.12%. We classified images with 94.58% accuracy on the MNIST dataset. Using a 100 MHz clock, our simulation results show that the MNIST training takes about 6.3 ms dissipating less than 4 mW of average power. In terms of inference speed, the trained hardware is capable of processing 5,882,352 images per second.
References
Journal ArticleDOI
TL;DR: This article concludes a series of papers concerned with the flow of electric current through the surface membrane of a giant nerve fibre by putting them into mathematical form and showing that they will account for conduction and excitation in quantitative terms.
Abstract: This article concludes a series of papers concerned with the flow of electric current through the surface membrane of a giant nerve fibre (Hodgkin et al., 1952, J Physiol, 116, 424–448; Hodgkin and Huxley, 1952, J Physiol, 116, 449–566). Its general object is to discuss the results of the preceding papers (Section 1), to put them into mathematical form (Section 2) and to show that they will account for conduction and excitation in quantitative terms (Sections 3–6).

19,800 citations

Journal ArticleDOI
TL;DR: A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons and combines the biological plausibility of Hodgkin-Huxley-type dynamics with the computational efficiency of integrate-and-fire neurons.
Abstract: A model is presented that reproduces spiking and bursting behavior of known types of cortical neurons. The model combines the biological plausibility of Hodgkin-Huxley-type dynamics and the computational efficiency of integrate-and-fire neurons. Using this model, one can simulate tens of thousands of spiking cortical neurons in real time (1 ms resolution) using a desktop PC.

4,082 citations
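The model in this reference has two-variable dynamics, v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with the reset v ← c, u ← u + d when a spike occurs. A minimal Euler simulation at the paper's 1 ms resolution, using the standard regular-spiking parameter values:

```python
def izhikevich(I=10.0, T=200, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron with forward-Euler steps.
    Default parameters are the published regular-spiking values;
    the constant input current I is an illustrative choice."""
    v = -65.0                # membrane potential (mV)
    u = b * v                # recovery variable
    spikes = []
    for t in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike cutoff
            spikes.append(t * dt)
            v = c            # reset potential
            u += d           # bump recovery variable
    return spikes

print(len(izhikevich()))     # number of spikes in 200 ms
```

With a sustained suprathreshold current the neuron fires tonically, which is the "regular spiking" regime the paper uses as its canonical cortical cell type.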

Journal ArticleDOI
TL;DR: Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon, and can solve LASSO optimization problems with over three orders of magnitude superior energy-delay-product compared to conventional solvers running on a CPU iso-process/voltage/area.
Abstract: Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay-product compared to conventional solvers running on a CPU iso-process/voltage/area. This provides an unambiguous example of spike-based computation, outperforming all known conventional solutions.

2,331 citations

Journal ArticleDOI
TL;DR: This paper shows conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset.
Abstract: Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.

725 citations
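A key step in the kind of ANN-to-SNN conversion this reference describes is rescaling each layer's weights by the maximum activation observed on training data, so that analog activations map to spike rates below the neuron's saturation. A minimal sketch of that data-based normalization; the list-of-layers representation and function name are assumptions for illustration:

```python
import numpy as np

def normalize_weights(layers, max_activations):
    """Data-based weight normalization for ANN-to-SNN conversion.
    `layers` is a list of (W, b) pairs; `max_activations` holds the
    maximum ReLU output per layer recorded on a calibration set.
    Each layer is scaled by lambda_prev / lambda_l (weights) and
    1 / lambda_l (biases) so post-conversion rates stay in [0, 1]."""
    normed = []
    lam_prev = 1.0                      # input layer is already in [0, 1]
    for (W, b), lam in zip(layers, max_activations):
        normed.append((W * lam_prev / lam, b / lam))
        lam_prev = lam
    return normed

# Toy example: one layer whose largest observed activation was 2.0,
# so its weights and biases are halved.
W, b = np.ones((2, 2)), np.ones(2)
scaled = normalize_weights([(W, b)], [2.0])
print(scaled[0][0])
```

Dividing by the previous layer's scale as well keeps the end-to-end function unchanged: each layer's rescaling is undone by the next, so only the dynamic range within each layer shifts.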

Journal ArticleDOI
TL;DR: A novel approach for converting a deep CNN into a SNN that enables mapping CNN to spike-based hardware architectures and evaluates the resulting SNN on publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and shows similar object recognition accuracy as the original CNN.
Abstract: Deep-learning neural networks such as convolutional neural network (CNN) have shown great potential as a solution for difficult vision problems, such as object recognition. Spiking neural networks (SNN)-based architectures have shown great potential as a solution for realizing ultra-low power consumption using spike-based neuromorphic hardware. This work describes a novel approach for converting a deep CNN into a SNN that enables mapping CNN to spike-based hardware architectures. Our approach first tailors the CNN architecture to fit the requirements of SNN, then trains the tailored CNN in the same way as one would with CNN, and finally applies the learned network weights to an SNN architecture derived from the tailored CNN. We evaluate the resulting SNN on publicly available Defense Advanced Research Projects Agency (DARPA) Neovision2 Tower and CIFAR-10 datasets and show similar object recognition accuracy as the original CNN. Our SNN implementation is amenable to direct mapping to spike-based neuromorphic hardware, such as the ones being developed under the DARPA SyNAPSE program. Our hardware mapping analysis suggests that SNN implementation on such spike-based hardware is two orders of magnitude more energy-efficient than the original CNN implementation on off-the-shelf FPGA-based hardware.

695 citations