Journal ArticleDOI

Object and Character Recognition Using Spiking Neural Network

01 Jan 2018-Materials Today: Proceedings (Elsevier BV)-Vol. 5, Iss: 1, pp 360-366
TL;DR: This paper presents a study of the classification and recognition of objects and various handwritten characters using a popular SNN model: the leaky integrate-and-fire (LIF) model is used for object recognition, and a two-level network model is used for character recognition.
About: This article was published in Materials Today: Proceedings on 2018-01-01 and has received 21 citations to date. The article focuses on the topics: Feature extraction & Spiking neural network.
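
The LIF model named in the TL;DR integrates input current on a leaky membrane and fires whenever the potential crosses a threshold, then resets. As orientation only (this is not the paper's implementation, and the membrane parameters below are illustrative defaults), a minimal Euler-method sketch in Python:

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_thresh=-0.050, r_m=1e7):
    """Euler simulation of a leaky integrate-and-fire neuron.

    current: input current per time step (A). Returns the
    membrane-potential trace (V) and the spike times (s).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(current):
        # Leaky integration: decay toward rest, driven by input current.
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:           # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset             # hard reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant 2 nA input for 100 ms yields a regular spike train.
trace, spikes = simulate_lif(np.full(100, 2e-9))
print(len(spikes), "spikes")
```

The input-encoding step (how images or characters become currents or spike trains) varies between implementations; the paper pairs this neuron model with feature extraction, which the sketch omits.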
Citations
Journal ArticleDOI
TL;DR: This paper presents a literature review of adaptive neural-based control applied to proton exchange membrane fuel cell systems; propositions are made to fill resource gaps in fuel cell control and to answer the issues raised.

19 citations

Journal ArticleDOI
TL;DR: Image recognition processing technology plays a decisive role in the field of pattern recognition, in which automatic recognition of bank notes is an important research topic.
Abstract: At present, image recognition processing technology plays a decisive role in the field of pattern recognition, in which automatic recognition of bank notes is an important research topic...

14 citations

Proceedings ArticleDOI
01 Sep 2021
TL;DR: In this paper, the authors explore and identify potential sources of information leakage for the Izhikevich neuron, which is a popular neuron model used in digital implementations of SNNs.
Abstract: Spiking Neural Networks (SNNs) are a strong candidate for future machine learning applications. SNNs can reach the same accuracy as complex deep learning networks while using only a fraction of their power. As a result, an increase in the popularity of SNNs is expected in the near future for cyber-physical systems, especially in the Internet of Things (IoT) segment. However, SNNs work very differently from conventional neural network architectures. Consequently, applying SNNs in the field might introduce new, unexpected security vulnerabilities. This paper explores and identifies potential sources of information leakage for the Izhikevich neuron, a popular neuron model used in digital implementations of SNNs. Simulations and experiments on an FPGA implementation of the spiking neurons show that timing and power can be used to infer important information about the internal functionality of the network. Additionally, the paper demonstrates that it is feasible to perform a reverse-engineering attack using both power and timing leakage.

11 citations
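
For context, the Izhikevich neuron targeted by this analysis is a two-variable model whose quadratic membrane dynamics reproduce many cortical firing patterns at low computational cost, which is why it suits digital SNN hardware. A minimal software sketch using the standard regular-spiking parameters from Izhikevich's 2003 paper (not the attacked FPGA design):

```python
import numpy as np

def izhikevich(i_in, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate an Izhikevich neuron (regular-spiking parameters).

    i_in: input current per time step. Returns the membrane-potential
    trace (mV) and the indices of spike events.
    """
    v, u = c, b * c                # membrane potential and recovery variable
    trace, spikes = [], []
    for step, i in enumerate(i_in):
        # Quadratic membrane dynamics (Izhikevich, 2003).
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike cutoff
            spikes.append(step)
            v, u = c, u + d        # reset v, bump the recovery variable
        trace.append(v)
    return np.array(trace), spikes

# 500 ms of constant drive (dt = 0.5 ms) produces tonic spiking.
trace, spikes = izhikevich(np.full(1000, 10.0))
print(len(spikes), "spikes")
```

The data-dependent timing of the v >= 30 cutoff is exactly the kind of behavior that can surface as timing and power leakage in a hardware implementation.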

Proceedings ArticleDOI
08 Oct 2020
TL;DR: An FPGA-implemented architecture and MATLAB-simulated model for a generalized printed letter recognition algorithm using a spiking neural network (SNN) designed and implemented using an Altera DE2 field-programmable gate array for character recognition.
Abstract: Current machine learning developments, in auto-translation research and text comprehension, demand alphabet letter recognition as a preprocessing step. Thus, this paper presents an FPGA-implemented architecture and a MATLAB-simulated model for a generalized printed-letter recognition algorithm. A spiking neural network (SNN) is designed and implemented on an Altera DE2 field-programmable gate array (FPGA) for character recognition. The proposed SNN structure is a two-layer network consisting of Izhikevich neurons. A modified algorithm is proposed for training purposes. The neural structure is initially designed, trained, and implemented in MATLAB. The weights resulting from the MATLAB training process are used to synthesize the SNN for hardware implementation, with the hardware design written in Verilog. The designed and trained SNN classifier identifies four characters, the letters 'A' to 'D', on a 5×3 binary grid populated by the user through 16 toggle switches mounted on the FPGA development board. The most probable class suggested by the SNN is displayed on an LCD screen. The character recognition results agree fully between the FPGA and MATLAB platforms, and letter recognition runs 3-fold faster on the FPGA than in simulation.

9 citations
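
The paper's training algorithm and trained weights are not given in the abstract, but its classification scheme (a 5×3 binary grid driving one output neuron per letter, with the most active neuron taken as the prediction) can be sketched. Everything below is a placeholder illustration: the weights are random stand-ins for the MATLAB-trained values, and the leak constant and thresholds are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIXELS, N_CLASSES = 15, 4                       # 5x3 grid, letters 'A'..'D'
W = rng.normal(0.5, 0.2, (N_CLASSES, N_PIXELS))   # placeholder weights

def classify(grid, steps=200, dt=0.5, v_thresh=30.0, v_reset=-65.0):
    """Drive one leaky output neuron per class with a constant current
    W @ grid and return the letter whose neuron spikes most often."""
    drive = W @ grid                    # constant input current per class
    v = np.full(N_CLASSES, v_reset)
    counts = np.zeros(N_CLASSES, dtype=int)
    for _ in range(steps):
        v += dt * (-(v - v_reset) * 0.05 + drive)   # leaky integration
        fired = v >= v_thresh
        counts += fired                 # tally spikes per output neuron
        v[fired] = v_reset              # reset neurons that spiked
    return "ABCD"[int(np.argmax(counts))]

letter_a = np.array([0,1,0, 1,0,1, 1,1,1, 1,0,1, 1,0,1])  # an 'A' on the grid
print(classify(letter_a))
```

On the FPGA described in the abstract, the same decision reduces to fixed-point neuron updates in Verilog, with the grid read from the board's toggle switches.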

References
Journal ArticleDOI
TL;DR: High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated and the performance of the support- vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
Abstract: The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.

37,861 citations
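
As a present-day analogue of the OCR benchmark this abstract describes, a polynomial-kernel support-vector machine can be fit in a few lines with scikit-learn. The bundled digits dataset stands in for the paper's original benchmark data:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Handwritten-digit recognition with a polynomial input transformation,
# realized implicitly through the polynomial kernel.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="poly", degree=3, C=1.0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

The non-linear map to a high-dimensional feature space never has to be computed explicitly; the kernel evaluates inner products in that space directly.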

Journal ArticleDOI
TL;DR: It is shown that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (which are SVs lying on the margin) and the second proportional to the regularization parameter, for almost all values of the parameter.
Abstract: Support Vector Machines (SVMs) perform pattern recognition between two point classes by finding a decision surface determined by certain points of the training set, termed Support Vectors (SV). This surface, which in some feature space of possibly infinite dimension can be regarded as a hyperplane, is obtained from the solution of a problem of quadratic programming that depends on a regularization parameter. In this paper we study some mathematical properties of support vectors and show that the decision surface can be written as the sum of two orthogonal terms, the first depending only on the margin vectors (which are SVs lying on the margin), the second proportional to the regularization parameter. For almost all values of the parameter, this enables us to predict how the decision surface varies for small parameter changes. In the special but important case of feature space of finite dimension m, we also show that there are at most m+1 margin vectors and observe that m+1 SVs are usually sufficient to fully determine the decision surface. For relatively small m this latter result leads to a consistent reduction of the SV number.

212 citations
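
The margin vectors defined in this abstract (SVs with 0 < α < C, lying exactly on the margin) can be separated from support vectors at the bound (α = C) by inspecting a fitted classifier's dual coefficients. A small check with scikit-learn on synthetic data, unrelated to the paper's own experiments:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Overlapping two-class data, so some alphas hit the bound C.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.0, random_state=1)
C = 1.0
clf = SVC(kernel="linear", C=C).fit(X, y)

alpha = np.abs(clf.dual_coef_[0])    # |y_i * alpha_i| for each support vector
margin_vectors = clf.support_[alpha < C - 1e-8]
bound_vectors = clf.support_[alpha >= C - 1e-8]
print(len(margin_vectors), "margin vectors,", len(bound_vectors), "at the bound")
```

The abstract's observation that the margin vectors often suffice to determine the decision surface is what makes this split useful for reducing the SV set.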