Author

Sai Sourabh Yenamachintala

Other affiliations: Texas A&M University
Bio: Sai Sourabh Yenamachintala is an academic researcher from the National Institute of Technology, Warangal. The author has contributed to research on the topics of multiplication algorithms and the Karatsuba algorithm, has an h-index of 2, and has co-authored 4 publications receiving 21 citations. Previous affiliations of Sai Sourabh Yenamachintala include Texas A&M University.

Papers
Proceedings ArticleDOI
23 Apr 2015
TL;DR: An IEEE-754-based Vedic multiplier has been developed to carry out both single-precision and double-precision floating-point operations, and its performance has been compared with Booth- and Karatsuba-based floating-point multipliers.
Abstract: Most scientific operations involve floating-point computations, so it is necessary to implement faster multipliers that occupy less area and consume less power. Multipliers play a critical role in any digital design. Even though various multiplication algorithms are in use, the performance of Vedic multipliers has not drawn wider attention. Vedic mathematics involves the application of 16 sutras, or algorithms; one of these, the Urdhva Tiryakbhyam sutra for multiplication, has been considered in this work. An IEEE-754-based Vedic multiplier has been developed to carry out both single-precision and double-precision floating-point operations, and its performance has been compared with Booth- and Karatsuba-based floating-point multipliers. A Xilinx FPGA has been used to implement these algorithms, and a comparison based on resource utilization and timing performance has also been made.
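
The Urdhva Tiryakbhyam ("vertically and crosswise") pattern referenced above forms all digit-level partial products column by column and resolves carries in a final pass. As a rough software illustration only (the paper describes an FPGA datapath wrapped in IEEE-754 sign and exponent handling, not this code), a minimal digit-level sketch might look like:

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Vertically-and-crosswise multiplication of two digit lists
    (least significant digit first): column i+j collects all cross
    products a[i]*b[j]; carries are resolved in a single final pass."""
    cols = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            cols[i + j] += a * b
    result, carry = [], 0
    for s in cols:
        s += carry
        result.append(s % base)
        carry = s // base
    while carry:
        result.append(carry % base)
        carry //= base
    return result

# Example: 12 * 13 -> urdhva_multiply([2, 1], [3, 1]) == [6, 5, 1, 0], i.e. 156.
```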

19 citations

Journal ArticleDOI
TL;DR: This article explores bio-plausible spike-timing-dependent-plasticity (STDP) mechanisms to train liquid state machine models with and without supervision and pursues efficient hardware implementation of FPGA LSM accelerators by performing algorithm-level optimization of the two proposed training rules and exploiting the self-organizing behaviors naturally induced by STDP.
Abstract: The liquid state machine (LSM) is a model of recurrent spiking neural networks (SNNs) and provides an appealing brain-inspired computing paradigm for machine-learning applications. Moreover, because it processes information directly on spiking events, the LSM is amenable to efficient event-driven hardware implementation. However, training SNNs is, in general, a difficult task, as synaptic weights must be updated based on neural firing activities while achieving a learning objective. In this article, we explore bio-plausible spike-timing-dependent plasticity (STDP) mechanisms to train liquid state machine models with and without supervision. First, we employ a supervised STDP rule to train the output layer of the LSM while delivering good classification performance. Furthermore, a hardware-friendly unsupervised STDP rule is leveraged to train the recurrent reservoir to further boost the performance. We pursue efficient hardware implementation of FPGA LSM accelerators by performing algorithm-level optimization of the two proposed training rules and exploiting the self-organizing behaviors naturally induced by STDP. Several recurrent spiking neural accelerators are built on a Xilinx Zynq ZC706 platform and trained for speech recognition with the TI46 speech corpus as the benchmark. Adopting the two proposed unsupervised and supervised STDP rules improves recognition accuracy over a competitive non-STDP baseline training algorithm by up to 3.47%.
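
The abstract does not spell out the paper's specific supervised and unsupervised rules, so the following is only a generic pair-based STDP weight update with illustrative parameter values: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened otherwise, with an exponential dependence on the spike-time difference.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Generic pair-based STDP: potentiate when the presynaptic spike
    precedes the postsynaptic one (dt >= 0), depress otherwise."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * math.exp(-dt / tau_plus)    # long-term potentiation
    else:
        w -= a_minus * math.exp(dt / tau_minus)   # long-term depression
    return min(max(w, w_min), w_max)              # keep the weight bounded
```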

8 citations

Proceedings ArticleDOI
01 Sep 2014
TL;DR: This work discusses one of the 16 sutras, the Urdhva Tiryakbhyam sutra for multiplication; two other multiplication algorithms, namely Booth and Karatsuba, have been considered for performance comparison.
Abstract: The rapid growth of technology has driven the need for highly efficient digital systems. Multipliers play a crucial role in every digital design, so it is necessary to use an efficient multiplier, and many algorithms have been proposed with the aim of reducing execution time and area. Going back to the Vedic (ancient Indian) era, the sutras, or algorithms, described in Vedic mathematics offer a high degree of efficiency. Vedic mathematics describes 16 different sutras; this work discusses one of them, the Urdhva Tiryakbhyam sutra for multiplication. Two other multiplication algorithms, namely Booth and Karatsuba, have been considered for performance comparison. Elliptic curve cryptography applications require repeated multiplication at large key sizes. All three algorithms have been implemented on a Xilinx FPGA, and a comparison of resource utilization and timing has been made.
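
For context on the Karatsuba baseline used in the comparison, a minimal recursive software sketch is shown below (plain Python integers, not the paper's FPGA implementation). It replaces the four half-width products of schoolbook multiplication with three, which is what reduces complexity at the large operand widths used in elliptic curve cryptography.

```python
def karatsuba(x, y):
    """Recursive Karatsuba multiplication of non-negative integers:
    three half-width products instead of four."""
    if x < 16 or y < 16:                      # small operands: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> half, x & ((1 << half) - 1)
    hi_y, lo_y = y >> half, y & ((1 << half) - 1)
    z0 = karatsuba(lo_x, lo_y)                              # low * low
    z2 = karatsuba(hi_x, hi_y)                              # high * high
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2      # cross terms
    return (z2 << (2 * half)) + (z1 << half) + z0
```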

5 citations

Proceedings ArticleDOI
01 Dec 2015
TL;DR: The implementation of the Vedic algorithm on wireless sensor nodes is discussed, and the performance of a 160-bit Vedic multiplier is compared with that of a 160-bit Karatsuba multiplier.
Abstract: In wireless sensor networks, the sensor nodes collect data from their surroundings and transmit the digitized data towards the base station. Most of the processing operations involve multiplication. Aiming to reduce computational complexity and resource utilization, many multiplication algorithms have been proposed. Since sensor nodes are resource constrained, it is essential to perform this processing with highly efficient algorithms. This paper discusses the implementation of the Vedic algorithm on wireless sensor nodes. The sensor nodes are programmed using nesC, and the performance of a 160-bit Vedic multiplier is compared with that of a 160-bit Karatsuba multiplier. Such key lengths are used in elliptic curve cryptography (ECC), a public-key cryptographic technique.
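
To make the ECC connection concrete: elliptic-curve point arithmetic reduces to field operations modulo a prime of roughly the key length, and the wide integer product inside each modular multiplication is where a 160-bit Vedic or Karatsuba multiplier would sit. The modulus below is a placeholder for illustration, not a standardized curve prime and not a value from the paper.

```python
# Placeholder 160-bit modulus, illustration only (not a real curve prime).
P_160 = (1 << 160) - 47

def field_mul(a, b, p=P_160):
    """Field multiplication a*b mod p: the 160-bit-wide integer product
    a*b is the operation the compared hardware multipliers accelerate."""
    return (a * b) % p
```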

Cited by
Proceedings ArticleDOI
06 Apr 2016
TL;DR: This paper presents a highly efficient 64-bit Vedic multiplier for mantissa calculation using the Urdhva Tiryakbhyam sutra of Vedic mathematics, which deals with vertically and crosswise multiplication.
Abstract: Floating-point architecture is an active research topic, and designing an efficient floating-point architecture remains a challenge. Among floating-point operations, multiplication is the most commonly used, and it requires multiplication of the mantissas of the floating-point numbers. This paper presents a highly efficient 64-bit multiplier for mantissa calculation using the Urdhva Tiryakbhyam sutra of Vedic mathematics, which deals with vertically and crosswise multiplication. Using this sutra in the computation algorithms of DSP processors can enhance efficiency while reducing complexity, area, power consumption, and delay. Starting from a 2-bit Vedic multiplier, the design is scaled up to the 64-bit Vedic multiplier presented in this paper. The Vedic multiplier is coded in Verilog HDL and targeted to three FPGA families (Spartan-6, Virtex-5, and Virtex-6) in Xilinx ISE 13.1. The result is compared with Karatsuba, Vedic-Karatsuba, and optimized Vedic multipliers, showing a 33% reduction in delay.
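
As a software model only (the paper targets FPGA hardware, and real hardware rounds to nearest even while this sketch truncates and ignores zeros, subnormals, infinities, and NaNs), the following shows where a wide mantissa multiplier sits inside a double-precision multiply: signs are XORed, biased exponents are added, and the two 53-bit significands are multiplied and renormalized.

```python
import struct

def fp64_mul_model(a: float, b: float) -> float:
    """Simplified IEEE-754 double-precision multiply with separate sign,
    exponent, and significand paths. Truncating rounding; normal, finite,
    nonzero inputs only."""
    def unpack(x):
        bits = struct.unpack('<Q', struct.pack('<d', x))[0]
        # sign, biased exponent, significand with the hidden leading 1 restored
        return bits >> 63, (bits >> 52) & 0x7FF, (bits & ((1 << 52) - 1)) | (1 << 52)

    sa, ea, ma = unpack(a)
    sb, eb, mb = unpack(b)
    sign = sa ^ sb
    exp = ea + eb - 1023              # add exponents, remove one bias
    prod = ma * mb                    # 106-bit significand product (the wide multiply)
    if prod >> 105:                   # significand product in [2, 4): renormalize
        prod >>= 53
        exp += 1
    else:                             # significand product in [1, 2)
        prod >>= 52
    bits = (sign << 63) | (exp << 52) | (prod & ((1 << 52) - 1))
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

# fp64_mul_model(1.5, 2.0) -> 3.0
```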

19 citations

Journal ArticleDOI
TL;DR: Simulation results indicate that the latency of the proposed novel binary multiplier systems (8-bit, 16-bit, and 24-bit) is significantly shorter than that of existing implementations.
Abstract: Arithmetic logic units (ALUs) are key components of a processor, performing arithmetic and logical operations such as multiplication, division, addition, subtraction, cubing, and squaring. Of all these operations, multiplication is the most elementary and the most frequently used in ALUs. Multiplication also forms the basis of many other complex arithmetic operations, such as cubing, squaring, and convolution. This paper presents a modified multi-precision binary multiplier architecture that achieves reduced latency/delay and area/hardware utilization relative to existing implementations of binary multiplication. This system functions as the second stage of a novel multi-precision binary multiplier system. The system was implemented using Xilinx ISE 14.2 and simulated with ISim, which is available in Xilinx ISE 14.2. The simulation results indicate that the latency of the proposed novel binary multiplier systems (8-bit, 16-bit, and 24-bit) is significantly shorter than that of existing implementations.
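
The abstract does not describe the multiplier's internal structure, so for orientation only, here is the plain schoolbook limb-by-limb scheme that multi-precision binary multiplication generally refers to; each limb-level partial product is where an optimized hardware multiplier block would be instantiated (the limb width below is an illustrative choice, not the paper's).

```python
def limb_multiply(a_limbs, b_limbs, bits=16):
    """Schoolbook multi-precision multiplication over fixed-width limbs,
    least significant limb first. Each a*b partial product corresponds to
    one instance of a narrow hardware multiplier."""
    mask = (1 << bits) - 1
    out = [0] * (len(a_limbs) + len(b_limbs))
    for i, a in enumerate(a_limbs):
        carry = 0
        for j, b in enumerate(b_limbs):
            acc = out[i + j] + a * b + carry
            out[i + j] = acc & mask        # keep the low limb
            carry = acc >> bits            # propagate the high part
        out[i + len(b_limbs)] = carry      # final carry of this row
    return out
```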

16 citations

Journal ArticleDOI
TL;DR: This review describes recent advances in SNNs and the neuromorphic hardware platforms (digital, analog, hybrid, and FPGA-based) suitable for their implementation, and presents the biological background of SNN learning, such as neuron models and information encoding techniques, followed by a categorization of SNN training.
Abstract: Artificial neural networks (ANNs) have advanced rapidly owing to their success in various application domains, including autonomous driving and drone vision. Researchers have been improving the performance efficiency and computational requirements of ANNs, inspired by the mechanisms of the biological brain. Spiking neural networks (SNNs) provide a power-efficient and brain-inspired computing paradigm for machine learning applications. However, evaluating large-scale SNNs on classical von Neumann architectures (central processing units/graphics processing units) demands a large amount of power and time. Therefore, hardware designers have developed neuromorphic platforms to execute SNNs in an approach that combines fast processing and low power consumption. Recently, field-programmable gate arrays (FPGAs) have been considered promising candidates for implementing neuromorphic solutions due to their advantages, such as higher flexibility, shorter design time, and excellent stability. This review aims to describe recent advances in SNNs and the neuromorphic hardware platforms (digital, analog, hybrid, and FPGA-based) suitable for their implementation. We present the biological background of SNN learning, such as neuron models and information encoding techniques, followed by a categorization of SNN training. In addition, we describe state-of-the-art SNN simulators. Furthermore, we review and present FPGA-based hardware implementations of SNNs. Finally, we discuss some future directions for research in this field.
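
Among the neuron models such a review covers, the leaky integrate-and-fire (LIF) neuron is the most common starting point. A minimal discrete-time sketch (illustrative constants, not values taken from the review) is:

```python
def lif_step(v, input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
             tau_m=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron: the membrane
    potential leaks toward rest, integrates the input current, and a
    spike is emitted (with reset) when the threshold is crossed."""
    v += (dt / tau_m) * (v_rest - v) + input_current
    if v >= v_thresh:
        return v_reset, True    # spike
    return v, False             # no spike

# Iterating lif_step over an input-current trace yields the spike train
# that event-driven neuromorphic hardware operates on.
```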

14 citations

Journal ArticleDOI
TL;DR: This study presents a high-speed signed Vedic multiplier (SVM) architecture using redundant binary representation in the Urdhva Tiryagbhyam (UT) sutra, the first effort towards extending Vedic algorithms to signed numbers.
Abstract: This study presents a high-speed signed Vedic multiplier (SVM) architecture using redundant binary (RB) representation in the Urdhva Tiryagbhyam (UT) sutra. This is the first effort towards extending Vedic algorithms to signed numbers. The proposed multiplier architecture addresses the carry propagation issue in the UT sutra, as carry-free addition is possible in RB representation. The proposed design is coded in VHDL and synthesised in Xilinx ISE 14.4 for various FPGA devices. The proposed SVM architecture shows better speed performance compared with various state-of-the-art conventional and Vedic architectures.
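
The abstract's key point is that redundant binary (signed-digit) representation allows addition whose carries never propagate more than one position, removing the long carry chains in the UT partial-product reduction. The sketch below is the textbook two-stage signed-digit addition rule, shown only to illustrate that property, not the paper's architecture.

```python
def rb_add(x, y):
    """Carry-limited addition of redundant-binary numbers given as digit
    lists with digits in {-1, 0, 1}, least significant digit first. The
    transfer digit depends only on the next lower position, so the carry
    chain has constant length regardless of word size."""
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    p = [x[i] + y[i] for i in range(n)]          # position sums in {-2..2}
    w = [0] * n                                   # interim sum digits
    t = [0] * (n + 1)                             # transfer (carry) digits
    for i in range(n):
        lower = p[i - 1] if i > 0 else 0
        if p[i] == 2:
            t[i + 1], w[i] = 1, 0
        elif p[i] == 1:
            t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif p[i] == -1:
            t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
        elif p[i] == -2:
            t[i + 1], w[i] = -1, 0
    return [w[i] + t[i] for i in range(n)] + [t[n]]   # digits stay in {-1, 0, 1}

def rb_value(digits):
    """Interpret an RB digit list (LSB first) as a signed integer."""
    return sum(d << i for i, d in enumerate(digits))

# Example: rb_value(rb_add([1, 1], [1])) == 4, i.e. 3 + 1.
```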

13 citations