scispace - formally typeset
Author

Bhumarapu Devendra

Bio: Bhumarapu Devendra is an academic researcher from VIT University. The author has contributed to research in topics: Memory controller & Double data rate. The author has an h-index of 1, and has co-authored 3 publications receiving 3 citations.

Papers
Proceedings ArticleDOI
19 Jun 2014
TL;DR: Vedic mathematics improves multiplier speed; RTL designs of 4×4 Vedic multipliers, with and without pipelining, are implemented and compared.
Abstract: Vedic mathematics is derived from ancient mathematics; the multiplication technique used here, one of its 16 sutras, is the simplest way of multiplying two numbers. This technique improves the performance of the multiplier in terms of speed. RTL code for 4×4 Vedic multipliers, with and without pipelining, was written; simulation was performed in ModelSim, and the RTL schematic was obtained in Cadence RTL Compiler (rc). Area, delay, and power analysis of the multipliers was also performed in Cadence (rc). The delay of the pipelined architecture was reduced by 300 ps.
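The "vertically and crosswise" scheme behind such Vedic multipliers can be sketched in software. The column-wise cross-product sums below mirror the partial-product structure that the hardware pipelines; the function name and pure-Python form are illustrative, not the paper's RTL:

```python
def vedic_multiply(a, b, n=4):
    """Multiply two n-bit numbers column by column, "vertically and
    crosswise": column k sums every cross product a_i * b_j with
    i + j == k, then the carry propagates to the next column."""
    a_bits = [(a >> i) & 1 for i in range(n)]
    b_bits = [(b >> i) & 1 for i in range(n)]
    result, carry = 0, 0
    for col in range(2 * n - 1):
        # In a pipelined design, these column sums are natural points
        # to insert pipeline registers between stages.
        lo = max(0, col - n + 1)
        hi = min(col, n - 1)
        s = carry + sum(a_bits[i] * b_bits[col - i] for i in range(lo, hi + 1))
        result |= (s & 1) << col
        carry = s >> 1
    return result | (carry << (2 * n - 1))

assert vedic_multiply(13, 11) == 143  # a 4x4-bit example
```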

3 citations

Proceedings ArticleDOI
01 Apr 2017
TL;DR: Three write-assist circuits for power reduction and an 8T SRAM cell circuit are designed; the 8T SRAM cell is implemented in 45 nm technology with a 1 V operating voltage.
Abstract: In modern IC design, integrating ever more on-chip memory makes SRAMs responsible for a large share of a chip's total power and area. Memory design with dynamic voltage scaling (DVS) capability is therefore necessary. However, optimizing circuit operation over a wide voltage range is difficult due to trade-offs between transistor characteristics at low and high voltage. Ultra Dynamic Voltage Scaling (UDVS) techniques are used at low voltage levels to minimize power consumption. Designing memories with DVS capability is gaining importance since both active and leakage power can be reduced by voltage scaling. UDVS scales the supply voltage using assist circuits for the different modes of cell operation. In this paper, three write-assist circuits for power reduction and an 8T cell circuit have been designed. The first is a capacitive write-assist (W-AC) approach that reduces the level of the cell supply voltage. The second is a transient negative bit-line voltage write-assist scheme that performs the write operation without using any on-chip or off-chip voltage sources, and the third is a transient negative bit-line scheme in which the write operation is performed by increasing the strength of the SRAM pass transistor. The read operation retrieves data from the cell at low power without altering the cell contents (a non-destructive read). Finally, the 8T SRAM cell is implemented in 45 nm technology with a 1 V operating voltage.
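The motivation for voltage-scaling assist circuits follows from the quadratic dependence of CMOS switching power on supply voltage. A back-of-the-envelope sketch, with illustrative numbers that are not the paper's measurements:

```python
def dynamic_power(c_eff, vdd, freq, activity=1.0):
    """CMOS switching power: P = activity * C_eff * Vdd^2 * f."""
    return activity * c_eff * vdd**2 * freq

# Hypothetical cell: 1 pF effective capacitance switching at 1 GHz.
p_1v0 = dynamic_power(1e-12, 1.0, 1e9)   # at the nominal 1 V supply
p_0v6 = dynamic_power(1e-12, 0.6, 1e9)   # scaled down to 0.6 V
saving = 1 - p_0v6 / p_1v0               # 64% less switching power
```

This is why even a modest reduction in the effective cell supply voltage during writes, as the capacitive W-AC scheme provides, pays off disproportionately in power.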
Proceedings ArticleDOI
01 Feb 2014
TL;DR: DDR SDRAM doubles memory bandwidth by transferring data twice per cycle, on both the rising and falling edges of the clock; the pipelined controller design achieves a 28.57% improvement in memory-access performance.
Abstract: Modern real-time embedded systems must support multiple concurrently running applications. Double Data Rate Synchronous DRAM (DDR SDRAM) has become the mainstream choice for memory design due to its burst access, speed, and pipelining features. Synchronous dynamic random access memory is designed to support DDR transfers. To ensure different applications run correctly and the system works as intended, the memory controller must be configured with a pipelined design that handles multiple operations without delay. The main function of DDR SDRAM is to double the memory bandwidth by transferring data (for either read or write operations) twice per cycle, on both the rising and falling edges of the clock signal. The designed DDR controller generates the control signals as a synchronous command interface between the DRAM memory and other modules. The DDR SDRAM controller supports a data width of 64 bits, a burst length of 4, and a CAS (Column Address Strobe) latency of 2; with this pipelined SDRAM controller design, an improvement of 28.57% is achieved in memory-access performance. The architecture is designed in ModelSim ALTERA STARTER EDITION 6.5b and Cadence (RTL Compiler and Encounter).
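The bandwidth doubling can be made concrete with a peak-rate calculation. The 64-bit width is from the paper; the 100 MHz clock is an assumed figure for illustration only:

```python
def peak_bandwidth(clock_hz, width_bits, transfers_per_cycle):
    """Peak transfer rate in bytes per second."""
    return clock_hz * width_bits * transfers_per_cycle // 8

CLOCK = 100_000_000                  # assumed 100 MHz clock (not from the paper)
sdr = peak_bandwidth(CLOCK, 64, 1)   # single data rate: one edge per cycle
ddr = peak_bandwidth(CLOCK, 64, 2)   # DDR: both rising and falling edges
assert ddr == 2 * sdr                # bandwidth doubles at the same clock
```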

Cited by
Proceedings ArticleDOI
01 May 2018
TL;DR: The system, coded in Verilog Hardware Description Language, is implemented on a Spartan-3E Field Programmable Gate Array (FPGA); it shows reduced power consumption and generates results faster.
Abstract: Pipelining is a technique used to reduce the energy consumption of a device. When used in combination with Vedic multipliers, it yields low-power, high-speed systems. Vedic multipliers are based on Vedic mathematics, an ancient Indian method of computing mathematical operations. The algorithm of the system, coded in Verilog Hardware Description Language, is implemented on the Spartan-3E series of Field Programmable Gate Array (FPGA). The design shows a decrease in power consumption and generates results faster. Its performance is compared with that of a few existing non-pipelined designs.

4 citations

Proceedings ArticleDOI
01 Sep 2015
TL;DR: The hardware realization of neuronal logic gates using Vedic multipliers, referred to here as Vedic neurons, is explored; the observed increase in processing speed can be useful in real-time operations where speed is critical.
Abstract: Gates are the fundamental building blocks of all logic circuits. Artificial neural networks (ANNs) have parallel processing capabilities, which make them useful in applications such as pattern recognition, system identification, prediction problems, robotics, and control problems. Boolean logic realization using an artificial neural network is known as neuronal logic. ANNs require only simple, low-precision computations, which can be performed quickly and implemented on cheap, low-precision hardware. A neural network involves an enormous number of multiplication and addition calculations. It has already been shown that multipliers based on Vedic mathematics are faster than standard multipliers. In this paper, the possibility of hardware realization of neuronal logic gates using Vedic multipliers, herein referred to as Vedic neurons, is explored. This is achieved by performing the neural network computations using Vedic mathematics rather than the conventional multiplication process. Basic logic gates such as AND, OR, and AND-NOT have been studied, and their hardware implementation using a neural network has been simulated in VHDL. A comparative study was carried out on the computation speed of neuronal logic gates implemented using conventional multipliers versus those implemented using Vedic multipliers. The increase in processing speed observed with the Vedic neuron implementation can be of use in several real-time operations where speed is critical.
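The neuronal-logic idea can be sketched as a single threshold neuron: the multiply-accumulate step below is exactly the operation the paper accelerates with a Vedic multiplier in hardware. The weights and biases here are hypothetical values chosen to realize the gates studied, not taken from the paper:

```python
def neuron(inputs, weights, bias):
    """Single threshold neuron: weighted sum (the multiply-accumulate a
    Vedic multiplier would speed up) followed by a hard threshold."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

# Hypothetical weight/bias choices realizing the three gates studied.
AND     = lambda a, b: neuron((a, b), (1, 1), -1.5)
OR      = lambda a, b: neuron((a, b), (1, 1), -0.5)
AND_NOT = lambda a, b: neuron((a, b), (1, -1), -0.5)  # a AND (NOT b)

assert [AND(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 0, 0, 1]
```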

4 citations

Book ChapterDOI
TL;DR: This paper proposes a study on how pipelining technology can be used in Vedic multipliers, employing Urdhava Tiryakbhyam sutra, to increase the speed and reduce the power consumption of a system.
Abstract: This paper proposes a study on how pipelining can be used in Vedic multipliers employing the Urdhava Tiryakbhyam sutra to increase speed and reduce the power consumption of a system. Pipelining is one of the methods used in the design of low-power systems. Vedic multipliers use an ancient style of multiplying numbers which allows for easier and faster calculations compared to the regular mathematical method. The concept of pipelining, when used in these multipliers, leads to a system which computes calculations faster using less hardware. The study includes the direct use of pipelining in 2 × 2 bit, 4 × 4 bit, 8 × 8 bit and 16 × 16 bit Vedic multipliers. Pipelining is then further incorporated at different levels to create 8 × 8 bit and 16 × 16 bit multipliers. The algorithm of the system is implemented on a Spartan-3E field-programmable gate array (FPGA). The designed system uses less power and has lower delay compared to existing systems.

2 citations