Author

R. Dhanabal

Bio: R. Dhanabal is an academic researcher from VIT University. The author has contributed to research in the topics of single-precision floating-point format and ModelSim. The author has an h-index of 1 and has co-authored 2 publications receiving 3 citations.

Papers
Proceedings ArticleDOI
01 Jan 2018
TL;DR: A modified Karatsuba (KA) algorithm based floating point multiplier is presented that supports both single precision and double precision operations.
Abstract: The Karatsuba-based floating point multiplier has attractive features such as reduced computational complexity and area, but it suffers from register complexity. In this paper, a modified KA algorithm based floating point multiplier is presented. The proposed multiplier supports both single precision and double precision operations. An iterative method that requires less hardware is used for DP operations, which reduces power consumption. Since multiplication dominates the execution time, different algorithms for the mantissa multiplication in the proposed floating point multiplier are compared and the one with the fewest multiplications is chosen. The multiplier also handles underflow and overflow. The design is described in Verilog, simulated with ModelSim-Altera 10.1d (Quartus II 13.0sp1), and the ASIC implementation is done with Synopsys tools.
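For orientation, the sketch below (a rough Python illustration, not the paper's Verilog design) applies one step of the Karatsuba decomposition to 24-bit single-precision mantissas, replacing a full 24x24 product with three roughly half-width multiplications; the 12-bit split point is an assumption chosen for the example.

```python
# Illustrative sketch only: one Karatsuba (KA) step on 24-bit mantissas.
# The split width m = 12 is assumed for the example, not taken from the paper.

def karatsuba_24x24(a: int, b: int) -> int:
    """Multiply two 24-bit significands (hidden '1' attached) with one KA step."""
    assert 0 <= a < 1 << 24 and 0 <= b < 1 << 24
    m = 12                                   # split point (half the width)
    a_hi, a_lo = a >> m, a & ((1 << m) - 1)
    b_hi, b_lo = b >> m, b & ((1 << m) - 1)

    p_hi  = a_hi * b_hi                      # 12x12 multiply
    p_lo  = a_lo * b_lo                      # 12x12 multiply
    p_mid = (a_hi + a_lo) * (b_hi + b_lo)    # 13x13 multiply
    cross = p_mid - p_hi - p_lo              # = a_hi*b_lo + a_lo*b_hi

    return (p_hi << (2 * m)) + (cross << m) + p_lo

if __name__ == "__main__":
    import random
    for _ in range(1000):
        x = random.randrange(1 << 23, 1 << 24)   # normalized mantissas
        y = random.randrange(1 << 23, 1 << 24)
        assert karatsuba_24x24(x, y) == x * y
    print("Karatsuba 24x24 matches direct multiplication")
```

The point of the decomposition is that three narrower multiplications replace four, which is where the reduction in computational complexity comes from.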

2 citations

Proceedings ArticleDOI
Nisha Singh, R. Dhanabal
01 Feb 2018
TL;DR: This paper presents the design of a single precision floating point arithmetic logic unit modeled in Verilog HDL and simulated in ModelSim, with an efficient addition and subtraction module developed to reduce the number of gates used.
Abstract: Floating point numbers are used in many applications such as telecommunications, medical imaging, radar, etc. In a top-down design approach, four arithmetic modules, addition, subtraction, multiplication and division, are combined to form a floating point ALU unit. Each module is independent of the others. In this paper, the implementation of a floating point ALU is designed and simulated. This paper presents the design of a single precision floating point arithmetic logic unit. The operations are performed on 32-bit operands. The algorithms for addition, subtraction, division and multiplication are modeled in Verilog HDL using ModelSim, and an efficient algorithm for the addition and subtraction module is developed in order to reduce the number of gates used. The RTL code is synthesized using the Synopsys RTL compiler for 180 nm TSMC technology with proper constraints.
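For reference (an illustrative Python sketch, not the authors' Verilog modules), the snippet below unpacks and repacks the sign, exponent and mantissa fields of the 32-bit single-precision operands that every module of such an ALU operates on.

```python
# Minimal sketch of the binary32 field layout assumed by a single-precision ALU.

import struct

def unpack_fields(x: float):
    """Return (sign, biased exponent, mantissa) of x viewed as IEEE-754 binary32."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                    # 1 sign bit
    exponent = (bits >> 23) & 0xFF       # 8 exponent bits, biased by 127
    mantissa = bits & 0x7FFFFF           # 23 stored fraction bits
    return sign, exponent, mantissa

def pack_fields(sign: int, exponent: int, mantissa: int) -> float:
    """Reassemble the three fields into a binary32 value."""
    bits = (sign << 31) | (exponent << 23) | mantissa
    return struct.unpack(">f", struct.pack(">I", bits))[0]

if __name__ == "__main__":
    s, e, m = unpack_fields(-6.5)
    print(s, e - 127, hex(m))            # 1 2 0x500000  (-1.101b x 2^2)
    print(pack_fields(s, e, m))          # -6.5
```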

1 citation


Cited by
Proceedings ArticleDOI
18 Jan 2021
TL;DR: In this paper, the analog properties of the resistive random access memory (RRAM) crossbar are explored and a scalable RRAM-based in-memory floating-point computation architecture (RIME) is proposed.
Abstract: Processing in-memory (PIM) is an emerging technology poised to break the memory wall of the conventional von Neumann architecture. PIM reduces data movement from the memory systems to the CPU by utilizing memory cells for logic computation. However, existing PIM designs do not support the high-precision computation (e.g., floating-point operations) essential for critical data-intensive applications. Furthermore, PIM architectures require complex control modules and costly peripheral circuits to harness the full potential of in-memory computation. These peripherals and control modules usually suffer from scalability and efficiency issues. Hence, in this paper, we explore the analog properties of the resistive random access memory (RRAM) crossbar and propose a scalable RRAM-based in-memory floating-point computation architecture (RIME). RIME uses single-cycle NOR, NAND, and Minority logic to achieve floating-point operations. RIME features a centralized control module and a simplified peripheral circuit to eliminate data movement during parallel computation. An experimental 32-bit RIME multiplier demonstrates a 4.8X speedup, 1.9X area improvement, and 5.4X better energy efficiency compared with state-of-the-art RRAM-based PIM multipliers.
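As a hedged aside on why a single-cycle NOR primitive is enough for arbitrary datapaths (this is an illustration, not RIME's actual crossbar mapping), the Python sketch below composes a 1-bit full adder purely from NOR gates and verifies it exhaustively; all helper names are invented for the example.

```python
# NOR is functionally complete: every gate below is built from NOR alone.

def NOR(a: int, b: int) -> int:
    return 1 - (a | b)

def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

def full_adder(a: int, b: int, cin: int):
    """Sum and carry-out of three input bits, using NOR-derived gates only."""
    s = XOR(XOR(a, b), cin)
    cout = OR(AND(a, b), AND(cin, XOR(a, b)))
    return s, cout

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                assert 2 * cout + s == a + b + cin
    print("NOR-composed full adder verified for all inputs")
```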

11 citations

Proceedings ArticleDOI
01 Dec 2018
TL;DR: The prime proposal is to increase the speed of the single precision floating point multiplier by implementing the mantissa multiplication using the CORDIC algorithm and the exponent addition using a Kogge-Stone adder, which increases the speed severalfold.
Abstract: Floating point arithmetic is of paramount necessity in computer systems. The floating point multiplier is used extensively in numerous applications that demand speed. Generally, a floating point multiplier requires a 23×23 mantissa multiplication and an 8-bit exponent addition, so the delay of the mantissa multiplication plays a crucial role in boosting the speed. In this paper, the prime proposal is to increase the speed of the single precision floating point multiplier by implementing the mantissa multiplication using the CORDIC algorithm and the exponent addition using a Kogge-Stone adder, which increases the speed severalfold. Further, the performance of the floating point multiplier using the CORDIC algorithm and the Vedic multiplier is evaluated in terms of area, delay and power. The floating point multiplier was designed in VHDL using Xilinx ISE 14.7 and implemented on a Xilinx Spartan 6e board. The proposed idea has shown better performance in terms of speed.
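To make the parallel-prefix idea concrete (a behavioural Python sketch, not the paper's RTL), the function below models an 8-bit Kogge-Stone adder of the kind used here for exponent addition: generate/propagate pairs are combined in log2(width) prefix stages, so every carry is ready after three levels instead of rippling through eight bit positions.

```python
# Behavioural model of an 8-bit Kogge-Stone parallel-prefix adder (carry-in = 0).

def kogge_stone_add(a: int, b: int, width: int = 8) -> int:
    p0 = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]  # bit propagate
    g  = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]  # bit generate
    p  = p0[:]

    d = 1
    while d < width:                        # log2(width) prefix stages
        g_new, p_new = g[:], p[:]
        for i in range(d, width):
            g_new[i] = g[i] | (p[i] & g[i - d])   # group generate
            p_new[i] = p[i] & p[i - d]            # group propagate
        g, p = g_new, p_new
        d *= 2

    carries = [0] + g[:-1]                  # carry into each bit position
    return sum((p0[i] ^ carries[i]) << i for i in range(width))

if __name__ == "__main__":
    for x in range(256):
        for y in range(256):
            assert kogge_stone_add(x, y) == (x + y) & 0xFF
    print("8-bit Kogge-Stone adder matches modular addition")
```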

3 citations

Journal Article
TL;DR: This paper incorporates various instructions serving a dynamic range of applications; the floating point unit needs to be fast enough to process real-time signals.
Abstract: The floating point processor is the backbone of digital signal processing units. In real-time applications it is necessary to have a powerful processor with a very high level of precision, which can only be achieved using a floating point unit. We are going to incorporate various instructions which will serve a dynamic range of applications. The floating point unit needs to be fast enough to process real-time signals. An efficient design of the floating point processor can help in reducing the area and increasing the speed.

1 citation