Author

Hamid Reza Mahdiani

Bio: Hamid Reza Mahdiani is an academic researcher from Shahid Beheshti University. The author has contributed to research on the topics of defuzzification and very-large-scale integration (VLSI). The author has an h-index of 7 and has co-authored 19 publications receiving 483 citations. Previous affiliations of Hamid Reza Mahdiani include the University of Tehran and the Power and Water University of Technology.

Papers
Journal ArticleDOI
TL;DR: It is shown that these proposed Bio-inspired Imprecise Computational blocks (BICs) can be exploited to efficiently implement a three-layer face recognition neural network and the hardware defuzzification block of a fuzzy processor.
Abstract: The conventional digital hardware computational blocks with different structures are designed to compute the precise results of the assigned calculations. The main contribution of our proposed Bio-inspired Imprecise Computational blocks (BICs) is that they are designed to provide an applicable estimation of the result instead of its precise value at a lower cost. These novel structures are more efficient in terms of area, speed, and power consumption with respect to their precise rivals. Complete descriptions of sample BIC adder and multiplier structures as well as their error behaviors and synthesis results are introduced in this paper. It is then shown that these BIC structures can be exploited to efficiently implement a three-layer face recognition neural network and the hardware defuzzification block of a fuzzy processor.
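
As a toy illustration of the BIC idea (not the paper's actual adder or multiplier structures), the sketch below approximates the low-order bits of an addition with a cheap bitwise OR while keeping the high-order bits exact; the function name, bit widths, and OR-based low part are assumptions made for illustration:

```python
# Hypothetical sketch of an imprecise adder in the spirit of BICs: the k
# low-order bits are "added" with a carry-free bitwise OR, the rest exactly.
def imprecise_add(a: int, b: int, k: int = 4, width: int = 16) -> int:
    mask = (1 << k) - 1
    low = (a | b) & mask                 # approximate low part: OR instead of add
    high = ((a >> k) + (b >> k)) << k    # exact high part
    return (high | low) & ((1 << width) - 1)

print(1234 + 5678, imprecise_add(1234, 5678))  # 6912 vs. 6910: close, but cheaper
```

Dropping the low-order carry chain is what buys area, delay, and power; the error is bounded by the width of the approximated part, which is why estimation-tolerant workloads such as neural networks can absorb it.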

458 citations

Journal ArticleDOI
TL;DR: A new class of applications is categorized in this paper that is inherently capable of absorbing some degree of vulnerability and providing fault tolerance (FT) through its natural properties; relaxed fault-tolerant (RFT) techniques are developed for the VLSI implementation of such imprecision-tolerant applications.
Abstract: Reliability should be identified as the most important challenge in future nano-scale very large scale integration (VLSI) implementation technologies for the development of complex integrated systems. Normally, fault tolerance (FT) in a conventional system is achieved by increasing its redundancy, which also implies higher implementation cost and lower performance, sometimes making it even infeasible. In contrast to such custom approaches, a new class of applications is categorized in this paper that is inherently capable of absorbing some degree of vulnerability and providing FT based on its natural properties. Neural networks are good examples of imprecision-tolerant applications. We have also proposed a new class of FT techniques, called relaxed fault-tolerant (RFT) techniques, which are developed for VLSI implementation of imprecision-tolerant applications. The main advantage of RFT techniques with respect to traditional FT solutions is that they exploit the inherent FT of different applications to reduce implementation costs while improving performance. To show the applicability as well as the efficiency of the RFT method, experimental results for the implementation of a computationally intensive face-recognition neural network and its corresponding RFT realization are presented in this paper. The results demonstrate the promise of higher-performance artificial neural network VLSI solutions for complex applications in faulty nano-scale implementation environments.
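
The RFT method itself is specific to the paper, but the property it exploits can be demonstrated with a hedged stand-in: randomly corrupt a toy network's weights, emulating stuck-at faults in a faulty nano-scale substrate, and observe that the output error grows gradually rather than catastrophically. All names and sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def faulty_inference(weights, x, fault_rate):
    """Flip the sign of a random fraction of weights to emulate hardware faults."""
    w = weights.copy()
    w[rng.random(w.shape) < fault_rate] *= -1.0
    return np.tanh(x @ w)

W = rng.normal(size=(8, 4))          # toy single-layer network
x = rng.normal(size=(100, 8))
clean = np.tanh(x @ W)
for rate in (0.0, 0.01, 0.05):
    out = faulty_inference(W, x, rate)
    print(rate, float(np.mean((out - clean) ** 2)))  # error rises smoothly with fault rate
```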

53 citations

Journal ArticleDOI
TL;DR: Three new implementation-friendly defuzzification algorithms are presented in this paper and compared with a complete set of existing defuzzification methods to demonstrate the superiority of the new methods in terms of area, delay, and power consumption when implemented in hardware.
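
The three proposed algorithms are not reproduced here, but the hardware motivation can be sketched by contrasting the standard centre-of-gravity (COG) defuzzifier, which needs a multiply per sample plus a division, with the much cheaper mean-of-maxima (MOM) rule; the discretized universe and membership values below are made-up illustration data:

```python
def cog(xs, mu):
    # centre of gravity: one multiply per point plus a final divide
    den = sum(mu)
    return sum(x * m for x, m in zip(xs, mu)) / den if den else 0.0

def mom(xs, mu):
    # mean of maxima: only comparisons and one small average
    peak = max(mu)
    pts = [x for x, m in zip(xs, mu) if m == peak]
    return sum(pts) / len(pts)

xs = list(range(11))                                   # discretized output universe
mu = [0, 0.2, 0.5, 0.8, 1, 1, 0.7, 0.4, 0.2, 0, 0]     # aggregated membership function
print(cog(xs, mu), mom(xs, mu))
```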

26 citations

Journal ArticleDOI
01 Apr 2020
TL;DR: A novel approach based on the hidden Markov model (HMM) is developed for forecasting day-ahead solar power; it is superior to the other examined methods in terms of accuracy and computational time.
Abstract: Nowadays, with the emergence of new technologies such as the smart grid and the increasing use of renewable energy in the grid, energy prediction has become more important in the electricity industry. Furthermore, with the growing integration of power generated from renewable energy sources into grids, an accurate forecasting tool is essential for reducing the undesirable effects of this scenario. This study develops a novel approach based on the hidden Markov model (HMM) for forecasting day-ahead solar power. The aim is to find a pattern of solar power changes at a given time across consecutive days. The proposed approach consists of two steps. In the first step, cosine similarity is used to determine the similarity of solar power variations on consecutive days to a particular vector. In the second step, the information obtained from the first step is fed to the HMM as a feature vector. These data are used for training and for forecasting day-ahead solar power. After the preliminary prediction results are obtained, two well-known filters are applied as post-processing to remove spikes and smooth the results. Finally, the performance of the proposed method is tested on real NREL data. No meteorological data (not even solar radiation) are used; the model is fed only with the solar power of the past 23 days. To evaluate the proposed method, a feed-forward neural network and a simple HMM are examined with the same data and conditions. All three methods are tested with and without the post-processing. The results show that the proposed model is superior to the other examined methods in terms of accuracy and computational time.
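
A hedged sketch of the two-step pipeline described above is given below, using the hmmlearn package's GaussianHMM. The reference vector, the synthetic 23-day window, and the model sizes are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
days = rng.random((23, 24))          # 23 past days x 24 hourly solar-power values
reference = days.mean(axis=0)        # stand-in for the paper's "particular vector"

def cosine_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Step 1: one cosine-similarity feature per day
features = np.array([[cosine_sim(d, reference)] for d in days])

# Step 2: train an HMM on the feature sequence and decode the hidden regimes
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(features)
print(hmm.predict(features))  # the last day's decoded state would seed the day-ahead forecast
```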

20 citations

Proceedings ArticleDOI
28 Nov 2005
TL;DR: An elaborate set of ten different defuzzification methods, including the authors' three newly proposed ones, is evaluated, and a two-dimensional cost-accuracy diagram is introduced that helps designers choose the defuzzification method that best suits their application.
Abstract: In this paper, three novel defuzzification methods are presented which are appropriate for low-cost hardware implementations. An elaborate set of ten different defuzzification methods, including our three newly proposed ones, is introduced. C models for all of these methods are prepared for the accuracy-analysis simulations. HDL models are also developed and synthesized to analyze the implementation cost of each method. This makes it possible to compare the accuracy of these different methods while considering their VLSI implementation costs. The accuracy-analysis simulations are performed on six different sets of output fuzzy membership functions with various features to achieve more general and reliable results. A two-dimensional cost-accuracy diagram is introduced which helps designers choose the defuzzification method that best suits their application.
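
The flavor of the cost-accuracy diagram can be reproduced with a toy tabulation: each defuzzifier gets an accuracy score (mean deviation from a COG reference over several random membership-function sets, echoing the paper's six test sets) and a cost proxy. The method set and the hand-assigned cost figures are illustrative assumptions, not the paper's synthesis data:

```python
import random

random.seed(0)

def cog(xs, mu):
    den = sum(mu)
    return sum(x * m for x, m in zip(xs, mu)) / den if den else 0.0

def mom(xs, mu):
    peak = max(mu)
    pts = [x for x, m in zip(xs, mu) if m == peak]
    return sum(pts) / len(pts)

def fom(xs, mu):
    return xs[mu.index(max(mu))]     # first of maxima: cheapest shortcut

methods = {"COG": (cog, 100), "MOM": (mom, 20), "FOM": (fom, 10)}   # (fn, cost proxy)
sets = [(list(range(16)), [random.random() for _ in range(16)]) for _ in range(6)]

for name, (fn, cost) in methods.items():
    err = sum(abs(fn(xs, mu) - cog(xs, mu)) for xs, mu in sets) / len(sets)
    print(f"{name}: cost~{cost}, mean deviation from COG = {err:.3f}")
```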

19 citations

Cited by
Proceedings ArticleDOI
27 May 2013
TL;DR: This paper reviews recent progress in the area, including design of approximate arithmetic blocks, pertinent error and quality measures, and algorithm-level techniques for approximate computing.
Abstract: Approximate computing has recently emerged as a promising approach to energy-efficient design of digital systems. Approximate computing relies on the ability of many systems and applications to tolerate some loss of quality or optimality in the computed result. By relaxing the need for fully precise or completely deterministic operations, approximate computing techniques allow substantially improved energy efficiency. This paper reviews recent progress in the area, including design of approximate arithmetic blocks, pertinent error and quality measures, and algorithm-level techniques for approximate computing.

921 citations

Journal ArticleDOI
TL;DR: This paper proposes logic complexity reduction at the transistor level as an alternative approach to take advantage of the relaxation of numerical accuracy, and demonstrates the utility of these approximate adders in two digital signal processing architectures with specific quality constraints.
Abstract: Low power is an imperative requirement for portable multimedia devices employing various signal processing algorithms and architectures. In most multimedia applications, human beings can gather useful information from slightly erroneous outputs. Therefore, we do not need to produce exactly correct numerical outputs. Previous research in this context exploits error resiliency primarily through voltage overscaling, utilizing algorithmic and architectural techniques to mitigate the resulting errors. In this paper, we propose logic complexity reduction at the transistor level as an alternative approach to take advantage of the relaxation of numerical accuracy. We demonstrate this concept by proposing various imprecise or approximate full adder cells with reduced complexity at the transistor level, and utilize them to design approximate multi-bit adders. In addition to the inherent reduction in switched capacitance, our techniques result in significantly shorter critical paths, enabling voltage scaling. We design architectures for video and image compression algorithms using the proposed approximate arithmetic units and evaluate them to demonstrate the efficacy of our approach. We also derive simple mathematical models for error and power consumption of these approximate adders. Furthermore, we demonstrate the utility of these approximate adders in two digital signal processing architectures (discrete cosine transform and finite impulse response filter) with specific quality constraints. Simulation results indicate up to 69% power savings using the proposed approximate adders, when compared to existing implementations using accurate adders.
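
The paper works at the transistor level, but the architectural idea, simplified full-adder cells in the least-significant positions of a multi-bit adder, can be sketched at the gate level. The approximate cell below (which simply ignores the incoming carry) is a generic simplification chosen for illustration, not one of the paper's proposed cells:

```python
def exact_fa(a, b, cin):
    return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

def approx_fa(a, b, cin):
    return a ^ b, a & b        # ignores carry-in: fewer transistors, shorter path

def ripple_add(x, y, width=8, approx_low_bits=3):
    carry, result = 0, 0
    for i in range(width):
        a, b = (x >> i) & 1, (y >> i) & 1
        fa = approx_fa if i < approx_low_bits else exact_fa
        s, carry = fa(a, b, carry)
        result |= s << i
    return result

print(7 + 9, ripple_add(7, 9))   # 16 vs. 14: a small error for a shorter critical path
```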

637 citations

Posted Content
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

570 citations

Journal ArticleDOI
TL;DR: New metrics are proposed for evaluating the reliability as well as the power efficiency of approximate and probabilistic adders, and it is shown that the MED is an effective metric for measuring the implementation accuracy of a multiple-bit adder and that the NED is a nearly invariant metric independent of the size of an adder.
Abstract: Addition is a fundamental function in arithmetic operation; several adder designs have been proposed for implementations in inexact computing. These adders show different operational profiles; some of them are approximate in nature while others rely on probabilistic features of nanoscale circuits. However, there has been a lack of appropriate metrics to evaluate the efficacy of various inexact designs. In this paper, new metrics are proposed for evaluating the reliability as well as the power efficiency of approximate and probabilistic adders. Reliability is analyzed using the so-called sequential probability transition matrices (SPTMs). Error distance (ED) is initially defined as the arithmetic distance between an erroneous output and the correct output for a given input. The mean error distance (MED) and normalized error distance (NED) are then proposed as unified figures that consider the averaging effect of multiple inputs and the normalization of multiple-bit adders. It is shown that the MED is an effective metric for measuring the implementation accuracy of a multiple-bit adder and that the NED is a nearly invariant metric independent of the size of an adder. The MED is, therefore, useful in assessing the effectiveness of an approximate or probabilistic adder implementation, while the NED is useful in characterizing the reliability of a specific design. Since inexact adders are often used for saving power, the product of power and NED is further utilized for evaluating the tradeoffs between power consumption and precision. Although illustrated using adders, the proposed metrics are potentially useful in assessing other arithmetic circuit designs for applications of inexact computing.
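
The ED/MED/NED definitions lend themselves to a short executable sketch. The approximate adder used to exercise the metrics is a generic truncate-the-low-bits stand-in, and the normalization constant here is the maximum observed error distance, which may differ from the paper's exact normalization:

```python
from itertools import product

N = 4                                   # adder width in bits

def approx_add(a, b, drop=2):
    return ((a >> drop) + (b >> drop)) << drop   # stand-in: truncate the low bits

pairs = list(product(range(1 << N), repeat=2))   # exhaustive input sweep
eds = [abs(approx_add(a, b) - (a + b)) for a, b in pairs]   # ED per input pair
med = sum(eds) / len(eds)                        # mean error distance
ned = med / max(eds)                             # normalized error distance
print(f"MED = {med:.3f}, NED = {ned:.3f}")
```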

453 citations