
Journal ArticleDOI

Voltage and Temperature Aware Statistical Leakage Analysis Framework Using Artificial Neural Networks


TL;DR: Results show that the cumulative distribution function of leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2%, respectively, across a range of voltage and temperature values.
Abstract: Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge, this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2%, respectively, across a range of voltage and temperature values.
Topics: Leakage (electronics) (52%), Artificial neural network (52%), Sigmoid function (51%), Activation function (51%)
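The analytical mean the abstract refers to hinges on choosing an activation whose expectation under Gaussian inputs has a closed form. The paper's actual activation function is not given in this summary; a minimal sketch using the probit (standard normal CDF) activation, which does have this property for a single neuron with a standard-normal process parameter, is:

```python
import numpy as np
from math import erf, sqrt

def phi(t):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

rng = np.random.default_rng(0)
w, b = 0.8, -0.3  # hypothetical neuron weight and bias (illustrative values)

# Gaussian process-parameter samples, as in statistical leakage analysis
x = rng.standard_normal(200_000)
mc_mean = np.mean([phi(w * xi + b) for xi in x])

# Closed form: E[phi(w*Z + b)] = phi(b / sqrt(1 + w^2)) for Z ~ N(0, 1)
analytic_mean = phi(b / sqrt(1.0 + w * w))

print(mc_mean, analytic_mean)  # the two estimates agree closely
```

A sigmoidal activation has no such closed form, which is why the choice of kernel matters for deriving the leakage mean analytically rather than by Monte Carlo.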
Citations

Proceedings ArticleDOI
29 Apr 2013
TL;DR: An analytical model is developed to predict the probability density function and covariance of the temperatures and voltage droops of a die in the presence of BTI and process variation; for benchmark circuits, treating each aspect independently and ignoring their intrinsic interactions results in 16% over-design, translating into unnecessary yield and performance loss.
Abstract: In the nano-scale regime, there are various sources of uncertainty and unpredictability in VLSI designs, such as transistor aging, mainly due to Bias Temperature Instability (BTI), as well as Process-Voltage-Temperature (PVT) variations. BTI varies exponentially with temperature and with the actual supply voltage seen by the transistors within the chip, both of which are functions of leakage power. Leakage power is strongly impacted by PVT and BTI, which in turn results in thermal-voltage variations. Hence, neglecting one or more of these aspects can lead to considerable inaccuracy in the estimated BTI-induced delay degradation. However, a holistic approach that tackles all these issues and their interdependence has been missing. In this paper, we develop an analytical model to predict the probability density function and covariance of the temperatures and voltage droops of a die in the presence of BTI and process variation. Based on this model, we propose a statistical method that characterizes the lifetime of a circuit affected by BTI in the presence of process-induced temperature-voltage variations. We observe that, for benchmark circuits, treating each aspect independently and ignoring their intrinsic interactions results in 16% over-design, translating into unnecessary yield and performance loss.

26 citations


Cites background from "Voltage and Temperature Aware Stati..."

  • ...BTI consists of two different phases: Stress: when a PMOS (NMOS) transistor is under negative (positive) bias, interface traps are generated at the Si-dielectric interface, resulting in an increase in the threshold voltage of the transistor....



Journal ArticleDOI
Guillaume Bouffard, Jean-Louis Lanet
TL;DR: This paper proposes a methodology to discover the romized code whose access is protected by the virtual machine; a hooked code in an indirection table gives access to the real processor, allowing a shell code written in 8051 assembly language to be run.
Abstract: Attacks on smart cards can only be based on a black-box approach, where the code of the cryptographic primitives and the operating system is not accessible. To perform hardware or software attacks, a white-box approach providing access to the binary code is more efficient. In this paper, we propose a methodology to discover the romized code whose access is protected by the virtual machine. It uses a hooked code in an indirection table. We gained access to the real processor, which allowed us to run a shell code written in 8051 assembly language. As a result, this code was able to dump the ROM of a Java Card operating system completely. One consequence is the possibility of reverse engineering the cryptographic algorithm and all the embedded countermeasures. Finally, our attack is evaluated on different cards from distinct manufacturers.

18 citations


Additional excerpts

  • ...3 Temperature analysis [21,27,41]...



Journal ArticleDOI
Kyul Ko, Jang Kyu Lee, Hyungcheol Shin
TL;DR: A variability-aware machine learning (ML) approach that predicts variations in the key electrical parameters of 3-D NAND Flash memories caused by various sources of variability and verified the accuracy, efficiency, and generality of artificial neural network (ANN) algorithm-based ML systems.
Abstract: This article proposes a variability-aware machine learning (ML) approach that predicts variations in the key electrical parameters of 3-D NAND Flash memories. For the first time, we have verified the accuracy, efficiency, and generality of the predictive impact factor effects of artificial neural network (ANN) algorithm-based ML systems. ANN-based ML algorithms can be very effective in multiple-input and multiple-output (MIMO) predictions. Therefore, changes in the key electrical characteristics of the device caused by various sources of variability are simultaneously and integrally predicted. This algorithm benchmarks 3-D stochastic TCAD simulation, showing a prediction error rate of less than 1%, as well as a calculation cost reduction of over 80%. In addition, the generality of the algorithm is confirmed by predicting the operating characteristics of the 3-D NAND Flash memory with various structural conditions as the number of layers increases.

10 citations


Cites methods from "Voltage and Temperature Aware Stati..."

  • ...As shown in Table II, the ANN algorithm structure has five layers, which are referred to in order as the input, first hidden, second hidden, third hidden, and output layers [12]–[15]....



Journal ArticleDOI
TL;DR: ANNs can model a much higher degree of nonlinearity compared to existing quadratic polynomial models and, hence, can even be used in sub-100-nm technologies to model leakage current that exponentially depends on process parameters.
Abstract: A technique for extracting statistical compact model parameters using artificial neural networks (ANNs) is proposed. ANNs can model a much higher degree of nonlinearity compared to existing quadratic polynomial models and, hence, can even be used in sub-100-nm technologies to model leakage current that exponentially depends on process parameters. Existing techniques cannot be extended to handle such exponential functions. Additionally, ANNs can handle multiple input multiple output relations very effectively. The concept applied to CMOS devices improves the efficiency and accuracy of model extraction. Results from the ANN match the ones obtained from SPICE simulators within 1%.
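The claim that quadratic polynomial models break down for leakage that depends exponentially on process parameters can be illustrated with a toy model (the constants below are illustrative, not from the paper's extraction flow):

```python
import numpy as np

# Toy leakage model: I = I0 * exp(k * dVth), exponential in the
# threshold-voltage shift dVth (illustrative constants only).
I0, k = 1.0, -40.0
dvth = np.linspace(-0.05, 0.05, 201)  # +/- 50 mV shift
leak = I0 * np.exp(k * dvth)

# Quadratic polynomial fit in the linear domain, as in the older models
quad = np.polyval(np.polyfit(dvth, leak, 2), dvth)
quad_err = np.max(np.abs(quad - leak) / leak)

# Fitting log(I) instead is exact here, since log(I) is linear in dVth
logfit = np.exp(np.polyval(np.polyfit(dvth, np.log(leak), 1), dvth))
log_err = np.max(np.abs(logfit - leak) / leak)

print(quad_err, log_err)  # quadratic relative error is orders of magnitude larger
```

An ANN generalizes the log-domain trick: it can capture this kind of strong nonlinearity directly, and with multiple outputs at once, which is what makes it usable for MIMO compact-model extraction.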

9 citations


Journal ArticleDOI
TL;DR: A single neural-network delay model of a logic gate is proposed that comprehensively captures process, voltage, and temperature variation along with input slew and output load, and it is shown how the model can be used to derive the sensitivities required for linear SSTA at an arbitrary voltage and temperature.
Abstract: With the emergence of voltage scaling as one of the most powerful power-reduction techniques, it has become important to support voltage-scalable statistical static timing analysis (SSTA) at deep-submicrometer process nodes. In this paper, we propose a single neural-network delay model of a logic gate that comprehensively captures process, voltage, and temperature variation along with input slew and output load. The number of SPICE (Simulation Program with Integrated Circuit Emphasis) simulations required to create this model over a large voltage and temperature range is modest, 4x less than that required for a conventional table-based approach of comparable accuracy. We show how the model can be used to derive the sensitivities required for linear SSTA at an arbitrary voltage and temperature. Our experiments on the ISCAS'85 benchmarks across a voltage range of 0.9-1.1 V show that the average error in mean delay is less than 1.08% and the average error in standard deviation is less than 2.85%. The errors in predicting the 99% and 1% probability points are 1.31% and 1%, respectively, with respect to SPICE. Two potential applications of voltage-aware SSTA are presented: one improves the accuracy of timing analysis by considering instance-specific voltage drops in power grids, and the other determines the optimum supply voltage for a target yield in dynamic voltage scaling applications.
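The sensitivity-extraction step described above can be sketched with central finite differences against any black-box delay model. The closed-form `delay` below is only a stand-in for the trained network, with invented constants:

```python
# Stand-in for a trained neural-network gate-delay model over
# (process parameter p, supply voltage v, temperature t).
# The functional form and constants are illustrative, not the paper's.
def delay(p, v, t):
    return 50.0 * (1.0 + 0.1 * p) * (1.1 / v) ** 1.5 * (1.0 + 0.001 * t)

def sensitivity(f, x0, i, h=1e-5):
    """Central-difference sensitivity df/dx_i at x0, as linear SSTA needs."""
    xp, xm = list(x0), list(x0)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2.0 * h)

x0 = (0.0, 1.0, 75.0)            # nominal process, 1.0 V supply, 75 C
s_proc = sensitivity(delay, x0, 0)  # process sensitivity at this (V, T)
s_volt = sensitivity(delay, x0, 1)  # supply-voltage sensitivity
print(s_proc, s_volt)
```

Because the model is a single smooth function of voltage and temperature, the same two-evaluation recipe yields sensitivities at any (V, T) point without rebuilding a lookup table, which is what makes per-instance voltage-aware SSTA cheap.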

8 citations


Cites background from "Voltage and Temperature Aware Stati..."

  • ...Recently, it has been shown in [28] that NNs, with a small modification to their kernel, are amenable to derivation of analytical formulae for the means of their outputs in terms of the input variables....



References
More filters

Book
Christopher M. Bishop
01 Jan 1995
TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Abstract: From the Publisher: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts, the book examines techniques for modelling probability density functions and the properties and merits of the multi-layer perceptron and radial basis function network models. Also covered are various forms of error functions, principal algorithms for error function minimalization, learning and generalization in neural networks, and Bayesian techniques and their applications. Designed as a text, with over 100 exercises, this fully up-to-date work will benefit anyone involved in the fields of neural computation and pattern recognition.

19,046 citations


Journal ArticleDOI
01 Jul 1989 - Neural Networks
TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
Abstract: This paper rigorously establishes that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available. In this sense, multilayer feedforward networks are a class of universal approximators.
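A small constructive illustration of this approximation property: a one-hidden-layer network built by hand as a staircase of steep sigmoids (no training involved), with one hidden unit per knot:

```python
import numpy as np

def sigmoid(z):
    # clip to avoid overflow in exp for very large |z|
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

# Target function to approximate on [0, 1]
f = lambda x: np.sin(2.0 * np.pi * x)

# One hidden unit per knot; each unit contributes one "stair step".
knots = np.linspace(0.0, 1.0, 101)
steepness = 2000.0

def net(x):
    out = f(knots[0]) * np.ones_like(x)
    for a, b in zip(knots[:-1], knots[1:]):
        out += (f(b) - f(a)) * sigmoid(steepness * (x - 0.5 * (a + b)))
    return out

xs = np.linspace(0.0, 1.0, 1000)
max_err = float(np.max(np.abs(net(xs) - f(xs))))
print(max_err)  # stays below 0.05 for this knot density
```

Refining the knot spacing drives the error down further, matching the theorem's statement that accuracy improves as more hidden units become available.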

15,834 citations


Book ChapterDOI
Suresh Kothari, Heekuck Oh
TL;DR: The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue.
Abstract: Publisher Summary This chapter provides an account of different neural network architectures for pattern recognition. A neural network consists of several simple processing elements called neurons. Each neuron is connected to some other neurons and possibly to the input nodes. Neural networks provide a simple computing paradigm to perform complex recognition tasks in real time. The chapter categorizes neural networks into three types: single-layer networks, multilayer feedforward networks, and feedback networks. It discusses the gradient descent and the relaxation method as the two underlying mathematical themes for deriving learning algorithms. A lot of research activity is centered on learning algorithms because of their fundamental importance in neural networks. The chapter discusses two important directions of research to improve learning algorithms: the dynamic node generation, which is used by the cascade correlation algorithm; and designing learning algorithms where the choice of parameters is not an issue. It closes with the discussion of performance and implementation issues.

12,585 citations


Journal ArticleDOI
01 Jul 1989 - Neural Networks

9,800 citations


Book
Sheldon M. Ross
Abstract: Introduction to Probability Models, Sheldon M. Ross, Academic Press, ninth edition.

2,660 citations


Network Information

Performance Metrics
No. of citations received by the paper in previous years:

Year  Citations
2021  1
2020  5
2019  1
2017  1
2015  1
2014  2