Author

V Visvanathan

Bio: V Visvanathan is an academic researcher from Texas Instruments. The author has contributed to research in the topics of Leakage (electronics) and Activation function. The author has an h-index of 1 and has co-authored 1 publication, which has received 18 citations.

Papers
Journal ArticleDOI
TL;DR: Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2%, respectively, across a range of voltage and temperature values.
Abstract: Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge, this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can work efficiently with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the error in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2%, respectively, across a range of voltage and temperature values.

19 citations
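A minimal sketch of the mathematical idea behind such an analytical mean (a hypothetical stand-in; the paper's exact activation function is not reproduced here): if the activation is the standard normal CDF Φ and the process parameter is Gaussian, each hidden unit's expectation has the closed form E[Φ(a + bZ)] = Φ(a / √(1 + b²)) for Z ~ N(0, 1), which a quick Monte Carlo check confirms:

# Sketch: analytical mean of a Phi-activated hidden unit under a Gaussian
# input. The Gaussian-CDF activation is assumed for illustration only; the
# paper's actual activation function may differ.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, b = 0.4, 1.3                          # illustrative bias and weight
z = rng.standard_normal(1_000_000)       # Gaussian process parameter

mc_mean = norm.cdf(a + b * z).mean()             # Monte Carlo estimate
analytic_mean = norm.cdf(a / np.sqrt(1 + b**2))  # closed-form expectation
print(f"Monte Carlo: {mc_mean:.5f}  analytic: {analytic_mean:.5f}")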


Cited by
Proceedings ArticleDOI
29 Apr 2013
TL;DR: An analytical model is developed to predict the probability density function and covariance of the temperatures and voltage droops of a die in the presence of BTI and process variation, and it is observed that, for benchmark circuits, treating each aspect independently and ignoring their intrinsic interactions results in 16% over-design, translating to unnecessary yield and performance loss.
Abstract: In the nano-scale regime, there are various sources of uncertainty and unpredictability in VLSI designs, such as transistor aging, mainly due to Bias Temperature Instability (BTI), as well as Process-Voltage-Temperature (PVT) variations. BTI varies exponentially with temperature and with the actual supply voltage seen by the transistors within the chip, both of which are functions of leakage power. Leakage power is in turn strongly impacted by PVT and BTI, which results in thermal and voltage variations. Hence, neglecting any of these aspects can lead to considerable inaccuracy in the estimated BTI-induced delay degradation. However, a holistic approach that tackles all these issues and their interdependence has been missing. In this paper, we develop an analytical model to predict the probability density function and covariance of the temperatures and voltage droops of a die in the presence of BTI and process variation. Based on this model, we propose a statistical method that characterizes the lifetime of a circuit affected by BTI in the presence of process-induced temperature and voltage variations. We observe that, for benchmark circuits, treating each aspect independently and ignoring their intrinsic interactions results in 16% over-design, translating to unnecessary yield and performance loss.

30 citations
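The leakage-temperature feedback loop at the heart of this coupling can be illustrated with a toy fixed-point iteration (all constants are hypothetical): leakage grows exponentially with temperature, temperature grows with total power, and the self-consistent operating point is found iteratively.

# Toy electrothermal feedback: leakage depends exponentially on temperature,
# and temperature depends on total power. All constants are illustrative.
import math

T_AMB = 45.0       # ambient temperature, deg C
R_TH = 2.0         # thermal resistance, deg C per watt
P_DYN = 5.0        # dynamic power, W (temperature-independent here)
P_LEAK0 = 1.0      # leakage power at the reference temperature, W
K = 0.02           # exponential temperature sensitivity of leakage, 1/degC
T_REF = 45.0       # reference temperature, deg C

def leakage(temp):
    return P_LEAK0 * math.exp(K * (temp - T_REF))

temp = T_AMB
for it in range(50):                     # fixed-point iteration
    new_temp = T_AMB + R_TH * (P_DYN + leakage(temp))
    if abs(new_temp - temp) < 1e-6:
        break
    temp = new_temp

print(f"T = {temp:.2f} C, P_leak = {leakage(temp):.3f} W "
      f"after {it} iterations")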

Journal ArticleDOI
TL;DR: This paper proposes a methodology to discover the romized code whose access is protected by the virtual machine; it uses hooked code in an indirection table to gain access to the real processor, allowing a shellcode written in 8051 assembly language to run.
Abstract: Attacks on smart cards can only be based on a black-box approach, where the code of the cryptographic primitives and the operating system is not accessible. To perform hardware or software attacks, a white-box approach providing access to the binary code is more efficient. In this paper, we propose a methodology to discover the romized code whose access is protected by the virtual machine. It uses hooked code in an indirection table. We gained access to the real processor, allowing us to run shellcode written in 8051 assembly language. As a result, this code was able to dump the complete ROM of a Java Card operating system. One consequence is the possibility of reverse engineering the cryptographic algorithms and all the embedded countermeasures. Finally, our attack is evaluated on different cards from distinct manufacturers.

19 citations
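The hooking technique can be sketched abstractly (a Python stand-in for the paper's 8051 shellcode; the table layout and names are hypothetical): if the virtual machine dispatches native services through an indirection table, replacing one entry with a hook that reads raw memory yields a memory dump.

# Abstract illustration of an indirection-table hook (a Python stand-in for
# the 8051 shellcode; the layout and names are hypothetical).
ROM = bytes(range(256))                  # stands in for the card's ROM image

def legit_service(arg):
    return f"service({arg})"             # the native service normally called

# The "virtual machine" reaches native code through an indirection table.
indirection_table = {0x01: legit_service}

def hook(arg):
    return ROM[:arg].hex()               # hooked entry: dump raw memory

indirection_table[0x01] = hook           # install the hook
print(indirection_table[0x01](16))       # dumps the first 16 ROM bytes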

Journal ArticleDOI
TL;DR: A variability-aware machine learning (ML) approach is proposed that predicts variations in the key electrical parameters of 3-D NAND Flash memories caused by various sources of variability, and the accuracy, efficiency, and generality of artificial neural network (ANN) algorithm-based ML systems are verified.
Abstract: This article proposes a variability-aware machine learning (ML) approach that predicts variations in the key electrical parameters of 3-D NAND Flash memories. For the first time, we have verified the accuracy, efficiency, and generality of artificial neural network (ANN) algorithm-based ML systems in predicting the effects of impact factors. ANN-based ML algorithms can be very effective for multiple-input, multiple-output (MIMO) predictions. Therefore, changes in the key electrical characteristics of the device caused by various sources of variability are predicted simultaneously and integrally. Benchmarked against 3-D stochastic TCAD simulation, the algorithm shows a prediction error rate of less than 1% as well as a computation-cost reduction of over 80%. In addition, the generality of the algorithm is confirmed by predicting the operating characteristics of 3-D NAND Flash memory under various structural conditions as the number of layers increases.

17 citations
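A minimal sketch of a MIMO ANN regressor of the kind described, trained on synthetic stand-in data (in the paper, the inputs would be variability sources sampled in TCAD and the outputs the device's electrical parameters; scikit-learn is used here for brevity):

# Minimal MIMO regression sketch with scikit-learn. The data are synthetic
# stand-ins for TCAD samples: inputs are variability sources, outputs are
# electrical parameters predicted jointly.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))                  # e.g. 4 variability sources
W = rng.normal(size=(4, 3))
Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(2000, 3))  # e.g. 3 outputs

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X_tr, Y_tr)
print("R^2 on held-out data:", model.score(X_te, Y_te))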

Proceedings ArticleDOI
06 Apr 2020
TL;DR: This work investigates the effects of process variation, especially geometric variation, on 3D NAND flash memory cells using a machine learning (ML) model with a multi-input, multi-output (MIMO) structure and deep hidden layers for training on and predicting complex process-variation data.
Abstract: We investigated the effects of process variation, especially geometric variation, on 3D NAND flash memory cells using a machine learning (ML) model. Geometric variability sources cause variation in the device's electrical parameters, such as the threshold voltage ($V_t$), subthreshold swing (SS), transconductance ($g_m$), and on-current ($I_{on}$). All these data were obtained with 3D stochastic Technology Computer-Aided Design (TCAD) simulation and used to train the ML model, which is composed of an artificial neural network (ANN). The model has a multi-input, multi-output (MIMO) structure and deep hidden layers so that it can train on and predict complex process-variation data. To make the ML model more accurate, the simulations constructing the training data set were carried out on a large number of random unit cells cut from various strings. The completed ML model was tested with a random test data set that had not been used for training, to prove its accuracy. In this test, the ML model showed an error of at most 5%, demonstrating the accuracy of its predictions.

14 citations
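The held-out validation described above ("error of at most 5%") amounts to computing a worst-case relative error per predicted parameter on a test set never seen during training; a minimal, self-contained sketch (the predictions are synthetic, and the parameter names merely mirror the abstract):

# Worst-case relative prediction error per output parameter on a held-out
# set. Predictions here are synthetic; names mirror the abstract.
import numpy as np

params = ["Vt", "SS", "gm", "Ion"]
rng = np.random.default_rng(2)
y_true = rng.uniform(1.0, 2.0, size=(500, 4))   # held-out TCAD results
y_pred = y_true * (1 + rng.normal(0.0, 0.015, y_true.shape))  # ML outputs

rel_err = np.abs(y_pred - y_true) / np.abs(y_true)
for name, err in zip(params, rel_err.max(axis=0)):
    print(f"{name}: max relative error {100 * err:.2f}%")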

Journal ArticleDOI
TL;DR: A single neural-network delay model of a logic gate is proposed that comprehensively captures process, voltage, and temperature variation along with input slew and output load, and it is shown how the model can be used to derive the sensitivities required for linear SSTA at an arbitrary voltage and temperature.
Abstract: With the emergence of voltage scaling as one of the most powerful power-reduction techniques, it has become important to support voltage-scalable statistical static timing analysis (SSTA) in deep-submicrometer process nodes. In this paper, we propose a single neural-network delay model of a logic gate that comprehensively captures process, voltage, and temperature variation along with input slew and output load. The number of Simulation Program with Integrated Circuit Emphasis (SPICE) runs required to create this model over a large voltage and temperature range is found to be modest, about 4x less than that required for a conventional table-based approach of comparable accuracy. We show how the model can be used to derive the sensitivities required for linear SSTA at an arbitrary voltage and temperature. Our experiments on the ISCAS'85 benchmarks across a voltage range of 0.9-1.1 V show that the average error in mean delay is less than 1.08% and the average error in standard deviation is less than 2.85%. The errors in predicting the 99% and 1% probability points are 1.31% and 1%, respectively, with respect to SPICE. Two potential applications of voltage-aware SSTA are presented: one improves the accuracy of timing analysis by considering instance-specific voltage drops in power grids, and the other determines the optimum supply voltage for a target yield in dynamic voltage scaling applications.

12 citations
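The sensitivity-extraction step can be sketched as follows (a generic smooth stand-in replaces the trained neural-network gate model, and the parameter list is illustrative): evaluating the model at a nominal point and taking central finite differences yields the coefficients of the first-order form d ≈ d0 + Σ s_i Δx_i used by linear SSTA.

# Sketch: linear-SSTA sensitivities from a smooth delay model via central
# finite differences at a chosen (V, T) point. gate_delay is a generic
# stand-in for the trained neural-network gate model.
import numpy as np

def gate_delay(x):
    # x = [process_param, vdd, temp, input_slew, output_load] (illustrative)
    p, vdd, temp, slew, load = x
    return (1.0 + 0.3 * p) * (1.3 / vdd) * (1.0 + 0.001 * (temp - 25.0)) \
           * (0.8 + 0.5 * slew + 0.9 * load)

def sensitivities(f, x0, h=1e-4):
    x0 = np.asarray(x0, dtype=float)
    grads = np.empty_like(x0)
    for i in range(x0.size):
        e = np.zeros_like(x0)
        e[i] = h
        grads[i] = (f(x0 + e) - f(x0 - e)) / (2.0 * h)  # central difference
    return f(x0), grads

nominal = [0.0, 1.0, 85.0, 0.1, 0.2]   # arbitrary (process, V, T, slew, load)
d0, s = sensitivities(gate_delay, nominal)
print("nominal delay:", round(d0, 4))
print("sensitivities d(delay)/dx:", np.round(s, 4))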