Author

Massoud Pedram

Bio: Massoud Pedram is an academic researcher from the University of Southern California. The author has contributed to research in the topics of energy consumption and CMOS. The author has an h-index of 77, has co-authored 780 publications, and has received 23,047 citations. Previous affiliations of Massoud Pedram include the University of California, Berkeley and Syracuse University.


Papers
Proceedings ArticleDOI
02 Oct 1994
TL;DR: In this article, the authors present strategies for controlling on-chip design-for-test and built-in self-test (BIST) circuitry under a partially distributed test control architecture.
Abstract: We present strategies for controlling on-chip design-for-test (DFT) and built-in self-test (BIST) circuitry under a partially distributed test control architecture. These include mechanisms for broadcasting control information from an integrated TAP controller over an internal test bus, techniques for creating symbolic descriptions of local decoders that employ this information to control test resources, and algorithms for encoding the bus information. The encoding algorithms minimize a two-level implementation of the integrated TAP controller and/or the distributed decoders. These control strategies are compliant with the IEEE 1149.1 boundary scan standard and are applicable to both simple and complex DFT/BIST methodologies, including those that employ multifunction and/or reconfigurable test registers and reconfigurable scan chains.
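To make the encoding step concrete, the toy sketch below (Python, with invented mode and decoder names and a deliberately crude literal-count cost in place of the paper's exact two-level minimization) assigns binary bus codes to symbolic test modes so that the local decoders' sum-of-products logic stays small.

```python
from itertools import permutations

# Hypothetical symbolic test modes broadcast over a 2-bit internal test bus.
MODES = ["NORMAL", "SCAN_SHIFT", "BIST_RUN"]

# Each local decoder asserts its control output for a subset of modes
# (names and on-sets are invented for illustration).
DECODER_ON_SETS = {
    "dec_scan_en":  {"SCAN_SHIFT"},
    "dec_bist_en":  {"BIST_RUN"},
    "dec_test_any": {"SCAN_SHIFT", "BIST_RUN"},
}

BUS_WIDTH = 2
ALL_CODES = [tuple((c >> b) & 1 for b in range(BUS_WIDTH)) for c in range(2 ** BUS_WIDTH)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def crude_sop_cost(on_codes):
    # Greedily merge pairs of on-set codes that differ in one bit: a merged
    # pair becomes a single product term with one fewer literal.  A real flow
    # would run a proper two-level minimizer (e.g. Espresso) instead.
    codes = list(on_codes)
    cost = 0
    while codes:
        c = codes.pop()
        mate = next((d for d in codes if hamming(c, d) == 1), None)
        if mate is not None:
            codes.remove(mate)
            cost += BUS_WIDTH - 1   # merged cube
        else:
            cost += BUS_WIDTH       # lone minterm
    return cost

def encoding_cost(assignment):
    # Total decoder cost for one assignment of bus codes to symbolic modes.
    return sum(
        crude_sop_cost([assignment[m] for m in on_modes])
        for on_modes in DECODER_ON_SETS.values()
    )

# Brute-force over all injective code assignments; feasible only because
# the example is tiny.
best = min(
    (dict(zip(MODES, codes)) for codes in permutations(ALL_CODES, len(MODES))),
    key=encoding_cost,
)
print("chosen bus codes:", best, "decoder cost:", encoding_cost(best))
```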

23 citations

Proceedings ArticleDOI
11 Aug 2014
TL;DR: It is demonstrated that a partially solar-powered EV can significantly reduce battery energy use during cruising through fast online photovoltaic (PV) array reconfiguration and customization of the PV array installation according to the driving pattern, which together overcome the partial-shading phenomenon.
Abstract: This paper demonstrates that a partially solar-powered EV can significantly save battery energy during cruising using innovative fast photovoltaic (PV) array reconfiguration. Use of all the vehicle surface areas, such as the hood, rooftop, door panels, quarter panels, etc., makes it possible to install more PV modules, but it also results in severe performance degradation due to inherent partial shading. This paper introduces fast online PV array reconfiguration and customization of the PV array installation according to the driving pattern, and thereby overcomes the partial shading phenomenon. We implement a high-speed, high-voltage PV reconfiguration switch network with IGBTs (insulated-gate bipolar transistors) and a controller. We derive the optimal reconfiguration period based on the solar irradiance/driving profiles using an adaptive learning method, where the on/off delay of the IGBTs, the CAN (controller area network) delay, the computation overhead, and the energy overhead are taken into account. Experimental results show 25% more power generation from the PV array. This paper also introduces two important design-time optimization problems to achieve a trade-off between performance and overhead. We derive the optimal PV reconfiguration granularity and partial PV array mounting based on the car owner's driving pattern, which results in more than 20% PV cell cost reduction.
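The reconfiguration-period trade-off can be pictured with a toy first-order model; the numbers and the linear staleness assumption below are illustrative only and are not taken from the paper. Reconfiguring more often tracks shading changes more closely but pays the IGBT switching, CAN, and computation overheads more frequently.

```python
# Pick the reconfiguration period that maximizes net PV power, trading a
# fixed per-event overhead against power lost to a stale configuration.
# All constants below are assumptions for illustration.

IGBT_SWITCH_S     = 2e-3    # assumed switch on/off delay per event
CAN_DELAY_S       = 5e-3    # assumed bus latency to command the switches
COMPUTE_S         = 10e-3   # assumed time to solve for the new topology
OVERHEAD_J        = 0.8     # assumed energy cost per reconfiguration event
P_OPTIMAL_W       = 400.0   # PV power right after reconfiguring
STALENESS_W_PER_S = 1.5     # assumed power decay rate as shading drifts

def net_power(period_s):
    """Average net PV power over one reconfiguration period."""
    dead_time = IGBT_SWITCH_S + CAN_DELAY_S + COMPUTE_S
    # Power ramps down linearly from the optimum as the configuration ages.
    avg_generated = P_OPTIMAL_W - 0.5 * STALENESS_W_PER_S * period_s
    harvest = avg_generated * (period_s - dead_time)
    return (harvest - OVERHEAD_J) / period_s

candidates = [t / 10 for t in range(5, 300)]   # 0.5 s .. 29.9 s
best = max(candidates, key=net_power)
print(f"best period = {best:.1f} s, net power = {net_power(best):.1f} W")
```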

23 citations

Journal ArticleDOI
TL;DR: This paper also discusses a PDN with heterogeneous VRs, proposed to increase the benefits of VRCon by incorporating VRs with a larger load-current driving capability; detailed simulations demonstrate up to a 36% reduction in VR energy loss and a 9% total energy saving.
Abstract: The emerging trend toward utilizing chip multicore processors (CMPs) that support dynamic voltage and frequency scaling (DVFS) is driven by user requirements for high performance and low power. To overcome the limitations of conventional chip-wide DVFS and achieve the maximum possible energy saving, per-core DVFS is being enabled in recent CMP offerings. While the power consumed by the CMP is reduced by per-core DVFS, the power dissipated by the set of voltage regulators (VRs) required to support per-core DVFS becomes critical. This paper focuses on the dynamic control of the VRs in a CMP platform. Starting with a proposed platform with a reconfigurable VR-to-core power distribution network (PDN), two optimization methods are presented to maximize the system-wide energy savings: 1) reactive VR consolidation (VRCon) to reconfigure the network for maximizing the power conversion efficiency of the VRs, performed under the predetermined DVFS levels for the cores, and 2) proactive VRCon to determine new DVFS levels for maximizing the total energy savings without any performance degradation. Along with the optimization methods for a PDN composed of homogeneous VRs, we also discuss a PDN with heterogeneous VRs, which is proposed to increase the benefits of VRCon by incorporating VRs with a larger load-current driving capability. Results from detailed simulations based on realistic experimental setups demonstrate up to a 36% reduction in VR energy loss and a 9% total energy saving.
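A minimal sketch of the consolidation intuition, using invented efficiency numbers and a simple first-fit-decreasing packing heuristic rather than the paper's optimization method: per-core load currents are grouped onto as few regulators as possible so that each active VR operates near its peak-efficiency load.

```python
# Toy reactive-consolidation sketch: pack per-core load currents onto few
# regulators so each active VR runs near its peak-efficiency load.
# The efficiency curve, limits, and loads are illustrative assumptions.

VR_MAX_A   = 20.0                 # assumed per-VR current limit
SWEET_SPOT = 0.7 * VR_MAX_A       # assumed load where efficiency peaks

def vr_efficiency(load_a):
    """Toy conversion-efficiency curve: peaks at the sweet spot and
    falls off at light and heavy load."""
    if load_a <= 0:
        return 0.0
    return max(0.5, 0.95 - 0.015 * abs(load_a - SWEET_SPOT))

def consolidate(core_loads_a):
    """First-fit-decreasing packing of core loads onto VRs."""
    vrs = []                      # each entry: list of core loads on that VR
    for load in sorted(core_loads_a, reverse=True):
        for group in vrs:
            if sum(group) + load <= VR_MAX_A:
                group.append(load)
                break
        else:
            vrs.append([load])
    return vrs

cores = [2.0, 3.5, 6.0, 1.0, 8.0, 4.5, 2.5, 5.0]   # per-core loads (A)
groups = consolidate(cores)
# Relative conversion loss, using current as a proxy for power at fixed Vout.
loss = sum(sum(g) * (1 / vr_efficiency(sum(g)) - 1) for g in groups)
print(f"{len(groups)} active VRs, conversion loss = {loss:.2f} (relative units)")
```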

22 citations

Proceedings ArticleDOI
25 Mar 2019
TL;DR: A scalable framework for gate-level circuit recognition that leverages deep learning and a convolutional neural network (CNN)-based circuit representation is presented, together with a proposed data structure, the level-dependent decaying sum (LDDS) existence vector, which compactly represents information about the circuit topology.
Abstract: Efficiently recognizing the functionality of a circuit is key to many applications, such as formal verification, reverse engineering, and security. We present a scalable framework for gate-level circuit recognition that leverages deep learning and a convolutional neural network (CNN)-based circuit representation. Given a standard cell library, we present a sparse mapping algorithm to improve the time and memory efficiency of the CNN-based circuit representation. Sparse mapping allows encoding only the logic cell functionality, independently of implementation parameters such as timing or area. We further propose a data structure, termed the level-dependent decaying sum (LDDS) existence vector, which can compactly represent information about the circuit topology. Given a reference gate in the circuit, an LDDS vector can capture the function of the gates in the input and output cones as well as their distance (number of stages) from the reference. Compared to the baseline approach, our framework obtains more than an order-of-magnitude reduction in the average training time and a 2× improvement in the average runtime for generating CNN-based representations from gate-level circuits, while achieving 10% higher accuracy on a set of benchmarks including EPFL and ISCAS’85 circuits.
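The LDDS idea can be illustrated with a small reconstruction from the abstract's description; the decay factor, aggregation details, and netlist below are assumptions rather than the paper's exact definition.

```python
# For a chosen reference gate, accumulate decay**distance for every gate
# type found in its fan-in and fan-out cones, so nearby gates weigh more
# than distant ones.  Netlist, gate types, and decay are invented.

from collections import defaultdict, deque

# Toy netlist: gate -> (type, list of fan-in gates).
NETLIST = {
    "g1": ("AND",  []),
    "g2": ("OR",   []),
    "g3": ("NAND", ["g1", "g2"]),
    "g4": ("INV",  ["g3"]),
    "g5": ("XOR",  ["g3", "g1"]),
}
GATE_TYPES = ["AND", "OR", "NAND", "INV", "XOR"]

def cone(start, edges):
    """BFS over a fan-in or fan-out relation, returning gate -> distance."""
    dist, queue = {start: 0}, deque([start])
    while queue:
        g = queue.popleft()
        for nxt in edges.get(g, []):
            if nxt not in dist:
                dist[nxt] = dist[g] + 1
                queue.append(nxt)
    return dist

def ldds_vector(ref, decay=0.5):
    fanin = {g: ins for g, (_, ins) in NETLIST.items()}
    fanout = defaultdict(list)
    for g, ins in fanin.items():
        for src in ins:
            fanout[src].append(g)
    vec = defaultdict(float)
    for edges in (fanin, fanout):            # input cone, then output cone
        for g, d in cone(ref, edges).items():
            if g != ref:
                vec[NETLIST[g][0]] += decay ** d
    return [round(vec[t], 3) for t in GATE_TYPES]

print(ldds_vector("g3"))   # one decaying-sum entry per gate type
```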

22 citations

Proceedings ArticleDOI
01 Oct 2014
TL;DR: This paper is the first work that investigates the effectiveness of building CMOS circuits operating in the near-threshold regime and above with 7nm FinFET technology through a cross-layer design and simulation framework.
Abstract: Because of their many attractive attributes, FinFETs are emerging as the device of choice for CMOS process technology nodes below 20nm. This paper is the first work to investigate the effectiveness of building CMOS circuits operating in the near-threshold regime and above with 7nm FinFET technology through a cross-layer design and simulation framework. Three types of FinFET devices with different threshold voltages are designed using Sentaurus TCAD to accommodate the need for constructing both high-speed cells and low-power cells in the same library. Compact, SPICE-compatible device models are extracted with high accuracy using current source modeling techniques. Standard cell libraries with two different (near- and super-threshold) supply voltages are generated. Circuit synthesis is performed on extensive benchmarks to compare performance against state-of-the-art planar CMOS counterparts. Simulation results demonstrate the benefits of 7nm FinFET-based circuits in terms of both speed and energy efficiency.
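As rough intuition for the near- versus super-threshold trade-off, the sketch below uses a generic alpha-power-law delay model and a CV^2 dynamic-energy model with assumed constants; it is not the paper's TCAD-calibrated data.

```python
# First-order illustration of why near-threshold operation trades speed for
# energy: dynamic energy scales ~ C*Vdd^2 while gate delay grows rapidly as
# the supply approaches the threshold voltage.  All constants are assumed.

C_EFF_F = 1e-15     # assumed effective switched capacitance per operation (F)
V_TH    = 0.25      # assumed threshold voltage (V)
ALPHA   = 1.5       # velocity-saturation exponent in the alpha-power law
K_DELAY = 1e-10     # assumed delay-fitting constant

def dynamic_energy(vdd):
    return C_EFF_F * vdd ** 2

def gate_delay(vdd):
    return K_DELAY * vdd / (vdd - V_TH) ** ALPHA

for vdd in (0.30, 0.35, 0.45, 0.70):
    e, d = dynamic_energy(vdd), gate_delay(vdd)
    print(f"Vdd={vdd:.2f} V  energy={e:.2e} J  delay={d:.2e} s  EDP={e * d:.2e}")
```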

22 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
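The mail-filtering example can be made concrete with a toy learner (purely illustrative and not from the article): a tiny Naive Bayes word model that updates counts from each message the user keeps or rejects and then predicts whether a new message would be rejected.

```python
# Bare-bones personalized mail filter learned from the user's own
# accept/reject decisions, using Laplace-smoothed word counts.

from collections import Counter
import math

class LearnedMailFilter:
    def __init__(self):
        self.word_counts = {"keep": Counter(), "reject": Counter()}
        self.msg_counts = {"keep": 0, "reject": 0}

    def observe(self, text, label):
        """Record one user decision ('keep' or 'reject') on a message."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def score(self, text, label):
        """Log-probability of the label under a Naive Bayes word model."""
        total_msgs = sum(self.msg_counts.values())
        logp = math.log((self.msg_counts[label] + 1) / (total_msgs + 2))
        vocab = set(self.word_counts["keep"]) | set(self.word_counts["reject"])
        denom = sum(self.word_counts[label].values()) + len(vocab) + 1
        for w in text.lower().split():
            logp += math.log((self.word_counts[label][w] + 1) / denom)
        return logp

    def rejects(self, text):
        return self.score(text, "reject") > self.score(text, "keep")

f = LearnedMailFilter()
f.observe("limited offer win a prize now", "reject")
f.observe("meeting agenda for monday", "keep")
print(f.rejects("win a free prize"))        # True with this tiny history
print(f.rejects("agenda for the meeting"))  # False
```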

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are given in this article, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations