Author

Massoud Pedram

Bio: Massoud Pedram is an academic researcher from the University of Southern California. The author has contributed to research in topics: Energy consumption & CMOS. The author has an h-index of 77 and has co-authored 780 publications receiving 23,047 citations. Previous affiliations of Massoud Pedram include the University of California, Berkeley and Syracuse University.


Papers
Proceedings ArticleDOI
03 Mar 2014
TL;DR: An improved analytical FinFET model covering both the sub- and near-threshold regimes is presented. The model accurately captures the drain current as a function of both the gate and drain voltages and supports an in-depth analysis of the stack sizing of FinFET logic cells in the sub/near-threshold region.
Abstract: Sub/near-threshold computing has been proposed for ultra-low power applications. FinFET devices are considered an alternative to bulk CMOS devices due to their superior characteristics, which make them excellent candidates for ultra-low power designs. In this paper, we first present an improved analytical FinFET model covering both the sub- and near-threshold regimes. This model accurately captures the drain current as a function of both the gate and drain voltages. Based on this accurate FinFET model, we provide a detailed analysis of stack sizing in FinFET logic cells and derive the optimal stack depth in FinFET circuits. We also provide a delay optimization framework for FinFET circuits in the sub/near-threshold region, based on the stack sizing analysis. To the best of our knowledge, this is the first work that provides an in-depth analysis of the stack sizing of FinFET logic cells in the sub/near-threshold region based on accurate FinFET modeling. Experimental results on the 32 nm Predictive Technology Model for FinFET devices demonstrate the effectiveness of the proposed optimization framework.
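As a rough illustration of the kind of sub/near-threshold drain-current expression such analytical models build on, the sketch below implements a generic exponential weak-inversion equation in Python. The constants (threshold voltage, subthreshold slope factor, reference current) are illustrative assumptions, not the paper's calibrated FinFET model.

```python
import math

def drain_current(vgs, vds, vth=0.25, n=1.2, i0=1e-7, temp=300.0):
    """Approximate per-fin I_D in the sub/near-threshold regime (illustrative constants)."""
    k_b = 1.380649e-23      # Boltzmann constant (J/K)
    q = 1.602176634e-19     # elementary charge (C)
    v_t = k_b * temp / q    # thermal voltage, ~25.85 mV at 300 K
    # Exponential dependence on gate overdrive, saturating with V_DS.
    return i0 * math.exp((vgs - vth) / (n * v_t)) * (1.0 - math.exp(-vds / v_t))

# Example: current of a device operating near threshold.
print(drain_current(vgs=0.3, vds=0.3))
```

In a stack-sizing analysis, the same expression would be evaluated for each transistor in the stack, with the effective V_GS of upper devices degraded by the intermediate node voltages.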

18 citations

Proceedings ArticleDOI
04 Jun 2007
TL;DR: This paper formulates the problem of selecting the best set of regulators in a tree topology as a dynamic program and solves it efficiently; experimental results demonstrate the efficacy of the proposed problem formulation and solution.
Abstract: High-efficiency, low-voltage DC-DC conversion is a key enabler of power-efficient integrated circuits. Typically a star configuration of DC-DC converters, where only one converter resides between the source and each load, is used to deliver currents at appropriate voltage levels to the different loads in the circuit. In this paper we show that using a tree topology of suitably chosen voltage regulators between the power source and the loads yields higher power efficiency in the power delivery network. We formulate the problem of selecting the best set of regulators in a tree topology as a dynamic program and solve it efficiently. Experimental results demonstrate the efficacy of the proposed problem formulation and solution.
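A minimal sketch of the bottom-up idea behind such a formulation is shown below: for each node of the power-delivery tree, pick the regulator that minimizes the total input power its subtree must draw. The candidate regulator set, efficiency numbers, and example tree are illustrative assumptions, not the paper's actual model, which also accounts for voltage levels and load-dependent efficiency.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    load_power: float = 0.0                 # power drawn directly at this node (W)
    children: list = field(default_factory=list)

# Candidate regulators: (name, assumed efficiency at the operating point).
REGULATORS = [("buck_A", 0.92), ("buck_B", 0.88), ("ldo", 0.80)]

def best_input_power(node):
    """Return (minimum input power, chosen regulator) for the subtree rooted at node."""
    # Power the subtree must deliver = local load + input power of each child's subtree.
    downstream = node.load_power + sum(best_input_power(c)[0] for c in node.children)
    # Choose the regulator that minimizes input power for this downstream demand.
    return min((downstream / eff, name) for name, eff in REGULATORS)

leaf1, leaf2 = Node(load_power=2.0), Node(load_power=1.5)
root = Node(load_power=0.5, children=[leaf1, leaf2])
print(best_input_power(root))   # e.g. (~4.68 W, 'buck_A')
```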

18 citations

Proceedings ArticleDOI
13 Jun 1997
TL;DR: Experimental results show the effectiveness of the proposed techniques based on population pruning and stratification in providing detailed power distribution information in a circuit.
Abstract: This paper proposes to use quantile points of the cumulative distribution function for power consumption to provide detailed information about the power distribution in a circuit. The paper also presents two techniques based on population pruning and stratification to improve the efficiency of estimation. Both population pruning and stratification are based on a low-cost predictor, such as a zero-delay power estimate. Experimental results show the effectiveness of the proposed techniques in providing detailed power distribution information.
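The sketch below illustrates the general idea of stratifying on a cheap predictor and then reporting quantile points of the estimated power distribution. All data is synthetic, and the predictor, stratum count, and sample sizes are illustrative assumptions rather than the paper's procedure.

```python
import random
import statistics

random.seed(0)
# Cheap predictor (e.g. a zero-delay power estimate) for every vector in the population.
population = [(i, random.gauss(10.0, 3.0)) for i in range(10_000)]  # (vector id, predictor)

def expensive_power(vec_id, predictor):
    # Stand-in for an accurate (and costly) power simulation of one input vector.
    return predictor + random.gauss(0.0, 1.0)

# Stratify the population on the predictor, then sample a few vectors per stratum.
strata = sorted(population, key=lambda p: p[1])
k, per_stratum = 10, 20
samples = []
for s in range(k):
    chunk = strata[s * len(strata) // k:(s + 1) * len(strata) // k]
    samples += [expensive_power(v, p) for v, p in random.sample(chunk, per_stratum)]

# Quantile points of the estimated power CDF (here, deciles).
print(statistics.quantiles(samples, n=10))
```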

18 citations

Proceedings ArticleDOI
28 May 2014
TL;DR: A negotiation-based iterative approach, inspired by state-of-the-art Field-Programmable Gate Array (FPGA) routing algorithms, is proposed to schedule the tasks of (a collection of) energy consumers with PV power generation facilities so as to minimize their electricity bills.
Abstract: Dynamic energy pricing is a promising technique in the Smart Grid that incentivizes energy consumers to consume electricity more prudently in order to minimize their electricity bills while satisfying their energy requirements. This has become a particularly interesting problem with the introduction of residential photovoltaic (PV) power generation facilities. This paper addresses the problem of task scheduling of (a collection of) energy consumers with PV power generation facilities, in order to minimize the electricity bill. A general type of dynamic pricing scenario is assumed where the energy price is both time-of-use and total power consumption-dependent. A negotiation-based iterative approach is proposed that is inspired by state-of-the-art Field-Programmable Gate Array (FPGA) routing algorithms. More specifically, the negotiation-based algorithm rips up and re-schedules all tasks in each iteration, and the concept of congestion is introduced to dynamically adjust the schedule of each task based on the historical scheduling results as well as the (historical) total power consumption in each time slot. Experimental results demonstrate that the proposed algorithm achieves up to 51.8% electricity bill reduction compared with baseline methods.
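The sketch below captures the negotiation/congestion idea in a heavily simplified form: every iteration rips up and re-schedules all tasks, and a per-slot congestion penalty that accumulates over iterations steers tasks away from crowded, expensive slots. The tariff, task set, congestion weights, and single-slot task model are illustrative assumptions, not the paper's formulation.

```python
T = 24                                            # hourly time slots
price = [0.10] * 8 + [0.25] * 8 + [0.15] * 8      # assumed time-of-use tariff ($/kWh)
tasks = [{"energy": 2.0}, {"energy": 1.5}, {"energy": 3.0}]   # single-slot tasks (kWh)
history = [0.0] * T                               # accumulated congestion per slot

for iteration in range(20):
    load = [0.0] * T
    for task in tasks:                            # rip-up and re-schedule every task
        def slot_cost(t):
            # Energy cost plus a penalty that grows with current load and past congestion.
            return (price[t] + 0.02 * (load[t] + history[t])) * task["energy"]
        best = min(range(T), key=slot_cost)
        task["slot"] = best
        load[best] += task["energy"]
    for t in range(T):                            # update congestion history
        history[t] += max(0.0, load[t] - 2.0)     # penalize slots loaded above an assumed cap

print([task["slot"] for task in tasks])
```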

18 citations

Proceedings ArticleDOI
07 Nov 1993
TL;DR: The approach combines the benefits of the EVBDD data structure (in terms of subgraph sharing and caching of computational results) with state-of-the-art ILP solving techniques.
Abstract: Edge-valued binary-decision diagrams (EVBDDs) are directed acyclic graphs which can represent and manipulate integer functions as effectively as ordered binary-decision diagrams (OBDDs) do for Boolean functions. They have been used to perform logic verification and to compute the decomposability of Boolean functions. In this paper, we present a new EVBDD application for solving integer linear programs (ILPs), an NP-hard problem that appears in many applications. Our approach is to combine the benefits of the EVBDD data structure (in terms of subgraph sharing and caching of computational results) with state-of-the-art ILP solving techniques. Our program, called FGILP (Function Graph ILP), has been implemented in C under the SIS environment. The preliminary results of FGILP are comparable to those of LINDO.
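The sketch below shows a minimal edge-valued BDD node with a unique table, which is the "subgraph sharing" the approach exploits; the ILP-solving machinery itself is omitted. The node layout and evaluation convention here are simplified assumptions, not the exact EVBDD definition used by FGILP.

```python
class EVBDD:
    _unique = {}                                   # unique table: key -> shared node

    def __init__(self, var, lo, lo_val, hi, hi_val):
        self.var, self.lo, self.lo_val, self.hi, self.hi_val = var, lo, lo_val, hi, hi_val

    @classmethod
    def node(cls, var, lo, lo_val, hi, hi_val):
        key = (var, id(lo), lo_val, id(hi), hi_val)
        if key not in cls._unique:                 # hash-consing: reuse identical subgraphs
            cls._unique[key] = cls(var, lo, lo_val, hi, hi_val)
        return cls._unique[key]

TERMINAL = None                                    # single terminal node

def evaluate(node, value, assignment):
    """f(x) = value plus the sum of edge values along the path chosen by assignment."""
    while node is not TERMINAL:
        if assignment[node.var]:
            value, node = value + node.hi_val, node.hi
        else:
            value, node = value + node.lo_val, node.lo
    return value

# f(x0, x1) = 3*x0 + 2*x1, built bottom-up.
n1 = EVBDD.node("x1", TERMINAL, 0, TERMINAL, 2)
n0 = EVBDD.node("x0", n1, 0, n1, 3)
print(evaluate(n0, 0, {"x0": 1, "x1": 1}))         # -> 5
```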

17 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered in this article, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations