Author

Massoud Pedram

Bio: Massoud Pedram is an academic researcher at the University of Southern California. The author has contributed to research on topics including energy consumption and CMOS. The author has an h-index of 77 and has co-authored 780 publications receiving 23,047 citations. Previous affiliations of Massoud Pedram include University of California, Berkeley & Syracuse University.


Papers
Proceedings ArticleDOI
09 Jul 2014
TL;DR: The circuit synthesis results of various combinational and sequential circuits based on the 5nm FinFET standard cell library show up to 40X circuit speed improvement and three orders of magnitude energy reduction compared to those of 45nm bulk CMOS technology.
Abstract: The FinFET device has been proposed as a promising substitute for the traditional bulk CMOS device at the nanoscale, owing to its extraordinary properties such as improved channel controllability, a high ON/OFF current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. In addition, its near-ideal subthreshold behavior points to the potential application of FinFET circuits in the near-threshold supply voltage regime, which consumes an order of magnitude less energy than regular strong-inversion circuits operating in the super-threshold supply voltage regime. This paper presents a design flow for creating standard cells in the 5nm FinFET technology node, covering both near-threshold and super-threshold operation, and for building a Liberty-format standard cell library. Circuit synthesis results for various combinational and sequential circuits based on the 5nm FinFET standard cell library show up to a 40X circuit speed improvement and three orders of magnitude energy reduction compared with 45nm bulk CMOS technology.
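
To make the near-threshold versus super-threshold trade-off described above concrete, here is a minimal back-of-the-envelope sketch using the textbook alpha-power-law delay model and the E = C * Vdd^2 switching-energy formula. All device parameters below (threshold voltage, velocity-saturation index, load capacitance) are illustrative assumptions, not the paper's 5nm FinFET values.

```python
# Illustrative near-threshold vs. super-threshold comparison using the
# alpha-power-law delay model and E = C * Vdd^2 switching energy.
# All parameters are assumed for illustration; they are NOT the 5nm
# FinFET values from the paper.

VTH = 0.25      # threshold voltage (V), assumed
ALPHA = 1.4     # velocity-saturation index, assumed
C_LOAD = 1e-15  # switched capacitance per gate (F), assumed

def gate_delay(vdd, k=1e-12):
    """Relative gate delay per alpha-power law: d ~ Vdd / (Vdd - Vth)^alpha."""
    return k * vdd / (vdd - VTH) ** ALPHA

def switching_energy(vdd):
    """Dynamic switching energy per transition: E = C * Vdd^2."""
    return C_LOAD * vdd ** 2

for label, vdd in [("super-threshold", 0.8), ("near-threshold", 0.35)]:
    d, e = gate_delay(vdd), switching_energy(vdd)
    print(f"{label}: Vdd={vdd} V, delay={d:.3e} s, energy={e:.3e} J")
```

Even this crude model shows the qualitative picture: lowering Vdd toward the threshold cuts switching energy quadratically while delay grows sharply, which is why near-threshold operation trades speed for large energy savings.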

31 citations

Proceedings ArticleDOI
10 Mar 2008
TL;DR: This paper introduces a new approach for sleep transistor sizing which minimizes the total sleep transistor width for a coarse-grain multi-threshold CMOS circuit assuming a given standard cell and sleep transistor placement.
Abstract: Power gating is one of the most effective techniques for reducing the standby leakage current of VLSI circuits. In this paper, we introduce a new approach for sleep transistor sizing that minimizes the total sleep transistor width of a coarse-grain multi-threshold CMOS circuit, assuming a given standard cell and sleep transistor placement. First, the circuit is decomposed into a set of modules, each containing the logic cells that are closest to a sleep transistor cell. Next, given an upper bound on the overall circuit speed degradation, the global timing slack is distributed among the modules using a delay-budgeting technique. The slack distribution result is then used to size the sleep transistors such that the total sleep transistor width is minimized while accounting for the parasitic resistances of the virtual ground net. Results show that the proposed sizing algorithm produces sleep transistor sizes that are 40% smaller than those produced by previous approaches.
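
A minimal numerical sketch of the final sizing step, under simplifying assumptions: each module's delay budget is translated into an allowed IR drop across its sleep transistor, and the on-resistance is modeled as R_unit / W. The unit resistance, per-module peak currents, and drop budgets below are made up for illustration; the paper's algorithm additionally models the parasitic resistance of the virtual-ground net.

```python
# Sketch of slack-driven sleep transistor sizing: pick the smallest width
# whose IR drop stays within the module's delay-derived budget.

R_UNIT = 5e3  # on-resistance of a unit-width sleep transistor (ohm), assumed

def min_width(i_peak, v_budget):
    """Smallest width keeping the IR drop I_peak * (R_UNIT / W) <= v_budget."""
    return i_peak * R_UNIT / v_budget

# (peak switching current in A, allotted IR-drop budget in V) per module,
# after the global timing slack has been distributed -- values assumed.
modules = [(2e-3, 0.05), (1e-3, 0.08), (4e-3, 0.04)]

widths = [min_width(i, v) for i, v in modules]
print("per-module widths:", [f"{w:.1f}" for w in widths])
print("total sleep transistor width (unit widths):", f"{sum(widths):.1f}")
```

The design intuition: modules that received more timing slack tolerate a larger virtual-ground bounce, so they can use smaller (higher-resistance) sleep transistors, reducing the total width.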

31 citations

Proceedings ArticleDOI
12 Oct 2014
TL;DR: This work introduces a fractal operator to account for the time-varying fractal properties of the cloud workloads, and presents an efficient (online) parameter estimation algorithm, an accurate forecasting strategy, and a novel fractal-based model predictive control approach for optimizing the CPU utilization, and hence, the overall energy consumption in the system while satisfying networked architecture performance constraints like queue capacities.
Abstract: Cloud computing is a promising approach for handling the growing needs for computation and storage in an efficient and cost-effective manner. Towards this end, characterizing workloads in the cloud infrastructure (e.g., a data center) is essential for performing cloud optimizations such as resource provisioning and energy minimization. However, there is a huge gap between the characteristics of actual workloads (e.g., they tend to be bursty and exhibit fractal behavior) and existing cloud optimization algorithms, which tend to rely on simplistic assumptions about the workloads. To close this gap, based on fractional calculus concepts, we present a fractal model to account for the complex dynamics of cloud computing workloads (i.e., the number of request arrivals or the CPU/memory usage during each time interval). More precisely, we introduce a fractal operator to account for the time-varying fractal properties of cloud workloads. In addition, we present an efficient (online) parameter estimation algorithm, an accurate forecasting strategy, and a novel fractal-based model predictive control approach for optimizing CPU utilization, and hence the overall energy consumption in the system, while satisfying networked architecture performance constraints such as queue capacities. We demonstrate the advantages of our fractal model in forecasting complex cloud computing dynamics over conventional (non-fractal) models using real-world cloud (Google) traces. Unlike non-fractal models, which have very poor prediction capabilities under bursty workload conditions, our fractal model can accurately predict bursty request processes, which is crucial for cloud computing workload forecasting. Finally, experimental results demonstrate that fractal-model-based optimization outperforms non-fractal-based approaches, reducing resource utilization by an average of 30%.
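
As a simple stand-in for the paper's time-varying fractal operator, the sketch below forecasts one step ahead with a fractionally differenced (long-memory) model, a classic fractional-calculus construction whose binomial weights decay slowly, letting bursts far in the past still influence the prediction. The fractional order d and the synthetic workload are assumptions; this is not the authors' estimator and the data is not from the Google traces.

```python
# Minimal long-memory (fractionally differenced) one-step forecaster.

def frac_diff_weights(d, n):
    """Binomial weights of (1 - B)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def forecast_next(series, d=0.4):
    """One-step forecast from (1 - B)^d x_t ~ 0, i.e. x_t ~ -sum_{k>=1} w_k x_{t-k}."""
    w = frac_diff_weights(d, len(series) + 1)
    return -sum(wk * xk for wk, xk in zip(w[1:], reversed(series)))

# Synthetic bursty request counts per interval (assumed, not real trace data).
workload = [120, 135, 128, 300, 280, 150, 140, 145, 500, 460]
print("next-interval forecast:", round(forecast_next(workload), 1))
```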

31 citations

Proceedings ArticleDOI
24 Jul 2006
TL;DR: The proposed charge recycling technique can save up to 46% of the mode transition energy while, in most cases, maintaining, or even improving, the wake up time of the original circuit.
Abstract: Designing an energy efficient power gating structure is an important and challenging task in multi-threshold CMOS (MTCMOS) circuit design. In order to achieve a very low power design, the large amount of energy consumed during mode transition in MTCMOS circuits should be avoided. In this paper, we propose an appropriate charge recycling technique to reduce energy consumption during the mode transition of MTCMOS circuits. The proposed method can save up to 46% of the mode transition energy while, in most cases, maintaining, or even improving, the wake up time of the original circuit. It also reduces the peak negative voltage value and the settling time of the ground bounce.
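
The intuition behind charge recycling can be shown with a back-of-the-envelope charge-sharing calculation: during sleep the virtual-ground node drifts toward Vdd while the virtual-Vdd node droops, so briefly shorting the two nodes before wake-up lets them equalize and reuses part of the stored charge instead of drawing it from the supply. The capacitances and voltages below are illustrative assumptions, not values from the paper.

```python
# Charge-sharing estimate for charge recycling between the virtual rails.

C_VGND, V_VGND = 2e-12, 1.0   # virtual-ground cap (F) and its sleep voltage (V), assumed
C_VVDD, V_VVDD = 2e-12, 0.0   # virtual-Vdd cap (F) and its sleep voltage (V), assumed

# Charge conservation gives the shared voltage after connecting the two nodes.
v_final = (C_VGND * V_VGND + C_VVDD * V_VVDD) / (C_VGND + C_VVDD)

# Without recycling, the supply pays for the full V_VGND swing at wake-up;
# with recycling it only pays for the remaining (V_VGND - v_final) swing.
recycled_fraction = v_final / V_VGND
print(f"equalized node voltage: {v_final:.2f} V")
print(f"fraction of transition charge recycled: {recycled_fraction:.0%}")
```

With equal capacitances the ideal saving is 50%, which is consistent in magnitude with the up-to-46% figure reported above once non-idealities are accounted for.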

31 citations

Proceedings ArticleDOI
02 Nov 2015
TL;DR: A nested learning framework in which both the optimal actions (which include the gear ratio selection and the use of internal combustion engine versus the electric motor to drive the vehicle) and limits on the range of the state-of-charge of the battery are learned on the fly.
Abstract: This paper investigates the energy management problem in hybrid electric vehicles (HEVs) focusing on the minimization of the operating cost of an HEV, including both fuel and battery replacement cost. More precisely, the paper presents a nested learning framework in which both the optimal actions (which include the gear ratio selection and the use of internal combustion engine versus the electric motor to drive the vehicle) and limits on the range of the state-of-charge of the battery are learned on the fly. The inner-loop learning process is the key to minimization of the fuel usage whereas the outer-loop learning process is critical to minimization of the amortized battery replacement cost. Experimental results demonstrate a maximum of 48% operating cost reduction by the proposed HEV energy management policy.
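
The nested structure can be illustrated with a toy sketch: an outer loop searches over candidate state-of-charge (SoC) windows, and an inner loop learns a powertrain policy (engine versus motor) inside each window. The cost model, action set, and epsilon-greedy update below are all illustrative assumptions for showing the loop structure, not the paper's algorithm.

```python
# Structural sketch of nested learning for HEV energy management.
import random

ACTIONS = ["engine", "motor"]  # simplified; the paper also selects the gear ratio

def step(soc, action, soc_lo, soc_hi):
    """Toy environment: the motor drains SoC cheaply; the engine costs fuel but
    charges the battery. Battery wear (amortized replacement cost) is charged
    whenever the SoC leaves the allowed window."""
    if action == "motor":
        soc, cost = soc - 0.02, 0.1
    else:
        soc, cost = soc + 0.01, 0.3
    wear = 0.5 if not (soc_lo <= soc <= soc_hi) else 0.0
    return max(0.0, min(1.0, soc)), cost + wear

def inner_loop(soc_lo, soc_hi, episodes=200):
    """Inner loop: learn engine/motor choices epsilon-greedily; return avg episode cost."""
    q = {a: 0.0 for a in ACTIONS}  # running cost estimate per action
    total = 0.0
    for _ in range(episodes):
        soc = 0.5
        for _ in range(50):
            a = random.choice(ACTIONS) if random.random() < 0.1 else min(q, key=q.get)
            soc, cost = step(soc, a, soc_lo, soc_hi)
            q[a] += 0.1 * (cost - q[a])
            total += cost
    return total / episodes

# Outer loop: pick the SoC window whose learned policy has the lowest cost.
windows = [(0.3, 0.7), (0.4, 0.8), (0.2, 0.9)]
best = min(windows, key=lambda w: inner_loop(*w))
print("selected SoC window:", best)
```

This mirrors the division of labor described above: the inner loop drives fuel usage down, while the outer loop shapes the SoC limits that govern the amortized battery replacement cost.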

31 citations


Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seems an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
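
The fourth category above (per-user customization via a learned mail filter) can be made concrete with a tiny sketch: the system learns which messages the user rejects from labeled examples, with no hand-written rules. The word-count model and training examples below are assumptions for illustration.

```python
# Minimal learned mail filter: a tiny naive-Bayes word model trained on
# (text, rejected) examples supplied by the user. Illustrative only.
from collections import Counter
import math

def train(messages):
    """messages: list of (text, rejected) pairs -> per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    for text, rejected in messages:
        counts[rejected].update(text.lower().split())
    return counts

def rejects(counts, text):
    """Compare Laplace-smoothed log-likelihoods of the two classes."""
    score = {}
    for cls, c in counts.items():
        n, v = sum(c.values()), len(c)
        score[cls] = sum(math.log((c[w] + 1) / (n + v)) for w in text.lower().split())
    return score[True] > score[False]

examples = [("win a free prize now", True),
            ("free money click now", True),
            ("meeting agenda for monday", False),
            ("project status and agenda", False)]
model = train(examples)
print(rejects(model, "free prize now"))         # likely True  (filtered)
print(rejects(model, "monday meeting agenda"))  # likely False (kept)
```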

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, and approaches to combining models are covered in this book in the context of machine learning.
Abstract: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations