scispace - formally typeset
Author

Massoud Pedram

Bio: Massoud Pedram is an academic researcher at the University of Southern California. He has contributed to research on topics including energy consumption and CMOS, has an h-index of 77, and has co-authored 780 publications receiving 23,047 citations. His previous affiliations include the University of California, Berkeley and Syracuse University.


Papers
Proceedings ArticleDOI
01 Oct 2014
TL;DR: This paper presents a model-free reinforcement learning-based approach to dynamically manage the current flows from and into the battery and supercapacitor banks under various scenarios (combinations of EV specs and driving patterns).
Abstract: To improve the cycle efficiency and peak output power density of energy storage systems in electric vehicles (EVs), supercapacitors have been proposed as auxiliary energy storage elements to complement the mainstream Lithium-ion (Li-ion) batteries. The performance of such a hybrid electrical energy storage (HEES) system is highly dependent on the implemented management policy. This paper presents a model-free reinforcement learning-based approach to dynamically manage the current flows from and into the battery and supercapacitor banks under various scenarios (combinations of EV specs and driving patterns). Experimental results demonstrate that the proposed approach achieves up to 25% higher efficiency compared to a Li-ion battery only storage system and outperforms other online HEES system control policies in all test cases.
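The model-free control idea can be sketched with tabular Q-learning. The discretized action set (the supercapacitor's share of the demanded current), the learning constants, and the reward are hypothetical simplifications for illustration, not the formulation used in the paper.

```python
import random

# A minimal tabular Q-learning sketch of a model-free HEES current-split
# policy. ACTIONS, the constants, and the reward semantics are illustrative
# assumptions, not the paper's actual design.

ACTIONS = [0.0, 0.5, 1.0]          # supercapacitor share of the demanded current
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {}                             # (state, action) -> estimated long-run reward

def choose(state):
    """Epsilon-greedy action selection over the discretized action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning update; the reward would encode energy efficiency."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

A state here might bucket the instantaneous current demand and the supercapacitor state of charge; the learned table then adapts the current split to the driving pattern without requiring an explicit battery model.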

13 citations

Proceedings ArticleDOI
19 May 2014
TL;DR: A nested two-stage game formulation is proposed, built on the location-dependent real-time pricing policy of the smart grid and modeling the interaction among the smart grid, the cloud computing system, and other load devices; optimal strategies are derived using convex optimization and heuristic search.
Abstract: The emergence of cloud computing has established a trend towards building energy-hungry and geographically distributed data centers. Due to their enormous energy consumption, data centers are expected to have major impact on the electric power grid by significantly increasing the load at locations where they are built. Dynamic energy pricing policies in the recently proposed smart grid technology can incentivize the cloud computing controller to shift their computation load towards data centers in regions with cheaper electricity. On the other hand, distributed data centers also provide opportunities to help the smart grid to improve load balancing and robustness. To shed some light into these opportunities, this paper considers an interaction system of the smart grid, the cloud computing system, and other load devices. A nested two stage game based formulation is proposed based on the location-dependent real-time pricing policy of the smart grid. The leading player in this game is the smart grid controller that announces the relationship between the electricity price at each power bus and the total load demand at that bus. In the second stage, the cloud computing controller performs resource allocation as response to the pricing functions, whereas the other load devices perform demand side management. The objective of the smart grid controller is to maximize its own profit and perform load balancing among power buses, whereas the objective of the cloud computing controller is to maximize its own profit with respect to the location-dependent pricing functions. The optimal strategies are derived based on the backward induction principle for the smart grid controller, the cloud computing controller, and the other load devices, using convex optimization and heuristic search.
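The backward-induction structure can be sketched with a toy two-stage (Stackelberg) pricing game: the leader announces a price function, anticipates the follower's best response, and searches over its own strategy. The linear price function, profit models, and grid search below are hypothetical simplifications, not the paper's formulation.

```python
# Illustrative backward-induction sketch of a leader-follower pricing game.
# All numbers (value of computation, generation cost, search grids) are
# made-up assumptions for the sketch.

def follower_load(base, slope, value=10.0, max_load=8.0):
    """Cloud controller picks the load x maximizing value*x - price(x)*x on a grid."""
    best_x, best_profit = 0.0, 0.0
    for i in range(81):
        x = max_load * i / 80
        profit = value * x - (base + slope * x) * x
        if profit > best_profit:
            best_x, best_profit = x, profit
    return best_x

def leader_best_slope(base=2.0, cost=1.0):
    """Grid controller anticipates the follower's response (backward induction)
    and picks the price slope maximizing its own profit."""
    best = None
    for k in range(1, 21):
        slope = k / 10
        x = follower_load(base, slope)
        grid_profit = (base + slope * x - cost) * x
        if best is None or grid_profit > best[1]:
            best = (slope, grid_profit, x)
    return best
```

Solving the follower's problem first and substituting its response into the leader's objective is exactly the backward-induction principle the abstract refers to; the paper applies it with convex optimization rather than grid search.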

13 citations

Book ChapterDOI
29 Nov 2004

13 citations

Journal ArticleDOI
TL;DR: TEI-power is a dynamic voltage and frequency scaling (DVFS)-based dynamic thermal management technique that accounts for the TEI phenomenon and the superlinear dependence of power consumption on temperature, exploiting a real-time trade-off between delay and power as a function of chip temperature to achieve significant energy savings.
Abstract: FinFETs have emerged as a promising replacement for planar CMOS devices in sub-20nm technology nodes. However, owing to the temperature effect inversion (TEI) phenomenon observed in FinFET devices, the delay characteristics of FinFET circuits in the sub-, near-, and superthreshold voltage regimes may differ fundamentally from those of CMOS circuits operating at nominal voltage. For example, FinFET circuits may run faster at higher temperatures. Existing CMOS-based, TEI-unaware dynamic power and thermal management techniques are therefore not applicable. In this article, we present TEI-power, a dynamic voltage and frequency scaling (DVFS)-based dynamic thermal management technique that accounts for the TEI phenomenon and the superlinear dependence of power consumption components on temperature. It exploits a real-time trade-off between delay and power consumption as a function of chip temperature to provide significant energy savings with no performance penalty: up to 42% for small circuits where logic cell delay dominates, and up to 36% for larger circuits where interconnect delay is considerable.
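The TEI-aware DVFS idea can be sketched as follows: because delay falls with temperature under TEI, a hotter chip can meet the same frequency target at a lower supply voltage. The delay and power models below are hypothetical fits invented for the sketch, not the characterizations used in the article.

```python
# Illustrative TEI-aware voltage selection. V_LEVELS and both models are
# made-up assumptions; only the selection logic is the point.

V_LEVELS = [0.45, 0.55, 0.65, 0.75, 0.85]   # candidate supply voltages (V)

def delay_ns(v, temp_c):
    # Delay falls with voltage and, under TEI, also with temperature.
    return 40.0 / v * (1.0 - 0.002 * (temp_c - 25.0))

def power_mw(v, temp_c):
    # Dynamic power ~ V^2, plus leakage growing superlinearly with temperature.
    return 100.0 * v ** 2 + 5.0 * v * (1.0 + 0.0001 * (temp_c - 25.0) ** 2)

def pick_voltage(temp_c, deadline_ns):
    """Lowest-power voltage that still meets the delay target at this temperature."""
    feasible = [v for v in V_LEVELS if delay_ns(v, temp_c) <= deadline_ns]
    return min(feasible, key=lambda v: power_mw(v, temp_c)) if feasible else None
```

Under these toy models, the same 70 ns deadline that requires 0.65 V at 25 °C is met at 0.55 V at 80 °C, which is the source of the energy savings: a TEI-unaware policy would keep the higher voltage regardless of temperature.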

13 citations

Proceedings ArticleDOI
30 Apr 2006
TL;DR: Monte Carlo Spice-based experimental results demonstrate the effectiveness of the proposed approach in accurately modeling the correlation-aware process variations and their impact on interconnect delay when crosstalk is present.
Abstract: Process variations have become a key concern for circuit designers because of their significant, yet hard-to-predict, impact on the performance and signal integrity of VLSI circuits. Statistical approaches have been suggested as the most effective substitute for corner-based approaches in dealing with the variability of present process technology nodes. This paper introduces a statistical analysis of the crosstalk-aware delay of coupled interconnects under process variations. The few existing works that have studied this problem suffer not only from shortcomings in their statistical models, but also from inaccurate crosstalk circuit models. We employ an accurate distributed RC-π model of the interconnects to capture process variations realistically, and we demonstrate the considerable effect of correlation among the parameters of neighboring wire segments. Statistical properties of the crosstalk-aware output delay are characterized and presented as closed-form expressions. Monte Carlo SPICE-based experimental results demonstrate the effectiveness of the proposed approach in accurately modeling correlation-aware process variations and their impact on interconnect delay in the presence of crosstalk.
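Why correlation between neighboring segments matters can be seen in a small Monte Carlo sketch: when two segment widths vary together, their contributions to delay add coherently instead of partially canceling. The one-driver delay model and the variation parameters below are illustrative assumptions, not the paper's distributed RC-π model or its closed-form statistics.

```python
import random
import statistics

# Monte Carlo sketch of correlated wire-width variation and coupled delay.
# sigma, rho, and the delay model are made-up assumptions for illustration.

def sample_widths(rho, sigma=0.1):
    """Two segment widths sharing a common variation component with weight rho."""
    shared = random.gauss(0.0, 1.0)
    w1 = 1.0 + sigma * (rho ** 0.5 * shared + (1.0 - rho) ** 0.5 * random.gauss(0.0, 1.0))
    w2 = 1.0 + sigma * (rho ** 0.5 * shared + (1.0 - rho) ** 0.5 * random.gauss(0.0, 1.0))
    return w1, w2

def coupled_delay(w1, w2, r_drv=1.0, cc=0.5):
    # Driver charges both segments' ground capacitance (~ width) plus the
    # coupling capacitance, doubled for a worst-case opposite-phase aggressor.
    return r_drv * (w1 + w2 + 2.0 * cc)

def delay_std(rho, n=5000):
    """Sample standard deviation of delay at a given segment-to-segment correlation."""
    random.seed(42)
    return statistics.pstdev(coupled_delay(*sample_widths(rho)) for _ in range(n))
```

With rho = 0 the two widths vary independently and partially cancel; at rho = 0.9 they move together and widen the delay spread, which is why a correlation-unaware statistical model underestimates the delay variance.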

12 citations


Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast on first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
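The mail-filtering example above can be made concrete with a toy learner that infers rejection rules from labeled messages. The keyword vocabulary and the perceptron update rule here are hypothetical illustrations, not a production filter.

```python
# Toy spam filter learned from examples, per the abstract's fourth category:
# the user labels messages, and the system maintains the rules automatically.

def featurize(msg, vocab):
    """Count occurrences of each vocabulary word in the message."""
    return [msg.lower().count(w) for w in vocab]

def train(examples, vocab, epochs=10, lr=1.0):
    """Perceptron training: examples are (message, reject_label) pairs."""
    w = [0.0] * len(vocab)
    b = 0.0
    for _ in range(epochs):
        for msg, reject in examples:
            x = featurize(msg, vocab)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = reject - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(msg, vocab, w, b):
    x = featurize(msg, vocab)
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
```

Each user's labels produce a different weight vector, so the filter is customized per user without anyone hand-writing rules, which is exactly the point of the example.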

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, approximate inference and sampling methods, and approaches to combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations