Author

Massoud Pedram

Bio: Massoud Pedram is an academic researcher from the University of Southern California. The author has contributed to research in topics including energy consumption and CMOS. The author has an h-index of 77 and has co-authored 780 publications receiving 23,047 citations. Previous affiliations of Massoud Pedram include the University of California, Berkeley and Syracuse University.


Papers
Journal ArticleDOI
01 Oct 2017
TL;DR: This study addresses the problem of concurrent task scheduling and storage management for residential energy consumers with PV and storage systems, in order to minimise the electric bill using a negotiation-based iterative approach and a near-optimal storage control algorithm.
Abstract: Dynamic energy pricing policy introduces real-time, consumption-reflective pricing in the smart grid in order to incentivise energy consumers to schedule electricity-consuming applications (tasks) more prudently to minimise electric bills. This has become a particularly interesting problem with the availability of photovoltaic (PV) power generation facilities and controllable energy storage systems. This study addresses the problem of concurrent task scheduling and storage management for residential energy consumers with PV and storage systems, in order to minimise the electric bill. A general type of dynamic pricing scenario is assumed where the energy price is both time-of-use and power dependent. Tasks are allowed to support suspend-now and resume-later operations. A negotiation-based iterative approach is proposed. In each iteration, all tasks are ripped up and rescheduled under a fixed storage charging/discharging scheme, and then the storage control scheme is derived based on the latest task schedule. The concept of congestion is introduced to gradually adjust the schedule of each task, whereas dynamic programming is used to find the optimal schedule. A near-optimal storage control algorithm is implemented efficiently. Experimental results demonstrate that the proposed algorithm reduces the total energy cost by up to 60.95% compared with various baseline methods.
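The per-task rescheduling step described above can be illustrated with a small sketch. This is a hypothetical toy, not the paper's algorithm: it schedules a single preemptible (suspend/resume) task under assumed time-of-use prices, using dynamic programming over (hour, hours of work done) to find the cheapest set of run hours.

```python
# Hypothetical sketch of scheduling one suspend/resume task under
# time-of-use prices. Prices and task parameters are made-up examples.

def schedule_task(prices, hours_needed, power_kw):
    """Return (min_cost, chosen_hours) for a preemptible task."""
    n = len(prices)
    INF = float("inf")
    # dp[t][k] = min cost after the first t hours with k hours of work done
    dp = [[INF] * (hours_needed + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for t in range(n):
        for k in range(hours_needed + 1):
            if dp[t][k] == INF:
                continue
            # option 1: stay suspended this hour
            dp[t + 1][k] = min(dp[t + 1][k], dp[t][k])
            # option 2: run this hour and pay the hourly energy price
            if k < hours_needed:
                cost = dp[t][k] + prices[t] * power_kw
                dp[t + 1][k + 1] = min(dp[t + 1][k + 1], cost)
    # backtrack to recover which hours the task runs in
    chosen, k = [], hours_needed
    for t in range(n, 0, -1):
        if k > 0 and dp[t][k] == dp[t - 1][k - 1] + prices[t - 1] * power_kw:
            chosen.append(t - 1)
            k -= 1
    return dp[n][hours_needed], sorted(chosen)

prices = [0.30, 0.12, 0.10, 0.25, 0.08, 0.30]   # $/kWh, assumed
cost, hours = schedule_task(prices, hours_needed=3, power_kw=2.0)
# the task runs in the three cheapest hours: 1, 2 and 4
```

The paper's full problem also couples every task to the shared storage charging/discharging schedule; this sketch shows only the single-task inner step.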

10 citations

Journal ArticleDOI
TL;DR: This paper addresses the co-scheduling problem of HVAC (heating, ventilation, and air conditioning) control and HEES (hybrid electrical energy storage) system management to achieve energy-efficient smart buildings, while also accounting for battery state-of-health degradation during charging and discharging operations, which determines the amortized cost of owning and utilizing a battery storage system.

10 citations

Proceedings ArticleDOI
04 Sep 2013
TL;DR: This paper proposes and demonstrates that using a supercapacitor instead of a large capacity battery can be beneficial in terms of improving the charging efficiency, and thereby, significantly reducing the charging time, and proposes a dynamic programming-based online algorithm to solve the problem.
Abstract: Battery life of high-end smartphones and tablet PCs is becoming more and more important due to the gap between the rapid increase in power requirements of the electronic components and the slow increase in energy storage capacity of Li-ion batteries. Energy harvesting, on the other hand, is a promising technique that can prolong the battery life without compromising the users' experience with the devices and potentially without the necessity to have access to a wall AC outlet. Such energy harvesting products are available on the market today, but most of them are equipped with only a large battery pack, which exhibits poor capacity utilization during solar energy harvesting. In this paper, we propose and demonstrate that using a supercapacitor instead of a large capacity battery can be beneficial in terms of improving the charging efficiency, and thereby, significantly reducing the charging time. However, this is not a trivial task and gives rise to many problems associated with charging the supercapacitor via the USB charging port. We analyze the USB charging standard and commercial USB charger designs in smartphones to formulate an energy efficiency optimization problem and propose a dynamic programming-based online algorithm to solve the aforesaid problem. Experimental results show up to 34.5% of charging efficiency improvement compared with commercial solar charger designs.
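The online current-selection idea can be sketched as follows. This is a greedy per-step simplification (the paper uses dynamic programming), and the efficiency model `eta` is an assumed placeholder, not the paper's measured USB-charger characteristics.

```python
# Hypothetical sketch: choose a USB charging current each time step to
# maximise the charge delivered into a supercapacitor. eta(i, v) is an
# assumed efficiency model, not measured data from the paper.

def eta(current_a, cap_v):
    # assumed: converter efficiency drops at high current and low cap voltage
    return max(0.0, 0.95 - 0.05 * current_a - 0.10 * max(0.0, 2.0 - cap_v))

def plan_charging(cap_f, v0, steps, dt_s, currents):
    """Greedy per-step current selection; returns (schedule, v_final, J stored)."""
    v, schedule, stored_j = v0, [], 0.0
    for _ in range(steps):
        # pick the current setting delivering the most effective charge now
        i = max(currents, key=lambda c: eta(c, v) * c)
        schedule.append(i)
        dv = eta(i, v) * i * dt_s / cap_f        # voltage gained this step
        stored_j += cap_f * ((v + dv) ** 2 - v ** 2) / 2
        v += dv
    return schedule, v, stored_j

currents = [0.5, 1.0, 1.5]                        # amps, assumed USB settings
schedule, v_final, stored_j = plan_charging(
    cap_f=100.0, v0=2.0, steps=10, dt_s=60.0, currents=currents)
```

A true online DP would also look ahead over the state of charge, which matters when efficiency varies strongly with capacitor voltage.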

10 citations

Proceedings ArticleDOI
24 Mar 2003
TL;DR: This paper uses the built-in scan-chain in a VLSI circuit to drive it with the minimum leakage vector when it enters the sleep mode, and eliminates the area and delay overhead of the additional circuitry that would otherwise be needed to apply theminimum leakage vector to the circuit.
Abstract: Input vector control is an effective technique for reducing the leakage current of combinational VLSI circuits when these circuits are in the sleep mode. In this paper a design technique for applying the minimum leakage input to a sequential circuit is proposed. Our method uses the built-in scan-chain in a VLSI circuit to drive it with the minimum leakage vector when it enters the sleep mode. Using these scan registers eliminates the area and delay overhead of the additional circuitry that would otherwise be needed to apply the minimum leakage vector to the circuit. We show how the proposed technique can be used for several different scan-chain architectures and present the experimental results on the MCNC91 benchmark circuits.
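The core idea of input vector control, finding the input assignment that minimizes standby leakage, can be illustrated with a toy exhaustive search. The gate-level leakage numbers below are made-up placeholders (real flows use characterized cell libraries), and the two-gate circuit is a hypothetical example; in the paper's scheme, the winning vector would be shifted in through the scan chain on entry to sleep mode.

```python
# Hypothetical illustration of minimum-leakage-vector search over a
# tiny combinational block. Leakage values (nA) are invented examples.
from itertools import product

# toy leakage per gate, keyed by the gate's output value
LEAK = {"nand": {0: 50, 1: 10}, "nor": {0: 15, 1: 45}}

def total_leakage(a, b, c):
    n1 = 1 - (a & b)                 # NAND(a, b)
    n2 = 1 - (n1 | c)                # NOR(n1, c)
    gates = [("nand", n1), ("nor", n2)]
    return sum(LEAK[g][v] for g, v in gates)

# exhaustive search is feasible here; real circuits need heuristics
best = min(product([0, 1], repeat=3), key=lambda bits: total_leakage(*bits))
```

For circuits with many inputs, exhaustive enumeration is intractable, which is why heuristic and SAT-based minimum-leakage-vector methods exist; the toy only shows what is being optimized.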

10 citations

Proceedings ArticleDOI
09 Mar 2015
TL;DR: It is demonstrated that, compared to Dual-VT, GLB is a more suitable technique for the advanced 7nm FinFET technology due to its capability of delivering a finer-grained trade-off between the leakage power and circuit speed, not to mention the lower manufacturing cost.
Abstract: With the aggressive downscaling of process technologies and the importance of battery-powered systems, reducing leakage power consumption has become one of the most crucial design challenges for IC designers. This paper presents a device-circuit cross-layer framework to utilize fine-grained gate-length biased FinFETs for circuit leakage power reduction in the near- and super-threshold operation regimes. The impacts of Gate-Length Biasing (GLB) on circuit speed and leakage power are first studied using one of the most advanced technology nodes, a 7nm FinFET technology. Then multiple standard cell libraries using different leakage reduction techniques, such as GLB and Dual-VT, are built in multiple operating regimes at this technology node. It is demonstrated that, compared to Dual-VT, GLB is a more suitable technique for the advanced 7nm FinFET technology due to its capability of delivering a finer-grained trade-off between leakage power and circuit speed, not to mention the lower manufacturing cost. The circuit synthesis results of a variety of ISCAS benchmark circuits using the presented GLB 7nm FinFET cell libraries show up to 70% leakage improvement with zero degradation in circuit speed in the near- and super-threshold regimes, compared to the standard 7nm FinFET cell library.

10 citations


Cited by
Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: an odd beast at first encounter, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
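The fourth category's mail-filter example can be made concrete with a minimal sketch. This is a toy Naive Bayes classifier assumed for illustration, not anything from the cited article: it learns from messages the user has rejected or kept and scores new mail automatically.

```python
# Hypothetical sketch of a learned mail filter: a toy Naive Bayes
# classifier trained on (text, was_rejected) pairs. All data is invented.
from collections import Counter

def train(messages):
    """messages: list of (text, is_rejected) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, rejected in messages:
        for word in text.lower().split():
            counts[rejected][word] += 1
            totals[rejected] += 1
    return counts, totals

def p_rejected(model, text):
    """Posterior probability (uniform prior) that the user rejects this mail."""
    counts, totals = model
    vocab = len(counts[True]) + len(counts[False])  # crude vocabulary size
    score = {True: 1.0, False: 1.0}
    for label in (True, False):
        for word in text.lower().split():
            # Laplace smoothing so unseen words do not zero out the score
            score[label] *= (counts[label][word] + 1) / (totals[label] + vocab)
    return score[True] / (score[True] + score[False])

model = train([
    ("win a free prize now", True),
    ("free money win big", True),
    ("meeting agenda for monday", False),
    ("project status and agenda", False),
])
```

Retraining on each new accept/reject decision is what "maintain the filtering rules automatically" amounts to in practice: the rules are re-derived from data rather than hand-edited.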

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered in this book, along with neural networks, kernel methods, graphical models, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations