Author

Massoud Pedram

Bio: Massoud Pedram is an academic researcher from the University of Southern California. He has contributed to research in topics including energy consumption and CMOS, has an h-index of 77, and has co-authored 780 publications receiving 23,047 citations. Previous affiliations of Massoud Pedram include the University of California, Berkeley and Syracuse University.


Papers
Proceedings ArticleDOI
25 Mar 2013
TL;DR: Experimental results demonstrate the effectiveness of the proposed nested two-stage game-based optimization framework for the MCC interaction system, in which the mobile devices achieve simultaneous reductions in average power consumption and average service request response time of 21.8% and 31.9%, respectively, compared with baseline methods.
Abstract: The rapidly developing cloud computing and virtualization techniques provide mobile devices with battery energy saving opportunities by allowing them to offload computation and execute applications remotely. A mobile device should judiciously decide whether to offload computation and which portion of an application should be offloaded to the cloud. In this paper, we consider a mobile cloud computing (MCC) interaction system consisting of multiple mobile devices and the cloud computing facilities. We provide a nested two-stage game formulation for the MCC interaction system. In the first stage, each mobile device determines the portion of its service requests to be processed remotely in the cloud. In the second stage, the cloud computing facilities allocate a portion of their total resources for service request processing depending on the request arrival rate from all the mobile devices. The objective of each mobile device is to minimize its power consumption as well as its service request response time. The objective of the cloud computing controller is to maximize its own profit. Based on the backward induction principle, we derive the optimal or near-optimal strategy for all the mobile devices as well as the cloud computing controller in the nested two-stage game using convex optimization techniques. Experimental results demonstrate the effectiveness of the proposed nested two-stage game-based optimization framework for the MCC interaction system. The mobile devices can achieve simultaneous reductions in average power consumption and average service request response time of 21.8% and 31.9%, respectively, compared with baseline methods.
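A minimal sketch of the first-stage decision described above, assuming M/M/1-style response-time models and illustrative parameters (arrival rate, service rates, power figures) that are not taken from the paper:

```python
# Hypothetical sketch: one mobile device choosing what fraction of its
# service requests to offload, minimizing a weighted sum of power and
# response time. The models and constants are illustrative only.
from scipy.optimize import minimize_scalar

LAMBDA = 10.0      # request arrival rate at the device (req/s)
MU_LOCAL = 15.0    # local service rate (req/s)
MU_CLOUD = 100.0   # cloud service rate allocated to this device (req/s)
P_LOCAL = 2.0      # power when processing locally (W)
P_TX = 0.8         # radio power while offloading (W)
ALPHA = 0.5        # weight trading off power vs. response time

def cost(beta):
    """Weighted power plus delay when a fraction beta of requests is offloaded."""
    lam_local = (1 - beta) * LAMBDA
    lam_cloud = beta * LAMBDA
    # M/M/1 response times for the local CPU and for the cloud share
    t_local = 1.0 / (MU_LOCAL - lam_local) if lam_local < MU_LOCAL else float("inf")
    t_cloud = 1.0 / (MU_CLOUD - lam_cloud) if lam_cloud < MU_CLOUD else float("inf")
    avg_delay = (1 - beta) * t_local + beta * t_cloud
    avg_power = (1 - beta) * P_LOCAL + beta * P_TX
    return ALPHA * avg_power + (1 - ALPHA) * avg_delay

res = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
print(f"offload fraction ~ {res.x:.2f}, cost ~ {res.fun:.3f}")
```

In the paper's formulation this per-device choice is coupled with the cloud controller's resource-allocation stage and solved by backward induction; the sketch only shows a single-device convex subproblem.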

76 citations

Journal ArticleDOI
TL;DR: The proposed backlight scaling technique is capable of efficiently computing the flickering effect online and subsequently using a measure of the temporal distortion to appropriately adjust the slack on the intra-frame spatial distortion, thereby achieving a good balance between the two sources of distortion while maximizing the backlight dimming-driven energy saving in the display system and meeting an overall video quality figure of merit.
Abstract: Liquid crystal displays (LCDs) have appeared in applications ranging from medical equipment to automobiles, gas pumps, laptops, and handheld portable computers. These display components present a cascaded energy attenuator to the battery of the handheld device and are responsible for about half of the energy drain at maximum display intensity. As such, the display components become the main focus of every effort to maximize the embedded system's battery lifetime. This paper proposes an approach for pixel transformation of the displayed image to increase the potential energy saving of the backlight scaling method. The proposed approach takes advantage of human visual system (HVS) characteristics and tries to minimize distortion between the perceived brightness values of the individual pixels in the original image and those of the backlight-scaled image. This is in contrast to previous backlight scaling approaches, which simply match the luminance values of the individual pixels in the original and backlight-scaled images. Furthermore, this paper proposes a temporally-aware backlight scaling technique for video streams. The goal is to maximize energy saving in the display system by means of dynamic backlight dimming subject to a video distortion tolerance. The video distortion comprises: 1) an intra-frame (spatial) distortion component due to frame-sensitive backlight scaling and transmittance function tuning and 2) an inter-frame (temporal) distortion component due to large-step backlight dimming across frames modulated by the psychophysical characteristics of the human visual system. The proposed backlight scaling technique is capable of efficiently computing the flickering effect online and subsequently using a measure of the temporal distortion to appropriately adjust the slack on the intra-frame spatial distortion, thereby achieving a good balance between the two sources of distortion while maximizing the backlight dimming-driven energy saving in the display system and meeting an overall video quality figure of merit. The proposed dynamic backlight scaling approach is amenable to highly efficient hardware realization and has been implemented on the Apollo Testbed II. Actual current measurements demonstrate the effectiveness of the proposed technique compared to previous backlight dimming techniques, which have ignored the temporal distortion effect.
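A toy illustration of the basic backlight-scaling trade-off described above: dim the backlight, boost pixel transmittance to compensate, and accept some clipping distortion. The distortion metric below is plain MSE on a synthetic frame, not the HVS-based perceived-brightness or temporal flicker models used in the paper:

```python
# Toy backlight scaling: dim the backlight to a factor b, boost pixel
# transmittance to compensate, and measure the clipping distortion this
# introduces. Values and the MSE metric are illustrative only.
import numpy as np

def backlight_scale(frame, b):
    """Return the displayed frame and its distortion for a backlight factor b in (0, 1]."""
    compensated = np.clip(frame / b, 0.0, 1.0)   # transmittance boost, clipped at full-on
    displayed = compensated * b                  # what the viewer actually sees
    distortion = np.mean((displayed - frame) ** 2)
    return displayed, distortion

rng = np.random.default_rng(0)
frame = rng.uniform(0.0, 1.0, size=(480, 640))   # synthetic luminance frame in [0, 1]

# Pick the lowest backlight level whose spatial distortion stays under a tolerance.
tolerance = 1e-3
for b in np.linspace(0.3, 1.0, 15):
    _, d = backlight_scale(frame, b)
    if d <= tolerance:
        print(f"backlight ~ {b:.2f}, distortion {d:.5f}")
        break
```

The paper additionally penalizes large backlight steps between consecutive frames (temporal flicker), which is what the online distortion-slack adjustment in the TL;DR refers to.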

76 citations

Proceedings ArticleDOI
10 Feb 1998
TL;DR: In this article, the logic construction of a double-edge-triggered (DET) flip-flop, which can receive the input signal at both levels of the clock, is analyzed, and a new circuit design of a CMOS DET flip-flop is proposed.
Abstract: The logic construction of a double-edge-triggered (DET) flip-flop, which can receive the input signal at both levels of the clock, is analyzed, and a new circuit design of a CMOS DET flip-flop is proposed. Simulation using SPICE and a 1 µm technology shows that this DET flip-flop has ideal logic functionality, a simpler structure, lower delay time, and a higher maximum data rate compared to other existing CMOS DET flip-flops. By simulating and comparing the proposed DET flip-flop with the traditional single-edge-triggered (SET) flip-flop, it is shown that the proposed DET flip-flop reduces power dissipation by half while keeping the same data rate.
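A behavioral (not transistor-level) sketch of what double-edge triggering means: the output captures the input on both the rising and the falling clock edge, so the clock can run at half the frequency for the same data rate. Class and signal names are illustrative:

```python
# Behavioral model of a double-edge-triggered flip-flop: the output
# captures the input on every clock edge, rising or falling.
class DETFlipFlop:
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, d, clk):
        if clk != self._prev_clk:      # any edge, rising or falling
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DETFlipFlop()
data = [1, 0, 1, 1, 0, 0, 1, 0]
clk_levels = [1, 0, 1, 0, 1, 0, 1, 0]   # one new datum per clock level change
outputs = [ff.tick(d, c) for d, c in zip(data, clk_levels)]
print(outputs)   # the flip-flop captures the input at every edge
```

Halving the clock frequency for the same throughput is what cuts the clock-related dynamic power roughly in half, consistent with the SET comparison reported above.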

76 citations

Journal ArticleDOI
TL;DR: An efficient metric to estimate the capacitive crosstalk in nanometer high-speed very large scale integration circuits is presented, and closed-form expressions for the peak amplitude, the pulsewidth, and the time-domain waveform of the crosstalk noise are provided.
Abstract: Rapid technology scaling, along with the continuous increase in operating frequency, causes crosstalk noise to become a major source of performance degradation in high-speed integrated circuits. This paper presents an efficient metric to estimate the capacitive crosstalk in nanometer high-speed very large scale integration circuits. In particular, we provide closed-form expressions for the peak amplitude, the pulsewidth, and the time-domain waveform of the crosstalk noise. Experimental results show that the maximum error of our noise predictions is less than 13%, while the average error is only 5.82%.
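The abstract does not reproduce the paper's closed-form expressions, so the following is only a generic first-order coupled-RC estimate of the peak noise on a quiet victim net, included to make the kind of metric concrete; the formula and parameter values are a textbook approximation, not the paper's model:

```python
# Generic first-order estimate of peak crosstalk noise on a quiet victim
# net driven through an effective resistance, for a ramp aggressor.
# This is a standard coupled-RC approximation, not the paper's metric.
import math

def crosstalk_peak(vdd, cc, cv, r_victim, t_rise):
    """Peak noise (V) induced on a victim net held by a driver of resistance r_victim.

    vdd      : aggressor voltage swing (V)
    cc       : coupling capacitance to the aggressor (F)
    cv       : victim ground/load capacitance (F)
    r_victim : effective victim driver resistance (ohm)
    t_rise   : aggressor transition time (s)
    """
    tau = r_victim * (cc + cv)                      # victim time constant
    return vdd * (r_victim * cc / t_rise) * (1.0 - math.exp(-t_rise / tau))

# Example: 1 V swing, 20 fF coupling, 30 fF load, 1 kohm driver, 50 ps edge
print(f"peak noise ~ {crosstalk_peak(1.0, 20e-15, 30e-15, 1e3, 50e-12):.3f} V")
```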

73 citations

Journal ArticleDOI
TL;DR: The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime, as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.
Abstract: FinFET devices have been proposed as a promising substitute for conventional bulk CMOS-based devices at the nanoscale due to their extraordinary properties such as improved channel controllability, a high on/off current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. This brief builds standard cell libraries for the advanced 7-nm FinFET technology, supporting multiple threshold voltages and supply voltages. The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.
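Part of the near-threshold advantage follows from the roughly quadratic dependence of dynamic switching energy on supply voltage. A back-of-the-envelope illustration using a generic E ≈ C·Vdd² model, with made-up capacitance and voltage values that are not taken from the paper's libraries:

```python
# Generic dynamic-energy illustration: E_dyn ~ C_eff * Vdd^2 per switching
# event. The capacitance and supply voltages below are made up and are not
# taken from the 7-nm FinFET standard cell libraries described above.
def dynamic_energy(c_eff_farads, vdd_volts):
    return c_eff_farads * vdd_volts ** 2

c_eff = 1e-15                            # 1 fF effective switched capacitance
e_super = dynamic_energy(c_eff, 0.70)    # superthreshold supply
e_near = dynamic_energy(c_eff, 0.45)     # near-threshold supply

print(f"superthreshold: {e_super * 1e15:.2f} fJ, near-threshold: {e_near * 1e15:.2f} fJ")
print(f"energy ratio ~ {e_super / e_near:.2f}x")
```

The much larger cross-node factors reported in the paper also reflect device and capacitance scaling from 45 nm and 14 nm bulk CMOS down to 7-nm FinFET, not supply voltage alone.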

72 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at the time: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
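The mail-filtering example in the fourth category can be made concrete with a tiny per-user text classifier. The sketch below uses scikit-learn with made-up training messages; the labels stand in for which messages a particular user kept or rejected:

```python
# Toy per-user mail filter: learn from messages the user has kept or
# rejected. The training data here is invented for illustration; in
# practice it would come from the user's own mailbox.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "limited offer, click here",
    "meeting moved to 3pm", "lunch tomorrow?",
    "cheap pills online", "project status report attached",
]
labels = ["reject", "reject", "keep", "keep", "reject", "keep"]

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free offer just for you", "status meeting at noon"]))
```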

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are given in this article, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations