
Massoud Pedram

Researcher at University of Southern California

Publications: 812
Citations: 25236

Massoud Pedram is an academic researcher at the University of Southern California. He has contributed to research on topics including energy consumption and CMOS, has an h-index of 77, and has co-authored 780 publications receiving 23047 citations. His previous affiliations include the University of California, Berkeley and Syracuse University.

Papers
Proceedings Article

Power-Aware Deployment and Control of Forced-Convection and Thermoelectric Coolers

TL;DR: Presents an optimization framework, OFTEC, that finds the optimal TEC driving current and fan speed to minimize the overall power consumption of the cooling system while maintaining safe die temperatures.
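As a rough illustration of this kind of joint cooling optimization (the thermal and power models, constants, and grid below are hypothetical stand-ins, not the paper's OFTEC formulation), one can search over TEC current and fan speed for the cheapest operating point that keeps the die below a temperature limit:

```python
import itertools

T_MAX = 85.0  # assumed safe die-temperature limit (degrees C)

def die_temp(i_tec, fan):
    # Toy thermal model: more TEC current and airflow lower the die
    # temperature, with diminishing returns on TEC current.
    return 100.0 - 8.0 * i_tec - 6.0 * fan + 1.5 * i_tec * i_tec

def cooling_power(i_tec, fan):
    # Toy power model: TEC power grows quadratically with current,
    # fan power roughly cubically with speed.
    return 2.0 * i_tec ** 2 + 1.0 * fan ** 3

# Grid-search candidate (current, speed) pairs in 0.1 steps.
candidates = [(i / 10, f / 10) for i, f in
              itertools.product(range(0, 41), range(0, 31))]
feasible = [(i, f) for i, f in candidates if die_temp(i, f) <= T_MAX]
best = min(feasible, key=lambda p: cooling_power(*p))
```

A real formulation would use calibrated TEC and heat-sink models and a proper solver rather than brute force, but the structure — minimize total cooling power subject to a die-temperature constraint — is the same.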
Journal Article

All-Region Statistical Model for Delay Variation Based on Log-Skew-Normal Distribution

TL;DR: This paper proposes a single probability density function, based on the log-skew-normal distribution, that models delay variation under process variation across all regions of operation and a wide range of supply voltages.
Proceedings Article

Improving the Efficiency of Power Management Techniques by Using Bayesian Classification

TL;DR: Experimental results reveal that the proposed Bayesian-classification-based DPM technique achieves system-wide energy savings under rapidly and widely varying workloads.
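The core idea can be sketched with a tiny naive-Bayes policy (the feature, training data, and threshold below are illustrative assumptions, not the paper's classifier): from recent workload observations, predict whether the upcoming idle period will be long enough to justify entering a sleep state.

```python
from collections import Counter, defaultdict

# (feature, label) training pairs: feature = bucketed length of the last
# idle period; label = 1 if the *next* idle period exceeded the device's
# break-even time, else 0. All data here is made up for illustration.
history = [("short", 0), ("short", 0), ("long", 1), ("long", 1),
           ("short", 1), ("long", 0), ("long", 1), ("short", 0)]

label_counts = Counter(lbl for _, lbl in history)
feat_counts = defaultdict(Counter)
for feat, lbl in history:
    feat_counts[lbl][feat] += 1

def p_sleep_worthwhile(feat):
    """P(next idle period is long | feature), with Laplace smoothing."""
    scores = {}
    for lbl in (0, 1):
        prior = label_counts[lbl] / len(history)
        likelihood = (feat_counts[lbl][feat] + 1) / (label_counts[lbl] + 2)
        scores[lbl] = prior * likelihood
    return scores[1] / (scores[0] + scores[1])

def decide_sleep(feat, threshold=0.5):
    # Power down only when a long idle period is sufficiently likely.
    return p_sleep_worthwhile(feat) >= threshold
```

A classifier-driven policy like this adapts to workload statistics instead of using a fixed timeout, which is what lets it hold up under rapidly varying workloads.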
Journal Article

A Minimum-Skew Clock Tree Synthesis Algorithm for Single Flux Quantum Logic Circuits

TL;DR: Presents a synchronous minimum-skew clock tree synthesis algorithm for single flux quantum circuits that accounts for splitter delays and placement blockages, producing a fully balanced clock tree in which the number of clock splitters from the clock source to every sink node is identical.
Posted Content

NullaNet: Training Deep Neural Networks for Reduced-Memory-Access Inference

TL;DR: Presents a training method that enables a radically different realization of deep neural networks through Boolean logic minimization: it completely removes the energy-hungry step of accessing memory for model parameters, consumes about two orders of magnitude fewer computing resources, and has substantially lower latency.
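The underlying trick can be sketched as follows (the weights and fan-in below are illustrative, not from the paper): once a neuron's inputs and output are binary, its entire behavior over k inputs fits in a 2^k-entry truth table, which can then be minimized into Boolean logic and evaluated with no weight-memory accesses at inference time.

```python
from itertools import product

# Hypothetical trained parameters of one binarized neuron with 4 inputs.
weights = [0.7, -1.2, 0.4, 0.9]
bias = -0.3

def neuron(bits):
    # Conventional evaluation: fetch weights, compute sign(w.x + b).
    s = sum(w * x for w, x in zip(weights, bits)) + bias
    return 1 if s >= 0 else 0

# Offline step: enumerate the neuron's complete truth table once.
truth_table = {bits: neuron(bits) for bits in product((0, 1), repeat=4)}

def neuron_as_logic(bits):
    # Online step: a pure Boolean lookup -- no weight fetches needed.
    return truth_table[tuple(bits)]
```

In hardware, the lookup table would be further compressed with logic minimization into a small combinational circuit, which is where the memory-access and resource savings come from.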