Massoud Pedram

Researcher at University of Southern California

Publications - 812
Citations - 25236

Massoud Pedram is an academic researcher at the University of Southern California. He has contributed to research in topics including Energy consumption and CMOS. He has an h-index of 77 and has co-authored 780 publications receiving 23047 citations. Previous affiliations of Massoud Pedram include the University of California, Berkeley and Syracuse University.

Papers

Memristive-based Mixed-signal CGRA for Accelerating Deep Neural Network Inference

TL;DR: In this article, a mixed-signal coarse-grained reconfigurable architecture (CGRA) is proposed for accelerating inference in deep neural networks (DNNs); it performs dot-product computations in the analog domain to achieve a considerable speed improvement.
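A minimal numeric sketch of the underlying idea, using NumPy and assumed parameter values rather than anything from the paper: weights are stored as memristor conductances, inputs arrive as voltages, and each column current realizes a multiply-accumulate that an ADC then quantizes.

```python
import numpy as np

def crossbar_dot_product(voltages, conductances, adc_bits=8, i_max=1.0):
    """Model an analog dot product on a memristive crossbar.

    Each column current is the sum of voltage * conductance products
    (Kirchhoff's current law); an ADC then quantizes the result.
    All parameter values here are illustrative assumptions.
    """
    currents = voltages @ conductances          # I_j = sum_i V_i * G_ij
    levels = 2 ** adc_bits - 1
    quantized = np.round(np.clip(currents, 0.0, i_max) / i_max * levels)
    return quantized / levels * i_max           # back to current units

# Hypothetical example: 4 input rows, 3 output columns.
v = np.array([0.2, 0.5, 0.1, 0.8])             # input voltages
g = np.random.uniform(0.0, 0.25, size=(4, 3))  # memristor conductances
print(crossbar_dot_product(v, g))
```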

RT-Level Power Analysis Using Information Theoretic Measures

TL;DR: It is shown that the average switching activity can be predicted without simulation using either entropy or informational energy averages, and two new measures relying on these concepts are developed.
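For context, a brief sketch of the per-bit quantities this line of work builds on; the formulas below are the standard definitions under a temporal-independence assumption, not the paper's actual estimation procedure.

```python
import numpy as np

def signal_stats(p_one):
    """Per-line statistics from the probability of a signal being 1.

    Under temporal independence, switching activity is 2*p*(1-p);
    entropy and informational energy are the two averages the
    information-theoretic approach relies on.
    """
    p = np.asarray(p_one, dtype=float)
    eps = 1e-12  # guard against log(0)
    entropy = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    info_energy = p**2 + (1 - p)**2
    switching = 2 * p * (1 - p)
    return entropy, info_energy, switching

# Hypothetical 4-line bus with assumed signal probabilities.
h, e, sw = signal_stats([0.5, 0.1, 0.8, 0.3])
print("avg entropy:", h.mean(), "avg info energy:", e.mean(),
      "avg switching:", sw.mean())
```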

Brain Tumor Detection using Convolutional Neural Networks with Skip Connections

TL;DR: In this article, different CNN architecture optimization techniques, such as widening and deepening the network and adding skip connections, are applied to improve the accuracy of a CNN for brain tumor classification from Magnetic Resonance Imaging (MRI) scans.
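A minimal sketch of the skip-connection idea as a basic PyTorch residual block; the channel counts and layer shapes are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """A basic residual block: two conv layers plus an identity shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection adds the input back

# Hypothetical batch: MRI feature maps already lifted to 16 channels.
x = torch.randn(2, 16, 64, 64)
print(SkipBlock(16)(x).shape)
```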

Federated learning by employing knowledge distillation on edge devices with limited hardware resources

TL;DR: In this article, a federated learning approach that utilizes the computational resources of IoT edge devices for training deep neural networks is presented. Instead of training the original neural network (NN) on each edge device, a smaller NN generated from the main model by a proposed heuristic method is trained on the device.
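A sketch of the standard Hinton-style knowledge-distillation objective that setups like this commonly use to let a small edge model learn from a larger one; the temperature and mixing weight are assumed values, and the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a soft-target KL term against the teacher with the usual
    cross-entropy on the true labels. T and alpha are assumed values.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to the same magnitude as hard loss
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Hypothetical batch: 8 samples, 10 classes.
s = torch.randn(8, 10, requires_grad=True)  # student outputs
t = torch.randn(8, 10)                      # teacher outputs
y = torch.randint(0, 10, (8,))              # true labels
print(distillation_loss(s, t, y).item())
```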

SNT: Sharpness-Minimizing Network Transformation for Fast Compression-friendly Pretraining

TL;DR: In this article, a sharpness-minimizing network transformation (SNT) method is proposed to create models with desirable compressibility and generalizability features during pretraining.
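To illustrate what sharpness minimization means in training, here is a sketch of one sharpness-aware minimization (SAM) step; note that SNT itself is a network transformation applied at pretraining time, not this optimizer loop.

```python
import torch
import torch.nn as nn

def sam_step(model, loss_fn, inputs, targets, optimizer, rho=0.05):
    """One SAM step: ascend to the worst-case nearby weights, compute the
    gradient there, then update the original weights with that gradient.
    """
    # First pass: gradient at the current weights.
    loss_fn(model(inputs), targets).backward()
    grads = [p.grad.clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g**2).sum() for g in grads)) + 1e-12
    # Perturb weights toward the ascent direction within radius rho.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / norm)
    model.zero_grad()
    # Second pass: gradient at the perturbed (sharpest nearby) weights.
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / norm)  # undo the perturbation
    optimizer.step()                # update uses the perturbed-point gradient
    model.zero_grad()

# Hypothetical usage on a toy linear model.
model = nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
sam_step(model, nn.functional.cross_entropy, x, y, opt)
```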