Author

Massoud Pedram

Bio: Massoud Pedram is an academic researcher at the University of Southern California. The author has contributed to research in topics including energy consumption and CMOS. The author has an h-index of 77 and has co-authored 780 publications receiving 23,047 citations. Previous affiliations of Massoud Pedram include the University of California, Berkeley and Syracuse University.


Papers
Proceedings ArticleDOI
23 Apr 2007
TL;DR: A hierarchical wireless sensor network with mobile overlays, along with a mobility-aware multi-hop routing scheme, is presented and analyzed in order to optimize the network lifetime, delay, and local storage size.
Abstract: Recent technological advances have led to the emergence of small battery-powered sensors with considerable, albeit limited, processing and communication capabilities. Wireless sensor networks have gained considerable attention in applications where spatially distributed events are to be monitored with minimal delay. We present and analyze a hierarchical wireless sensor network with mobile overlays, along with a mobility-aware multi-hop routing scheme, in order to optimize the network lifetime, delay, and local storage size. Fixed event aggregation relays and mobile relays are used to collect events from the sensors and send them to a central base station. We analyze the effects of various system parameters on the network performance, and formulate a convex optimization problem for maximizing the network lifetime subject to constraints on local storage, delay, and maintenance cost. Network behavior is studied, and the analytical results are validated through simulations.
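
As a rough illustration of the kind of lifetime-maximization formulation the abstract describes, the sketch below casts a toy version as a convex program in cvxpy. All variable names, constants, and constraint forms are assumptions made for illustration, not the paper's actual model.

```python
# A minimal sketch (not the paper's model): maximize network lifetime T for a
# cluster of N relays, where relay i forwards events at rate r_i and the rates
# must cover the total event rate R. Energy, storage, and delay figures below
# are illustrative placeholders.
import cvxpy as cp
import numpy as np

N = 5                                      # number of fixed aggregation relays
R = 100.0                                  # total event arrival rate (events/s)
battery = np.full(N, 5.0e3)                # energy budget per relay (J)
e_per_event = np.linspace(1e-3, 3e-3, N)   # J consumed per forwarded event
max_buffer = 200.0                         # local storage limit (events)
max_delay = 2.0                            # delay bound (s)

r = cp.Variable(N, nonneg=True)            # forwarding rate assigned to each relay
q = cp.Variable(nonneg=True)               # inverse lifetime, q = 1/T (keeps the problem convex)

constraints = [
    cp.sum(r) == R,                        # all events must be forwarded
    cp.multiply(e_per_event, r) <= q * battery,  # energy drained over lifetime <= battery
    r * max_delay <= max_buffer,           # events buffered during one delay window fit in storage
]
prob = cp.Problem(cp.Minimize(q), constraints)   # minimizing 1/T maximizes lifetime T
prob.solve()
print("estimated lifetime (s):", 1.0 / q.value)
```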

16 citations

Proceedings ArticleDOI
02 Oct 2005
TL;DR: This paper presents an approach to approximate the variational RC-π load using a canonical first-order model, and proposes a new framework to handle variation-aware gate timing analysis in block-based SSTA.
Abstract: As technology scales down, timing verification of digital integrated circuits becomes an extremely difficult task due to gate and wire variability. Therefore, statistical timing analysis is inevitable. Most timing tools divide the analysis into two parts: 1) interconnect (wire) timing analysis and 2) gate timing analysis. Variational interconnect delay calculation for block-based SSTA has been studied recently. However, variational gate delay calculation has remained unexplored. In this paper, we propose a new framework to handle variation-aware gate timing analysis in block-based SSTA. First, we present an approach to approximate the variational RC-π load using a canonical first-order model. Next, an efficient variation-aware effective capacitance calculation based on the statistical input transition, statistical gate timing library, and statistical RC-π load is presented. In this step, we use a single-iteration C_eff calculation which is efficient and reasonably accurate. Finally, we calculate the statistical gate delay and output slew based on the aforementioned models. Experimental results show an average error of 7% for gate delay and output slew with respect to HSPICE Monte Carlo simulation, while the runtime is about 145 times faster.
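
For intuition about the canonical first-order delay model mentioned above, the following sketch shows how such models are added and combined with Clark's max approximation in block-based SSTA. It is a minimal illustration with made-up numbers, not the paper's algorithm or its effective-capacitance calculation.

```python
# A minimal sketch of the canonical first-order delay model used in
# block-based SSTA: each delay is
#   d = d0 + sum_i a_i * dX_i + a_r * dR,
# where the dX_i are shared global variation sources and dR is an independent
# random term, all standard normal. "add" is exact; "max" uses Clark's
# moment-matching approximation. All numbers are illustrative only.
import numpy as np
from scipy.stats import norm

class Canonical:
    def __init__(self, mean, sens, rand):
        self.mean = mean                      # nominal value d0
        self.sens = np.asarray(sens, float)   # sensitivities a_i to global sources
        self.rand = rand                      # coefficient of the independent term

    def sigma(self):
        return np.sqrt(np.sum(self.sens**2) + self.rand**2)

    def add(self, other):
        # The sum of two canonical forms stays canonical
        # (independent terms add in quadrature).
        return Canonical(self.mean + other.mean,
                         self.sens + other.sens,
                         np.hypot(self.rand, other.rand))

    def max(self, other):
        # Clark's approximation: match the first two moments of max(A, B).
        sa, sb = self.sigma(), other.sigma()
        cov = np.dot(self.sens, other.sens)   # correlation via shared sources only
        theta = np.sqrt(max(sa**2 + sb**2 - 2*cov, 1e-12))
        x = (self.mean - other.mean) / theta
        t = norm.cdf(x)                       # "tightness": probability A dominates
        mean = self.mean*t + other.mean*(1-t) + theta*norm.pdf(x)
        sens = t*self.sens + (1-t)*other.sens
        # Lump the leftover variance into the independent term so the
        # second moment stays approximately correct.
        second = ((sa**2 + self.mean**2)*t + (sb**2 + other.mean**2)*(1-t)
                  + (self.mean + other.mean)*theta*norm.pdf(x))
        var = max(second - mean**2, np.sum(sens**2))
        return Canonical(mean, sens, np.sqrt(var - np.sum(sens**2)))

# Example: two gate delays (ps) sharing two global variation sources.
a = Canonical(100.0, [5.0, 2.0], 3.0)
b = Canonical( 95.0, [4.0, 6.0], 2.0)
arrival = a.max(b).add(Canonical(20.0, [1.0, 0.0], 1.0))
print(f"mean = {arrival.mean:.1f} ps, sigma = {arrival.sigma():.1f} ps")
```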

16 citations

Proceedings ArticleDOI
01 Nov 2019
TL;DR: This paper presents a dynamic programming-based technology mapping algorithm that generates a minimum-area mapping solution guaranteed to be fully path balanced, a property that is essential for conventional superconductive single flux quantum circuits, which otherwise fail.
Abstract: Path balancing technology mapping is a method of mapping a technology-independent logical description of a circuit, such as a Boolean network, into a technology-dependent, gate-level netlist. For a gate-level netlist generated by the path balancing mapper, the difference between lengths of the longest and the shortest paths in the circuit is minimized. To achieve full path balancing, it may be necessary to add buffers on signal paths, and in such a case, the cost of buffers must be properly accounted for. This paper presents a dynamic programming-based technology mapping algorithm that generates a minimum-area mapping solution which is guaranteed to be fully path balanced. The fully path balanced mapping solution is essential to conventional superconductive single flux quantum circuits, which will fail otherwise. The balanced mapping solution is also useful in CMOS circuits to avoid (or minimize) unwanted hazard activity and the resulting wasteful dynamic power dissipation as well as to achieve the maximum throughput in a wave-pipelined circuit. Experimental results show that our path balancing technology mapping algorithm decreases total area, static power consumption, and path balancing overhead of single flux quantum circuits by large factors. For example, it reduces the circuit area by up to 111% and by an average of 26.3% compared to state-of-the-art technology mappers.
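
To make the buffer-insertion aspect concrete, here is a minimal sketch of the levelization step that full path balancing implies: every fanin of a gate is padded with buffers until all paths arrive at the same logic depth. The toy netlist and gate names are invented for illustration; the paper's dynamic-programming mapper, which co-optimizes gate selection and buffer area, is not reproduced here.

```python
# A minimal sketch (not the paper's DP mapper) of the buffer-insertion step in
# path balancing: once a netlist is mapped, every fanin of a gate must arrive
# at the same logic depth, so shorter paths are padded with flip-flop buffers.
from collections import defaultdict

# netlist: gate -> list of fanins (primary inputs have no entry)
netlist = {
    "g1": ["a", "b"],
    "g2": ["g1", "c"],
    "g3": ["a", "g2"],     # path a -> g3 is much shorter than a -> g1 -> g2 -> g3
}
primary_inputs = {"a", "b", "c"}

def topo_order(netlist, pis):
    seen, order = set(pis), []
    def visit(n):
        if n in seen:
            return
        for f in netlist.get(n, []):
            visit(f)
        seen.add(n)
        order.append(n)
    for g in netlist:
        visit(g)
    return order

level = {pi: 0 for pi in primary_inputs}
buffers = defaultdict(int)              # (fanin, gate) -> number of inserted buffers

for g in topo_order(netlist, primary_inputs):
    level[g] = max(level[f] for f in netlist[g]) + 1
    for f in netlist[g]:
        # pad the shorter input so every fanin arrives at depth level[g] - 1
        buffers[(f, g)] = (level[g] - 1) - level[f]

print("levels:", level)
print("buffers per edge:", dict(buffers))
```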

16 citations

Proceedings ArticleDOI
07 May 2008
TL;DR: This work studies how the location-aware selection of the modulation schemes for sensors can affect their energy efficiency and shows how the energy in the network can be distributed more evenly by proper selection of those schemes for different sensors.
Abstract: Wireless sensor networks (WSN) with hierarchical organizations have recently attracted a lot of attention as effective platforms for pervasive computing. With power efficiency and lifetime awareness becoming critical design concerns, a significant amount of research has focused on energy-aware design of different layers of the WSN protocol stack. However, much less has been done in the way of incorporating physical layer characteristics at the system deployment stage and analyzing the effects on spatial energy balancing across the network and the resulting overall network lifetime. Our focus is on improving the lifetime of each cluster of sensors in a hierarchical WSN using optimization techniques at the physical layer. Specifically, we study how the location-aware selection of the modulation schemes for sensors can affect their energy efficiency. Furthermore, we show how the energy in the network can be distributed more evenly by proper selection of the modulation schemes for different sensors.
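
The sketch below illustrates the qualitative trade-off behind location-aware modulation selection: higher-order constellations amortize circuit energy over more bits per symbol but require more transmit energy, which grows with distance. The constants and the simple energy model are assumptions for illustration, not the paper's formulation.

```python
# A minimal sketch (illustrative constants, not the paper's model) of
# location-aware modulation selection: for a constellation carrying b bits per
# symbol, the transmit energy per bit grows roughly like (2**b - 1)/b * d**n,
# while the fixed circuit energy per symbol is amortized over b bits. Sensors
# far from the cluster head therefore prefer small constellations, nearby
# sensors larger ones.
PATH_LOSS_EXP = 3.5        # assumed path-loss exponent
K_TX = 1e-14               # assumed transmit-energy scaling constant
E_CIRCUIT = 2e-7           # assumed circuit energy per symbol (J)

def energy_per_bit(b, d):
    tx = K_TX * (2**b - 1) / b * d**PATH_LOSS_EXP
    circuit = E_CIRCUIT / b
    return tx + circuit

def best_modulation(d, b_choices=(1, 2, 4, 6)):
    return min(b_choices, key=lambda b: energy_per_bit(b, d))

for d in (10, 40, 80, 150):   # sensor-to-relay distances in meters
    b = best_modulation(d)
    print(f"d = {d:4d} m -> {2**b}-point constellation ({b} bit/sym), "
          f"{energy_per_bit(b, d)*1e9:.1f} nJ/bit")
```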

15 citations

Journal ArticleDOI
TL;DR: Simulation results for current recycling ERSFQ circuits are presented along with a strategy for implementing large superconducting circuits, and an innovative clock-choking mechanism using magnetic Josephson junctions is proposed.
Abstract: Energy-efficient rapid single flux quantum (ERSFQ) circuits have become a viable alternative for the implementation of superconducting circuits because of the large static power consumption of RSFQ circuits. ERSFQ circuits are built upon the popular RSFQ logic family by replacing the power-dissipating resistor bias network with a bias network consisting of active devices. In this paper, a simulation study of the ERSFQ biasing scheme is carried out by building simulation test benches for both synchronous and asynchronous ERSFQ circuits. The study examines the optimum value of the biasing inductance, the influence of the feeding Josephson transmission line (FJTL) and of its size, the effect of the feeding clock frequency, and the effect of the circuit operating frequency. An innovative clock-choking mechanism using magnetic Josephson junctions is also proposed for the FJTL: when a current-recycling circuit block has no logic activity, choking the clock helps eliminate the dynamic power consumed by the switching of the bias junctions. Simulation results for current-recycling ERSFQ circuits are presented along with a strategy for implementing large superconducting circuits.
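
As a back-of-the-envelope illustration of why ERSFQ biasing matters, the sketch below compares the static power of a resistive RSFQ bias network with the dynamic power of an ERSFQ bias bus whose voltage is set by the FJTL to Phi0 times the feeding clock frequency. The current and frequency values are assumed for illustration and are not taken from the paper.

```python
# A minimal sketch (textbook-style estimates, not the paper's simulation)
# comparing the static bias power of resistor-biased RSFQ with the dynamic
# bias power of ERSFQ. Bias current, supply voltage, and clock frequency are
# illustrative assumptions.
PHI0 = 2.07e-15            # single flux quantum (Wb)

total_bias_current = 50e-3 # total bias current of the circuit block (A), assumed
v_bias_rsfq = 2.6e-3       # typical RSFQ bias-resistor supply voltage (V), assumed
f_clk = 20e9               # feeding clock frequency of the FJTL (Hz), assumed

p_static_rsfq = total_bias_current * v_bias_rsfq    # dissipated in the bias resistors
p_bias_ersfq  = total_bias_current * PHI0 * f_clk   # dissipated by switching bias junctions

print(f"RSFQ static bias power  : {p_static_rsfq*1e6:8.2f} uW")
print(f"ERSFQ dynamic bias power: {p_bias_ersfq*1e6:8.2f} uW")
# If the FJTL clock is choked during idle periods (as the paper proposes using
# magnetic Josephson junctions), the ERSFQ term drops toward zero.
```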

15 citations


Cited by
Journal ArticleDOI


08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
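
The mail-filtering example in the fourth category can be made concrete with a few lines of scikit-learn; the tiny training set below is invented purely for illustration.

```python
# A minimal sketch of a personalized mail filter: a classifier learns which
# messages a particular user rejects instead of the user writing filtering
# rules by hand. The training data are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "limited time offer, claim your prize now",
    "meeting moved to 3pm, agenda attached",
    "cheap loans approved instantly, click here",
    "can you review the draft report before friday",
]
rejected = [1, 0, 1, 0]     # 1 = this user deleted the message, 0 = kept it

filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, rejected)

print(filter_model.predict(["you have won a prize, click to claim"]))   # likely [1]
print(filter_model.predict(["agenda for friday's review meeting"]))     # likely [0]
```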

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations