Author

Daniel Mosse

Bio: Daniel Mosse is an academic researcher from the University of Pittsburgh. He has contributed to research on topics including Scheduling (computing) and Energy consumption, has an h-index of 48, and has co-authored 247 publications receiving 7,578 citations. Previous affiliations of Daniel Mosse include the University of Maryland, College Park and the Universidade Federal de Santa Catarina.


Papers
Proceedings ArticleDOI
03 Dec 2001
TL;DR: It is established that solving an instance of the static power-aware scheduling problem is equivalent to solving an instance of the reward-based scheduling problem [1, 4] with concave reward functions.
Abstract: In this paper we address power-aware scheduling of periodic hard real-time tasks using dynamic voltage scaling. Our solution includes three parts: (a) a static (off-line) solution to compute the optimal speed, assuming worst-case workload for each arrival, (b) an on-line speed reduction mechanism to reclaim energy by adapting to the actual workload, and (c) an on-line, adaptive and speculative speed adjustment mechanism to anticipate early completions of future executions by using average-case workload information. All these solutions still guarantee that all deadlines are met. Our simulation results show that the reclaiming algorithm saves a striking 50% of the energy consumed by the static algorithm, and our speculative techniques allow for an additional savings of approximately 20% over the reclaiming algorithm. In this study, we also establish that solving an instance of the static power-aware scheduling problem is equivalent to solving an instance of the reward-based scheduling problem [1, 4] with concave reward functions.
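The static component can be made concrete. Below is a minimal Python sketch of the offline speed computation under EDF: for a convex power function, running every job at a uniform speed equal to the worst-case utilization minimizes energy while meeting all deadlines. The function name and task representation are illustrative assumptions, not from the paper.

```python
# Hedged sketch of the static (offline) speed computation: under EDF,
# a uniform speed equal to the worst-case utilization keeps all deadlines.
# Task representation and names are illustrative assumptions.

def static_speed(tasks, s_min=0.1, s_max=1.0):
    """tasks: list of (wcet_at_max_speed, period) pairs."""
    utilization = sum(wcet / period for wcet, period in tasks)
    # Infeasible if the task set overloads the CPU even at full speed.
    assert utilization <= s_max, "task set not schedulable at full speed"
    return max(utilization, s_min)   # never drop below the hardware minimum

print(static_speed([(1.0, 4.0), (2.0, 8.0)]))  # U = 0.5 -> run at half speed
```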

495 citations

Journal ArticleDOI
TL;DR: The simulation results show that the reclaiming algorithm alone outperforms other recently proposed intertask voltage scheduling schemes, and the speculative techniques are shown to provide additional gains, coming within 10 percent of the theoretical lower bound.
Abstract: We address power-aware scheduling of periodic tasks to reduce CPU energy consumption in hard real-time systems through dynamic voltage scaling. Our intertask voltage scheduling solution includes three components: 1) a static (offline) solution to compute the optimal speed, assuming worst-case workload for each arrival, 2) an online speed reduction mechanism to reclaim energy by adapting to the actual workload, and 3) an online, adaptive and speculative speed adjustment mechanism to anticipate early completions of future executions by using average-case workload information. All these solutions still guarantee that all deadlines are met. Our simulation results show that our reclaiming algorithm alone outperforms other recently proposed intertask voltage scheduling schemes. Our speculative techniques are shown to provide additional gains, coming within 10 percent of the theoretical lower bound.
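To make the reclaiming idea concrete, here is a hedged Python sketch of the core intuition: when a job finishes ahead of its worst-case budget, the leftover time becomes slack that lets the next job run slower. This is a simplification for illustration, not the paper's exact priority-ordered reclaiming algorithm.

```python
# Simplified illustration of online slack reclaiming: stretch the next job's
# worst-case budget with the slack left behind by an early completion.

def reclaimed_speed(wcet, static_speed, slack):
    """Speed for the next job after `slack` units of CPU time were freed up."""
    budget = wcet / static_speed      # time allotted under the static plan
    return wcet / (budget + slack)    # same work spread over a longer window

speed = reclaimed_speed(wcet=2.0, static_speed=0.8, slack=1.0)
print(round(speed, 3))  # 0.571: slower than 0.8, deadline still met
```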

481 citations

Proceedings ArticleDOI
07 Nov 2004
TL;DR: In this article, the authors investigated the effects of frequency and voltage scaling on the fault rate, proposed two fault rate models based on previously published data, and analyzed the effect of energy management on reliability.
Abstract: The slack time in real-time systems can be used by recovery schemes to increase system reliability as well as by frequency and voltage scaling techniques to save energy. Moreover, the rate of transient faults (i.e., soft errors caused, for example, by cosmic ray radiation) also depends on system operating frequency and supply voltage. Thus, there is an interesting trade-off between system reliability and energy consumption. This work first investigates the effects of frequency and voltage scaling on the fault rate and proposes two fault rate models based on previously published data. Then, the effects of energy management on reliability are studied. Our analysis shows that energy management through frequency and voltage scaling can dramatically reduce system reliability, and that ignoring the effects of energy management on the fault rate is too optimistic and may lead to unsatisfactory system reliability.
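An exponential fault-rate model of the kind the paper proposes can be sketched as follows; the constants here (base rate, sensitivity exponent d, minimum frequency) are illustrative placeholders, not the paper's fitted values.

```python
# Sketch of an exponential transient-fault-rate model: lowering the
# normalized frequency f (and the voltage with it) raises the fault rate.
# lam0, d, and f_min are placeholder values, not fitted constants.

def fault_rate(f, lam0=1e-6, d=2.0, f_min=0.5):
    """Transient faults per second at normalized frequency f in [f_min, 1]."""
    return lam0 * 10 ** (d * (1.0 - f) / (1.0 - f_min))

for f in (1.0, 0.8, 0.6, 0.5):
    print(f, fault_rate(f))  # the rate grows as the processor slows down
```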

298 citations

Proceedings ArticleDOI
13 Jun 2001
TL;DR: It is shown that a task T_i can run at a constant speed S_i at every instance without hurting optimality, and it is proved that the EDF (Earliest Deadline First) scheduling policy can be used to obtain a feasible schedule with these optimal speed values.
Abstract: In this paper, we provide an efficient solution for periodic real-time tasks with (potentially) different power consumption characteristics. We show that a task T_i can run at a constant speed S_i at every instance without hurting optimality. We sketch an O(n^2 log n) algorithm to compute the optimal S_i values. We also prove that the EDF (Earliest Deadline First) scheduling policy can be used to obtain a feasible schedule with these optimal speed values.
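The feasibility claim behind the EDF result reduces to a one-line condition: with each task T_i slowed to its own constant speed S_i, the speed-scaled utilization must stay at or below 1. The sketch below illustrates that check only; the O(n^2 log n) speed-assignment algorithm itself is not reproduced here.

```python
# EDF feasibility check with per-task constant speeds: execution time of
# task i scales to C_i / S_i, so we need sum(C_i / (S_i * T_i)) <= 1.

def edf_feasible(tasks):
    """tasks: list of (wcet_at_max_speed, period, speed) triples."""
    return sum(c / (s * p) for c, p, s in tasks) <= 1.0

print(edf_feasible([(1.0, 4.0, 0.5), (2.0, 8.0, 0.5)]))  # True: scaled U = 1.0
```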

216 citations

Proceedings ArticleDOI
08 Mar 2010
TL;DR: In this paper, the authors describe techniques to enhance the lifetime of phase-change memory (PCM) when used for main memory, including cache replacement policies, avoidance of unnecessary writebacks, and endurance management with a novel PCM-aware swap algorithm for wear-leveling.
Abstract: The introduction of Phase-Change Memory (PCM) as a main memory technology has great potential to achieve large energy reductions. PCM has desirable energy and scalability properties, but its use for main memory also poses challenges such as limited write endurance, with at most 10^7 writes per bit cell before failure. This paper describes techniques to enhance the lifetime of PCM when used for main memory. Our techniques are (a) writeback minimization with new cache replacement policies, (b) avoidance of unnecessary writes, writing only the bit cells that are actually changed, and (c) endurance management with a novel PCM-aware swap algorithm for wear-leveling. A failure detection algorithm is also incorporated to improve the reliability of PCM. With these approaches, the lifetime of a PCM main memory is increased from just a few days to over 8 years.
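Technique (b) can be illustrated with a short sketch: compare the incoming data with what the PCM line already holds and update only the cells that differ. The byte-level granularity and function name are illustrative; real designs do this comparison at the bit or word level in hardware.

```python
# Hedged sketch of a differential ("data-comparison") write: only cells whose
# content actually changes are written, saving PCM endurance.

def differential_write(line: bytearray, new: bytes) -> int:
    """Overwrite `line` with `new` in place; return how many bytes changed."""
    written = 0
    for i, (old, fresh) in enumerate(zip(line, new)):
        if old != fresh:        # skip cells that already hold the right value
            line[i] = fresh
            written += 1
    return written

line = bytearray(b"hello world")
print(differential_write(line, b"hello-world"))  # 1: a single cell rewritten
```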

200 citations


Cited by
09 Mar 2012
TL;DR: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems; this entry introduces ANNs using familiar econometric terminology and provides an overview of the ANN modeling approach and its implementation methods.
Abstract: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANNs using familiar econometric terminology and provide an overview of the ANN modeling approach and its implementation methods.
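As a minimal illustration of the "flexible nonlinear model" the entry describes, the sketch below evaluates a single-hidden-layer feedforward network, y = b · g(Wx + c); the random weights are placeholders, since in practice they would be estimated from data.

```python
# One-hidden-layer feedforward ANN: input -> nonlinear hidden layer -> linear
# output. Weights are random placeholders for illustration only.

import numpy as np

def ann_forward(x, W, c, b, g=np.tanh):
    """Evaluate y = b . g(W x + c) for a single input vector x."""
    return b @ g(W @ x + c)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                  # input features
W, c, b = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=5)
print(ann_forward(x, W, c, b))                          # scalar nonlinear output
```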

2,069 citations

Book
12 Aug 2005
TL;DR: In this article, the authors state several problems related to topology control in wireless ad hoc and sensor networks, and survey state-of-the-art solutions which have been proposed to tackle them.
Abstract: Topology Control (TC) is one of the most important techniques used in wireless ad hoc and sensor networks to reduce energy consumption (which is essential to extend the network operational time) and radio interference (with a positive effect on the network traffic carrying capacity). The goal of this technique is to control the topology of the graph representing the communication links between network nodes with the purpose of maintaining some global graph property (e.g., connectivity), while reducing energy consumption and/or interference that are strictly related to the nodes' transmitting range. In this article, we state several problems related to topology control in wireless ad hoc and sensor networks, and we survey state-of-the-art solutions which have been proposed to tackle them. We also outline several directions for further research which we hope will motivate researchers to undertake additional studies in this field.
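One classic range-assignment scheme in this space can be sketched briefly: build a minimum spanning tree over the nodes and give each node a transmit range equal to its longest incident MST edge, which preserves connectivity while shrinking ranges. This is a generic illustration consistent with the survey's framing, not an algorithm taken from the book.

```python
# Hedged sketch of MST-based topology control: each node's range is the
# length of its longest incident MST edge, keeping the network connected.

import math

def mst_ranges(points):
    """points: list of (x, y) node positions. Returns per-node ranges."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree, ranges = {0}, [0.0] * n
    while len(in_tree) < n:                       # Prim's algorithm
        u, v = min(((u, v) for u in in_tree for v in range(n)
                    if v not in in_tree), key=lambda e: dist(*e))
        ranges[u] = max(ranges[u], dist(u, v))    # both endpoints must reach
        ranges[v] = max(ranges[v], dist(u, v))
        in_tree.add(v)
    return ranges

print(mst_ranges([(0, 0), (1, 0), (1, 2), (4, 2)]))  # [1.0, 2.0, 3.0, 3.0]
```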

1,367 citations

Proceedings ArticleDOI
05 Nov 2003
TL;DR: This paper reports on a systematic medium-scale measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot, which has interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.
Abstract: Wireless sensor networks promise fine-grain monitoring in a wide variety of environments. Many of these environments (e.g., indoor environments or habitats) can be harsh for wireless communication. From a networking perspective, the most basic aspect of wireless communication is the packet delivery performance: the spatio-temporal characteristics of packet loss, and its environmental dependence. These factors will deeply impact the performance of data acquisition from these networks. In this paper, we report on a systematic medium-scale (up to sixty nodes) measurement of packet delivery in three different environments: an indoor office building, a habitat with moderate foliage, and an open parking lot. Our findings have interesting implications for the design and evaluation of routing and medium-access protocols for sensor networks.
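The basic metric such measurement studies report is easy to state in code: the packet delivery ratio of a link, computed from the sequence numbers a receiver logged out of a known number of transmissions. A purely illustrative sketch:

```python
# Per-link packet delivery ratio (PDR) from logged sequence numbers.

def packet_delivery_ratio(received_seqs, num_sent):
    """Fraction of the `num_sent` numbered packets heard at the receiver."""
    return len(set(received_seqs)) / num_sent   # sets drop duplicate receptions

print(packet_delivery_ratio([0, 1, 1, 3, 7], num_sent=10))  # 0.4
```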

1,326 citations

Proceedings ArticleDOI
30 Mar 2011
TL;DR: Dominant Resource Fairness (DRF), a generalization of max-min fairness to multiple resource types, is proposed, and it is shown that it leads to better throughput and fairness than the slot-based fair sharing schemes in current cluster schedulers.
Abstract: We consider the problem of fair resource allocation in a system containing different resource types, where each user may have different demands for each resource. To address this problem, we propose Dominant Resource Fairness (DRF), a generalization of max-min fairness to multiple resource types. We show that DRF, unlike other possible policies, satisfies several highly desirable properties. First, DRF incentivizes users to share resources, by ensuring that no user is better off if resources are equally partitioned among them. Second, DRF is strategy-proof, as a user cannot increase her allocation by lying about her requirements. Third, DRF is envy-free, as no user would want to trade her allocation with that of another user. Finally, DRF allocations are Pareto efficient, as it is not possible to improve the allocation of a user without decreasing the allocation of another user. We have implemented DRF in the Mesos cluster resource manager, and show that it leads to better throughput and fairness than the slot-based fair sharing schemes in current cluster schedulers.
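The allocation rule described in the abstract lends itself to a short sketch: repeatedly pick the user with the smallest dominant share (their largest per-resource usage fraction) and grant them one more task, stopping each user once their demand no longer fits. The code below illustrates that progressive-filling idea, seeded with the resource demands the paper uses as a running example; the function and field names are ours.

```python
# Hedged sketch of DRF progressive filling: always serve the user with the
# smallest dominant share, retiring a user once their next task cannot fit.

def drf_allocate(capacity, demands):
    """capacity: {resource: total}; demands: {user: {resource: per-task demand}}."""
    used = {r: 0.0 for r in capacity}
    tasks = {u: 0 for u in demands}
    active = set(demands)

    def dominant_share(u):
        return max(tasks[u] * d / capacity[r] for r, d in demands[u].items())

    while active:
        u = min(active, key=dominant_share)       # most under-served user
        if any(used[r] + d > capacity[r] for r, d in demands[u].items()):
            active.discard(u)                     # u's next task no longer fits
            continue
        for r, d in demands[u].items():
            used[r] += d
        tasks[u] += 1
    return tasks

# 9 CPUs and 18 GB shared by A (<1 CPU, 4 GB> per task) and B (<3 CPUs, 1 GB>):
print(drf_allocate({"cpu": 9, "mem": 18},
                   {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}))
# -> {'A': 3, 'B': 2}, equalizing dominant shares at 2/3
```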

1,189 citations