Author

Patricia Arroba

Bio: Patricia Arroba is an academic researcher from the Technical University of Madrid. The author has contributed to research in topics including Cloud computing and Computer science. The author has an h-index of 9 and has co-authored 18 publications receiving 217 citations.

Papers
Journal ArticleDOI
TL;DR: A DVFS policy that reduces power consumption while preventing performance degradation, and a DVFS‐aware consolidation policy that optimizes consumption are proposed, considering the DVFS configuration that would be necessary when mapping Virtual Machines to maintain Quality of Service.
Abstract: Computational demand in data centers is increasing because of the growing popularity of Cloud applications. However, data centers are becoming unsustainable in terms of power consumption and growing energy costs, so Cloud providers have to face the major challenge of placing them on a more scalable curve. Also, Cloud services are provided under strict Service Level Agreement conditions, so trade-offs between energy and performance have to be taken into account. Techniques such as Dynamic Voltage and Frequency Scaling (DVFS) and consolidation are commonly used to reduce the energy consumption in data centers, although they are applied independently and their effects on Quality of Service are not always considered. Thus, understanding the relationship between power, DVFS, consolidation, and performance is crucial to enable energy-efficient management at the data center level. In this work, we propose a DVFS policy that reduces power consumption while preventing performance degradation, and a DVFS-aware consolidation policy that optimizes consumption, considering the DVFS configuration that would be necessary when mapping Virtual Machines to maintain Quality of Service. We have performed an extensive evaluation on the CloudSim toolkit using real Cloud traces and an accurate power model based on data gathered from real servers. Our results demonstrate that including DVFS awareness in workload management provides substantial energy savings of up to 41.62% for scenarios under dynamic workload conditions. These outcomes outperform previous approaches that do not consider the integrated use of DVFS and consolidation strategies.

84 citations
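
The first contribution above, a performance-aware DVFS policy, can be illustrated with a small sketch: pick the lowest available frequency whose capacity still covers the current demand plus a safety margin, and estimate the resulting dynamic power. The frequency list, the demand figure and the cubic power term below are illustrative assumptions, not values from the paper.

```python
# Sketch of a performance-aware DVFS policy: choose the lowest frequency
# whose capacity covers the current CPU demand plus a QoS margin.
# Frequencies, demand and the power coefficient are illustrative placeholders.

AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # assumed discrete P-states

def required_frequency(cpu_demand_ghz, qos_margin=0.1, freqs=AVAILABLE_FREQS_GHZ):
    """Lowest frequency (GHz) able to serve the demand with a safety margin."""
    target = cpu_demand_ghz * (1.0 + qos_margin)
    feasible = [f for f in freqs if f >= target]
    return min(feasible) if feasible else max(freqs)   # saturate at f_max

def dynamic_power(freq_ghz, utilization, c_dyn=10.0):
    """Illustrative cubic dynamic-power term, P_dyn ~ c_dyn * u * f^3."""
    return c_dyn * utilization * freq_ghz ** 3

if __name__ == "__main__":
    demand = 1.5                      # GHz of capacity currently requested by the VMs
    f = required_frequency(demand)
    u = demand / f
    print(f"selected {f} GHz, utilization {u:.2f}, "
          f"P_dyn ~ {dynamic_power(f, u):.1f} W (illustrative)")
```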

Proceedings ArticleDOI
18 Oct 2015
TL;DR: This work proposes two contributions: a DVFS policy that takes into account the trade-offs between energy consumption and performance degradation; and a novel consolidation algorithm that is aware of the frequency that would be necessary when allocating a Cloud workload in order to maintain QoS.
Abstract: Nowadays, data centers consume about 2% of the worldwide energy production, generating more than 43 million tons of CO2 per year. Cloud providers need to implement an energy-efficient management of physical resources in order to meet the growing demand for their services and ensure minimal costs. From the application-framework viewpoint, Cloud workloads present additional restrictions, such as 24/7 availability and SLA constraints, among others. Also, workload variation impacts the performance of two of the main strategies for energy efficiency in Cloud data centers: Dynamic Voltage and Frequency Scaling (DVFS) and consolidation. Our work proposes two contributions: 1) a DVFS policy that takes into account the trade-offs between energy consumption and performance degradation; 2) a novel consolidation algorithm that is aware of the frequency that would be necessary when allocating a Cloud workload in order to maintain QoS. Our results demonstrate that including DVFS awareness in workload management provides substantial energy savings of up to 39.14% for scenarios under dynamic workload conditions.

33 citations
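
The second contribution, frequency-aware consolidation, can be sketched as a placement rule that, for each candidate host, computes the frequency it would need after accepting the VM and chooses the host with the smallest resulting power increase; hosts that could not meet QoS even at the maximum frequency are skipped. The capacities, demands and power model below are again assumptions for illustration.

```python
# Frequency-aware VM placement sketch: choose the host whose post-allocation
# DVFS setting yields the smallest power increase while still meeting QoS.
# All numbers and the power model are illustrative assumptions.

FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]      # assumed discrete P-states

def freq_for(demand_ghz, margin=0.1):
    """Lowest frequency covering the demand plus a QoS margin."""
    target = demand_ghz * (1.0 + margin)
    feasible = [f for f in FREQS_GHZ if f >= target]
    return min(feasible) if feasible else max(FREQS_GHZ)

def power(demand_ghz, c_dyn=10.0):
    """Illustrative dynamic power, c_dyn * utilization * f^3, at the chosen f."""
    f = freq_for(demand_ghz)
    return c_dyn * (demand_ghz / f) * f ** 3

def place_vm(vm_ghz, host_demands_ghz):
    """Best-fit by power increase: return the chosen host index, or None."""
    best, best_delta = None, float("inf")
    for i, d in enumerate(host_demands_ghz):
        if d + vm_ghz > max(FREQS_GHZ):     # cannot meet QoS even at f_max
            continue
        delta = power(d + vm_ghz) - power(d)
        if delta < best_delta:
            best, best_delta = i, delta
    return best

if __name__ == "__main__":
    hosts = [0.8, 1.9, 2.5]                 # current demand per host (GHz)
    print("VM of 0.6 GHz placed on host", place_vm(0.6, hosts))
```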

Journal ArticleDOI
TL;DR: This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for the identification of power models of enterprise servers in Cloud data centers, which not only reaches slightly better models than classical approaches but also broadens the possibilities to derive efficient energy-saving techniques for Cloud facilities.

23 citations
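
A compact illustration of the identification step: a particle swarm searches for the coefficients of an assumed power model (here a simple utilization/frequency form) that minimize the error against measured samples. The model shape, the synthetic data and the single-objective RMSE fitness are simplifications made for this sketch, not the paper's actual multi-objective setup.

```python
import numpy as np

# Sketch of metaheuristic power-model identification: a basic particle swarm
# fits the coefficients of an assumed model P = c0 + c1*u + c2*u*f^3 to
# (utilization, frequency, power) samples. Data are synthetic placeholders.

rng = np.random.default_rng(0)

# Synthetic "measurements": ground truth c = (60, 40, 2.5) plus noise.
u = rng.uniform(0.1, 1.0, 200)                  # CPU utilization
f = rng.choice([1.2, 1.6, 2.0, 2.4], 200)       # frequency (GHz)
p = 60 + 40 * u + 2.5 * u * f**3 + rng.normal(0, 1.5, 200)

def rmse(coeffs):
    c0, c1, c2 = coeffs
    pred = c0 + c1 * u + c2 * u * f**3
    return np.sqrt(np.mean((pred - p) ** 2))

def pso(n_particles=30, iters=200, w=0.7, c_p=1.5, c_g=1.5):
    pos = rng.uniform(0, 100, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([rmse(x) for x in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c_p * r1 * (pbest - pos) + c_g * r2 * (gbest - pos)
        pos += vel
        vals = np.array([rmse(x) for x in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

coeffs, err = pso()
print("identified coefficients:", np.round(coeffs, 2), "RMSE:", round(err, 2))
```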

Journal ArticleDOI
01 Dec 2016
TL;DR: The results show that the models can accurately predict the temperature of the servers in a data room, with prediction errors below 2 °C and 0.5 °C for CPU and server inlet temperature, respectively.
Abstract: Highlights: a modeling methodology for temperature prediction in data centers; prediction of server CPU and inlet temperature under variable cooling setups; development of time-dependent multi-variable models based on Grammatical Evolution; premature-convergence avoidance using Social Disaster Techniques and Random Offspring Generation; comparison with other techniques such as ARMA, N4SID and NARX; models tuned, trained and tested using measurements from real server and data center traces. Data centers are huge power consumers, both because of the energy required for computation and the cooling needed to keep servers below thermal redlining. The most common technique to minimize cooling costs is increasing the data room temperature. However, to avoid reliability issues and to enhance energy efficiency, there is a need to predict the temperature attained by servers under variable cooling setups. Due to the complex thermal dynamics of data rooms, accurate runtime data center temperature prediction has remained an important challenge. Using Grammatical Evolution techniques, this paper presents a methodology for the generation of temperature models for data centers and the runtime prediction of CPU and inlet temperature under variable cooling setups. As opposed to time-costly Computational Fluid Dynamics techniques, our models do not need specific knowledge about the problem, can be used in arbitrary data centers, can be re-trained if conditions change, and have negligible overhead during runtime prediction. Our models have been trained and tested using traces from real data center scenarios. Our results show that we can accurately predict the temperature of the servers in a data room, with prediction errors below 2 °C and 0.5 °C for CPU and server inlet temperature, respectively.

22 citations
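
A minimal illustration of the genotype-to-phenotype mapping at the heart of Grammatical Evolution: an integer genome selects productions from a small expression grammar, and the resulting expression is scored against temperature samples. The grammar, variables and synthetic data are placeholders, and a plain random search stands in for the evolutionary loop; none of this is taken from the paper.

```python
import random, math

# Minimal Grammatical Evolution sketch: map an integer genome to an
# arithmetic expression via a toy grammar, then score it against synthetic
# temperature samples. Grammar, variables and data are illustrative only.

GRAMMAR = {
    "<expr>":  [["<expr>", "<op>", "<expr>"], ["<var>"], ["<const>"]],
    "<op>":    [["+"], ["-"], ["*"]],
    "<var>":   [["t_inlet"], ["u_cpu"], ["f_cool"]],
    "<const>": [["0.5"], ["1.0"], ["2.0"], ["5.0"]],
}

def map_genome(genome, max_expansions=100):
    """Expand <expr>, using each codon (mod #choices) to pick a production."""
    symbols, out, i = ["<expr>"], [], 0
    while symbols and i < max_expansions:
        sym = symbols.pop(0)
        if sym in GRAMMAR:
            choices = GRAMMAR[sym]
            rule = choices[genome[i % len(genome)] % len(choices)]
            symbols = list(rule) + symbols
            i += 1
        else:
            out.append(sym)
    return " ".join(out) if not symbols else None   # None: mapping incomplete

def fitness(expr, samples):
    """RMSE of the candidate expression over (inputs, measured CPU temp)."""
    if expr is None:
        return float("inf")
    err = 0.0
    for vars_, t_meas in samples:
        t_pred = eval(expr, {"__builtins__": {}}, vars_)
        err += (t_pred - t_meas) ** 2
    return math.sqrt(err / len(samples))

if __name__ == "__main__":
    random.seed(1)

    # Synthetic trace: CPU temp roughly inlet + 20*u_cpu - 2*f_cool.
    def sample():
        v = {"t_inlet": 22 + 4 * random.random(),
             "u_cpu": random.random(),
             "f_cool": 3 * random.random()}
        return v, v["t_inlet"] + 20 * v["u_cpu"] - 2 * v["f_cool"]
    samples = [sample() for _ in range(50)]

    # Random search over genomes stands in for the evolutionary loop here.
    genomes = [[random.randrange(256) for _ in range(12)] for _ in range(500)]
    best = min(genomes, key=lambda g: fitness(map_genome(g), samples))
    print("best expression:", map_genome(best),
          "| RMSE:", round(fitness(map_genome(best), samples), 2))
```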

Journal ArticleDOI
TL;DR: This paper proposes an accurate power model for time-constrained servers in Cloud data centers that, in addition to the workload assigned to the processing element, incorporates static power consumption and, even more interestingly, its dependency on temperature.
Abstract: Managing energy efficiency under timing constraints is an interesting and major challenge. This paper proposes an accurate power model for time-constrained servers in Cloud data centers. This model, as opposed to previous approaches, does not only consider the workload assigned to the processing element, but also incorporates static power consumption and, even more interestingly, its dependency on temperature. The proposed model has been used in a multi-objective optimization environment in which dynamic voltage and frequency scaling and workload assignment have been efficiently optimized.

18 citations
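
The key point of the abstract above is that static (leakage) power is not constant but grows with temperature. A hedged sketch of such a model, using an exponential leakage term and illustrative coefficients; the functional forms and all numbers are assumptions for illustration, not the paper's fitted model.

```python
import math

# Illustrative server power model with a temperature-dependent static term:
#   P_total  = P_dyn + P_static(T)
#   P_dyn    ~ c_dyn * u * f^3            (switching power)
#   P_static ~ p0 * exp(k * (T - T0))     (leakage grows with temperature)
# Functional forms and coefficients are assumptions for illustration only.

def total_power(u, f_ghz, t_cpu_c, c_dyn=8.0, p0=45.0, k=0.02, t0=45.0):
    p_dyn = c_dyn * u * f_ghz ** 3
    p_static = p0 * math.exp(k * (t_cpu_c - t0))
    return p_dyn + p_static

if __name__ == "__main__":
    for t in (45, 60, 75):
        print(f"u=0.7, f=2.4 GHz, T={t} C -> "
              f"{total_power(0.7, 2.4, t):.1f} W (illustrative)")
```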


Cited by
01 Jan 2012
TL;DR: In this paper, the difference between the types of condition issues on objects that cause structural problems and those that are purely cosmetic is identified, which can help when determining whether or not an item requires conservation or when deciding simple factors like the best way to store or exhibit an item.
Abstract: As preservers of history and the objects that others leave behind, it is easy to get distracted by a desire for those objects to be “perfect.” We worry about every little tear, stain or blemish, and we want nothing more than to return objects to their "original state." However, so many times those rough spots are part of the item’s history. It is important to understand the difference between the types of condition issues on objects that cause structural issues versus those that are purely cosmetic. Knowing and identifying these differences can help when determining whether or not an item requires conservation or deciding simple factors like the best way to store or exhibit an item.

347 citations

Journal ArticleDOI
TL;DR: In this paper, a comprehensive survey of the emerging applications of federated learning in IoT networks is provided, which explores and analyzes the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing and IoT privacy and security.
Abstract: The Internet of Things (IoT) is penetrating many facets of our daily life with the proliferation of intelligent services and applications empowered by artificial intelligence (AI). Traditionally, AI techniques require centralized data collection and processing that may not be feasible in realistic application scenarios due to the high scalability of modern IoT networks and growing data privacy concerns. Federated Learning (FL) has emerged as a distributed collaborative AI approach that can enable many intelligent IoT applications, by allowing for AI training at distributed IoT devices without the need for data sharing. In this article, we provide a comprehensive survey of the emerging applications of FL in IoT networks, beginning from an introduction to the recent advances in FL and IoT to a discussion of their integration. Particularly, we explore and analyze the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security. We then provide an extensive survey of the use of FL in various key IoT applications such as smart healthcare, smart transportation, Unmanned Aerial Vehicles (UAVs), smart cities, and smart industry. The important lessons learned from this review of the FL-IoT services and applications are also highlighted. We complete this survey by highlighting the current challenges and possible directions for future research in this booming area.

319 citations
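
The survey's core premise is that models can be trained across distributed IoT devices without centralizing raw data; the canonical mechanism is federated averaging, where each device trains locally and only model parameters are aggregated. A minimal sketch of that aggregation step follows; the linear model, the synthetic per-device data and the plain gradient-descent local update are assumptions for illustration, not anything specified by the survey.

```python
import numpy as np

# Minimal federated-averaging sketch: each device fits a linear model on its
# local data, and the server averages parameters weighted by sample count.
# Model, data and the local update rule are illustrative assumptions.

rng = np.random.default_rng(42)
TRUE_W = np.array([2.0, -1.0, 0.5])

def local_data(n):
    x = rng.normal(size=(n, 3))
    y = x @ TRUE_W + rng.normal(0, 0.1, n)
    return x, y

def local_update(w, x, y, lr=0.05, epochs=5):
    """A few epochs of full-batch gradient descent on the local data."""
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(devices, rounds=20):
    w_global = np.zeros(3)
    for _ in range(rounds):
        updates, sizes = [], []
        for x, y in devices:                  # raw data never leaves the device
            updates.append(local_update(w_global.copy(), x, y))
            sizes.append(len(y))
        w_global = np.average(updates, axis=0, weights=np.array(sizes, float))
    return w_global

if __name__ == "__main__":
    devices = [local_data(n) for n in (30, 80, 50)]   # uneven local datasets
    print("federated estimate:", np.round(fed_avg(devices), 3))
```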

Journal ArticleDOI
TL;DR: This book by Nino Boccara presents a compilation of model systems commonly termed as 'complex' and starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics.
Abstract: This book by Nino Boccara presents a compilation of model systems commonly termed as 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary, but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students and also serves as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on the dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself, but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view, and one would have expected more about the famous Domany-Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade, and even experts can learn a lot from it. Hopefully there will be an update after, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success!

268 citations

Journal ArticleDOI
TL;DR: This article comprehensively and comparatively studies existing energy efficiency techniques in cloud computing and provides the taxonomies for the classification and evaluation of the existing studies.
Abstract: The increase in energy consumption is one of the most critical problems worldwide. The growth and development of complex data-intensive applications have prompted the creation of huge data centers that have heightened the energy demand. In this article, the need for energy efficiency is emphasized by discussing the dual role of cloud computing as a major contributor to increasing energy consumption and as a method to reduce energy wastage. This article comprehensively and comparatively studies existing energy efficiency techniques in cloud computing and provides taxonomies for the classification and evaluation of the existing studies. The article concludes with a summary providing valuable suggestions for future enhancements.

172 citations