Journal ArticleDOI

iFogSim: A toolkit for modeling and simulation of resource management techniques in the Internet of Things, Edge and Fog computing environments

TL;DR: In this paper, the authors propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques on latency, network congestion, energy consumption, and cost.
Abstract: The Internet of Things (IoT) aims to bring every object (e.g., smart cameras, wearables, environmental sensors, home appliances, and vehicles) online, hence generating massive volumes of data that can overwhelm storage systems and data analytics applications. Cloud computing offers services at the infrastructure level that can scale to IoT storage and processing requirements. However, applications such as health monitoring and emergency response require low latency, and the delay caused by transferring data to the cloud and back can seriously impact their performance. To overcome this limitation, the Fog computing paradigm has been proposed, in which cloud services are extended to the edge of the network to decrease latency and network congestion. To realize the full potential of the Fog and IoT paradigms for real-time analytics, several challenges need to be addressed. The first and most critical is designing resource management techniques that determine which modules of analytics applications are pushed to which edge devices so as to minimize latency and maximize throughput. To this end, we need an evaluation platform that enables the performance of resource management policies on an IoT or Fog computing infrastructure to be quantified in a repeatable manner. In this paper, we propose a simulator, called iFogSim, to model IoT and Fog environments and measure the impact of resource management techniques on latency, network congestion, energy consumption, and cost. We describe two case studies to demonstrate modeling of an IoT environment and comparison of resource management policies. Moreover, the scalability of the simulation toolkit, in terms of RAM consumption and execution time, is verified under different circumstances.
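
To make the placement problem described above concrete, here is a minimal, self-contained sketch of a greedy "edgeward" placement heuristic of the kind iFogSim is meant to evaluate: latency-critical modules are placed on the nearest fog node with spare capacity, and everything else falls back to the cloud. The Node and Module records and the place() method are hypothetical stand-ins invented for illustration, not iFogSim's actual CloudSim-based API (whose classes, such as FogDevice, differ in detail).

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical types for illustration; iFogSim's real, CloudSim-based classes differ.
public class EdgewardPlacementSketch {

    record Node(String name, double mips, double upLatencyMs, boolean isCloud) {}
    record Module(String name, double mipsDemand, boolean latencyCritical) {}

    // Greedy edgeward heuristic: place latency-critical modules on the closest node
    // (lowest uplink latency) that still has spare capacity; everything else goes to the cloud.
    static String place(Module m, List<Node> nodes, java.util.Map<String, Double> usedMips) {
        List<Node> candidates = new ArrayList<>(nodes);
        candidates.sort(Comparator.comparingDouble(Node::upLatencyMs)); // nearest first
        for (Node n : candidates) {
            double used = usedMips.getOrDefault(n.name(), 0.0);
            boolean fits = used + m.mipsDemand() <= n.mips();
            if (fits && (m.latencyCritical() || n.isCloud())) {
                usedMips.merge(n.name(), m.mipsDemand(), Double::sum);
                return n.name();
            }
        }
        return "unplaced";
    }

    public static void main(String[] args) {
        List<Node> topology = List.of(
                new Node("smart-camera", 500, 1, false),
                new Node("fog-gateway", 3000, 5, false),
                new Node("cloud-dc", 100000, 100, true));
        List<Module> app = List.of(
                new Module("motion_detector", 400, true),
                new Module("object_tracker", 1500, true),
                new Module("user_interface", 200, false));
        var used = new java.util.HashMap<String, Double>();
        for (Module m : app)
            System.out.println(m.name() + " -> " + place(m, topology, used));
    }
}
```
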
Citations
Journal ArticleDOI
TL;DR: Fog computing is designed to overcome limitations in traditional systems, the cloud, and even edge computing to handle the growing amount of data that is generated by the Internet of Things.
Abstract: The Internet of Things (IoT) could enable innovations that enhance the quality of life, but it generates unprecedented amounts of data that are difficult for traditional systems, the cloud, and even edge computing to handle. Fog computing is designed to overcome these limitations.

873 citations

Journal ArticleDOI
TL;DR: This paper provides a tutorial on fog computing and its related computing paradigms, including their similarities and differences, and provides a taxonomy of research topics in fog computing.

783 citations

Book ChapterDOI
TL;DR: This chapter analyses the challenges of Fog computing acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and reviews the current developments in this field.
Abstract: In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named "Fog computing" has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on the observations, we propose future directions for research.

669 citations

Journal ArticleDOI
TL;DR: This work reviews the recent status of methodologies and techniques related to the construction of digital twins mostly from a modeling perspective to provide a detailed coverage of the current challenges and enabling technologies along with recommendations and reflections for various stakeholders.
Abstract: Digital twin can be defined as a virtual representation of a physical asset enabled through data and simulators for real-time prediction, optimization, monitoring, controlling, and improved decision making. Recent advances in computational pipelines, multiphysics solvers, artificial intelligence, big data cybernetics, data processing and management tools bring the promise of digital twins and their impact on society closer to reality. Digital twinning is now an important and emerging trend in many applications. Also referred to as a computational megamodel, device shadow, mirrored system, avatar or a synchronized virtual prototype, there can be no doubt that a digital twin plays a transformative role not only in how we design and operate cyber-physical intelligent systems, but also in how we advance the modularity of multi-disciplinary systems to tackle fundamental barriers not addressed by the current, evolutionary modeling practices. In this work, we review the recent status of methodologies and techniques related to the construction of digital twins mostly from a modeling perspective. Our aim is to provide a detailed coverage of the current challenges and enabling technologies along with recommendations and reflections for various stakeholders.

660 citations

Book ChapterDOI
01 Jan 2018
TL;DR: This chapter comprehensively analyses the challenges of Fog acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and presents a taxonomy of Fog computing according to the identified challenges and its key features.
Abstract: In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named “Fog computing” has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on the observations, we propose future directions for research.

501 citations

References
Journal ArticleDOI
TL;DR: MCEP significantly reduces latency, network utilization, and processing overhead by providing on-demand and opportunistic adaptation algorithms to dynamically assign event streams and computing resources to operators of the MCEP system.
Abstract: With the proliferation of mobile devices and sensors, complex event processing (CEP) is becoming increasingly important to scalably detect situations in real time. Current CEP systems are not capable of dealing efficiently with highly dynamic mobile consumers whose interests change with their location. We introduce the distributed mobile CEP (MCEP) system which automatically adapts the processing of events according to a consumer's location. MCEP significantly reduces latency, network utilization, and processing overhead by providing on-demand and opportunistic adaptation algorithms to dynamically assign event streams and computing resources to operators of the MCEP system.
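
As a rough illustration of the kind of location-driven adaptation described above, the sketch below reassigns a mobile consumer's event stream to a nearer broker only when the improvement exceeds a hysteresis margin, which limits migration churn. The Broker and Consumer types, the distance metric, and the margin rule are assumptions made for illustration; the actual MCEP on-demand and opportunistic adaptation algorithms are considerably more involved.

```java
import java.util.Comparator;
import java.util.List;

// Conceptual sketch only; not the MCEP system's actual adaptation logic.
public class LocationAwareAssignment {

    record Broker(String id, double x, double y) {}
    record Consumer(String id, double x, double y) {}

    static double dist(double x1, double y1, double x2, double y2) {
        return Math.hypot(x1 - x2, y1 - y2);
    }

    // Re-evaluate the operator host whenever the consumer reports a new position:
    // keep the current broker unless another one is closer by a hysteresis margin.
    static Broker chooseBroker(Consumer c, Broker current, List<Broker> brokers, double margin) {
        Broker nearest = brokers.stream()
                .min(Comparator.comparingDouble(b -> dist(b.x(), b.y(), c.x(), c.y())))
                .orElse(current);
        double dCurrent = dist(current.x(), current.y(), c.x(), c.y());
        double dNearest = dist(nearest.x(), nearest.y(), c.x(), c.y());
        return (dCurrent - dNearest > margin) ? nearest : current;
    }

    public static void main(String[] args) {
        List<Broker> brokers = List.of(new Broker("b1", 0, 0), new Broker("b2", 10, 0));
        Broker current = brokers.get(0);
        Consumer moving = new Consumer("car-42", 7, 0); // consumer has moved toward b2
        System.out.println("assigned to " + chooseBroker(moving, current, brokers, 2.0).id());
    }
}
```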

54 citations

Journal ArticleDOI
TL;DR: This is the first work to reproduce the Google cloud environment with a real experimental system setting and a real-world, large-scale production trace; experiments show that the simulation system can effectively reproduce real checkpointing/restart events based on the Google trace by leveraging the Berkeley Lab Checkpoint/Restart tool.
Abstract: In 2011, Google released a 1-month production trace with hundreds of thousands of jobs running across over 12,000 heterogeneous hosts. In order to perform in-depth research based on the trace, it is necessary to construct a close-to-practice simulation system. In this paper, we devise a distributed cloud simulator (toolkit) based on virtual machines, with three important features. (1) The dynamically changing resource amounts, such as CPU rate and memory size, consumed by the reproduced jobs can be emulated as closely as possible to the real values in the trace. (2) Various types of events (e.g., kill/evict events) can be emulated precisely based on the trace. (3) Our simulation toolkit is able to emulate more complex and useful cases beyond the original trace to adapt to various research demands. We evaluate the system on a real cluster environment with 16×8=128 cores and 112 virtual machines constructed by the XEN hypervisor. To the best of our knowledge, this is the first work to reproduce the Google cloud environment with a real experimental system setting and a real-world, large-scale production trace. Experiments show that our simulation system can effectively reproduce the real checkpointing/restart events based on the Google trace by leveraging the Berkeley Lab Checkpoint/Restart tool. It can simultaneously process up to 1,200 emulated Google jobs over the 112 virtual machines. Such a simulation toolkit has been released as GNU GPL v3 software for free download, and it has been successfully applied to fundamental research on the optimization of checkpoint intervals for Google tasks.
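
The sketch below illustrates the general idea of trace-driven replay in a heavily simplified form: events are ordered by timestamp and released at scaled wall-clock times. The Event record and the tiny hand-made trace are assumptions for illustration only; the actual toolkit parses the published Google trace, runs jobs on XEN virtual machines, and integrates the Berkeley Lab Checkpoint/Restart tool, none of which is shown here.

```java
import java.util.List;
import java.util.PriorityQueue;

// Illustrative only; the toolkit described above replays the real Google trace.
public class TraceReplaySketch {

    enum Kind { SUBMIT, EVICT, FINISH }
    record Event(long timestampMs, String jobId, Kind kind, double cpuShare, double memGb) {}

    public static void main(String[] args) throws InterruptedException {
        // A tiny hand-made trace; a real run would parse the published trace files.
        List<Event> trace = List.of(
                new Event(0, "job-1", Kind.SUBMIT, 0.5, 1.0),
                new Event(200, "job-2", Kind.SUBMIT, 0.3, 2.0),
                new Event(500, "job-1", Kind.EVICT, 0, 0),
                new Event(900, "job-2", Kind.FINISH, 0, 0));

        // Replay events in timestamp order, scaled down so the demo finishes quickly.
        PriorityQueue<Event> queue = new PriorityQueue<>(
                (a, b) -> Long.compare(a.timestampMs(), b.timestampMs()));
        queue.addAll(trace);

        long start = System.currentTimeMillis();
        double scale = 0.1; // replay at 10x speed
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            long due = start + (long) (e.timestampMs() * scale);
            long wait = due - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait);
            System.out.printf("t=%dms %s %s cpu=%.1f mem=%.1fGB%n",
                    e.timestampMs(), e.kind(), e.jobId(), e.cpuShare(), e.memGb());
        }
    }
}
```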

38 citations

Proceedings ArticleDOI
27 Jun 2015
TL;DR: This paper proposes a service-oriented, workflow-based mobile Cloud middleware framework for balancing task allocation between the mobile terminal and a utility Cloud service; the proposed cost-performance index scheme assists workflow configuration decision-making based on fuzzy-set and weight-of-context schemes.
Abstract: In the near future, the Industrial Internet of Things can provide various useful spatial information in urban areas. Based on real-time resource discovery and data retrieval technologies, mobile devices can continuously interact with surrounding things and provide real-time content mashup services to their users. One challenge in such a scenario lies in resource management: continuous resource discovery and content mashup processes can be resource intensive for common handheld mobile devices. In order to reduce resource usage, certain tasks of a mobile application can be offloaded to utility Cloud services. However, the task offloading process needs to be context-aware; in certain cases, performing tasks on the mobile device is more cost-efficient. This paper proposes a service-oriented, workflow-based mobile Cloud middleware framework for balancing task allocation between the mobile terminal and utility Cloud services. The proposed cost-performance index scheme assists workflow configuration decision-making based on fuzzy-set and weight-of-context schemes. The prototype has been implemented on real mobile devices, and the evaluation has shown that the workflow system can automatically configure the task allocation based on resource availability.
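
As a simplified illustration of such a context-aware offloading decision, the sketch below computes a weighted offloading score from normalised context readings and compares it with a threshold. The context keys, weights, and threshold are invented for illustration; the paper's cost-performance index is built with fuzzy sets rather than the plain weighted sum used here.

```java
import java.util.Map;

// Illustrative weighted-context score only, not the paper's fuzzy-set index.
public class OffloadDecisionSketch {

    // Normalised context readings in [0,1]: higher means more favourable to offloading
    // (e.g. good bandwidth, low battery, heavy task).
    static double offloadScore(Map<String, Double> context, Map<String, Double> weights) {
        double score = 0, total = 0;
        for (var w : weights.entrySet()) {
            score += w.getValue() * context.getOrDefault(w.getKey(), 0.0);
            total += w.getValue();
        }
        return total == 0 ? 0 : score / total;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = Map.of("bandwidth", 0.4, "lowBattery", 0.3, "taskSize", 0.3);
        Map<String, Double> context = Map.of("bandwidth", 0.9, "lowBattery", 0.2, "taskSize", 0.7);
        double score = offloadScore(context, weights);
        // Offload the workflow task to the cloud only if the index clears a threshold.
        System.out.println(score >= 0.6 ? "offload to cloud" : "run on device (score " + score + ")");
    }
}
```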

28 citations

Journal ArticleDOI
TL;DR: Three high-quality papers are chosen for this special issue, covering different aspects of resource management, a machine-learning-based framework, and a Blockchain-based mechanism that enables intelligent cloud computing.
Abstract: This special issue is intended to introduce the state of the art, open research challenges, new solutions, and applications for intelligence in cloud computing. Specifically, we choose three high-quality papers for this special issue, covering different aspects of resource management, a machine-learning-based framework, and a Blockchain-based mechanism that enables intelligent cloud computing.

3 citations