Showing papers on "Edge computing published in 2017"


Journal ArticleDOI
TL;DR: The relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world, is explored, and existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance understanding of the state of the art in IoT development.
Abstract: Fog/edge computing has been proposed for integration with the Internet of Things (IoT) to enable computing services on devices deployed at the network edge, aiming to improve the user's experience and the resilience of services in case of failures. With the advantages of a distributed architecture and proximity to end users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT is becoming the future infrastructure for IoT development. To develop this infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, and security and privacy issues, and presents the integration of fog/edge computing and IoT, along with applications. In particular, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance understanding of the state of the art in IoT development. To investigate fog/edge computing-based IoT, this paper also examines the relationship between IoT and fog/edge computing and discusses issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT can be implemented in real-world applications.

2,057 citations


Journal ArticleDOI
TL;DR: This paper describes major use cases and reference scenarios where mobile edge computing (MEC) is applicable, surveys existing concepts integrating MEC functionalities into mobile networks, and discusses current advancements in the standardization of MEC.
Abstract: Technological evolution of mobile user equipment (UEs), such as smartphones or laptops, goes hand-in-hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by the limited battery capacity and energy consumption of the UEs. A suitable solution for extending the battery lifetime of the UEs is to offload applications demanding heavy processing to a conventional centralized cloud. Nevertheless, this option introduces significant execution delay, consisting of the time to deliver the offloaded applications to the cloud and back, plus the computation time at the cloud. Such a delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into mobile networks and discuss current advancements in the standardization of MEC. The core of this survey then focuses on the user-oriented use case in MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: 1) the decision on computation offloading; 2) the allocation of computing resources within the MEC; and 3) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges that remain to be addressed in order to fully exploit the potential offered by MEC.

1,829 citations
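The offloading decision that this survey places at the core of MEC research usually boils down to comparing local execution against offloading under a delay constraint and an energy objective. The Python sketch below illustrates that comparison; the parameter names and the k·f^2 energy model are common textbook assumptions, not the survey's own formulation.

def should_offload(cycles, input_bits, f_local_hz, f_mec_hz, uplink_bps,
                   p_tx_w, k_local=1e-27, delay_budget_s=0.5):
    """Return True if offloading the task to the MEC server is preferable."""
    # Local execution: time and energy under the usual k * f^2 per-cycle model.
    t_local = cycles / f_local_hz
    e_local = k_local * (f_local_hz ** 2) * cycles

    # Offloading: uplink transfer plus remote execution; the device spends
    # energy only while transmitting (result download ignored for brevity).
    t_tx = input_bits / uplink_bps
    t_remote = t_tx + cycles / f_mec_hz
    e_remote = p_tx_w * t_tx

    local_ok = t_local <= delay_budget_s
    remote_ok = t_remote <= delay_budget_s
    if local_ok and remote_ok:
        return e_remote < e_local   # both feasible: pick the cheaper option
    return remote_ok                # otherwise feasibility decides

# Example: 1 Gcycle task, 1 Mbit input, 1 GHz device CPU, 10 GHz edge server,
# 20 Mb/s uplink, 0.5 W transmit power -> offloading meets the deadline, local does not.
print(should_offload(1e9, 1e6, 1e9, 10e9, 20e6, 0.5))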


Journal ArticleDOI
TL;DR: A five-video playlist demonstrates proof-of-concept implementations for three tasks: assembling 2D Lego models, freehand sketching, and playing Ping-Pong.
Abstract: Industry investment and research interest in edge computing, in which computing and storage nodes are placed at the Internet's edge in close proximity to mobile devices or sensors, have grown dramatically in recent years. This emerging technology promises to deliver highly responsive cloud services for mobile computing, scalability and privacy-policy enforcement for the Internet of Things, and the ability to mask transient cloud outages. The web extra at www.youtube.com/playlist?list=PLmrZVvFtthdP3fwHPy_4d61oDvQY_RBgS includes a five-video playlist demonstrating proof-of-concept implementations for three tasks: assembling 2D Lego models, freehand sketching, and playing Ping-Pong.

1,690 citations


Journal ArticleDOI
TL;DR: This paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties, and elaborates further on open research challenges.
Abstract: Multi-access edge computing (MEC) is an emerging ecosystem which aims at converging telecommunication and IT services, providing a cloud computing platform at the edge of the radio access network. MEC offers storage and computational resources at the edge, reducing latency for mobile end users and utilizing the mobile backhaul and core networks more efficiently. This paper introduces a survey on MEC and focuses on the fundamental key enabling technologies. It elaborates on MEC orchestration, considering both individual services and a network of MEC platforms supporting mobility, and sheds light on the different orchestration deployment options. In addition, this paper analyzes the MEC reference architecture and main deployment scenarios, which offer multi-tenancy support for application developers, content providers, and third parties. Finally, this paper overviews the current standardization activities and elaborates further on open research challenges.

1,351 citations


Journal ArticleDOI
TL;DR: This survey makes an exhaustive review of state-of-the-art research efforts on mobile edge networks, including their definition, architecture, and advantages, and presents a comprehensive survey of issues on computing, caching, and communication techniques at the network edge.
Abstract: With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, have been proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review of state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are then discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices, are discussed. Finally, open research challenges and future directions are presented as well.

782 citations


Journal ArticleDOI
TL;DR: This article envisions a real-time, context-aware collaboration framework that lies at the edge of the RAN, comprises MEC servers and mobile devices, and amalgamates the heterogeneous resources at the edge.
Abstract: MEC is an emerging paradigm that provides computing, storage, and networking resources within the edge of the mobile RAN. MEC servers are deployed on a generic computing platform within the RAN, and allow for delay-sensitive and context-aware applications to be executed in close proximity to end users. This paradigm alleviates the backhaul and core network and is crucial for enabling low-latency, high-bandwidth, and agile mobile services. This article envisions a real-time, context-aware collaboration framework that lies at the edge of the RAN, comprising MEC servers and mobile devices, and amalgamates the heterogeneous resources at the edge. Specifically, we introduce and study three representative use cases: mobile edge orchestration, collaborative caching and processing, and multi-layer interference cancellation. We demonstrate the promising benefits of the proposed approaches in facilitating the evolution to 5G networks. Finally, we discuss the key technical challenges and open research issues that need to be addressed in order to efficiently integrate MEC into the 5G ecosystem.

700 citations


Journal ArticleDOI
TL;DR: Fog computing extends cloud services to the edge of the network, bringing computation, communication, and storage closer to edge devices and end users, with the aim of supporting low latency, mobility, network bandwidth savings, security, and privacy.

645 citations


Journal ArticleDOI
TL;DR: This work formulates the computation offloading decision, resource allocation, and content caching strategy as an optimization problem, considering the total revenue of the network, and develops an alternating direction method of multipliers-based algorithm to solve the optimization problem.
Abstract: Mobile edge computing has risen as a promising technology for augmenting the computational capabilities of mobile devices. Meanwhile, in-network caching has become a natural trend for handling exponentially increasing Internet traffic. The important issues in these two networking paradigms are computation offloading and content caching strategies, respectively. In order to jointly tackle these issues in wireless cellular networks with mobile edge computing, we formulate the computation offloading decision, resource allocation, and content caching strategy as an optimization problem, considering the total revenue of the network. Furthermore, we transform the original problem into a convex problem and then decompose it in order to solve it in a distributed and efficient way. Finally, drawing on recent advances in distributed convex optimization, we develop an alternating direction method of multipliers (ADMM)-based algorithm to solve the optimization problem. The effectiveness of the proposed scheme is demonstrated by simulation results with different system parameters.

611 citations
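The distributed solution described above rests on the standard ADMM update pattern: a primal minimization, a proximal step, and a dual update on the consensus constraint. The sketch below shows that pattern on a toy lasso problem in NumPy; it illustrates the method the paper builds on, not the paper's actual network-revenue objective or decomposition.

import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Solve min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z, via ADMM."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse each iteration
    for _ in range(iters):
        # x-update: quadratic subproblem (AtA + rho*I) x = Atb + rho*(z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step on the l1 term
        z = soft_threshold(x + u, lam / rho)
        # dual update on the consensus constraint x = z
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b), 2))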


Journal ArticleDOI
TL;DR: This survey attempts to provide a comprehensive list of vulnerabilities and countermeasures against them on the edge-side layer of IoT, which consists of three levels: (i) edge nodes, (ii) communication, and (iii) edge computing.
Abstract: Internet of Things (IoT), also referred to as the Internet of Objects, is envisioned as a transformative approach for providing numerous services. Compact smart devices constitute an essential part of IoT. They range widely in use, size, energy capacity, and computation power. However, the integration of these smart things into the standard Internet introduces several security challenges because the majority of Internet technologies and communication protocols were not designed to support IoT. Moreover, commercialization of IoT has led to public security concerns, including personal privacy issues, threat of cyber attacks, and organized crime. In order to provide a guideline for those who want to investigate IoT security and contribute to its improvement, this survey attempts to provide a comprehensive list of vulnerabilities and countermeasures against them on the edge-side layer of IoT, which consists of three levels: (i) edge nodes, (ii) communication, and (iii) edge computing. To achieve this goal, we first briefly describe three widely-known IoT reference models and define security in the context of IoT. Second, we discuss the possible applications of IoT and potential motivations of the attackers who target this new paradigm. Third, we discuss different attacks and threats. Fourth, we describe possible countermeasures against these attacks. Finally, we introduce two emerging security challenges not yet explained in detail in previous literature.

547 citations


Journal ArticleDOI
TL;DR: The authors propose a mechanism that employs fog to improve the distribution of certificate revocation information among IoT devices for security enhancement and present potential research directions aimed at using fog computing to enhance security and privacy in IoT environments.
Abstract: The inherent characteristics of Internet of Things (IoT) devices, such as limited storage and computational power, require a new platform to efficiently process data. The concept of fog computing has been introduced as a technology to bridge the gap between remote data centers and IoT devices. Fog computing enables a wide range of benefits, including enhanced security, decreased bandwidth, and reduced latency. These benefits make the fog an appropriate paradigm for many IoT services in various applications such as connected vehicles and smart grids. Nevertheless, fog devices (located at the edge of the Internet) obviously face many security and privacy threats, much the same as those faced by traditional data centers. In this article, the authors discuss the security and privacy issues in IoT environments and propose a mechanism that employs fog to improve the distribution of certificate revocation information among IoT devices for security enhancement. They also present potential research directions aimed at using fog computing to address the security and privacy issues in IoT environments.

520 citations
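The fog-assisted mechanism described above amounts to keeping certificate revocation information cached near the devices so that a status check is a short-range query rather than a round trip to a remote CA. The class below is a deliberately simplified, hypothetical sketch of that idea; the names and interfaces are invented, and the paper's actual distribution protocol is richer.

class FogRevocationCache:
    """A fog node's local view of revoked certificate serial numbers."""

    def __init__(self):
        self.revoked = set()
        self.version = 0

    def sync_from_ca(self, revoked_serials, version):
        # Periodic pull from the CA (or upstream cloud) keeps the cache fresh.
        self.revoked = set(revoked_serials)
        self.version = version

    def is_revoked(self, serial):
        # Constrained IoT devices ask the nearby fog node instead of the CA.
        return serial in self.revoked

fog = FogRevocationCache()
fog.sync_from_ca({"9F3A", "11C2"}, version=42)
print(fog.is_revoked("9F3A"), fog.is_revoked("77B0"))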


Proceedings ArticleDOI
05 Jun 2017
TL;DR: In this paper, the authors proposed distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices.
Abstract: We propose distributed deep neural networks (DDNNs) over distributed computing hierarchies, consisting of the cloud, the edge (fog) and end devices. While being able to accommodate inference of a deep neural network (DNN) in the cloud, a DDNN also allows fast and localized inference using shallow portions of the neural network at the edge and end devices. When supported by a scalable distributed computing hierarchy, a DDNN can scale up in neural network size and scale out in geographical span. Due to its distributed nature, DDNNs enhance sensor fusion, system fault tolerance and data privacy for DNN applications. In implementing a DDNN, we map sections of a DNN onto a distributed computing hierarchy. By jointly training these sections, we minimize communication and resource usage for devices and maximize usefulness of extracted features which are utilized in the cloud. The resulting system has built-in support for automatic sensor fusion and fault tolerance. As a proof of concept, we show a DDNN can exploit geographical diversity of sensors to improve object recognition accuracy and reduce communication cost. In our experiment, compared with the traditional method of offloading raw sensor data to be processed in the cloud, DDNN locally processes most sensor data on end devices while achieving high accuracy and is able to reduce the communication cost by a factor of over 20x.
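The mechanism that makes a DDNN "fast and localized" is the early exit: the device runs only the shallow, jointly trained portion of the network and forwards its compact features upward when the local prediction is not confident enough (an entropy-style confidence measure governs the exit in the paper). The NumPy sketch below illustrates that control flow with random toy weights; it is not the paper's trained model or exact exit rule.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def normalized_entropy(p):
    return -(p * np.log(p + 1e-12)).sum() / np.log(len(p))

def ddnn_infer(x, W_device, W_exit, W_cloud, threshold=0.3):
    # Shallow, device-side portion of the jointly trained DNN.
    features = np.tanh(W_device @ x)
    p_local = softmax(W_exit @ features)
    if normalized_entropy(p_local) < threshold:
        return p_local.argmax(), "exited at the edge"   # confident: stop locally
    # Otherwise forward the compact features (not raw data) to the cloud portion.
    p_cloud = softmax(W_cloud @ features)
    return p_cloud.argmax(), "classified in the cloud"

# Toy dimensions and random weights, only to exercise the control flow.
rng = np.random.default_rng(0)
x = rng.normal(size=32)
W_device = rng.normal(size=(16, 32))
W_exit = rng.normal(size=(10, 16))
W_cloud = rng.normal(size=(10, 16))
print(ddnn_infer(x, W_device, W_exit, W_cloud))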

Journal ArticleDOI
TL;DR: The main requirements of wireless interconnected VR are described followed by a selection of key enablers; then research avenues and their underlying grand challenges are presented.
Abstract: Just recently, the concept of augmented and virtual reality (AR/VR) over wireless has taken the entire 5G ecosystem by storm, spurring an unprecedented interest from academia, industry, and others. However, the success of an immersive VR experience hinges on solving a plethora of grand challenges cutting across multiple disciplines. This article underscores the importance of VR technology as a disruptive use case of 5G (and beyond) harnessing the latest developments in storage/memory, fog/edge computing, computer vision, artificial intelligence, and others. In particular, the main requirements of wireless interconnected VR are described, followed by a selection of key enablers; then research avenues and their underlying grand challenges are presented. Furthermore, we examine three VR case studies and provide numerical results under various storage, computing, and network configurations. Finally, this article exposes the limitations of current networks and makes the case for more theory and innovations to spearhead VR for the masses.

Journal ArticleDOI
TL;DR: This paper comprehensively presents a tutorial on three typical edge computing technologies, namely mobile edge computing, cloudlets, and fog computing, and the standardization efforts, principles, architectures, and applications of these three technologies are summarized and compared.

Journal ArticleDOI
TL;DR: A geographically distributed architecture of public clouds and edges that extend down to the cameras is the only feasible approach to meeting the strict real-time requirements of large-scale live video analytics.
Abstract: Video analytics will drive a wide range of applications with great potential to impact society. A geographically distributed architecture of public clouds and edges that extend down to the cameras is the only feasible approach to meeting the strict real-time requirements of large-scale live video analytics.

Journal ArticleDOI
TL;DR: This paper provides an overview of existing security and privacy concerns, particularly for fog computing, and highlights ongoing research efforts, open challenges, and research trends in privacy and security issues for fog computing.
Abstract: The fog computing paradigm extends the storage, networking, and computing facilities of cloud computing toward the edge of the network, offloading cloud data centers and reducing service latency for end users. However, the characteristics of fog computing give rise to new security and privacy challenges. The existing security and privacy measures for cloud computing cannot be directly applied to fog computing due to its features, such as mobility, heterogeneity, and large-scale geo-distribution. This paper provides an overview of existing security and privacy concerns, particularly for fog computing. Afterward, this survey highlights ongoing research efforts, open challenges, and research trends in privacy and security issues for fog computing.

Proceedings ArticleDOI
06 Jun 2017
TL;DR: A set of parameters is defined based on which one of these implementations can be chosen optimally for a particular use case or application, and a decision tree for the selection of the optimal implementation is presented.
Abstract: When it comes to storage and computation of large scales of data, Cloud Computing has acted as the de-facto solution over the past decade. However, with the massive growth in intelligent and mobile devices coupled with technologies like Internet of Things (IoT), V2X Communications, and Augmented Reality (AR), the focus has shifted towards gaining real-time responses along with support for context-awareness and mobility. Due to the delays induced on the Wide Area Network (WAN) and the location-agnostic provisioning of resources on the cloud, there is a need to bring the features of the cloud closer to the consumer devices. This led to the birth of the Edge Computing paradigm, which aims to provide context-aware storage and distributed computing at the edge of the networks. In this paper, we discuss the three different implementations of Edge Computing, namely Fog Computing, Cloudlets, and Mobile Edge Computing, in detail and compare their features. We define a set of parameters based on which one of these implementations can be chosen optimally for a particular use case or application, and present a decision tree for the selection of the optimal implementation.
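The paper's contribution is a concrete parameter set and decision tree for choosing among fog computing, cloudlets, and MEC. Purely to illustrate how such a selector can be encoded, the sketch below uses hypothetical criteria (RAN integration, single-hop WLAN proximity, wide-area hierarchy); it should not be read as the paper's actual decision tree.

def choose_edge_implementation(needs_ran_integration,
                               single_hop_wlan_available,
                               needs_wide_area_hierarchy):
    """Hypothetical selector; the criteria are illustrative, not the paper's."""
    if needs_ran_integration:
        return "Mobile Edge Computing (servers co-located with the mobile RAN)"
    if single_hop_wlan_available:
        return "Cloudlet (nearby micro data center, one WLAN hop away)"
    if needs_wide_area_hierarchy:
        return "Fog Computing (hierarchy of nodes between devices and cloud)"
    return "Centralized cloud"

print(choose_edge_implementation(False, True, False))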

Journal ArticleDOI
TL;DR: The proposed scheme enforces an autonomic creation of MEC services to allow anywhere, anytime data access with optimal QoE and reduced latency, and ensures ultra-short latency through a smart MEC architecture capable of achieving the 1 ms latency dream for the upcoming 5G mobile systems.
Abstract: This article proposes an approach to enhance users' experience of video streaming in the context of smart cities. The proposed approach relies on the concept of MEC as a key factor in enhancing QoS. It sustains QoS by ensuring that applications/services follow the mobility of users, realizing the "Follow Me Edge" concept. The proposed scheme enforces an autonomic creation of MEC services to allow anywhere anytime data access with optimum QoE and reduced latency. Considering its application in smart city scenarios, the proposed scheme represents an important solution for reducing core network traffic and ensuring ultra-short latency through a smart MEC architecture capable of achieving the 1 ms latency dream for the upcoming 5G mobile systems.
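The "Follow Me Edge" concept boils down to re-anchoring a user's service instance on the MEC host serving the new base station after a handover, provided the latency gain justifies the migration. The snippet below is a hypothetical sketch of that trigger logic; the topology map, RTT figures, and gain threshold are all invented for illustration.

# Hypothetical mapping from base stations to their serving MEC hosts.
MEC_HOST_FOR_BS = {"bs1": "mec-a", "bs2": "mec-a", "bs3": "mec-b"}

def on_handover(service, new_bs, rtt_ms, gain_ms=5.0):
    """service: {'host': ...}; rtt_ms: measured RTT from the user's new location
    to each MEC host. Migrate only if the improvement exceeds gain_ms."""
    target = MEC_HOST_FOR_BS[new_bs]
    if target != service["host"] and rtt_ms[service["host"]] - rtt_ms[target] > gain_ms:
        service["host"] = target   # here a real system would migrate state/containers
        return "service migrated to " + target
    return "service stays on " + service["host"]

service = {"host": "mec-a"}
print(on_handover(service, "bs3", {"mec-a": 18.0, "mec-b": 2.0}))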

Journal ArticleDOI
TL;DR: The scheduling problem in Fog computing is analyzed, focusing on how user mobility can influence application performance and how three different scheduling policies, namely concurrent, FCFS, and delay-priority, can be used to improve execution based on application characteristics.
Abstract: Fog computing provides a distributed infrastructure at the edges of the network, resulting in low-latency access and faster response to application requests when compared to centralized clouds. With this new level of computing capacity introduced between users and the data center-based clouds, new forms of resource allocation and management can be developed to take advantage of the Fog infrastructure. A wide range of applications with different requirements run on end-user devices, and with the popularity of cloud computing many of them rely on remote processing or storage. As clouds are primarily delivered through centralized data centers, such remote processing/storage usually takes place at a single location that hosts user applications and data. The distributed capacity provided by Fog computing allows execution and storage to be performed at different locations. The combination of distributed capacity, the range and types of user applications, and the mobility of smart devices requires resource management and scheduling strategies that take all of these factors into account. We analyze the scheduling problem in Fog computing, focusing on how user mobility can influence application performance and how three different scheduling policies, namely concurrent, FCFS, and delay-priority, can be used to improve execution based on application characteristics.
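The three policies differ only in how a fog node orders (or shares) its capacity among admitted requests. The toy functions below sketch that difference for tasks arriving at the same time: processor sharing as a crude stand-in for the concurrent policy, arrival order for FCFS, and tightest-deadline-first for delay-priority. This illustrates the policy semantics only, not the paper's simulation setup.

def completion_times(tasks, policy):
    """tasks: list of (name, service_time_s, deadline_s), all arriving at t=0."""
    if policy == "concurrent":
        # Processor sharing: each task's service time is stretched by the number
        # of co-running tasks (crude approximation, exact for identical tasks).
        n = len(tasks)
        return {name: svc * n for name, svc, _ in tasks}
    if policy == "fcfs":
        order = list(tasks)                              # arrival order
    elif policy == "delay-priority":
        order = sorted(tasks, key=lambda t: t[2])        # tightest deadline first
    t, done = 0.0, {}
    for name, svc, _ in order:
        t += svc
        done[name] = t
    return done

tasks = [("A", 2.0, 10.0), ("B", 1.0, 3.0), ("C", 4.0, 20.0)]
for policy in ("concurrent", "fcfs", "delay-priority"):
    print(policy, completion_times(tasks, policy))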

Journal ArticleDOI
TL;DR: The results of the review were compiled into an IoT architecture that represents a wide range of current solutions in the agro-industrial and environmental fields, motivated by the need to identify application areas, trends, architectures, and open challenges in these two fields.

Journal ArticleDOI
TL;DR: In this article, a user-centric energy-aware mobility management (EMM) scheme is proposed to optimize the delay due to both radio access and computation under the long-term energy consumption constraint of the user.
Abstract: Merging mobile edge computing (MEC) functionality with the dense deployment of base stations (BSs) provides enormous benefits such as a real proximity, low latency access to computing resources. However, the envisioned integration creates many new challenges, among which mobility management (MM) is a critical one. Simply applying existing radio access-oriented MM schemes leads to poor performance mainly due to the co-provisioning of radio access and computing services of the MEC-enabled BSs. In this paper, we develop a novel user-centric energy-aware mobility management (EMM) scheme, in order to optimize the delay due to both radio access and computation, under the long-term energy consumption constraint of the user. Based on Lyapunov optimization and multi-armed bandit theories, EMM works in an online fashion without future system state information, and effectively handles the imperfect system state information. Theoretical analysis explicitly takes radio handover and computation migration cost into consideration and proves a bounded deviation on both the delay performance and energy consumption compared with the oracle solution with exact and complete future system information. The proposed algorithm also effectively handles the scenario in which candidate BSs randomly switch ON/OFF during the offloading process of a task. Simulations show that the proposed algorithms can achieve close-to-optimal delay performance while satisfying the user energy consumption constraint.
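At its core, EMM is a drift-plus-penalty rule: each handover decision trades a candidate BS's estimated delay against the energy it would cost, weighted by a virtual energy-deficit queue that enforces the long-term budget. The sketch below shows only that rule with assumed delay/energy estimates; the paper additionally learns unknown delays with multi-armed bandit techniques and handles BSs switching ON/OFF, which is not reproduced here.

def emm_choose_bs(candidates, q, V):
    """candidates: list of (bs_id, est_delay_s, est_energy_j).
    Pick the BS minimizing V*delay + q*energy (drift-plus-penalty)."""
    return min(candidates, key=lambda c: V * c[1] + q * c[2])

def update_deficit_queue(q, energy_used_j, energy_budget_per_slot_j):
    # The virtual queue grows when the slot's energy exceeds the budget.
    return max(q + energy_used_j - energy_budget_per_slot_j, 0.0)

# One decision slot with three candidate BSs, deficit queue q, tradeoff weight V.
q, V, budget = 0.5, 10.0, 0.8
bs = emm_choose_bs([("bs1", 0.04, 1.2), ("bs2", 0.09, 0.6), ("bs3", 0.06, 0.9)], q, V)
q = update_deficit_queue(q, bs[2], budget)
print(bs[0], round(q, 2))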

Journal ArticleDOI
TL;DR: A clear collaboration model for the SDN-Edge Computing interaction is put forward through practical architectures, and it is shown that SDN-related mechanisms can feasibly operate within Edge Computing infrastructures.
Abstract: A novel paradigm that changes the scene for modern communication and computation systems is Edge Computing. It is not a coincidence that terms like Mobile Cloud Computing, Cloudlets, Fog Computing, and Mobile-Edge Computing are gaining popularity both in academia and industry. In this paper, we embrace all these terms under the umbrella concept of “Edge Computing” to name the trend where computational infrastructures, and hence the services themselves, are getting closer to the end user. However, we observe that bringing computational infrastructures to the proximity of the user does not magically solve all technical challenges. Moreover, it creates complexities of its own when not carefully handled. In this paper, these challenges are discussed in depth and categorically analyzed. As a solution direction, we propose that another major trend in networking, namely software-defined networking (SDN), should be taken into account. SDN, which is not proposed specifically for Edge Computing, can in fact serve as an enabler to lower the complexity barriers involved and let the real potential of Edge Computing be achieved. To fully demonstrate our ideas, initially, we put forward a clear collaboration model for the SDN-Edge Computing interaction through practical architectures and show that SDN-related mechanisms can feasibly operate within Edge Computing infrastructures. Then, we provide a detailed survey of the approaches that comprise the Edge Computing domain. A comparative discussion elaborates on where these technologies meet as well as how they differ. Later, we discuss the capabilities of SDN and align them with the technical shortcomings of Edge Computing implementations. We thoroughly investigate the possible modes of operation and interaction between the aforementioned technologies in all directions and technically deduce a set of “Benefit Areas” which is discussed in detail. Lastly, as SDN is an evolving technology, we give future directions for enhancing SDN development so that it can take this collaboration to a further level.

Journal ArticleDOI
TL;DR: This article first proposes a transparent computing based IoT architecture, and clearly identifies its advantages and associated challenges, and presents a case study to clearly show how to build scalable lightweight wearables with the proposed architecture.
Abstract: By moving service provisioning from the cloud to the edge, edge computing becomes a promising solution in the era of IoT to meet the delay requirements of IoT applications, enhance the scalability and energy efficiency of lightweight IoT devices, provide contextual information processing, and mitigate the traffic burdens of the backbone network. However, as an emerging field of study, edge computing is still in its infancy and faces many challenges in its implementation and standardization. In this article, we study an implementation of edge computing, which exploits transparent computing to build scalable IoT platforms. Specifically, we first propose a transparent computing based IoT architecture, and clearly identify its advantages and associated challenges. Then, we present a case study to clearly show how to build scalable lightweight wearables with the proposed architecture. Some future directions are finally pointed out to foster continued research efforts.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a joint optimization framework for all fog nodes (FNs), DSOs, and DSSs to achieve optimal resource allocation schemes in a distributed fashion, in which a Stackelberg game is formulated to analyze the pricing problem for the DSOs and the resource allocation problem for the DSSs.
Abstract: Fog computing is a promising architecture to provide economical and low latency data services for future Internet of Things (IoT)-based network systems. Fog computing relies on a set of low-power fog nodes (FNs) that are located close to the end users to offload the services originally targeting at cloud data centers. In this paper, we consider a specific fog computing network consisting of a set of data service operators (DSOs) each of which controls a set of FNs to provide the required data service to a set of data service subscribers (DSSs). How to allocate the limited computing resources of FNs to all the DSSs to achieve an optimal and stable performance is an important problem. Therefore, we propose a joint optimization framework for all FNs, DSOs, and DSSs to achieve the optimal resource allocation schemes in a distributed fashion. In the framework, we first formulate a Stackelberg game to analyze the pricing problem for the DSOs as well as the resource allocation problem for the DSSs. Under the scenarios that the DSOs can know the expected amount of resource purchased by the DSSs, a many-to-many matching game is applied to investigate the pairing problem between DSOs and FNs. Finally, within the same DSO, we apply another layer of many-to-many matching between each of the paired FNs and serving DSSs to solve the FN-DSS pairing problem. Simulation results show that our proposed framework can significantly improve the performance of the IoT-based network systems.
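In the Stackelberg stage, a DSO (leader) posts a price and the DSSs (followers) respond with their utility-maximizing demand; the leader then selects the revenue-maximizing feasible price. The sketch below illustrates that leader-follower structure with invented quadratic utilities and a single DSO; the paper's full framework adds the two layers of many-to-many matching among DSOs, FNs, and DSSs, which is not shown.

import numpy as np

def follower_demand(price, a, b):
    # Each DSS maximizes a*d - (b/2)*d^2 - price*d  =>  d* = max((a - price)/b, 0).
    return np.maximum((a - price) / b, 0.0)

def leader_best_price(a, b, capacity, prices=np.linspace(0.1, 5.0, 200)):
    best = None
    for p in prices:
        d = follower_demand(p, a, b)
        total = d.sum()
        if total <= capacity:                      # leader considers only feasible prices
            revenue = p * total
            if best is None or revenue > best[1]:
                best = (p, revenue)
    return best

a = np.array([4.0, 3.0, 5.0])   # illustrative DSS willingness-to-pay parameters
b = np.array([1.0, 2.0, 1.5])
price, revenue = leader_best_price(a, b, capacity=6.0)
print(round(price, 2), round(revenue, 2))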

Proceedings ArticleDOI
TL;DR: The results of this work can serve as a micro-benchmark in studies and research related to IoT and Fog Computing, and can be used for Quality of Service (QoS) and Service Level Objective benchmarking for IoT applications.

Journal ArticleDOI
TL;DR: A hierarchical distributed Fog Computing architecture is introduced to support the integration of massive number of infrastructure components and services in future smart cities and demonstrates the feasibility of the system's city-wide implementation in the future.
Abstract: Data-intensive analysis is the major challenge in smart cities because of the ubiquitous deployment of various kinds of sensors. The natural characteristic of geodistribution requires a new computing paradigm to offer location-awareness and latency-sensitive monitoring and intelligent control. Fog Computing, which extends computing to the edge of the network, fits this need. In this paper, we introduce a hierarchical distributed Fog Computing architecture to support the integration of a massive number of infrastructure components and services in future smart cities. To secure future communities, it is necessary to integrate intelligence in our Fog Computing architecture, e.g., to perform data representation and feature extraction, to identify anomalous and hazardous events, and to offer optimal responses and controls. We analyze case studies using a smart pipeline monitoring system based on fiber optic sensors and sequential learning algorithms to detect events threatening pipeline safety. A working prototype was constructed to experimentally evaluate event detection performance for the recognition of 12 distinct events. These experimental results demonstrate the feasibility of the system's city-wide implementation in the future.

Journal ArticleDOI
TL;DR: In this article, an efficient reinforcement learning-based resource management algorithm was proposed to minimize the long-term system cost, including both service delay and operational cost, by using a decomposition of the (offline) value iteration and (online) reinforcement learning.
Abstract: Mobile edge computing (also known as fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks. Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios. Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in energy harvesting mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on-the-fly the optimal policy of dynamic workload offloading (to the centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of the (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement of learning rate and run-time performance when compared to standard reinforcement learning algorithms such as Q-learning. We prove the convergence of the proposed algorithm and analytically show that the learned policy has a simple monotone structure amenable to practical implementation. Our simulation results validate the efficacy of our algorithm, which significantly improves the edge computing performance compared to fixed or myopic optimization schemes and conventional reinforcement learning algorithms.
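As a point of reference for the algorithm above, the snippet below implements the plain tabular Q-learning baseline the authors compare against, on an invented toy model of an energy-harvesting edge site (battery level as the state, serve-locally versus offload-to-cloud as the actions). The paper's own algorithm improves on this baseline through its value-iteration/online-learning decomposition, which is not reproduced here.

import random

def simulate_step(energy, action):
    """Toy dynamics: action 0 serves at the edge (fast, drains battery),
    action 1 offloads to the cloud (slower, saves energy). Illustrative only."""
    harvest = random.choice([0, 1, 2])              # random renewable arrival
    if action == 0:
        delay, used = 1.0, 2
    else:
        delay, used = 3.0, 0
    cost = delay + (5.0 if action == 0 and energy < used else 0.0)  # empty-battery penalty
    next_energy = max(min(energy - used + harvest, 5), 0)
    return next_energy, -cost

def q_learning(steps=5000, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {(e, a): 0.0 for e in range(6) for a in (0, 1)}
    energy = 3
    for _ in range(steps):
        if random.random() < eps:
            a = random.choice([0, 1])
        else:
            a = max((0, 1), key=lambda x: Q[(energy, x)])
        nxt, r = simulate_step(energy, a)
        Q[(energy, a)] += alpha * (r + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)]) - Q[(energy, a)])
        energy = nxt
    return Q

Q = q_learning()
print({e: max((0, 1), key=lambda a: Q[(e, a)]) for e in range(6)})   # greedy action per battery level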

Journal ArticleDOI
TL;DR: This paper presents the definitions of MEC given by researchers, discusses the opportunities brought by MEC, and highlights some of the important research challenges in the MEC environment.

Journal ArticleDOI
TL;DR: An overview of the core issues, challenges, and future research directions in fog-enabled orchestration for IoT services is given, demonstrating the feasibility and initial results of using a distributed genetic algorithm in this context.
Abstract: Large-scale Internet of Things (IoT) services such as healthcare, smart cities, and marine monitoring are pervasive in cyber-physical environments strongly supported by Internet technologies and fog computing. Complex IoT services are increasingly composed of sensors, devices, and compute resources within fog computing infrastructures. The orchestration of such applications can be leveraged to alleviate the difficulties of maintenance and enhance data security and system reliability. However, efficiently dealing with dynamic variations and transient operational behavior is a crucial challenge within the context of choreographing complex services. Furthermore, with the rapid increase of the scale of IoT deployments, the heterogeneity, dynamicity, and uncertainty within fog environments and increased computational complexity further aggravate this challenge. This article gives an overview of the core issues, challenges, and future research directions in fog-enabled orchestration for IoT services. Additionally, it presents early experiences of an orchestration scenario, demonstrating the feasibility and initial results of using a distributed genetic algorithm in this context.
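The early experiences reported above use a distributed genetic algorithm to place IoT service components onto fog resources. The sketch below is a compact, centralized GA over a made-up service-to-node latency matrix, meant only to illustrate the encoding (a placement vector), selection, crossover, and mutation; the paper's distributed variant and its objective are richer.

import random

SERVICES, NODES = 6, 3
random.seed(1)
# Hypothetical latency of running each service on each fog node.
LATENCY = [[random.uniform(1, 10) for _ in range(SERVICES)] for _ in range(NODES)]

def cost(placement):
    # placement[i] is the fog node hosting service i; lower total latency is better.
    return sum(LATENCY[node][svc] for svc, node in enumerate(placement))

def genetic_placement(pop_size=30, generations=100, mutation=0.2):
    pop = [[random.randrange(NODES) for _ in range(SERVICES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]            # simple truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, SERVICES)
            child = p1[:cut] + p2[cut:]             # one-point crossover
            if random.random() < mutation:
                child[random.randrange(SERVICES)] = random.randrange(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = genetic_placement()
print(best, round(cost(best), 2))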

Journal ArticleDOI
TL;DR: This article formalizes the vehicular fog computing architecture and presents a typical use case in vehicular Fog Computing, and discusses several key security and forensic challenges and potential solutions.
Abstract: Vehicular fog computing extends the fog computing paradigm to conventional vehicular networks. This allows us to support more ubiquitous vehicles, achieve better communication efficiency, and address limitations in conventional vehicular networks in terms of latency, location awareness, and real-time response (typically required in smart traffic control, driving safety applications, entertainment services, and other applications). Such requirements are particularly important in adversarial environments (e.g., urban warfare and battlefields in the Internet of Battlefield Things involving military vehicles). However, there is no one widely accepted definition for vehicular fog computing and use cases. Thus, in this article, we formalize the vehicular fog computing architecture and present a typical use case in vehicular fog computing. Then we discuss several key security and forensic challenges and potential solutions.

Journal ArticleDOI
TL;DR: It is pointed out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming while introducing new open issues.
Abstract: Fog computing (FC) and Internet of Everything (IoE) are two emerging technological paradigms that, to date, have been considered standing-alone. However, because of their complementary features, we expect that their integration can foster a number of computing and network-intensive pervasive applications under the incoming realm of the future Internet. Motivated by this consideration, the goal of this position paper is fivefold. First, we review the technological attributes and platforms proposed in the current literature for the standing-alone FC and IoE paradigms. Second, by leveraging some use cases as illustrative examples, we point out that the integration of the FC and IoE paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues. Third, we propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, that integrates FC and IoE and then we detail the main building blocks and services of the corresponding technological platform and protocol stack. Fourth, as a proof-of-concept, we present the simulated energy-delay performance of a small-scale FoE prototype, namely, the V-FoE prototype. Afterward, we compare the obtained performance with the corresponding one of a benchmark technological platform, e.g., the V-D2D one. It exploits only device-to-device links to establish inter-thing “ad hoc” communication. Last, we point out the position of the proposed FoE paradigm over a spectrum of seemingly related recent research projects.