
Showing papers on "Cloud computing published in 2019"


Journal ArticleDOI
TL;DR: An ImageJ plugin is presented that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service.
Abstract: U-Net is a generic deep-learning solution for frequently occurring quantification tasks such as cell detection and shape measurements in biomedical image data. We present an ImageJ plugin that enables non-machine-learning experts to analyze their data with U-Net on either a local computer or a remote server/cloud service. The plugin comes with pretrained models for single-cell segmentation and allows for U-Net to be adapted to new tasks on the basis of a few annotated samples.

1,222 citations


Journal ArticleDOI
15 Jul 2019
TL;DR: This paper will provide an overview of applications where deep learning is used at the network edge, discuss various approaches for quickly executing deep learning inference across a combination of end devices, edge servers, and the cloud, and describe the methods for training deep learning models across multiple edge devices.
Abstract: Deep learning is currently widely used in a variety of applications, including computer vision and natural language processing. End devices, such as smartphones and Internet-of-Things sensors, are generating data that need to be analyzed in real time using deep learning or used to train deep learning models. However, deep learning inference and training require substantial computation resources to run quickly. Edge computing, where a fine mesh of compute nodes are placed close to end devices, is a viable way to meet the high computation and low-latency requirements of deep learning on edge devices and also provides additional benefits in terms of privacy, bandwidth efficiency, and scalability. This paper aims to provide a comprehensive review of the current state of the art at the intersection of deep learning and edge computing. Specifically, it will provide an overview of applications where deep learning is used at the network edge, discuss various approaches for quickly executing deep learning inference across a combination of end devices, edge servers, and the cloud, and describe the methods for training deep learning models across multiple edge devices. It will also discuss open challenges in terms of systems performance, network technologies and management, benchmarks, and privacy. The reader will take away the following concepts from this paper: understanding scenarios where deep learning at the network edge can be useful, understanding common techniques for speeding up deep learning inference and performing distributed training on edge devices, and understanding recent trends and opportunities.
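The inference-splitting approaches surveyed above can be sketched with a toy latency model that picks the layer after which execution hands off from the device to an edge server, trading local compute against transmission time. The function, layer sizes, and rates below are illustrative assumptions, not taken from the paper:

```python
def best_split(layer_flops, transfer_bits, device_flops, server_flops, uplink_bps):
    """Choose a split point k: layers[:k] run on the device, transfer_bits[k]
    are uploaded, and layers[k:] run on the server. transfer_bits[0] is the
    raw input size; transfer_bits[len(layer_flops)] is the (small) result
    size for the fully local case. Returns (best_k, latency_in_seconds)."""
    n = len(layer_flops)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        t = (sum(layer_flops[:k]) / device_flops        # local compute
             + transfer_bits[k] / uplink_bps            # uplink transfer
             + sum(layer_flops[k:]) / server_flops)     # remote compute
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t
```

With a large raw input and a compact intermediate activation, a mid-network split beats both the fully local and fully remote options, which is the effect the survey describes.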

793 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed to integrate the Deep Reinforcement Learning techniques and Federated Learning framework with mobile edge systems for optimizing mobile edge computing, caching and communication, and designed the "In-Edge AI" framework in order to intelligently utilize the collaboration among devices and edge nodes to exchange the learning parameters for a better training and inference of the models, and thus to carry out dynamic system-level optimization and application-level enhancement while reducing the unnecessary system communication load.
Abstract: With the rapid development of mobile communication technology, edge computing theory and techniques have been attracting growing attention from researchers and engineers worldwide; by bridging the capacity of the cloud and the requirements of devices at the network edge, edge computing can accelerate content delivery and improve the quality of mobile services. To bring more intelligence to edge systems than traditional optimization methodologies allow, and driven by current deep learning techniques, we propose to integrate Deep Reinforcement Learning techniques and the Federated Learning framework with mobile edge systems for optimizing mobile edge computing, caching, and communication. We thus design the "In-Edge AI" framework to intelligently exploit the collaboration among devices and edge nodes, which exchange learning parameters for better training and inference of the models, and thereby carry out dynamic system-level optimization and application-level enhancement while reducing unnecessary system communication load. "In-Edge AI" is evaluated and shown to achieve near-optimal performance at relatively low learning overhead, while the system remains cognitive and adaptive to mobile communication systems. Finally, we discuss several related challenges and opportunities for unveiling the promising future of "In-Edge AI".
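The parameter-exchange step that federated learning contributes to such a framework can be sketched as a FedAvg-style weighted average of client models. This is an illustrative sketch of the standard technique, not the authors' exact algorithm:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average model parameters (flat lists of
    floats), weighting each client by its local dataset size. Only the
    parameters are exchanged, never the clients' raw data."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

An edge node would run this after each communication round, then broadcast the averaged parameters back to the participating devices.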

764 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed novel heuristic algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.
Abstract: Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.
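The RA/TO decomposition can be sketched as follows, with the convex RA subproblem replaced by a deliberately simple toy cost model (offloaders share server capacity equally) so that the greedy TO heuristic is visible. All names and numbers are hypothetical; the paper's actual RA step uses convex and quasi-convex solvers:

```python
def offloading_gain(offload_set, local_cost, offload_cost, server_capacity):
    """Toy stand-in for the RA subproblem: offloaders split server_capacity
    equally, so each offloaded task's cost grows with the number of
    offloaders. Gain = total cost reduction versus purely local execution."""
    k = len(offload_set)
    if k == 0:
        return 0.0
    share = server_capacity / k
    return sum(local_cost[u] - offload_cost[u] / share for u in offload_set)

def greedy_task_offloading(local_cost, offload_cost, server_capacity):
    """TO heuristic: greedily grow the offloading set while the gain,
    evaluated via the RA subproblem, keeps strictly improving."""
    users = list(range(len(local_cost)))
    chosen, best_gain = set(), 0.0
    improved = True
    while improved:
        improved = False
        for u in users:
            if u in chosen:
                continue
            g = offloading_gain(chosen | {u}, local_cost, offload_cost,
                                server_capacity)
            if g > best_gain:
                chosen, best_gain = chosen | {u}, g
                improved = True
    return chosen, best_gain
```

Note how congestion is captured: adding a user can lower the gain once the shared capacity is diluted, so the greedy loop stops short of offloading everyone.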

705 citations


Journal ArticleDOI
TL;DR: This article discusses the importance of Edge computing in real-life scenarios where response time constitutes the fundamental requirement for many applications, and it identifies the requirements and discusses open research challenges in Edge computing.

590 citations


Journal ArticleDOI
TL;DR: The simulation results show that the proposed algorithm can effectively improve system utility and reduce computation time, especially in scenarios where the MEC servers fail to meet demands due to insufficient computation resources.
Abstract: Computation offloading services provide required computing resources for vehicles with computation-intensive tasks. Past computation offloading research mainly focused on mobile edge computing (MEC) or cloud computing separately. This paper presents a collaborative approach based on MEC and cloud computing that provides computation offloading services to vehicles in vehicular networks. A cloud-MEC collaborative computation offloading problem is formulated by jointly optimizing the computation offloading decision and computation resource allocation. Since the problem is non-convex and NP-hard, we propose a collaborative computation offloading and resource allocation optimization (CCORAO) scheme and design a distributed computation offloading and resource allocation algorithm for the CCORAO scheme that achieves the optimal solution. The simulation results show that the proposed algorithm can effectively improve system utility and reduce computation time, especially in scenarios where the MEC servers fail to meet demands due to insufficient computation resources.

543 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed edge VM allocation and task scheduling approach can achieve near-optimal performance with very low complexity and the proposed learning-based computing offloading algorithm not only converges fast but also achieves a lower total cost compared with other offloading approaches.
Abstract: Internet of Things (IoT) computing offloading is a challenging issue, especially in remote areas where common edge/cloud infrastructure is unavailable. In this paper, we present a space-air-ground integrated network (SAGIN) edge/cloud computing architecture for offloading the computation-intensive applications considering remote energy and computation constraints, where flying unmanned aerial vehicles (UAVs) provide near-user edge computing and satellites provide access to the cloud computing. First, for UAV edge servers, we propose a joint resource allocation and task scheduling approach to efficiently allocate the computing resources to virtual machines (VMs) and schedule the offloaded tasks. Second, we investigate the computing offloading problem in SAGIN and propose a learning-based approach to learn the optimal offloading policy from the dynamic SAGIN environments. Specifically, we formulate the offloading decision making as a Markov decision process where the system state considers the network dynamics. To cope with the system dynamics and complexity, we propose a deep reinforcement learning-based computing offloading approach to learn the optimal offloading policy on-the-fly, where we adopt the policy gradient method to handle the large action space and actor-critic method to accelerate the learning process. Simulation results show that the proposed edge VM allocation and task scheduling approach can achieve near-optimal performance with very low complexity and the proposed learning-based computing offloading algorithm not only converges fast but also achieves a lower total cost compared with other offloading approaches.
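The policy-gradient ingredient of such an approach can be illustrated on a one-step toy problem: a REINFORCE update with a moving-average baseline that learns to pick the cheapest offload target. This is a generic sketch of the technique, not the paper's SAGIN setup or its actor-critic architecture:

```python
import math
import random

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train_offload_policy(costs, episodes=3000, lr=0.1, seed=0):
    """One-step REINFORCE with a moving-average baseline: actions are
    offload targets, reward is the negative cost. The learned softmax
    policy should concentrate on the cheapest target."""
    rng = random.Random(seed)
    prefs = [0.0] * len(costs)
    baseline = 0.0
    for _ in range(episodes):
        probs = softmax(prefs)
        # sample an action from the current policy
        r, a, acc = rng.random(), len(costs) - 1, 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                a = i
                break
        reward = -costs[a]
        baseline += 0.05 * (reward - baseline)
        advantage = reward - baseline
        # grad of log softmax(a) w.r.t. preferences: one_hot(a) - probs
        for i in range(len(prefs)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * advantage * grad
    return softmax(prefs)
```

The baseline plays the variance-reduction role that the critic plays in the paper's actor-critic method; the full approach additionally conditions the policy on a dynamic network state.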

537 citations


Journal ArticleDOI
TL;DR: In this paper, a survey on the relationship between edge intelligence and intelligent edge is presented, covering the application scenarios of both, the practical implementation methods and enabling technologies, namely DL training and inference in the customized edge computing framework, and the challenges and future trends of more pervasive and fine-grained intelligence.
Abstract: Ubiquitous sensors and smart devices from factories and communities are generating massive amounts of data, and ever-increasing computing power is driving the core of computation and services from the cloud to the edge of the network. As an important enabler broadly changing people's lives, from face recognition to ambitious smart factories and cities, developments of artificial intelligence (especially deep learning, DL) based applications and services are thriving. However, due to efficiency and latency issues, the current cloud computing service architecture hinders the vision of "providing artificial intelligence for every person and every organization at everywhere". Thus, unleashing DL services using resources at the network edge near the data sources has emerged as a desirable solution. Therefore, edge intelligence, aiming to facilitate the deployment of DL services by edge computing, has received significant attention. In addition, DL, as the representative technique of artificial intelligence, can be integrated into edge computing frameworks to build intelligent edge for dynamic, adaptive edge maintenance and management. With regard to mutually beneficial edge intelligence and intelligent edge, this paper introduces and discusses: 1) the application scenarios of both; 2) the practical implementation methods and enabling technologies, namely DL training and inference in the customized edge computing framework; 3) challenges and future trends of more pervasive and fine-grained intelligence. We believe that by consolidating information scattered across the communication, networking, and DL areas, this survey can help readers to understand the connections between enabling technologies while promoting further discussions on the fusion of edge intelligence and intelligent edge, i.e., Edge DL.

518 citations


Journal ArticleDOI
TL;DR: This survey investigates some of the work that has been done to enable the integrated blockchain and edge computing system and discusses the research challenges, identifying several vital aspects of the integration of blockchain and edge computing: motivations, frameworks, enabling functionalities, and challenges.
Abstract: Blockchain, as the underlying technology of crypto-currencies, has attracted significant attention. It has been adopted in numerous applications, such as smart grid and Internet-of-Things. However, there is a significant scalability barrier for blockchain, which limits its ability to support services with frequent transactions. On the other hand, edge computing is introduced to extend the cloud resources and services to be distributed at the edge of the network, but currently faces challenges in its decentralized management and security. The integration of blockchain and edge computing into one system can enable reliable access and control of the network, storage, and computation distributed at the edges, hence providing a large scale of network servers, data storage, and validity computation near the end devices in a secure manner. Despite the prospect of integrated blockchain and edge computing systems, scalability enhancement, self-organization, functions integration, resource management, and new security issues remain to be addressed before widespread deployment. In this survey, we investigate some of the work that has been done to enable the integrated blockchain and edge computing system and discuss the research challenges. We identify several vital aspects of the integration of blockchain and edge computing: motivations, frameworks, enabling functionalities, and challenges. Finally, some broader perspectives are explored.

488 citations


Journal ArticleDOI
TL;DR: It is already possible to envision the need to move beyond 5G and design a new architecture incorporating innovative technologies to satisfy new needs at both the individual and societal levels.
Abstract: With its ability to provide a single platform enabling a variety of services, such as enhanced mobile broadband communications, virtual reality, automated driving, and the Internet of Things, 5G represents a breakthrough in the design of communication networks. Nevertheless, considering the increasing requests for new services and predicting the development of new technologies within a decade, it is already possible to envision the need to move beyond 5G and design a new architecture incorporating innovative technologies to satisfy new needs at both the individual and societal levels.

433 citations


Journal ArticleDOI
11 Oct 2019
TL;DR: In this article, the key building blocks of edge ML, different neural network architectural splits and their inherent tradeoffs, as well as theoretical and technical enablers stemming from a wide range of mathematical disciplines are presented.
Abstract: Fueled by the availability of more data and computing power, recent breakthroughs in cloud-based machine learning (ML) have transformed every aspect of our lives from face recognition and medical diagnosis to natural language processing. However, classical ML exerts severe demands in terms of energy, memory, and computing resources, limiting their adoption for resource-constrained edge devices. The new breed of intelligent devices and high-stake applications (drones, augmented/virtual reality, autonomous systems, and so on) requires a novel paradigm change calling for distributed, low-latency and reliable ML at the wireless network edge (referred to as edge ML). In edge ML, training data are unevenly distributed over a large number of edge nodes, which have access to a tiny fraction of the data. Moreover, training and inference are carried out collectively over wireless links, where edge devices communicate and exchange their learned models (not their private data). In a first of its kind, this article explores the key building blocks of edge ML, different neural network architectural splits and their inherent tradeoffs, as well as theoretical and technical enablers stemming from a wide range of mathematical disciplines. Finally, several case studies pertaining to various high-stake applications are presented to demonstrate the effectiveness of edge ML in unlocking the full potential of 5G and beyond.

Journal ArticleDOI
TL;DR: A comprehensive review of BIM and IoT integration research to identify common emerging areas of application and common design patterns in the approach to tackling BIM-IoT device integration along with an examination of current limitations and predictions of future research directions is conducted.

Journal ArticleDOI
TL;DR: A systematic literature review of fog computing technologies in healthcare IoT systems is presented, analyzing previous work and providing the motivation, the limitations faced by researchers, and the suggestions proposed to analysts for improving this essential research field.

Journal ArticleDOI
TL;DR: A comprehensive survey of emerging computing paradigms from the perspective of end-edge-cloud orchestration is presented to discuss state-of-the-art research in terms of computation offloading, caching, security, and privacy.
Abstract: Sending data to the cloud for analysis was a prominent trend during the past decades, driving cloud computing as a dominant computing paradigm. However, the dramatically increasing number of devices and data traffic in the Internet-of-Things (IoT) era are posing significant burdens on the capacity-limited Internet and uncontrollable service delay. It becomes difficult to meet the delay-sensitive and context-aware service requirements of IoT applications by using cloud computing alone. Facing these challenges, computing paradigms are shifting from the centralized cloud computing to distributed edge computing. Several new computing paradigms, including Transparent Computing, Mobile Edge Computing, Fog Computing, and Cloudlet, have emerged to leverage the distributed resources at network edge to provide timely and context-aware services. By integrating end devices, edge servers, and cloud, they form a hierarchical IoT architecture, i.e., End-Edge-Cloud orchestrated architecture to improve the performance of IoT systems. This article presents a comprehensive survey of these emerging computing paradigms from the perspective of end-edge-cloud orchestration. Specifically, we first introduce and compare the architectures and characteristics of different computing paradigms. Then, a comprehensive survey is presented to discuss state-of-the-art research in terms of computation offloading, caching, security, and privacy. Finally, some potential research directions are envisioned for fostering continuous research efforts.

Journal ArticleDOI
TL;DR: This survey is conducted to bring more attention to the critical intersection between cyber-physical systems and big data and to highlight future research directions for achieving full autonomy in Industry 4.0.
Abstract: With the technology development in cyber-physical systems and big data, there is huge potential to apply them to achieve personalization and improve resource efficiency in Industry 4.0. As Industr...

Journal ArticleDOI
TL;DR: This work investigates the collaboration between cloud computing and edge computing, where the tasks of mobile devices can be partially processed at the edge node and at the cloud server and obtains the closed-form computation resource allocation strategy by leveraging the convex optimization theory.
Abstract: By performing data processing at the network edge, mobile edge computing can effectively overcome the deficiencies of network congestion and long latency in cloud computing systems. To improve edge cloud efficiency with limited communication and computation capacities, we investigate the collaboration between cloud computing and edge computing, where the tasks of mobile devices can be partially processed at the edge node and at the cloud server. First, a joint communication and computation resource allocation problem is formulated to minimize the weighted-sum latency of all mobile devices. Then, the closed-form optimal task splitting strategy is derived as a function of the normalized backhaul communication capacity and the normalized cloud computation capacity. Some interesting and useful insights for the optimal task splitting strategy are also highlighted by analyzing four special scenarios. Based on this, we further transform the original joint communication and computation resource allocation problem into an equivalent convex optimization problem and obtain the closed-form computation resource allocation strategy by leveraging the convex optimization theory. Moreover, a necessary condition is also developed to judge whether a task should be processed at the corresponding edge node only, without offloading to the cloud server. Finally, simulation results confirm our theoretical analysis and demonstrate that the proposed collaborative cloud and edge computing scheme can evidently achieve a better delay performance than the conventional schemes.
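The closed-form flavor of such task splitting can be shown with a much-simplified model of our own (not the paper's formulas): a task of c CPU cycles and d input bits is split so that a fraction x is processed at the edge while the remaining 1 − x is forwarded over the backhaul to the cloud; with both parts running in parallel, latency is minimized when the two branches finish at the same time, which yields x* in closed form:

```python
def optimal_split(c, d, f_edge, f_cloud, r_backhaul):
    """Toy closed-form split: edge branch takes x*c/f_edge, cloud branch
    takes (1-x)*(d/r_backhaul + c/f_cloud). With parallel execution the
    optimum equalizes the two branches. Returns (x_star, latency)."""
    edge_unit = c / f_edge                         # latency per unit fraction at edge
    cloud_unit = d / r_backhaul + c / f_cloud      # latency per unit fraction via cloud
    x = cloud_unit / (edge_unit + cloud_unit)
    return x, x * edge_unit
```

The same equalization intuition underlies the paper's result that the optimal split is a function of the normalized backhaul and cloud capacities.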

Journal ArticleDOI
TL;DR: An iterative heuristic MEC resource allocation algorithm is proposed to make the offloading decision dynamically; simulation results demonstrate that the algorithm outperforms existing schemes in terms of execution latency and offloading efficiency.
Abstract: With the evolutionary development of latency sensitive applications, delay restriction is becoming an obstacle to run sophisticated applications on mobile devices. Partial computation offloading is promising to enable these applications to execute on mobile user equipments with low latency. However, most of the existing researches focus on either cloud computing or mobile edge computing (MEC) to offload tasks. In this paper, we comprehensively consider both of them and it is an early effort to study the cooperation of cloud computing and MEC in Internet of Things. We start from the single user computation offloading problem, where the MEC resources are not constrained. It can be solved by the branch and bound algorithm. Later on, the multiuser computation offloading problem is formulated as a mixed integer linear programming problem by considering resource competition among mobile users, which is NP-hard. Due to the computation complexity of the formulated problem, we design an iterative heuristic MEC resource allocation algorithm to make the offloading decision dynamically. Simulation results demonstrate that our algorithm outperforms the existing schemes in terms of execution latency and offloading efficiency.
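The branch-and-bound idea used for the single-user case can be sketched on a toy version of the problem: binary local-vs-MEC decisions, a MEC slot budget, and an optimistic lower bound for pruning. The cost model here is a hypothetical simplification, far simpler than the paper's mixed integer linear program:

```python
def branch_and_bound(local_t, mec_t, mec_slots):
    """Minimize total latency over binary decisions (0 = local, 1 = MEC),
    with at most mec_slots tasks allowed on the MEC server. The bound
    optimistically gives every undecided task its cheaper option."""
    n = len(local_t)
    best = {"cost": sum(local_t), "plan": [0] * n}

    def bound(i, cost):
        return cost + sum(min(local_t[j], mec_t[j]) for j in range(i, n))

    def recurse(i, cost, used, plan):
        if bound(i, cost) >= best["cost"]:
            return                                  # prune this branch
        if i == n:
            best["cost"], best["plan"] = cost, plan[:]
            return
        if used < mec_slots:                        # branch: offload to MEC
            plan.append(1)
            recurse(i + 1, cost + mec_t[i], used + 1, plan)
            plan.pop()
        plan.append(0)                              # branch: run locally
        recurse(i + 1, cost + local_t[i], used, plan)
        plan.pop()

    recurse(0, 0.0, 0, [])
    return best["cost"], best["plan"]
```

Exhaustive search is exponential in the number of users, which is why the paper falls back to an iterative heuristic for the multiuser case.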

Journal ArticleDOI
24 Jun 2019
TL;DR: In this paper, the authors review state-of-the-art approaches in these areas as well as explore potential solutions to address these challenges, including providing enough computing power, redundancy, and security so as to guarantee the safety of autonomous vehicles.
Abstract: Safety is the most important requirement for autonomous vehicles; hence, the ultimate challenge of designing an edge computing ecosystem for autonomous vehicles is to deliver enough computing power, redundancy, and security so as to guarantee the safety of autonomous vehicles. Specifically, autonomous driving systems are extremely complex; they tightly integrate many technologies, including sensing, localization, perception, decision making, as well as the smooth interactions with cloud platforms for high-definition (HD) map generation and data storage. These complexities impose numerous challenges for the design of autonomous driving edge computing systems. First, edge computing systems for autonomous driving need to process an enormous amount of data in real time, and often the incoming data from different sensors are highly heterogeneous. Since autonomous driving edge computing systems are mobile, they often have very strict energy consumption restrictions. Thus, it is imperative to deliver sufficient computing power with reasonable energy consumption, to guarantee the safety of autonomous vehicles, even at high speed. Second, in addition to the edge system design, vehicle-to-everything (V2X) provides redundancy for autonomous driving workloads and alleviates stringent performance and energy constraints on the edge side. With V2X, more research is required to define how vehicles cooperate with each other and the infrastructure. Last, safety cannot be guaranteed when security is compromised. Thus, protecting autonomous driving edge computing systems against attacks at different layers of the sensing and computing stack is of paramount concern. In this paper, we review state-of-the-art approaches in these areas as well as explore potential solutions to address these challenges.

Proceedings ArticleDOI
04 Apr 2019
TL;DR: This paper presents DeathStarBench, a novel, open-source benchmark suite built with microservices that is representative of large end-to-end services, modular and extensible, and uses it to study the architectural characteristics of microservices, their implications in networking and operating systems, their challenges with respect to cluster management, and their trade-offs in terms of application design and programming frameworks.
Abstract: Cloud services have recently started undergoing a major shift from monolithic applications, to graphs of hundreds or thousands of loosely-coupled microservices. Microservices fundamentally change a lot of assumptions current cloud systems are designed with, and present both opportunities and challenges when optimizing for quality of service (QoS) and cloud utilization. In this paper we explore the implications microservices have across the cloud system stack. We first present DeathStarBench, a novel, open-source benchmark suite built with microservices that is representative of large end-to-end services, modular and extensible. DeathStarBench includes a social network, a media service, an e-commerce site, a banking system, and IoT applications for coordination control of UAV swarms. We then use DeathStarBench to study the architectural characteristics of microservices, their implications in networking and operating systems, their challenges with respect to cluster management, and their trade-offs in terms of application design and programming frameworks. Finally, we explore the tail at scale effects of microservices in real deployments with hundreds of users, and highlight the increased pressure they put on performance predictability.

Posted Content
TL;DR: Just as the 2009 paper identified challenges for the cloud and predicted they would be addressed and that cloud use would accelerate, it is predicted these issues are solvable and that serverless computing will grow to dominate the future of cloud computing.
Abstract: Serverless cloud computing handles virtually all the system administration operations needed to make it easier for programmers to use the cloud. It provides an interface that greatly simplifies cloud programming, and represents an evolution that parallels the transition from assembly language to high-level programming languages. This paper gives a quick history of cloud computing, including an accounting of the predictions of the 2009 Berkeley View of Cloud Computing paper, explains the motivation for serverless computing, describes applications that stretch the current limits of serverless, and then lists obstacles and research opportunities required for serverless computing to fulfill its full potential. Just as the 2009 paper identified challenges for the cloud and predicted they would be addressed and that cloud use would accelerate, we predict these issues are solvable and that serverless computing will grow to dominate the future of cloud computing.

Journal ArticleDOI
TL;DR: This article constructs a three-layer VFC model to enable distributed traffic management that minimizes the response time of citywide events collected and reported by vehicles; the VFC-enabled offloading scheme is formulated as an optimization problem that leverages moving and parked vehicles as fog nodes.
Abstract: Fog computing extends the facility of cloud computing from the center to edge networks. Although fog computing has the advantages of location awareness and low latency, the rising requirements of ubiquitous connectivity and ultra-low latency challenge real-time traffic management for smart cities. As an integration of fog computing and vehicular networks, vehicular fog computing (VFC) is promising to achieve real-time and location-aware network responses. Since the concept and use case of VFC are in the initial phase, this article first constructs a three-layer VFC model to enable distributed traffic management in order to minimize the response time of citywide events collected and reported by vehicles. Furthermore, the VFC-enabled offloading scheme is formulated as an optimization problem by leveraging moving and parked vehicles as fog nodes. A real-world taxi-trajectory-based performance analysis validates our model. Finally, some research challenges and open issues toward VFC-enabled traffic management are summarized and highlighted.
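The response-time objective of such an offloading scheme can be illustrated with a greedy toy assignment of reported events to fog vehicles, where each event goes to the vehicle with the smallest propagation proxy plus queue backlog. Positions, times, and the Manhattan-distance proxy are assumptions for illustration, not the article's formulation:

```python
def assign_events(event_pos, vehicle_pos, service_time, speed=1.0):
    """Greedy toy assignment: each event is handled by the fog vehicle with
    the smallest (Manhattan distance / speed + current queue backlog).
    Returns the per-event vehicle assignment and response times."""
    backlog = [0.0] * len(vehicle_pos)
    assignment, responses = [], []
    for ex, ey in event_pos:
        best_v, best_t = 0, float("inf")
        for v, (vx, vy) in enumerate(vehicle_pos):
            t = (abs(ex - vx) + abs(ey - vy)) / speed + backlog[v]
            if t < best_t:
                best_v, best_t = v, t
        backlog[best_v] += service_time     # vehicle is busy for this long
        assignment.append(best_v)
        responses.append(best_t + service_time)
    return assignment, responses
```

The backlog term captures why parked vehicles with idle capacity can beat a distant but powerful cloud for latency-critical city events.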

Journal ArticleDOI
TL;DR: An in-depth review of IoT privacy and security issues, including potential threats, attack types, and security setups from a healthcare viewpoint is conducted and previous well-known security models to deal with security risks are analyzed.
Abstract: The fast development of the Internet of Things (IoT) technology in recent years has supported connections of numerous smart things along with sensors and established seamless data exchange between them, leading to stringent requirements for data analysis and data storage platforms such as cloud computing and fog computing. Healthcare is one of the application domains in IoT that draws enormous interest from industry, the research community, and the public sector. The development of IoT and cloud computing is improving patient safety, staff satisfaction, and operational efficiency in the medical industry. This survey is conducted to analyze the latest IoT components, applications, and market trends of IoT in healthcare, as well as study current development in IoT and cloud computing-based healthcare applications since 2015. We also consider how promising technologies such as cloud computing, ambient assisted living, big data, and wearables are being applied in the healthcare industry and examine various IoT and e-health regulations and policies worldwide to determine how they assist the sustainable development of IoT and cloud computing in the healthcare industry. Moreover, an in-depth review of IoT privacy and security issues, including potential threats, attack types, and security setups from a healthcare viewpoint, is conducted. Finally, this paper analyzes previous well-known security models to deal with security risks and provides trends, highlighted opportunities, and challenges for the future development of IoT-based healthcare.

Journal ArticleDOI
TL;DR: This paper formulates the edge server placement problem in mobile edge computing environments for smart cities as a multi-objective constraint optimization problem that places edge servers in strategic locations so as to balance the workloads of edge servers and minimize the access delay between mobile users and edge servers.

Journal ArticleDOI
TL;DR: A novel EHRs sharing framework that combines blockchain and the decentralized InterPlanetary File System (IPFS) on a mobile cloud platform is proposed that provides an effective solution for reliable data exchanges on mobile clouds while preserving sensitive health information against potential threats.
Abstract: Recent years have witnessed a paradigm shift in the storage of Electronic Health Records (EHRs) on mobile cloud environments, where mobile devices are integrated with cloud computing to facilitate medical data exchanges among patients and healthcare providers. This advanced model enables healthcare services with low operational cost, high flexibility, and high EHR availability. However, this new paradigm also raises concerns about data privacy and network security for e-health systems. How to reliably share EHRs among mobile users while guaranteeing high security levels in the mobile cloud is a challenging issue. In this paper, we propose a novel EHRs sharing framework that combines blockchain and the decentralized InterPlanetary File System (IPFS) on a mobile cloud platform. Particularly, we design a trustworthy access control mechanism using smart contracts to achieve secure EHRs sharing among different patients and medical providers. We present a prototype implementation using Ethereum blockchain in a real data sharing scenario on a mobile app with Amazon cloud computing. The empirical results show that our proposal provides an effective solution for reliable data exchanges on mobile clouds while preserving sensitive health information against potential threats. The system evaluation and security analysis also demonstrate the performance improvements in lightweight access control design, minimum network latency with high security and data privacy levels, compared to the existing data sharing models.
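The grant/revoke access-control pattern described above can be sketched as a toy, in-memory Python stand-in. The paper's actual mechanism is an Ethereum smart contract; the class and method names below are hypothetical and only mirror the policy logic (patients register a content hash of an encrypted record and explicitly grant providers; reads without a grant are refused):

```python
import hashlib

class EHRAccessContract:
    """Toy stand-in for a smart-contract access policy (hypothetical API;
    the real contract runs on a blockchain, not in application memory)."""

    def __init__(self):
        self.records = {}    # patient id -> content hash of encrypted record
        self.grants = set()  # (patient, provider) pairs with access

    def register(self, patient, encrypted_record: bytes):
        # In IPFS the address of a file is derived from a hash of its
        # content; sketched here with plain SHA-256.
        cid = hashlib.sha256(encrypted_record).hexdigest()
        self.records[patient] = cid
        return cid

    def grant(self, patient, provider):
        self.grants.add((patient, provider))

    def revoke(self, patient, provider):
        self.grants.discard((patient, provider))

    def read(self, patient, provider):
        # Enforce the policy before revealing where the record lives.
        if (patient, provider) not in self.grants:
            raise PermissionError("access denied by contract policy")
        return self.records[patient]
```

A revoked provider immediately loses the ability to resolve the record's address, which is the behavior the paper's evaluation exercises on-chain.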

Journal ArticleDOI
TL;DR: An energy-efficient adaptive resource scheduler for Networked Fog Centers (NetFCs) capable of providing hard QoS guarantees, in terms of minimum/maximum instantaneous rates of the traffic delivered to the vehicular clients, instantaneous rate-jitters, and total processing delays, is proposed and tested.
Abstract: Delivering real-time cloud services to Vehicular Clients (VCs) must cope with delay and delay-jitter issues. Fog computing is an emerging paradigm that aims at distributing small-size self-powered data centers (e.g., Fog nodes) between remote Clouds and VCs, in order to deliver data-dissemination real-time services to the connected VCs. Motivated by these considerations, in this paper we propose and test an energy-efficient adaptive resource scheduler for Networked Fog Centers (NetFCs). They operate at the edge of the vehicular network and are connected to the served VCs through Infrastructure-to-Vehicular (I2V) TCP/IP-based single-hop mobile links. The goal is to exploit the locally measured states of the TCP/IP connections, in order to maximize the overall communication-plus-computing energy efficiency, while meeting the application-induced hard QoS requirements on the minimum transmission rates, maximum delays, and delay-jitters. The resulting energy-efficient scheduler jointly performs: (i) admission control of the input traffic to be processed by the NetFCs; (ii) minimum-energy dispatching of the admitted traffic; (iii) adaptive reconfiguration and consolidation of the Virtual Machines (VMs) hosted by the NetFCs; and (iv) adaptive control of the traffic injected into the TCP/IP mobile connections. The salient features of the proposed scheduler are that: (i) it is adaptive and admits a distributed and scalable implementation; and (ii) it is capable of providing hard QoS guarantees, in terms of minimum/maximum instantaneous rates of the traffic delivered to the vehicular clients, instantaneous rate-jitters, and total processing delays.
The actual performance of the proposed scheduler in the presence of: (i) client mobility; (ii) wireless fading; and (iii) reconfiguration and consolidation costs of the underlying NetFCs is numerically tested and compared against that of state-of-the-art schedulers, under both synthetically generated and measured real-world workload traces.
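The first two of the scheduler's joint tasks, admission control and minimum-energy dispatching, can be sketched in a few lines. This is a deliberately simplified model, not the paper's scheduler: the capacity check, the assumption of identical VMs, and the convex (quadratic) power model are all illustrative assumptions:

```python
def admit(requested_rates, capacity):
    """Toy admission control: accept flows in arrival order while the
    aggregate rate stays within the NetFC's processing capacity."""
    admitted, used = [], 0.0
    for rate in requested_rates:
        if used + rate <= capacity:
            admitted.append(rate)
            used += rate
    return admitted

def dispatch(total_load, n_vms):
    """Minimum-energy dispatch under an assumed convex (quadratic) power
    model: with identical VMs, the total energy is minimized by an even
    split of the admitted load."""
    return [total_load / n_vms] * n_vms

admitted = admit([3.0, 4.0, 5.0], capacity=10.0)   # third flow is rejected
per_vm = dispatch(sum(admitted), n_vms=2)
```

The real scheduler additionally adapts VM consolidation and the TCP/IP injection rates online; here the point is only that admission and dispatch are coupled decisions made from the same measured state.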

Proceedings ArticleDOI
15 Jun 2019
TL;DR: This work proposes a novel decoder that generates a structured point cloud without assuming any specific structure or topology on the underlying point set, and significantly outperforms state-of-the-art 3D point cloud completion methods on the ShapeNet dataset.
Abstract: 3D point cloud generation is of great use for 3D scene modeling and understanding. Real-world 3D object point clouds can be properly described by a collection of low-level and high-level structures such as surfaces, geometric primitives, semantic parts, etc. In fact, there exist many different representations of a 3D object point cloud as a set of point groups. Existing frameworks for point cloud generation either do not consider structure in their proposed solutions, or assume and enforce a specific structure/topology, e.g. a collection of manifolds or surfaces, for the generated point cloud of a 3D object. In this work, we propose a novel decoder that generates a structured point cloud without assuming any specific structure or topology on the underlying point set. Our decoder is softly constrained to generate a point cloud following a hierarchical rooted tree structure. We show that given enough capacity and allowing for redundancies, the proposed decoder is very flexible and able to learn any arbitrary grouping of points including any topology on the point set. We evaluate our decoder on the task of point cloud generation for 3D point cloud shape completion. Combined with encoders from existing frameworks, we show that our proposed decoder significantly outperforms state-of-the-art 3D point cloud completion methods on the ShapeNet dataset.
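The hierarchical rooted-tree structure mentioned above can be illustrated with a toy, non-learned decoder in plain Python: each node refines its parent's anchor point into several children, and the leaves of the tree form the output point cloud. The paper's decoder is a trained neural network; the function below only demonstrates the tree-to-point-set structure, with all parameters chosen arbitrarily:

```python
import random

def tree_decode(anchor, depth, branching=2, spread=1.0, rng=None):
    """Toy rooted-tree decoder: recursively fan each node out into
    `branching` perturbed children; leaf nodes emit the final points.
    Illustrative only -- in the paper each split is a learned mapping."""
    rng = rng or random.Random(0)
    if depth == 0:
        return [anchor]  # leaf: one output point
    points = []
    for _ in range(branching):
        # Child anchor = parent anchor plus a (here: random) refinement.
        child = tuple(c + rng.uniform(-spread, spread) for c in anchor)
        points.extend(tree_decode(child, depth - 1, branching, spread / 2, rng))
    return points

# A depth-3 binary tree yields 2**3 = 8 points, grouped hierarchically:
cloud = tree_decode((0.0, 0.0, 0.0), depth=3, branching=2)
```

Note how the grouping falls out of the recursion for free: consecutive leaves share ancestors, so contiguous slices of `cloud` correspond to subtrees, i.e. point groups at different levels of the hierarchy.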

Journal ArticleDOI
Fei Tao1, Qinglin Qi1
TL;DR: A framework, New IT driven service-oriented smart manufacturing (SoSM), is proposed that aims at facilitating the visions of smart manufacturing by making full use of New IT and services.
Abstract: Recently, along with the wide application of new generation information technologies (New IT) in manufacturing, many countries issued their national advanced manufacturing development strategies, such as Industrial Internet, Industry 4.0, and Made in China 2025. One common aim of these strategies is to achieve smart manufacturing, which demands the interoperation, integration, and fusion of the physical world and the cyber world of manufacturing. Moreover, New IT [such as Internet of Things (IoT), cloud computing, big data, mobile Internet, and cyber-physical systems (CPS)] have played pivotal roles in promoting smart manufacturing. Data generated in the physical world can be sensed and transferred to the cyber world through IoT and the Internet, and be processed and analyzed by cloud computing and big data technologies to adjust the physical world. The physical world and the cyber world of manufacturing are integrated based on CPS. On the other hand, servitization has become a prominent trend in manufacturing. Embracing the concept of "Manufacturing-as-a-Service," manufacturing is provided as a service for users. Because of the characteristics of interoperability and platform independence, services pave the way for large-scale smart applications and manufacturing collaboration. Combining New IT and services, this paper proposes a framework, New IT driven service-oriented smart manufacturing (SoSM). SoSM aims at facilitating the visions of smart manufacturing by making full use of New IT and services. Complementary to the framework of SoSM, the New IT driven typical characteristics of SoSM are also investigated and discussed.

Journal ArticleDOI
TL;DR: A flexible platform able to cope with the needs of soilless culture in full-recirculation greenhouses using moderately saline water is proposed, based on exchangeable low-cost hardware and supported by a three-tier open-source software platform spanning the local, edge, and cloud planes.

Journal ArticleDOI
TL;DR: The paper proposes BodyEdge, a novel architecture well suited for human-centric applications in the context of the emerging healthcare industry, which consists of a tiny mobile client module and a high-performing edge gateway supporting multiradio and multitechnology communication.
Abstract: The edge computing paradigm has attracted much interest in the last few years as a valid alternative to standard cloud-based approaches, reducing interaction latency and the huge amount of data flowing from Internet of Things (IoT) devices toward the Internet. In the near future, edge-based approaches will be essential to support time-dependent applications in the Industry 4.0 context; thus, the paper proposes BodyEdge, a novel architecture well suited for human-centric applications in the context of the emerging healthcare industry. It consists of a tiny mobile client module and a high-performing edge gateway supporting multiradio and multitechnology communication to collect and locally process data coming from different scenarios; moreover, it also exploits the facilities made available by both private and public cloud platforms to guarantee a high degree of flexibility, robustness, and an adaptive service level. The advantages of the designed software platform have been evaluated in terms of reduced transmitted data and processing time through a real implementation on different hardware platforms. The conducted study also highlighted the network conditions (data load and processing delay) under which BodyEdge is a valid and inexpensive solution for healthcare application scenarios.

Journal ArticleDOI
19 Jun 2019
TL;DR: This paper provides a comprehensive survey on the most influential and basic attacks as well as the corresponding defense mechanisms that have edge computing specific characteristics and can be practically applied to real-world edge computing systems.
Abstract: The rapid developments of the Internet of Things (IoT) and smart mobile devices in recent years have been dramatically incentivizing the advancement of edge computing. On the one hand, edge computing has provided great assistance for lightweight devices to accomplish complicated tasks in an efficient way; on the other hand, its hasty development has led to security threats being largely neglected in edge computing platforms and their enabled applications. In this paper, we provide a comprehensive survey of the most influential and basic attacks, as well as the corresponding defense mechanisms, that have edge computing specific characteristics and can be practically applied to real-world edge computing systems. More specifically, we focus on the following four types of attacks that account for 82% of the edge computing attacks recently reported by Statista: distributed denial of service attacks, side-channel attacks, malware injection attacks, and authentication and authorization attacks. We also analyze the root causes of these attacks, present the status quo and grand challenges in edge computing security, and propose future research directions.