Topic

Edge computing

About: Edge computing is a research topic. Over its lifetime, 11,657 publications have been published on this topic, receiving 148,533 citations.


Papers
Proceedings ArticleDOI
Simone Mangiante, Guenter Klas, Amit Navon, Zhuang GuanHua, Ju Ran, Marco Dias Silva
11 Aug 2017
TL;DR: A Field of View (FOV) rendering solution at the edge of a mobile network is presented, designed to optimize the bandwidth and latency required for VR 360° video streaming.
Abstract: VR/AR is rapidly progressing towards enterprise and end customers with the promise of bringing immersive experiences to numerous applications. It will soon target smartphones from the cloud, and 360° video delivery will impose unprecedented requirements for ultra-low latency and ultra-high throughput on mobile networks. The latest developments in NFV and Mobile Edge Computing already reveal the potential to enable VR streaming in cellular networks and to pave the way towards 5G and the next stages of VR technology. In this paper we present a Field of View (FOV) rendering solution at the edge of a mobile network, designed to optimize the bandwidth and latency required by VR 360° video streaming. Preliminary test results show the immediate bandwidth savings this approach can provide and suggest new directions for VR/AR network research.

177 citations
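The bandwidth benefit of FOV rendering comes from streaming only the portion of the 360° sphere the viewer is currently looking at. The sketch below illustrates the general idea with a simple tile-based viewport selection; the tile grid, FOV width, and bitrates are hypothetical assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of viewport-driven (FOV) tile selection for 360-degree video.
# The tile grid size, FOV width, and bitrates below are hypothetical assumptions.

def visible_tiles(yaw_deg, fov_deg=90.0, num_tiles=12):
    """Return indices of equirectangular tiles (split along yaw) that
    overlap the viewer's horizontal field of view."""
    tile_width = 360.0 / num_tiles
    half_fov = fov_deg / 2.0
    visible = []
    for i in range(num_tiles):
        center = i * tile_width + tile_width / 2.0
        # Smallest angular distance between the tile center and the gaze direction.
        diff = abs((center - yaw_deg + 180.0) % 360.0 - 180.0)
        if diff <= half_fov + tile_width / 2.0:
            visible.append(i)
    return visible

if __name__ == "__main__":
    tiles = visible_tiles(yaw_deg=45.0)
    full_mbps, tile_mbps = 50.0, 50.0 / 12     # hypothetical bitrates
    fov_mbps = len(tiles) * tile_mbps
    print(f"streaming {len(tiles)}/12 tiles -> "
          f"{fov_mbps:.1f} Mbps instead of {full_mbps:.1f} Mbps")
```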

Journal ArticleDOI
TL;DR: A new matrix factorization (MF) model with deep feature learning, named Joint CNN-MF (JCM), which integrates a convolutional neural network (CNN) and uses the learned deep latent features of neighbors to infer the features of a user or a service.
Abstract: With the popularity of intelligent services and mobile services, service recommendation has become a key task, especially recommendation based on quality-of-service (QoS) in the edge computing environment. Most existing service recommendation methods have serious shortcomings and cannot be directly adopted in the edge computing environment. For example, most existing methods cannot learn deep features of users or services, yet the edge computing environment contains a variety of devices with different configurations and functions, and it is necessary to learn the deep features behind those complex devices. In order to fully utilize hidden features, this paper proposes a new matrix factorization (MF) model with deep feature learning, which integrates a convolutional neural network (CNN). The proposed model is named Joint CNN-MF (JCM). JCM is capable of using the learned deep latent features of neighbors to infer the features of a user or a service. Meanwhile, to improve the accuracy of neighbor selection, the proposed model contains a novel similarity computation method. The CNN learns the neighbors' features, forms a feature matrix, and infers the features of the target user or target service. We conducted experiments on a real-world service dataset under a range of data densities to reflect the complex invocation cases in the edge computing environment. The experimental results verify that, compared to counterpart methods, our method consistently achieves higher QoS prediction accuracy.

176 citations
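At its core, JCM builds on matrix factorization over a sparse user-service QoS matrix. The following is a minimal sketch of that MF backbone for QoS prediction; the CNN-based deep feature learning and neighbor similarity computation described in the abstract are omitted, and the hyperparameters and toy data are illustrative assumptions only.

```python
# Minimal sketch of plain matrix factorization for QoS prediction (the backbone
# that deep-feature models such as JCM extend). Hyperparameters are illustrative.
import numpy as np

def train_mf(qos, rank=4, lr=0.01, reg=0.05, epochs=200, seed=0):
    """Factor an (n_users x n_services) QoS matrix with missing entries (NaN)
    into user and service latent feature matrices via SGD."""
    rng = np.random.default_rng(seed)
    n_u, n_s = qos.shape
    U = 0.1 * rng.standard_normal((n_u, rank))
    S = 0.1 * rng.standard_normal((n_s, rank))
    observed = np.argwhere(~np.isnan(qos))
    for _ in range(epochs):
        for u, s in observed:
            err = qos[u, s] - U[u] @ S[s]
            U[u] += lr * (err * S[s] - reg * U[u])
            S[s] += lr * (err * U[u] - reg * S[s])
    return U, S

# Toy example: predict a missing response-time value.
qos = np.array([[0.3,    np.nan, 1.2],
                [0.4,    0.9,    np.nan],
                [np.nan, 1.0,    1.1]])
U, S = train_mf(qos)
print("predicted QoS for (user 0, service 1):", U[0] @ S[1])
```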

Journal ArticleDOI
TL;DR: This article introduces a vehicular edge multi-access network that treats vehicles as edge computation resources to construct a cooperative and distributed computing architecture, and proposes a collaborative task offloading and output transmission mechanism to guarantee low latency as well as application-level performance.
Abstract: Mobile edge computing (MEC) has emerged as a promising paradigm to realize user requirements for low-latency applications. The deep integration of multi-access technologies and MEC can significantly enhance the access capacity between heterogeneous devices and MEC platforms. However, the traditional MEC network architecture cannot be directly applied to the Internet of Vehicles (IoV) due to vehicles' high-speed mobility and other inherent characteristics. Furthermore, given the large number of resource-rich vehicles on the road, offloading tasks and processing data on smart vehicles presents a new opportunity. To facilitate the integration of MEC technology into the IoV, this article first introduces a vehicular edge multi-access network that treats vehicles as edge computation resources to construct a cooperative and distributed computing architecture. For immersive applications, co-located vehicles inherently collect considerable identical and similar computation tasks. We propose a collaborative task offloading and output transmission mechanism to guarantee low latency as well as application-level performance. Finally, we take 3D reconstruction as an exemplary scenario to provide insights on the design of the network framework. Numerical results demonstrate that the proposed scheme is able to reduce the perception reaction time while ensuring the application-level driving experience.

176 citations
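The core decision in such a scheme is where each task should run: on the originating vehicle, on a co-located vehicle, or on a roadside MEC server. The sketch below shows a simple latency-estimate-driven choice under an assumed transmission-plus-computation delay model; the node parameters and task sizes are hypothetical, not values from the article.

```python
# Hypothetical sketch of a latency-driven offloading decision: a task runs locally,
# on a co-located vehicle, or on an MEC server, whichever minimizes estimated delay.
# The delay model (transfer time + compute time) and all numbers are assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_gcycles_per_s: float   # available compute capacity
    uplink_mbps: float         # 0 for local execution (no transfer needed)

def estimated_latency(task_mbits, task_gcycles, node):
    transfer = 0.0 if node.uplink_mbps == 0 else task_mbits / node.uplink_mbps
    compute = task_gcycles / node.cpu_gcycles_per_s
    return transfer + compute

def choose_target(task_mbits, task_gcycles, nodes):
    return min(nodes, key=lambda n: estimated_latency(task_mbits, task_gcycles, n))

nodes = [Node("local", 1.0, 0.0),
         Node("neighbor-vehicle", 4.0, 30.0),
         Node("mec-server", 20.0, 10.0)]
best = choose_target(task_mbits=8.0, task_gcycles=2.0, nodes=nodes)
print("offload to:", best.name)
```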

Proceedings ArticleDOI
16 Apr 2018
TL;DR: This paper proposes ITEM, an iterative algorithm with fast and big "moves": in each iteration, a graph is constructed to encode all the costs, converting the cost optimization into a graph cut problem that can simultaneously determine the placement of multiple service entities.
Abstract: While social Virtual Reality (VR) applications such as Facebook Spaces are becoming popular, they are not compatible with classic mobile- or cloud-based solutions due to their processing of tremendous amounts of data and exchange of delay-sensitive metadata. Edge computing may fulfill these demands better, but it remains an open problem to deploy social VR applications in an edge infrastructure while supporting economic operation of the edge clouds and satisfactory quality-of-service for the users. This paper presents the first formal study of this problem. We model and formulate a combinatorial optimization problem that captures all the intertwined goals. We propose ITEM, an iterative algorithm with fast and big "moves": in each iteration, we construct a graph to encode all the costs and convert the cost optimization into a graph cut problem. By obtaining the minimum s-t cut via existing max-flow algorithms, we can simultaneously determine the placement of multiple service entities, and thus the original problem can be addressed by solving a series of graph cuts. Our evaluations with large-scale, real-world data traces demonstrate that ITEM converges fast and outperforms baseline approaches by more than 2× in one-shot placement and around 1.3× in dynamic, online scenarios where users move arbitrarily in the system.

176 citations
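The graph-cut machinery ITEM relies on can be illustrated with a binary placement toy example: encode per-entity assignment costs as terminal arcs, encode pairwise communication costs as arcs between entities, and read the placement off a minimum s-t cut. The construction below (using networkx's min-cut routine) is a generic sketch of that technique, not the paper's exact cost model or its iterative move strategy.

```python
# Illustrative sketch: decide binary placement (edge cloud vs. remote cloud) for
# service entities via a minimum s-t cut. The cost encoding here is a toy example.
import networkx as nx

def place_via_min_cut(assign_cost, comm_cost):
    """assign_cost[v] = (cost if v is placed at the EDGE, cost if placed in the CLOUD);
    comm_cost[(u, v)] = extra cost incurred if u and v end up on different sides."""
    G = nx.DiGraph()
    for v, (edge_c, cloud_c) in assign_cost.items():
        # Cutting EDGE->v means "v goes to the cloud" (pay cloud_c); cutting
        # v->CLOUD means "v stays at the edge" (pay edge_c).
        G.add_edge("EDGE", v, capacity=cloud_c)
        G.add_edge(v, "CLOUD", capacity=edge_c)
    for (u, v), c in comm_cost.items():
        G.add_edge(u, v, capacity=c)
        G.add_edge(v, u, capacity=c)
    cut_value, (edge_side, _) = nx.minimum_cut(G, "EDGE", "CLOUD")
    placement = {v: ("edge" if v in edge_side else "cloud") for v in assign_cost}
    return cut_value, placement

cost, placement = place_via_min_cut(
    assign_cost={"A": (1.0, 4.0), "B": (3.0, 2.0), "C": (2.0, 2.5)},
    comm_cost={("A", "B"): 1.5, ("B", "C"): 0.5},
)
print("total cost:", cost, "placement:", placement)
```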

Journal ArticleDOI
TL;DR: A comprehensive survey highlighting recent progress in machine learning techniques for IoT and the relevant techniques, including traffic profiling, IoT device identification, security, edge computing infrastructure, network management, and typical IoT applications.
Abstract: The Internet of Things (IoT) has become an important network paradigm, and a large number of smart devices are connected through it. IoT systems are producing massive amounts of data, and thus more and more IoT applications and services are emerging. Machine learning, another important area, has achieved great success in several research fields such as computer vision, computer graphics, natural language processing, speech recognition, decision-making, and intelligent control. It has also been introduced into networking research: many studies investigate how to utilize machine learning to solve networking problems, including routing, traffic engineering, resource allocation, and security. Recently, there has been a rising trend of employing machine learning to improve IoT applications and provide IoT services such as traffic engineering, network management, security, Internet traffic classification, and quality-of-service optimization. This survey focuses on providing an overview of the application of machine learning in the domain of IoT. We provide a comprehensive survey highlighting recent progress in machine learning techniques for IoT and describe various IoT applications. The application of machine learning to IoT enables users to obtain deep analytics and develop efficient intelligent IoT applications. This paper differs from previously published surveys in focus, scope, and breadth; specifically, we emphasize the application of machine learning for IoT and cover the most recent advances. We attempt to cover the major applications of machine learning for IoT and the relevant techniques, including traffic profiling, IoT device identification, security, edge computing infrastructure, network management, and typical IoT applications. We also discuss research challenges and open issues.

175 citations
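One application covered by the survey, IoT device identification, typically reduces to supervised classification over coarse traffic features. The sketch below is a hypothetical illustration using synthetic flow statistics and a standard scikit-learn classifier; the feature set and data are assumptions for demonstration, not material from the paper.

```python
# Hypothetical sketch of IoT device identification from coarse per-flow traffic
# features using a standard classifier. Features and data are synthetic assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Per-flow features: [mean packet size (bytes), mean inter-arrival time (s), flows/min]
camera = np.column_stack([rng.normal(900, 50, 200),
                          rng.normal(0.02, 0.005, 200),
                          rng.normal(30, 5, 200)])
sensor = np.column_stack([rng.normal(120, 20, 200),
                          rng.normal(5.0, 1.0, 200),
                          rng.normal(2, 0.5, 200)])
X = np.vstack([camera, sensor])
y = np.array([0] * 200 + [1] * 200)   # 0 = camera, 1 = sensor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("device identification accuracy:", clf.score(X_te, y_te))
```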


Network Information
Related Topics (5)
Wireless sensor network: 142K papers, 2.4M citations, 93% related
Network packet: 159.7K papers, 2.2M citations, 93% related
Wireless network: 122.5K papers, 2.1M citations, 93% related
Server: 79.5K papers, 1.4M citations, 93% related
Key distribution in wireless sensor networks: 59.2K papers, 1.2M citations, 92% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    1,471
2022    3,274
2021    2,978
2020    3,397
2019    2,698
2018    1,649