
Showing papers on "Network virtualization published in 2021"


Book ChapterDOI
17 May 2021
TL;DR: This thesis addresses variants of the SVNE problem with different bandwidth and reliability requirements for transport networks, proposes a connectivity-aware VNE approach that ensures VN connectivity without bandwidth guarantees in the face of multiple link failures, and demonstrates the proposed schemes through extensive simulations.
Abstract: Network Virtualization (NV) is an enabling technology for the future Internet and next-generation communication networks. A fundamental problem in NV is to map the virtual nodes and virtual links of a virtual network (VN) to physical nodes and paths, respectively, known as the Virtual Network Embedding (VNE) problem. Computing a VNE that can survive physical resource failures is known as the survivable VNE (SVNE) problem, which has received significant attention recently. In this thesis, we address variants of the SVNE problem with different bandwidth and reliability requirements for transport networks. Specifically, the thesis includes four main contributions. First, a connectivity-aware VNE approach that ensures VN connectivity without bandwidth guarantees in the face of multiple link failures. Second, a joint spare capacity allocation and VNE scheme that provides bandwidth guarantees against link failures by augmenting VNs with the necessary spare capacity. Third, a generalized recovery mechanism to re-embed the VNs that are impacted by a physical node failure. Fourth, a reliable VNE scheme with dedicated protection that allows tuning of the available bandwidth of a VN during a physical link failure. We show the effectiveness of the proposed SVNE schemes through extensive simulations.

178 citations


Journal ArticleDOI
TL;DR: A distributed VNE system with historical archives (HAs) and metaheuristic approaches is proposed, with set-based particle swarm optimization (PSO) as the optimizer, to solve the VNE problem in a distributed way.
Abstract: Virtual network embedding (VNE) is an important problem in network virtualization for the flexible sharing of network resources. While most existing studies focus on centralized embedding for VNE, distributed embedding is considered more scalable and suitable for large-scale scenarios, but how virtual resources can be mapped to substrate resources effectively and efficiently remains a challenging issue. In this paper, we devise a distributed VNE system with historical archives (HAs) and metaheuristic approaches. First, we introduce metaheuristic approaches to each delegation of the distributed embedding system as the optimizer for VNE. Compared to the heuristic-based greedy algorithms used in existing distributed embedding approaches, which are prone to be trapped in local optima, metaheuristic approaches can provide better embedding performance for these distributed delegations. Second, an archive-based strategy is also introduced in the distributed embedding system to assist the metaheuristic algorithms. The archives are used to record the up-to-date information of frequently repeated tasks. By utilizing such archives as historical memory, metaheuristic algorithms can further improve embedding performance for frequently repeated tasks. Following this idea, we incorporate the set-based particle swarm optimization (PSO) as the optimizer and propose the distributed VNE system with HAs and set-based PSO (HA-VNE-PSO) system to solve the VNE problem in a distributed way. HA-VNE-PSO is empirically validated in scenarios of different scales. The experimental results verify that HA-VNE-PSO can scale well with respect to substrate networks, and the HA strategy is indeed effective in different scenarios.
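A minimal sketch of the historical-archive idea described in this abstract, assuming a dictionary archive keyed by a canonical signature of the VN request and a toy random-search optimizer standing in for the set-based PSO; names such as vn_signature and optimize_embedding are illustrative, not the authors' API.

```python
import random

def vn_signature(node_demands, link_demands):
    """Order-independent key identifying a (possibly repeated) VN request."""
    nodes = tuple(sorted(node_demands.items()))
    links = tuple(sorted((min(u, v), max(u, v), bw) for (u, v), bw in link_demands.items()))
    return (nodes, links)

def optimize_embedding(substrate_cpu, node_demands, seed=None, iters=200):
    """Toy optimizer: random perturbation search, optionally seeded from the archive."""
    best = dict(seed) if seed else {v: random.choice(list(substrate_cpu)) for v in node_demands}
    def cost(emb):  # penalize substrate nodes whose CPU capacity is exceeded
        load = {}
        for v, s in emb.items():
            load[s] = load.get(s, 0) + node_demands[v]
        return sum(max(0, l - substrate_cpu[s]) for s, l in load.items())
    best_cost = cost(best)
    for _ in range(iters):
        cand = dict(best)
        v = random.choice(list(node_demands))
        cand[v] = random.choice(list(substrate_cpu))
        if cost(cand) < best_cost:
            best, best_cost = cand, cost(cand)
    return best

archive = {}  # historical archive: request signature -> last successful embedding

def embed(substrate_cpu, node_demands, link_demands):
    key = vn_signature(node_demands, link_demands)
    emb = optimize_embedding(substrate_cpu, node_demands, seed=archive.get(key))
    archive[key] = emb  # keep the archive up to date for future repeats of this request
    return emb

if __name__ == "__main__":
    substrate = {"A": 50, "B": 40, "C": 30}
    demands = {"v1": 20, "v2": 25}
    links = {("v1", "v2"): 10}
    print(embed(substrate, demands, links))
```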

58 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyze cloud and edge computing paradigms from the perspectives of their features and pillars to identify the key motivators of transitions from one type of virtualized computing paradigm to another.

52 citations


Journal ArticleDOI
TL;DR: A security-aware VNE algorithm based on reinforcement learning (RL) is proposed that is superior to other typical algorithms in terms of long-term average return, long-term revenue consumption ratio, and virtual network request (VNR) acceptance rate.
Abstract: The virtual network embedding (VNE) algorithm has always been a key problem in network virtualization (NV) technology. At present, research in this field still has the following problems. The traditional way to solve the VNE problem is to use heuristic algorithms; however, this approach relies on manual embedding rules, which do not accord with the actual situation of VNE, and as the use of intelligent learning algorithms to solve the VNE problem has become a trend, this approach is gradually becoming outdated. At the same time, there are security problems in VNE, yet no intelligent algorithm addresses them. For this reason, this paper proposes a security-aware VNE algorithm based on reinforcement learning (RL). In the training phase, we use a policy network as a learning agent and take the extracted attributes of the substrate nodes, arranged as a feature matrix, as input. The learning agent is trained in this environment to obtain the mapping probability of each substrate node. In the test phase, we map nodes according to the mapping probability and use a breadth-first search (BFS) strategy to map links. For the security problem, we add a security requirement level constraint for each virtual node and a security level constraint for each substrate node: virtual nodes can only be embedded on substrate nodes whose security level is not lower than the required level. Experimental results show that the proposed algorithm is superior to other typical algorithms in terms of long-term average return, long-term revenue consumption ratio, and virtual network request (VNR) acceptance rate.
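For illustration, a minimal sketch of the node-mapping step under the security constraint described above: candidate substrate nodes are first filtered by security level and residual CPU, then one is sampled from a policy distribution over node attributes. The single-layer "policy network" and feature set here are assumptions, not the trained agent from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3,))  # toy policy weights over three node attributes

def node_probabilities(features):
    """Softmax scores over candidate substrate nodes (rows = nodes)."""
    logits = features @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def pick_substrate_node(substrate, v_cpu_demand, v_security_req):
    """Filter by security level and CPU, then sample a feasible node from the policy."""
    candidates = [s for s in substrate
                  if s["security"] >= v_security_req and s["cpu"] >= v_cpu_demand]
    if not candidates:
        return None  # no feasible node: the virtual network request must be rejected
    feats = np.array([[s["cpu"], s["bandwidth"], s["degree"]] for s in candidates], float)
    probs = node_probabilities(feats)
    return candidates[rng.choice(len(candidates), p=probs)]["id"]

substrate = [
    {"id": 0, "cpu": 40, "bandwidth": 100, "degree": 3, "security": 2},
    {"id": 1, "cpu": 60, "bandwidth": 80, "degree": 2, "security": 3},
]
print(pick_substrate_node(substrate, v_cpu_demand=30, v_security_req=3))
```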

51 citations


Journal ArticleDOI
TL;DR: The RDAM algorithm is proposed, which is the first algorithm to apply spectral analysis and perturbation theory to virtual network embedding, and it outperforms three other algorithms in terms of several evaluation metrics, such as long-term average revenue, long-term revenue consumption ratio, and acceptance ratio.
Abstract: Network virtualization makes it possible to manage multiple virtual networks simultaneously on a substrate physical network. Virtual network embedding (VNE) is the critical step of network virtualization that maps virtual network requests to substrate physical networks. The majority of current virtual network embedding algorithms utilize heuristic algorithms and manually customize a series of rules and assumptions; as a result, their experimental results are not particularly convincing. This paper proposes a reinforcement learning based dynamic attribute matrix representation (RDAM) algorithm for virtual network embedding. The RDAM algorithm decomposes the process of node mapping into the following three steps: (1) static representation of the substrate physical network; (2) dynamic update of the substrate physical network; and (3) a reinforcement-learning-based mapping algorithm. To the best of our knowledge, RDAM is the first algorithm to apply spectral analysis and perturbation theory to virtual network embedding. Meanwhile, training a virtual network embedding algorithm by reinforcement learning is also non-trivial. Furthermore, we compare the RDAM algorithm with three other virtual network embedding algorithms. The results show that RDAM outperforms the other three algorithms in terms of several evaluation metrics, such as long-term average revenue, long-term revenue consumption ratio, and acceptance ratio.

43 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a framework for optimizing the number and location of CUs, the function split for each BS, and the association and routing for each DU-CU pair.
Abstract: Virtualized radio access networks (vRAN) are emerging as a key component of wireless cellular networks, and it is therefore imperative to optimize their architecture. vRANs are decentralized systems where the Base Station (BS) functions can be split between the edge Distributed Units (DUs) and Cloud computing Units (CUs); hence they have many degrees of design freedom. We propose a framework for optimizing the number and location of CUs, the function split for each BS, and the association and routing for each DU-CU pair. We combine a linearization technique with a cutting-planes method to expedite the exact problem solution. The goal is to minimize the network costs and balance them against the criterion of centralization, i.e., the number of functions placed at CUs. Using data-driven simulations, we find that multi-CU vRANs achieve cost savings of up to 28% and improve centralization by 77%, compared to single-CU vRANs. Interestingly, we see non-trivial trade-offs between centralization and cost, which can be aligned or conflicting depending on the traffic and network parameters. Our work sheds light on the vRAN design problem from a new angle, highlights the importance of deploying multiple CUs, and offers a rigorous optimization tool for balancing costs and performance.

37 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a method (cTMvSDN) that improves resource management based on a combination of a Markov process and the Time Division Multiple Access (TDMA) protocol.
Abstract: Network simulation and capabilities in the form of a logical network have rapidly accelerated the development of virtual networks, one of the best ways to increase productivity and optimize hardware equipment. Network virtualization plays a very important role in the development of networks as their size increases vastly. This paper examines one of the most important issues in network virtualization: providing efficient dynamic resource and infrastructure management in software-based networks. The proposed method (cTMvSDN) improves resource management based on a combination of a Markov process and the Time Division Multiple Access (TDMA) protocol. A customized module added to the controller initializes the mapping only when sufficient resources are available. To optimize response time and SDN quality of service, the Markov pattern and the TDMA slicing model are used to predict the next time gaps, and successfully mapped packets are sent in TDMA slots. Simulation results obtained with the NS2 and Mininet simulators show improvements in metrics such as delay and cost in comparison with relevant studies in the literature.
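As a rough illustration of the Markov-based prediction component, the sketch below assumes traffic in the next time gap follows a discrete-state Markov chain and reserves TDMA slots according to the most likely next state; the states, transition matrix, and slot-share rule are invented for the example, not taken from cTMvSDN.

```python
import numpy as np

# States: 0 = low, 1 = medium, 2 = high expected traffic in the next time gap.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])   # P[i, j] = Pr(next state j | current state i)

def predict_next_state(current_state):
    """Most likely next traffic state according to the transition matrix."""
    return int(np.argmax(P[current_state]))

def tdma_slots_for(state, slots_per_frame=10):
    """Reserve more TDMA slots for flows expected to carry more traffic."""
    share = {0: 0.2, 1: 0.4, 2: 0.7}[state]
    return int(round(share * slots_per_frame))

current = 1
nxt = predict_next_state(current)
print(f"predicted state {nxt}, reserve {tdma_slots_for(nxt)} slots in the next frame")
```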

31 citations


Journal ArticleDOI
TL;DR: This paper proposes a new type of VNE algorithm that applies reinforcement learning (RL) and graph neural network theory, in particular the combination of a graph convolutional neural network (GCNN) and an RL algorithm, and effectively reduces the degree of resource fragmentation.
Abstract: Network virtualization (NV) is a technology with broad application prospects. Virtual network embedding (VNE) is the core problem of NV, which aims to provide more flexible underlying physical resource allocation for user function requests. The classical VNE problem is usually solved by heuristic methods, but these methods often limit the flexibility of the algorithm and ignore time constraints. In addition, the partition autonomy of physical domains and the dynamic characteristics of virtual network requests (VNRs) increase the difficulty of VNE. This paper proposes a new type of VNE algorithm that applies reinforcement learning (RL) and graph neural network (GNN) theory, in particular the combination of a graph convolutional neural network (GCNN) and an RL algorithm. Based on a self-defined fitness matrix and fitness value, we set up the objective function of the algorithm, realize an efficient dynamic VNE algorithm, and effectively reduce the degree of resource fragmentation. Finally, we use comparison algorithms to evaluate the proposed method. Simulation experiments verify that the dynamic VNE algorithm based on RL and GCNN has good basic VNE characteristics, and by changing the resource attributes of the physical and virtual networks we show that the algorithm has good flexibility.
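To make the GCNN component concrete, here is a minimal sketch of a single graph-convolution step over the substrate topology, using the standard symmetric normalization H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); the node features and random weights are illustrative assumptions and do not reproduce the paper's fitness matrix.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One propagation step with the symmetrically normalized adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)             # 3-node substrate topology
H = np.array([[40, 100],
              [20,  50],
              [60,  80]], dtype=float)             # per-node [CPU, bandwidth] features
W = np.random.default_rng(0).normal(size=(2, 4))   # learnable weights (random here)
print(gcn_layer(A, H, W).shape)                    # -> (3, 4) node embeddings
```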

24 citations


Posted Content
TL;DR: In this article, an artificial intelligence (AI)-native network slicing architecture for 6G networks is proposed to facilitate intelligent network management and support emerging AI services, where AI solutions are investigated for the entire lifecycle of network slicing, i.e., AI for slicing.
Abstract: With the global roll-out of the fifth generation (5G) networks, it is necessary to look beyond 5G and envision the sixth generation (6G) networks. The 6G networks are expected to have space-air-ground integrated networking, advanced network virtualization, and ubiquitous intelligence. This article proposes an artificial intelligence (AI)-native network slicing architecture for 6G networks to facilitate intelligent network management and support emerging AI services. AI is built in the proposed network slicing architecture to enable the synergy of AI and network slicing. AI solutions are investigated for the entire lifecycle of network slicing to facilitate intelligent network management, i.e., AI for slicing. Furthermore, network slicing approaches are discussed to support emerging AI services by constructing slice instances and performing efficient resource management, i.e., slicing for AI. Finally, a case study is presented, followed by a discussion of open research issues that are essential for AI-native network slicing in 6G.

23 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a virtual network embedding (VNE) mathematical model for optical data center networks and derive a priority-of-location VNE algorithm based on node proximity sensing and comprehensive path evaluation.
Abstract: The demand for data center bandwidth has exploded due to the continuous development of cloud computing, pushing the use of network resources close to saturation. Optical networks have become an encouraging technology for many burgeoning networks and parallel/distributed computing applications because of their huge bandwidth. This article focuses on efficient embedding of data centers into optical networks, which aims to reduce the complexity of the network topology by using the parallel transmission characteristics of optical fiber. We first present a novel virtual network embedding (VNE) mathematical model for optical data center networks. Then we derive a priority-of-location VNE algorithm based on node proximity sensing and comprehensive path evaluation. Furthermore, we propose routing and wavelength assignment for embedding DCNs into optical networks, and identify the lower bound on the required number of wavelengths. Extensive evaluations show that the proposed embedding algorithm can reduce the average waiting time of virtual network requests by 20 percent and increase the request acceptance rate and revenue-overhead ratio by 13 percent, compared to the latest VNE algorithm.
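As background for the routing and wavelength assignment step, a minimal first-fit sketch under the wavelength-continuity constraint is shown below; the toy topology, wavelength count, and first-fit policy are illustrative assumptions rather than the article's algorithm.

```python
NUM_WAVELENGTHS = 4
links = {("s1", "s2"): set(), ("s2", "s3"): set(), ("s1", "s3"): set()}  # used wavelengths per fiber

def first_fit_wavelength(path):
    """Return the first wavelength index free on every link of the path, else None."""
    for w in range(NUM_WAVELENGTHS):
        if all(w not in links[e] for e in path):
            for e in path:
                links[e].add(w)   # reserve the wavelength end to end
            return w
    return None  # wavelength blocking: the virtual link cannot be embedded here

# Embed two virtual links between s1 and s3 over the direct fiber.
print(first_fit_wavelength([("s1", "s3")]))   # -> 0
print(first_fit_wavelength([("s1", "s3")]))   # -> 1 (next free wavelength)
```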

20 citations


Journal ArticleDOI
TL;DR: This work proposes a virtual network embedding (VNE) algorithm with computing, storage, and security constraints to ensure the rationality and security of resource allocation in ICPSs, and uses a reinforcement learning (RL) method as a means to improve algorithm performance.
Abstract: The development of Intelligent Cyber-Physical Systems (ICPSs) in virtual network environments is facing severe challenges. On the one hand, the Internet of Things (IoT) built on ICPSs needs a large amount of suitable network resource support. On the other hand, ICPSs face severe network security problems. The integration of ICPSs and network virtualization (NV) can provide more efficient network resource support and security guarantees for IoT users. Based on these two problems faced by ICPSs, we propose a virtual network embedding (VNE) algorithm with computing, storage, and security constraints to ensure the rationality and security of resource allocation in ICPSs. In particular, we use a reinforcement learning (RL) method as a means to improve algorithm performance. We extract the important attribute characteristics of the underlying network as the training environment for the RL agent. The agent can derive the optimal node embedding strategy through training, so as to meet the requirements of ICPSs for resource management and security. The embedding of virtual links is based on a breadth-first search (BFS) strategy. The result is a comprehensive two-stage RL-VNE algorithm that considers three-dimensional constraints on computing, storage, and security resources. Finally, we design a large number of simulation experiments around typical indicators of VNE algorithms. The experimental results illustrate the effectiveness of the algorithm for ICPS applications.
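A minimal sketch of the BFS-based link-mapping stage mentioned above: once node placement is fixed, each virtual link is routed over the fewest-hop substrate path whose links all have enough residual bandwidth. The topology and demand values are illustrative assumptions.

```python
from collections import deque

def bfs_path(adj, bandwidth, src, dst, demand):
    """Shortest-hop path from src to dst using only links with enough residual bandwidth."""
    queue, parents = deque([src]), {src: None}
    while queue:
        u = queue.popleft()
        if u == dst:
            path, node = [], dst
            while node is not None:   # reconstruct the path from the parent pointers
                path.append(node)
                node = parents[node]
            return list(reversed(path))
        for v in adj[u]:
            if v not in parents and bandwidth[(min(u, v), max(u, v))] >= demand:
                parents[v] = u
                queue.append(v)
    return None  # no feasible path: the virtual link (and request) is rejected

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
bandwidth = {("A", "B"): 10, ("A", "C"): 50, ("B", "D"): 10, ("C", "D"): 50}
print(bfs_path(adj, bandwidth, "A", "D", demand=30))   # -> ['A', 'C', 'D']
```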

Journal ArticleDOI
TL;DR: The state-of-the-art 5G network slice architecture is analyzed, a number of network slicing architecture issues are addressed, and some open research questions are highlighted.

Journal ArticleDOI
TL;DR: This article proposes a network virtualization (NV)-based network architecture for cybertwin-enabled 6G core networks, reveals that the problem under consideration is formally a mixed-integer nonlinear program (MINLP), and proposes an improved brute-force search algorithm to find its optimal solutions.
Abstract: To efficiently allocate heterogeneous resources for customized services, in this article we propose a network virtualization (NV)-based network architecture for cybertwin-enabled 6G core networks. In particular, we investigate how to optimize the virtual network (VN) topology (which consists of several virtual nodes and a set of intermediate virtual links) and determine the resultant VN embedding in a joint way over a cybertwin-enabled substrate network. To this end, we formulate an optimization problem whose objective is to minimize the embedding cost, while ensuring that the end-to-end (E2E) packet delay requirements are satisfied. Queueing network theory is utilized to evaluate each service’s E2E packet delay, which is a function of the resources assigned to the virtual nodes and virtual links of the embedded VN. We reveal that the problem under consideration is formally a mixed-integer nonlinear program (MINLP) and propose an improved brute-force search algorithm to find its optimal solutions. To enhance the algorithm’s scalability and reduce the computational complexity, we further propose an adaptively weighted heuristic algorithm to obtain near-optimal solutions to the problem for large-scale networks. Simulations are conducted to show that the proposed algorithms can effectively improve network performance compared to other benchmark algorithms.
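As a worked illustration of the queueing-based delay evaluation, the sketch below assumes each embedded virtual node and link behaves as an independent M/M/1 queue, so the per-hop delay is 1/(μ − λ) and the E2E packet delay is the sum along the embedded path; the article's exact queueing-network model may differ.

```python
def e2e_delay(arrival_rate, service_rates):
    """Sum of M/M/1 sojourn times along the hops serving one service's flow."""
    delay = 0.0
    for mu in service_rates:
        if mu <= arrival_rate:
            return float("inf")   # unstable hop: assigned resources are insufficient
        delay += 1.0 / (mu - arrival_rate)
    return delay

# 100 packets/s traversing two virtual nodes and one virtual link, with
# capacities (packets/s) set by the resources assigned in the embedding.
print(e2e_delay(arrival_rate=100.0, service_rates=[150.0, 300.0, 180.0]))
# -> 0.02 + 0.005 + 0.0125 = 0.0375 s; compare against the service's E2E budget.
```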

Journal ArticleDOI
TL;DR: A location-aware network virtualization method is proposed that aims to minimize network load and resource cost by appropriately selecting the relevant cloud and mobile core entities to provide the required virtual networking services, facilitating secure isolation of the virtual networks with automated resource convergence.

Journal ArticleDOI
TL;DR: An offline embedding algorithm is developed that searches through all possible embeddings, which allowed the authors to explore the tradeoff between solution quality and search time and to identify a defined set of initial processing steps that lead to high-quality solutions in bounded time.
Abstract: A recent trend in wireless sensor networks (WSNs) is network virtualization to support on-demand sharing of sensing functionality. The efficient allocation of WSN resources to sensing requests is obtained using virtual network embedding (VNE). This must take into account Quality of Service (e.g., reliability), Quality of Information (e.g., sensing accuracy), and deal with wireless interference. With increased computational complexity due to the added constraints, finding an optimal solution can be prohibitive at scale. We developed an offline embedding algorithm that searches through all possible embeddings, which allowed us to explore the tradeoff between solution quality and search time. We identify a defined set of initial processing steps that lead to high-quality solutions (within 10% of the best solution) in bounded time. We evaluated the algorithm under high stress (large networks with long paths, high data rates, beyond typical WSN configuration) to understand its limitations and the limitations imposed by the underlying WSN substrate.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a big picture of recent developments in architectural frameworks for intelligent and autonomous management of future networks, survey the latest progress in the standardization of network management architectures, including works by 3GPP, ETSI, and ITU-T, and analyze how cloud-native network design may facilitate architecture development for addressing management challenges.
Abstract: Cloud-native network design, which leverages network virtualization and softwarization together with the service-oriented architectural principle, is transforming communication networks into a versatile platform for converged network-cloud/edge service provisioning. Intelligent and autonomous management is one of the most challenging issues in cloud-native future networks, and a wide range of machine learning (ML)-based technologies have been proposed for addressing different aspects of the management challenge. It becomes critical that the various management technologies are applied on the foundation of a consistent architectural framework with a holistic vision. This calls for standardization of a new management architecture that supports the seamless integration of diverse ML-based technologies in cloud-native future networks. The goal of this paper is to provide a big picture of recent developments in architectural frameworks for intelligent and autonomous management of future networks. The paper surveys the latest progress in the standardization of network management architectures, including works by 3GPP, ETSI, and ITU-T, and analyzes how cloud-native network design may facilitate architecture development for addressing management challenges. Open issues related to intelligent and autonomous management in cloud-native future networks are also discussed to identify some possible directions for future research and development.

Journal ArticleDOI
TL;DR: In this article, the authors propose a service function chain reliability evaluation method and a reliability optimization algorithm based on a Petri net model, and obtain reliability evaluation results related to execution time.
Abstract: With the development of information technology, networks have come to consist of various proprietary hardware devices, and the use of these devices brings problems. To solve these problems, network function virtualization has been proposed, which decouples the software and hardware in the network and deploys existing network functions on a common physical platform. However, network virtualization will inevitably face reliability problems during resource virtualization and service function chain deployment. This article proposes a service function chain reliability evaluation method and reliability optimization algorithm. The composition relationships and reliability influencing factors of the service function chain are analyzed, including resource preemption, common cause failure, fault recovery, and redundant backup. The service function chain is modeled as a Petri net, and reliability evaluation results related to execution time are obtained. Based on the reliability assessment results, a VNF migration strategy is designed, with reliability as the optimization goal while considering costs. Simulation results show that, compared with a backup-based reliability optimization strategy, our algorithm costs less and reduces the impact of resource preemption on service reliability.
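For intuition about the reliability evaluation, a minimal series-system sketch is shown below: the chain works only if every VNF stage works, and a stage with redundant backups fails only if all of its instances fail. The per-VNF reliabilities are illustrative; the Petri-net model in the article additionally captures resource preemption, common cause failure, and fault recovery.

```python
from math import prod

def stage_reliability(instance_reliabilities):
    """1 - probability that every redundant instance of the VNF fails."""
    return 1.0 - prod(1.0 - r for r in instance_reliabilities)

def chain_reliability(stages):
    """Series composition over the ordered VNFs of the chain."""
    return prod(stage_reliability(s) for s in stages)

# Firewall (no backup), NAT (one backup), load balancer (no backup).
chain = [[0.99], [0.95, 0.95], [0.98]]
print(round(chain_reliability(chain), 4))   # -> 0.99 * 0.9975 * 0.98 ≈ 0.9678
```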

Proceedings ArticleDOI
25 Oct 2021
TL;DR: In this paper, the authors propose Nuberu, a pipeline architecture for 4G/5G DUs specifically engineered for non-deterministic computing platforms, which guarantees a minimum set of signals that preserve synchronization between the distributed unit and its users during computing capacity shortages and, provided this, maximizes network throughput.
Abstract: RAN virtualization will become a key technology for the last mile of next-generation mobile networks driven by initiatives such as the O-RAN alliance. However, due to the computing fluctuations inherent to wireless dynamics and resource contention in shared computing infrastructure, the price to migrate from dedicated to shared platforms may be too high. Indeed, we show in this paper that the baseline architecture of a base station's distributed unit (DU) collapses upon moments of deficit in computing capacity. Recent solutions to accelerate some signal processing tasks certainly help but do not tackle the core problem: a DU pipeline that requires predictable computing to provide carrier-grade reliability. We present Nuberu, a novel pipeline architecture for 4G/5G DUs specifically engineered for non-deterministic computing platforms. Our design has one key objective to attain reliability: to guarantee a minimum set of signals that preserve synchronization between the DU and its users during computing capacity shortages and, provided this, maximize network throughput. To this end, we use techniques such as tight deadline control, jitter-absorbing buffers, predictive HARQ, and congestion control. Using an experimental prototype, we show that Nuberu attains >95% of the theoretical spectrum efficiency in hostile environments, where state-of-the-art approaches lose connectivity, and at least 80% resource savings.

Journal ArticleDOI
TL;DR: In this article, a genetic correlation multi-domain virtual network embedding algorithm (GCMD-VNE) is proposed, which improves the natural selection and crossover stages of the genetic algorithm, adds a more accurate selection formula and crossover conditions, and improves the performance of the algorithm.
Abstract: With the increase in network scale and the complexity of network structure, the problems of the traditional Internet have emerged. At the same time, the appearance of network function virtualization (NFV) and network virtualization technologies has largely solved these problems: they can effectively split the network according to application requirements and flexibly provide network functions when needed. During the development of virtual networks, how to improve network performance, including reducing the cost of the embedding process and shortening the embedding time, has received wide attention from academia. Combining a genetic algorithm with the virtual network embedding problem, this paper proposes a genetic correlation multi-domain virtual network embedding algorithm (GCMD-VNE). The algorithm improves the natural selection and crossover stages of the genetic algorithm, adding a more accurate selection formula and crossover conditions, and thereby improves performance. Simulation results show that, compared with existing algorithms, the algorithm performs better in terms of embedding cost and embedding time.
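To make the genetic operators concrete, the sketch below assumes a chromosome is a node-mapping list (virtual node i mapped to substrate node gene[i]) with roulette-wheel selection and single-point crossover; the toy cost function stands in for the paper's refined selection formula and crossover conditions, which are not reproduced here.

```python
import random

def fitness(chromosome, embedding_cost):
    """Lower embedding cost -> higher fitness."""
    return 1.0 / (1.0 + embedding_cost(chromosome))

def select(population, embedding_cost):
    """Roulette-wheel (fitness-proportional) selection of one parent."""
    weights = [fitness(c, embedding_cost) for c in population]
    return random.choices(population, weights=weights, k=1)[0]

def crossover(parent_a, parent_b):
    """Single-point crossover of two node-mapping chromosomes."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

# Toy cost: sum of substrate node indices used (pretend a smaller index is cheaper).
cost = lambda chrom: sum(chrom)
population = [[0, 2, 1], [1, 1, 3], [2, 0, 0]]
child = crossover(select(population, cost), select(population, cost))
print(child)
```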

Proceedings ArticleDOI
21 Jun 2021
TL;DR: The High-Rate Delay Tolerant Networking (HDTN) project has taken a distributed service-based approach to the development of a highly efficient delay tolerant networking (DTN) implementation.
Abstract: The High-Rate Delay Tolerant Networking (HDTN) project has taken a distributed service-based approach to the development of a highly efficient delay tolerant networking (DTN) implementation. Through the analysis of many DTN implementations, system and mission requirements as well as the DTN protocol specifications, HDTN has worked to infuse modern computing technologies into the NASA approach to interplanetary networking. The initial use case of the HDTN software runs on a hypervisor representative of the International Space Station (ISS) DTN Gateway. In this scenario, multiple emulated payloads will send science data through HDTN to a mission operations center. HDTN will provide store and forward capability as well as network flow management. This paper discusses the infusion path of cognitive networking technologies in the NASA Space Communications and Navigation (SCaN) networks using the DTN architecture and protocols as the basis for cognitive routing and network management capabilities. HDTN has been developing the Bundle Protocol encoding and decoding mechanisms and messaging framework that can be used as the basis for integrating DTN with various learning and decision-making processes. The concepts of distributed computing, network virtualization, software defined networking and delay tolerant networking are basic building blocks which will further the development of cognitive networking. In addition to discussion of the HDTN software development and testing, this paper examines the role that each of these technologies play in the evolution of the current state of space networking into an intelligent network of networks.

Journal ArticleDOI
TL;DR: This article presents the monitoring module of the authors' approach for Internet of Things (IoT) environments using the Open Network Operating System (ONOS) controller, a first step toward their forthcoming softwarized management framework, which will exploit the input information collected by this monitoring approach.
Abstract: Softwarizing a network can improve its overall resilience. A softwarized network is based on software-defined networking (SDN), network function virtualization (NFV), and network virtualization. It is well known that SDN controllers provide a wide vision of the whole network and grant programmability of the network, which is complementary to NFV functioning. SDN/NFV aims to make the physical equipment used more versatile. In this article, we present the monitoring module of our approach for Internet of Things (IoT) environments using the Open Network Operating System (ONOS) controller. This is the first step toward our next softwarized management framework, which will exploit the important input information collected by this monitoring approach.

Journal ArticleDOI
TL;DR: A dynamic VONE algorithm is proposed to allocate physical network resources to VON requests, considering not only bandwidth requirements but also key demands; the results show that, to achieve a tradeoff between blocking probability and key resource utilization, the values of the adjustment factor and the key generation rate should be set appropriately.

Journal ArticleDOI
Yang Wang, Qian Hu
TL;DR: In this paper, the authors study the optical virtual network embedding (OVNE) problem in SLICE-based network virtualization and propose a path growing algorithm to solve it.
Abstract: Network virtualization (NV) is considered to be a promising solution to address the ossification of the current Internet infrastructure. With the recent advancements in optical virtualization, network virtualization can be further extended to virtualization-enabled optical substrate networks that supply connectivity-as-a-service. This integration of NV and optical virtualization is referred to as optical-based network virtualization in this work, which warrants a future-proof Internet that is not only technology-friendly (i.e., due to the service and hardware decoupling in NV) but also bandwidth-abundant (i.e., with optical networking). In optical-based network virtualization, it remains to resolve the classic virtual network embedding (VNE) problem (an NP-complete problem) with the extra dimension of complexity arising from the physical characteristics of the specific type of optical network (e.g., WDM or SLICE). In this work, we study this variant of the VNE problem, namely Optical Virtual Network Embedding (OVNE), in SLICE-based network virtualization. We avoid addressing the OVNE problem with simple add-on constraints to VNE and instead address it with two strategies: first, exploring the OVNE problem structure at different granularities of network elements (e.g., path-channel and channel graph) to mitigate the complexity; second, designing a path growing process that solves the OVNE model with substantially reduced variables. The proposed approach is evaluated and compared to other representative path-based schemes with demonstrated improvements. The proposed approach can also obtain near-optimal solutions to the OVNE problem with guaranteed closeness to the optimal solution.

Proceedings ArticleDOI
01 Feb 2021
TL;DR: In this article, the authors experimentally evaluate and analyze the power consumption of a virtualized base station (vBS), find interesting tradeoffs between power savings and performance, and propose two linear mixed-effect models to approximate the experimental data.
Abstract: Network virtualization is intended to be a key element of new-generation networks. However, it is not clear how the implementation of this new paradigm will affect the power consumption of the network. To shed light on this relatively unexplored topic, we experimentally evaluate and analyze the power consumption of a virtualized Base Station (vBS). In particular, we measure the power consumption associated with uplink transmissions as a function of different variables such as traffic load, channel quality, modulation selection, and bandwidth. We find interesting tradeoffs between power savings and performance and propose two linear mixed-effect models to approximate the experimental data. These models allow us to understand the power behavior of the vBS and select power-efficient configurations. We release our experimental dataset hoping to foster further efforts in this research area.
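As an illustration of the modeling approach, the sketch below fits a linear mixed-effects model with statsmodels on synthetic data, regressing power on traffic load and channel quality with a random intercept per configuration group; the variables, grouping factor, and data are assumptions, not the released measurement dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
data = pd.DataFrame({
    "load": rng.uniform(0, 1, n),            # normalized uplink traffic load
    "snr": rng.uniform(5, 30, n),            # channel quality in dB
    "mcs_group": rng.integers(0, 4, n),      # grouping factor (e.g., modulation choice)
})
group_effect = rng.normal(0, 2, 4)[data["mcs_group"]]
data["power"] = 30 + 15 * data["load"] - 0.2 * data["snr"] + group_effect + rng.normal(0, 1, n)

# Random intercept per group, fixed effects for load and SNR.
model = smf.mixedlm("power ~ load + snr", data, groups=data["mcs_group"])
result = model.fit()
print(result.params)   # fixed-effect estimates for intercept, load, and snr
```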

Journal ArticleDOI
TL;DR: This work proposes a prototype of an architecture for robust service function chain instantiation with convergence and performance guarantees, describes its extensible management object model, and compares its asynchronous consensus's overhead against Raft, a recent decentralized consensus protocol, showing superior performance.
Abstract: The service function chaining paradigm links ordered service functions via network virtualization, in support of applications with severe network constraints. To provide wide-area (federated) virtual network services, a distributed architecture should orchestrate cooperating or competing processes to generate and maintain virtual paths hosting service function chains while guaranteeing performance and fast asynchronous consensus even in the presence of failures. To this end, we propose a prototype of an architecture for robust service function chain instantiation with convergence and performance guarantees. To instantiate a service chain, our system uses a fully distributed asynchronous consensus mechanism that has bounds on convergence time and leads to a (1 − 1/e)-approximation ratio with respect to the Pareto-optimal chain instantiation, even in the presence of (non-byzantine) failures. Moreover, we show that a better optimal chain approximation cannot exist. To establish the practicality of our approach, we evaluate the system performance, policy tradeoffs, and overhead via simulations and through a prototype implementation. We then describe our extensible management object model and compare our asynchronous consensus's overhead against Raft, a recent decentralized consensus protocol, showing superior performance. We furthermore discuss a new management object model for distributed service function chain instantiation.

Journal ArticleDOI
TL;DR: In this paper, a reliable interference-aware virtual network embedding (VNE-RIA) algorithm is proposed to provide reliability guarantees in the coordinated node and link mapping for various airborne tactical virtual networks (ATVNs).
Abstract: Airborne tactical networks (ATNs) are driving the growing development of network-centric warfare by maintaining coverage and providing reach-back to military units. However, the key function of ATNs is impeded by the network ossification that is deep-rooted in the tightly coupled, vertically integrated architecture of traditional ATNs. Network virtualization (NV) can provide a more flexible and scalable ATN architecture as a solution, breaking the tight coupling between applications and network infrastructure. One important aspect of NV is virtual network embedding (VNE), which instantiates multiple virtual networks on a shared substrate network. However, existing efforts are not necessarily optimal for the virtualization of ATNs due to the absence of QoS-compliant capacity for the complex interference in the air-combat field. To tackle this difficulty, a reliable interference-aware VNE algorithm, termed VNE-RIA, is proposed to provide reliability guarantees in the coordinated node and link mapping for various airborne tactical virtual networks (ATVNs). In the node mapping, VNE-RIA adopts a novel node ranking approach to rank all substrate and virtual nodes, considering the complex interference including link interference, environmental noise, and malicious attacks. In the link mapping, an improved anypath link mapping approach, based on the anypath routing scheme, is adopted to improve the reliability and efficiency of mapping virtual links by exploiting the unique features of wireless channels and the influence of different transmission rates. Numerical simulation results reveal that the VNE-RIA algorithm outperforms typical and the latest heuristic wireless VNE algorithms under the complex electromagnetic interference of ATNs.

Journal ArticleDOI
TL;DR: Sova is an autonomic framework that can combine virtual dynamic SR-IOV and virtual machine live migration for virtual network allocation in data centers, exploiting the advantages of both techniques to match and even beat the better performance of each individual technology by adapting to VM workload changes.
Abstract: With the rise of network virtualization, the workloads deployed in data centers have changed dramatically to support diverse service-oriented applications, which are in general characterized by time-bounded service responses that in turn put a great burden on data-center networks. Although numerous techniques have been proposed to optimize virtual network allocation in data centers, research on coordinating them in a flexible and effective way to autonomically adapt to the workloads for service time reduction is few and far between. To address these issues, in this article we propose Sova, an autonomic framework that can combine virtual dynamic SR-IOV (DSR-IOV) and virtual machine live migration (VLM) for virtual network allocation in data centers. DSR-IOV is a SR-IOV-based virtual network allocation technology, but its operation scope is limited to a single physical machine, which could lead to local hotspots during computation and communication and thereby increase the service response time. In contrast, VLM is an often-used virtualization technique to optimize global network traffic via VM migration. Sova exploits the software-defined approach to combine these two technologies, with the goal of reducing service response time. To realize the autonomic coordination, the architecture of Sova is designed around the MAPE-K loop of autonomic computing. With this design, Sova can adaptively optimize the network allocation between different services by coordinating DSR-IOV and VLM in an autonomic way, depending on the resource usage of physical servers and the network characteristics of VMs. To this end, Sova monitors the network traffic as well as the workload characteristics in the cluster, whereby the network properties are derived on the fly to direct the coordination between the two technologies. Our experiments show that Sova can exploit the advantages of both techniques to match and even beat the better performance of each individual technology by adapting to VM workload changes.

Journal ArticleDOI
TL;DR: A multi-layer feed-forward neural network is constructed based on the analysis of optimal auction design for resource allocation in wireless virtualization and can increase the revenue by 10 and 30 percent on average, for single MVNO and multi-MVNO cases, respectively.
Abstract: Wireless virtualization has become a key concept in future cellular networks, as it can provide virtualized wireless networks for different mobile virtual network operators (MVNOs) over the same physical infrastructure. Resource allocation is a main challenge in wireless virtualization, for which auction approaches have been widely used. However, for most existing auction-based allocation schemes, the objective is to maximize social welfare due to its simplicity, while in reality MVNOs are more interested in maximizing their own revenues. The revenue-maximization auction problem is much more complex, since the price is unknown before calculation. In this paper, we make a first attempt at designing a revenue-optimal auction mechanism for resource allocation in wireless virtualization. Considering the complexity of revenue maximization, deep learning techniques are applied. Specifically, we construct a multi-layer feed-forward neural network based on the analysis of optimal auction design. The neural network takes users' bids as the input and produces the allocation rule and conditional payment rule as the output. The training set of this neural network consists of the users' valuation profiles. The proposed auction mechanism possesses several satisfactory properties, e.g., individual rationality, incentive compatibility, and budget constraints. Finally, simulation results demonstrate the effectiveness of the proposed scheme.
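A minimal sketch of the feed-forward mapping described above: the network takes the bids as input and outputs an allocation rule (a probability of winning for each bidder) together with a conditional payment per bidder. The layer sizes, random weights, and payment head are illustrative assumptions, not the trained revenue-optimal mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bidders, hidden = 3, 8
W1, b1 = rng.normal(size=(n_bidders, hidden)), np.zeros(hidden)
W_alloc, W_pay = rng.normal(size=(hidden, n_bidders)), rng.normal(size=(hidden, n_bidders))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def auction_forward(bids):
    """bids -> (allocation probabilities, conditional payments)."""
    h = np.maximum(bids @ W1 + b1, 0.0)                  # ReLU hidden layer
    allocation = softmax(h @ W_alloc)                    # who wins the resource
    payments = np.maximum(h @ W_pay, 0.0) * allocation   # pay only if allocated
    return allocation, payments

alloc, pay = auction_forward(np.array([4.0, 7.5, 5.0]))
print(alloc.round(3), pay.round(3))
```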

Journal ArticleDOI
TL;DR: A method is provided for the learning machine to receive video frames from the networked camera with minimal delay by monitoring the status of available network interfaces in networked cameras for image-based object detection.
Abstract: These days, networked cameras are used in various applications that rely on deep learning. In particular, as deep learning technology for image processing develops, image-based application services using networked cameras are expanding. Object detection is a representative image-based application. Images from the networked camera are transmitted to a deep learning machine, which performs object detection using a deep neural network (DNN) algorithm. For real-time object detection, lightweight image learning steps are needed. Both preprocessing of training sets and lightweight learning models can reduce computing loads for image learning. However, it is most important to receive video frames from the network camera without delay. In this paper, we provide a way for the learning machine to receive video frames with minimal delay. The proposed method is a kind of network virtualization for image-based object detection: it monitors the status of available network interfaces in networked cameras, and when a camera transmits video frames, the virtualized module determines the appropriate network interface to reduce delay. The performance of the proposed method is evaluated in an image-based object detection system using deep learning.

Proceedings ArticleDOI
10 May 2021
TL;DR: In this paper, the authors proposed a bandwidth isolation guarantee for software-defined networking (SDN)-based network virtualization (NV), which provides topology and address virtualization while allowing flexible resource provisioning, control, and monitoring of virtual networks.
Abstract: We introduce TeaVisor, which provides bandwidth isolation guarantee for software-defined networking (SDN)-based network virtualization (NV). SDN-NV provides topology and address virtualization while allowing flexible resource provisioning, control, and monitoring of virtual networks. However, to the best of our knowledge, the bandwidth isolation guarantee, which is essential for providing stable and reliable throughput on network services, is missing in SDN-NV. Without bandwidth isolation guarantee, tenants suffer degraded service quality and significant revenue loss. In fact, we find that the existing studies on bandwidth isolation guarantees are insufficient for SDN-NV. With SDN-NV, routing is performed by tenants, and existing studies have not addressed the overloaded link problem. To solve this problem, TeaVisor designs three components: path virtualization, bandwidth reservation, and path establishment, which utilize multipath routing. With these, TeaVisor achieves the bandwidth isolation guarantee while preserving the routing of the tenants. In addition, TeaVisor guarantees the minimum and maximum amounts of bandwidth simultaneously. We fully implement TeaVisor, and the comprehensive evaluation results show near-zero error rates in achieving the bandwidth isolation guarantee. We also present an overhead analysis of control traffic and memory consumption.
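To illustrate the bandwidth-reservation idea, the sketch below greedily splits a tenant's guaranteed rate across the candidate physical paths of a virtual link without exceeding any path's residual capacity; the greedy policy and numbers are assumptions, not TeaVisor's actual path virtualization and reservation logic.

```python
def reserve(paths_residual, guaranteed_rate):
    """Greedy split of the guaranteed rate over paths with spare capacity."""
    reservation, remaining = {}, guaranteed_rate
    for path, residual in sorted(paths_residual.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        share = min(residual, remaining)
        if share > 0:
            reservation[path] = share
            remaining -= share
    if remaining > 0:
        return None   # isolation guarantee cannot be met: reject or re-route
    return reservation

# Three candidate paths between the virtual link's endpoints (residual Mb/s).
print(reserve({"p1": 40, "p2": 25, "p3": 10}, guaranteed_rate=60))
# -> {'p1': 40, 'p2': 20}
```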