
Showing papers in "IEEE Transactions on Network and Service Management in 2019"


Journal ArticleDOI
TL;DR: Different state-of-the-art DL techniques from (standard) TC are reproduced, dissected, and set into a systematic framework for comparison, including also a performance evaluation workbench, to propose deep learning classifiers based on automatically extracted features, able to cope with encrypted traffic, and reflecting their complex traffic patterns.
Abstract: The massive adoption of hand-held devices has led to the explosion of mobile traffic volumes traversing home and enterprise networks, as well as the Internet. Traffic classification (TC), i.e., the set of procedures for inferring (mobile) applications generating such traffic, has nowadays become the enabler for highly valuable profiling information (with certain privacy downsides), other than being the workhorse for service differentiation/blocking. Nonetheless, the design of accurate classifiers is exacerbated by the rising adoption of encrypted protocols (such as TLS), hindering the suitability of (effective) deep packet inspection approaches. Also, the fast-expanding set of apps and the moving-target nature of mobile traffic make design solutions based on usual machine learning, with manually crafted, expert-originated features, outdated and unable to keep pace. For these reasons, deep learning (DL) is here proposed, for the first time, as a viable strategy to design practical mobile traffic classifiers based on automatically extracted features, able to cope with encrypted traffic, and reflecting its complex traffic patterns. To this end, different state-of-the-art DL techniques from (standard) TC are here reproduced, dissected (highlighting critical choices), and set into a systematic framework for comparison, including also a performance evaluation workbench. The latter outcome, although developed in the mobile context, is also applicable to the wider umbrella of encrypted TC tasks. Finally, the performance of these DL classifiers is critically investigated based on an exhaustive experimental validation (based on three mobile datasets of real human users’ activity), highlighting the related pitfalls, design guidelines, and challenges.
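As a concrete illustration of the kind of automatically-extracted-feature classifier discussed above, the following is a minimal sketch of a 1D-CNN that classifies a biflow from its first payload bytes. It is not the authors’ architecture: the input length, number of apps, layer sizes, and training data are all illustrative placeholders.

# Minimal 1D-CNN traffic classifier sketch (hyperparameters are illustrative).
import numpy as np
from tensorflow import keras

N_BYTES, N_APPS = 784, 40        # assumed payload-byte input length and app count

model = keras.Sequential([
    keras.layers.Input(shape=(N_BYTES, 1)),
    keras.layers.Conv1D(32, kernel_size=25, activation="relu"),
    keras.layers.MaxPooling1D(pool_size=3),
    keras.layers.Conv1D(64, kernel_size=25, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(N_APPS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x: byte values scaled to [0, 1]; y: integer app labels (placeholder data).
x = np.random.rand(256, N_BYTES, 1).astype("float32")
y = np.random.randint(0, N_APPS, size=256)
model.fit(x, y, epochs=2, batch_size=64, verbose=0)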

359 citations


Journal ArticleDOI
TL;DR: The results obtained demonstrate that the proposed cloud-based anomaly detection model is superior in comparison to the other state-of-the-art models (used for network anomaly detection), in terms of accuracy, detection rate, false positive rate, and F-score.
Abstract: With the emergence of the Internet-of-Things (IoT) and seamless Internet connectivity, the need to process streaming data on a real-time basis has become essential. However, the existing data stream management systems are not efficient in analyzing the network log big data for real-time anomaly detection. Further, the existing anomaly detection approaches are not proficient because they cannot be applied to networks, are computationally complex, and suffer from high false positives. Thus, in this paper a hybrid data processing model for network anomaly detection is proposed that leverages grey wolf optimization (GWO) and convolutional neural network (CNN). To enhance the capabilities of the proposed model, the GWO and CNN learning approaches were enhanced with: 1) improved exploration, exploitation, and initial population generation abilities and 2) revamped dropout functionality, respectively. These extended variants are referred to as Improved-GWO (ImGWO) and Improved-CNN (ImCNN). The proposed model works in two phases for efficient network anomaly detection. In the first phase, ImGWO is used for feature selection in order to obtain an optimal trade-off between two objectives, i.e., reduced error rate and feature-set minimization. In the second phase, ImCNN is used for network anomaly classification. The efficacy of the proposed model is validated on benchmark (DARPA’98 and KDD’99) and synthetic datasets. The results obtained demonstrate that the proposed cloud-based anomaly detection model is superior in comparison to the other state-of-the-art models (used for network anomaly detection), in terms of accuracy, detection rate, false positive rate, and F-score. On average, the proposed model exhibits an overall improvement of 8.25%, 4.08%, and 3.62% in terms of detection rate, false positives, and accuracy, respectively, relative to standard GWO with CNN.
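For readers unfamiliar with grey wolf optimization, the sketch below shows how a GWO loop can drive binary feature selection toward the error-rate/feature-count trade-off described above. The fitness function, weights, and thresholding rule are illustrative stand-ins, not the paper’s ImGWO.

# Grey wolf optimization for binary feature selection (illustrative sketch).
import numpy as np

def gwo_feature_selection(fitness, n_feat, n_wolves=10, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.random((n_wolves, n_feat))              # continuous positions in [0, 1]
    mask = lambda p: p > 0.5                          # binarize via 0.5 threshold
    for t in range(iters):
        scores = np.array([fitness(mask(p)) for p in pos])
        alpha, beta, delta = pos[np.argsort(scores)[:3]]   # three best wolves
        a = 2 - 2 * t / iters                         # linearly decreasing coefficient
        for i in range(n_wolves):
            x = np.zeros(n_feat)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_feat), rng.random(n_feat)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - pos[i])
                x += leader - A * D
            pos[i] = np.clip(x / 3.0, 0.0, 1.0)
    best = pos[np.argmin([fitness(mask(p)) for p in pos])]
    return mask(best)

# Example fitness: weighted sum of a placeholder error rate and feature ratio.
def toy_fitness(selected, w=0.9):
    err = 0.5 if selected.sum() == 0 else 1.0 / selected.sum()
    return w * err + (1 - w) * selected.mean()

print(gwo_feature_selection(toy_fitness, n_feat=20))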

185 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a resource allocation architecture that enables energy-aware service function chaining (SFC) for SDN-based networks, while also considering constraints on delay, link utilization, and server utilization.
Abstract: Service function chaining (SFC) allows the forwarding of traffic flows along a chain of virtual network functions (VNFs). Software defined networking (SDN) solutions can be used to support SFC, reducing both the management complexity and the operational costs. One of the most critical issues for service and network providers is the reduction of energy consumption, which should be achieved without impacting the Quality of Service. In this paper, we propose a novel resource allocation architecture that enables energy-aware SFC for SDN-based networks, while also considering constraints on delay, link utilization, and server utilization. To this end, we formulate the problems of VNF placement, allocation of VNFs to flows, and flow routing as integer linear programming (ILP) optimization problems. Since the formulated problems cannot be solved (using ILP solvers) in acceptable timescales for realistic problem dimensions, we design a set of heuristics to find near-optimal solutions in timescales suitable for practical applications. We numerically evaluate the performance of the proposed algorithms over a real-world topology under various network traffic patterns. Our results confirm that the proposed heuristic algorithms provide near-optimal solutions (at most 14% optimality gap) while their execution time makes them usable for real-life networks.
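The flavor of the ILP formulations mentioned above can be conveyed with a toy placement model: minimize the number of powered-on servers (a proxy for energy) while placing every VNF within server capacity. The instance data below are invented, and the delay, link-utilization, and routing constraints of the actual formulation are omitted.

# Toy energy-aware VNF placement ILP (sketch; delay/link constraints omitted).
import pulp

vnfs = {"fw": 4, "nat": 2, "dpi": 6}        # CPU demand per VNF (illustrative)
servers = {"s1": 8, "s2": 8, "s3": 8}       # CPU capacity per server

prob = pulp.LpProblem("energy_aware_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (vnfs, servers), cat="Binary")   # VNF v on server s
y = pulp.LpVariable.dicts("y", servers, cat="Binary")           # server s powered on

prob += pulp.lpSum(y[s] for s in servers)                       # minimize active servers
for v in vnfs:                                                  # place each VNF exactly once
    prob += pulp.lpSum(x[v][s] for s in servers) == 1
for s in servers:                                               # capacity only if powered on
    prob += pulp.lpSum(vnfs[v] * x[v][s] for v in vnfs) <= servers[s] * y[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vnfs:
    for s in servers:
        if x[v][s].value() == 1:
            print(v, "->", s)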

133 citations


Journal ArticleDOI
TL;DR: This paper introduces a flexible, programmable, and open-source SDN platform for heterogeneous 5G RANs, building on an open protocol that abstracts the technology-dependent aspects of the radio access elements, allowing network programmers to deploy complex management tasks as policies on top of a programmable logically centralized controller.
Abstract: Software-defined networking (SDN) is making its way into the fifth generation of mobile communications. For example, 3GPP is embracing the concept of control-user plane separation (a cornerstone concept in SDN) in the 5G core and the radio access network (RAN). In this paper, we introduce a flexible, programmable, and open-source SDN platform for heterogeneous 5G RANs. The platform builds on an open protocol that abstracts the technology-dependent aspects of the radio access elements, allowing network programmers to deploy complex management tasks as policies on top of a programmable, logically centralized controller. We implement the proposed solution as an extension to the 5G-EmPOWER platform and release the software stack (including the southbound protocol) under a permissive Apache 2.0 license. Finally, the effectiveness of the platform is assessed through three reference use cases: 1) active network slicing; 2) mobility management; and 3) load-balancing.

101 citations


Journal ArticleDOI
TL;DR: This work focuses on the workload orchestration problem in which execution locations for incoming tasks from mobile devices are decided within an edge computing infrastructure, including the global cloud as well.
Abstract: Edge computing is based on the philosophy that data should be processed within the locality of its source. Edge computing is entering a new phase where it gains wide acceptance from both academia and industry as commercial deployments are starting. The network edge presents a very dynamic environment with many devices, intermittent traffic, high end-user mobility, and heterogeneous applications with diverse requirements. In this setting, scalable and efficient management and orchestration remain an open problem. We focus on the workload orchestration problem, in which execution locations for incoming tasks from mobile devices are decided within an edge computing infrastructure, including the global cloud as well. Workload orchestration is an intrinsically hard, online problem. We employ a fuzzy logic-based approach to solve this problem by capturing the intuition of a real-world administrator in an automated management system. Our approach takes into consideration the properties of the offloaded task as well as the current state of the computational and networking resources. A detailed set of experiments is designed with EdgeCloudSim to demonstrate the competitive performance of our approach for different service classes.
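The following is a small sketch of Mamdani-style fuzzy reasoning applied to the edge-versus-cloud offloading decision, assuming only two crisp inputs (edge utilization and WAN bandwidth). The membership functions and rule base are illustrative and far simpler than the rule base evaluated in the paper.

# Fuzzy-logic offloading sketch: decide edge vs. cloud from two crisp inputs.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def orchestrate(edge_util, wan_bw):
    # Fuzzify inputs (illustrative ranges: utilization 0-100 %, bandwidth 0-20 Mbps).
    util_low = tri(edge_util, -1, 0, 60)
    util_high = tri(edge_util, 40, 100, 101)
    bw_high = tri(wan_bw, 5, 20, 21)

    # Rule base (min for AND): prefer the edge while it is lightly loaded,
    # offload to the cloud only if the edge is busy and the WAN is fast.
    edge_score = util_low
    cloud_score = min(util_high, bw_high)
    return "edge" if edge_score >= cloud_score else "cloud"

print(orchestrate(edge_util=30, wan_bw=3))    # lightly loaded edge -> "edge"
print(orchestrate(edge_util=90, wan_bw=15))   # busy edge, fast WAN  -> "cloud"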

92 citations


Journal ArticleDOI
TL;DR: A decomposition approach is proposed to solve the problem of MEC resource provisioning and workload assignment for IoT services (RPWA) as a mixed integer program to jointly decide on the number and the location of edge servers and applications to deploy, in addition to the workload assignment.
Abstract: The proliferation of smart connected Internet of Things (IoT) devices is bringing tremendous challenges in meeting the performance requirement of their supported real-time applications due to their limited resources in terms of computing, storage, and battery life. In addition, the considerable amount of data they generate brings extra burden to the existing wireless network infrastructure. By enabling distributed computing and storage capabilities at the edge of the network, multi-access edge computing (MEC) serves delay sensitive, computationally intensive applications. Managing the heterogeneity of the workload generated by IoT devices, especially in terms of computing and delay requirements, while being cognizant of the cost to network operators, requires an efficient dimensioning of the MEC-enabled network infrastructure. Hence, in this paper, we study and formulate the problem of MEC resource provisioning and workload assignment for IoT services (RPWA) as a mixed integer program to jointly decide on the number and the location of edge servers and applications to deploy, in addition to the workload assignment. Given its complexity, we propose a decomposition approach to solve it which consists of decomposing RPWA into the delay aware load assignment sub-problem and the mobile edge servers dimensioning sub-problem. We analyze the effectiveness of the proposed algorithm through extensive simulations and highlight valuable performance trends and trade-offs as a function of various system parameters.

84 citations


Journal ArticleDOI
TL;DR: A formal scalability analysis along with an ns-3 simulation performance analysis demonstrate that IoT-HiTrust not only achieves scalability without compromising accuracy, convergence, and resiliency properties against malicious attacks but also outperforms contemporary distributed and centralized IoT trust management protocols.
Abstract: We propose and analyze a 3-tier cloud-cloudlet-device hierarchical trust-based service management protocol called IoT-HiTrust for large-scale mobile cloud Internet of Things (IoT) systems. Our mobile cloud hierarchical service management protocol allows an IoT customer to report its service experiences and query its subjective service trust score toward an IoT service provider following a scalable report-and-query design. We conduct a formal scalability analysis along with an ns-3 simulation performance analysis demonstrating that IoT-HiTrust not only achieves scalability without compromising accuracy, convergence, and resiliency properties against malicious attacks but also outperforms contemporary distributed and centralized IoT trust management protocols. We test the feasibility by applying IoT-HiTrust to two case studies: 1) a smart city travel service composition and binding application and 2) an air pollution detection and response application. The results demonstrate that IoT-HiTrust outperforms contemporary distributed and centralized trust-based IoT service management protocols in selecting trustworthy nodes to maximize application performance, while achieving scalability.

79 citations


Journal ArticleDOI
TL;DR: It turns out that even the most well-known learning technique is ineffective in the context of a large-scale action space; approaches are therefore proposed to find feasible solutions while significantly improving the exploration of the action space.
Abstract: Network Function Virtualization (NFV) and service orchestration simplify the deployment and management of network and telecommunication services. The deployment of these services typically requires the allocation of a Virtual Network Function - Forwarding Graph (VNF-FG), which implies not only the fulfillment of the service’s requirements in terms of Quality of Service (QoS), but also consideration of the constraints of the underlying infrastructure. This topic has been well studied in the existing literature; however, its complexity and the uncertainty of the available information still pose challenges for researchers and engineers. In this paper, we explore the potential of reinforcement learning techniques for the placement of VNF-FGs. However, it turns out that even the most well-known learning technique is ineffective in the context of a large-scale action space. In this respect, we propose approaches to find feasible solutions while significantly improving the exploration of the action space. The simulation results clearly show the effectiveness of the proposed learning approach for this category of problems. Moreover, thanks to the deep learning process, the performance of the proposed approach is improved over time.
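One common way to keep reinforcement learning usable when the placement action space is large is to restrict exploration to feasible actions at every step. The toy Q-learning sketch below illustrates that idea for a short VNF chain; the environment, capacities, and reward are placeholders, not the paper’s simulator or proposed algorithm.

# Tabular Q-learning with feasibility-masked actions (toy VNF placement sketch).
import random
from collections import defaultdict

N_NODES, CAPACITY = 4, 2          # substrate nodes and per-node VNF capacity
CHAIN_LEN = 3                     # number of VNFs in the forwarding graph

def feasible_actions(load):
    """Only nodes with residual capacity are valid placement actions."""
    return [n for n in range(N_NODES) if load[n] < CAPACITY]

def step(state, action):
    """Place the next VNF on 'action'; reward favors spreading load (illustrative)."""
    load = list(state)
    load[action] += 1
    reward = 1.0 - load[action] / (CAPACITY + 1)
    return tuple(load), reward

Q = defaultdict(float)
alpha, gamma, eps = 0.3, 0.9, 0.2
for episode in range(2000):
    state = (0,) * N_NODES
    for _ in range(CHAIN_LEN):
        acts = feasible_actions(state)
        if random.random() < eps:
            a = random.choice(acts)
        else:
            a = max(acts, key=lambda n: Q[(state, n)])
        nxt, r = step(state, a)
        best_next = max((Q[(nxt, n)] for n in feasible_actions(nxt)), default=0.0)
        Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
        state = nxt

start = (0,) * N_NODES
print("first placement:", max(feasible_actions(start), key=lambda n: Q[(start, n)]))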

78 citations


Journal ArticleDOI
TL;DR: In this paper, a distributed collision-avoidance scheduling (DCAS) algorithm is proposed to address the MLCAMDAS-MC problem in distributed WSNs, where the sensors are considered to be assigned the channels and the data are compressed with a flexible aggregation ratio.
Abstract: In wireless sensor networks (WSNs), the data sensed by sensors need to be gathered, so one very important application is periodical data collection. Much effort has been devoted to developing data collection scheduling algorithms that minimize latency. Most previous works investigating the minimum-latency data collection issue make the ideal assumption that the network is a centralized system, in which the entire network is completely synchronized with full knowledge of its components. In addition, most existing works often assume that any (or no) data in the network are allowed to be aggregated into one packet, and the network models are often treated as tree structures. In practice, however, WSNs are more likely to be distributed systems, since each sensor’s knowledge is disjoint from the others’, and only a fixed amount of data is allowed to be aggregated into one packet. This motivates us to investigate the problem of minimum-latency, collision-free data aggregation in distributed WSNs, where the sensors are assigned channels and the data are compressed with a flexible aggregation ratio, termed the minimum-latency collision-avoidance multiple-data-aggregation scheduling with multi-channel (MLCAMDAS-MC) problem. A new distributed algorithm, termed the distributed collision-avoidance scheduling (DCAS) algorithm, is proposed to address MLCAMDAS-MC. Finally, we provide theoretical analyses of DCAS and conduct extensive simulations to demonstrate its performance.

73 citations


Journal ArticleDOI
TL;DR: An approximation algorithm is proposed to solve the joint optimization problem to minimize the system cost (VM rentals) while guaranteeing QoS requirements, formulated as a mixed integer nonlinear programming problem.
Abstract: Fog-aided Internet of Things (IoT) addresses the resource limitations of IoT devices in terms of computing and energy capacities, and enables computational intensive and delay-sensitive tasks to be offloaded to the fog nodes attached to the IoT gateways. A fog node, utilizing the cloud technologies, can lease and release virtual machines (VMs) in an on-demand fashion. For the power-limited mobile IoT devices (e.g., wearable devices and smart phones), their quality of service may be degraded owing to the varying wireless channel conditions. Power control helps maintain the wireless transmission rate and hence the quality of service (QoS). The QoS (i.e., task completion time) is affected by both the fog processing and wireless transmission; it is thus important to jointly optimize fog resource provisioning (i.e., decisions on the number of VMs to rent) and power control. This paper addresses this joint optimization problem to minimize the system cost (VM rentals) while guaranteeing QoS requirements, formulated as a mixed integer nonlinear programming problem. An approximation algorithm is then proposed to solve the problem. Simulation results demonstrate the performance of our proposed algorithm.

73 citations


Journal ArticleDOI
Zhen Tu, Kai Zhao, Fengli Xu, Yong Li, Li Su, Depeng Jin
TL;DR: This paper is the first to recognize the semantic attack, which is another severe privacy problem in publishing trajectory datasets, and proposes an algorithm providing strong privacy protection against both the semantic and re-identification attacks while preserving high data utility.
Abstract: Nowadays, human trajectories are widely collected and utilized for scientific research and business purposes. However, publishing trajectory data without proper handling might cause severe privacy leakage. A large body of work is dedicated to merging one’s trajectory with others’, so as to avoid any individual trajectory being re-identified. Yet these solutions do not provide enough protection, since they cannot prevent the semantic attack, in which attackers are able to acquire an individual’s private information by using the semantic features of frequently visited locations in the trajectory even without re-identification. In this paper, we are the first to recognize the semantic attack, which is another severe privacy problem in publishing trajectory datasets. We propose an algorithm providing strong privacy protection against both the semantic and re-identification attacks while preserving high data utility. Extensive evaluations based on two real-world datasets demonstrate that our solution improves the quality of privacy protection threefold, sacrificing only 36% and 10% of spatial and temporal resolution, respectively.

Journal ArticleDOI
TL;DR: A dynamic DDoS attack detection system is proposed based on three main components: classification algorithms, a distributed system, and a fuzzy logic system; fuzzy logic is used to dynamically select an algorithm from a set of prepared classification algorithms that detect different DDoS patterns.
Abstract: Distributed denial of service (DDoS) attacks are a major security threat against the availability of conventional or cloud computing resources. Numerous DDoS attacks, which have been launched against various organizations in the last decade, have had a direct impact on both vendors and users. Many researchers have attempted to tackle the security threat of DDoS attacks by combining classification algorithms with distributed computing. However, their solutions are static in terms of the classification algorithms used. In fact, current DDoS attacks have become so dynamic and sophisticated that they are able to evade the detection system, making them difficult for static solutions to detect. In this paper, we propose a dynamic DDoS attack detection system based on three main components: 1) classification algorithms; 2) a distributed system; and 3) a fuzzy logic system. Our framework uses fuzzy logic to dynamically select an algorithm from a set of prepared classification algorithms that detect different DDoS patterns. Out of the many candidate classification algorithms, we use Naive Bayes, Decision Tree (Entropy), Decision Tree (Gini), and Random Forest. We have evaluated the performance of the classification algorithms and their delays, and validated the fuzzy logic system. We have also evaluated the effectiveness of the distributed system and its impact on the classification algorithms’ delay. The results show that there is a trade-off between the utilized classification algorithms’ accuracies and their delays. We observe that the fuzzy logic system can effectively select the right classification algorithm based on the traffic status.
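A minimal sketch of the selection idea follows: the four candidate classifiers are trained offline, and a selector picks one of them at run time depending on the traffic status. Here a simple rate-threshold rule stands in for the fuzzy logic system, and the dataset and thresholds are synthetic.

# Dynamic selection among candidate DDoS classifiers (sketch; a load-based
# rule stands in for the paper's fuzzy logic system).
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

candidates = {
    "naive_bayes": GaussianNB(),                                  # fastest, least accurate
    "dt_entropy": DecisionTreeClassifier(criterion="entropy"),
    "dt_gini": DecisionTreeClassifier(criterion="gini"),
    "random_forest": RandomForestClassifier(n_estimators=50),     # slowest, most accurate
}

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
for clf in candidates.values():
    clf.fit(X, y)

def select_classifier(traffic_rate_pps):
    """Pick a cheaper model under heavy load, a heavier one when load allows."""
    if traffic_rate_pps > 100_000:
        return candidates["naive_bayes"]
    if traffic_rate_pps > 10_000:
        return candidates["dt_gini"]
    return candidates["random_forest"]

clf = select_classifier(traffic_rate_pps=50_000)
print(type(clf).__name__, clf.predict(X[:5]))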

Journal ArticleDOI
TL;DR: A hybrid slice reconfiguration (HSR) framework is proposed, where a fast slice reconfiguration (FSR) scheme reconfigures flows for individual slices at the time scale of flow arrival/departure, while a dimensioning slices with reconfiguration (DSR) scheme is occasionally performed to adjust allocated resources according to the time-varying traffic demand.
Abstract: Network slicing enables diversified services to be accommodated by isolated slices in network function virtualization-enabled software-defined networks. To maintain satisfactory user experience and high profit for service providers in a dynamic environment, a slice may need to be reconfigured according to the varying traffic demand and resource availability. However, frequent reconfigurations incur a certain cost and might cause service interruption. In this paper, we propose a hybrid slice reconfiguration (HSR) framework, where a fast slice reconfiguration (FSR) scheme reconfigures flows for individual slices at the time scale of flow arrival/departure, while a dimensioning slices with reconfiguration (DSR) scheme is occasionally performed to adjust allocated resources according to the time-varying traffic demand. In order to optimize the slice’s profit, i.e., the total utility minus the resource consumption and reconfiguration cost, we formulate the problems for FSR and DSR, which are difficult to solve due to the discontinuity and non-convexity of the reconfiguration cost function. Hence, we approximate the reconfiguration cost function with the $L_1$ norm, which preserves the sparsity of the solution, thus helping to restrict reconfigurations. Besides, we design an algorithm to schedule FSR and DSR, so that DSR is triggered in a timely manner according to the traffic dynamics and resource availability to improve the profit of the slice. Furthermore, we extend HSR with a resource reservation mechanism, which reserves partial resources for near-future traffic to reduce potential reconfigurations. Numerical results validate that our reconfiguration framework is effective in reducing reconfiguration overhead and achieving high profit for slices.
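A small numeric sketch, under assumed weights, of why the $L_1$ surrogate favors sparse reconfigurations: the profit is utility minus resource cost minus an $L_1$ penalty on the change of the allocation vector, so touching fewer flows incurs a smaller penalty. All symbols and prices below are illustrative.

# Profit with L1-approximated reconfiguration cost (illustrative weights).
import numpy as np

def profit(alloc_new, alloc_old, resource_price=0.2, reconf_price=0.5):
    utility = np.sum(np.log1p(alloc_new))                 # concave utility stand-in
    resource_cost = resource_price * np.sum(alloc_new)
    reconf_cost = reconf_price * np.sum(np.abs(alloc_new - alloc_old))   # L1 surrogate
    return utility - resource_cost - reconf_cost

old = np.array([2.0, 1.0, 0.5])
keep = profit(old, old)                                  # no reconfiguration penalty
change_all = profit(np.array([2.5, 1.5, 1.0]), old)      # pays L1 penalty on every flow
change_one = profit(np.array([2.0, 1.0, 1.5]), old)      # sparse change, smaller penalty
print(round(keep, 3), round(change_all, 3), round(change_one, 3))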

Journal ArticleDOI
TL;DR: An ad-hoc mobile edge cloud is proposed that takes advantage of Wi-Fi Direct as a means of achieving connectivity, sharing resources, and integrating security services among nearby mobile devices; it provides optimal offloading decisions and distribution of security services without sacrificing security.
Abstract: While the usage of smart devices is increasing, security attacks and malware affecting such terminals are briskly evolving as well. Mobile security suites exist to defend devices against malware and other intrusions. However, they require extensive resources not continuously available on mobile terminals, hence affecting their relevance, efficiency, and sustainability. In this paper, we address the aforementioned problem while taking into account the devices’ limited resources, such as energy and CPU usage, as well as mobile connectivity and latency. In this context, we propose an ad-hoc mobile edge cloud that takes advantage of Wi-Fi Direct as a means of achieving connectivity, sharing resources, and integrating security services among nearby mobile devices. The proposed scheme embeds a multi-objective resource-aware optimization model and a genetic-based solution that provide smart offloading decisions based on dynamic profiling of contextual and statistical data from the ad-hoc mobile edge cloud devices. The experiments carried out illustrate the relevance and efficiency of exchanging security services while maintaining their sustainability with or without the availability of an Internet connection. Moreover, the results provide optimal offloading decisions and distribution of security services while significantly reducing energy consumption, execution time, and the number of selected computational nodes without sacrificing security.

Journal ArticleDOI
TL;DR: The updating process has been formulated as a reinforcement learning (RL) problem whose solution prescribes optimal disseminating policies and results show an increase of up to 147% in the accumulated per-node throughput when the RL-based approach is employed.
Abstract: LoRa is an extremely flexible low-power wide-area technology that enables each IoT node to individually adjust its transmission parameters. Consequently, the average per-node throughput of LoRa-based networks has been mathematically formulated and the optimal network-level configuration derived. For end nodes to update their transmission parameters, this centrally computed global configuration must then be disseminated by LoRa gateways. Unfortunately, the regional limitations imposed on the usage of ISM bands—especially those related to the maximum utilization of the band—pose a potential handicap to this parameter dissemination. To solve this problem, a set of tools from the machine learning field have been used. Precisely, the updating process has been formulated as a reinforcement learning (RL) problem whose solution prescribes optimal disseminating policies. The use of these policies together with the optimal network configuration has been extensively analyzed and compared to other well-established alternatives. Results show an increase of up to 147% in the accumulated per-node throughput when our RL-based approach is employed.

Journal ArticleDOI
TL;DR: Numerical results show that near-optimal RWA can be obtained with the ML approach, while reducing computational time up to 93% in comparison to a traditional optimization approach based on integer linear programming.
Abstract: Recently, machine learning (ML) has attracted the attention of both researchers and practitioners to address several issues in the optical networking field. This trend has been mainly driven by the huge amount of available data (i.e., signal quality indicators, network alarms, etc.) and by the large number of optimization parameters that characterize current optical networks (such as modulation format, lightpath routes, transport wavelength, etc.). In this paper, we leverage techniques from the ML discipline to efficiently accomplish the routing and wavelength assignment (RWA) for an input traffic matrix in an optical WDM network. Numerical results show that near-optimal RWA can be obtained with our approach, while reducing computational time by up to 93% in comparison to a traditional optimization approach based on integer linear programming. Moreover, to further demonstrate the effectiveness of our approach, we deployed the ML classifier into an ONOS-based software defined optical network laboratory testbed, where we evaluate the performance of the overall RWA process in terms of computational time.
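The general workflow can be sketched as follows: label traffic matrices offline with the route decision of an exact solver, train a classifier on those pairs, and query it online in place of the ILP. The synthetic data, label rule, and model below are illustrative, not the paper’s classifier.

# Learn RWA decisions from solver-labeled examples (sketch with synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
N_PAIRS, N_ROUTES = 12, 4                  # flattened traffic matrix size, candidate routes
X = rng.random((5000, N_PAIRS))            # traffic matrices (normalized demands)
y = (X.sum(axis=1) * N_ROUTES).astype(int) % N_ROUTES   # placeholder "ILP-optimal" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Online phase: predicting a route takes milliseconds instead of solving an ILP.
print("held-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
print("route for new traffic matrix:", clf.predict(rng.random((1, N_PAIRS)))[0])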

Journal ArticleDOI
TL;DR: This paper formulates the problem mathematically, proposes a joint deployment and backup scheme (JDBS), and conducts numerical simulations whose results show that JDBS is clearly superior to the contrasting schemes and can save up to about 40% of resources.
Abstract: By means of network function virtualization (NFV), dedicated proprietary network devices can be implemented as software and instantiated flexibly on common off-the-shelf servers, in the form of virtual network functions (VNFs). NFV can bring great cost reduction as well as operational flexibility. However, it also brings new problems, one of which is how to meet the availability of network services in the VNF deployment process, because of the error-prone nature of software. The availability-aware VNF deployment problem has attracted attention from academia, and reserving redundancy has been treated as the de facto technique. Compared with traditional backup schemes for physical machines, resource orchestration in NFV is more flexible, and the characteristics of software should be considered to improve resource utilization efficiency. Based on the above considerations, in this paper we further study the availability-aware VNF deployment problem in datacenter networks. To improve resource utilization efficiency, the sharing mechanism of redundancy and multi-tenancy technology are taken into account. We then formulate the problem mathematically and propose a joint deployment and backup scheme (JDBS). Finally, we conduct a detailed numerical simulation and compare JDBS with four contrasting schemes from the existing literature. The simulation results show that JDBS is clearly superior to the contrasting schemes and can save up to about 40% of resources.

Journal ArticleDOI
TL;DR: A model of the adaptive and dynamic VNF allocation problem considering also VNF migration is provided and AD3, an alternating direction method of multipliers-based algorithm, is adopted to solve this problem in a distributed way.
Abstract: Network function virtualization (NFV) will simplify the deployment and management of network and telecommunication services. NFV provides flexibility by virtualizing the network functions and moving them to a virtualization platform. In order to achieve its full potential, NFV is being extended to mobile or wireless networks by considering the virtualization of radio functions. A typical network service setup requires the allocation of a virtual network function-forwarding graph (VNF-FG). A VNF-FG is allocated considering the resource constraints of the underlying infrastructure. This topic has been well studied in the existing literature; however, the effects of network variations over time have not yet been addressed. In this paper, we provide a model of the adaptive and dynamic VNF allocation problem that also considers VNF migration. We then formulate the optimization problem as an integer linear program (ILP) and provide a heuristic algorithm for allocating multiple VNF-FGs. The idea is that VNF-FGs can be reallocated dynamically to obtain the optimal solution over time. First, a centralized optimization approach is proposed to cope with the ILP resource allocation problem. Next, a decentralized optimization approach is proposed to deal with cooperative multi-operator scenarios. We adopt AD3, an alternating direction method of multipliers-based algorithm, to solve this problem in a distributed way. The results confirm that the proposed algorithms are able to optimize the network utilization, while limiting the number of reallocations of VNFs which could interrupt network services.

Journal ArticleDOI
TL;DR: An extensive parametric analysis is carried out that highlights how diverse performance guarantees, technological settings, and slice configurations impact the resource utilization at different levels of the infrastructure in presence of network slicing.
Abstract: The economic sustainability of future mobile networks will largely depend on the strong specialization of their offered services. Network operators will need to provide added value to their tenants, by moving from the traditional one-size-fits-all strategy to a set of virtual end-to-end instances of a common physical infrastructure, named network slices, which are especially tailored to the requirements of each application. Implementing network slicing has significant consequences in terms of resource management: service customization entails assigning to each slice fully dedicated resources, which may also be dynamically reassigned and overbooked in order to increase the cost-efficiency of the system. In this paper, we adopt a data-driven approach to quantify the efficiency of resource sharing in future sliced networks. Building on metropolitan-scale real-world traffic measurements, we carry out an extensive parametric analysis that highlights how diverse performance guarantees, technological settings, and slice configurations impact the resource utilization at different levels of the infrastructure in the presence of network slicing. Our results provide insights on the achievable efficiency of network slicing architectures, their dimensioning, and their interplay with resource management algorithms at different locations and reconfiguration timescales.

Journal ArticleDOI
TL;DR: This paper proposes a model for dynamic trading of mobile network resources in a market that enables automatic optimization of technical parameters and of economic prices according to high level policies defined by the tenants and introduces a mathematical formulation for the problems of resource allocation and price definition.
Abstract: Expanding the market of mobile network services and defining solutions that are cost efficient are the key challenges for next generation mobile networks. Network slicing is commonly considered to be the main instrument to exploit the flexibility of the new radio interface and core network functions. It targets splitting resources among services with different requirements and tailoring system parameters according to their needs. Regulation authorities also recognize network slicing as a way of opening the market to new players who can specialize in providing new mobile services acting as “tenants” of the slices. Resources can also be distributed between infrastructure providers and tenants so that they meet the requirements of the services offered. In this paper, we propose a model for dynamic trading of mobile network resources in a market that enables automatic optimization of technical parameters and of economic prices according to high level policies defined by the tenants. We introduce a mathematical formulation for the problems of resource allocation and price definition and show how the proposed approach can cope with quite diverse service scenarios presenting a large set of numerical results.

Journal ArticleDOI
TL;DR: A novel method to adaptively and automatically identify the most appropriate model to accurately estimate data center resources utilization is proposed and trains a classifier based on statistical features of historical resources usage to decide the appropriate prediction model to use for given resource utilization observations collected during a specific time interval.
Abstract: Accurate estimation of data center resource utilization is a challenging task due to multi-tenant co-hosted applications having dynamic and time-varying workloads. Accurate estimation of future resource utilization helps in better job scheduling, workload placement, capacity planning, proactive auto-scaling, and load balancing. Inaccurate estimation leads to either under- or over-provisioning of data center resources. Most existing estimation methods are based on a single model that often does not appropriately estimate different workload scenarios. To address these problems, we propose a novel method to adaptively and automatically identify the most appropriate model to accurately estimate data center resource utilization. The proposed approach trains a classifier based on statistical features of historical resource usage to decide the appropriate prediction model to use for given resource utilization observations collected during a specific time interval. We evaluated our approach on real datasets and compared the results with multiple baseline methods. The experimental evaluation shows that the proposed approach outperforms the state-of-the-art approaches and delivers 6% to 27% improved resource utilization estimation accuracy compared to baseline methods.
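A compact sketch of the idea, assuming only two candidate predictors and three statistical features per utilization window; the real method uses richer features and models, so everything below is illustrative.

# Pick a prediction model from statistical features of recent utilization (sketch).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(w):
    """Simple statistical descriptors of a utilization window."""
    slope = np.polyfit(np.arange(len(w)), w, 1)[0]
    return [np.mean(w), np.std(w), slope]

def predict_last_value(w):      # candidate model A: naive persistence
    return w[-1]

def predict_linear_trend(w):    # candidate model B: extrapolate a linear fit
    b, a = np.polyfit(np.arange(len(w)), w, 1)
    return a + b * len(w)

# Offline: label each historical window with whichever model predicted it best.
rng = np.random.default_rng(1)
windows = [rng.random(24) + (0.02 * np.arange(24) if i % 2 else 0) for i in range(400)]
labels = [0 if abs(predict_last_value(w[:-1]) - w[-1]) <=
               abs(predict_linear_trend(w[:-1]) - w[-1]) else 1 for w in windows]
clf = DecisionTreeClassifier().fit([window_features(w[:-1]) for w in windows], labels)

# Online: classify the new window, then run only the selected predictor.
w = rng.random(23) + 0.02 * np.arange(23)
model = [predict_last_value, predict_linear_trend][clf.predict([window_features(w)])[0]]
print(model.__name__, round(float(model(w)), 3))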

Journal ArticleDOI
TL;DR: Two virtual functions placement approaches in a Fog domain are proposed, aiming at minimizing both the worst application completion time and the number of applications in outage, and the stability of the reached matchings has been theoretically proved for both the proposed solutions.
Abstract: This paper proposes two placement approaches for virtual functions (VFs) in a Fog domain. The considered solutions formulate a matching game with externalities, aiming at minimizing both the worst application completion time and the number of applications in outage, i.e., the number of applications with an overall completion time greater than a given deadline. The first proposed matching game is established between the set of VFs and the set of fog nodes (FNs) by taking into account the ordered sequence of services (i.e., chain) requested by each application. Conversely, the second proposed method overlooks the applications’ service chain structure in formulating the VF placement problem, with the aim of lowering the computational complexity without losing performance. Furthermore, in order to complete our analysis, the stability of the reached matchings has been theoretically proved for both the proposed solutions. Finally, performance comparisons of the proposed matching theory approaches with different alternatives are provided to highlight the superior performance of the proposed methods.
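The sketch below conveys the matching mechanics with a capacity-constrained deferred-acceptance loop between VFs and fog nodes. For brevity it ignores externalities (preferences that depend on the current matching) and uses invented demands, capacities, and preference lists.

# Capacity-constrained deferred acceptance between VFs and fog nodes (sketch).
vf_demand = {"vf1": 3, "vf2": 2, "vf3": 4, "vf4": 1}
fn_capacity = {"fnA": 5, "fnB": 7}
# Each VF ranks fog nodes (illustrative, e.g., by expected completion time).
vf_pref = {"vf1": ["fnA", "fnB"], "vf2": ["fnA", "fnB"],
           "vf3": ["fnB", "fnA"], "vf4": ["fnA", "fnB"]}

matched = {fn: [] for fn in fn_capacity}
unmatched = list(vf_demand)
next_choice = {vf: 0 for vf in vf_demand}

while unmatched:
    vf = unmatched.pop(0)
    if next_choice[vf] >= len(vf_pref[vf]):
        continue                                  # VF exhausted its list: stays unplaced (outage)
    fn = vf_pref[vf][next_choice[vf]]
    next_choice[vf] += 1
    matched[fn].append(vf)
    # The node keeps the smallest-demand VFs it can fit and rejects the rest.
    matched[fn].sort(key=lambda v: vf_demand[v])
    while sum(vf_demand[v] for v in matched[fn]) > fn_capacity[fn]:
        unmatched.append(matched[fn].pop())       # reject the largest remaining VF

print(matched)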

Journal ArticleDOI
TL;DR: A heuristic approach called Jcap is developed to solve the problem in two stages, and simulations show that Jcap achieves performance competitive with the optimal results obtained from the mathematical model.
Abstract: With the development of network function virtualization (NFV), service function chains (SFCs) are deployed via virtual network functions (VNFs). In general, SFCs are first composed and then deployed into data center infrastructures. However, most existing works neglect SFC composition. Furthermore, they consider that VNF instances are independently deployed for each SFC, which may underutilize the computational power of servers. We consider that, for each required VNF in the chain, the operator can either place it on a new instance or assign it to an established instance if the residual resources of that instance are sufficient. Such a deployment scheme can leverage resources more efficiently, and we define it as SFC placement and assignment. In this paper, we first combine SFC composition, placement, and assignment to enhance resource allocation. We present the system model and formulate the problem as 0–1 integer programming. We aim to improve VNF instance utilization as well as reduce link consumption. A heuristic approach called Jcap is developed to solve the problem in two stages. Simulations show that Jcap achieves performance competitive with the optimal results obtained from the mathematical model.

Journal ArticleDOI
TL;DR: Numerical analysis shows that the performance of the proposed framework approaches the one of the optimal solution of a formulated integer linear programming problem, and system-level ndnSIM simulations confirm that the proposal also outperforms the considered state-of-the-art benchmark solutions in terms of service provisioning time.
Abstract: Edge computing is a key paradigm to offload the core network and effectively process massive Internet of Things (IoT) raw data without sending them to the cloud. This paradigm normally relies on a set of purpose-built and pre-planned servers, which host storage and processing resources to provide IoT services close to the data sources, thus saving core network resources and offloading the remote cloud infrastructure. In this paper, we propose to turn the network edge into a dynamic, distributed computing environment that supports the provisioning of IoT services, by exploiting the recent evolution of named data networking (NDN), supporting both name-based data retrieval and computation. A specific name structure and novel NDN forwarding mechanisms are designed; a distributed strategy is also engineered to select the service executor among edge nodes, with the objectives of: 1) limiting the raw IoT data traffic crossing the network and 2) allocating the service execution according to the nodes’ available processing resources. Numerical analysis shows that the performance of the proposed framework approaches that of the optimal solution of a formulated integer linear programming problem. System-level ndnSIM simulations confirm that the proposal also outperforms the considered state-of-the-art benchmark solutions in terms of service provisioning time.

Journal ArticleDOI
TL;DR: Numerical results show that the considered ML algorithms succeed in achieving effective trade-offs between energy consumption and QoS, and show that energy savings strongly depend on traffic patterns that are typical of the considered area.
Abstract: The use of base station (BS) sleep modes is one of the most studied approaches for the reduction of the energy consumption of radio access networks (RANs). Many papers have shown that the potential energy saving of sleep modes is huge, provided the future behavior of the RAN traffic load is known. This paper investigates the effectiveness of sleep modes combined with machine learning (ML) approaches for traffic forecast. A portion of an RAN is considered, comprising one macro BS and a few small cell BSs. Each BS is powered by a photovoltaic (PV) panel, equipped with energy storage units, and a connection to the power grid. The PV panel and battery provide green energy, while the power grid provides brown energy. This paper examines the impacts of different prediction models on the consumed energy mix and on QoS. Numerical results show that the considered ML algorithms succeed in achieving effective trade-offs between energy consumption and QoS. Results also show that energy savings strongly depend on traffic patterns that are typical of the considered area. This implies that a widespread implementation of these energy saving strategies without the support of ML would require a careful tuning that cannot be performed autonomously and that needs continuous updates to follow traffic pattern variations. On the contrary, ML approaches provide a versatile framework for the implementation of the desired trade-off that naturally adapts the network operation to the traffic characteristics typical of each area and to its evolution.
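A minimal sketch of the forecast-then-sleep loop described above, assuming a small cell whose next-hour load is predicted by a simple autoregressive model and which is put to sleep whenever the forecast falls below a threshold the macro BS is assumed to absorb; the traffic trace, model, and threshold are synthetic.

# Forecast small-cell traffic and decide sleep mode from the prediction (sketch).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
hours = np.arange(24 * 14)                                   # two weeks of hourly samples
traffic = 50 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)
traffic = np.clip(traffic, 0, None)                          # synthetic load in Mb/s

LAGS = 24
X = np.array([traffic[t - LAGS:t] for t in range(LAGS, traffic.size)])
y = traffic[LAGS:]
model = LinearRegression().fit(X, y)                         # simple autoregressive forecaster

next_hour = model.predict(traffic[-LAGS:].reshape(1, -1))[0]
SLEEP_THRESHOLD = 20.0          # Mb/s the macro BS is assumed to absorb
print(f"forecast {next_hour:.1f} Mb/s ->",
      "small cell sleeps" if next_hour < SLEEP_THRESHOLD else "small cell stays on")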

Journal ArticleDOI
TL;DR: The simulation results show that the virtual network embedding algorithm proposed in this paper can make full use of the SN resources and improve the overall revenue, while effectively dealing with the multi-demand problem of tenants.
Abstract: Network virtualization provides a promising tool to allow multiple virtual networks (VNs) to run on a shared substrate network (SN) simultaneously. VN embedding (VNE) is one of the key technologies of network virtualization. The main goal of VNE is to effectively map VN requests to the SN so that network resources are efficiently utilized. The emergence of software defined networks provides a platform for network virtualization to be used and promoted. In a real environment, the resource requirements of tenants are generally different, and a single VN mapping algorithm cannot effectively handle the multi-demand problem of tenants. We propose a self-adaptive VNE algorithm: VN requests are divided into different types by an adaptive algorithm, and we use an integer linear programming formulation to solve the VNE problem. This paper considers three different types of VN requests: type 1 VN requests have high bandwidth requirements, type 2 VN requests have low latency requirements, and type 3 VN requests have both bandwidth and latency requirements. The simulation results show that the virtual network embedding algorithm proposed in this paper can make full use of the SN resources and improve the overall revenue, while effectively dealing with the multi-demand problem of tenants.

Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of approaches and technologies to manage the big NTMA data, additionally briefly discussing big data analytics (e.g., machine learning) for the sake of NTMA.
Abstract: Network Traffic Monitoring and Analysis (NTMA) represents a key component for network management, especially to guarantee the correct operation of large-scale networks such as the Internet. As the complexity of Internet services and the volume of traffic continue to increase, it becomes difficult to design scalable NTMA applications. Applications such as traffic classification and policing require real-time and scalable approaches. Anomaly detection and security mechanisms need to quickly identify and react to unpredictable events while processing millions of heterogeneous events. Last, the system has to collect, store, and process massive sets of historical data for post-mortem analysis. Those are precisely the challenges faced by general big data approaches: Volume, Velocity, Variety, and Veracity. This survey brings together NTMA and big data. We catalog previous work on NTMA that adopts big data approaches to understand to what extent the potential of big data is being explored in NTMA. This survey mainly focuses on approaches and technologies to manage the big NTMA data, additionally briefly discussing big data analytics (e.g., machine learning) for the sake of NTMA. Finally, we provide guidelines for future work, discussing lessons learned and research directions.

Journal ArticleDOI
TL;DR: Results show how the green association proposal can reduce on-grid energy consumption in a HetNet by up to 34%, while exceeding the savings obtained by other methods, including the best-signal-level policy, and additionally providing high network efficiency and low computational complexity.
Abstract: In this paper, we focus on reducing the on-grid energy consumption in heterogeneous radio access networks (HetNets) supplied with hybrid power sources (grid and renewables). The energy efficiency problem is analyzed over both short and long timescales by means of reactive and proactive management strategies. For the short-timescale case, a renewable-energy (RE) aware user equipment-base station association is proposed and analyzed for the cases when no storage infrastructure is available. For the long-timescale case, a traffic flow method is proposed for load balancing in RE base stations (BSs), which is combined with a model predictive controller (MPC) to include forecast capabilities of the RE source behavior in order to better exploit a Green HetNet with storage support. The mechanisms are evaluated with solar measurement data from the region of Valle de Aburra, Medellin, Colombia, and wind estimations from the Moscow region, Russian Federation. Results show how the green association proposal can reduce on-grid energy consumption in a HetNet by up to 34%, while exceeding the savings obtained by other methods, including the best-signal-level policy, by up to 15%, additionally providing high network efficiency and low computational complexity. For the long-timescale case, MPC attainable savings can be up to 22% with respect to the on-grid-only Macro-BS approach. Finally, an analysis of our proposals in a common scenario is included, which highlights the relevance of storage management, while emphasizing the importance of combining reactive and proactive methods in a common framework to exploit the best of each approach.

Journal ArticleDOI
TL;DR: The problem of Workload Assignment (WA) is studied and formulated as a Mixed Integer Program (MIP) to decide on the assignment of the workloads to the available MEC nodes, and the performance of both a decomposition approach and a more scalable approach is evaluated.
Abstract: Along with the dramatic increase in the number of IoT devices, different IoT services with heterogeneous QoS requirements are evolving with the aim of making the current society smarter and more connected. In order to deliver such services to the end users, the network infrastructure has to accommodate the tremendous workload generated by the smart devices and their heterogeneous and stringent latency and reliability requirements. This would only be possible with the emergence of ultra-reliable low-latency communications (uRLLC) promised by 5G. Mobile Edge Computing (MEC) has emerged as an enabling technology to help with the realization of such services by bringing the remote computing and storage capabilities of the cloud closer to the users. However, integrating uRLLC with MEC requires the network operator to efficiently map the generated workloads to MEC nodes while resolving the trade-off between the latency and reliability requirements. Thus, we study in this paper the problem of Workload Assignment (WA) and formulate it as a Mixed Integer Program (MIP) to decide on the assignment of the workloads to the available MEC nodes. Due to the complexity of the WA problem, we decompose it into two subproblems: Reliability Aware Candidate Selection (RACS) and Latency Aware Workload Assignment (LAWA-MIP). We evaluate the performance of the decomposition approach and propose a more scalable approach, a Tabu meta-heuristic (WA-Tabu). Through extensive numerical evaluation, we analyze the performance and show the efficiency of our proposed approaches under different system parameters.
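To illustrate the meta-heuristic side, the following is a compact tabu search over workload-to-MEC-node assignments that minimizes a latency-based cost with a soft capacity penalty; the instance, cost function, and tabu rule are illustrative and not the paper’s WA-Tabu.

# Tabu search over workload-to-MEC-node assignments (illustrative instance).
import random

random.seed(0)
N_WL, N_NODES = 10, 3
demand = [random.randint(1, 4) for _ in range(N_WL)]
capacity = [12, 10, 8]
latency = [[random.uniform(1, 10) for _ in range(N_NODES)] for _ in range(N_WL)]

def cost(assign):
    load = [0] * N_NODES
    total = 0.0
    for w, n in enumerate(assign):
        load[n] += demand[w]
        total += latency[w][n]
    # Soft penalty for violating node capacities.
    return total + 100 * sum(max(0, load[n] - capacity[n]) for n in range(N_NODES))

assign = [random.randrange(N_NODES) for _ in range(N_WL)]
best, best_cost = assign[:], cost(assign)
tabu, TABU_LEN = [], 15

for _ in range(300):
    # Best non-tabu single-reassignment move in the neighborhood.
    moves = [(w, n) for w in range(N_WL) for n in range(N_NODES)
             if n != assign[w] and (w, n) not in tabu]
    w, n = min(moves, key=lambda m: cost(assign[:m[0]] + [m[1]] + assign[m[0] + 1:]))
    tabu.append((w, assign[w]))                  # forbid moving w straight back
    tabu = tabu[-TABU_LEN:]
    assign[w] = n
    if cost(assign) < best_cost:
        best, best_cost = assign[:], cost(assign)

print(best, round(best_cost, 2))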

Journal ArticleDOI
TL;DR: This paper proposes a distributed slicing strategy based on coalitional game and matching theory over an SDN-based LoRaWAN architecture that shows utility in respecting quality of service (QoS) thresholds in terms of delay, throughput, energy consumption and improving reliability while providing complete isolation between LoRa slices.
Abstract: The massive growth of the Internet of Things (IoT) poses important challenges for network operators to support billions of IoT devices connected through the cloud, each having constrained battery life and computational capacity. To support these requirements over long distances, the Long Range Wide Area Network (LoRaWAN) is now being widely deployed, with the promise of supporting an all-connected world with numerous IoT applications. In large-scale access networks, supporting urgent and reliable communications with their QoS demands becomes more challenging. Hence, network slicing within an SDN-based architecture brings numerous advantages to solve this problem by easily managing network resources, reserving part of them for urgent traffic, and avoiding performance degradation due to congestion. In this paper, we tackle the questions raised regarding scalability limitations by proposing a distributed slicing strategy based on coalitional game and matching theory over an SDN-based LoRaWAN architecture. In this context, resource reservation for LoRa slices and configuration optimization are performed closer to the edge, at the gateway level. Simulation results obtained over NS3 highlight the utility of the distributed slicing strategy in respecting quality of service (QoS) thresholds in terms of delay, throughput, and energy consumption, and in improving reliability, while providing complete isolation between LoRa slices.