
Showing papers in "IEEE/ACM Transactions on Networking" in 2017


Journal ArticleDOI
TL;DR: The proposed consolidation algorithm is based on a migration policy for VNFIs that accounts for the revenue loss from QoS degradation, i.e., the information loss a user suffers during migrations.
Abstract: Network function virtualization foresees the virtualization of service functions and their execution on virtual machines. Any service is represented by a service function chain (SFC), that is, a set of VNFs to be executed in a given order. Running VNFs requires the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines. The virtualization challenges include: i) where to instantiate VNFIs; ii) how many resources to allocate to each VNFI; iii) how to route SFC requests to the appropriate VNFIs in the right sequence; and iv) when and how to migrate VNFIs in response to changes in SFC request intensity and location. We develop an approach that applies three algorithms back-to-back, yielding VNFI placement, SFC routing, and VNFI migration in response to changing workload. The objective is first to minimize the rejection of SFC bandwidth and second to consolidate VNFIs in as few servers as possible so as to reduce the energy consumed. The proposed consolidation algorithm is based on a migration policy for VNFIs that considers the revenue loss from QoS degradation, caused by the information loss that occurs during migrations; it minimizes the total cost given by the energy consumption and this revenue loss. We evaluate our suite of algorithms on a test network and show the performance gains that can be achieved over alternative naive algorithms.

285 citations
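As a hedged illustration (the notation below is ours, not taken from the paper), the consolidation objective described in the abstract above can be written schematically as a two-term cost:

$$\min \; C_{\text{total}} \;=\; \underbrace{\sum_{s \in \mathcal{S}} E_s}_{\text{energy consumed by active servers}} \;+\; \underbrace{\sum_{m \in \mathcal{M}} \lambda \, Q_m}_{\text{revenue loss from QoS degradation}}$$

where $\mathcal{S}$ is the set of active servers, $\mathcal{M}$ the set of VNFI migrations performed, $Q_m$ the information loss caused by migration $m$, and $\lambda$ a factor converting that loss into revenue loss; all symbols here are schematic placeholders for the quantities named in the abstract.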


Journal ArticleDOI
TL;DR: This work compares two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that live migration should generally be preferred, while bulk migration appears to be a feasible alternative only for delay-stringent, tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.
Abstract: Major interest is currently given to the integration of clusters of virtualization servers, also referred to as ‘cloudlets’ or ‘edge clouds’, into the access network to allow higher performance and reliability in the access to mobile edge computing services. We tackle the edge cloud network design problem for mobile access networks. The model is such that the virtual machines (VMs) are associated with mobile users and are allocated to cloudlets. Designing an edge cloud network implies first determining where to install cloudlet facilities among the available sites, and then assigning sets of access points, such as base stations, to cloudlets, while supporting VM orchestration and considering partial user mobility information as well as the satisfaction of service-level agreements. We present link-path formulations supported by heuristics to compute solutions in reasonable time. We quantify the advantage of considering mobility for both users and VMs: up to 20% fewer users fail to meet their SLA, at the cost of a small increase in the number of opened facilities. We compare two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that live migration should generally be preferred, while bulk migration appears to be a feasible alternative only for delay-stringent, tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.

203 citations


Journal ArticleDOI
TL;DR: The proposed traffic pattern extraction and modeling methodology, combined with the empirical analysis of the mobile traffic, paves the way toward a deep understanding of the traffic patterns of large-scale cellular towers in a modern metropolis.
Abstract: Understanding the mobile traffic patterns of large-scale cellular towers in urban environments is extremely valuable for Internet service providers, mobile users, and government managers of a modern metropolis. This paper aims at extracting and modeling the traffic patterns of large-scale towers deployed in a metropolitan city. To achieve this goal, we need to address several challenges, including the lack of appropriate tools for processing large-scale traffic measurement data, unknown traffic patterns, as well as complicated factors of urban ecology and human behavior that affect traffic patterns. Our core contribution is a powerful model which combines three-dimensional information (time, locations of towers, and traffic frequency spectrum) to extract and model the traffic patterns of thousands of cellular towers. Our empirical analysis reveals the following important observations. First, only five basic time-domain traffic patterns exist among the 9600 cellular towers. Second, each of the extracted traffic patterns maps to one type of geographical location related to urban ecology, including residential areas, business districts, transport, entertainment, and comprehensive areas. Third, our frequency-domain traffic spectrum analysis suggests that the traffic of any of the 9600 towers can be constructed using a linear combination of four primary components corresponding to human activity behaviors. We believe that the proposed traffic pattern extraction and modeling methodology, combined with the empirical analysis of the mobile traffic, paves the way toward a deep understanding of the traffic patterns of large-scale cellular towers in a modern metropolis.

167 citations
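The third observation above (each tower's traffic as a linear combination of a few primary components) can be illustrated with a small, hedged sketch using non-negative matrix factorization; this is illustrative only and not the authors' frequency-domain method, and the synthetic data below merely stands in for real per-tower traffic.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rows = cellular towers, columns = time bins (e.g., hourly traffic volume over one week).
# Synthetic placeholder data; a real study would load measured, non-negative traffic volumes.
rng = np.random.default_rng(0)
traffic = rng.gamma(shape=2.0, scale=1.0, size=(9600, 168))

# Approximate traffic ~ weights @ components with four primary components,
# mirroring the observation that each tower is a linear combination of a few basis patterns.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(traffic)   # per-tower mixing coefficients, shape (9600, 4)
components = model.components_           # shared time-domain patterns, shape (4, 168)

relative_error = np.linalg.norm(traffic - weights @ components) / np.linalg.norm(traffic)
print(f"relative reconstruction error: {relative_error:.3f}")
```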


Journal ArticleDOI
TL;DR: The authors study strategies that select seed users in an adaptive manner and show that a simple greedy adaptive seeding strategy finds an effective solution with a provable performance guarantee.
Abstract: For the purpose of propagating information and ideas through a social network, a seeding strategy aims to find a small set of seed users that can maximize the spread of influence, which is termed the influence maximization problem. Although a large number of works have studied this problem, the existing seeding strategies are limited to models that cannot fully capture the characteristics of real-world social networks. In fact, due to high-speed data transmission and the large population of participants, the diffusion processes in real-world social networks have many sources of uncertainty. As shown in the experiments, when taking such uncertainty into account, the state-of-the-art seeding strategies are pessimistic, as they fail to trace the influence diffusion. In this paper, we study strategies that select seed users in an adaptive manner. We first formally model the dynamic independent cascade model and introduce the concept of an adaptive seeding strategy. Then, based on the proposed model, we show that a simple greedy adaptive seeding strategy finds an effective solution with a provable performance guarantee. Besides the greedy algorithm, an efficient heuristic algorithm is provided for better scalability. Extensive experiments have been performed on both real-world networks and synthetic power-law networks. The results demonstrate the superiority of the adaptive seeding strategies over other baseline methods.

158 citations
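A minimal sketch of greedy adaptive seeding under an independent cascade model is given below; it is a simplified illustration under our own assumptions (uniform edge probability, Monte-Carlo marginal-gain estimates), not the authors' algorithm, and all function names are ours.

```python
import random
import networkx as nx

def ic_spread(g, seeds, p=0.1, rng=random):
    """One independent-cascade realization starting from `seeds`; returns the final active set."""
    active = set(seeds)
    frontier = list(active)
    while frontier:
        new = []
        for u in frontier:
            neighbors = g.successors(u) if g.is_directed() else g.neighbors(u)
            for v in neighbors:
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return active

def adaptive_greedy_seeds(g, budget, p=0.1, mc=200, rng=random):
    """Pick seeds one at a time; after each pick, observe a realized spread before choosing the next."""
    observed = set()   # nodes already activated by the observed (here: simulated) diffusion
    seeds = []
    for _ in range(budget):
        best, best_gain = None, -1.0
        for u in g.nodes():
            if u in observed:
                continue
            # Monte-Carlo estimate of u's expected marginal gain given what has been observed so far.
            gain = sum(len(ic_spread(g, observed | {u}, p, rng)) - len(observed)
                       for _ in range(mc)) / mc
            if gain > best_gain:
                best, best_gain = u, gain
        if best is None:
            break
        seeds.append(best)
        observed |= ic_spread(g, observed | {best}, p, rng)   # "observe" the new diffusion
    return seeds

# Example: g = nx.barabasi_albert_graph(200, 2); print(adaptive_greedy_seeds(g, budget=3, mc=50))
```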


Journal ArticleDOI
TL;DR: ROSE, a novel robustness enhancing algorithm for scale-free WSNs, is proposed, which exploits the position and degree information of nodes to rearrange the edges to resemble an onion-like structure, which has been proven to be robust against malicious attacks.
Abstract: Due to the recent proliferation of cyber-attacks, improving the robustness of wireless sensor networks (WSNs) so that they can withstand node failures has become a critical issue. Scale-free WSNs are important because they tolerate random attacks very well; however, they can be vulnerable to malicious attacks, which particularly target certain important nodes. To address this shortcoming, this paper first presents a new modeling strategy to generate scale-free network topologies, which considers the constraints in WSNs, such as the communication range and the threshold on the maximum node degree. Then, ROSE, a novel robustness enhancing algorithm for scale-free WSNs, is proposed. Given a scale-free topology, ROSE exploits the position and degree information of nodes to rearrange the edges to resemble an onion-like structure, which has been proven to be robust against malicious attacks. Meanwhile, ROSE keeps the degree of each node in the topology unchanged such that the resulting topology remains scale-free. The extensive experimental results verify that our new modeling strategy indeed generates scale-free network topologies for WSNs, and ROSE can significantly improve the robustness of the network topologies generated by our modeling strategy. Moreover, we compare ROSE with two existing robustness enhancing algorithms, showing that ROSE outperforms both.

137 citations
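The degree-preserving rewiring idea can be pictured with a generic sketch: repeatedly swap the endpoints of two randomly chosen edges (keeping every node's degree unchanged) and accept the swap only if it increases degree assortativity, nudging the topology toward the onion-like structure mentioned above. This is a simplified illustration under our own assumptions, not ROSE itself; in particular it ignores node positions, communication range, and the maximum-degree constraint.

```python
import random
import networkx as nx

def onionize(g, iterations=2000, rng=random):
    """Degree-preserving edge swaps, accepted only if degree assortativity increases."""
    g = g.copy()
    score = nx.degree_assortativity_coefficient(g)
    for _ in range(iterations):
        (a, b), (c, d) = rng.sample(list(g.edges()), 2)
        # Propose swapping endpoints: (a,b),(c,d) -> (a,d),(c,b); skip degenerate cases.
        if len({a, b, c, d}) < 4 or g.has_edge(a, d) or g.has_edge(c, b):
            continue
        g.remove_edges_from([(a, b), (c, d)])
        g.add_edges_from([(a, d), (c, b)])
        new_score = nx.degree_assortativity_coefficient(g)
        if new_score > score:
            score = new_score                              # keep the swap: more assortative
        else:
            g.remove_edges_from([(a, d), (c, b)])          # revert the swap
            g.add_edges_from([(a, b), (c, d)])
    return g

# Example: g = nx.barabasi_albert_graph(300, 2); robust = onionize(g, iterations=2000)
```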


Journal ArticleDOI
TL;DR: SCP is proved to be NP-hard, and a solution is proposed that provably outperforms the optimal solution to SCP under a relaxed EMR threshold; simulations show that the gap between this solution and the optimal one is only 6.7%.
Abstract: As battery-powered mobile devices become more popular and energy hungry, wireless power transfer technology, which allows power to be transferred from a charger to ambient devices wirelessly, has received intensive interest. Existing schemes mainly focus on power transfer efficiency but overlook the health impairments caused by RF exposure. In this paper, we study the safe charging problem (SCP) of scheduling power chargers so that more energy can be received while no location in the field has electromagnetic radiation (EMR) exceeding a given threshold $R_{t}$. We show that SCP is NP-hard and propose a solution which provably outperforms the optimal solution to SCP with a relaxed EMR threshold $(1-\epsilon )R_{t}$. Testbed results based on 8 Powercast TX91501 chargers validate our results. Extensive simulation results show that the gap between our solution and the optimal one is only 6.7% when $\epsilon = 0.1$, while a naive greedy algorithm is 34.6% below our solution.

130 citations
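In schematic form (all notation here is ours, not the paper's), the safe charging problem trades total received power against an EMR cap at every point of the field:

$$\max_{x \in \{0,1\}^{|\mathcal{C}|}} \; \sum_{d \in \mathcal{D}} P_r(d, x) \quad \text{s.t.} \quad e(p, x) \le R_{t} \;\; \forall p \in \Omega,$$

where $\mathcal{C}$ is the set of chargers, $x_c$ indicates whether charger $c$ is activated, $\mathcal{D}$ the set of rechargeable devices, $P_r(d,x)$ the power received by device $d$, $e(p,x)$ the aggregate EMR at point $p$, and $\Omega$ the charging field. The comparison quoted above is against the optimum of this program with the threshold $R_{t}$ replaced by $(1-\epsilon )R_{t}$.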


Journal ArticleDOI
TL;DR: The basic idea of STPP is that, by moving a reader over a set of tags while continuously interrogating them, the reader obtains for each tag a sequence of RF phase values, which the authors call a phase profile, from the tag’s responses over time.
Abstract: Many object localization applications need the relative locations of a set of objects as opposed to their absolute locations. Although many schemes for object localization using radio frequency identification (RFID) tags have been proposed, they mostly focus on absolute object localization and are not suitable for relative object localization because of large error margins and the special hardware that they require. In this paper, we propose an approach called spatial-temporal phase profiling (STPP) for RFID-based relative object localization. The basic idea of STPP is that, by moving a reader over a set of tags while continuously interrogating them, the reader obtains for each tag a sequence of RF phase values, which we call a phase profile, from the tag’s responses over time. By analyzing the spatial-temporal dynamics in the phase profiles, STPP can calculate the spatial ordering among the tags. In comparison with prior absolute object localization schemes, STPP requires neither dedicated infrastructure nor special hardware. We implemented STPP and evaluated its performance in two real-world applications: locating misplaced books in a library and determining the baggage order in an airport. The experimental results show that STPP achieves about 84% ordering accuracy for misplaced books and 95% ordering accuracy for baggage handling. We further leverage the controllable reader antenna and upgrade STPP to infer the spacing between each pair of tags. The results show that STPP can achieve promising performance on distance ranging.

128 citations


Journal ArticleDOI
TL;DR: This paper considers a criterion for dynamic resource allocation amongst tenants, based on a weighted proportionally fair objective, which achieves desirable fairness/protection across the network slices of the different tenants and their associated users, and designs a distributed semi-online algorithm which meets performance guarantees in equilibrium.
Abstract: This paper addresses the slicing of radio access network resources by multiple tenants, e.g., virtual wireless operators and service providers. We consider a criterion for dynamic resource allocation amongst tenants, based on a weighted proportionally fair objective, which achieves desirable fairness/protection across the network slices of the different tenants and their associated users. Several key properties are established, including: the Pareto-optimality of user association to base stations, the fair allocation of base stations’ resources, and the gains resulting from dynamic resource sharing across slices, both in terms of utility gains and capacity savings. We then address algorithmic and practical challenges in realizing the proposed criterion. We show that the objective is NP-hard, making an exact solution impractical, and design a distributed semi-online algorithm, which meets performance guarantees in equilibrium and can be shown to quickly converge to a region around the equilibrium point. Building on this algorithm, we devise a practical approach with limited computational information and handoff overheads. We use detailed simulations to show that our approach is indeed near-optimal and provides substantial gains both to tenants (in terms of capacity savings) and end users (in terms of improved performance).

121 citations
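As a hedged illustration (this is the standard form of a weighted proportionally fair objective, not necessarily the paper's exact formulation), the allocation criterion maximizes a weighted sum of logarithmic user utilities subject to per-base-station capacity:

$$\max_{\{r_u \ge 0\}} \; \sum_{o \in \mathcal{O}} \sum_{u \in \mathcal{U}_o} w_{u} \log r_{u} \quad \text{s.t.} \quad \sum_{u \,:\, b(u)=b} r_{u} \le C_{b} \;\; \forall b,$$

where $\mathcal{O}$ is the set of tenants (slices), $\mathcal{U}_o$ the users of tenant $o$, $w_u$ and $r_u$ the weight and allocated rate of user $u$, $b(u)$ the base station serving $u$, and $C_b$ its capacity; the weights would typically encode each tenant's network share.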


Journal ArticleDOI
TL;DR: This work considers the standard model of TE with ECMP and proves that, in general, even approximating the optimal link-weight configuration for ECMP within any constant ratio is an intractable feat, settling a long-standing open question.
Abstract: To efficiently exploit the network resources, operators do traffic engineering (TE), i.e., adapt the routing of traffic to the prevailing demands. TE in large IP networks typically relies on configuring static link weights and splitting traffic between the resulting shortest paths via the Equal-Cost-MultiPath (ECMP) mechanism. Yet, despite its vast popularity, crucial operational aspects of TE via ECMP are still little understood from an algorithmic viewpoint. We embark upon a systematic algorithmic study of TE with ECMP. We consider the standard model of TE with ECMP and prove that, in general, even approximating the optimal link-weight configuration for ECMP within any constant ratio is an intractable feat, settling a long-standing open question. We establish, in contrast, that ECMP can provably achieve optimal traffic flow for the important category of Clos datacenter networks. We finally consider a well-documented shortcoming of ECMP: suboptimal routing of large (“elephant”) flows. We present algorithms for scheduling “elephant” flows on top of ECMP (as in, e.g., Hedera) with provable approximation guarantees. Our results complement and shed new light on past experimental and empirical studies of the performance of TE with ECMP.

119 citations
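A minimal sketch of the ECMP mechanism itself (equal splitting of a demand over all shortest paths induced by the configured link weights) is shown below; it is illustrative only, is not one of the paper's algorithms, and the function name is ours.

```python
import networkx as nx

def ecmp_loads(g, src, dst, demand=1.0, weight="weight"):
    """Return per-link load when `demand` from src to dst is split equally over
    all shortest paths induced by the link weights (ECMP forwarding)."""
    dist = nx.single_source_dijkstra_path_length(g, dst, weight=weight)
    loads = {}

    def push(u, amount):
        if u == dst:
            return
        # Next hops of u: neighbors lying on some shortest path toward dst.
        hops = [v for v in g.neighbors(u)
                if v in dist and abs(g[u][v][weight] + dist[v] - dist[u]) < 1e-9]
        share = amount / len(hops)
        for v in hops:
            loads[(u, v)] = loads.get((u, v), 0.0) + share
            push(v, share)

    push(src, demand)
    return loads

# Example:
# g = nx.Graph()
# g.add_weighted_edges_from([("s", "a", 1), ("s", "b", 1), ("a", "t", 1), ("b", "t", 1)])
# print(ecmp_loads(g, "s", "t"))   # each of the two equal-cost paths carries 0.5
```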


Journal ArticleDOI
TL;DR: This work proposes a hierarchical two-phase algorithm that integrates key concepts from both matching theory and coalitional games to solve the dynamic controller assignment problem efficiently and proves that the algorithm converges to a near-optimal Nash stable solution within tens of iterations.
Abstract: Software defined networking is increasingly prevalent in data center networks for it enables centralized network configuration and management. However, since switches are statically assigned to controllers and controllers are statically provisioned, traffic dynamics may cause long response time and incur high maintenance cost. To address these issues, we formulate the dynamic controller assignment problem (DCAP) as an online optimization to minimize the total cost caused by response time and maintenance on the cluster of controllers. By applying the randomized fixed horizon control framework, we decompose DCAP into a series of stable matching problems with transfers, guaranteeing a small loss in competitive ratio. Since the matching problem is NP-hard, we propose a hierarchical two-phase algorithm that integrates key concepts from both matching theory and coalitional games to solve it efficiently. Theoretical analysis proves that our algorithm converges to a near-optimal Nash stable solution within tens of iterations. Extensive simulations show that our online approach reduces total cost by about 46%, and achieves better load balancing among controllers compared with static assignment.

118 citations


Journal ArticleDOI
TL;DR: This paper proposes a generalized resource placement methodology that can work across different cloud architectures and resource request constraints, with real-time request arrivals and departures, and derives worst-case competitive ratios for the algorithms.
Abstract: One of the primary functions of a cloud service provider is to allocate cloud resources to users upon request. Requests arrive in real time, and resource placement decisions must be made as and when a request arrives, without any prior knowledge of future arrivals. In addition, when a cloud service provider operates a geographically diversified cloud that consists of a large number of small data centers, the resource allocation problem becomes even more complex. This is due to the fact that resource requests can have additional constraints on data center location, service delay guarantees, and so on, which is especially true for the emerging network function virtualization application. In this paper, we propose a generalized resource placement methodology that can work across different cloud architectures and resource request constraints, with real-time request arrivals and departures. The proposed algorithms are online in the sense that allocations are made without any knowledge of resource requests that arrive in the future, and the current resource allocations are made in such a manner as to permit the acceptance of as many future arrivals as possible. We derive worst-case competitive ratios for the algorithms. We show through experiments and case studies the superior performance of the algorithms in practice.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a scalable framework for data shuffling in a wireless distributed computing system, in which the required communication bandwidth for shuffling does not increase with the number of users in the network.
Abstract: We consider a wireless distributed computing system, in which multiple mobile users, connected wirelessly through an access point, collaborate to perform a computation task. In particular, users communicate with each other via the access point to exchange their locally computed intermediate computation results, which is known as data shuffling . We propose a scalable framework for this system, in which the required communication bandwidth for data shuffling does not increase with the number of users in the network. The key idea is to utilize a particular repetitive pattern of placing the data set (thus a particular repetitive pattern of intermediate computations), in order to provide the coding opportunities at both the users and the access point, which reduce the required uplink communication bandwidth from users to the access point and the downlink communication bandwidth from access point to users by factors that grow linearly with the number of users. We also demonstrate that the proposed data set placement and coded shuffling schemes are optimal (i.e., achieve the minimum required shuffling load) for both a centralized setting and a decentralized setting, by developing tight information-theoretic lower bounds.

Journal ArticleDOI
TL;DR: This work investigates the problem of developing optimal joint routing and caching policies in a network supporting in-network caching with the goal of minimizing expected content-access delay and identifies the structural property of the user-cache graph that makes the problem NP-complete.
Abstract: In-network content caching has been deployed in both the Internet and cellular networks to reduce content-access delay. We investigate the problem of developing optimal joint routing and caching policies in a network supporting in-network caching with the goal of minimizing expected content-access delay. Here, needed content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access content, users must thus decide whether to route their requests to a cache or to the back-end server. In addition, caches must decide which content to cache. We investigate two variants of the problem, where the paths to the back-end server can be considered as either congestion-sensitive or congestion-insensitive, reflecting whether or not the delay experienced by a request sent to the back-end server depends on the request load, respectively. We show that the problem of optimal joint caching and routing is NP-complete in both cases. We prove that under the congestion-insensitive delay model, the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify the structural property of the user-cache graph that makes the problem NP-complete. For the congestion-sensitive delay model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both cases within a $(1-1/e)$ factor from the optimal, and demonstrate a greedy solution that is numerically shown to be within 1% of optimal for small problem sizes. Through trace-driven simulations, we evaluate the performance of our greedy solutions to joint caching and routing, which show up to 50% reduction in average delay over the solution of optimized routing to least recently used caches.
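The greedy solution mentioned above can be pictured with a generic sketch: repeatedly insert the single (cache, content) placement that most reduces expected access delay, with each request then routed to its cheapest source. This is an illustrative sketch under our own simplifying assumptions (congestion-insensitive delays, unit-size contents), not the paper's exact algorithm; all names are ours.

```python
from itertools import product

def expected_delay(placement, users, delay_to_cache, delay_to_server):
    """Average delay when every user fetches each requested content from its best source."""
    total, count = 0.0, 0
    for user, contents in users.items():
        for c in contents:
            options = [delay_to_cache[user][k] for k, stored in placement.items() if c in stored]
            total += min(options + [delay_to_server[user]])
            count += 1
    return total / count

def greedy_joint_caching(users, catalog, caches, capacity, delay_to_cache, delay_to_server):
    """Greedily fill caches with the single insertion that most reduces expected delay."""
    placement = {k: set() for k in caches}
    while True:
        best = None
        best_delay = expected_delay(placement, users, delay_to_cache, delay_to_server)
        for k, c in product(caches, catalog):
            if c in placement[k] or len(placement[k]) >= capacity[k]:
                continue
            placement[k].add(c)
            d = expected_delay(placement, users, delay_to_cache, delay_to_server)
            placement[k].remove(c)
            if d < best_delay:
                best, best_delay = (k, c), d
        if best is None:
            return placement
        placement[best[0]].add(best[1])

# Example: two users, one cache of capacity 1; caching content "a" helps most.
# greedy_joint_caching({"u1": ["a"], "u2": ["b"]}, ["a", "b"], ["c1"], {"c1": 1},
#                      {"u1": {"c1": 1}, "u2": {"c1": 5}}, {"u1": 10, "u2": 2})
# -> {'c1': {'a'}}
```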

Journal ArticleDOI
TL;DR: Travi-Navi is a vision-guided navigation system that enables a self-motivated user to easily bootstrap and deploy indoor navigation services, without comprehensive indoor localization systems or even the availability of floor maps.
Abstract: We present Travi-Navi—a vision-guided navigation system that enables a self-motivated user to easily bootstrap and deploy indoor navigation services, without comprehensive indoor localization systems or even the availability of floor maps. Travi-Navi records high-quality images during the course of a guider’s walk on the navigation paths, collects a rich set of sensor readings, and packs them into a navigation trace. The followers track the navigation trace, get prompt visual instructions and image tips, and receive alerts when they deviate from the correct paths. Travi-Navi also finds shortcuts whenever possible. In this paper, we describe the key techniques to solve several practical challenges, including robust tracking, shortcut identification, and high-quality image capture while walking. We implement Travi-Navi and conduct extensive experiments. The evaluation results show that Travi-Navi can track and navigate users with timely instructions, typically within a four-step offset, and detect deviation events within nine steps. We also characterize the power consumption of Travi-Navi on various mobile phones.

Journal ArticleDOI
TL;DR: LineSwitch is compared to the state of the art, and it is shown to provide the same level of protection against the control plane saturation attack while reducing the time overhead by up to 30%.
Abstract: Software defined networking (SDN) is a new networking paradigm that in recent years has revolutionized network architectures. At its core, SDN separates the data plane, which provides data forwarding functionalities, and the control plane, which implements the network control logic. The separation of these two components provides a virtually centralized point of control in the network, and at the same time abstracts the complexity of the underlying physical infrastructure. Unfortunately, while promising, the SDN approach also introduces new attacks and vulnerabilities. Indeed, previous research shows that, under certain traffic conditions, the required communication between the control and data plane can result in a bottleneck. An attacker can exploit this limitation to mount a new, network-wide, type of denial of service attack, known as the control plane saturation attack. This paper presents LineSwitch, an efficient and effective data plane solution to tackle the control plane saturation attack. LineSwitch employs probabilistic proxying and blacklisting of network traffic to prevent the attack from reaching the control plane, and thus preserve network functionality. We implemented LineSwitch as an extension of the reference SDN implementation, OpenFlow, and ran a thorough set of experiments under different traffic and attack scenarios. We compared LineSwitch to the state of the art, and we show that it provides the same level of protection against the control plane saturation attack while reducing the time overhead by up to 30%.

Journal ArticleDOI
TL;DR: The proposed algorithms are the very first approximation algorithms with guaranteed approximation ratios for the mobile charger scheduling in a rechargeable sensor network under the energy capacity constraint on the mobile chargers.
Abstract: Wireless energy transfer has emerged as a promising technology for wireless sensor networks to power sensors with controllable yet perpetual energy. In this paper, we study sensor energy replenishment by employing a mobile charger (charging vehicle) to charge sensors wirelessly in a rechargeable sensor network, so that the sum of charging rewards collected from all charged sensors by the mobile charger per tour is maximized, subject to the energy capacity of the mobile charger, where the amount of reward received from a charged sensor is proportional to the amount of energy charged to the sensor. The energy of the mobile charger will be spent on both its mechanical movement and sensor charging. We first show that this problem is NP-hard. We then propose approximation algorithms with constant approximation ratios under two different settings: one is that a sensor will be charged to its full energy capacity if it is charged; another is that a sensor can be charged multiple times per tour but the total amount of energy charged is no more than its energy demand prior to the tour. We finally evaluate the performance of the proposed algorithms through experimental simulations. The simulation results demonstrate that the proposed algorithms are very promising, and the solutions obtained are within a constant factor of the optimum. To the best of our knowledge, the proposed algorithms are the very first approximation algorithms with guaranteed approximation ratios for mobile charger scheduling in a rechargeable sensor network under the energy capacity constraint on the mobile charger.

Journal ArticleDOI
TL;DR: This paper addresses two technical challenges: an incremental deployment strategy and a throughput-maximization routing, for deploying a hybrid network incrementally, and shows that the algorithms can obtain significant performance gains and perform better than the theoretical worst-case bound.
Abstract: To explore the advantages of software defined networking (SDN) while preserving legacy networking systems, a natural deployment strategy is to deploy a hybrid SDN incrementally to improve network performance. In this paper, we address two technical challenges for deploying a hybrid network incrementally: an incremental deployment strategy and throughput-maximizing routing. For incremental deployment, we propose a heuristic algorithm for deploying a hybrid SDN under a budget constraint, and prove an approximation factor of $1-\frac{1}{e}$. For throughput-maximizing routing, we apply a depth-first-search method and a randomized rounding mechanism to solve the multi-commodity $h$-splittable flow routing problem in a hybrid SDN, where $h\ge 1$. We also prove that our method has approximation ratio $O\left(\frac{1}{\log N}\right)$, where $N$ is the number of links in a hybrid SDN. We then show, by both analysis and simulations, that our algorithms can obtain significant performance gains and perform better than the theoretical worst-case bound. For example, our incremental deployment scheme helps to enhance the throughput by about 40% compared with the previous deployment scheme by deploying a small number of SDN devices, and the proposed routing algorithm can improve the throughput by about 31% compared with ECMP in hybrid networks.

Journal ArticleDOI
TL;DR: A taxonomy is introduced to offer insight into the common design and implementation pitfalls that enable SDN stacks to be broken or destabilized when fielded within hostile computing environments.
Abstract: Emerging software defined network (SDN) stacks have introduced an entirely new attack surface that is exploitable from a wide range of launch points. Through an analysis of the various attack strategies reported in prior work, and through our own efforts to enumerate new and variant attack strategies, we have gained two insights. First, we observe that different SDN controller implementations, developed independently by different groups, seem to manifest common sets of pitfalls and design weaknesses that enable the extensive set of attacks compiled in this paper. Second, through a principled exploration of the underlying design and implementation weaknesses that enable these attacks, we introduce a taxonomy to offer insight into the common pitfalls that enable SDN stacks to be broken or destabilized when fielded within hostile computing environments. This paper first captures our understanding of the SDN attack surface through a comprehensive survey of existing SDN attack studies, which we extend by enumerating 12 new vectors for SDN abuse. We then organize these vulnerabilities within the well-known confidentiality, integrity, and availability model, assess the severity of these attacks by replicating them in a physical SDN testbed, and evaluate them against three popular SDN controllers. We also evaluate the impact of these attacks against published SDN defense solutions. Finally, we abstract our findings to offer the research and development communities a deeper understanding of the common design and implementation pitfalls that are enabling the abuse of SDN networks.

Journal ArticleDOI
TL;DR: This paper evaluates the maximum aggregate throughput, offloading efficiency, and, in particular, the delay performance of FiWi enhanced LTE-Advanced (LTE-A) heterogeneous networks (HetNets), and develops a decentralized routing algorithm for FiWi enhanced LTE-A HetNets.
Abstract: To cope with the unprecedented growth of mobile data traffic, we investigate the performance gains obtained from unifying coverage-centric 4G mobile networks and capacity-centric fiber-wireless (FiWi) broadband access networks based on data-centric Ethernet technologies with resulting fiber backhaul sharing and WiFi offloading capabilities. Despite recent progress on backhaul-aware 4G studies with capacity-limited backhaul links, the performance-limiting impact of backhaul latency and reliability has not been examined in sufficient detail previously. In this paper, we evaluate the maximum aggregate throughput, offloading efficiency, and in particular, the delay performance of FiWi enhanced LTE-Advanced (LTE-A) heterogeneous networks (HetNets), including the beneficial impact of various localized fiber-lean backhaul redundancy and wireless protection techniques, by means of probabilistic analysis and verifying simulation, paying close attention to fiber backhaul reliability issues and WiFi offloading limitations due to WiFi mesh node failures as well as temporal and spatial WiFi coverage constraints. We use recent and comprehensive smartphone traces of the PhoneLab data set to verify whether the previously reported assumption that the complementary cumulative distribution function of both WiFi connection and interconnection times fit a truncated Pareto distribution is still valid. In this paper, we put a particular focus on the 5G key attributes of very low latency and ultra-high reliability and investigate how they can be achieved in FiWi enhanced LTE-A HetNets. Furthermore, given the growing interest in decentralization of future 5G networks (e.g., user equipment assisted mobility), we develop a decentralized routing algorithm for FiWi enhanced LTE-A HetNets.

Journal ArticleDOI
TL;DR: This paper studies in depth the problem of computing operational sequences to safely and quickly update arbitrary networks, and proposes and thoroughly evaluates a generic sequence-computation approach based on two new algorithms that combine to overcome limitations of prior proposals.
Abstract: The support for safe network updates, i.e., live modification of device behavior without service disruption, is a critical primitive for current and future networks. Several techniques have been proposed by previous works to implement such a primitive. Unfortunately, existing techniques are not generally applicable to any network architecture, and typically require high overhead (e.g., additional memory) to guarantee strong consistency (i.e., traversal of either initial or final paths, but never a mix of them) during the update. In this paper, we study in depth the problem of computing operational sequences to safely and quickly update arbitrary networks. We characterize cases for which this computation is easy, and revisit previous algorithmic contributions in the new light of our theoretical findings. We also propose and thoroughly evaluate a generic sequence-computation approach, based on two new algorithms that we combine to overcome limitations of prior proposals. Our approach always finds an operational sequence that provably guarantees strong consistency throughout the update, with very limited overhead. Moreover, it can be applied to update networks running any combination of centralized and distributed control-planes, including different families of IGPs, OpenFlow or other SDN protocols, and hybrid SDN networks. Our approach therefore supports a large set of use cases, ranging from traffic engineering in IGP-only or SDN-only networks to incremental SDN roll-out and advanced requirements (e.g., per-flow path selection or dynamic network function virtualization) in partial SDN deployments.

Journal ArticleDOI
TL;DR: This work suggests a novel approach for planning and deploying backup schemes for network functions that guarantee high levels of survivability with significant reduction in resource consumption and describes different goals that network designers can take into account when determining which functions to implement in each of the backup servers.
Abstract: In enterprise networks, network functions, such as address translation, firewall, and deep packet inspection, are often implemented in middleboxes. Those can suffer from temporary unavailability due to misconfiguration or software and hardware malfunction. Traditionally, middlebox survivability is achieved by an expensive active-standby deployment where each middlebox has a backup instance, which is activated in case of a failure. Network function virtualization (NFV) is a novel networking paradigm allowing flexible, scalable and inexpensive implementation of network services. In this paper, we suggest a novel approach for planning and deploying backup schemes for network functions that guarantee high levels of survivability with significant reduction in resource consumption. In the suggested backup scheme, we take advantage of the flexibility and resource-sharing abilities of the NFV paradigm in order to maintain only a few backup servers, where each can serve one of multiple functions when corresponding middleboxes are unavailable. We describe different goals that network designers can consider when determining which functions to implement in each of the backup servers. We rely on a graph theoretical model to find properties of efficient assignments and to develop algorithms that can find them. Extensive experiments show, for example, that under realistic function failure probabilities, and reasonable capacity limitations, one can obtain 99.9% survival probability with half the number of servers, compared with standard techniques.

Journal ArticleDOI
TL;DR: This paper is the first to jointly consider charging tour planning and MC depot positioning for large-scale WSNs; the proposed method leads to an average 64% reduction in the number of MCs and an average 19.7-fold increase in the ratio of total charging time to total traveling time.
Abstract: Recent breakthroughs in wireless energy transfer technology have enabled wireless sensor networks (WSNs) to operate with zero downtime through the use of mobile energy chargers (MCs) that periodically replenish the energy supply of the sensor nodes. Due to the limited battery capacity of the MCs, a significant number of MCs and charging depots are required to guarantee perpetual operation in large-scale networks. Existing methods for reducing the number of MCs and charging depots treat the charging tour planning and depot positioning problems separately even though they are inter-dependent. This paper is the first to jointly consider charging tour planning and MC depot positioning for large-scale WSNs. The proposed method solves the problem through the following three stages: charging tour planning, candidate depot identification and reduction, and depot deployment and charging tour assignment. The proposed charging scheme also considers the association between the MC charging cycle and the operational lifetime of the sensor nodes, in order to maximize the energy efficiency of the MCs. This overcomes the limitations of existing approaches, wherein MCs with small battery capacity end up charging sensor nodes more frequently than necessary, while MCs with large battery capacity return to the depots to replenish themselves before they have fully transferred their energy to the sensor nodes. Compared with existing approaches, the proposed method leads to an average 64% reduction in the number of MCs and an average 19.7-fold increase in the ratio of total charging time to total traveling time.

Journal ArticleDOI
TL;DR: The simulations and test bed experiments show that Amoeba, by harnessing DNA’s malleability, accommodates 15% more user requests with deadlines, while achieving 60% higher WAN utilization than prior solutions.
Abstract: Inter-data center wide area networks (inter-DC WANs) carry a significant amount of data transfers that must be completed within certain time periods, or deadlines. However, very little work has been done to guarantee such deadlines. The crux is that the current inter-DC WAN lacks an interface for users to specify their transfer deadlines and a mechanism for the provider to ensure completion while maintaining high WAN utilization. In this paper, we address the problem by introducing a deadline-based network abstraction (DNA) for inter-DC WANs. DNA allows users to explicitly specify the amount of data to be delivered and the deadline by which it has to be completed. The malleability of DNA provides flexibility in resource allocation. Based on this, we develop a system called Amoeba that implements DNA. Our simulations and test bed experiments show that Amoeba, by harnessing DNA’s malleability, accommodates 15% more user requests with deadlines, while achieving 60% higher WAN utilization than prior solutions.

Journal ArticleDOI
TL;DR: In this paper, the authors present a methodology combining multiscale analysis (wavelet and wavelet leaders) and random projections (or sketches), permitting a precise, efficient and robust characterization of scaling.
Abstract: In the mid 1990s, it was shown that the statistics of aggregated time series from Internet traffic departed from those of traditional short-range dependent models, and were instead characterized by asymptotic self-similarity. Following this seminal contribution, over the years, many studies have investigated the existence and form of scaling in Internet traffic. This contribution first aims at presenting a methodology, combining multiscale analysis (wavelet and wavelet leaders) and random projections (or sketches), permitting a precise, efficient and robust characterization of scaling, which is capable of seeing through non-stationary anomalies. Second, we apply the methodology to a data set spanning an unusually long period: 14 years, from the MAWI traffic archive, thereby allowing an in-depth longitudinal analysis of the form, nature, and evolutions of scaling in Internet traffic, as well as the network mechanisms producing them. We also study a separate three-day long trace to obtain complementary insight into intra-day behavior. We find that a biscaling (two ranges of independent scaling phenomena) regime is systematically observed: long-range dependence over the large scales, and multifractal-like scaling over the fine scales. We quantify the actual scaling ranges precisely, verify to high accuracy the expected relationship between the long-range dependence parameter and the heavy-tail parameter of the flow size distribution, and relate fine-scale multifractal scaling to typical IP packet inter-arrival and round-trip time distributions.
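For reference, the relationship verified in the abstract above between the long-range dependence parameter and the heavy-tail parameter of the flow size distribution is the classical one arising from the superposition of ON/OFF sources with heavy-tailed ON periods: if flow sizes have tail index $\alpha \in (1,2)$, the aggregate traffic is long-range dependent with Hurst parameter

$$H = \frac{3-\alpha}{2}.$$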

Journal ArticleDOI
TL;DR: TensorDet can achieve a significantly lower false positive rate and a higher true positive rate; benefiting from the well-designed algorithm that reduces the computation cost of tensor factorization, the tensor factorization process in TensorDet is 5 (Abilene) and 13 (GÉANT) times faster than that of the traditional Tucker decomposition solution.
Abstract: Detecting anomalous traffic is a critical task for advanced Internet management. Many anomaly detection algorithms have been proposed recently. However, constrained by their matrix-based traffic data model, existing algorithms often suffer from low accuracy in anomaly detection. To fully utilize the multi-dimensional information hidden in the traffic data, this paper takes the initiative to investigate the potential and methodologies of performing tensor factorization for more accurate Internet anomaly detection. More specifically, we model the traffic data as a three-way tensor and formulate the anomaly detection problem as a robust tensor recovery problem with constraints on the rank of the tensor and the cardinality of the anomaly set. These constraints, however, make the problem extremely hard to solve. Rather than resorting to convex relaxation at the cost of low detection performance, we propose TensorDet to solve the problem directly and efficiently. To improve the anomaly detection accuracy and tensor factorization speed, TensorDet exploits the factorization structure with two novel techniques, sequential tensor truncation and two-phase anomaly detection. We have conducted extensive experiments using the Internet traffic traces Abilene and GEANT. Compared with the state-of-the-art algorithms for tensor recovery and matrix-based anomaly detection, TensorDet can achieve a significantly lower false positive rate and a higher true positive rate. Particularly, benefiting from our well-designed algorithm to reduce the computation cost of tensor factorization, the tensor factorization process in TensorDet is 5 (Abilene) and 13 (GEANT) times faster than that of the traditional Tucker decomposition solution.
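A hedged, minimal sketch of the general idea (fit a low-rank Tucker model to the traffic tensor and flag entries with unusually large residuals), written against the TensorLy library, is shown below; it omits TensorDet's sequential tensor truncation and two-phase detection, and the file name in the usage comment is hypothetical.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def tensor_anomalies(traffic, ranks=(5, 5, 3), z_threshold=3.0):
    """Flag entries of a 3-way traffic tensor (e.g., origin x destination x time)
    whose residuals, after a low-rank Tucker fit, are unusually large."""
    tensor = tl.tensor(traffic.astype(float))
    core, factors = tucker(tensor, rank=list(ranks))
    low_rank = tl.to_numpy(tl.tucker_to_tensor((core, factors)))
    residual = traffic - low_rank
    z_scores = (residual - residual.mean()) / residual.std()
    return np.abs(z_scores) > z_threshold   # boolean mask of suspected anomalies

# Usage (hypothetical file name):
# traffic = np.load("abilene_tensor.npy"); mask = tensor_anomalies(traffic)
```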

Journal ArticleDOI
TL;DR: A dispatching policy, Redundant-to-Idle-Queue, is designed, which is both analytically tractable within the $S\&X$ model and has provably excellent performance.
Abstract: Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to replicate a request so that it joins the queue at multiple servers. The request is considered complete as soon as any one of its copies completes. Redundancy allows us to overcome server-side variability (the fact that a server might be temporarily slow due to factors such as background load, network interrupts, and garbage collection) to reduce response time. In the past few years, queueing theorists have begun to study redundancy, first via approximations, and, more recently, via exact analysis. Unfortunately, for analytical tractability, most existing theoretical analysis has assumed an Independent Runtimes (IR) model, wherein the replicas of a job each experience independent runtimes (service times) at different servers. The IR model is unrealistic and has led to theoretical results that can be at odds with computer systems implementation results. This paper introduces a much more realistic model of redundancy. Our model decouples the inherent job size ($X$) from the server-side slowdown ($S$), where we track both $S$ and $X$ for each job. Analysis within the $S\&X$ model is, of course, much more difficult. Nevertheless, we design a dispatching policy, Redundant-to-Idle-Queue, which is both analytically tractable within the $S\&X$ model and has provably excellent performance.

Journal ArticleDOI
TL;DR: This paper designs efficient online auctions for cloud resource provisioning that execute in an online fashion, run in polynomial time, provide a truthfulness guarantee, and achieve optimal social welfare for the cloud ecosystem.
Abstract: This paper studies the cloud market for computing jobs with completion deadlines, and designs efficient online auctions for cloud resource provisioning. A cloud user bids for future cloud resources to execute its job. Each bid includes: 1) a utility, reflecting the amount that the user is willing to pay for executing its job, and 2) a soft deadline, specifying the preferred finish time of the job, as well as a penalty function that characterizes the cost of violating the deadline. We target cloud job auctions that execute in an online fashion, run in polynomial time, provide a truthfulness guarantee, and achieve optimal social welfare for the cloud ecosystem. Towards these goals, we leverage the following classic and new auction design techniques. First, we adapt the posted pricing auction framework for eliciting truthful online bids. Second, we address the challenge posed by soft deadline constraints through a new technique of compact exponential-size LPs coupled with dual separation oracles. Third, we develop efficient social welfare approximation algorithms using the classic primal-dual framework based on both LP duals and Fenchel duals. Empirical studies driven by real-world traces verify the efficacy of our online auction design.

Journal ArticleDOI
TL;DR: Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network from which the content is fetched.
Abstract: The Internet-of-Things (IoT) paradigm envisions billions of devices all connected to the Internet, generating low-rate monitoring and measurement data to be delivered to application servers or end-users. Recently, the possibility of applying in-network data caching techniques to IoT traffic flows has been discussed in research forums. The main challenge, as opposed to the content typically cached at routers, e.g., multimedia files, is that IoT data are transient and therefore require different caching policies. In fact, the emerging location-based services can also benefit from new caching techniques that are specifically designed for small transient data. This paper studies in-network caching of transient data at content routers, considering a key temporal data property: data item lifetime. An analytical model that captures the trade-off between multihop communication costs and data item freshness is proposed. Simulation results demonstrate that caching transient data is a promising information-centric networking technique that can reduce the distance between content requesters and the location in the network from which the content is fetched. To the best of our knowledge, this is a pioneering research work aiming to systematically analyze the feasibility and benefit of using Internet routers to cache transient data generated by IoT applications.
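A minimal sketch of a content-router cache that respects data item lifetimes (illustrative only; the paper's analytical model is not reproduced here, and all class and method names are ours) follows:

```python
import time

class TransientCache:
    """In-network cache for IoT data items that expire after a per-item lifetime."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}          # name -> (value, expiry_timestamp)

    def put(self, name, value, lifetime_s):
        self._evict_expired()
        if len(self.items) >= self.capacity and name not in self.items:
            # Evict the entry closest to expiry: it is the least useful one to keep.
            victim = min(self.items, key=lambda n: self.items[n][1])
            del self.items[victim]
        self.items[name] = (value, time.time() + lifetime_s)

    def get(self, name):
        """Return a fresh copy if cached, otherwise None (the caller fetches from the producer)."""
        self._evict_expired()
        entry = self.items.get(name)
        return entry[0] if entry else None

    def _evict_expired(self):
        now = time.time()
        for n in [n for n, (_, exp) in self.items.items() if exp <= now]:
            del self.items[n]
```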

Journal ArticleDOI
TL;DR: FitLoc, a fine-grained and low-cost DfL approach, is proposed; it can localize multiple targets over various areas, especially in outdoor environments and similarly furnished indoor environments, and greatly reduces the human effort cost.
Abstract: Many emerging applications have driven the fast development of the device-free localization (DfL) technique, which does not require the target to carry any wireless devices. Most current DfL approaches have two main drawbacks in practical applications. First, as the received signal strength (RSS) pre-calibrated for each location (i.e., the radio map) of a specific area cannot be directly applied to new areas, the manual calibration for different areas leads to a high human effort cost. Second, a large number of RSS measurements are needed to accurately localize the targets, which causes a high communication cost, and the variety of areas further exacerbates this problem. This paper proposes FitLoc, a fine-grained and low-cost DfL approach that can localize multiple targets over various areas, especially in outdoor environments and similarly furnished indoor environments. FitLoc unifies the radio map over various areas through a rigorously designed transfer scheme, thus greatly reducing the human effort cost. Furthermore, benefiting from compressive sensing theory, FitLoc collects only a few RSS measurements and performs fine-grained localization, thus reducing the communication cost. Theoretical analyses validate the effectiveness of the problem formulation, and a bound on the localization error is provided. Extensive experimental results illustrate the effectiveness and robustness of FitLoc.

Journal ArticleDOI
TL;DR: D-Watch is introduced: a device-free system built on top of low-cost commodity-off-the-shelf RFID hardware that harnesses the angle-of-arrival information from the RFID tags’ backscatter signals to provide decimeter-level localization accuracy without offline training.
Abstract: Device-free localization, which does not require any device attached to the target, is playing a critical role in many applications, such as intrusion detection, elderly monitoring, and so on. This paper introduces D-Watch, a device-free system built on top of low-cost commodity-off-the-shelf RFID hardware. Unlike previous works which consider multipaths detrimental, D-Watch leverages the “bad” multipaths to provide decimeter-level localization accuracy without offline training. D-Watch harnesses the angle-of-arrival information from the RFID tags’ backscatter signals. The key intuition is that whenever a target blocks a signal’s propagation path, the signal power experiences a drop which can be accurately detected by the proposed novel P-MUSIC algorithm. The proposed wireless phase calibration scheme does not interrupt the ongoing data communication and thus reduces the deployment burden. We implement and evaluate D-Watch with extensive experiments in three different environments. D-Watch achieves a median accuracy of 16.5 cm in a library, 25.5 cm in a laboratory, and 31.2 cm in a hall environment, outperforming the state-of-the-art systems. In a table area of 2 m $\times$ 2 m, D-Watch can track a user’s fist at a median accuracy of 5.8 cm. D-Watch is also capable of localizing multiple targets, which is well known to be challenging in passive localization.