
Showing papers in "IEEE Transactions on Network and Service Management in 2015"


Journal ArticleDOI
TL;DR: This evaluation shows how NetVM can compose complex network functionality from multiple pipelined VMs and still obtain throughputs up to 10 Gbps, an improvement of more than 250% compared to existing techniques that use SR-IOV for virtualized networking.
Abstract: NetVM brings virtualization to the Network by enabling high bandwidth network functions to operate at near line speed, while taking advantage of the flexibility and customization of low cost commodity servers. NetVM allows customizable data plane processing capabilities such as firewalls, proxies, and routers to be embedded within virtual machines, complementing the control plane capabilities of Software Defined Networking. NetVM makes it easy to dynamically scale, deploy, and reprogram network functions. This provides far greater flexibility than existing purpose-built, sometimes proprietary hardware, while still allowing complex policies and full packet inspection to determine subsequent processing. It does so with dramatically higher throughput than existing software router platforms. NetVM is built on top of the KVM platform and Intel DPDK library. We detail many of the challenges we have solved such as adding support for high-speed inter-VM communication through shared huge pages and enhancing the CPU scheduler to prevent overheads caused by inter-core communication and context switching. NetVM allows true zero-copy delivery of data to VMs both for packet processing and messaging among VMs within a trust boundary. Our evaluation shows how NetVM can compose complex network functionality from multiple pipelined VMs and still obtain throughputs up to 10 Gbps, an improvement of more than 250% compared to existing techniques that use SR-IOV for virtualized networking.

399 citations


Journal ArticleDOI
TL;DR: POCO is presented, a framework for Pareto-based Optimal COntroller placement that provides operators with Pareto-optimal placements with respect to different performance metrics and can be extended to solve similar virtual function placement problems which appear in the context of Network Functions Virtualization (NFV).
Abstract: Software Defined Networking (SDN) marks a paradigm shift towards an externalized and logically centralized network control plane. A particularly important task in SDN architectures is that of controller placement, i.e., the positioning of a limited number of resources within a network to meet various requirements. These requirements range from latency constraints to failure tolerance and load balancing. In most scenarios, at least some of these objectives are competing, thus no single best placement is available and decision makers need to find a balanced trade-off. This work presents POCO, a framework for Pareto-based Optimal COntroller placement that provides operators with Pareto optimal placements with respect to different performance metrics. In its default configuration, POCO performs an exhaustive evaluation of all possible placements. While this is practically feasible for small and medium sized networks, realistic time and resource constraints call for an alternative in the context of large scale networks or dynamic networks whose properties change over time. For these scenarios, the POCO toolset is extended by a heuristic approach that is less accurate, but yields faster computation times. An evaluation of this heuristic is performed on a collection of real world network topologies from the Internet Topology Zoo. Utilizing a measure for quantifying the error introduced by the heuristic approach allows an analysis of the resulting trade-off between time and accuracy. Additionally, the proposed methods can be extended to solve similar virtual functions placement problems which appear in the context of Network Functions Virtualization (NFV).

357 citations
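The exhaustive evaluation POCO performs in its default configuration can be sketched in a few lines. This is an illustrative reconstruction, not the POCO code: the two objectives used here (worst-case node-to-controller latency and controller load imbalance) are example metrics chosen for the sketch; the framework supports others, such as failure tolerance.

```python
from itertools import combinations

def pareto_controller_placements(latency, k):
    """Exhaustively evaluate all k-controller placements and keep the
    Pareto-optimal ones w.r.t. two example metrics:
      - worst-case node-to-controller latency
      - load imbalance (largest minus smallest controller load)
    `latency` is a dict-of-dicts of pairwise node latencies."""
    nodes = list(latency)
    candidates = []
    for placement in combinations(nodes, k):
        # each node attaches to its closest controller
        assign = {v: min(placement, key=lambda c: latency[v][c]) for v in nodes}
        worst_latency = max(latency[v][assign[v]] for v in nodes)
        loads = [sum(1 for v in nodes if assign[v] == c) for c in placement]
        imbalance = max(loads) - min(loads)
        candidates.append((placement, (worst_latency, imbalance)))

    def dominated(m):
        # m is dominated if some other placement is no worse in both
        # metrics and strictly better in at least one
        return any(all(o[i] <= m[i] for i in range(2)) and o != m
                   for _, o in candidates)

    return [(p, m) for p, m in candidates if not dominated(m)]
```

On a 4-node line topology with unit-hop latencies, for example, the 2-controller Pareto front contains exactly the placements that halve the network.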


Journal ArticleDOI
TL;DR: In this article, an integrated energy-aware resource provisioning framework for cloud data centers is proposed, which predicts the number of virtual machine (VM) requests, along with the amount of CPU and memory resources associated with each of these requests, and reduces energy consumption by putting to sleep unneeded PMs.
Abstract: Energy efficiency has recently become a major issue in large data centers due to financial and environmental concerns. This paper proposes an integrated energy-aware resource provisioning framework for cloud data centers. The proposed framework: (i) predicts the number of virtual machine (VM) requests expected to arrive at cloud data centers in the near future, along with the amount of CPU and memory resources associated with each of these requests; (ii) provides accurate estimations of the number of physical machines (PMs) that cloud data centers need in order to serve their clients; and (iii) reduces energy consumption of cloud data centers by putting unneeded PMs to sleep. Our framework is evaluated using real Google traces collected over a 29-day period from a Google cluster containing over 12,500 PMs. These evaluations show that our proposed energy-aware resource provisioning framework achieves substantial energy savings.

130 citations
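The estimation and sleep-scheduling steps of such a framework can be illustrated with a minimal sketch: given predicted VM requests as (CPU, memory) pairs, estimate the PMs needed by first-fit-decreasing packing and put the rest to sleep. The packing heuristic and the function names are hypothetical stand-ins, not the paper's actual estimator.

```python
def pms_needed(vm_requests, pm_cpu, pm_mem):
    """First-fit-decreasing estimate of the number of PMs needed to host
    the predicted VM requests, each a (cpu, mem) demand pair."""
    pms = []  # per-PM remaining capacity: [free_cpu, free_mem]
    for cpu, mem in sorted(vm_requests, reverse=True):
        for pm in pms:
            if pm[0] >= cpu and pm[1] >= mem:
                pm[0] -= cpu
                pm[1] -= mem
                break
        else:
            # no existing PM fits: power on a fresh one
            pms.append([pm_cpu - cpu, pm_mem - mem])
    return len(pms)

def pms_to_sleep(total_pms, vm_requests, pm_cpu, pm_mem):
    """PMs that can be put to sleep given the predicted demand."""
    return max(0, total_pms - pms_needed(vm_requests, pm_cpu, pm_mem))
```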


Journal ArticleDOI
TL;DR: This work presents a set of programming abstractions modeling the fundamental aspects of a wireless network, namely state management, resource provisioning, network monitoring, and network reconfiguration, and investigates the usefulness, efficiency and flexibility of the platform over a real 802.11-based WLAN.
Abstract: Software-Defined Networking (SDN) has received, in recent years, significant interest from the academic and industrial communities alike. The decoupled control and data planes found in an SDN allow for logically centralized intelligence in the control plane and generalized network hardware in the data plane. Although the current SDN ecosystem provides rich support for wired packet-switched networks, the same cannot be said for wireless networks, where specific radio data-plane abstractions, controllers, and programming primitives are yet to be established. In this work, we present a set of programming abstractions modeling the fundamental aspects of a wireless network, namely state management, resource provisioning, network monitoring, and network reconfiguration. The proposed abstractions hide away the implementation details of the underlying wireless technology, providing programmers with expressive tools to control the state of the network. We also present a Software-Defined Radio Access Network Controller for Enterprise WLANs and a Python-based Software Development Kit implementing the proposed abstractions. Finally, we experimentally evaluate the usefulness, efficiency, and flexibility of the platform over a real 802.11-based WLAN.

117 citations


Journal ArticleDOI
TL;DR: This paper develops a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer, and shows how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.
Abstract: The heterogeneous nature of the applications, technologies and equipment that today's networks have to support has made the management of such infrastructures a complex task. The Software-Defined Networking (SDN) paradigm has emerged as a promising solution to reduce this complexity through the creation of a unified control plane independent of specific vendor equipment. However, designing an SDN-based solution for network resource management raises several challenges, as it should exhibit flexibility, scalability and adaptability. In this paper, we present a new SDN-based management and control framework for fixed backbone networks, which provides support for both static and dynamic resource management applications. The framework consists of three layers which interact with each other through a set of interfaces. We develop a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer. We then show how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.

93 citations


Journal ArticleDOI
TL;DR: This work presents an approach for building the multicast mechanism whereby multicast flows are processed by NFV before reaching their end users, and proposes a routing algorithm and a method for building an appropriate multicast topology.
Abstract: Many multicast services such as live multimedia distribution and real-time event monitoring require multicast mechanisms that involve network functions (e.g., firewall and video transcoding). Network function virtualization (NFV) is a concept that proposes using virtualization to implement network functions on infrastructure building blocks (such as high-volume servers and virtual machines), where software provides the functionality of existing purpose-built network equipment. We present an approach for building a multicast mechanism whereby multicast flows are processed by NFV before reaching their end users. We propose a routing algorithm and a method for building an appropriate multicast topology.

83 citations


Journal ArticleDOI
TL;DR: A novel approach promoting ISP and CDN collaboration based on a minimal deployment of software-defined networking switches in the ISP's network is presented, which complements standard DNS-based redirection by allowing for a migration of high-volume flows between surrogates in the backend even if the communication has state information, such as Hypertext Transfer Protocol sessions.
Abstract: The collaboration of Internet service providers (ISPs) and content distribution network (CDN) providers was shown to be beneficial for both parties in a number of recent works. Influencing CDN edge server (surrogate) selection allows the ISP to manage the rising amount of traffic emanating from CDNs and to reduce the operational expenditures (OPEX) of its infrastructure, e.g., by preventing peered traffic. At the same time, including the ISP's hidden network knowledge in the surrogate selection process positively influences the quality of service a CDN provider can deliver. As a large amount of CDN traffic is video-on-demand traffic, this paper investigates the topic of CDN/ISP collaboration from the perspective of high-volume, long-living flows. These types of flows are hardly manageable with state-of-the-art Domain Name System (DNS)-based redirection, as a reassignment of flows during the session is difficult to achieve. Consequently, varying loads on surrogates caused by flash crowds and congestion events in the ISP's network are hard to compensate for. This paper presents a novel approach promoting ISP and CDN collaboration based on a minimal deployment of software-defined networking switches in the ISP's network. The approach complements standard DNS-based redirection by allowing for a migration of high-volume flows between surrogates in the backend even if the communication has state information, such as Hypertext Transfer Protocol sessions. In addition to a proof of concept, the evaluation identifies factors influencing performance and shows large performance increases when compared to standard DNS-based redirection.

80 citations


Journal ArticleDOI
TL;DR: A one-shot, unsplittable flow VNE solution based on column generation that ensures embedding accuracy, while the use of column generation is aimed at enhancing the computation time to make the approach more scalable.
Abstract: As the virtualization of networks continues to attract attention from both industry and academia, the virtual network embedding (VNE) problem remains a focus of researchers. This paper proposes a one-shot, unsplittable flow VNE solution based on column generation. We start by formulating the problem as a path-based mathematical program called the primal, for which we derive the corresponding dual problem. We then propose an initial solution which is used, first, by the dual problem and then by the primal problem to obtain a final solution. Unlike most approaches, our focus is not only on embedding accuracy but also on the scalability of the solution. In particular, the one-shot nature of our formulation ensures embedding accuracy, while the use of column generation is aimed at enhancing the computation time to make the approach more scalable. In order to assess the performance of the proposed solution, we compare it against four state-of-the-art approaches as well as the optimal link-based formulation of the one-shot embedding problem. Experiments on a large mix of virtual network (VN) requests show that our solution is near optimal (achieving about 95% of the acceptance ratio of the optimal solution), with a clear improvement over existing approaches in terms of VN acceptance ratio and average substrate network (SN) resource utilization, and a considerable improvement (92% for an SN of 50 nodes) in time complexity compared to the optimal solution.

70 citations


Journal ArticleDOI
Yonghong Fu1, Jun Bi1, Ze Chen1, Kai Gao1, Baobao Zhang1, Guangxu Chen1, Jianping Wu1 
TL;DR: Orion is proposed, a hybrid hierarchical control plane for large-scale networks that can effectively reduce the computational complexity of an SDN control plane by several orders of magnitude and is implemented to verify the feasibility and effectiveness.
Abstract: The decoupled architecture and the fine-grained flow-control feature limit the scalability of a flow-based software-defined network (SDN). In order to address this problem, some studies construct a flat control plane architecture; others build a hierarchical control plane architecture to improve the scalability of an SDN. However, the two kinds of structure still have unresolved issues: A flat control plane structure cannot solve the superlinear computational complexity growth of the control plane when the SDN scales to a large size, and the centralized abstracted hierarchical control plane structure brings a path stretch problem. To address these two issues, we propose Orion, a hybrid hierarchical control plane for large-scale networks. Orion can effectively reduce the computational complexity of an SDN control plane by several orders of magnitude. We also design an abstracted hierarchical routing method to solve the path stretch problem. Furthermore, we propose a hierarchical fast reroute method to illustrate how to achieve fast rerouting in the proposed hybrid hierarchical control plane. Orion is implemented to verify the feasibility of the hybrid hierarchical approach. Finally, we verify the effectiveness of Orion from both the theoretical and experimental aspects.

63 citations


Journal ArticleDOI
TL;DR: This paper investigates the visibility of VN Providers on substrate network resources and questions the suitability of topology-based requests for VNE, and investigates the suboptimality of LID on VNE against a "best-case" scenario where the complete network topology and resource availability information is available to VN Providers.
Abstract: The ever-increasing need to diversify the Internet has recently revived the interest in network virtualization. Wide-area virtual network (VN) deployment raises the need for VN embedding (VNE) across multiple Infrastructure Providers (InPs), due to the InP's limited geographic footprint. Multi-provider VNE, in turn, requires a layer of indirection, interposed between the Service Providers and the InPs. Such brokers, usually known as VN Providers, are expected to have very limited knowledge of the physical infrastructure, since InPs will not be willing to disclose detailed information about their network topology and resource availability to third parties. Such information disclosure policies entail significant implications on resource discovery and allocation. In this paper, we study the challenging problem of multi-provider VNE with limited information disclosure (LID). In this context, we initially investigate the visibility of VN Providers on substrate network resources and question the suitability of topology-based requests for VNE. Subsequently, we present linear programming formulations for: (i) the partitioning of traffic matrix based VN requests into segments mappable to InPs, and (ii) the mapping of VN segments into substrate network topologies. VN request partitioning is carried out under LID, i.e., VN Providers access only information which is not deemed confidential by InPs. We further investigate the suboptimality of LID on VNE against a “best-case” scenario where the complete network topology and resource availability information is available to VN Providers.

52 citations


Journal ArticleDOI
TL;DR: The target of this paper is to define a management model for NFV customers and service providers, a green policy of the customer premises equipment (CPE) nodes, and an analytical model to support their design.
Abstract: In the last few years, SDN and NFV have been introduced with the potential to change the ossified Internet paradigm, with the final goal of creating a more agile and flexible network, at the same time reducing both CAPEX and OPEX costs. For this reason, a lot of research efforts have been devoted to optimize the implementation of these technologies, also inheriting experience from data center management. However, orchestration and management of SDN/NFV nodes present new challenges with respect to data center management, mainly due to the telecommunications context where NFV resides. With this in mind, the target of this paper is to define a management model for NFV customers and service providers, a green policy of the customer premises equipment (CPE) nodes, and an analytical model to support their design. The model is then applied to a case study to demonstrate how it can be used to optimize system performance and choose the most important parameters characterizing the design of a CPE node.

Journal ArticleDOI
TL;DR: This work proposes a holistic model for intradomain networks to characterize the network performance of routing contents to clients and the network cost incurred by globally coordinating the in-network storage capability, and derives the optimal strategy for provisioning the storage capability that optimizes the overall network performance and cost.
Abstract: In content-centric networks, a key challenge is how to optimally provision in-network storage to cache contents, balancing the tradeoffs between network performance and provisioning cost. To address this problem, we first propose a holistic model for intradomain networks to characterize the network performance of routing contents to clients and the network cost incurred by globally coordinating the in-network storage capability. We then derive the optimal strategy for provisioning the storage capability that optimizes the overall network performance and cost, and analyze the performance gains via numerical evaluations on real network topologies. Our results reveal interesting phenomena; for instance, different ranges of the Zipf exponent can lead to opposite optimal strategies, and the tradeoffs between the network performance and the provisioning cost have great impacts on the stability of the optimal strategy. We also demonstrate that the optimal strategy can achieve significant gains in both the load reduction at origin servers and the improvement in routing performance. Moreover, given an optimal coordination level ℓ*, we design a routing-aware content placement (RACP) algorithm that runs on a centralized server. The algorithm computes and assigns contents to each CCN router to store, which minimizes the overall routing cost, e.g., transmission delay or hop count, to deliver contents to clients. By conducting extensive simulations using a large-scale trace dataset collected from a commercial 3G network in China, we demonstrate that our caching scheme achieves 4% to 22% latency reduction on average over state-of-the-art caching mechanisms.
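The role of the Zipf exponent mentioned above can be illustrated with a short sketch that computes the hit ratio obtained by caching the most popular contents. This is a toy popularity model for intuition only, not the paper's holistic network model:

```python
def zipf_hit_ratio(n, alpha, cache_size):
    """Hit ratio when a cache stores the `cache_size` most popular of `n`
    contents whose request popularity follows a Zipf law with exponent
    `alpha` (rank r has weight 1 / r**alpha)."""
    weights = [1.0 / (r ** alpha) for r in range(1, n + 1)]
    return sum(weights[:cache_size]) / sum(weights)
```

A larger exponent concentrates requests on few contents, so a small cache captures most traffic; a smaller exponent flattens popularity and weakens the case for provisioning storage, which is one intuition behind the opposite optimal strategies the paper reports.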

Journal ArticleDOI
TL;DR: This paper proposes an online flow-based routing approach that allows dynamic reconfiguration of existing flows as well as dynamic link rate adaptation, while taking into account users' demands and mobility, and shows that the energy consumption can be reduced by up to 7%, 35%, 44%, and 49% compared to Greedy-OFER, MRC, SP, and LB, respectively.
Abstract: Recent studies have shown that the energy consumption of wireless access networks is a threat to the sustainability of mobile cloud services. Consequently, energy efficient solutions are becoming crucial for both local and wireless access networks. In this paper, we propose a flow-based management framework to achieve energy efficiency in campus networks. We address the problem from the dynamic perspective, where users come and leave the system in an unpredictable way. Specifically, we propose an online flow-based routing approach that allows dynamic reconfiguration of existing flows as well as dynamic link rate adaptation, while taking into account users' demands and mobility. Our approach is compliant with the emerging software defined networking (SDN) paradigm since it can be integrated as an application on top of an SDN controller. To achieve this, we first formulate the flow-based routing problem as an integer linear program (ILP). As this problem is known to be NP-hard, we then propose a simple yet efficient ant colony-based approach to solve the formulated ILP. Through extensive simulations, we show that our proposed approach is able to achieve significant gains in terms of energy consumption, compared to heuristic solutions and conventional routing solutions such as the shortest path (SP) routing, the minimum link residual capacity routing metric (MRC), and the load balancing (LB) scheme. In particular, we show that the energy consumption can be reduced by up to 7%, 35%, 44%, and 49% compared to Greedy-OFER, MRC, SP, and LB, respectively, while ensuring the required quality of service (QoS).

Journal ArticleDOI
TL;DR: A novel algorithm is proposed to identify malicious data injections and build measurement estimates that are resistant to several compromised sensors even when they collude in the attack.
Abstract: Wireless sensor networks (WSNs) are vulnerable and can be maliciously compromised, either physically or remotely, with potentially devastating effects. When sensor networks are used to detect the occurrence of events such as fires, intruders, or heart attacks, malicious data can be injected to create fake events, and thus trigger an undesired response, or to mask the occurrence of actual events. We propose a novel algorithm to identify malicious data injections and build measurement estimates that are resistant to several compromised sensors even when they collude in the attack. We also propose a methodology to apply this algorithm in different application contexts and evaluate its results on three different datasets drawn from distinct WSN deployments. This leads us to identify different tradeoffs in the design of such algorithms and how they are influenced by the application context.
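The core idea of measurement estimates that resist colluding sensors can be sketched with a plain median as the robust aggregator. This is a deliberate simplification of the paper's algorithm, used only to show why collusion by a minority cannot move the estimate; the threshold-based flagging step is likewise a hypothetical illustration.

```python
def robust_estimate(readings):
    """Median-based measurement estimate: sensors reporting arbitrarily
    wrong values cannot shift the estimate as long as they are fewer
    than half of all sensors, even if they collude."""
    s = sorted(readings)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def flag_suspects(readings, threshold):
    """Flag sensors whose deviation from the robust estimate exceeds
    `threshold` as possible malicious injectors."""
    est = robust_estimate(readings)
    return [i for i, r in enumerate(readings) if abs(r - est) > threshold]
```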

Journal ArticleDOI
TL;DR: The basic issues, the technical approaches, and the methodologies for the implementation of power management primitives in the context of the emerging Software Defined Networking are described and an analytical model for the management of a network with these capabilities is proposed.
Abstract: The constant evolution and expansion of the Internet and Internet-related technologies has exposed the limitations of the current networking infrastructures, which are represented by the unsustainable power consumption and low level of scalability. In fact, these infrastructures are still based on the typical, ossified architecture of the TCP/IP paradigm. In order to cope with the Future Internet requirements, recent contributions envisage an evolution towards more programmable and efficient paradigms. In this respect, this paper describes the basic issues, the technical approaches, and the methodologies for the implementation of power management primitives in the context of the emerging Software Defined Networking. In detail, we propose to extend one of the most prominent solutions aimed at increasing networking flexibility, the OpenFlow Protocol, to integrate the energy-aware capabilities offered by the Green Abstraction Layer (GAL). However, the mere introduction of node-level solutions would be of little or no use in the absence of a network-wide management scheme to guarantee inter-operability and effectiveness of the proposed architecture. In this respect, this work also proposes an analytical model for the management of a network with these capabilities. The results show that our solutions are well suited to provide a scalable and efficient network architecture able to manage the orchestration and consolidation of the available resources.

Journal ArticleDOI
TL;DR: This paper proposes a resource management framework allowing cloud providers to provision resources in the form of Virtual Data Centers (VDCs) across a geo-distributed infrastructure with the aim of reducing operational costs and green SLA violation penalties.
Abstract: With the massive adoption of cloud-based services, high energy consumption and carbon footprint of cloud infrastructures have become a major concern in the IT industry. Consequently, many governments and IT advisory organizations have urged IT stakeholders (i.e., cloud provider and cloud customers) to embrace green IT and regularly monitor and report their carbon emissions and put in place efficient strategies and techniques to control the environmental impact of their infrastructures and/or applications. Motivated by this growing trend, we investigate, in this paper, how cloud providers can meet Service Level Agreements (SLAs) with green requirements. In such SLAs, a cloud customer requires from cloud providers that carbon emissions generated by the leased resources should not exceed a fixed bound. We hence propose a resource management framework allowing cloud providers to provision resources in the form of Virtual Data Centers (VDCs) (i.e., a set of virtual machines and virtual links with guaranteed bandwidth) across a geo-distributed infrastructure with the aim of reducing operational costs and green SLA violation penalties. Extensive simulations show that the proposed solution maximizes the cloud provider's profit and minimizes the violation of green SLAs.

Journal ArticleDOI
TL;DR: Performance analysis and simulation results show that even with added security mechanisms, the proposed protocol outperforms similar existing protocols.
Abstract: In this paper, we propose a low-overhead identity-based distributed dynamic address configuration scheme for secure allocation of IP addresses to authorized nodes of a managed mobile ad hoc network. A new node will receive an IP address from an existing neighbor node. Thereafter, each node in a network is able to generate a set of unique IP addresses from its own IP address, which it can further assign to more new nodes. Due to the lack of infrastructure, apart from security issues, such networks pose several design challenges, such as a high packet error rate, network partitioning, and network merging. Our proposed protocol takes care of these issues while incurring less overhead, as it does not require any message flooding mechanism over the entire MANET. Performance analysis and simulation results show that even with added security mechanisms, our proposed protocol outperforms similar existing protocols.

Journal ArticleDOI
TL;DR: This paper studies the problem of scheduling multiple bandwidth reservation requests (BRRs) concurrently within an HPN while achieving their best average transfer performance, and proposes two fast and efficient heuristic algorithms with polynomial-time complexity.
Abstract: Because of the deployment of large-scale experimental and computational scientific applications, big data is being generated on a daily basis. Such large volumes of data usually need to be transferred from the data generating center to remotely located scientific sites for collaborative data analysis in a timely manner. Bandwidth reservation along paths provisioned by dedicated high-performance networks (HPNs) has proved to be a fast, reliable, and predictable way to satisfy the transfer requirements of massive time-sensitive data. In this paper, we study the problem of scheduling multiple bandwidth reservation requests (BRRs) concurrently within an HPN while achieving their best average transfer performance. Two common data transfer performance parameters are considered: the Earliest Completion Time (ECT) and the Shortest Duration (SD). Since oftentimes not all BRRs in a batch can be successfully scheduled, the problems of scheduling all BRRs in a batch while achieving the best average ECT or SD are converted into the problems of scheduling as many BRRs as possible while achieving the best average ECT or SD of the scheduled BRRs, respectively. Both problems are proved to be NP-complete. Two fast and efficient heuristic algorithms with polynomial-time complexity are proposed. Extensive simulation experiments are conducted to compare their performance with that of two proposed naive algorithms on various performance metrics. The performance superiority of the two fast and efficient algorithms is verified.
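The ECT objective can be illustrated for a single BRR over one path whose residual bandwidth is given as a time-slotted profile. This is a toy computation for intuition, not one of the paper's batch-scheduling heuristics:

```python
def earliest_completion_time(residual_bw, slot_len, data_size):
    """Earliest completion time of one bandwidth reservation request:
    the transfer uses the full residual bandwidth of each time slot
    (`residual_bw`, e.g., in Gb/s; slots are `slot_len` seconds long)
    until `data_size` (e.g., in Gb) has been moved. Returns None if the
    transfer cannot finish within the profile's horizon."""
    if data_size <= 0:
        return 0.0
    moved = 0.0
    for i, bw in enumerate(residual_bw):
        cap = bw * slot_len  # data movable in this slot
        if moved + cap >= data_size:
            # finishes partway through slot i
            return i * slot_len + (data_size - moved) / bw
        moved += cap
    return None
```

For example, with 10 Gb/s free in slots 0 and 2 but nothing in slot 1, a 15 Gb transfer starting at t = 0 completes at t = 2.5 s: the idle middle slot delays completion even though total capacity is ample, which is the kind of effect batch schedulers must reason about.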

Journal ArticleDOI
TL;DR: A novel time-aware request model is proposed which enables tenants to specify an estimated required time-duration, in addition to their required server resources for Virtual Machines (VMs) and network bandwidth for their communication, to provide resource guarantees.
Abstract: Increased power usage and network performance variation due to best-effort bandwidth sharing significantly affect tenancy cost, cloud adoption, and data center efficiencies. In this paper, we propose a novel time-aware request model which enables tenants to specify an estimated required time-duration, in addition to their required server resources for Virtual Machines (VMs) and network bandwidth for their communication. We investigate the VM-placement and routing problem, which allocates both server and network resources for the specified time-duration, to provide resource guarantees. Further, we exploit VM-migration while considering its power consumption overhead, to improve power saving and resource utilization. Using the multi-component utilization-based power model, we formulate the problem as an optimization problem that maximizes the acceptance rate while consuming as little power as possible. We develop fast online heuristics that allocate resources for requests, considering their duration and bandwidth demand. We also develop migration policies augmenting these heuristics. For migration heuristics, we propose server-migration and switch-migration approaches, which migrate the VMs between the powered-on servers only if their migrations result in turning off at least one server and switch, respectively. We demonstrate the effectiveness of the proposed heuristics in terms of power saving, acceptance ratio, and migration overhead using comprehensive simulation results.
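The time-aware placement idea can be sketched with a first-fit heuristic that admits a request only if its server has enough CPU at every instant of the requested duration. The single-resource model and the function names are illustrative simplifications of the paper's formulation, which also handles bandwidth, routing, and migration:

```python
def place_requests(requests, n_servers, server_cpu):
    """Time-aware first-fit placement. Each request is (cpu, start,
    duration); it fits on a server if the CPU used by already-placed
    requests overlapping its interval leaves enough room at every
    instant. Returns one server index per request (None = rejected)."""
    placed = [[] for _ in range(n_servers)]  # per-server (cpu, start, end)
    out = []
    for cpu, start, dur in requests:
        end = start + dur
        for s in range(n_servers):
            # usage is piecewise-constant, rising only at request starts,
            # so the peak over [start, end) occurs at `start` or at a
            # placed request's start inside the interval
            points = [start] + [b for _, b, _ in placed[s] if start < b < end]
            peak = max(sum(c for c, b, e in placed[s] if b <= t < e)
                       for t in points)
            if peak + cpu <= server_cpu:
                placed[s].append((cpu, start, end))
                out.append(s)
                break
        else:
            out.append(None)
    return out
```

Because admission checks the whole duration, a request can reuse capacity freed by an earlier tenant whose interval has ended, which is exactly what the estimated time-duration in the request model enables.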

Journal ArticleDOI
TL;DR: It is shown that the probe imposes low overhead and is remarkably effective at detecting performance degradations due to inter-VM interference over a wide variety of workload scenarios and on two different server architectures.
Abstract: Public and private cloud computing environments employ virtualization methods to consolidate application workloads onto shared servers. Modern servers typically have one or more sockets each with one or more computing cores, a multi-level caching hierarchy, a memory subsystem, and an interconnect to the memory of other sockets. While resource management methods may manage application performance by controlling the sharing of processing time and input-output rates, there is generally no management of contention for virtualization kernel resources or for the memory hierarchy and subsystems. Yet such contention can have a significant impact on application performance. Hardware platform specific counters have been proposed for detecting such contention. We show that such counters alone are not always sufficient for detecting contention. We propose a software probe based approach for detecting contention for shared platform resources and demonstrate its effectiveness. We show that the probe imposes low overhead and is remarkably effective at detecting performance degradations due to inter-VM interference over a wide variety of workload scenarios and on two different server architectures. The probe successfully detected virtualization-induced software bottleneck and memory contention on both server architectures. Our approach supports the management of workload placement on shared servers and pools of shared servers.

Journal ArticleDOI
TL;DR: This paper shows, in particular, how virtual bridging and multipath forwarding impact common DCN optimization goals, traffic engineering (TE) and energy efficiency (EE), and assess their utility in the various cases of four different DCN topologies.
Abstract: The increasing adoption of server virtualization has recently favored three key technology advances in data-center networking: the emergence at the hypervisor software level of virtual bridging functions between virtual machines and the physical network; the possibility to dynamically migrate virtual machines across virtualization servers in the data-center network (DCN); a more efficient exploitation of the large path diversity by means of multipath forwarding protocols. In this paper, we investigate the impact of these novel features in DCN optimization by providing a comprehensive mathematical formulation and a repeated matching heuristic for its resolution. We show, in particular, how virtual bridging and multipath forwarding impact common DCN optimization goals, traffic engineering (TE) and energy efficiency (EE), and assess their utility in the various cases of four different DCN topologies. We show that virtual bridging brings a high performance gain when TE is the primary goal and should be deactivated when EE becomes important. Moreover, we show that multipath forwarding can bring relevant gains only when EE is the primary goal and virtual bridging is not enabled.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed technique can control the delay violation ratio of a target VoIP service and keep the ratio extremely low and comparable to that under IEEE 802.11e.
Abstract: This paper proposes a WiFi network virtualization technique to control the connectivity of a target service. The packet-level delay violation ratio can be reduced even in a congested situation by provisioning dedicated base station (BS) resources (a set of dedicated BSs) to the target service and allowing only the corresponding terminals to associate with the BSs. The proposed technique is novel in that BSs are specially configured to use the same MAC address, and thus all the decisions on BS selection and handover are separated from those BSs and terminals and are put together into a centralized controller, while consistent layer-2 data paths in a backhaul network are also cooperatively configured. Simulation results show that the proposed technique can control the delay violation ratio of a target VoIP service and keep the ratio extremely low and comparable to that under IEEE 802.11e. A proof-of-concept prototype including two multi-channel virtualization-capable WiFi BSs and a BS switch is developed using off-the-shelf WiFi modules and a commercial OpenFlow switch. Experimental results show that the terminals can make a handover to a dedicated BS in less than 65 ms without any packet drop and association break, and confirm that the effect of the managed handover is limited even in a VoIP application.

Journal ArticleDOI
TL;DR: This work proposes iTop, an algorithm for inferring the network topology when only partial information is available, and shows that the topologies inferred by iTop significantly improve the performance of fault localization algorithms when compared with other approaches.
Abstract: Full knowledge of the routing topology of the Internet is useful for a multitude of network management tasks. However, the full topology is often not known and is instead estimated using topology inference algorithms. Many of these algorithms use Traceroute to probe paths and then use the collected information to infer the topology. We perform real experiments and show that, in practice, routers may severely disrupt the operation of Traceroute and cause it to only provide partial information. We propose iTop, an algorithm for inferring the network topology when only partial information is available. iTop constructs a virtual topology, which overestimates the number of network components, and then repeatedly merges links in this topology to resolve it toward the structure of the true network. We perform extensive simulations to compare iTop to state-of-the-art inference algorithms. Results show that iTop significantly outperforms previous approaches and its inferred topologies are within 5% of the original networks for all considered metrics. Additionally, we show that the topologies inferred by iTop significantly improve the performance of fault localization algorithms when compared with other approaches.

Journal ArticleDOI
TL;DR: A generic analytical framework of evolutionary dynamics is presented to model the VPEF scheme, and it is theoretically proved that the VPEF scheme's efficiency loss, defined as the ratio of system time in which no users will provide resource, is $4/(8+M)$, where $M$ is the number of users in the community-based collaborative system.
Abstract: This paper focuses on incentivizing cooperative behavior in community-based autonomous networking environments (like mobile social networks, etc.), in which, through dynamically forming virtual and/or physical communities, users voluntarily participate in and contribute resources (or provide services) to the community while also consuming them. Specifically, we proposed a simple but effective EGT (Evolutionary Game Theory)-based mechanism, VPEF (Voluntary Principle and round-based Entry Fee), to drive the networking environment toward cooperation. VPEF builds its incentive mechanism from two simple system rules. The first is VP, meaning that all behaviors are voluntarily conducted by users: users voluntarily participate (after paying the round-based entry fee), voluntarily contribute resource, and voluntarily punish other defectors (incurring extra cost to those so-called punishers). The second is EF, meaning that an arbitrarily small round-based entry fee is set for each user who wants to participate in the community. We presented a generic analytical framework of evolutionary dynamics to model the VPEF scheme, and theoretically proved that the VPEF scheme's efficiency loss, defined as the ratio of system time in which no users will provide resource, is $4/(8+M)$, where $M$ is the number of users in the community-based collaborative system. Finally, the simulated results using content availability as an example verified our theoretical analysis.
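The closed-form result quoted above is easy to evaluate numerically; a one-line helper (hypothetical name) shows how the efficiency loss shrinks as the community grows:

```python
def vpef_efficiency_loss(M):
    """Fraction of system time in which no user provides resource,
    per the paper's stated result 4/(8+M) for an M-user community."""
    if M <= 0:
        raise ValueError("community must have at least one user")
    return 4 / (8 + M)
```

For example, an 8-user community spends a quarter of the time with no provider, while a 92-user community spends only 4% of the time that way, so the loss vanishes as $M$ grows.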

Journal ArticleDOI
TL;DR: An effective Web service ranking approach based on collaborative filtering (CF) by exploring the user behavior is proposed, in which the invocation and query history are used to infer the potential user behavior.
Abstract: Service-oriented computing and Web services are becoming more and more popular, enabling organizations to use the Web as a market for selling their own Web services and consuming existing Web services from others. Nevertheless, with the increasing adoption and presence of Web services, it becomes more difficult to find the most appropriate Web service that satisfies both users' functional and nonfunctional requirements. In this paper, we propose an effective Web service ranking approach based on collaborative filtering (CF) by exploring the user behavior, in which the invocation and query history are used to infer the potential user behavior. CF-based user similarity is calculated through similar invocations and similar queries (including functional queries and QoS queries) between users. Three aspects of Web services, namely functional relevance, CF-based score, and QoS utility, are all considered for the final Web service ranking. To avoid the impact of different units, ranges, and distributions of variables, three ranks are calculated for the three factors respectively. The final Web service ranking is obtained by using a rank aggregation method based on rank positions. We also propose effective evaluation metrics to evaluate our approach. Large-scale experiments are conducted based on a real-world Web service dataset. Experimental results show that the proposed approach outperforms the existing approach in ranking performance.
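The position-based aggregation step above can be sketched with a Borda-style average of rank positions (an illustrative stand-in, not necessarily the paper's exact aggregation rule; names are hypothetical):

```python
def aggregate_ranks(rankings):
    """Combine several ranked lists of the same services into one ranking
    by average rank position. Each inner list orders service IDs best-first
    (e.g., by functional relevance, CF-based score, QoS utility)."""
    services = set(rankings[0])
    avg_pos = {
        s: sum(r.index(s) for r in rankings) / len(rankings)
        for s in services
    }
    # Lower average position means better overall rank.
    return sorted(services, key=lambda s: avg_pos[s])
```

Working on positions rather than raw scores is exactly what makes the three factors comparable despite their different units, ranges, and distributions.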

Journal ArticleDOI
TL;DR: This paper proposes a two-step optimization where the optimal overlays are firstly computed, then an optimal resource allocation based on these pre-computed overlays is performed; and a joint optimization where both optimization problems are simultaneously solved.
Abstract: The delivery of live video channels for services such as twitch.tv leverages the so-called Telco-CDN, a Content Delivery Network (CDN) deployed within the Internet Service Provider (ISP) domain. A Telco-CDN can be regarded as an intra-domain overlay network with tight resources and critical deployment constraints. This paper addresses two problems in this context: (1) the construction of the overlays used to deliver the video channels from the entrypoints of the Telco-CDN to the appropriate edge servers; and (2) the allocation of the required resources to these overlays. Since bandwidth is critical for entrypoints and edge servers, our ultimate goal is to deliver as many video channels as possible while minimizing the total bandwidth consumption. To achieve this goal, we propose two approaches: a two-step optimization, where the optimal overlays are first computed and then an optimal resource allocation based on these pre-computed overlays is performed; and a joint optimization, where both optimization problems are simultaneously solved. We also devise fast heuristic algorithms for each of these approaches. The conducted evaluations of these two approaches and algorithms provide useful insights into the management of critical Telco-CDN infrastructures.

Journal ArticleDOI
TL;DR: This work proposes Attendre, an OpenFlow extension, to mitigate the ill effects of the race conditions in OpenFlow networks and shows that Attendre can reduce verification time by several orders of magnitude, and significantly reduce TCP connection setup time.
Abstract: OpenFlow is a Software Defined Networking (SDN) protocol that is being deployed in many network systems. SDN application verification takes an important role in guaranteeing the correctness of the application. Through our investigation, we discover that application verification can be very inefficient under the OpenFlow protocol since there are many race conditions between the data packets and control plane messages. Furthermore, these race conditions also increase the control plane workload and packet forwarding delay. We propose Attendre, an OpenFlow extension, to mitigate the ill effects of the race conditions in OpenFlow networks. We have implemented Attendre in NICE (a model checking verifier), Open vSwitch (a software virtual switch), and NOX (an OpenFlow controller). Experiments show that Attendre can reduce verification time by several orders of magnitude, and significantly reduce TCP connection setup time.

Journal ArticleDOI
TL;DR: A new nonlinear control approach is presented that enables achieving differentiated performance requirements effectively in virtualized environments through the automated provisioning of resources, using a nonlinear block control structure called the Hammerstein and Wiener model.
Abstract: The efficient management of shared resources in virtualized environments has become an important issue with the advent of cloud computing. This is a challenging management task because the resources of a single physical server may have to be shared between multiple virtual machines (VMs) running applications with different performance objectives, under unpredictable and erratic workloads. A number of existing works have developed performance differentiation and resource management techniques for shared resource environments by using linear feedback control approaches. However, the dominant nonlinearities of performance differentiation schemes and virtualized environments mean that linear control techniques do not provide effective control under a wide range of operating conditions. Instead of using linear control techniques, this paper presents a new nonlinear control approach that enables achieving differentiated performance requirements effectively in virtualized environments through the automated provisioning of resources. By using a nonlinear block control structure called the Hammerstein and Wiener model, a nonlinear feedback control system is integrated into the physical server (hypervisor) to efficiently achieve the performance differentiation objectives. The novelty of this approach is the inclusion of a compensation framework, which reduces the impact of nonlinearities on the management system. The experiments conducted in a virtual machine environment have shown significant improvements in performance differentiation and system stability of the proposed nonlinear control approach compared to a linear control system. In addition, the simulation results demonstrate the scalability of this nonlinear approach, providing stable performance differentiation between 10 applications/VMs.
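The block structure named above is generic: a static input nonlinearity, a linear dynamic block, and a static output nonlinearity in series. A minimal simulation of that structure (illustrative only; the nonlinearities and coefficients here are arbitrary, not the paper's identified model) looks like this:

```python
def hammerstein_wiener_step(x, u, a=0.8, b=0.2,
                            f_in=lambda u: u ** 2,        # input static nonlinearity (illustrative)
                            f_out=lambda y: min(y, 1.0)): # output static nonlinearity (saturation)
    """One step of a generic Hammerstein-Wiener model:
    v = f_in(u)            static input nonlinearity (Hammerstein part)
    x' = a*x + b*v         first-order linear dynamics
    y = f_out(x')          static output nonlinearity (Wiener part)
    Returns the next state and the observed output."""
    x_next = a * x + b * f_in(u)
    return x_next, f_out(x_next)
```

Because the nonlinearities sit only at the input and output, they can be inverted (compensated) around a linear controller, which is the basic idea behind the compensation framework the abstract mentions.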

Journal ArticleDOI
TL;DR: This work shows that a class of DWRR policies provide the service differentiation objectives, without requiring any knowledge about the arrival and the service process statistics, using stochastic control theory.
Abstract: This work focuses on the design, analysis, and evaluation of Dynamic Weighted Round Robin (DWRR) algorithms that can guarantee CPU service shares in clusters of servers. Our motivation comes from the need to provision multiple server CPUs in cloud-based data center environments. Using stochastic control theory, we show that a class of DWRR policies provides the service differentiation objectives without requiring any knowledge about the arrival and service process statistics. The member policies provide the data center administrator with trade-off options, so that the communication and computation overhead of the policy can be adjusted. We further evaluate the proposed policies via simulations, using both synthetic and real traces obtained from a medium-scale mobile computing application.
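The static core of a weighted round robin scheduler can be sketched in a few lines (a minimal credit-based illustration, not the paper's dynamic policy; the dynamic variant would adapt the weights between rounds from measured service shares):

```python
def wrr_schedule(weights, rounds):
    """Serve CPU quanta over `rounds` scheduling rounds in proportion to
    per-class weights, carrying fractional credit between rounds.
    Returns total quanta served per class."""
    credit = {c: 0.0 for c in weights}
    served = {c: 0 for c in weights}
    for _ in range(rounds):
        for c, w in weights.items():
            credit[c] += w           # accumulate weight as credit each round
            quanta = int(credit[c])  # serve only whole quanta
            served[c] += quanta
            credit[c] -= quanta      # keep the fractional remainder
    return served
```

Carrying the fractional credit forward is what lets non-integer weights still converge to the target shares over many rounds, without any knowledge of arrival or service statistics.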

Journal ArticleDOI
TL;DR: This work develops a model-based analysis methodology with simulation validation to identify the best defense protocol settings under which the sensor network lifetime is maximized against selective capture and smart attack.
Abstract: We propose and analyze adaptive network defense management for countering smart attack and selective capture, which aim to cripple the basic data delivery functionality of a base station based wireless sensor network. With selective capture, the adversaries strategically capture sensors and turn them into inside attackers. With smart attack, an inside attacker is capable of performing random, opportunistic, and insidious attacks to evade detection and maximize its chance of success. We develop a model-based analysis methodology with simulation validation to identify the best defense protocol settings under which the sensor network lifetime is maximized against selective capture and smart attack.