
Showing papers in "Journal of Network and Systems Management in 2015"


Journal ArticleDOI
TL;DR: This paper outlines a conceptual framework for cloud resource management and uses it to structure the state-of-the-art review, and identifies five challenges for future investigation that relate to providing predictable performance for cloud-hosted applications.
Abstract: Resource management in a cloud environment is a hard problem, due to: the scale of modern data centers; the heterogeneity of resource types and their interdependencies; the variability and unpredictability of the load; as well as the range of objectives of the different actors in a cloud ecosystem. Consequently, both academia and industry began significant research efforts in this area. In this paper, we survey the recent literature, covering 250+ publications, and highlighting key results. We outline a conceptual framework for cloud resource management and use it to structure the state-of-the-art review. Based on our analysis, we identify five challenges for future investigation. These relate to: providing predictable performance for cloud-hosted applications; achieving global manageability for cloud systems; engineering scalable resource management systems; understanding economic behavior and cloud pricing; and developing solutions for the mobile cloud paradigm.

506 citations


Journal ArticleDOI
TL;DR: This paper presents a software-defined networking platform for joint HOst-NEtwork (HONE) traffic management, which presents a uniform view of a diverse collection of measurement data, minimizes measurement overhead by performing lazy materialization of fine-grained statistics, and scales the analysis by processing data locally on the end hosts.
Abstract: Applications running in modern datacenters interact with the underlying network in complex ways, making administrators run multiple traffic management tasks to tune the system. However, today's traffic management solutions are limited by an artificial division between the hosts and the network. While network devices only have knowledge of the network layer, the hosts can provide more visibility into how applications interact with the network. This paper proposes to extend the scope of traffic management to the end-host network stack. We present a software-defined networking platform for joint HOst-NEtwork (HONE) traffic management. HONE presents a uniform view of a diverse collection of measurement data, minimizes measurement overhead by performing lazy materialization of fine-grained statistics, and scales the analysis by processing data locally on the end hosts. HONE offers a simple and expressive programming framework for network and service administrators. We evaluate HONE by implementing several canonical traffic management applications, measuring its efficiency with micro-benchmarks, and demonstrating its scalability with larger experiments on Amazon EC2.
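
The lazy-materialization idea can be sketched in a few lines: keep only cheap raw counters, and build the expensive fine-grained per-connection view only when a query asks for it. This toy is illustrative only; `LazyStats` and its names are invented here and are not HONE's actual API.

```python
# Toy sketch of lazy materialization: fine-grained per-connection
# statistics are only computed when a management query requests them.
class LazyStats:
    def __init__(self, raw_packets):
        self.raw = raw_packets      # cheap to keep: (connection, bytes) pairs
        self._per_conn = None       # expensive fine-grained view, built on demand

    def per_connection(self):
        if self._per_conn is None:  # materialize only if asked, and only once
            self._per_conn = {}
            for conn, size in self.raw:
                self._per_conn[conn] = self._per_conn.get(conn, 0) + size
        return self._per_conn

stats = LazyStats([("a", 100), ("b", 50), ("a", 25)])
print(stats.per_connection())       # {'a': 125, 'b': 50}
```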

50 citations


Journal ArticleDOI
TL;DR: It is argued that both SDN and NFV are part of a bigger networking picture, that of the complete lifecycle of the network devices and therefore could take advantage of the definition of a common abstraction model, both for the forwarding model and for the network functions.
Abstract: Recent advances in networking, namely the reemergence of network programmability with a new name, that of Software-Defined Networking (SDN) have paved the way for a new approach to network datapath configuration. SDN provides an abstraction model of the forwarding plane and separates it from the control plane using open APIs. On the other hand, regarding network infrastructure, motivated by the advances of virtualization, major operators created the Network Function Virtualization (NFV) group, as an Industry Specification Group at the European Telecommunications Standards Institute. NFV's goal is to define how network functions such as firewalls and load-balancers or any other data or control plane functionality in the mobile and fixed network, can be virtualized and run as software on high-volume servers instead of using specialized hardware. We argue that both SDN and NFV are part of a bigger networking picture, that of the complete lifecycle of the network devices and therefore could take advantage of the definition of a common abstraction model, both for the forwarding model and for the network functions. Such a model will allow interoperability and homogeneity, as well as one protocol, for control, management and orchestration of the network datapath and the network functions respectively. This paper proposes, defines and designs a reference Network Abstraction Model based on a building block approach and demonstrates an initial proof-of-concept implementation.

49 citations


Journal ArticleDOI
TL;DR: Simulation results show that the FlowBroker architecture, with broker-based collaborative load-balancing and controller-based distributed reputation, can significantly increase the network performance of a multi-domain software-defined network.
Abstract: Software defined networking (SDN) provides great flexibility by decoupling the control and the data planes, allowing for the implementation of new and innovative network control plane experiments. Presently, the reference implementation for SDN architecture is reliant upon a single controller to push flow rules to all SDN-enabled switches in the network, creating a performance bottleneck and single point of failure in large networks. To provide a scalable yet efficient solution to distributed SDN network management, we propose FlowBroker, a hierarchical brokering agent layer which manages and coordinates among distributed SDN controllers, where each controller is charged with the flow-rule maintenance of the switches in its managed domain. This paper proposes the FlowBroker architecture as a collaborative multi-domain approach to load balancing and network performance enhancement in software-defined networks. Moreover, we introduce distributed machine learning agents to allow controllers to evaluate which brokers are more advantageous than others, from a performance-based reputation perspective. Simulation results show that the FlowBroker architecture, with broker-based collaborative load-balancing and controller-based distributed reputation, can significantly increase the network performance of a multi-domain software-defined network. FlowBroker yields significant reductions in traffic loss, end-to-end delay and maximum link utilization when cooperative brokering and reputation are utilized.

46 citations


Journal ArticleDOI
TL;DR: This work outlines an architecture for resource allocation, which centers around a set of cooperating controllers, each solving a problem related to the chosen management objective, and illustrates how a global management objective is mapped onto objectives that govern the execution of these controllers.
Abstract: We consider the problem of jointly allocating compute and network resources in a large Infrastructure-as-a-service cloud. We formulate the problem of optimally allocating resources to virtual data centers (VDCs) for four well-known management objectives: balanced load, energy efficiency, fair allocation, and service differentiation. Then, we outline an architecture for resource allocation, which centers around a set of cooperating controllers, each solving a problem related to the chosen management objective. We illustrate how a global management objective is mapped onto objectives that govern the execution of these controllers. For a key controller, the Dynamic Placement Controller, we give a detailed distributed design, which is based on a gossip protocol that can switch between management objectives. The design is applicable to a broad class of management objectives, which we characterize through a property of the objective function. The property ensures the applicability of an iterative descent method that the gossip protocol implements. We evaluate, through simulation, the dynamic placement of VDCs for a large cloud under changing load and VDC churn. Simulation results show that this controller is effective and highly scalable, up to 100'000 nodes, for the management objectives considered.
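
The flavor of a gossip protocol that iteratively descends toward a management objective can be shown with the simplest such objective, load balancing by pairwise averaging. This is a toy sketch, not the paper's Dynamic Placement Controller; `gossip_balance` and its parameters are invented for illustration.

```python
import random

def gossip_balance(loads, rounds=2000, seed=1):
    """Each round, a random pair of nodes equalizes its load; repeated
    rounds descend toward the balanced objective (the global mean)."""
    loads = list(loads)
    rng = random.Random(seed)
    for _ in range(rounds):
        i = rng.randrange(len(loads))
        j = rng.randrange(len(loads))
        avg = (loads[i] + loads[j]) / 2   # pairwise averaging step
        loads[i] = loads[j] = avg
    return loads

balanced = gossip_balance([10, 0, 4, 2])
# pairwise averaging preserves the total load, so every node
# converges toward the global mean (4.0 here)
```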

45 citations


Journal ArticleDOI
TL;DR: A novel cloud-architecture for RFID supply chain system is proposed and the detail schemes such as authentication, ownership transfer, authority recovery and data sharing are proposed, which enhances the efficiency of supply chain.
Abstract: RFID technology is the popular technology in recent years and applies to many automatic identification applications such as supply chain management, thief-prevention and so on. Supply chain system with the help of RFID system reduces the cost in business operation. Different members in the supply chain connect to the same RFID tag for acquiring their own data. Members can protect their secret information not being revealed by internal attacker, and unauthorized member and cloud service provider cannot decrypt the secret information. However, the traditional architecture lacks of data search and scalability. It causes the difficulty in searching a specific object. When one of supply chain member wants to search data of specific tag in database, it will be difficult to locate and search the object. In this paper we proposed a novel cloud-architecture for RFID supply chain system and we also proposed the detail schemes such as authentication, ownership transfer, authority recovery and data sharing. The proposed method comfort to supply chain environment and data sharing phase enhances the efficiency of supply chain. We analyze our method with security and performance.

34 citations


Journal ArticleDOI
TL;DR: Two bandwidth reservation algorithms with rigorous optimality proofs are proposed to compute the reservation options with the earliest completion time and with the shortest duration for a local BRR to achieve the balanced resource utilization for the network system.
Abstract: Large-scale collaborative e-science requires fast and reliable data transfer with guaranteed performance, which is made possible by reserving bandwidth as needed in advance in high-performance networks. In scientific applications, users typically know the data size, the data available time, and the deadline to finish the data transfer, and they always wish to achieve the earliest possible finish time or the minimum time duration for the data transfer. On the other hand, the network service provider wishes to serve as many users' bandwidth reservation requests (BRRs) as possible to maximize the network resource utilization without compromising their deadlines. Such multi-objective requirements and high system throughput call for a fast and efficient bandwidth reservation strategy that can quickly discover various reservation options in a time-varying network environment. We propose two bandwidth reservation algorithms with rigorous optimality proofs to compute the reservation options with the earliest completion time and with the shortest duration for a local BRR. Our algorithms aim to achieve balanced resource utilization for the network system. Extensive simulation results demonstrate the superiority of the proposed algorithms in terms of execution time, success ratio, success ratio of BRRs with different priorities, and search complexity of BRRs in comparison with similar scheduling algorithms.
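
The earliest-completion-time option can be illustrated on a slotted timeline of residual bandwidth: accumulate whatever is free in each slot until the demand is met. This greedy toy ignores routing and advance reservations and only conveys the objective; it is not one of the paper's algorithms.

```python
def earliest_completion(avail, demand, start=0):
    """avail[t] = free bandwidth in unit-length slot t; return the first
    slot boundary by which `demand` units have been transferred, or
    None if the transfer never finishes within the timeline."""
    sent = 0.0
    for t in range(start, len(avail)):
        sent += avail[t]
        if sent >= demand:
            return t + 1          # completes by the end of slot t
    return None

print(earliest_completion([2, 0, 3, 5], 6))   # 4
```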

32 citations


Journal ArticleDOI
TL;DR: A Software-Defined Networking-based cross-layer approach, called Software-defined Multicast (SDM), is proposed in this paper, enabling ISPs to offer network layer multicast support for OTT and overlay-based live streaming as a service.
Abstract: The increasing amount of over-the-top (OTT) live streams and the lack of global network layer multicast support pose challenges for a scalable and efficient streaming over the Internet. Content Delivery Networks (CDNs) help by delivering the streams to the edge of almost every Internet Service Provider (ISP) network of the world but usually also end there. From there on, the streams are to be delivered to the clients using IP unicast, although an IP multicast functionality would be desirable to reduce the load on CDN nodes, transit links, and the ISP infrastructure. IP multicast is usually not available due to missing control and management features of the protocol. Alternatively, Peer-to-Peer (P2P) mechanisms can be applied to extend the overlay multicast functionality of the CDN towards the clients. Unfortunately, P2P only improves the situation for the CDN but makes it more challenging for the ISP as even more unicast flows are generated between clients inside and outside the ISP network. To tackle this problem, a Software-Defined Networking-based cross-layer approach, called Software-Defined Multicast (SDM), is proposed in this paper, enabling ISPs to offer network layer multicast support for OTT and overlay-based live streaming as a service. SDM is specifically tailored towards the needs of P2P-based video stream delivery originating from outside the ISP network and can easily be integrated with existing streaming systems. Prototypical evaluations show significantly improved network layer transmission efficiencies when compared to other overlay streaming mechanisms, down to a level as low as for IP multicast, at linearly bounded costs.

30 citations


Journal ArticleDOI
TL;DR: This paper introduces a cooperative caching strategy in the context of a Content-Centric-Network (CCN), designed for the treatment of large video streams with on-demand access, and describes an augmented version of the CCNx protocol implementing this policy.
Abstract: Recent work on Information Centric Networking enables the exploitation of the caching resources of the new generation of routers [Content Routers (CR)]. So far, only a basic Least Recently Used strategy implemented on every CR has been proposed. In this paper, we introduce a cooperative caching strategy in the context of a Content-Centric-Network (CCN). This strategy has been designed for the treatment of large video streams with on-demand access. This caching strategy combines the traditional hash-based and directory-based cooperative caching scheme, and addresses the need of the Internet Service Provider by halving the cross-domain traffic. We present a first algorithm, which is a core element of this strategy, then we show the changes that must be brought to the CCN protocol in order to implement this strategy. Thereafter, we prove the advantages of this cooperative policy over standard non-cooperative policies in simple network structures. Finally, we describe an augmented version of the CCNx protocol implementing this policy, and we present a set of simulations that have been conducted on an experimental platform for CCNx.
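
The hash-based half of such a cooperative scheme can be sketched as follows: hashing a content name deterministically assigns it to one content router in the domain, so peers know exactly whom to ask before fetching across domains. The names and hashing choice below are illustrative, not the augmented CCNx protocol.

```python
import hashlib

def responsible_router(name, routers):
    """Map a content name to the single router responsible for caching
    it, by hashing the name onto the list of cooperating routers."""
    h = int(hashlib.sha256(name.encode()).hexdigest(), 16)
    return routers[h % len(routers)]

routers = ["cr1", "cr2", "cr3"]
owner = responsible_router("/videos/movie1/seg42", routers)
# the same name always maps to the same router, so a cache miss can be
# redirected inside the domain instead of generating cross-domain traffic
assert owner == responsible_router("/videos/movie1/seg42", routers)
```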

23 citations


Journal ArticleDOI
TL;DR: The current state of research in this area is surveyed, the results are compared and some future directions for smarter handling of outliers in WSN are presented.
Abstract: Outlier detection is a well-studied problem in various fields. The unique characteristics and constraints of wireless sensor networks (WSN) make this problem especially challenging. Sensors can detect outliers for a plethora of reasons, and these reasons need to be inferred in real time. Here, we survey the current state of research in this area, compare existing approaches, and present some future directions for smarter handling of outliers in WSN.

22 citations


Journal ArticleDOI
TL;DR: The results on different network traces, as well as on the evasion of a Skype classifier, demonstrate that the performance of the signatures is very promising, which implies that the statistical information based on the network layer with the use of ML can achieve high classification accuracy and produce robust signatures.
Abstract: The classification of encrypted network traffic represents an important issue for network management and security tasks including quality of service, firewall enforcement, and security. Traffic classification becomes more challenging since the traditional techniques, such as port numbers or Deep Packet Inspection, are ineffective against Peer-to-Peer Voice over Internet Protocol (VoIP) applications, which use non-standard ports and encryption. Moreover, traffic classification also represents a particularly challenging application domain for machine learning (ML). Solutions should ideally be both simple--therefore efficient to deploy--and accurate. Recent advances in ML provide the opportunity to decompose the original problem into a subset of classifiers with non-overlapping behaviors, in effect providing further insight into the problem domain and increasing the throughput of solutions. In this work, we investigate the robustness of an ML approach to classify encrypted traffic on not only different network traffic but also against evasion attacks. Our ML based approach only employs statistical network traffic flow features without using the Internet Protocol addresses, source/destination ports, and payload information to unveil encrypted VoIP applications in network traffic. What we mean by robust signatures is that the signatures learned by training on one network are still valid when they are applied to traffic coming from totally different locations, networks, time periods, and also against evasion attacks. The results on different network traces, as well as on the evasion of a Skype classifier, demonstrate that the performance of the signatures is very promising, which implies that the statistical information based on the network layer with the use of ML can achieve high classification accuracy and produce robust signatures.
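
The kind of payload-independent statistical flow features the approach relies on can be sketched as follows; the exact feature set used in the paper may differ, and these names are invented for illustration.

```python
from statistics import mean, pstdev

def flow_features(pkt_sizes, pkt_times):
    """Statistical features of one flow: only packet sizes and timing,
    no IP addresses, no ports, no payload inspection."""
    gaps = [b - a for a, b in zip(pkt_times, pkt_times[1:])]
    return {
        "mean_size": mean(pkt_sizes),
        "std_size": pstdev(pkt_sizes),
        "mean_iat": mean(gaps) if gaps else 0.0,  # inter-arrival time
        "n_packets": len(pkt_sizes),
    }

f = flow_features([60, 1500, 60, 1500], [0.0, 0.1, 0.2, 0.3])
print(f["mean_size"], f["n_packets"])   # 780 4
```

A vector like this, computed per flow, is what a classifier would be trained on; since none of the features depend on payload bytes, the signatures survive encryption.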

Journal ArticleDOI
TL;DR: A new load balancing solution that reduces service response times by applying a protocol for the reservation of remote resources and describes a middleware that supports this protocol by managing the trade of resources among a set of servers, allowing them to share their spare capacity in a transparent way.
Abstract: A key issue for good performance of worldwide distributed web services is the efficiency of the load balancing mechanism used to distribute client requests among the replicated servers. The load balancing solution allows providers to make better use of their resources, soften the need for over-provision, and help tolerate abrupt load peaks. In this paper, we propose a new load balancing solution that reduces service response times by applying a protocol for the reservation of remote resources. This protocol prevents the overload of remote servers by limiting the amount of load each server can redirect to the others. We describe a middleware that supports this protocol by managing the trade of resources among a set of servers, allowing them to share their spare capacity in a transparent way. We also present the results of a large set of simulations that encompass an exhaustive set of workload scenarios based on realistic internet traffic models.

Journal ArticleDOI
TL;DR: The main novelty of the proposed autonomic traffic classification system is that it can automatically retrain itself in order to sustain a high classification accuracy over time, and is currently being used by network managers of more than 90 institutions connected to this network.
Abstract: Traffic classification is an important aspect in network operation and management, but challenging from a research perspective. During the last decade, several works have proposed different methods for traffic classification. Although most proposed methods achieve high accuracy, they present several practical limitations that hinder their actual deployment in production networks. For example, existing methods often require a costly training phase or expensive hardware, while their results have relatively low completeness. In this paper, we address these practical limitations by proposing an autonomic traffic classification system for large networks. Our system combines multiple classification techniques to leverage their advantages and minimize the limitations they present when used alone. Our system can operate with Sampled NetFlow data making it easier to deploy in production networks to assist network operation and management tasks. The main novelty of our system is that it can automatically retrain itself in order to sustain a high classification accuracy over time. We evaluate our solution using a 14-day trace from a large production network and show that our system can sustain an accuracy above 96 %, even in presence of sampling, during long periods of time. The proposed system has been deployed in production in the Catalan Research and Education network and it is currently being used by network managers of more than 90 institutions connected to this network.

Journal ArticleDOI
TL;DR: This paper proposes an adaptive monitoring framework where end-to-end probing-based solutions are used to adapt the at-a-point monitoring tools and presents algorithms to select and analyze probes and to dynamically adapt the monitoring policies based on probe analysis.
Abstract: Availability of good quality monitoring data is a vital need for management of today's data centers. However, effective use of monitoring tools demands an understanding of the monitoring requirements that system administrators most often lack. Instead of a well-defined process of defining a monitoring strategy, system administrators adopt a manual and intuition-based approach. In this paper, we propose to replace the ad-hoc, manual, intuition-based approach with a more systematic, automated, and analytics-based approach for system monitoring. We propose an adaptive monitoring framework where end-to-end probing-based solutions are used to adapt the at-a-point monitoring tools. We present a systematic framework to use probes for adjusting monitoring levels. We present algorithms to select and analyze probes and to dynamically adapt the monitoring policies based on probe analysis. We demonstrate the effectiveness of the proposed solution using real-world examples as well as simulations.
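
The probe-driven adaptation loop can be sketched as follows: when an end-to-end probe fails, raise the monitoring level of every component on its path, and relax it again when probes succeed. This is a toy of the general idea, not the paper's algorithms; all names are invented.

```python
def adapt_levels(levels, probe_path, probe_ok):
    """Adjust per-component monitoring levels based on one end-to-end
    probe result. Level 1 = coarse monitoring; higher = finer-grained."""
    for comp in probe_path:
        if probe_ok:
            levels[comp] = max(1, levels[comp] - 1)   # relax to coarse
        else:
            levels[comp] = levels[comp] + 1           # escalate to finer
    return levels

levels = {"web": 1, "app": 1, "db": 1}
adapt_levels(levels, ["web", "db"], probe_ok=False)
print(levels)   # {'web': 2, 'app': 1, 'db': 2}
```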

Journal ArticleDOI
TL;DR: A fault diagnosis and problem isolation solution that addresses the IPTV monitoring challenge and recommends problem-specific remedial action and it is demonstrated that professional service providers can provide timely reports on the quality of IPTV service delivery using this system.
Abstract: Increasing and variable traffic demands due to triple play services pose significant Internet Protocol Television (IPTV) resource management challenges for service providers. Managing subscriber expectations via consolidated IPTV quality reporting will play a crucial role in guaranteeing return-on-investment for players in the increasingly competitive IPTV delivery ecosystem. We propose a fault diagnosis and problem isolation solution that addresses the IPTV monitoring challenge and recommends problem-specific remedial action. IPTV delivery-specific metrics are collected at various points in the delivery topology, the residential gateway and the Digital Subscriber Line Access Multiplexer through to the video Head-End. They are then pre-processed using new metric rules. A semantic uplift engine takes these raw metric logs; it then transforms them into World Wide Web Consortium's standard Resource Description Framework for knowledge representation and annotates them with expert knowledge from the IPTV domain. This system is then integrated with a monitoring visualization framework that displays monitoring events, alarms, and recommends solutions. A suite of IPTV fault scenarios is presented and used to evaluate the feasibility of the solution. We demonstrate that professional service providers can provide timely reports on the quality of IPTV service delivery using this system.

Journal ArticleDOI
TL;DR: This work presents here how autonomous SMCs can be composed and federated into complex structures through the systematic composition of interaction patterns, and presents a formalisation of the model that allows a rigorous verification of the properties satisfied by the SMC interactions before policies are deployed in physical devices.
Abstract: Ubiquitous systems and applications involve interactions between multiple autonomous entities--for example, robots in a mobile ad-hoc network collaborating to achieve a goal, communications between teams of emergency workers involved in disaster relief operations or interactions between patients' and healthcare workers' mobile devices. We have previously proposed the Self-Managed Cell (SMC) as an architectural pattern for managing autonomous ubiquitous systems that comprise both hardware and software components and that implement policy-based adaptation strategies. We have also shown how basic management interactions between autonomous SMCs can be realised through exchanges of notifications and policies, to effectively program management and context-aware adaptations. We present here how autonomous SMCs can be composed and federated into complex structures through the systematic composition of interaction patterns. By composing simpler abstractions as building blocks of more complex interactions it is possible to leverage commonalities across the structural, control and communication views to manage a broad variety of composite autonomous systems including peer-to-peer collaborations, federations and aggregations with varying degrees of devolution of control. Although the approach is more broadly applicable, we focus on systems where declarative policies are used to specify adaptation and on context-aware ubiquitous systems that present some degree of autonomy in the physical world, such as body sensor networks and autonomous vehicles. Finally, we present a formalisation of our model that allows a rigorous verification of the properties satisfied by the SMC interactions before policies are deployed in physical devices.

Journal ArticleDOI
Hongquan Liu, Hongfeng Yang, Yehua Wang, Bing Wang, Yuantao Gu
TL;DR: Coding-aware opportunistic routing (CAR) is proposed in this paper to address this problem on fixed wireless mesh networks (WMNs) and dynamically selects a route for a given flow according to the real-time coding opportunities, which greatly reduces the overhead costs.
Abstract: An intermediate node in an inter-flow network coding scheme needs to know exactly which are the previous hop and next hop of a packet before coding. It is difficult to incorporate inter-flow network coding into opportunistic routing (OR) because the next hop of a packet in OR can't be determined in advance. Coding-aware opportunistic routing (CAR) is proposed in this paper to address this problem on fixed wireless mesh networks (WMNs). Meanwhile, it aims to maximize the number of native packets coded in each single transmission. It dynamically selects a route for a given flow according to the real-time coding opportunities. There are no control packets in CAR, which greatly reduces the overhead costs. CAR produces coded packets that combine a larger number of native packets with a smaller forwarding delay. The forwarder with the largest number of native packets coded together is ultimately selected to send data. Simulations demonstrate that CAR achieves significantly better throughput gains and maintains a reasonable end-to-end delay in both cross topology and mesh topology under both transmission control protocol (TCP) and user datagram protocol (UDP) traffic, as explained below. CAR achieves more than 35 % throughput improvement under TCP traffic and more than 15 % throughput improvement under UDP traffic, compared to other state-of-the-art protocols in cross topology. CAR also provides a several-fold increase in throughput in a large scale network (mesh topology). In short, CAR significantly improves network performance of a WMN.
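
The inter-flow coding step itself is classically an XOR of native packets: one coded transmission replaces two, and each receiver recovers the packet it is missing using the one it already overheard. A minimal sketch, assuming equal-length packets:

```python
def xor_packets(a, b):
    """XOR two equal-length native packets into one coded packet."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"hello from A"
p2 = b"reply from B"
coded = xor_packets(p1, p2)          # one transmission instead of two
assert xor_packets(coded, p2) == p1  # receiver holding p2 recovers p1
assert xor_packets(coded, p1) == p2  # receiver holding p1 recovers p2
```

CAR's contribution is in choosing forwarders and routes so that such coding opportunities arise as often as possible; the XOR itself is the standard building block.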

Journal ArticleDOI
TL;DR: A novel policy-based mechanism to provide context-aware network-wide policies to Software Defined Networking applications, implemented with a policy flow based on property graph models, validated in a transport SDN controller.
Abstract: This paper presents a novel policy-based mechanism to provide context-aware network-wide policies to Software Defined Networking (SDN) applications, implemented with a policy flow based on property graph models. The proposal has been validated in a transport SDN controller, supporting optical network virtualization via slicing of physical resources such as nodes, links and wavelengths, through use case testbed demonstrations of policy enforcement for SDN applications, including optical equalization and virtual optical network control. Additionally, the policy engine incorporates a simulation-assisted pre-setting mechanism for local policy decisions in case of problems in communication with the controller.

Journal ArticleDOI
TL;DR: This paper builds an intelligent agent based hybrid SDN architecture by providing a control and management abstraction and filtering layer, called IRIS-CoMan, which serves as an essential component for reliable, scalable, and secure SDN deployment.
Abstract: A software-defined network (SDN) enables agile network control and configuration as well as shortens the network function deployment time. Despite the projected benefits of an SDN, the abstractions toward the remote and centralized control tend to impose excessive control traffic overhead in order for the controller to acquire global network visibility as well as extend the legacy network's inaccurate and unreliable management problems into the control plane. In addition, many recent SDNs facilitate multiple management pillars (such as customized interfaces and protocols) so that user applications can directly communicate to the data plane to measure and monitor specific information. Not only logical control centralization, but also virtualization of the underlying computing and network resources add demands for more flexible and programmable monitoring functions per the virtual domain. A complex combination of multiple and heterogeneous management channels introduces significant scalability, control tuning, and reliability problems in SDN. In this paper, to address the above control and management problems, we propose a highly scalable and reliable SDN control and management architecture, called IRIS Control and Management (IRIS-CoMan). It builds an intelligent agent based hybrid SDN architecture by providing a control and management abstraction and filtering layer. It serves as an essential component for reliable, scalable, and secure SDN deployment. We present design, implementation, deployment, and feasibility evaluation results of IRIS-CoMan.

Journal ArticleDOI
TL;DR: This study proposes a new approach to detecting abnormal internet traffic that begins with extracting hidden, but important, features by utilizing discrete wavelet transformation and finds that the model detects abnormal internet traffic successfully with high accuracy.
Abstract: Detection of abnormal internet traffic has become a significant area of research in network security. Due to its importance, many predictive models are designed by utilizing machine learning algorithms. The models are well designed to show high performances in detecting abnormal internet traffic behaviors. However, they may not guarantee reliable detection performances for new incoming abnormal internet traffic because they are designed using raw features from imbalanced internet traffic data. Since internet traffic is non-stationary time-series data, it is difficult to identify abnormal internet traffic with the raw features. In this study, we propose a new approach to detecting abnormal internet traffic. Our approach begins with extracting hidden, but important, features by utilizing discrete wavelet transformation. Then, statistical analysis is performed to filter out irrelevant and less important features. Only statistically significant features are used to design a reliable predictive model with logistic regression. A comparative analysis is conducted to determine the importance of our approach by measuring accuracy, sensitivity, and the Area Under the receiver operating characteristic Curve. From the analysis, we found that our model detects abnormal internet traffic successfully with high accuracy.
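
A one-level Haar transform, the simplest discrete wavelet transform, shows how such hidden features separate a smooth trend from bursts; the paper's exact wavelet and pipeline may differ, so treat this as a sketch only.

```python
def haar_dwt(x):
    """One level of an (unnormalized) Haar wavelet transform: split a
    series into pairwise averages (approximation) and pairwise
    half-differences (detail). Assumes even length."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

# a traffic series with one sudden burst at position 4
approx, detail = haar_dwt([4, 6, 10, 12, 100, 2, 8, 8])
print(approx)   # [5.0, 11.0, 51.0, 8.0]
print(detail)   # [-1.0, -1.0, 49.0, 0.0]  <- the burst stands out
```

The large detail coefficient flags the burst even though the raw series is non-stationary, which is the kind of feature a downstream logistic regression model can use.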

Journal ArticleDOI
TL;DR: An online risk management strategy for protecting VoIP infrastructures to minimize the network exposure to security attacks while maintaining the quality of service, through the dynamic application of countermeasures is proposed.
Abstract: Telephony over IP has been widely deployed, supported by the standardization of VoIP signalling and media transfer protocols. This deployment has also led to the emergence of several security threats, including attacks inherited from the IP layer and attacks specific to the application layer. A large variety of security mechanisms has been proposed for addressing them, but these mechanisms may seriously degrade such a critical service. We propose in this paper an online risk management strategy for protecting VoIP infrastructures. The objective is to minimize the network exposure to security attacks while maintaining the quality of service, through the dynamic application of countermeasures. We describe our approach from the formalization of a dedicated risk model to its proof-of-concept implementation into an Asterisk VoIP server. We detail a portfolio of countermeasures and evaluate the performance of our solution with respect to different criteria, including the number of countermeasures, the risk threshold and the size of attack signatures.

Journal ArticleDOI
TL;DR: A new scheduling method, ZERO-ONE scheduling with multiple targets (ZEROONEMT), is proposed to solve Gang scheduling in the Grid, reducing the average waiting time, the average response time, and the standard deviation of the waiting time over all jobs.
Abstract: Gang scheduling combines time-sharing with space-sharing to ensure a short response time for interactive tasks and high overall system throughput. It has been widely studied in different areas, including the Grid. Gang scheduling assigns the tasks belonging to one job to different Grid nodes. During task assignment, there are three targets: (1) to keep the Grid at high resource utilization, (2) to keep jobs at a low average waiting time and execution time, and (3) to keep the system fair between jobs. To meet these targets, we propose a new model based on the waiting time of the jobs, and then a new scheduling method, ZERO-ONE scheduling with multiple targets (ZEROONEMT), to solve Gang scheduling in the Grid. We have conducted extensive evaluations comparing our method with existing methods in a simulation environment and on a real log from a Grid. In the experiments, baseline schedulers, including adapted first-come-first-served and largest-job-first-served, are used to put the performance of our method in context. Experimental results illustrate that the proposed ZEROONEMT reduces the average waiting time, the average response time, and the standard deviation of the waiting time over all jobs.
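The all-or-nothing placement decision at the heart of gang scheduling (a job's tasks either all start together across nodes, or the whole job waits) can be illustrated with a toy routine. The greedy spreading rule and the data shapes below are assumptions for illustration, not the paper's ZEROONEMT method.

```python
def try_gang_assign(free_slots, tasks):
    """Zero-one gang decision: either every task of the job is placed now,
    or the whole job is deferred (returns None). Greedily spreads tasks
    over the nodes with the most free slots."""
    if sum(free_slots.values()) < tasks:
        return None                       # not enough capacity: defer the job
    assignment = {}
    placed = 0
    for node, slots in sorted(free_slots.items(), key=lambda kv: -kv[1]):
        take = min(slots, tasks - placed)
        if take > 0:
            assignment[node] = take
            placed += take
        if placed == tasks:
            return assignment
    return None
```

A real gang scheduler would fold the three targets above (utilization, waiting time, fairness) into the decision of *which* waiting job to place next; this sketch only shows the binary place-or-defer step.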

Journal ArticleDOI
TL;DR: An autonomic synchronous e-training platform based on standard protocols implementing self-management features such as self-deployment, self-organization and self-healing is proposed and the platform deploys an overlay network composed of Real-time Transport Protocol relays acting as multicast/unicast reflectors.
Abstract: Synchronous e-training of human resources uses conferencing services to support training at the workplace. Multimedia data is interchanged between participants in real time, providing an enriched learning experience. Large organizations are usually widely dispersed across several sites, which makes synchronous e-training activities especially difficult. Most of the sites constitute multicast islands where IP multicast is available, so the data transport can use IP multicast within the sites of the organization in order to save bandwidth. In this paper, an autonomic synchronous e-training platform based on standard protocols is proposed, implementing self-management features such as self-deployment, self-organization and self-healing. The platform deploys an overlay network composed of Real-time Transport Protocol relays acting as multicast/unicast reflectors. The overlay is automatically reorganized according to the joining and leaving of participants and to network failures. The use of standard protocols and a modular design enables the interoperability, extensibility and portability of the platform.

Journal ArticleDOI
TL;DR: The proposed method provides microsecond-scale accuracy, applicable to trace events with nanosecond timestamp granularity, and does not add new network traffic, instead using the send and receive times of existing traffic.
Abstract: Time synchronization is a fundamental requirement for many services provided by distributed systems, and several time synchronization protocols have been proposed for this purpose. However, they either achieve high accuracy by adding further network traffic, even more than common protocols such as the network time protocol and the precision time protocol, or spend a lot of time on additional computations. An online distributed tracing and monitoring system, used to identify functional and performance problems in distributed systems, must offer high precision with minimum time overhead and system resource consumption. The aim of this paper is to propose an efficient algorithm for time synchronization in online mode, applicable to all distributed services. The proposed method addresses five key requirements for a practical solution in distributed systems. First, it provides microsecond-scale accuracy, applicable to trace events with nanosecond timestamp granularity. Secondly, it does not add new network traffic, instead using the send and receive times of existing traffic. Thirdly, it synchronizes the distributed traces with an average time complexity of O(1) per synchronization update. Fourthly, it updates the online synchronization parameters immediately, without latency. Finally, it iteratively refines the early estimates without requiring significant buffering of earlier data. Although we use this work for distributed trace synchronization, it is a general, fully incremental, continuous synchronization approach applicable to most synchronization purposes.
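The O(1)-per-update property claimed above can be achieved with incremental least squares over matched (send, receive) timestamp pairs, keeping only a handful of running sums. The sketch below assumes a linear clock model, t_remote ≈ a·t_local + b (drift a, offset b), and omits the paper's iterative refinement of early estimates.

```python
class IncrementalClockSync:
    """Constant-time-per-update estimate of relative clock drift (a) and
    offset (b), fit by incremental least squares over (t_local, t_remote)
    timestamp pairs; only running sums are stored, so no event buffering."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, t_local, t_remote):
        """Feed one matched send/receive timestamp pair (O(1) work)."""
        self.n += 1
        self.sx += t_local
        self.sy += t_remote
        self.sxx += t_local * t_local
        self.sxy += t_local * t_remote

    def params(self):
        """Current least-squares estimates of (drift a, offset b)."""
        d = self.n * self.sxx - self.sx * self.sx
        a = (self.n * self.sxy - self.sx * self.sy) / d
        b = (self.sy - a * self.sx) / self.n
        return a, b

    def to_remote(self, t_local):
        """Convert a local timestamp onto the remote clock's timeline."""
        a, b = self.params()
        return a * t_local + b
```

Each existing message contributes one pair, so the estimate refines continuously without extra traffic, matching the second and fifth requirements in the abstract.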

Journal ArticleDOI
TL;DR: It is found that migration to SDN can be eased by joint migration to PCE, and that the benefits derived from SDN are best exploited in combination with PCE rather than by SDN alone.
Abstract: Conventionally, network migration models study competition between emerging and incumbent technologies by considering the resulting increase in revenue and the associated cost of migration. We advance existing network migration models by considering additional critical factors: (1) synergistic relationships across multiple technologies, (2) reduction in operational expenditures as a reason to migrate, and (3) implications of local network effects on migration decisions. To this end, we propose a novel agent-based migration model incorporating these factors. Based on the model, we analyze a case study of network migration to two emerging networking paradigms, the IETF Path Computation Element (PCE) and Software-Defined Networking (SDN). We validate our model using extensive simulations. Our results demonstrate the synergistic effects of migration to multiple complementary technologies, and show that a technology migration may be eased by joint migration to multiple technologies. In particular, we find that migration to SDN can be eased by joint migration to PCE, and that the benefits derived from SDN are best exploited in combination with PCE rather than by SDN alone.

Journal ArticleDOI
TL;DR: An automated training dataset generation method for on-line P2P traffic classification is proposed to allow frequent classifier retraining; it generates an accurate training dataset by classifying around 94 % of total flows with high accuracy and a low false positive rate.
Abstract: Peer-to-peer (P2P) classifications based on flow statistics have been proven accurate in detecting P2P traffic, but a machine learning classifier is only as good as the quality and recency of the training dataset it uses. Classifying P2P traffic on-line therefore requires removing these limitations. In this paper, an automated training dataset generation for an on-line P2P traffic classification is proposed to allow frequent classifier retraining. A two-stage training dataset generator (TSTDG) is proposed, combining a 3-class heuristic and a 3-class statistical classification to automatically generate a training dataset. In the heuristic stage, traffic is classified as P2P, non-P2P, or unknown. In the statistical stage, a dual Decision Tree is built from the dataset generated in the heuristic stage to reduce the amount of traffic still classified as unknown. The final training dataset is generated from all flows classified in these two stages. The proposed system has been evaluated on traces captured from a campus network. The overall results show that the TSTDG can generate an accurate training dataset by classifying around 94 % of total flows with high accuracy (98.59 %) and a low false positive rate (1.27 %).
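The two-stage idea (heuristic labels first, then a statistical learner for the leftover unknown flows) can be sketched as follows. The port rules, the flow fields, and the one-level decision stump standing in for the paper's dual Decision Tree are all illustrative assumptions.

```python
def heuristic_label(flow):
    """Stage 1: label a flow by simple rules, or defer it as 'unknown'.
    The port/behaviour rules here are illustrative only."""
    if flow["dst_port"] in (80, 443, 25, 53):
        return "non-P2P"                 # well-known application ports
    if flow["dst_port"] >= 1024 and flow["both_tcp_udp"]:
        return "P2P"                     # high port plus dual-protocol use
    return "unknown"

def train_stump(flows, labels, feature="avg_pkt_size"):
    """Stage 2 (toy): a one-feature decision stump trained on the stage-1
    labels, used to resolve the 'unknown' flows. Returns the best
    threshold and the class predicted above it."""
    pairs = sorted((f[feature], l) for f, l in zip(flows, labels))
    best = (0, None, None)               # (num correct, threshold, label above)
    for thr in (v for v, _ in pairs):
        for above in ("P2P", "non-P2P"):
            below = "non-P2P" if above == "P2P" else "P2P"
            correct = sum((l == above) if v > thr else (l == below)
                          for v, l in pairs)
            if correct > best[0]:
                best = (correct, thr, above)
    return best[1], best[2]
```

Flows labeled in either stage then form the training dataset for the on-line classifier; only the (hopefully small) residue of still-unknown flows is discarded.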

Journal ArticleDOI
TL;DR: This paper proposes a simple method, based on network tomography, to estimate end-to-end network traffic from aggregated data, and validates the method with real data from the Abilene and GÉANT backbone networks.
Abstract: Traffic matrices (TM) represent the volumes of end-to-end network traffic between each of the origin-destination pairs. Accurate TM estimates are used by network operators to perform network management functions and traffic engineering tasks. Despite a large number of methods devoted to the problem of traffic matrix estimation, inferring end-to-end network traffic remains a major challenge in large-scale IP backbone networks, due to its inherently ill-posed nature. In this paper, we focus on the problem of end-to-end network traffic reconstruction. Based on the network tomography method, we propose a simple method to estimate end-to-end network traffic from aggregated data. After analyzing the properties of the network tomography method in depth, we put forward compressive sensing reconstruction algorithms to overcome the ill-posed nature of the network tomography model. To satisfy the technical conditions of compressive sensing, we propose a modified network tomography model and show that it follows the constraints of compressive sensing. Finally, we validate our method with real data from the Abilene and GÉANT backbone networks.
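In the network tomography model, the observed link loads y relate to the unknown origin-destination flows x through a routing matrix A, i.e. y = Ax, with far fewer links than flows; compressive sensing recovers a sparse x by l1-regularized least squares. The FISTA solver below is a generic stand-in for the paper's reconstruction algorithms, and the random Gaussian matrix in the test replaces a real routing matrix.

```python
import numpy as np

def fista(A, y, lam=0.001, iters=5000):
    """FISTA (accelerated proximal gradient) for the lasso problem
    min_x 0.5*||A x - y||^2 + lam*||x||_1, which promotes a sparse
    flow vector x consistent with the aggregated link loads y."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = z - (A.T @ (A @ z - y)) / L                            # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum
        x, t = x_new, t_new
    return x
```

With 30 aggregate measurements of a 60-flow vector that has only four nonzero entries, the sparse flows are recovered almost exactly, illustrating why sparsity lets compressive sensing beat the ill-posedness of the underdetermined system.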

Journal ArticleDOI
TL;DR: This paper claims that the social-aware sharing of under-utilized resources in physical proximity will be of paramount importance in future heterogeneous wireless networks, i.e., spontaneous networks (SNs), where neighbors opportunistically and temporarily cooperate for service provisioning.
Abstract: The growing availability of connectivity/computing/storage resources on smartphones and tablets, together with the trend toward frequent and voluntary collaborations in user communities (such as in experience/status sharing and shopping recommendation applications), is enabling novel scenarios of high relevance for the user mass market. In particular, we claim that the social-aware sharing of under-utilized resources in physical proximity will be of paramount importance in future heterogeneous wireless networks, i.e., spontaneous networks (SNs), where neighbors opportunistically and temporarily cooperate for service provisioning. In this paper we propose a novel middleware for multi-hop SNs, specifically focusing on the hard technical challenges of supporting collaborative re-casting of live multimedia flows with dynamic quality adaptation. As original contributions, our middleware prototype (1) adopts a middleware-level multimedia redistribution approach with cross-layer visibility of underlying SNs, (2) dynamically exploits collaborating nodes to monitor and tailor end-to-end streams by splitting them into sub-segments in a completely decentralized way, and (3) performs quality/resource-aware management decisions with limited resource consumption at collaborating peers. The reported experimental results demonstrate that, notwithstanding the middleware-layer approach and the challenging characteristics of multi-hop SNs, our solution effectively supports redistribution of tailored multimedia content with limited overhead.

Journal ArticleDOI
TL;DR: An analytical model for the distribution of the response time, a key metric in the Spotify service, is introduced and found to be accurate, with measurements within 11 % of predictions, within the range of normal load patterns.
Abstract: We model and evaluate the performance of a distributed key-value storage system that is part of the Spotify backend. Spotify is an on-demand music streaming service, offering low-latency access to a library of over 20 million tracks and currently serving over 20 million users. We first present a simplified model of the Spotify storage architecture, in order to make its analysis feasible. We then introduce an analytical model for the distribution of the response time, a key metric in the Spotify service. We parameterize and validate the model using measurements from two different testbed configurations and from the operational Spotify infrastructure. We find that the model is accurate (measurements are within 11 % of predictions) within the range of normal load patterns. In addition, we model the capacity of the Spotify storage system under different object allocation policies and find that measurements on our testbed are within 9 % of the model predictions. The model helps us justify the object allocation policy adopted for the Spotify storage system.
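To give a flavor of this kind of response-time modeling, consider a toy two-branch model in which a request is served from an in-memory store with some probability and from disk otherwise, each branch with its own latency distribution. This is an illustrative stand-in, not the paper's actual model; the branch probability, the exponential latencies, and all parameter values are made up.

```python
import random

def sample_response_time(rng, p_mem=0.8, mean_mem=0.5, mean_disk=8.0):
    """One request: served from memory with probability p_mem, otherwise
    from disk; each branch's latency is exponentially distributed.
    (A toy stand-in for the paper's response-time distribution model.)"""
    mean = mean_mem if rng.random() < p_mem else mean_disk
    return rng.expovariate(1.0 / mean)

def quantile(samples, q):
    """Empirical q-quantile, e.g. the 95th-percentile response time."""
    s = sorted(samples)
    return s[min(int(q * len(s)), len(s) - 1)]
```

Even this toy mixture shows the heavy upper tail that makes percentile response times, rather than the mean, the operationally interesting metric.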

Journal ArticleDOI
TL;DR: This special issue covers a wide scope of SDN management, including a transport network, a broader abstraction platform, a hybrid architecture with a legacy network, a multi-domain architecture, multicast, and end-host management.
Abstract: Software Defined Networking uses software-defined packages or modules to implement network features/functions and their control and management. It separates the network control plane from the traffic forwarding plane: the control plane can be physically distributed, but it is logically centralized. SDN decouples the physical networking infrastructure/resources from the services that utilize them, so that flexibility and programmability in resource assignment can be achieved seamlessly for the desired applications and services. However, a number of network management issues remain to be explored: how developers and engineers should design, build, and manage networks starting from existing networks, and what new functionalities can be offered, and how. We received a large number of submissions from seven different countries. After a rigorous review process in which every paper received at least two reviews, we selected five papers from the open call and an invited paper from industry. This special issue covers a wide scope of SDN management, including a transport network, a broader abstraction platform, a hybrid architecture with a legacy network, a multi-domain architecture, multicast, and end-host management. Some of the contributions are near or already at the operational/deployment stage.