
Showing papers presented at "International Workshop on Quality of Service in 2017"


Proceedings ArticleDOI
14 Jun 2017
TL;DR: A Cloud Assisted Mobile Edge computing (CAME) framework, in which cloud resources are leased to enhance the system computing capacity, together with a linear-complexity algorithm that exploits the linear property of the constraints.
Abstract: Mobile edge computing is envisioned as a promising computing paradigm with the advantage of low latency. However, compared with conventional mobile cloud computing, mobile edge computing is constrained in computing capacity, especially under the scenario of dense population. In this paper, we propose a Cloud Assisted Mobile Edge computing (CAME) framework, in which cloud resources are leased to enhance the system computing capacity. To balance the tradeoff between system delay and cost, strategies for mobile workload scheduling and cloud outsourcing are further devised. Specifically, the system delay is analyzed by modeling the CAME system as a queuing network. In addition, an optimization problem is formulated to minimize the system delay and cost. The problem is proved to be convex, so it can be solved by using the Karush-Kuhn-Tucker (KKT) conditions. Instead of directly solving the KKT conditions, which incurs exponential complexity, an algorithm with linear complexity is proposed by exploiting the linear property of the constraints. Extensive simulations are conducted to evaluate the proposed algorithm. Compared with the fair ratio algorithm and the greedy algorithm, the proposed algorithm reduces the system delay by up to 33% and 46%, respectively, at the same outsourcing cost. Furthermore, the simulation results demonstrate that the proposed algorithm can effectively handle heterogeneous mobile users and balance the tradeoff between computation delay and transmission overhead.
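The KKT structure described above can be illustrated with a toy sketch (my own simplification, not the authors' algorithm): for a separable M/M/1-style delay objective with a single linear workload-conservation constraint, KKT stationarity gives each server's load in closed form in one Lagrange multiplier, which can then be found by bisection. The `schedule` helper and the M/M/1 delay model are illustrative assumptions.

```python
import math

def schedule(mu, total):
    """Split 'total' workload across servers with service rates mu to
    minimize the sum of M/M/1 delays x_i / (mu_i - x_i), subject to
    sum(x_i) == total. KKT stationarity gives x_i = mu_i - sqrt(mu_i/nu)
    (clipped at 0), so we only need to bisect on the multiplier nu."""
    assert total < sum(mu), "system must be stable"

    def assigned(nu):
        # total workload placed for a given multiplier; increasing in nu
        return sum(max(0.0, m - math.sqrt(m / nu)) for m in mu)

    lo, hi = 1e-12, 1e12
    for _ in range(200):
        mid = (lo + hi) / 2
        if assigned(mid) < total:
            lo = mid
        else:
            hi = mid
    nu = (lo + hi) / 2
    return [max(0.0, m - math.sqrt(m / nu)) for m in mu]
```

Faster servers receive more load; this closed-form-plus-multiplier structure is the kind of shortcut that avoids solving the full KKT system directly.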

85 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: A frequent template tree (FT-tree) model is proposed in which frequent combinations of (syslog) words are identified and then used as message templates; FT-tree empirically extracts message templates more accurately than existing approaches and naturally supports incremental learning.
Abstract: Syslogs on switches are a rich source of information for both post-mortem diagnosis and proactive prediction of switch failures in a datacenter network. However, such information can be effectively extracted only through proper processing of syslogs, e.g., using suitable machine learning techniques. A common approach to syslog processing is to extract (i.e., build) templates from historical syslog messages and then match syslog messages to these templates. However, existing template extraction techniques either have low accuracy in learning the “correct” set of templates, or do not support incremental learning, in the sense that the entire set of templates has to be rebuilt (by processing all historical syslog messages again) when a new template is to be added, which is prohibitively expensive computationally for a large datacenter network. To address these two problems, we propose a frequent template tree (FT-tree) model in which frequent combinations of (syslog) words are identified and then used as message templates. FT-tree empirically extracts message templates more accurately than existing approaches, and naturally supports incremental learning. To compare the performance of FT-tree and three other template learning techniques, we evaluated them on two years' worth of failure tickets and syslogs collected from switches deployed across 10+ datacenters of a tier-1 cloud service provider. The experiments demonstrated that FT-tree improved the estimation/prediction accuracy (as measured by F1) by 155% to 188%, and the computational efficiency by 117 to 730 times.
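The core intuition, that frequent words form templates while variable fields are rare, can be shown with a deliberately tiny sketch (a simplification of mine; the actual FT-tree builds a tree over frequent word combinations and supports incremental updates):

```python
from collections import Counter

def extract_templates(messages, min_support=2):
    """Toy frequent-word template extraction (not the authors' FT-tree):
    words appearing in at least min_support messages are treated as
    template words; each message's template is the ordered sequence of
    its template words, with rare (variable) words dropped."""
    df = Counter()
    for msg in messages:
        df.update(set(msg.split()))        # document frequency per word
    frequent = {w for w, c in df.items() if c >= min_support}
    return {tuple(w for w in msg.split() if w in frequent)
            for msg in messages}
```

Here two messages like "link down on port 1" and "link down on port 7" collapse into one template, while the port numbers are discarded as variable fields.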

53 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: This work first studies how to allocate a minimum amount of backup resource for a single SFC request as an integer nonlinear program and provides an optimal solution, and proposes an online heuristic algorithm for mapping multiple SFC requests with the objective of maximizing the number of SFC requests that can be served.
Abstract: Network Function Virtualization (NFV) is a promising technique to greatly improve the effectiveness and flexibility of network services through a process named Service Function Chain (SFC) mapping, with which network functions (NFs) are deployed over virtualized and shared platforms in data centers. NFV typically requires carrier-grade availability, higher than that of conventional cloud-based IT services provided by native IaaS mechanisms, for example. To achieve such high availability, each VNF in an SFC can be provisioned with sufficient on-site backups. However, having too many backups may greatly decrease resource utilization. Therefore, an open challenge is to find an effective method to allocate backup resources in order to maximize the number of SFC requests that can be served while meeting their heterogeneous availability requirements. To address this challenge, we first formulate the allocation of a minimum amount of backup resources for a single SFC request as an integer nonlinear program and provide an optimal solution. Based on this solution, we then propose an online heuristic algorithm for mapping multiple SFC requests, with the objective of maximizing the number of SFC requests that can be served. Last but not least, we introduce a novel backup pooling mechanism to further improve the efficiency of backup resource usage. Through simulations, we show that our proposed algorithm can significantly reduce the resource consumption due to backups and increase the number of co-existing SFC requests that can be served.

47 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: A reinforcement learning-based adaptive resource management algorithm, which aims to strike a balance between QoS revenue and power consumption and is more robust than existing algorithms.
Abstract: For better service provision and utilization of renewable energy, Internet service providers have already built their data centers in geographically distributed locations. These companies balance quality of service (QoS) revenue and power consumption by migrating virtual machines (VMs) and allocating server resources adaptively. However, existing approaches model the QoS revenue by service-level agreement (SLA) violation, and ignore the network communication cost and migration time. In this paper, we propose a reinforcement learning-based adaptive resource management algorithm, which aims to strike a balance between QoS revenue and power consumption. Our algorithm does not need to assume a prior distribution of resource requirements, and is robust under actual workloads. It outperforms existing approaches in three aspects: 1) The QoS revenue is directly modeled by the differentiated revenue of different tasks, instead of by SLA violation. 2) For geo-distributed data centers, the time spent on VM migration and the network communication cost are taken into consideration. 3) The information storage and random action selection of reinforcement learning algorithms are optimized for rapid decision making. Experiments show that our proposed algorithm is more robust than the existing algorithms. Moreover, the power consumption of our algorithm is around 13.3% and 9.6% lower than that of the existing algorithms for non-differentiated and differentiated services, respectively.

47 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: A framework of CO Selection (CS) and VNF Assignment (VA) for distributed deployment of NFV, which first selects a set of COs that minimizes the communication cost among the selected COs, and then employs a shadow-routing based approach to jointly solve the VNF-CO and VNF-server assignment problem.
Abstract: Network functions virtualization (NFV) is increasingly adopted by telecommunications (telco) service providers for cost savings and flexible management. However, deploying virtual network functions (VNFs) in geo-distributed central offices (COs) is not straightforward. Unlike most existing centralized schemes in clouds, the VNFs of a service chain usually need to be deployed in multiple COs due to limited resource capacity and uneven setup costs at various locations. To ensure the Quality of Service of service chains, a key problem for service providers is to determine where each VNF should go, in order to achieve cost efficiency and load balancing of both computing and bandwidth resources across all selected COs. To this end, we present a framework of CO Selection (CS) and VNF Assignment (VA) for distributed deployment of NFV. Specifically, we first select a set of COs that minimizes the communication cost among the selected COs. Then, we employ a shadow-routing based approach, which minimizes the maximum of appropriately defined CO utilizations, to jointly solve the VNF-CO and VNF-server assignment problem. Simulations demonstrate the effectiveness of the CS algorithm, and the asymptotic optimality, scalability, and high adaptivity of the VNF assignment approach.

37 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: The main idea is to first narrow down the domain of the noise distribution parameter, in order to decrease the possibility of violating the battery limits, and then to apply a multi-armed bandit algorithm to further reduce the cost as much as possible.
Abstract: Millions of smart meters, as essential components, are being deployed ubiquitously in the next-generation power system. However, public privacy concerns arise over the leakage of users' power consumption, since smart meters' continuous readings contain customers' behavior patterns. To alleviate this problem, state-of-the-art techniques commonly use a rechargeable battery to hide the actual power consumption. Unfortunately, none of the existing works completely provides rigorous privacy protection at reasonable cost under real-world battery settings, i.e., achieving the well-known differential privacy guarantee economically using batteries with limited charge/discharge rate and capacity. To attain this goal, this paper proposes a differentially private meter reading report mechanism. The main idea is to first narrow down the domain of the noise distribution parameter, in order to decrease the possibility of violating the battery limits. It then applies a multi-armed bandit algorithm to further reduce the cost as much as possible. In addition, a novel switch mechanism is proposed to prevent the meter from reporting its reading when the battery limitations might be violated. The theoretical analysis provides a formal proof of the privacy guarantee of the proposed scheme. Moreover, experimental results show that the privacy protection of the proposed scheme is at least nine times stronger than that of existing solutions, with acceptable extra cost.
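The Laplace-noise-plus-switch idea can be sketched as follows (an illustrative simplification: `dp_report`, its parameters, and the battery model are my assumptions, not the paper's exact mechanism):

```python
import math
import random

def dp_report(reading, eps, max_rate, rng=None):
    """Report a meter reading perturbed with Laplace(1/eps) noise
    (sampled via the inverse CDF), which masks the true consumption.
    The battery must absorb the gap between reported and actual values,
    so if the sampled noise exceeds the charge/discharge rate limit,
    the 'switch' withholds the report instead of violating the limit."""
    rng = rng or random.Random()
    u = rng.random() - 0.5
    noise = -(1.0 / eps) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    if abs(noise) > max_rate:   # battery limit would be violated
        return None             # switch mechanism: suppress the report
    return reading + noise
```

Narrowing the noise parameter domain, as the paper proposes, amounts to keeping the Laplace scale small enough that this suppression branch is rarely taken.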

32 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: This paper considers per-flow counting over the sliding window model, and proposes two novel solutions, ACE and S-ACE, which utilize the counter sharing idea to reduce memory footprint and can be implemented in on-chip SRAMs in modern routers to keep up with the line speed.
Abstract: Per-flow counting for big network data streams is a fundamental problem in various network applications such as traffic monitoring, load balancing, capacity planning, etc. Traditional research focused on designing compact data structures to estimate flow sizes from the beginning of the data stream (i.e., the landmark window model). However, for many applications, the most recent elements of a stream are more significant than those that arrived long ago, which gives rise to the sliding window model. In this paper, we consider per-flow counting over the sliding window model, and propose two novel solutions, ACE and S-ACE. Instead of allocating a separate data structure for each flow, both solutions utilize the counter sharing idea to reduce memory footprint, so they can be implemented in on-chip SRAMs in modern routers to keep up with the line speed. ACE has to reset the sliding window periodically to give precise estimates, while S-ACE, based on a novel segment design, can achieve persistently accurate estimates. Our extensive simulations as well as experimental evaluations based on a real network traffic trace demonstrate that S-ACE can achieve fast processing speed and high measurement accuracy even with a very tight memory budget.
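The counter-sharing principle, many flows sharing a small array of counters instead of one exact counter per flow, is the same idea behind the classic Count-Min sketch, shown below as a stand-in (ACE/S-ACE add the sliding-window segment machinery on top, which this sketch omits):

```python
import hashlib

class CountMin:
    """Illustrative shared-counter sketch (a standard Count-Min, not the
    paper's ACE/S-ACE): flows share d*w counters, and a flow's size is
    estimated as the minimum over its d hashed counters."""
    def __init__(self, d=4, w=64):
        self.rows = [[0] * w for _ in range(d)]
        self.w = w

    def _idx(self, flow, row):
        # independent hash per row, derived from a keyed digest
        h = hashlib.blake2b(f"{row}:{flow}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.w

    def add(self, flow, n=1):
        for r, row in enumerate(self.rows):
            row[self._idx(flow, r)] += n

    def estimate(self, flow):
        return min(row[self._idx(flow, r)] for r, row in enumerate(self.rows))
```

Shared counters can only over-count (hash collisions add, never subtract), so the minimum over the d rows is an upper-bound estimate of the true flow size.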

26 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: This paper designs a novel algorithm that can generate a stable outcome for the many-to-one matching problem with lower and upper bounds (i.e., quality requirement and budget constraint), as well as heterogeneous worker skill levels.
Abstract: Crowdsourcing leverages the collective intelligence of the massive crowd workers to accomplish tasks in a cost-effective way. On a crowdsourcing platform, it is challenging to assign tasks to workers in an appropriate way due to heterogeneity in both tasks and workers. In this paper, we explore the problem of assigning workers with various skill levels to tasks with different quality requirements and budget constraints. We first formulate the task assignment as a many-to-one matching problem, in which multiple workers are assigned to a task, and the task can be successfully completed only if a minimum quality requirement can be satisfied within its limited budget. Different from traditional task assignment mechanisms which focus on utility maximization for the crowdsourcing platform, our proposed matching framework takes into consideration the preferences of individual crowdsourcers and workers towards each other. We design a novel algorithm that can generate a stable outcome for the many-to-one matching problem with lower and upper bounds (i.e., quality requirement and budget constraint), as well as heterogeneous worker skill levels. Through extensive simulations, we show that the proposed algorithm can greatly improve the success ratio of task accomplishment and worker happiness, when compared with existing algorithms.
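The stable many-to-one matching at the heart of this task assignment can be illustrated with textbook deferred acceptance under task capacities (a simplification: the authors' algorithm additionally handles quality lower bounds, budgets, and skill levels):

```python
def stable_match(worker_pref, task_pref, capacity):
    """Minimal many-to-one deferred acceptance: workers propose in
    preference order; each task tentatively holds its 'capacity' best
    proposers (by its own ranking) and bumps the worst when full.
    Assumes every task ranks every worker that may propose to it."""
    rank = {t: {w: i for i, w in enumerate(ws)} for t, ws in task_pref.items()}
    nxt = {w: 0 for w in worker_pref}        # next task each worker proposes to
    held = {t: [] for t in task_pref}
    free = list(worker_pref)
    while free:
        w = free.pop()
        if nxt[w] >= len(worker_pref[w]):
            continue                          # worker exhausted all preferences
        t = worker_pref[w][nxt[w]]
        nxt[w] += 1
        held[t].append(w)
        held[t].sort(key=lambda x: rank[t][x])
        if len(held[t]) > capacity[t]:
            free.append(held[t].pop())        # bump the worst held worker
    return held
```

Because no worker-task pair would rather deviate from the result, the outcome is stable in the matching-theory sense, which is the property the paper's richer algorithm preserves under its quality and budget bounds.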

23 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: This paper proposes the Single Connection Proxy (SCoP) system based on fog computing to merge multiple keep-alive connections into one and push messages in an energy-saving way; experimental results show that the proposed system consumes 30% less energy than the current push service for real-time apps, and 60% less energy for delay-tolerant apps.
Abstract: Energy-saving solutions on smartphone devices can greatly extend a smartphone's battery life. However, today's push services require keep-alive connections to notify users of incoming messages, which consume substantial energy and drain a smartphone's battery quickly over cellular communications. Most keep-alive connections force smartphones to frequently send heartbeat packets that create additional energy-consuming radio tails. No previous work has addressed the high energy consumption of keep-alive connections in smartphone push services. In this paper, we propose the Single Connection Proxy (SCoP) system, based on fog computing, to merge multiple keep-alive connections into one and push messages in an energy-saving way. The design of SCoP satisfies a predefined message delay constraint and minimizes smartphone energy consumption for both real-time and delay-tolerant apps. SCoP is transparent to both smartphones and push servers, and requires no changes to today's push service framework. Theoretical analysis shows that, given a Poisson distribution of incoming messages, SCoP can reduce the energy consumption by up to 50%. We implement the SCoP system, including both the local proxy on the smartphone and the remote proxy on the “Fog”. Experimental results show that the proposed system consumes 30% less energy than the current push service for real-time apps, and 60% less energy for delay-tolerant apps.
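The energy/delay tradeoff SCoP exploits can be sketched with a toy batching function (my own illustration, computing only delivery times, not energy): holding each message up to a delay bound lets many messages share one radio wake-up.

```python
def flush_times(arrivals, max_delay):
    """Given message arrival times, deliver in batches: the first
    message of a batch opens a window of max_delay seconds, and every
    message arriving within it is delivered together at the deadline.
    Fewer deliveries means fewer energy-hungry radio wake-ups/tails."""
    flushes, deadline = [], None
    for t in sorted(arrivals):
        if deadline is not None and t > deadline:
            flushes.append(deadline)   # deliver the finished batch
            deadline = None
        if deadline is None:
            deadline = t + max_delay   # open a new batch window
    if deadline is not None:
        flushes.append(deadline)
    return flushes
```

Four messages arriving at t = 0, 1, 2, 10 with a 3-second bound need only two wake-ups (at t = 3 and t = 13) instead of four, and no message waits longer than the bound.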

23 citations


Proceedings ArticleDOI
14 Jun 2017
TL;DR: A Verifiable Ranked Searchable Symmetric Encryption (VRSSE) scheme that allows a user to perform top-K searches on a dynamic file collection while efficiently verifying the correctness of the search results.
Abstract: Big data has become a hot topic in many areas where the volume and growth rate of data require cloud-based platforms for processing and analysis. Due to open cloud environments with very limited user-side control, existing research suggests encrypting data before outsourcing and adopting Searchable Symmetric Encryption (SSE) to facilitate keyword-based searches on the ciphertexts. However, no prior SSE constructions can simultaneously achieve sublinear search time, efficient update and verification, and on-demand file retrieval, which are all essential to the development of big data. To address this, we propose a Verifiable Ranked Searchable Symmetric Encryption (VRSSE) scheme that allows a user to perform top-K searches on a dynamic file collection while efficiently verifying the correctness of the search results. VRSSE is constructed on the ranked inverted index, which contains multiple inverted lists that link sets of file nodes related to a specific keyword. For verifiable ranked searches, file nodes are ordered according to their ranks for that keyword, and information about a node's prior/following neighbor is encoded with the RSA accumulator. Extensive experiments on real data sets demonstrate the efficiency and effectiveness of our proposed scheme.

23 citations


Proceedings ArticleDOI
Zhili Chen, Xuemei Wei, Hong Zhong, Jie Cui, Yan Xu, Shun Zhang
14 Jun 2017
TL;DR: This paper designs a secure two-party protocol computing a socially efficient double spectrum auction, TDSA, without leaking any information about sellers' requests or buyers' bids beyond the auction outcome, and theoretically proves the security that the design achieves.
Abstract: Truthful spectrum auction is believed to be an effective method for spectrum redistribution. However, privacy concerns have largely hampered the practical applications of truthful spectrum auctions. In this paper, to make the applications of double spectrum auctions practical, we present a secure, efficient and practical double spectrum auction design, SDSA. Specifically, by combining three security techniques: homomorphic encryption, secret sharing and garbled circuits, we design a secure two-party protocol computing a socially efficient double spectrum auction, TDSA, without leaking any information about sellers' requests or buyers' bids beyond the auction outcome. We give the formal security definition in our context, and theoretically prove the security that our design achieves. Experimental results show that our design is efficient and practical even for large-scale double spectrum auctions.

Proceedings ArticleDOI
01 Jun 2017
TL;DR: This paper proposes verifiable SFC (vSFC), the first scheme that allows an enterprise to accurately verify the correct enforcement of SFCs in real time; vSFC is generic and agile, can be deployed on various clouds, and requires no modifications to any NFs in the cloud.
Abstract: Network Function Virtualization (NFV) is an emerging technology that enables network functions (NFs) to be outsourced to the cloud so as to reduce the costs of deploying and maintaining NFs. However, NF outsourcing poses a serious gap between the expected service function chains (SFCs) and the real enforcement, because SFC deployment and management in the cloud are invisible to NF customers (i.e., enterprises). In this paper, we propose verifiable SFC, i.e., vSFC, the first scheme that allows an enterprise to accurately verify the correct enforcement of SFCs in real time. In particular, different from state-of-the-art network function verification schemes, vSFC is generic and agile: it can be deployed on various clouds without requiring modifications to any NFs in the cloud. vSFC detects a wide range of SFC violations, including forwarding path incompliance, flow dropping, and packet injection attacks. To demonstrate the feasibility and performance of vSFC, we implement a vSFC prototype built on top of KVM and conduct experiments with real traces. Our experimental results show that vSFC detects various SFC violations with negligible overhead.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: An efficient and generalized symmetric-key geometric range search scheme on encrypted spatial data in the cloud, which supports queries with different range shapes and dimensions and extends the secure kNN computation with dynamic geometric transformation, which dynamically transforms the points in the dataset and the queried geometric range simultaneously.
Abstract: With cloud services, users can easily host their data in the cloud and retrieve the part needed by search. Searchable encryption is proposed to conduct this process in a privacy-preserving way: it allows a cloud server to perform search over the encrypted data in the cloud according to the search token submitted by the user. However, existing works mainly focus on textual data and rarely take numerical spatial data into account. In particular, geometric range search is an important query type on spatial data, with wide applications in machine learning, location-based services (LBS), computer-aided design (CAD), and computational geometry. In this paper, we propose an efficient and generalized symmetric-key geometric range search scheme on encrypted spatial data in the cloud, which supports queries with different range shapes and dimensions. To provide secure and efficient search, we extend the secure kNN computation with dynamic geometric transformation, which dynamically transforms the points in the dataset and the queried geometric range simultaneously. Besides, we further extend the proposed scheme to support sub-linear search efficiency through novel usage of tree structures. We also present extensive experiments to evaluate the proposed schemes on a real-world dataset. The results show that the proposed schemes are efficient over encrypted datasets and secure against curious cloud servers.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: A non-intrusive method to capture the dependency relationships of components, which improves the feasibility of the root cause localization system and exploits measurement data of both the application layer and the underlying infrastructure, together with a random walk procedure, to improve its accuracy.
Abstract: Anomalies of multitier services running in a cloud platform can be caused by components of the same tenant or by performance interference from other tenants. If the performance of a multitier service degrades, we need to find the root causes precisely to recover the service as soon as possible. In this paper, we argue that cloud providers are in a better position than tenants to solve this problem, and that the solution should be non-intrusive to tenants' services or applications. Based on these two considerations, we propose a solution for cloud providers to help tenants localize the root causes of any anomaly. We design a non-intrusive method to capture the dependency relationships of components, which improves the feasibility of the root cause localization system. Our solution can find root causes whether they are in the same tenant as the anomaly or in other tenants. Our proposed two-step localization algorithm exploits measurement data of both the application layer and the underlying infrastructure, together with a random walk procedure, to improve its accuracy. Our real-world experiments with a three-tier web application running in a small-scale cloud platform show a 38.9% improvement in mean average precision compared to current methods.
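The random-walk step of such a localization algorithm can be illustrated with a minimal sketch (the graph shape, restart rule, and scoring here are my assumptions; the paper's procedure additionally weights edges with measurement data):

```python
import random

def random_walk_scores(graph, start, steps=10000, rng=None):
    """Toy random-walk root-cause scoring: walk the component
    dependency graph starting from the anomalous front-end; components
    visited more often accumulate higher scores and are ranked as
    likelier root causes. Dead ends restart the walk at the front-end."""
    rng = rng or random.Random(1)
    visits = {n: 0 for n in graph}
    node = start
    for _ in range(steps):
        visits[node] += 1
        nbrs = graph[node]                  # downstream dependencies
        node = rng.choice(nbrs) if nbrs else start
    total = sum(visits.values())
    return {n: v / total for n, v in visits.items()}
```

On a dependency graph where the web tier calls the database, walks from "web" concentrate visits on "db", mirroring how the paper's random walk pushes suspicion toward heavily depended-upon components.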

Proceedings ArticleDOI
14 Jun 2017
TL;DR: Link-Coupled TCP (LCTCP) is a new transport solution that leverages the architectural trends of 5G networks to enable accurate satisfaction of the unique requirements of each application.
Abstract: Applications are typically offered a single, generic option for end-to-end transport, which most commonly consists of the particular flavor of TCP that runs in the server host. This arrangement does not suit well the diversity of the requirements that individual applications pose on network metrics like throughput and delay. Link-Coupled TCP (LCTCP) is a new transport solution that leverages the architectural trends of 5G networks to enable accurate satisfaction of the unique requirements of each application. LCTCP first isolates the 5G access link as the only possible network bottleneck for the application flow, then establishes a lightweight signaling channel between the link buffer and the application server to convey critical information for flexible control of the data source. LCTCP can be deployed in the network without modification of the TCP clients. We use a Linux prototype to demonstrate its feasibility and effectiveness.

Proceedings ArticleDOI
Jianyuan Lu, Ying Wan, Yang Li, Chuwen Zhang, Huichen Dai, Yi Wang, Gong Zhang, Bin Liu
01 Jun 2017
TL;DR: This paper proposes a new Bloom filter variant called the Ultra-Fast Bloom Filter (UFBF), leveraging SIMD techniques, and makes three improvements to accelerate membership check speed.
Abstract: The network link speed is increasing at an alarming rate, which requires all network functions on routers/switches to keep pace. The Bloom filter is a widely used membership check data structure in network applications, and it faces the urgent demand of improving membership check speed. To this end, this paper proposes a new Bloom filter variant called the Ultra-Fast Bloom Filter (UFBF), leveraging SIMD techniques. We make three improvements to the UFBF to accelerate membership check speed. First, we develop a novel hash computation algorithm which can compute multiple hash functions in parallel using SIMD instructions. Second, we change the Bloom filter's bit-test process from sequential to parallel. Third, we increase the cache efficiency of membership checks by encoding an element's information into a small block that easily fits into a cache line. Both theoretical analysis and extensive simulations show that the UFBF greatly exceeds state-of-the-art Bloom filter variants in membership check speed.
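The third improvement, confining all of an element's bits to one cache-line-sized block, is the classic "blocked Bloom filter" idea, sketched below without SIMD (the block size, hash choice, and parameters are illustrative, not the UFBF's actual layout):

```python
import hashlib

class BlockedBloom:
    """Cache-friendly blocked Bloom filter sketch: each element hashes
    to a single 512-bit block (modeling one 64-byte cache line), and
    all k of its bits are set and tested inside that one block."""
    def __init__(self, blocks=1024, k=4):
        self.bits = [0] * blocks    # each int is one 512-bit block
        self.k = k

    def _hashes(self, item):
        h = hashlib.blake2b(item.encode(), digest_size=16).digest()
        block = int.from_bytes(h[:4], "big") % len(self.bits)
        offs = [int.from_bytes(h[4 + 2 * i:6 + 2 * i], "big") % 512
                for i in range(self.k)]
        return block, offs

    def add(self, item):
        block, offs = self._hashes(item)
        for o in offs:
            self.bits[block] |= 1 << o

    def __contains__(self, item):
        block, offs = self._hashes(item)
        return all(self.bits[block] >> o & 1 for o in offs)
```

A query touches a single block (one cache line) regardless of k, which is what makes the bit tests amenable to a handful of parallel SIMD instructions in the real design.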

Proceedings ArticleDOI
01 Jun 2017
TL;DR: The developed prototype, named SmartYARN, extends Apache YARN with the learning algorithm, enabling cloud applications to negotiate multiple resources cost-effectively, and performs well in reducing the cost of resource usage while maintaining compliance with the SLA constraints of the cloud service.
Abstract: Cloud applications can achieve similar performance with diverse multi-resource configurations, allowing cloud service providers to benefit from optimal resource allocation to reduce their operating cost. This paper aims to solve the problem of multi-resource negotiation with consideration of both the service-level agreement (SLA) and cost efficiency. Performance and resource demand are usually application-dependent, making the optimization problem complicated, especially when the dimension of the multi-resource configuration is large. To this end, we use reinforcement learning to solve the optimization problem of multi-resource configuration, with simultaneous optimization of learning efficiency and performance guarantees. The developed prototype, named SmartYARN, extends Apache YARN with our learning algorithm, enabling cloud applications to negotiate multiple resources cost-effectively. Extensive evaluations show that SmartYARN performs well in reducing the cost of resource usage while simultaneously maintaining compliance with the SLA constraints of the cloud service.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: ARTPoS, an adaptive radio and transmission power selection system, which makes multiple wireless technologies available at runtime and selects the radio(s) and transmission power(s) most suitable for the current conditions and requirements, can significantly reduce power consumption while maintaining the desired link reliability.
Abstract: Research efforts over the last few decades produced multiple wireless technologies, which are readily available to support communication between devices in various Internet of Things (IoT) applications. However, none of the existing technologies delivers optimal performance across all critical quality of service (QoS) dimensions under varying environmental conditions. Using a single wireless technology therefore cannot meet the demands of varying workloads or changing environmental conditions. This problem is exacerbated with the increasing interest in placing embedded devices on the user's body or other mobile objects in mobile IoT applications. Instead of pursuing a one-radio-fits-all approach, we design ARTPoS, an adaptive radio and transmission power selection system, which makes available multiple wireless technologies at runtime and selects the radio(s) and transmission power(s) most suitable for the current conditions and requirements. Experimental results show that ARTPoS can significantly reduce the power consumption, while maintaining desired link reliability.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: This paper investigates a secure cloud storage protocol based on the classic discrete logarithm problem, which generates data block tags with only basic algebraic operations, which brings substantial computation savings compared with previous work.
Abstract: With the development of cloud storage, data owners no longer physically possess their data, and thus how to ensure the integrity of their outsourced data becomes a challenging task. Several protocols have been proposed to audit cloud storage, all of which rely mainly on data block tags to check data integrity. However, their block tag constructions employ cryptographic operations, which makes them computationally complex. In this paper, we investigate a secure cloud storage protocol based on the classic discrete logarithm problem. Our protocol generates data block tags with only basic algebraic operations, which brings substantial computation savings compared with previous work. We also strictly prove that the proposed protocol is secure under a definition which captures the real-world uses of cloud storage. In order to fit more application scenarios, we extend the proposed protocol to support data dynamics by employing an index vector, and third-party public auditing by using a random masking number, both of which are efficient and provably secure. Finally, theoretical analysis and experimental evaluation are provided to validate the superiority of the proposed protocol.
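The flavor of a discrete-log-based tag can be shown with a toy sketch (the parameters and functions are illustrative, NOT secure, and not the paper's construction): a tag g^m mod p hides block m behind the discrete logarithm problem, and its multiplicative homomorphism lets an auditor check an aggregate of blocks against the product of their tags.

```python
P = 2**127 - 1   # a Mersenne prime; real schemes use far larger, vetted groups
G = 5            # toy generator choice

def block_tag(m, g=G, p=P):
    """Tag for data block m: recovering m from g^m mod p
    is the discrete logarithm problem."""
    return pow(g, m, p)

def verify_sum(tags, claimed_sum, g=G, p=P):
    """Homomorphic audit check: prod(tags) == g^(sum of blocks) mod p,
    so an auditor can verify an aggregate without seeing the blocks."""
    prod = 1
    for t in tags:
        prod = prod * t % p
    return prod == pow(g, claimed_sum, p)
```

Because tag generation here is a single modular exponentiation over plain integers, it hints at why algebraic tags can be much cheaper than constructions built from heavier cryptographic primitives.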

Proceedings ArticleDOI
14 Jun 2017
TL;DR: This paper assesses the layered management architecture in F2C systems, taking into account its distributed nature; preliminary results show the tradeoff observed among controller capacity, the number of controllers, and the number of controller layers in the F2C architecture.
Abstract: The recent deployment of novel network concepts, such as M2M communication and IoT, has undoubtedly stimulated the placement of a new set of services, leveraging both centralized resources in cloud data centers and distributed resources shared by devices at the edge of the network. Moreover, Fog Computing has recently been proposed, with the reduction of service response time as one of its main assets, further enabling the deployment of real-time services. Although QoS-aware network research has originally focused on data-plane issues, the successful deployment of real-time services, which demand very low delay in the allocation of distributed resources, depends on assessing the impact of control decisions on QoS. Recently, Fog-to-Cloud (F2C) computing has been proposed as a hierarchical layered architecture relying on coordinated and distributed management of both fog and cloud resources, enabling the distributed and parallel allocation of resources at distinct layers, thus suitably mapping service demands onto resource availability. In this paper, we assess the layered management architecture in F2C systems, taking into account its distributed nature. Preliminary results show the tradeoff observed among controller capacity, the number of controllers, and the number of controller layers in the F2C architecture.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: It was discovered that Skype significantly under-utilizes the network resources, such as available bandwidth, and the root of these inefficiencies is the poor adaptability of Skype in many aspects, including overlay routing, rate control, state update and call termination.
Abstract: Recent advances in high-speed rails (HSRs), coupled with user demands for communication on the move, are propelling the need for acceptable quality of communication services in high-speed mobility scenarios. This calls for an evaluation of how well popular voice/video call applications, such as Skype, perform in such scenarios. This paper presents the first comprehensive measurement study of Skype voice/video calls in LTE networks on HSRs with a peak speed of 310 km/h in China. We collected 50 GB of performance data, covering a total HSR distance of 39,900 km. We study various objective performance metrics (such as RTT, sending rate, and call drop rate), as well as subjective metrics such as the quality of experience of the calls. We also evaluate the efficiency of Skype's algorithms in terms of the level of utilization of network resources. We observed that the quality of Skype calls degrades significantly on HSRs and that Skype significantly under-utilizes network resources such as available bandwidth. We traced the root of these inefficiencies to Skype's poor adaptability in many aspects, including overlay routing, rate control, state update, and call termination. These findings highlight the need to develop more adaptive voice/video call services for high-speed mobility scenarios.

Proceedings ArticleDOI
01 Jun 2017
TL;DR: This paper is the first to define the joint duplicated deployment and routing (DDR) problem for throughput maximization (or optimal deployment) under a given budget constraint on additional SDN resource cost, and presents an approximation algorithm, based on traffic mapping and randomized rounding, that can improve network throughput by about 26%.
Abstract: To take advantage of software defined networking (SDN) within a limited budget, a natural strategy is to incrementally deploy a few SDN switches (and a limited amount of additional link bandwidth) into the legacy optical network. In such a hybrid optical network, operators can only change the routes of flows that traverse SDN switches. Therefore, to optimize SDN deployment, it is essential to decide the best places to deploy SDN resources (including SDN switches and link bandwidth) while taking the network traffic into consideration. In this paper, we propose a new SDN deployment scheme, called duplicated deployment, to provide a simple and efficient way to build a hybrid network. Based on the proposed deployment scheme, we define, for the first time, the joint duplicated deployment and routing (DDR) problem for throughput maximization (or optimal deployment) with a given budget constraint on the additional SDN resource cost. Due to the NP-hardness of the DDR problem, we then present an approximation algorithm based on traffic mapping and randomized rounding, and prove that the approximation factor is (O(log n), O(log n)) in the worst case and (O(1), O(1)) under most practical situations for the link capacity and flow-table size constraints, where n is the number of devices (including SDN switches and legacy routers) in the hybrid network. Through extensive simulations, we demonstrate the high efficiency of our joint deployment and routing algorithm. For example, our algorithm can improve network throughput by about 26% compared with existing routing mechanisms using the same amount of extra resources.
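Randomized rounding, which the approximation algorithm above builds on, converts a fractional LP routing solution into an integral one by picking each flow's path with probability equal to its fractional share. A minimal sketch of one rounding step (hypothetical data structures, not the paper's algorithm):

```python
import random

def round_paths(fractional, rng):
    """fractional: {flow: [(path, x)]} where the x values for each flow sum
    to 1.0 (an LP solution). Pick one path per flow, with probability x."""
    choice = {}
    for flow, options in fractional.items():
        r = rng.random()
        acc = 0.0
        for path, x in options:
            acc += x
            if r <= acc:
                choice[flow] = path
                break
        else:  # guard against floating-point shortfall
            choice[flow] = options[-1][0]
    return choice

def feasible(choice, demands, capacity):
    """Check link capacities under the rounded (integral) routing."""
    load = {}
    for flow, path in choice.items():
        for link in path:
            load[link] = load.get(link, 0) + demands[flow]
    return all(load[l] <= capacity[l] for l in load)
```

In practice the rounding is repeated several times and the best feasible outcome kept; the approximation analysis bounds how much the rounded solution can violate capacity and flow-table constraints.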

Proceedings ArticleDOI
14 Jun 2017
TL;DR: This paper proposes and implements EncSIM, an encrypted and scalable similarity search service, and describes a novel encrypted index construction for EncSIM based on searchable encryption to guarantee the security of service while preserving performance benefits of all-pairs LSH.
Abstract: Similarity-oriented services serve as a foundation in a wide range of data analytic applications such as machine learning, targeted advertising, and real-time decisions. Both industry and academia strive for efficient and scalable similarity discovery and querying techniques to handle massive, complex data records in the real world. In addition to performance, data security and privacy have become an indispensable criterion of quality of service due to increasingly frequent data breaches. To address this serious concern, in this paper we propose and implement EncSIM, an encrypted and scalable similarity search service. The architecture of EncSIM enables parallel query processing over distributed, encrypted data records. To reduce client overhead, EncSIM resorts to a variant of the state-of-the-art similarity search algorithm, all-pairs locality-sensitive hashing (LSH). We describe a novel encrypted index construction for EncSIM based on searchable encryption that guarantees the security of the service while preserving the performance benefits of all-pairs LSH. Moreover, EncSIM supports data record addition with a strong security notion. Intensive evaluations on a Redis cluster demonstrate low client cost, linear scalability, and satisfactory query performance of EncSIM.
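EncSIM builds on locality-sensitive hashing. The plaintext intuition behind LSH-based similarity search can be sketched as follows, using a random-hyperplane variant for cosine similarity with hypothetical names (EncSIM additionally encrypts the index via searchable encryption, which is omitted here):

```python
import random

def make_hash(dim, nbits, rng):
    """Random-hyperplane LSH: each bit is the sign of a dot product with a
    random Gaussian hyperplane, so similar vectors tend to share buckets."""
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(nbits)]
    def h(v):
        return tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0)
                     for plane in planes)
    return h

class SimIndex:
    """Bucket records by LSH signature; a query scans only its own bucket."""
    def __init__(self, dim, nbits, rng):
        self.h = make_hash(dim, nbits, rng)
        self.buckets = {}

    def insert(self, key, v):
        self.buckets.setdefault(self.h(v), []).append(key)

    def query(self, v):
        return self.buckets.get(self.h(v), [])
```

The sign hash is invariant to positive scaling of the input vector, which is one reason it pairs naturally with cosine similarity; all-pairs LSH, as used by EncSIM, refines this basic scheme to reduce the number of hash tables the client must handle.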

Proceedings ArticleDOI
Kai Gao1, Qiao Xiang2, Xin Wang2, Yang Richard Yang2, Jun Bi1 
14 Jun 2017
TL;DR: A novel, on-demand network abstraction service that provides an abstract network view supporting not only accurate end-to-end QoS metrics, which satisfy the requirements of many peer-to-peer applications, but also multi-flow correlation, which is essential for bandwidth-sensitive applications containing many flows to conduct global network optimization.
Abstract: As many applications today migrate to distributed computing and cloud platforms, their user experience depends heavily on network performance. Software Defined Networking (SDN) makes it possible to obtain a global view of the network, introducing the new paradigm of developing adaptive applications with network views. A naive approach to realizing this paradigm, such as distributing the whole network view to applications, is not practical due to scalability and privacy concerns. Existing approaches that provide network abstractions are limited to special cases, such as assuming bottlenecks exist only at network edges, resulting in potentially suboptimal or infeasible decisions. In this paper, we introduce a novel, on-demand network abstraction service that provides an abstract network view supporting not only accurate end-to-end QoS metrics, which satisfy the requirements of many peer-to-peer applications, but also multi-flow correlation, which is essential for bandwidth-sensitive applications containing many flows to conduct global network optimization. We prove that our abstract view is equivalent to the original network view, in the sense that applications can make the same optimal decisions as with complete information. Our evaluations demonstrate that the abstraction guarantees feasibility and optimality for network optimizations and protects network service providers' privacy. They also show that the service can be implemented efficiently; for example, for an extremely large network with 30,000 links and abstraction requests containing 3,000 flows, an abstract network view can be computed in less than one second.
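One simple way to see how an abstract view can be both smaller and decision-equivalent: link capacity constraints that involve exactly the same subset of the requested flows can be merged, keeping only the tightest. The sketch below (hypothetical names; the paper's abstraction is more general, e.g. it also handles dominated constraints) illustrates the idea:

```python
def abstract_view(link_flows, capacity, flows):
    """link_flows: {link: set of flows traversing it}; capacity: {link: cap}.
    Returns merged constraints {frozenset(flows): tightest capacity},
    restricted to the flows the application asked about. Links whose
    flow-incidence (on those flows) is identical impose the same linear
    constraint, so only the minimum capacity matters."""
    wanted = set(flows)
    merged = {}
    for link, fset in link_flows.items():
        key = frozenset(fset & wanted)
        if not key:
            continue  # link carries none of the requested flows: irrelevant
        merged[key] = min(merged.get(key, float('inf')), capacity[link])
    return merged
```

The merged constraint set admits exactly the same feasible bandwidth allocations for the requested flows as the full link-level view, while revealing far fewer topology details to the application.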

Proceedings ArticleDOI
01 Jun 2017
TL;DR: An original mathematical statement of the radio resource scheduling problem is given, and a novel algorithm that solves this problem using dynamic programming is proposed; it outperforms algorithms found in the literature both in goodput (i.e., the amount of data delivered to users within the delay budget) and in the number of users with satisfied QoS requirements.
Abstract: In this paper, we consider the problem of radio resource scheduling for the Industrial Internet and the Tactile Internet. Both paradigms, revolutionary drivers of 5G, are tightly connected with low-latency communications (i.e., latency on the order of 10 ms or even less). We give an original mathematical statement of the radio resource scheduling problem and propose a novel algorithm that solves this problem using dynamic programming. With simulations, we show that the proposed algorithm outperforms those found in the literature both in terms of goodput (i.e., the amount of data delivered to users within the delay budget) and in the number of users with satisfied QoS requirements. Finally, we discuss how the developed algorithm can be implemented in real networking equipment.
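As an illustration of dynamic programming applied to this kind of scheduling problem (not the paper's exact formulation), the sketch below allocates a budget of resource blocks across users to maximize total goodput, given a hypothetical per-user table of goodput versus assigned blocks:

```python
def schedule(goodput, blocks):
    """goodput[i][k] = data user i delivers within its delay budget when
    given k resource blocks (k = 0 .. len(goodput[i]) - 1, goodput[i][0] = 0).
    Classic DP over users: best[b] = max total goodput using at most b blocks."""
    best = [0.0] * (blocks + 1)
    for g in goodput:
        new = best[:]  # k = 0 case: give this user nothing
        for used in range(blocks + 1):
            for k in range(1, min(used, len(g) - 1) + 1):
                cand = best[used - k] + g[k]
                if cand > new[used]:
                    new[used] = cand
        best = new
    return best[blocks]
```

The table runs in O(users x blocks^2) time; a real scheduler would also recover the per-user allocation by storing back-pointers, which is omitted here for brevity.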

Proceedings ArticleDOI
14 Jun 2017
TL;DR: This paper proposes a method based on min-K-cut to cluster VNFs, and aggregates the instances deployed with VNFs from the same cluster into the same server or rack to decrease link bandwidth occupation.
Abstract: Network Function Virtualization has attracted attention from both academia and industry, as it can help service providers obtain agility and flexibility in network service deployment. In general, enterprises require their flows to pass through a specific sequence of virtual network functions (VNFs) that varies from service to service. In addition, for each VNF required by incoming service demands, the operator can either launch a new instance or assign the VNF to an established instance, which makes network service deployment even more complicated. In this paper, we first propose a method based on min-K-cut to cluster the VNFs. With the clustering results as guidance, we determine whether to launch a new instance or reuse an existing one, improving the utilization rate of VNF instances. Furthermore, to decrease link bandwidth occupation, we aggregate the instances deployed with VNFs from the same cluster into the same server or rack. We evaluate our approach in terms of the average link bandwidth occupied by each accepted demand, the instance utilization rate, and the total number of served demands. The simulations show that our approach reduces link occupation effectively while maintaining a high VNF instance utilization rate.
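Min-K-cut partitions the VNF affinity graph so that the total traffic weight crossing clusters is minimized; co-locating each cluster then keeps heavy inter-VNF traffic inside a server or rack. A brute-force sketch for tiny instances (hypothetical names; practical solvers use heuristics or approximation algorithms, since min-K-cut is NP-hard in general):

```python
from itertools import product

def min_k_cut(nodes, edges, k):
    """edges: {(u, v): traffic weight between VNFs u and v}. Enumerate every
    assignment of nodes to k non-empty clusters and keep the one minimizing
    the total weight of edges crossing clusters. Exponential in len(nodes):
    illustration only."""
    best_cost, best_label = float('inf'), None
    for assign in product(range(k), repeat=len(nodes)):
        if len(set(assign)) < k:
            continue  # require k non-empty clusters
        label = dict(zip(nodes, assign))
        cost = sum(w for (u, v), w in edges.items() if label[u] != label[v])
        if cost < best_cost:
            best_cost, best_label = cost, label
    return best_cost, best_label
```

The returned labeling is then the co-location guidance: VNFs sharing a label are candidates for the same server or rack.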

Proceedings ArticleDOI
01 Jun 2017
TL;DR: A novel network management model based on a user-centric approach that allows residential users to define and control access network resources and the dynamic provision of traffic differentiation to fulfill QoS requirements is defined.
Abstract: The promise of SDN and NFV technologies to boost innovation and reduce the time-to-market of new services is changing the way residential networks will be deployed, managed, and maintained in the near future. New user-centric management models for residential networks combining SDN-based residential gateways and cloud technologies have already been proposed, providing flexibility and ease of deployment. Extending the scope of SDN technologies to optical access networks and bringing cloud technologies to the edge of the network enable the creation of advanced residential networks in which complex service function chains can be established to provide traffic differentiation. In this context, this paper defines a novel network management model based on a user-centric approach that allows residential users to define and control access network resources and the dynamic provision of traffic differentiation to fulfill QoS requirements.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: A new CPU power modeling approach that carefully considers the effects of CPU idle power states and achieves higher power estimation accuracy and stability than existing approaches for various benchmark programs and real apps.
Abstract: The CPU is one of the most significant sources of power consumption on smartphones. Power modeling is a key technique and an important tool for power estimation and management, both of which are critical for providing good QoS on smartphones. However, we find that existing CPU power models for smartphones are ill-suited to modern multicore CPUs: they can give high estimation errors (up to 34%) and high variation in estimation accuracy (more than 30%) for different types of workloads on mainstream multicore smartphones. The cause is that existing approaches do not appropriately consider the effects of CPU idle power states in smartphone CPU power modeling. Based on our extensive measurement experiments, we develop a new CPU power modeling approach that carefully accounts for the effects of CPU idle power states. We present the detailed design of our power modeling approach and a prototype CPU power estimation system on commercial multicore smartphones. Evaluation results show that our approach consistently achieves higher power estimation accuracy and stability than existing approaches for various benchmark programs and real apps.
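The modeling idea can be summarized as a linear model whose terms include idle-state residencies alongside the usual per-frequency utilization terms. A minimal sketch with hypothetical coefficient names and values (the paper's model structure and fitted per-device constants may differ):

```python
def estimate_cpu_power(util_by_freq, idle_residency, coeffs):
    """util_by_freq: {freq_mhz: utilization fraction at that frequency};
    idle_residency: {idle_state: fraction of time in that state};
    coeffs: per-device constants obtained from measurement (hypothetical).
    Returns estimated power in milliwatts:
        P = base + sum(c_f * u_f) + sum(d_s * r_s)
    Models that drop the idle-state terms attribute idle-state power
    incorrectly, which is the error source the paper identifies."""
    active = sum(coeffs['active'][f] * u for f, u in util_by_freq.items())
    idle = sum(coeffs['idle'][s] * r for s, r in idle_residency.items())
    return coeffs['base'] + active + idle
```

In a full system the coefficients would be fitted (e.g., by least squares) against power measurements collected while sweeping workloads and idle-state residencies.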

Proceedings ArticleDOI
14 Jun 2017
TL;DR: This work intends to take advantage of emerging content-centric and software-defined networks whose principles are highly aligned with the smart grid communication system needs to propose a communication architecture that is inherently quality of service aware for smart grids.
Abstract: Providing the new power system with a well-tailored communication architecture is a key factor in guaranteeing the expected smart grid capabilities. Among many requirements, quality-of-service awareness is a relevant feature of the smart grid communication system. Indeed, various data flows must be exchanged to support smart grid services, each with different resilience, bandwidth, and latency thresholds. Proposing a communication architecture for smart grids that is inherently quality-of-service aware is, then, the main idea of the present work. We intend to take advantage of emerging content-centric and software-defined networks, whose principles are highly aligned with the needs of the smart grid communication system.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: This paper presents a novel SFC composition framework, called Automatic Composition Toolkit (ACT), which aims to automatically detect the dependencies and conflicts between NFs, so as to compose and verify SFCs before they are enforced on the physical infrastructure.
Abstract: NFV together with SDN promises more flexible and efficient service provisioning by decoupling network functions (NFs) from the physical network topology and devices, but requires real-time, automatic composition and verification of service function chains (SFCs). However, most SFCs today are still built through manual configuration processes, which are slow and error-prone. In this paper, we present a novel SFC composition framework, called Automatic Composition Toolkit (ACT). It aims to automatically detect the dependencies and conflicts between NFs, so as to compose and verify SFCs before they are enforced on the physical infrastructure.
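Detecting dependency conflicts between NFs and producing a valid chain order is, at its core, a topological sort: precedence constraints form a directed graph, and a cycle signals an unsatisfiable (conflicting) set of constraints. A minimal sketch of that core check (hypothetical API, not ACT's actual implementation):

```python
def compose_sfc(nfs, before):
    """before: set of (a, b) pairs meaning NF a must precede NF b in the
    chain. Returns a valid ordering, or raises ValueError on a conflict
    (cyclic dependency). Kahn's algorithm."""
    indeg = {n: 0 for n in nfs}
    succ = {n: [] for n in nfs}
    for a, b in before:
        succ[a].append(b)
        indeg[b] += 1
    ready = [n for n in nfs if indeg[n] == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    if len(order) != len(nfs):
        raise ValueError("conflicting NF dependencies (cycle detected)")
    return order
```

A framework like ACT would additionally infer the `before` constraints themselves (e.g., from how NFs read and modify packet headers) before running such a check and enforcing the chain.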