
Showing papers presented at the "International Workshop on Quality of Service" in 2014


Proceedings ArticleDOI
26 May 2014
TL;DR: KEEP uses a validation-recombination mechanism to obtain consistent secret keys from the CSI measurements of all subcarriers, achieving a high security level and a fast key-generation rate.
Abstract: Device-to-device (D2D) communication is expected to become a promising technology of next-generation wireless communication systems. Security issues have become technical barriers to D2D communication due to its "open-air" nature and lack of centralized control. Generating symmetric keys individually on different communication parties, without key exchange or distribution, is desirable but challenging. Recent work has proposed to extract keys from measurements of the physical-layer random variations of a wireless channel, e.g., the channel state information (CSI) from orthogonal frequency-division multiplexing (OFDM). Existing CSI-based key extraction methods usually use the measurement results of individual subcarriers. However, our real-world experiments show that CSI measurements from nearby subcarriers have strong correlations, so a generated key may have a large proportion of repeated bit segments; attackers may then crack the key in a relatively short time, which reduces the security level of the generated keys. In this work, we propose a fast secret key extraction protocol, called KEEP. KEEP uses a validation-recombination mechanism to obtain consistent secret keys from the CSI measurements of all subcarriers, achieving a high security level and a fast key-generation rate. We implement KEEP using off-the-shelf 802.11n devices and evaluate its performance via extensive experiments. Both theoretical analysis and experimental results demonstrate that KEEP is safer and more effective than state-of-the-art approaches.
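As a toy illustration of the general CSI-to-bits idea such protocols build on (this is a generic mean-threshold quantizer with a guard band, not KEEP's validation-recombination mechanism; the function name, the guard parameter, and the Rayleigh-distributed sample data are all invented for the example):

```python
import numpy as np

def quantize_csi(csi, guard=0.1):
    """Turn one subcarrier's CSI magnitude samples into candidate key bits.

    Samples above mean*(1+guard) map to 1, samples below mean*(1-guard)
    map to 0, and samples inside the guard band are dropped -- a common
    hedge against bit mismatches between the two parties.
    """
    csi = np.asarray(csi, dtype=float)
    mean = csi.mean()
    hi, lo = mean * (1 + guard), mean * (1 - guard)
    bits, kept = [], []
    for i, v in enumerate(csi):
        if v > hi:
            bits.append(1); kept.append(i)
        elif v < lo:
            bits.append(0); kept.append(i)
    return bits, kept

# Each party quantizes its own measurements; only the kept-index lists
# are exchanged, and bits at indices retained by both sides are kept.
alice_bits, alice_idx = quantize_csi(np.random.rayleigh(1.0, 256))
```

The correlated-subcarrier problem the paper identifies arises when bits produced this way from neighboring subcarriers are simply concatenated into one key.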

89 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: A novel Energy-Efficient Cooperative Offloading Model (E2COM) is proposed for the energy-traffic tradeoff; it ensures fairness in the energy consumption of mobile devices, reduces repeated computation, and eliminates redundant Internet data traffic through cooperative execution and the sharing of computation results.
Abstract: This paper presents a quantitative study of the energy-traffic tradeoff problem from the perspective of an entire Wireless Local Area Network (WLAN). We propose a novel Energy-Efficient Cooperative Offloading Model (E2COM) for the energy-traffic tradeoff, which ensures fairness in the energy consumption of mobile devices, reduces repeated computation, and eliminates redundant Internet data traffic through cooperative execution and the sharing of computation results. We design an Online Task Scheduling Algorithm (OTS) based on a pricing mechanism and Lyapunov optimization to address the problem without predicting future information on task arrivals, transmission rates, and so on. OTS can achieve a desirable tradeoff between energy consumption and Internet data traffic by appropriately setting the tradeoff coefficient. Simulation results demonstrate that E2COM is more efficient than no offloading and cloud offloading for a variety of typical mobile devices, applications, and link qualities in a WLAN.

80 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: This paper proposes a scalability metric for SDN control planes and builds performance models for response time, which are used to evaluate the scalability of three typical structures: centralized, decentralized, and hierarchical architectures.
Abstract: With the increasing popularity of Software-Defined Networking (SDN), designing a scalable SDN control plane becomes a critical problem. An effective approach to improving scalability is to design a distributed architecture for the SDN control plane. However, how to evaluate the scalability of SDN control planes remains unexplored. In this paper, we propose a scalability metric for SDN control planes and study three typical control plane structures: centralized, decentralized, and hierarchical architectures. We build performance models for response time, based on which we evaluate the scalability of the three structures. Furthermore, the comparison between the different architectures is analyzed by mathematical methods. Numerical evaluations are also conducted to validate the conclusions drawn in this paper.
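The paper's response-time models are not reproduced here, but a generic M/M/1 queueing sketch conveys why distributing the control plane shortens response time (the arrival rate, service rate, and controller count below are made-up numbers, and real controller workloads are of course not M/M/1):

```python
import math

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return math.inf                      # the controller is overloaded
    return 1.0 / (service_rate - arrival_rate)

total_load = 9000.0   # flow-setup requests/s arriving at the control plane
mu = 10000.0          # requests/s a single controller can serve
k = 4                 # controllers in the decentralized design

centralized = mm1_response_time(total_load, mu)        # 1.00 ms
decentralized = mm1_response_time(total_load / k, mu)  # ~0.13 ms per controller
print(centralized, decentralized)
```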

75 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: A new end-to-end delay analysis for periodic flows whose transmissions are scheduled based on the Earliest Deadline First (EDF) policy is presented and a technique to reduce the pessimism in admission control by iteratively tightening the delay bounds for flows with short deadlines is proposed.
Abstract: Industry is adopting Wireless Sensor-Actuator Networks (WSANs) as the communication infrastructure for process control applications. To meet the stringent real-time performance requirements of control systems, there is a critical need for fast end-to-end delay analysis for real-time flows that can be used for online admission control. This paper presents a new end-to-end delay analysis for periodic flows whose transmissions are scheduled based on the Earliest Deadline First (EDF) policy. Our analysis comprises novel techniques to bound the communication delays caused by channel contention and transmission conflicts in a WSAN. Furthermore, we propose a technique to reduce the pessimism in admission control by iteratively tightening the delay bounds for flows with short deadlines. Experiments on a WSAN testbed and simulations demonstrate the effectiveness of our analysis for online admission control of real-time flows.

54 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: A new near-perfect hash table data structure is proposed that combines many small sparse perfect hash tables into a larger dense one while keeping the worst-case access time of O(1) and supporting fast update.
Abstract: In Named Data Networking, the complex structure of names and the huge name routing tables make wire-speed name lookup a challenging task. To overcome this challenge, we propose two techniques to significantly speed up the lookup process. First, we look up name prefixes in an order based on the distribution of prefix lengths in the forwarding table, which finds the longest match much faster than the linear search of the current prototype, CCNx. The search order can be dynamically adjusted as the forwarding table changes. Second, we propose a new near-perfect hash table data structure that combines many small sparse perfect hash tables into a larger dense one while keeping the worst-case access time of O(1) and supporting fast updates. The hash table also stores the signature of a key instead of the key itself, which further improves lookup speed and reduces memory use.
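A minimal sketch of the two techniques under obvious simplifications: plain Python dicts stand in for the near-perfect hash tables, a truncated SHA-1 digest stands in for the paper's key signatures (with a verification step, since short signatures can collide), and the probe order follows the frequency of each prefix length in the FIB, skipping any length that cannot beat the match already found:

```python
import hashlib
from collections import Counter

def sig(prefix):
    """Short signature stored in place of the full key (4 bytes here,
    purely for illustration)."""
    return hashlib.sha1(prefix.encode()).digest()[:4]

class Fib:
    def __init__(self, prefixes):
        self.tables = {}            # one hash table per prefix length
        for p in prefixes:
            k = len(p.strip('/').split('/'))
            self.tables.setdefault(k, {})[sig(p)] = p
        # probe lengths ordered by how many FIB entries have each length,
        # so the statistically likeliest lengths are tried first
        counts = Counter(len(p.strip('/').split('/')) for p in prefixes)
        self.order = [k for k, _ in counts.most_common()]

    def longest_match(self, name):
        comps = name.strip('/').split('/')
        best_len, best = 0, None
        for k in self.order:
            if k > len(comps) or k <= best_len:
                continue            # this length cannot beat the current match
            prefix = '/' + '/'.join(comps[:k])
            hit = self.tables[k].get(sig(prefix))
            if hit == prefix:       # verify: short signatures can collide
                best_len, best = k, hit
        return best

fib = Fib(['/com', '/com/example', '/com/example/video', '/org/acme'])
print(fib.longest_match('/com/example/video/seg1'))  # '/com/example/video'
```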

42 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: A bufferbloat mitigation algorithm is proposed: Multi-Path Transport Bufferbloat Mitigation (MPT-BM), which outperforms the current MPTCP implementation by increasing the application goodput quality and decreasingMPTCP's buffer delay, jitter and buffer space requirements.
Abstract: Today, most smartphones are equipped with two network interfaces: Mobile Broadband (MBB) and Wireless Local Area Network (WLAN). Multi-path transport protocols provide increased throughput or reliability by utilizing these interfaces simultaneously. However, multi-path transmission over networks with very different QoS characteristics is a challenge. In this paper, we study Multi-Path TCP (MPTCP) in heterogeneous networks, specifically MBB networks and WLAN. We first investigate the effect of bufferbloat in MBB on MPTCP performance. Then, we propose a bufferbloat mitigation algorithm: Multi-Path Transport Bufferbloat Mitigation (MPT-BM). Using our algorithm, we conduct experiments in real operational networks. The experimental results show that MPT-BM outperforms the current MPTCP implementation by increasing the application goodput quality and decreasing MPTCP's buffer delay, jitter, and buffer space requirements.

40 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: The proposed auction mechanism RSMOA represents the first truthful online mechanism that responds to incoming users' demands in a timely manner and makes dynamic resource provisioning and allocation decisions, while guaranteeing efficiency in both the provider's revenue and the system's social welfare.
Abstract: We study online cloud resource auctions where users can arrive at any time and bid for heterogeneous types of virtual machines (VMs) assembled and provisioned on the fly. The proposed auction mechanism RSMOA represents, to the authors' knowledge, the first truthful online mechanism that responds to incoming users' demands in a timely manner and makes dynamic resource provisioning and allocation decisions, while guaranteeing efficiency in both the provider's revenue and the system's social welfare. RSMOA consists of two components: (1) an online mechanism that computes resource allocation and users' payments based on a global, non-decreasing pricing curve, and guarantees truthfulness; (2) a judiciously designed pricing curve, which is derived from a threat-based strategy and guarantees a competitive ratio of O(ln(p)) in both system social welfare and the provider's revenue, as compared to the celebrated offline Vickrey-Clarke-Groves (VCG) auction. Here p is the ratio between the upper and lower bounds of users' marginal valuation of a type of resource. The efficacy of RSMOA is validated through extensive theoretical analysis and trace-driven simulation studies.
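The sketch below shows the general shape of such a posted-price mechanism rather than RSMOA itself: a non-decreasing exponential pricing curve of the threat-based form p(y) = L·(U/L)^(y/C), where a bid is accepted only if it beats the posted price over the requested interval and is charged the integral of the curve, so the payment never depends on the reported value (all constants are invented):

```python
import math

L_VAL, U_VAL = 1.0, 64.0   # assumed bounds on per-unit valuations (p = U/L)
CAP = 100.0                # capacity of one resource type

def marginal_price(allocated):
    """Non-decreasing pricing curve p(y) = L * (U/L)**(y/C)."""
    return L_VAL * (U_VAL / L_VAL) ** (allocated / CAP)

def handle_bid(allocated, units, per_unit_bid):
    """Accept iff the bid beats the posted price over the whole interval;
    charge the integral of the curve, which is independent of the bid."""
    if allocated + units > CAP or per_unit_bid < marginal_price(allocated + units):
        return allocated, 0.0                  # reject the bid
    k = math.log(U_VAL / L_VAL) / CAP
    pay = (L_VAL / k) * (math.exp(k * (allocated + units)) - math.exp(k * allocated))
    return allocated + units, pay

allocated, payment = handle_bid(0.0, 10.0, per_unit_bid=2.0)
print(allocated, round(payment, 2))   # 10.0 units sold for ~12.40
```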

35 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: This paper studies a joint problem of multiradio cooperative routing and relay assignment to maximize the minimum rate among a set of concurrent communication sessions and proposes a centralized algorithm and a distributed algorithm to solve the problem.
Abstract: Cooperative communication (CC) for wireless networks has attracted a lot of recent interest. It has been shown that CC has the potential to significantly increase the capacity of wireless networks, thanks to its ability to mitigate fading by exploiting spatial diversity. However, most work on CC is limited to single-radio wireless networks. To demonstrate the benefits of CC in multiradio multihop wireless networks, this paper studies a joint problem of multiradio cooperative routing and relay assignment to maximize the minimum rate among a set of concurrent communication sessions. We first model this problem as a mixed-integer programming (MIP) problem and prove it to be NP-hard. Then, we propose a centralized algorithm and a distributed algorithm to solve the problem. The centralized algorithm is designed within a branch-and-bound framework using a relaxation of the formulated MIP, and can find a globally (1+ε)-optimal solution. Our distributed algorithm includes two subalgorithms: a cooperative route selection subalgorithm and a fairness-aware route adjustment subalgorithm. Our simulation results demonstrate the effectiveness of the proposed algorithms and the significant rate gains that can be achieved by incorporating CC in multiradio multihop networks.

35 citations


Proceedings ArticleDOI
Huichen Dai1, Yi Wang1, Hao Wu1, Jianyuan Lu1, Bin Liu1 
26 May 2014
TL;DR: A Bloom filter-based method is proposed to continuously capture content popularity with efficient use of memory; a real trace-driven comparison also shows that the LFU policy achieves a higher hit rate than LRU with far fewer unnecessary cache replacements.
Abstract: NDN enables routers to cache received contents for future requests to reduce upstream traffic. To this end, various caching policies have been proposed, typically based on some notion of content popularity, e.g., LFU. But these policies simply assume the availability of content popularity information without elaborating how that information is obtained and maintained in routers. Towards line-speed and accurate online popularity monitoring on NDN routers, we propose a Bloom filter-based method to continuously capture content popularity with efficient use of memory. In this method, multiple Bloom filters are employed, each responsible for a particular range of popularity; content objects whose popularities fall into a Bloom filter's range are inserted into that Bloom filter. Meanwhile, a sliding-window monitoring scheme is proposed to implement more frequent and real-time updates of the popularities. Moreover, we put forward three optimization schemes to further speed up the monitoring operations. Using a real trace stored in off-chip memory as input and setting the monitoring time window to 30 min, this method achieves a monitoring speed of 20.92 million objects per second (M/s) with multiple threads. This speed is equivalent to 16.74 Gbps of throughput assuming an average content length of 100 bytes, while consuming only around 32 MB of memory. When simulating the environment on the line card using a synthetic trace generated in real time, this method even reaches a speed of 251.07 M/s (equivalent to 200.86 Gbps), because the trace is fetched from high-speed on-chip memory rather than the off-chip DRAMs. Furthermore, both theoretical and experimental analyses elucidate the very low relative error of this method. Finally, a real trace-driven comparison shows that the LFU policy achieves a higher hit rate than LRU with far fewer unnecessary cache replacements.
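A hedged sketch of the multi-filter idea (the sliding window and the paper's three optimizations are omitted, and the promotion rule below is one natural guess at how names move between popularity ranges, not the paper's exact scheme): filter i holds names whose access count has reached range i, and each access promotes a name by at most one range.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1 << 16, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)
    def _positions(self, item):
        d = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(d[4*i:4*i+4], 'big') % self.m
                for i in range(self.k)]
    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
    def __contains__(self, item):
        return all((self.bits[p // 8] >> (p % 8)) & 1
                   for p in self._positions(item))

class PopularityMonitor:
    """filters[i] holds names whose popularity reached range i."""
    def __init__(self, levels=8):
        self.filters = [BloomFilter() for _ in range(levels)]
    def access(self, name):
        level = 0
        while level < len(self.filters) and name in self.filters[level]:
            level += 1                         # already counted up to here
        if level < len(self.filters):
            self.filters[level].add(name)      # promote one range per access
        return level                           # current popularity estimate
```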

28 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: This paper devises an inter-domain embedding algorithm to handle online VN requests in polynomial time, and shows that this solution outperforms its counterparts and achieves 80%-90% of the benchmarks in an ideal scenario where the VNP has complete knowledge of all substrate information.
Abstract: Network virtualization provides a promising way to run multiple virtual networks (VNs) simultaneously on a shared infrastructure. It is critical to efficiently map VNs onto substrate resources, which is known as the VN embedding problem. Most existing studies restrict this problem to a single substrate domain, whereas the VN embedding process across multiple domains (i.e., inter-domain embedding) is more practical, because a single domain rarely controls an entire end-to-end path. Since infrastructure providers (InPs) are usually reluctant to expose their substrate information, inter-domain embedding is more sophisticated than the intra-domain case. In this paper, we develop an efficient solution to the inter-domain embedding problem. We start by extending the current business roles with a broker-like role, the virtual network provider (VNP), which makes centralized embedding decisions. Accordingly, a reasonable information-sharing scheme is proposed to provide the VNP with partial substrate information while keeping InPs' information confidential. Then we formulate the embedding problem as an integer program. By relaxing the integer constraints, we devise an inter-domain embedding algorithm that handles online VN requests in polynomial time. Simulation results show that our solution outperforms its counterparts and achieves 80%-90% of the benchmarks in an ideal scenario where the VNP has complete knowledge of all substrate information.

25 citations


Proceedings ArticleDOI
26 May 2014
TL;DR: This work proposes a rule multiplexing scheme, in which the same set of rules deployed on a node applies to the whole flow of a session passing through it, even when the flow is split towards different paths, with the objective of minimizing rule-space occupation for multiple unicast sessions under QoS constraints.
Abstract: Software-Defined Networking (SDN) is a promising network paradigm that separates the control plane and the data plane in the network. It has shown great advantages in simplifying network management, such that new functions can easily be supported without physical access to the network switches. However, Ternary Content Addressable Memory (TCAM), the critical hardware that stores rules for high-speed packet processing in SDN-enabled devices, can be supplied to each device only in very limited quantity because it is expensive and energy-consuming. To use TCAM resources efficiently, we propose a rule multiplexing scheme, in which the same set of rules deployed on a node applies to the whole flow of a session passing through it, even when the flow is split towards different paths. Based on this scheme, we study the rule placement problem with the objective of minimizing rule-space occupation for multiple unicast sessions under QoS constraints. We formulate the optimization problem jointly considering routing engineering and rule placement under both the existing and our rule multiplexing schemes. Finally, extensive simulations are conducted to show that our proposals significantly outperform existing solutions.

Proceedings ArticleDOI
26 May 2014
TL;DR: This paper focuses on an integrated approach that uses software-defined networking to mitigate disaster risks while cutting down investment and management costs; the approach achieves high reliability, fast recovery, and low control overhead.
Abstract: With the wide deployment of network facilities and the increasing requirements on network reliability, disruptive events like natural disasters, power outages, or malicious attacks have become a non-negligible threat to current communication networks. Such an event can simultaneously destroy all devices in a specific geographical area and affect many network-based applications for a long time. Hence, it is essential to build disaster-resilient networks for future highly survivable communication services. In this paper, we focus on an integrated approach that uses software-defined networking to mitigate disaster risks while cutting down investment and management costs. Our design consists of a subgraph-based proactive protection approach for fast rerouting at the network nodes and a splicing approach at the controller for effective post-disaster restoration. This systematic design is implemented in the OpenFlow framework through the Mininet emulator and the Nox controller. Numerical results show that our approach can achieve high reliability, fast recovery, and low control overhead.

Proceedings ArticleDOI
26 May 2014
TL;DR: A randomized online stack-centric scheduling algorithm (ROSA) is presented, and the lower bound of its competitive ratio is proved theoretically; trace-driven simulation demonstrates that ROSA is superior to conventional online scheduling algorithms in terms of cost saving.
Abstract: With the booming growth of the cloud computing industry, computational resources are readily and elastically available to customers. In order to attract customers with various demands, most Infrastructure-as-a-Service (IaaS) cloud service providers offer several pricing strategies, such as pay as you go, pay less per unit when you use more (the so-called volume discount), and pay even less when you reserve. The diverse pricing schemes among different IaaS service providers, or even within the same provider, form a complex economic landscape that nurtures the market of cloud brokers. By strategically scheduling multiple customers' resource requests, a cloud broker can take full advantage of the discounts offered by cloud service providers. In this paper, we focus on how a broker may help a group of customers to fully utilize the volume-discount pricing strategy offered by cloud service providers through cost-efficient online resource scheduling. We present a randomized online stack-centric scheduling algorithm (ROSA) and theoretically prove the lower bound of its competitive ratio. Our simulation shows that ROSA achieves a competitive ratio close to the theoretical lower bound under a special-case cost function. Trace-driven simulation using Google cluster data demonstrates that ROSA is superior to conventional online scheduling algorithms in terms of cost saving.
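A small worked example of why a broker profits from volume discounts (the tier widths and prices are hypothetical): aggregating several customers' demands pushes more units into the cheaper tiers than buying separately would.

```python
def volume_cost(units):
    """Hypothetical tiered 'pay less per unit when you use more' prices:
    first 100 units at $1.00, next 400 at $0.80, anything above at $0.60."""
    tiers = [(100, 1.00), (400, 0.80), (float('inf'), 0.60)]
    cost, remaining = 0.0, units
    for width, price in tiers:
        take = min(remaining, width)
        cost += take * price
        remaining -= take
        if remaining <= 0:
            break
    return cost

requests = [60, 90, 150]                   # three customers' demands
separate = sum(volume_cost(r) for r in requests)
aggregated = volume_cost(sum(requests))    # broker buys in one batch
print(separate, aggregated)                # 290.0 vs 260.0
```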

Proceedings ArticleDOI
Wei Zhang1, Yaping Lin1, Sheng Xiao1, Qin Liu1, Ting Zhou1 
26 May 2014
TL;DR: This paper defines a distributed search model and proposes two schemes, which are further extended with Shamir's secret sharing to achieve better availability and robustness; experiments confirm the efficacy and efficiency of these schemes.
Abstract: Cloud computing provides abundant benefits, including easy access, decreased costs, and flexible resource management. For privacy reasons, sensitive data have to be encrypted before outsourcing, which makes traditional data utilization based on plaintext keyword search obsolete. Therefore, developing a secure search service over encrypted cloud data is of paramount importance. Several studies have addressed this problem. However, all of these schemes are based on a single-cloud model, which is exposed to single points of failure, loss and corruption of data, loss of availability, and loss of privacy. In this paper, we explore the problem of secure distributed keyword search in a multi-cloud paradigm. We first define a distributed search model. Based on this model, we propose two schemes. In scheme_I, we propose to cross-store all encrypted file slices, keywords, and keys. In scheme_II, we systematically construct a keyword distributing strategy and a file distributing strategy. Further, we extend both schemes with Shamir's secret sharing to achieve better availability and robustness. Extensive experiments on real-world datasets confirm the efficacy and efficiency of our schemes.
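For reference, a textbook (t, n) Shamir secret-sharing sketch of the kind the schemes are extended with; this is the standard construction over a prime field, not the paper's full protocol, and `random` stands in for a cryptographic RNG:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def split(secret, n, t):
    """Split 'secret' into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 suffice
```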

Proceedings ArticleDOI
26 May 2014
TL;DR: The basic idea is to configure several routing policies in advance and then dynamically rebalance traffic by applying different preconfigured routing policies in reaction to traffic fluctuations; this achieves near-optimal load balancing, obtaining at least about 96% of the throughput of optimal routing for each individual traffic scenario with very low overhead.
Abstract: Classical TE methods calculate the optimal routing based on a known traffic matrix. However, they are unable to handle unexpected traffic changes. Thus, various methods have been proposed in recent years, such as online dynamic TE and robust static-routing TE. However, online dynamic TE requires additional overhead on routers for information dissemination and suffers from transient disruptions during routing protocol convergence, while using one robust static routing to accommodate a wide range of traffic scenarios cannot ensure near-optimal performance for each individual traffic scenario. This paper presents an approach called dynamic hybrid routing (DHR) to achieve load balancing for a wide range of traffic scenarios. Our basic idea is to configure several routing policies in advance and then dynamically rebalance traffic by applying different preconfigured routing policies in reaction to traffic fluctuations. Each routing policy is composed of a common basic destination-based routing and a few complementary explicit-routing forwarding entries for a small set of selected ingress/egress node pairs. We design a method to find a near-optimal dynamic hybrid routing configuration. Extensive evaluation demonstrates the effectiveness of DHR. We show that DHR achieves near-optimal load balancing and thus obtains at least about 96% of the throughput of optimal routing for each individual traffic scenario, with very low overhead.
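The reaction step then reduces to picking, among the preconfigured policies, the one that minimizes the maximum link utilization for the currently observed demands. A simplified sketch, with each policy modeled as a fixed links-by-flows traffic-split matrix (an assumption of this illustration, not the paper's representation):

```python
import numpy as np

def max_utilization(policy, demands, capacity):
    """policy: (links x flows) matrix of traffic-split fractions."""
    link_load = policy @ demands
    return float((link_load / capacity).max())

def pick_policy(policies, demands, capacity):
    """Switch to the preconfigured policy with the smallest maximum
    link utilization for the observed traffic demands."""
    utils = [max_utilization(p, demands, capacity) for p in policies]
    best = int(np.argmin(utils))
    return best, utils[best]

# two toy policies over 2 links and 2 flows, capacity 10 on each link
policies = [np.array([[1.0, 0.0], [0.0, 1.0]]),   # each flow on its own link
            np.array([[0.5, 0.5], [0.5, 0.5]])]   # both flows split evenly
print(pick_policy(policies, demands=np.array([8.0, 2.0]), capacity=10.0))
# -> (1, 0.5): splitting evenly halves the worst link utilization
```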

Proceedings ArticleDOI
26 May 2014
TL;DR: PAINT (Partial In-Network Transcoding) scheme to reduce the operational cost of delivering adaptive video streaming over ICN is proposed and results indicate PAINT can achieve significant cost savings.
Abstract: Information-centric networking (ICN) has emerged as a promising architecture to efficiently distribute content over the future Internet. However, ICN proposals may still not be cost-efficient enough for adaptive video streaming. The problem is that each ICN node caches duplicated copies of the same content for each bitrate version in its limited storage space. Thus the cache hit ratio drops, and the bandwidth cost of serving the cache-missed requests increases. This paper proposes the PAINT (Partial In-Network Transcoding) scheme to reduce the operational cost of delivering adaptive video streaming over ICN. Specifically, we consider both in-network caching and transcoding services at each ICN node, where the storage and transcoding resources can be dynamically scheduled. Then we formulate an optimization problem to balance the trade-off between the transcoding and bandwidth costs. Next, we analytically derive the optimal strategy and quantify the cost savings compared with existing schemes. Finally, we verify our solution by intensive numerical evaluations. The results indicate that PAINT can achieve significant cost savings (e.g., up to 50% in typical scenarios). Besides, we find that the optimal strategy and the cost savings are affected by the cache capacity, the unit price ratio, the hop distance to the origin server, and the Zipf parameter of users' request patterns.
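At the level of a single request, the node's choice reduces to a cost comparison among serving from cache, transcoding down from a cached higher-bitrate copy, and fetching from the origin. A sketch with hypothetical unit prices and a linear per-hop bandwidth cost (both assumptions of this illustration):

```python
def serving_cost(version_cached, higher_cached, hops_to_origin,
                 transcode_price=1.0, bandwidth_price=0.4):
    """Cheapest way to serve one request for a given bitrate version.
    Prices and the linear hop model are illustrative assumptions."""
    options = {}
    if version_cached:
        options['cache hit'] = 0.0
    if higher_cached:
        options['transcode down'] = transcode_price
    options['fetch from origin'] = bandwidth_price * hops_to_origin
    choice = min(options, key=options.get)
    return choice, options[choice]

print(serving_cost(False, True, hops_to_origin=4))  # ('transcode down', 1.0)
```

With these numbers, transcoding wins whenever the origin is more than transcode_price / bandwidth_price = 2.5 hops away; balancing this kind of trade-off against cache capacity is what the paper's optimization formalizes.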

Proceedings ArticleDOI
Xiaoying Zhang1, Lei Dong1, Hui Peng1, Hong Chen1, Deying Li1, Cuiping Li1 
26 May 2014
TL;DR: This paper proposes an efficient and secure range query protocol for two-tiered wireless sensor networks that introduces master nodes; it not only prevents adversaries from gaining sensitive information about both the queries issued by users and the data collected by sensor nodes, but also allows the sink to verify whether results are valid.
Abstract: Wireless sensor networks are an important part of the Internet of Things. Preserving privacy and integrity in wireless sensor networks is extremely urgent and challenging. To address this problem, we propose in this paper an efficient and secure range query protocol for two-tiered wireless sensor networks that introduces master nodes. Our proposal not only prevents adversaries from gaining sensitive information about both the queries issued by users and the data collected by sensor nodes, but also allows the sink to verify whether results are valid. It offers confidentiality of queries and data by constructing a special code, provides integrity verification through the correlation among data, and also enables efficient query processing. Finally, theoretical analysis and simulation results confirm the security and efficiency of our proposal.

Proceedings ArticleDOI
26 May 2014
TL;DR: A location-based crowdsensing framework, including online sensing and offline crowdsourcing, is proposed to retrieve the number and locations of available RSU resources; RSU resources and ad hoc solutions are then combined in a routing-switch mechanism that guarantees the quality of data dissemination under various network connectivity and deployment configurations.
Abstract: WiFi access points, mesh routers, wireless sensors, and any other wireless routers along the road can serve as roadside units (RSUs), and these RSUs can provide infrastructural support for wireless access and data dissemination in cyber-transportation systems. We present a hybrid routing scheme in vehicular networks for inter-vehicle, vehicle-to-roadside, and inter-roadside data dissemination in urban hybrid networks. First, a location-based crowdsensing framework, including online sensing and offline crowdsourcing, is proposed to retrieve the number and locations of available RSU resources. Then, we combine RSU resources and ad hoc solutions to design a routing-switch mechanism, which can guarantee the quality of data dissemination under various network connectivity and deployment configurations. The performance of our hybrid data dissemination scheme is evaluated using both simulation and real testbed experiments.

Proceedings ArticleDOI
26 May 2014
TL;DR: This paper proposes two algorithms to address the problem of computing the multipath multicast routes for streaming videos in Software-Defined Networks (SDNs), which adopt less expensive switches and reduce administrative overhead for lower CAPEX/OPEX.
Abstract: IP multicast dictates high-end routers and incurs high administrative overhead, which prevents it from being deployed in many video streaming scenarios. In this paper, we study the problem of computing multipath multicast routes for streaming videos in Software-Defined Networks (SDNs), which adopt less expensive switches and reduce administrative overhead for lower CAPEX/OPEX. The objectives of the considered problem are robustness, load balance, SDN compatibility, and adaptiveness. We formulate this routing problem as a mathematical optimization problem, and propose two algorithms to address it. We implement the proposed algorithms on a popular OpenFlow controller to demonstrate their practicality, and we conduct extensive experiments to evaluate them. The experiment results clearly show the merits of our algorithms over IP multicast, e.g., we observe: (i) frame loss rate reduction between 19% and 95%, (ii) video quality improvement between 4 dB and 15 dB, (iii) sink throughput increase between 25% and 66%, and (iv) maximal link utilization reduction between 15% and 50%. We also show the tradeoff between optimality and run time of the two proposed algorithms: one is more suitable for smaller and more static networks, and the other for larger and more dynamic networks.

Proceedings ArticleDOI
26 May 2014
TL;DR: Numerical results show that the data offloading fraction is closely affected by the data volume of the flows and by vehicular link quality, and that best-effort and background traffic are given higher offloading priority than video streaming traffic.
Abstract: Offloading part of the cellular traffic through other kinds of networks, such as Wi-Fi hotspots and femtocells, represents an interesting solution for operators to cope with the rapid increase in user traffic demand. In this paper, we propose to use Vehicular Ad Hoc Networks (VANETs) for the same purpose. We present an analytical study, based on an optimization problem formulation, to evaluate the potential of VANETs to carry part of the cellular traffic. The offloading decision considers several constraints related to vehicle-to-infrastructure link availability, channel and medium contention, vehicle-to-vehicle link capacity and quality, data flow volume, and the link connectivity duration between the vehicle and the roadside unit. Moreover, the originality of this work is the consideration of a flow's service class in the offloading decision. Numerical results show that the data offloading fraction is closely affected by the data volume of the flows and by vehicular link quality. The results also show that best-effort and background traffic are given higher offloading priority than video streaming traffic.

Proceedings ArticleDOI
26 May 2014
TL;DR: This paper proposes a scalable framework where a user can use his attribute values and a search query to locally derive a search capability, and a file can be retrieved only when its keywords match the query and the user's attribute values pass the policy check.
Abstract: Cloud computing has become an increasingly popular service for data storage and processing. To keep users' data on the cloud from leaking to unauthorized users, possibly including the cloud service providers themselves, the data must be stored in an encrypted form. In the meantime, for data intended for sharing, efficient access control must be provided. A common operation on the data is keyword search. Currently, search over encrypted data is performed at the cloud servers, while access control for the in-cloud data is usually enforced by users. Separating the two types of operations can lead to reduced efficiency and compromised privacy for users with a given set of access privileges searching over encrypted cloud data. In this paper, we study the problem of keyword search with access control over encrypted data in cloud computing. We first propose a scalable framework where a user can use his attribute values and a search query to locally derive a search capability, and a file can be retrieved only when its keywords match the query and the user's attribute values pass the policy check. Using this framework, we propose a novel scheme called KSAC. KSAC utilizes a recent cryptographic primitive called HPE to enforce fine-grained access control, perform multi-field query search, and support the derivation of the search capability. Intensive evaluations on real-world datasets are conducted to validate the applicability of the proposed scheme.

Proceedings ArticleDOI
26 May 2014
TL;DR: An up-and-down routing protocol is proposed for mobile opportunistic social networks, which exhibit a nested core-periphery structure, and space-efficient Bloom-filter-based hints are introduced to provide guidance for downloading messages from the network core to the destination.
Abstract: In this paper, an up-and-down routing protocol is proposed for mobile opportunistic social networks, which exhibit a nested core-periphery structure. In such a network, a few active nodes with large weighted degrees form the network core, while the network peripheries are composed of many inactive nodes with small weighted degrees. By nested, we mean that the core-periphery structure is preserved when periphery nodes are removed. Based on this structure, a message can be uploaded from the source to the network core by iteratively forwarding it to a relay that has a higher position in the nested network hierarchy. Then, space-efficient Bloom-filter-based hints are introduced to provide guidance for downloading messages from the network core to the destination. By utilizing the network structure and the space-efficient routing hints, our proposed approach achieves a subtle balance between data delivery delay, ratio, and cost. Finally, through extensive simulations, we show that the up-and-down routing scheme achieves competitive performance on data delivery delay and ratio, with a relatively small cost for maintaining prior information and a relatively low forwarding cost.
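In sketch form, the two forwarding rules look as follows (a simplified illustration: the Node and Message types are invented for the example, and a plain set stands in for the space-efficient Bloom-filter hint):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    weighted_degree: float
    reachable_hint: set = field(default_factory=set)  # stand-in for a Bloom filter

@dataclass
class Message:
    destination: str

def forward_up(carrier: Node, contact: Node) -> bool:
    """Upload phase: relay to any encountered node that sits higher in
    the nested hierarchy, i.e., has a larger weighted degree."""
    return contact.weighted_degree > carrier.weighted_degree

def forward_down(contact: Node, msg: Message) -> bool:
    """Download phase: hand the message over only if the contact's hint
    claims it can reach the destination."""
    return msg.destination in contact.reachable_hint

core = Node(9.0, {'bob'})
periphery = Node(1.0)
print(forward_up(periphery, core), forward_down(core, Message('bob')))  # True True
```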

Proceedings ArticleDOI
26 May 2014
TL;DR: A new MAC protocol Mizar is devised from a cross-layer design perspective that concentrates on improving the spatio-temporal efficiency in disseminating data from an RSU to the moving vehicles and seeks to catch the concurrent transmission opportunities generated in the process of the cooperative communication.
Abstract: In infotainment applications over Vehicular Ad Hoc Networks (VANETs), the Roadside Units (RSUs) often play the role of data switching centers. However, the RSUs tend to be bottlenecks in data dissemination because of the limited channel bandwidth and heavy traffic loads. In order to improve the throughput of the RSUs, we devise a new MAC protocol, Mizar, from a cross-layer design perspective. Mizar concentrates on improving the spatio-temporal efficiency of disseminating data from an RSU to the moving vehicles. Leveraging the space diversity of wireless signals, Mizar increases channel utilization through cooperative transmission. Moreover, motivated by the location distribution characteristics of the vehicles, Mizar seeks to catch the concurrent-transmission opportunities generated in the process of cooperative communication. The experimental results show that Mizar can significantly increase throughput and decrease transmission delay in comparison with the fixed-data-rate scheme of IEEE 802.11p, a variable-data-rate scheme, and a plain cooperative transmission scheme.

Proceedings ArticleDOI
26 May 2014
TL;DR: The idea behind Necklace is to minimize data migration among servers and alleviate collisions in the insertion operation; it identifies the shortest path by indexing an auxiliary table, without retrieving the actual storage contents or carrying out tentative "kick-out" operations.
Abstract: With the rapid growth of data, query performance is an important concern in cloud storage applications. To reduce the query response time, cuckoo hashing via d hash functions has been adopted to achieve O(1) query efficiency. However, in practice cuckoo hashing consumes a large amount of system resources, since an item insertion may suffer from frequent "kick-out" operations or even endless loops. In order to address this problem, we propose an efficient loop-oblivious scheme, called Necklace, in the cloud. The idea behind Necklace is to minimize data migration among servers and alleviate collisions in the insertion operation. We identify the shortest path by indexing an auxiliary table, without retrieving the actual storage contents or carrying out tentative "kick-out" operations. We have implemented Necklace in a real cloud system and examined its performance using a real-world trace. Extensive experimental results demonstrate the efficiency and efficacy of Necklace.
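Necklace's auxiliary-table layout is not reproduced here, but the underlying idea of planning the shortest kick-out path before moving anything can be sketched with a standard 2-choice cuckoo table: a BFS over slots finds the nearest empty slot first, and only then are residents shifted, so insertion performs no tentative kick-outs and cannot loop.

```python
import hashlib
from collections import deque

class CuckooTable:
    def __init__(self, size=1024):
        self.size = size
        self.slots = [None] * size

    def _choices(self, key):
        d = hashlib.sha256(key.encode()).digest()
        return (int.from_bytes(d[:8], 'big') % self.size,
                int.from_bytes(d[8:16], 'big') % self.size)

    def lookup(self, key):
        return any(self.slots[i] == key for i in self._choices(key))

    def insert(self, key):
        parent, queue, empty = {}, deque(), None
        for s in self._choices(key):
            if s not in parent:
                parent[s] = None
                queue.append(s)
        while queue:                        # BFS for the nearest empty slot
            s = queue.popleft()
            if self.slots[s] is None:
                empty = s
                break
            for a in self._choices(self.slots[s]):
                if a != s and a not in parent:
                    parent[a] = s           # resident of s could move to a
                    queue.append(a)
        if empty is None:
            raise RuntimeError('no eviction path: resize/rehash needed')
        path = []                           # empty slot back to key's own slot
        while empty is not None:
            path.append(empty)
            empty = parent[empty]
        for dst, src in zip(path, path[1:]):
            self.slots[dst] = self.slots[src]   # shift residents one hop each
        self.slots[path[-1]] = key          # the key's slot is now free

table = CuckooTable()
for key in ('alpha', 'beta', 'gamma', 'delta'):
    table.insert(key)
print(all(table.lookup(k) for k in ('alpha', 'beta', 'gamma', 'delta')))  # True
```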

Proceedings ArticleDOI
26 May 2014
TL;DR: This paper proposes a multi-phase adaptive sensing algorithm with belief propagation protocol (ASBP), which can provide high data quality and reduce energy consumption by turning on only a small number of nodes in the network.
Abstract: Energy-efficient sensor selection for data quality and load balancing in wireless sensor networks

Proceedings ArticleDOI
26 May 2014
TL;DR: It is shown that, regardless of the subset of frames selected for transmission, any optimal schedule has an equivalent canonical form that is a subsequence of a unique universal sequence containing all frames; this leads to separable but jointly optimal frame selection and scheduling algorithms with quadratic computational complexity in the number of frames.
Abstract: We present a jointly optimal selection and scheduling scheme for the lossy transmission of frames governed by a dependency relation and a delay constraint over a link with limited capacity. A main application for this is scalable video streaming. Our objective is to select a subset of frames and decide their transmission schedule such that the overall video quality at the receiver is maximized. The problem is solved for two of the most common classes of dependency structures for video encoding, which include as a special case the popular hierarchical dyadic structure. We formally characterize the structural properties of an optimal transmission schedule in terms of frame dependency. It is shown that regardless of the subset of frames selected for transmission, any optimal schedule has an equivalent canonical form that is a subsequence of a unique universal sequence containing all frames. The canonical form can be computed efficiently through the construction of a dependency tree. This leads to separable but jointly optimal frame selection and scheduling algorithms that have quadratic computational complexity in the number of frames. Simulation with video traces demonstrates that the optimal scheme can substantially outperform existing suboptimal alternatives.

Proceedings ArticleDOI
26 May 2014
TL;DR: This paper presents RNC, a novel distributed network coordinate system based on Robust Principal Component Analysis, which uses a few local distance measurements to calculate high-precision coordinates without a convergence process; experiments indicate that RNC outperforms the state-of-the-art NCS schemes.
Abstract: Network Coordinate Systems (NCS) have drawn much attention over the past years thanks to the increasing number of large-scale distributed systems that require a distance prediction service for each pair of network hosts. The existing schemes suffer seriously from either low prediction precision or unsatisfactory convergence speed. In this paper, we present RNC, a novel distributed network coordinate system based on Robust Principal Component Analysis, which uses a few local distance measurements to calculate high-precision coordinates without a convergence process. To guarantee the non-negativity of predicted distances, we propose Robust Nonnegative Principal Component Analysis (RUN-PACE), which involves only convex optimization and consequently has low computational complexity. Our experimental results indicate that RNC outperforms the state-of-the-art NCS schemes.

Proceedings ArticleDOI
26 May 2014
TL;DR: A downlink joint resource allocation technique with Adaptive Modulation and Coding (AMC) is proposed for LTE-based femtocell systems, namely AMC-QRAP; it outperforms different state-of-the-art methods on several evaluation metrics.
Abstract: Recently, LTE-based femtocell systems have received significant attention as a promising solution offering high-speed services, enhanced indoor coverage, and increased system capacity. Intelligently allocating resources in a multi-user OFDMA-based network is the principal aim towards mitigating interference and enhancing power and spectral efficiency. In this paper, we propose a downlink joint resource allocation technique with Adaptive Modulation and Coding (AMC) for such a system, namely AMC-QRAP. The core of the proposal is adjusting the transmission link to the channel status and user demand through power control and suitable selection of the modulation/coding scheme. A clustered network is adopted and user differentiation is considered, providing Quality of Service (QoS) in the network. Our resource allocation model is solved as an optimization problem using linear programming. We show through extensive simulations that our method outperforms different state-of-the-art methods on several evaluation metrics.

Proceedings ArticleDOI
26 May 2014
TL;DR: This paper investigates how to employ handheld readers to improve the time efficiency of tag identification and to provide the flexibility to scan tags for different purposes, and proposes two novel tag scanning protocols that progressively add new techniques on top of one another to improve time efficiency.
Abstract: Tag identification is the most fundamental problem in Radio Frequency Identification (RFID) systems. Time efficiency is the top quality-of-service (QoS) metric in RFID tag identification. Traditional tag scanning approaches suffer from low time efficiency because they need to transmit tag IDs, which are usually very long (e.g., 96 bits). In this paper, we investigate how to employ handheld readers to improve the time efficiency of tag identification and to provide the flexibility to scan tags for different purposes. A fast and flexible tag scanning mechanism called LOCK is proposed, which combines the content of a tag's response with the index of the slot in which the tag replies. In LOCK, tags transmit only short responses instead of tag IDs. Based on LOCK, we propose two novel tag scanning protocols that progressively add new techniques on top of one another to improve time efficiency. Compared to the state-of-the-art solution in the literature, our best protocol reduces scanning time by up to 53 percent.
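A rough sketch of the short-response idea (a guess at one natural realization, not LOCK's actual frame structure): each present tag replies in a pseudo-random slot with a short hash of its ID instead of the full 96-bit ID, and the reader, knowing which tags it expects, matches the (slot, response) pairs; rare collisions would be resolved in a follow-up round with a fresh seed.

```python
import hashlib

def h(tag_id, seed, mod):
    d = hashlib.sha256(f'{tag_id}:{seed}'.encode()).digest()
    return int.from_bytes(d[:8], 'big') % mod

def scan(expected_ids, present_ids, frame=128, resp_bits=8, seed=42):
    """Each present tag replies in slot h(id, seed) with a short
    resp_bits-bit response h(id, seed+1) instead of its full ID."""
    slots = {}
    for t in present_ids:                    # what the reader hears on air
        slots.setdefault(h(t, seed, frame), set()).add(
            h(t, seed + 1, 2 ** resp_bits))
    # the reader checks each expected tag's (slot, response) pair
    return [t for t in expected_ids
            if h(t, seed + 1, 2 ** resp_bits) in slots.get(h(t, seed, frame), set())]

tags = ['TAG%04d' % i for i in range(1000)]
print(len(scan(tags, set(tags[:100]))))      # ~100; the occasional false
# positive comes from a (slot, response) collision with a present tag
```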

Proceedings ArticleDOI
26 May 2014
TL;DR: This paper addresses the shared relay assignment (SRA) problem for M21 traffic and formulates two new optimization problems: one is to maximize the minimum throughput among all the sources (hereafter called M21-SRA-MMT), and the other is to maximize the total throughput over all the sources while maintaining some degree of fairness (hereafter called M21-SRA-MTT).
Abstract: Relay assignment significantly affects the performance of cooperative communications. Previous studies in this area have mostly focused on assigning a dedicated relay to each source-destination pair for one-to-one (121) traffic. On the other hand, many-to-one (M21) traffic, which is also common in many situations (for example, several users associating with one access point in a wireless access network such as a WLAN), has not been well studied. This paper addresses the shared relay assignment (SRA) problem for M21 traffic. We formulate two new optimization problems: one is to maximize the minimum throughput among all the sources (hereafter called M21-SRA-MMT), and the other is to maximize the total throughput over all the sources while maintaining some degree of fairness (hereafter called M21-SRA-MTT). As both of these problems are NP-hard, we propose two approximation algorithms, with performance factors of 5.828 and 3, respectively, based on a rounding mechanism. Extensive simulation results show that our algorithm for M21-SRA-MMT significantly improves the minimum throughput compared with existing algorithms, while our algorithm for M21-SRA-MTT achieves close-to-optimal performance.