
Showing papers in "IEEE/ACM Transactions on Networking" in 2014


Journal ArticleDOI
TL;DR: The design, implementation, and evaluation of OSA, a novel Optical Switching Architecture for DCNs that dynamically changes its topology and link capacities, are presented, achieving unprecedented flexibility to adapt to dynamic traffic patterns.
Abstract: A detailed examination of evolving traffic characteristics, operator requirements, and network technology trends suggests a move away from nonblocking interconnects in data center networks (DCNs). As a result, recent efforts have advocated oversubscribed networks with the capability to adapt to traffic requirements on-demand. In this paper, we present the design, implementation, and evaluation of OSA, a novel Optical Switching Architecture for DCNs. Leveraging runtime reconfigurable optical devices, OSA dynamically changes its topology and link capacities, thereby achieving unprecedented flexibility to adapt to dynamic traffic patterns. Extensive analytical simulations using both real and synthetic traffic patterns demonstrate that OSA can deliver high bisection bandwidth (60%-100% of the nonblocking architecture). Implementation and evaluation of a small-scale functional prototype further demonstrate the feasibility of OSA.

332 citations


Journal ArticleDOI
Abstract: The success of LTE heterogeneous networks (HetNets) with macrocells and picocells critically depends on efficient spectrum sharing between high-power macros and low-power picos. Two important challenges in this context are: 1) determining the amount of radio resources that macrocells should offer to picocells, and 2) determining the association rules that decide which user equipments (UEs) should associate with picos. In this paper, we develop a novel algorithm to solve these two coupled problems in a joint manner. Our algorithm has a provable guarantee, and furthermore, it accounts for network topology, traffic load, and the macro-pico interference map. Our solution is standards-compliant and can be implemented using the notions of Almost Blank Subframes (ABS) and Cell Selection Bias (CSB) proposed by LTE standards. We also show extensive evaluations using an RF plan from a real network and discuss a self-optimized networking (SON)-based enhanced inter-cell interference coordination (eICIC) implementation.

315 citations


Journal ArticleDOI
TL;DR: This paper proposes RAN-aware reactive and proactive caching policies that utilize User Preference Profiles (UPPs) of active users in a cell, along with video-aware backhaul and wireless channel scheduling techniques that maximize the number of concurrent video sessions the end-to-end network can support while satisfying their initial delay requirements and minimizing stalling.
Abstract: In this paper, we introduce distributed caching of videos at the base stations of the Radio Access Network (RAN) to significantly improve the video capacity and user experience of mobile networks. To ensure effectiveness of the massively distributed but relatively small-sized RAN caches, unlike Internet content delivery networks (CDNs) that can store millions of videos in relatively few large-sized caches, we propose RAN-aware reactive and proactive caching policies that utilize User Preference Profiles (UPPs) of active users in a cell. Furthermore, we propose video-aware backhaul and wireless channel scheduling techniques that, in conjunction with edge caching, maximize the number of concurrent video sessions that can be supported by the end-to-end network while satisfying their initial delay requirements and minimizing stalling. To evaluate our proposed techniques, we developed a statistical simulation framework using MATLAB and performed extensive simulations under various cache sizes, video popularity and UPP distributions, user dynamics, and wireless channel conditions. Our simulation results show that RAN caches using UPP-based caching policies, together with video-aware backhaul scheduling, can improve capacity by 300% compared to having no RAN caches, and by more than 50% compared to RAN caches using conventional caching policies. The results also demonstrate that using UPP-based RAN caches can significantly improve the probability that video requests experience low initial delays. In networks where the wireless channel bandwidth may be constrained, application of our video-aware wireless channel scheduler results in significantly (up to 250%) higher video capacity with very low stalling probability.

272 citations


Journal ArticleDOI
TL;DR: A principled understanding of bit-rate adaptation is presented and a suite of techniques that can systematically guide the tradeoffs between stability, fairness, and efficiency are developed, which lead to a general framework for robust video adaptation.
Abstract: Modern video players rely on bit-rate adaptation in order to respond to changing network conditions. Past measurement studies have identified issues with today's commercial players, when multiple bit-rate-adaptive players share a bottleneck link, with respect to three metrics: fairness, efficiency, and stability. Unfortunately, our current understanding of why these effects occur and how they can be mitigated is quite limited. In this paper, we present a principled understanding of bit-rate adaptation and analyze several commercial players through the lens of an abstract player model consisting of three main components: bandwidth estimation, bit-rate selection, and chunk scheduling. Using this framework, we identify the root causes of several undesirable interactions that arise as a consequence of overlaying the video bit-rate adaptation over HTTP. Building on these insights, we develop a suite of techniques that can systematically guide the tradeoffs between stability, fairness, and efficiency and thus lead to a general framework for robust video adaptation. We pick one concrete instance from this design space and show that it significantly outperforms today's commercial players on all three key metrics across a range of experimental scenarios.

269 citations
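
The abstract player model above decomposes a player into bandwidth estimation, bit-rate selection, and chunk scheduling. The following Python sketch illustrates that decomposition; the harmonic-mean estimator, the 0.8 safety factor, and the buffer target are illustrative assumptions, not the authors' concrete design.

```python
# Sketch of the three-component player model (all constants are
# illustrative assumptions, not the paper's design).

BITRATES_KBPS = [350, 600, 1000, 2000, 3000]  # available encodings


def estimate_bandwidth(samples_kbps):
    """Harmonic mean of recent throughput samples (robust to outliers)."""
    return len(samples_kbps) / sum(1.0 / s for s in samples_kbps)


def select_bitrate(est_kbps, safety=0.8):
    """Highest bitrate below a conservative share of the estimate."""
    feasible = [b for b in BITRATES_KBPS if b <= safety * est_kbps]
    return feasible[-1] if feasible else BITRATES_KBPS[0]


def should_fetch(buffer_s, target_s=20.0):
    """Chunk scheduling: fetch only while the buffer is below target.

    The resulting periodic ON-OFF fetch pattern is one of the sources
    of undesirable player interactions that the paper analyzes.
    """
    return buffer_s < target_s


print(select_bitrate(estimate_bandwidth([1800, 2400, 2100])))  # -> 1000
```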


Journal ArticleDOI
TL;DR: Analyzing the censorship episodes in Egypt and Libya, the authors detected what they believe were Libya's attempts to test firewall-based blocking before it executed more aggressive BGP-based disconnection.
Abstract: In the first months of 2011, Internet communications were disrupted in several North African countries in response to civilian protests and threats of civil war. In this paper, we analyze episodes of these disruptions in two countries: Egypt and Libya. Our analysis relies on multiple sources of large-scale data already available to academic researchers: BGP interdomain routing control plane data, unsolicited data plane traffic to unassigned address space, active macroscopic traceroute measurements, RIR delegation files, and MaxMind's geolocation database. We used the latter two data sets to determine which IP address ranges were allocated to entities within each country, and then mapped these IP addresses of interest to BGP-announced address ranges (prefixes) and origin autonomous systems (ASs) using publicly available BGP data repositories in the US and Europe. We then analyzed observable activity related to these sets of prefixes and ASs throughout the censorship episodes. Using both control plane and data plane data sets in combination allowed us to narrow down which forms of Internet access disruption were implemented in a given region over time. Among other insights, we detected what we believe were Libya's attempts to test firewall-based blocking before they executed more aggressive BGP-based disconnection. Our methodology could be used, and automated, to detect outages or similar macroscopically disruptive events in other geographic or topological regions.

150 citations


Journal ArticleDOI
TL;DR: In this paper, a unifying optimization framework for power allocation in both active and passive localization networks is established, where the functional properties of the localization accuracy metric are determined and the power allocation problems are transformed into second-order cone programs (SOCPs).
Abstract: Reliable and accurate localization of mobile objects is essential for many applications in wireless networks. In range-based localization, the position of the object can be inferred using the distance measurements from wireless signals exchanged with active objects or reflected by passive ones. Power allocation for ranging signals is important since it affects not only network lifetime and throughput but also localization accuracy. In this paper, we establish a unifying optimization framework for power allocation in both active and passive localization networks. In particular, we first determine the functional properties of the localization accuracy metric, which enable us to transform the power allocation problems into second-order cone programs (SOCPs). We then propose the robust counterparts of the problems in the presence of parameter uncertainty and develop asymptotically optimal and efficient near-optimal SOCP-based algorithms. Our simulation results validate the efficiency and robustness of the proposed algorithms.

129 citations
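
To illustrate the kind of program the paper's power allocation problems are transformed into, here is a toy robust SOCP written with the cvxpy modeling library (the library choice is mine): total power is minimized subject to per-target constraints of the form $c^Tp - \|Ep\|_2 \geq d$. The gains c, requirements d, and uncertainty matrices E are random placeholders, not the paper's localization accuracy functionals.

```python
# Toy robust SOCP in the spirit of the paper's formulation; all data
# are illustrative placeholders.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, K = 5, 3                      # 5 anchors, 3 localization targets
p = cp.Variable(n, nonneg=True)  # per-anchor transmit power

constraints = []
for _ in range(K):
    c = 1.0 + rng.random(n)                # nominal gain per unit power
    d = 1.0                                # required accuracy level
    E = 0.1 * rng.standard_normal((n, n))  # ellipsoidal uncertainty
    # Robust constraint c^T p - ||E p||_2 >= d as a second-order cone:
    constraints.append(cp.SOC(c @ p - d, E @ p))

prob = cp.Problem(cp.Minimize(cp.sum(p)), constraints)
prob.solve()
print("total power:", round(prob.value, 3))
```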


Journal ArticleDOI
TL;DR: A tool called TCP Congestion Avoidance Algorithm Identification (CAAI) is proposed for actively identifying the TCP algorithm of a remote web server and measurement results show a strong sign that the majority of TCP flows are not controlled by AIMD anymore, and the Internet congestion control has already changed from homogeneous to highly heterogeneous.
Abstract: The Internet has recently been evolving from homogeneous congestion control to heterogeneous congestion control. Several years ago, Internet traffic was mainly controlled by the traditional RENO, whereas it is now controlled by multiple different TCP algorithms, such as RENO, CUBIC, and Compound TCP (CTCP). However, there is very little work on the performance and stability of the Internet with heterogeneous congestion control. One fundamental reason is the lack of deployment information for the different TCP algorithms. In this paper, we first propose a tool called TCP Congestion Avoidance Algorithm Identification (CAAI) for actively identifying the TCP algorithm of a remote Web server. CAAI can identify all default TCP algorithms (e.g., RENO, CUBIC, and CTCP) and most non-default TCP algorithms of major operating system families. We then present CAAI measurement results for about 30 000 Web servers. We found that only 3.31%-14.47% of the Web servers still use RENO, 46.92% use BIC or CUBIC, and 14.5%-25.66% use CTCP. Our measurement results are a strong sign that the majority of TCP flows are no longer controlled by RENO and that Internet congestion control has already changed from homogeneous to heterogeneous.

119 citations
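
The following sketch conveys the core idea behind tools like CAAI: observe a server's congestion window trace per round-trip during congestion avoidance and match the growth pattern against known algorithms. The thresholds and the three-way decision below are illustrative stand-ins for CAAI's actual classification procedure, which also handles slow start and many non-default stacks.

```python
# Illustrative cwnd-trace classifier (not CAAI's real decision rules).

def classify_cwnd_trace(cwnd_per_rtt):
    """Classify a congestion-avoidance cwnd trace (in packets)."""
    diffs = [b - a for a, b in zip(cwnd_per_rtt, cwnd_per_rtt[1:])]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    if var < 0.1 and abs(mean - 1.0) < 0.25:
        return "RENO-like (+1 packet per RTT)"
    if all(d2 >= d1 for d1, d2 in zip(diffs, diffs[1:])):
        return "convex growth (CUBIC-like away from W_max)"
    return "other (e.g., CTCP or a non-default variant)"


print(classify_cwnd_trace([10, 11, 12, 13, 14, 15]))  # RENO-like
print(classify_cwnd_trace([10, 11, 13, 17, 25, 41]))  # convex growth
```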


Journal ArticleDOI
TL;DR: An energy cost model is proposed, along with two efficient energy-aware virtual network embedding algorithms: a heuristic-based algorithm and a particle-swarm-optimization-based algorithm.
Abstract: Virtual network embedding, which means mapping virtual networks requested by users to a shared substrate network maintained by an Internet service provider, is a key function that network virtualization needs to provide. Prior work on virtual network embedding has primarily focused on maximizing the revenue of the Internet service provider and did not consider the energy cost of accommodating such requests. As energy cost accounts for more than half of the operating cost of substrate networks, minimizing it while accommodating more virtual network requests is critical for infrastructure providers. In this paper, we make the first effort toward energy-aware virtual network embedding. We first propose an energy cost model and formulate the energy-aware virtual network embedding problem as an integer linear programming problem. We then propose two efficient energy-aware virtual network embedding algorithms: a heuristic-based algorithm and a particle-swarm-optimization-based algorithm. We implemented our algorithms in C++ and performed side-by-side comparison with prior algorithms. The simulation results show that our algorithms significantly reduce the energy cost, by up to 50% over the existing algorithm, for accommodating the same sequence of virtual network requests.

118 citations


Journal ArticleDOI
TL;DR: A load balancing and scheduling algorithm that is throughput-optimal, without assuming that job sizes are known or are upper-bounded is presented.
Abstract: We consider a stochastic model of jobs arriving at a cloud data center. Each job requests a certain amount of CPU, memory, disk space, etc. Job sizes (durations) are also modeled as random variables, with possibly unbounded support. These jobs need to be scheduled nonpreemptively on servers. The jobs are first routed to one of the servers when they arrive and are queued at the servers. Each server then chooses a set of jobs from its queues so that it has enough resources to serve all of them simultaneously. This problem has been studied previously under the assumption that job sizes are known and upper-bounded, and an algorithm was proposed that stabilizes traffic load in a diminished capacity region. Here, we present a load balancing and scheduling algorithm that is throughput-optimal, without assuming that job sizes are known or are upper-bounded.

109 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed an analytical framework for evaluating the capacity and delay performance of a wide range of routing algorithms in converged fiber-wireless (FiWi) broadband access networks based on different next-generation PONs and a Gigabit-class multiradio multichannel WLAN-mesh front end.
Abstract: Current Gigabit-class passive optical networks (PONs) evolve into next-generation PONs, whereby high-speed Gb/s time division multiplexing (TDM) and long-reach wavelength-broadcasting/routing wavelength division multiplexing (WDM) PONs are promising near-term candidates. On the other hand, next-generation wireless local area networks (WLANs) based on frame aggregation techniques will leverage physical-layer enhancements, giving rise to Gigabit-class very high throughput (VHT) WLANs. In this paper, we develop an analytical framework for evaluating the capacity and delay performance of a wide range of routing algorithms in converged fiber-wireless (FiWi) broadband access networks based on different next-generation PONs and a Gigabit-class multiradio multichannel WLAN-mesh front end. Our framework is very flexible and incorporates arbitrary frame size distributions, traffic matrices, optical/wireless propagation delays, data rates, and fiber faults. We verify the accuracy of our probabilistic analysis by means of simulation for the wireless and wireless-optical-wireless operation modes of various FiWi network architectures under peer-to-peer, upstream, uniform, and nonuniform traffic scenarios. The results indicate that our proposed optimized FiWi routing algorithm (OFRA) outperforms minimum (wireless) hop and delay routing in terms of throughput for balanced and unbalanced traffic loads, at the expense of a slightly increased mean delay at small to medium traffic loads.

102 citations


Journal ArticleDOI
TL;DR: A new general method based on variable increments is introduced to improve the efficiency of CBFs and their variants; it can also extend many CBF variants that have been published in the literature.
Abstract: Counting Bloom Filters (CBFs) are widely used in networking device algorithms. They implement fast set representations to support membership queries with limited error and, unlike Bloom Filters, support element deletions. However, they consume significant amounts of memory. In this paper, we introduce a new general method based on variable increments to improve the efficiency of CBFs and their variants. Unlike CBFs, at each element insertion, the hashed counters are incremented by a hashed variable increment instead of a unit increment. Then, to query an element, the exact value of a counter is considered and not just its positiveness. We present two simple schemes based on this method. We demonstrate that this method can always achieve a lower false positive rate and a lower overflow probability bound than CBF in practical systems. We also show how it can be easily implemented in hardware, with limited added complexity and memory overhead. We further explain how this method can extend many variants of CBF that have been published in the literature. We then suggest possible improvements of the presented schemes and provide lower bounds on their memory consumption. Lastly, using simulations with real-life traces and hash functions, we show how it can significantly improve the false positive rate of CBFs given the same amount of memory.
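
A minimal sketch of the variable-increment idea, under simplifying assumptions: each of the k hash functions also selects a variable increment from a small set D, and a query checks whether each counter could plausibly contain that element's increment rather than merely being positive. The set D and the query rule below are simplified relative to the paper's two schemes.

```python
# Simplified variable-increment counting Bloom filter sketch.
import hashlib


class VariableIncrementCBF:
    def __init__(self, m=1024, k=3, D=(4, 5, 6, 7)):
        self.m, self.k, self.D = m, k, D
        self.counters = [0] * m

    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            idx = int.from_bytes(h[:4], "big") % self.m
            inc = self.D[int.from_bytes(h[4:8], "big") % len(self.D)]
            yield idx, inc

    def add(self, item):
        for idx, inc in self._hashes(item):
            self.counters[idx] += inc

    def remove(self, item):
        for idx, inc in self._hashes(item):
            self.counters[idx] -= inc

    def query(self, item):
        lo = min(self.D)
        for idx, inc in self._hashes(item):
            c = self.counters[idx]
            # The counter must be able to contain this item's increment:
            # exactly inc, or at least inc plus the smallest set value.
            if c != inc and c < inc + lo:
                return False
        return True


f = VariableIncrementCBF()
f.add("10.0.0.1")
print(f.query("10.0.0.1"), f.query("10.0.0.2"))  # True, (likely) False
```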

Journal ArticleDOI
TL;DR: This paper proposes a centralized algorithm Non-Linear Approximation Optimization for Proportional Fairness (NLAO-PF) to derive the user-AP association via relaxation and proposes a distributed heuristic Best Performance First (BPF) based on a novel performance revenue function, which provides an AP selection criterion for newcomers.
Abstract: In this paper, we investigate the problem of achieving proportional fairness via access point (AP) association in multirate WLANs. This problem is formulated as a nonlinear program whose objective is to maximize the total user bandwidth utility in the whole network. Such a formulation jointly considers fairness and AP selection. We first propose a centralized algorithm, Non-Linear Approximation Optimization for Proportional Fairness (NLAO-PF), to derive the user-AP association via relaxation. Since the relaxation may cause a large integrality gap, a compensation function is introduced to ensure that our algorithm achieves at least half of the optimum in the worst case. This algorithm is intended to be run periodically for resource management. To handle dynamic user membership, we propose a distributed heuristic, Best Performance First (BPF), based on a novel performance revenue function, which provides an AP selection criterion for newcomers. When an existing user leaves the network, the transmission times of other users associated with the same AP can be redistributed easily based on NLAO-PF. An extensive simulation study has been performed to validate our design and to compare the performance of our algorithms to the state of the art.

Journal ArticleDOI
TL;DR: This paper presents a model of the automatic video stream-switching employed by one of these leading video streaming services along with a description of the client-side communication and control protocol.
Abstract: Adaptive video streaming is a relevant advancement with respect to classic progressive download streaming a la YouTube. Among the different approaches, the video stream-switching technique is getting wide acceptance, being adopted by Microsoft, Apple, and popular video streaming services such as Akamai, Netflix, Hulu, Vudu, and Livestream. In this paper, we present a model of the automatic video stream-switching employed by one of these leading video streaming services along with a description of the client-side communication and control protocol. From the control architecture point of view, the automatic adaptation is achieved by means of two interacting control loops having the controllers at the client and the actuators at the server: One loop is the buffer controller, which aims at steering the client playout buffer to a target length by regulating the server sending rate; the other one implements the stream-switching controller and aims at selecting the video level. A detailed validation of the proposed model has been carried out through experimental measurements in an emulated scenario.
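
A control-loop sketch of the two-loop structure described above: a client-side buffer controller computes the server sending rate needed to steer the playout buffer toward a target length, and a stream-switching controller selects the highest video level sustainable at that rate. The proportional gain and the level ladder are illustrative assumptions, not the modeled service's actual parameters.

```python
# Two interacting control loops: buffer steering and stream switching.

LEVELS_KBPS = [300, 700, 1500, 2500]  # illustrative video levels


def buffer_controller(buffer_s, target_s, video_rate_kbps, gain=0.1):
    """Proportional controller: sending rate that closes the buffer error."""
    error = target_s - buffer_s  # seconds of playout we are short (or long)
    return max(0.0, video_rate_kbps * (1.0 + gain * error))


def stream_switcher(sending_rate_kbps):
    """Actuator at the server: highest level sustainable at this rate."""
    feasible = [lvl for lvl in LEVELS_KBPS if lvl <= sending_rate_kbps]
    return feasible[-1] if feasible else LEVELS_KBPS[0]


rate = buffer_controller(buffer_s=6.0, target_s=10.0, video_rate_kbps=1500)
print(rate, stream_switcher(rate))  # 2100.0 kb/s sending rate, level 1500
```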

Journal ArticleDOI
TL;DR: This paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage, and develops load-adaptive algorithms that can pick the best code rate on a per-request basis by using offline computed queue backlog thresholds.
Abstract: Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service, Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to the storage cloud, and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76%, 80%, and 85% reductions in mean, 90th, and 99th percentiles for 2-MB files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper, we focus on analyzing the delay performance when chunking, forward error correction (FEC), and parallel connections are used together. Based on this analysis, we develop load-adaptive algorithms that can pick the best code rate on a per-request basis by using offline computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, and operation type (e.g., read or write) as well as heterogeneous services with a mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog-based and greedy solutions support the full rate region and provide the best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog-based solutions achieve better delay performance at higher percentile values than the greedy solution.
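
The download-side idea can be sketched in a few lines: store n erasure-coded chunks of an object, fetch them over parallel connections, and consider the read complete as soon as any k arrive, ignoring the slow tail. The fetch_chunk function below is a hypothetical stand-in for real cloud storage GETs, with randomized service times mimicking the measured variability.

```python
# Parallel coded read: any k of n chunks suffice to reconstruct.
import concurrent.futures as cf
import random
import time


def fetch_chunk(chunk_id):
    time.sleep(random.expovariate(2.0))  # random per-chunk service time
    return chunk_id


def coded_read(n=8, k=6):
    with cf.ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(fetch_chunk, i) for i in range(n)]
        done = []
        for fut in cf.as_completed(futures):
            done.append(fut.result())
            if len(done) == k:  # enough chunks to decode; the read is done
                # (the executor still drains stragglers on exit;
                # a real client would cancel those connections)
                return done


print("decoded from chunks:", sorted(coded_read()))
```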

Journal ArticleDOI
TL;DR: This work investigates the cost-effective massive viral marketing problem, taking into consideration the limited influence propagation, and provides mathematical programming to find optimal seeding for medium-size networks as well as VirAds, an efficient algorithm, to tackle the problem on large-scale networks.
Abstract: Online social networks (OSNs) have become one of the most effective channels for marketing and advertising. Since users are often influenced by their friends, “word-of-mouth” exchanges, so-called viral marketing, in social networks can be used to increase product adoption or widely spread content over the network. The common perception of viral marketing as cheap, easy, and massively effective makes it an ideal replacement for traditional advertising. However, recent studies have revealed that the propagation often fades quickly within only a few hops from the sources, counteracting the assumption of self-perpetuating influence common in the literature. With only limited influence propagation, is massively reaching customers via viral marketing still affordable? How do we economically spend more resources to increase the spreading speed? We investigate the cost-effective massive viral marketing problem, taking into consideration the limited influence propagation. Both analysis based on power-law network theory and numerical experiments demonstrate that viral marketing may involve costly seeding. To minimize the seeding cost, we provide a mathematical programming formulation to find optimal seeding for medium-size networks and propose VirAds, an efficient algorithm, to tackle the problem on large-scale networks. VirAds guarantees a relative error bound of $O(1)$ from the optimal solutions in power-law networks and outperforms greedy heuristics based on degree centrality. Moreover, we also show that, in general, approximating the optimal seeding within a ratio better than $O(\log n)$ is unlikely to be possible.
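
For concreteness, here is a sketch of the degree-style greedy seeding that VirAds is compared against, under a crude d-hop influence model in which a seed influences every node within d hops. The influence rule is a deliberate simplification, not the paper's activation model.

```python
# Greedy seeding baseline under a simplistic d-hop influence model.
from collections import deque


def d_hop_reach(graph, seed, d):
    """All nodes within d hops of seed (BFS on an adjacency dict)."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == d:
            continue
        for nb in graph[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen


def greedy_seed(graph, d=2):
    """Repeatedly seed the node covering the most uninfluenced nodes."""
    influenced, seeds = set(), []
    while len(influenced) < len(graph):
        best = max((v for v in graph if v not in influenced),
                   key=lambda v: len(d_hop_reach(graph, v, d) - influenced))
        seeds.append(best)
        influenced |= d_hop_reach(graph, best, d)
    return seeds


g = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(greedy_seed(g))  # [1] -- node 1 reaches the whole toy graph in 2 hops
```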

Journal ArticleDOI
TL;DR: A novel sleep-time sizing and scheduling framework for the implementation of green bandwidth allocation (GBA) in TDMA-PONs, including a sleep-time sizing mechanism, Sort-And-Shift (SAS), in which the ONUs are sorted according to their expected transmission start times and their sleep times are shifted to resolve any possible collision while ensuring maximum energy saving.
Abstract: Next-generation passive optical networks (PONs) have been considered in the past few years as a cost-effective broadband access technology. With ever-increasing concern about power consumption, energy efficiency has become an important issue in their operation. In this paper, we propose a novel sleep-time sizing and scheduling framework for the implementation of green bandwidth allocation (GBA) in TDMA-PONs. The proposed framework leverages the batch-mode transmission feature of GBA to minimize the overhead due to frequent ONU on–off transitions. The optimal sleeping time sequence of each ONU is determined in every cycle without violating the maximum delay requirement. With multiple ONUs possibly accessing the shared media simultaneously, a collision may occur. To address this problem, we propose a new sleep-time sizing mechanism, namely Sort-And-Shift (SAS), in which the ONUs are sorted according to their expected transmission start times, and their sleep times are shifted to resolve any possible collision while ensuring maximum energy saving. Results show the effectiveness of the proposed framework and highlight the merits of our solutions.
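
The Sort-And-Shift mechanism can be sketched directly from its description: sort ONUs by expected transmission start time, then shift each start later whenever it would collide with the previous transmission on the shared medium. Times and durations below are illustrative; the full mechanism also sizes sleep times against each ONU's delay bound.

```python
# Sort-And-Shift sketch: collision-free schedule on a shared medium.

def sort_and_shift(onus):
    """onus: list of (name, expected_start, duration), times in ms."""
    schedule = []
    earliest = 0.0  # end of the previously scheduled transmission
    for name, start, duration in sorted(onus, key=lambda o: o[1]):
        start = max(start, earliest)  # shift to resolve any collision
        schedule.append((name, start, start + duration))
        earliest = start + duration
    return schedule


onus = [("ONU-2", 1.0, 2.0), ("ONU-1", 0.5, 1.0), ("ONU-3", 1.2, 0.5)]
for name, s, e in sort_and_shift(onus):
    print(f"{name}: sleeps until {s:.1f} ms, transmits {s:.1f}-{e:.1f} ms")
```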

Journal ArticleDOI
TL;DR: This paper proposes enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing rich policy-based forwarding enabled by the OpenFlow architecture.
Abstract: The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved from bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.
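
A sketch of per-port prediction under simplifying assumptions: a small direct-mapped table keyed by a hash of the flow tuple remembers the most recent classification, and a hit bypasses the full TCAM lookup. The table size and replacement policy are illustrative, not the paper's circuit design.

```python
# Per-port packet prediction sketch exploiting temporal locality.

TABLE_SIZE = 32  # entries of prediction circuitry per port


class PortPredictor:
    def __init__(self):
        self.table = [None] * TABLE_SIZE  # (flow_tuple, action) slots

    def lookup(self, flow_tuple, tcam_lookup):
        slot = hash(flow_tuple) % TABLE_SIZE
        entry = self.table[slot]
        if entry is not None and entry[0] == flow_tuple:
            return entry[1], True          # prediction hit: TCAM bypassed
        action = tcam_lookup(flow_tuple)   # miss: fall back to full lookup
        self.table[slot] = (flow_tuple, action)
        return action, False


pred = PortPredictor()
tcam = lambda flow: "forward:port3"        # stand-in for the TCAM pipeline
flow = ("10.0.0.1", "10.0.0.2", 6, 5000, 80)
print(pred.lookup(flow, tcam))  # miss: installs the prediction entry
print(pred.lookup(flow, tcam))  # hit: bypasses the TCAM
```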

Journal ArticleDOI
TL;DR: LASTor, a new Tor client that addresses shortcomings in Tor with only client-side modifications, is developed, along with an efficient and accurate algorithm to identify paths on which an AS can compromise anonymity by traffic correlation.
Abstract: Though the widely used Tor anonymity network is designed to enable low-latency anonymous communication, interactive communications on Tor incur latencies over 5× greater than on the direct Internet path, and in many cases, autonomous systems (ASs) can compromise anonymity via correlations of network traffic. In this paper, we develop LASTor, a new Tor client that addresses these shortcomings in Tor with only client-side modifications. First, LASTor improves communication latencies by accounting for the inferred locations of Tor relays while choosing paths. Since the preference for shorter paths reduces the entropy of path selection, we design LASTor so that a user can choose an appropriate tradeoff between latency and anonymity. Second, we develop an efficient and accurate algorithm to identify paths on which an AS can compromise anonymity by traffic correlation. LASTor avoids such paths to improve a user's anonymity, and the low run-time of the algorithm ensures that the impact on end-to-end communication latencies is low. Our results show that, in comparison to the default Tor client, LASTor reduces median latencies by 25% while also reducing the false negative rate of not detecting a potential snooping AS from 57% to 11%.
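
The latency/anonymity tradeoff can be sketched as a weighted path choice: score candidate relay paths by end-to-end great-circle distance and bias a random selection toward shorter paths with a tunable parameter alpha (alpha = 0 gives uniform, maximum-entropy selection; alpha = 1 is latency-greedy). The scoring function below is an illustrative stand-in for LASTor's actual weighting.

```python
# Distance-biased path selection; alpha trades latency for entropy.
import math
import random


def path_distance_km(hops):
    """Sum of great-circle distances over consecutive (lat, lon) hops."""
    total = 0.0
    for (la1, lo1), (la2, lo2) in zip(hops, hops[1:]):
        p1, p2 = math.radians(la1), math.radians(la2)
        dl = math.radians(lo2 - lo1)
        cos_angle = (math.sin(p1) * math.sin(p2)
                     + math.cos(p1) * math.cos(p2) * math.cos(dl))
        total += 6371 * math.acos(max(-1.0, min(1.0, cos_angle)))
    return total


def pick_path(paths, alpha):
    """alpha = 0: uniform over candidates; alpha = 1: strongly prefers
    the geographically shortest path."""
    dists = [path_distance_km(p) for p in paths]
    weights = [(min(dists) / d) ** (10 * alpha) for d in dists]
    return random.choices(paths, weights=weights)[0]


ny, lon, par, tyo = (40.7, -74.0), (51.5, -0.1), (48.9, 2.4), (35.7, 139.7)
chosen = pick_path([[ny, lon, par], [ny, tyo, par]], alpha=0.8)
print(round(path_distance_km(chosen)), "km")
```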

Journal ArticleDOI
TL;DR: A system under which users are divided into clusters based on their channel conditions, and their requests are represented by different queues at logical front ends is studied, finding provably optimal policies that stabilize the request queues and reduce average deficit to zero at small cost.
Abstract: The rapid growth of wireless content access implies the need for content placement and scheduling at wireless base stations. We study a system under which users are divided into clusters based on their channel conditions, and their requests are represented by different queues at logical front ends. Requests might be elastic (implying no hard delay constraint) or inelastic (requiring that a delay target be met). Correspondingly, we have request queues that indicate the number of elastic requests, and deficit queues that indicate the deficit in inelastic service. Caches are of finite size and can be refreshed periodically from a media vault. We consider two cost models that correspond to inelastic requests for streaming stored content and real-time streaming of events, respectively. We design provably optimal policies that stabilize the request queues (hence ensuring finite delays) and reduce average deficit to zero [hence ensuring that the quality-of-service (QoS) target is met] at small cost. We illustrate our approach through simulations.

Journal ArticleDOI
TL;DR: In this article, a traffic engineering mathematical programming formulation based on integer linear programming is proposed to minimize the energy consumption of the network through a management strategy that selectively switches off devices according to the traffic level.
Abstract: Recent data confirm that the power consumption of the information and communications technologies (ICT) and of the Internet itself can no longer be ignored, considering the increasing pervasiveness and the importance of the sector on productivity and economic growth. Although the traffic load of communication networks varies greatly over time and rarely reaches capacity limits, its energy consumption is almost constant. Based on this observation, energy management strategies are being considered with the goal of minimizing the energy consumption, so that consumption becomes proportional to the traffic load either at the individual-device level or for the whole network. The focus of this paper is to minimize the energy consumption of the network through a management strategy that selectively switches off devices according to the traffic level. We consider a set of traffic scenarios and jointly optimize their energy consumption assuming a per-flow routing. We propose a traffic engineering mathematical programming formulation based on integer linear programming that includes constraints on the changes of the device states and routing paths to limit the impact on quality of service and the signaling overhead. We show a set of numerical results obtained using the energy consumption of real routers and study the impact of the different parameters and constraints on the optimal energy management strategy. We also present heuristic results to compare the optimal operational planning with online energy management operation.

Journal ArticleDOI
Wei Dong, Yunhao Liu, Yuan He, Tong Zhu, Chun Chen
TL;DR: This study deploys a large-scale WSN and proposes MAP, a step-by-step methodology to identify the losses, extract system events, and perform spatial-temporal correlation analysis by employing a carefully examined causal graph to get a closer look at the root causes of packet losses in a low-power ad hoc network.
Abstract: Understanding the packet delivery performance of a wireless sensor network (WSN) is critical for improving system performance and exploring future developments and applications of WSN techniques. In spite of many empirical measurements in the literature, we still lack in-depth understanding of how and to what extent different factors contribute to the overall packet losses for a complete stack of protocols at large scale. Specifically, very little is known about: 1) when, where, and under what kind of circumstances packet losses occur; and 2) why packets are lost. As a step toward addressing those issues, we deploy a large-scale WSN and design a measurement system for retrieving important system metrics. We propose MAP, a step-by-step methodology to identify the losses, extract system events, and perform spatial-temporal correlation analysis by employing a carefully examined causal graph. MAP enables us to get a closer look at the root causes of packet losses in a low-power ad hoc network. This study validates some earlier conjectures on WSNs and reveals some new findings. The quantitative results also shed light on future large-scale WSN deployments.

Journal ArticleDOI
TL;DR: A novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts, using bipartite graphs to model host communications from network traffic and one-mode projections of bipartite graphs to discover social-behavior similarity of end-hosts.
Abstract: As Internet traffic continues to grow in size and complexity, it has become an increasingly challenging task to understand behavior patterns of end-hosts and network applications. This paper presents a novel approach based on behavioral graph analysis to study the behavior similarity of Internet end-hosts. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of bipartite graphs for discovering social-behavior similarity of end-hosts. By applying simple and efficient clustering algorithms on the similarity matrices and clustering coefficient of one-mode projection graphs, we perform network-aware clustering of end-hosts in the same network prefixes into different end-host behavior clusters and discover inherent clustered groups of Internet applications. Our experiment results based on real datasets show that end-host and application behavior clusters exhibit distinct traffic characteristics that provide improved interpretations on Internet traffic. Finally, we demonstrate the practical benefits of exploring behavior similarity in profiling network behaviors, discovering emerging network applications, and detecting anomalous traffic patterns.
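
The pipeline described above can be sketched compactly: build a bipartite host-to-service graph from traffic, form its one-mode projection over hosts with edges weighted by shared destinations, and cluster hosts on that similarity. The Jaccard measure and threshold below are illustrative stand-ins for the paper's clustering on similarity matrices.

```python
# Bipartite traffic graph -> one-mode projection -> behavior clusters.
from collections import defaultdict
from itertools import combinations

flows = [("h1", "dns"), ("h1", "web"), ("h2", "web"),
         ("h2", "dns"), ("h3", "ssh"), ("h3", "scan")]

# Bipartite graph: host -> set of services it talks to.
neighbors = defaultdict(set)
for host, service in flows:
    neighbors[host].add(service)

# One-mode projection: edge weight = Jaccard overlap of service sets.
def jaccard(a, b):
    return len(a & b) / len(a | b)

clusters = []
for h1, h2 in combinations(sorted(neighbors), 2):
    if jaccard(neighbors[h1], neighbors[h2]) >= 0.5:
        for c in clusters:
            if h1 in c or h2 in c:
                c.update({h1, h2})
                break
        else:
            clusters.append({h1, h2})

print(clusters)  # [{'h1', 'h2'}] -- h3's scanning behavior stands apart
```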

Journal ArticleDOI
TL;DR: This paper proposes a secure data retrieval scheme using CP-ABE for decentralized DTNs where multiple key authorities manage their attributes independently and demonstrates how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant military network.
Abstract: Mobile nodes in military environments such as a battlefield or a hostile region are likely to suffer from intermittent network connectivity and frequent partitions. Disruption-tolerant network (DTN) technologies are becoming successful solutions that allow wireless devices carried by soldiers to communicate with each other and access the confidential information or command reliably by exploiting external storage nodes. Some of the most challenging issues in this scenario are the enforcement of authorization policies and the policies update for secure data retrieval. Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic solution to the access control issues. However, the problem of applying CP-ABE in decentralized DTNs introduces several security and privacy challenges with regard to the attribute revocation, key escrow, and coordination of attributes issued from different authorities. In this paper, we propose a secure data retrieval scheme using CP-ABE for decentralized DTNs where multiple key authorities manage their attributes independently. We demonstrate how to apply the proposed mechanism to securely and efficiently manage the confidential data distributed in the disruption-tolerant military network.

Journal ArticleDOI
TL;DR: An effective Benders decomposition (BD) approach is developed that incorporates an upper-bound heuristic algorithm, strengthened cuts, and an ε-optimal framework for accelerated convergence, with the goal of prolonging network lifetime via efficient use of the limited energy at the sensors.
Abstract: Data-gathering wireless sensor networks (WSNs) are operated unattended over long time horizons to collect data in several applications such as those in climate monitoring and a variety of ecological studies. Typically, sensors have limited energy (e.g., an on-board battery) and are subject to the elements in the terrain. In-network operations, which largely involve periodically changing network flow decisions to prolong the network lifetime, are managed remotely, and the collected data are retrieved by a user via the Internet. In this paper, we study an integrated topology control and routing problem in cluster-based WSNs. To prolong network lifetime via efficient use of the limited energy at the sensors, we adopt a hierarchical network structure with multiple sinks at which the data collected by the sensors are gathered through the clusterheads (CHs). We consider a mixed-integer linear programming (MILP) model to optimally determine the sink and CH locations as well as the data flow in the network. Our model effectively utilizes both the position and the energy-level aspects of the sensors while selecting the CHs and avoids repeatedly selecting as CHs, in successive periods, the highest-energy sensors or the sensors that are well-positioned with respect to sinks. For the solution of the MILP model, we develop an effective Benders decomposition (BD) approach that incorporates an upper-bound heuristic algorithm, strengthened cuts, and an $\varepsilon$-optimal framework for accelerated convergence. Computational evidence demonstrates the efficiency of the BD approach and the heuristic in terms of solution quality and time.

Journal ArticleDOI
TL;DR: A new estimation method for random service, based on iterative constant-rate probes that take advantage of statistical methods, is proposed, and it is shown how the method can be realized to achieve both good accuracy and good confidence levels.
Abstract: Numerous methods for available bandwidth estimation have been developed for wireline networks, and their effectiveness is well-documented. However, most methods fail to predict bandwidth availability reliably in a wireless setting. It is accepted that the increased variability of wireless channel conditions makes bandwidth estimation more difficult. However, a (satisfactory) explanation of why these methods are failing is missing. This paper seeks to provide insights into the problem of bandwidth estimation in wireless networks or, more broadly, in networks with random service. We express bandwidth availability in terms of bounding functions with a defined violation probability. Exploiting properties of a stochastic min-plus linear system theory, the task of bandwidth estimation is formulated as inferring an unknown bounding function from measurements of probing traffic. We present derivations showing that simply using the expected value of the available bandwidth in networks with random service leads to a systematic overestimation of the traffic departures. Furthermore, we show that in a multihop setting with random service at each node, available bandwidth estimation requires observations over (in principle infinitely) long time periods. We propose a new estimation method for random service that is based on iterative constant-rate probes that take advantage of statistical methods. We show how our estimation method can be realized to achieve both good accuracy and confidence levels. We evaluate our method for wired single- and multihop networks, as well as for wireless networks.

Journal ArticleDOI
TL;DR: A data-gathering protocol for multihop wireless sensor networks with energy-harvesting capabilities is studied whereby the sources measured by the sensors are correlated, and a close-to-optimal online scheme is proposed that has an explicit and controllable tradeoff between optimality gap and queue sizes.
Abstract: Energy-harvesting wireless sensor networking is an emerging technology with applications to various fields such as environmental and structural health monitoring. A distinguishing feature of wireless sensors is the need to perform both source coding tasks, such as measurement and compression, and transmission tasks. It is known that the overall energy consumption for source coding is generally comparable to that of transmission, and that a joint design of the two classes of tasks can lead to relevant performance gains. Moreover, the efficiency of source coding in a sensor network can be potentially improved via distributed techniques by leveraging the fact that signals measured by different nodes are correlated. In this paper, a data-gathering protocol for multihop wireless sensor networks with energy-harvesting capabilities is studied whereby the sources measured by the sensors are correlated. Both the energy consumptions of source coding and transmission are modeled, and distributed source coding is assumed. The problem of dynamically and jointly optimizing the source coding and transmission strategies is formulated for time-varying channels and sources. The problem consists of minimizing a cost function of the distortions in the source reconstructions at the sink under queue stability constraints. By adopting perturbation-based Lyapunov techniques, a close-to-optimal online scheme is proposed that has an explicit and controllable tradeoff between optimality gap and queue sizes. The role of side information available at the sink is also discussed under the assumption that acquiring the side information entails an energy cost.

Journal ArticleDOI
TL;DR: A novel protocol design is proposed that achieves multifold reduction in both energy cost and execution time when compared to the best existing work, and a fundamental energy-time tradeoff in missing-tag detection is revealed.
Abstract: Radio frequency identification (RFID) technologies are poised to revolutionize retail, warehouse, and supply chain management. One of their interesting applications is to automatically detect missing tags in a large storage space, which may have to be performed frequently to catch any missing event such as theft in time. Because RFID systems typically work under low-rate channels, past research has focused on reducing execution time of a detection protocol to prevent excessively long protocol execution from interfering with normal inventory operations. However, when active tags are used for a large spatial coverage, energy efficiency becomes critical in prolonging the lifetime of these battery-powered tags. Furthermore, much of the existing literature assumes that the channel between a reader and tags is reliable, which is not always true in reality because of noise/interference in the environment. Given these concerns, this paper makes three contributions. First, we propose a novel protocol design that considers both energy efficiency and time efficiency. It achieves multifold reduction in both energy cost and execution time when compared to the best existing work. Second, we reveal a fundamental energy-time tradeoff in missing-tag detection, which can be flexibly controlled through a couple of system parameters in order to achieve desirable performance. Third, we extend our protocol design to consider channel error under two different models. We find that energy/time cost will be higher in unreliable channel conditions, but the energy-time tradeoff relation persists.

Journal ArticleDOI
TL;DR: This paper investigates information-theoretic secrecy in large-scale networks and studies how capacity is affected by the secrecy constraint where the locations and channel state information of eavesdroppers are both unknown.
Abstract: Since the wireless channel is vulnerable to eavesdroppers, secrecy during message delivery is a major concern in many applications such as commercial, governmental, and military networks. This paper investigates information-theoretic secrecy in large-scale networks and studies how capacity is affected by the secrecy constraint when the locations and channel state information (CSI) of eavesdroppers are both unknown. We consider two scenarios: 1) the noncolluding case, where eavesdroppers can only decode messages individually; and 2) the colluding case, where eavesdroppers can collude to decode a message. For the noncolluding case, we show that the network secrecy capacity is not affected in order sense by the presence of eavesdroppers. For the colluding case, a per-node secrecy capacity of $\Theta(\frac{1}{\sqrt{n}})$ can be achieved when the eavesdropper density $\psi_e(n)$ is $O(n^{-\beta})$ for any constant $\beta > 0$, and it decreases monotonically as the density of eavesdroppers increases. The upper bounds on network secrecy capacity are derived for both cases and shown to be achievable by our scheme when $\psi_e(n)=O(n^{-\beta})$ or $\psi_e(n)=\Omega(\log^{\frac{\alpha-2}{\alpha}} n)$, where $\alpha$ is the path-loss exponent. We show that there is a clear tradeoff between the secrecy constraints and the achievable capacity. Furthermore, we also investigate the impact of the secrecy constraint on the capacity of dense networks, as well as the impact of active attacks, other traffic patterns, and mobility models in this context.

Journal ArticleDOI
TL;DR: This paper presents a measurement study on three popular video telephony systems on the Internet: Google+, iChat, and Skype, and uncovers important information about their key design choices and performance.
Abstract: Video telephony requires high-bandwidth and low-delay voice and video transmissions between geographically distributed users. It is challenging to deliver high-quality video telephony to end-consumers through the best-effort Internet. In this paper, we present our measurement study on three popular video telephony systems on the Internet: Google+, iChat, and Skype. Through a series of carefully designed active and passive measurements, we uncover important information about their key design choices and performance, including application architecture, video generation and adaptation schemes, loss recovery strategies, end-to-end voice and video delays, resilience against random and bursty losses, etc. The obtained insights can be used to guide the design of applications that call for high-bandwidth and low-delay data transmissions under a wide range of "best-effort" network conditions.

Journal ArticleDOI
TL;DR: It is proved that as long as influences on the signal attenuation are constant, they affect the capacity only by a constant factor.
Abstract: In this paper, we address two basic questions in wireless communication. First, how long does it take to schedule an arbitrary set of communication requests? Second, given a set of communication requests, how many of them can be scheduled concurrently? Our results are derived in the signal-to-interference-plus-noise ratio (SINR) interference model with geometric path loss and consist of efficient algorithms that find a constant approximation for the second problem and a logarithmic approximation for the first problem. In addition, we show that the interference model is robust to various factors that can influence the signal attenuation. More specifically, we prove that as long as influences on the signal attenuation are constant, they affect the capacity only by a constant factor.