
Showing papers presented at the Conference on Computer Communications Workshops in 2010


Proceedings ArticleDOI
15 Mar 2010
TL;DR: A novel network-level strategy based on a modification of current link-state routing protocols, such as OSPF, is proposed; according to this strategy, IP routers are able to power off some network links during low traffic periods.
Abstract: In this paper we analyze the challenging problem of energy saving in IP networks. A novel network-level strategy based on a modification of current link-state routing protocols, such as OSPF, is proposed; according to this strategy, IP routers are able to power off some network links during low-traffic periods. The proposed solution is a three-phase algorithm: in the first phase, some routers are elected as "exporters" of their own Shortest Path Trees (SPTs); in the second, the neighbors of these routers perform a modified Dijkstra algorithm to detect links to power off; in the last, new network paths on a modified network topology are computed. A performance study shows that, in an actual IP network, more than 60% of the links can be switched off.
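
As a toy illustration of the feasibility check behind powering off links, here is a minimal, centralized greedy sketch (our construction, assuming networkx and a per-link utilization map; the paper's actual algorithm is distributed and based on SPT exchange plus a modified Dijkstra):

```python
# Illustrative greedy sketch, not the paper's three-phase algorithm:
# switch off lightly loaded links while the topology stays connected.
import networkx as nx

def links_to_power_off(graph, load, threshold=0.2):
    """graph: nx.Graph; load: dict keyed like graph.edges() -> utilization."""
    g = graph.copy()
    off = []
    for (u, v) in sorted(g.edges(), key=lambda e: load.get(e, 0.0)):
        if load.get((u, v), 0.0) > threshold:
            break  # remaining links carry too much traffic to switch off
        g.remove_edge(u, v)
        if nx.is_connected(g):
            off.append((u, v))
        else:
            g.add_edge(u, v)  # roll back: removing this link would partition
    return off
```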

200 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: An in-depth study of fundamental properties of video popularity in YouTube finds a "magic number" in the average behavior of videos: for every 400 times a video is viewed, there is one of each of the following user actions: leaving a comment, rating the video, and adding it to one's favorite set.
Abstract: Being popular on YouTube is becoming a fundamental way of promoting one's self, services, or products. In this paper, we conduct an in-depth study of fundamental properties of video popularity in YouTube. We collect and study arguably the largest dataset of YouTube videos, roughly 37 million, accounting for 25% of all YouTube videos. We analyze popularity in a comprehensive fashion by looking at properties and patterns in time and considering various popularity metrics. We further study the relationship of the popularity metrics and find that four of them are highly correlated (viewcount, #comments, #ratings, #favorites) while the fifth, the average rating, exhibits very little correlation with the other metrics. We also find a "magic number" in the average behavior of videos: for every 400 times a video is viewed, we see one of each of the following user actions: leaving a comment, rating the video, and adding it to one's favorite set.
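
A minimal sketch of the per-metric check behind the "400 views per action" observation and the metric correlations (our helper with assumed array inputs, not the paper's 37-million-video pipeline):

```python
import numpy as np

def popularity_summary(views, actions):
    """views: per-video view counts; actions: dict name -> per-video counts."""
    views = np.asarray(views, dtype=float)
    summary = {}
    for name, counts in actions.items():
        counts = np.asarray(counts, dtype=float)
        summary[name] = {
            # ~400 in the paper for comments, ratings and favorites
            "views_per_action": views.sum() / max(counts.sum(), 1.0),
            "corr_with_views": float(np.corrcoef(views, counts)[0, 1]),
        }
    return summary
```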

194 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: This work designs a routing scheme for cognitive radio ad hoc networks, named Gymkhana, which is aware of the degree of connectivity of possible paths towards the destination, and uses Laplacian matrices to derive a closed formula measuring path connectivity.
Abstract: The topology of a cognitive radio ad hoc network is highly influenced by the behavior of both licensed (primary) and unlicensed (secondary) users. In fact, the network connectivity can be impaired by the activity of primary users. This aspect has a significant impact on the design of routing protocols. We design a routing scheme for cognitive radio ad hoc networks, named Gymkhana, which is aware of the degree of connectivity of possible paths towards the destination. Gymkhana routes the information across paths that avoid network zones that do not guarantee stable and high connectivity. To this aim we use a mathematical framework, based on the Laplacian spectrum of graphs, that allows a comprehensive evaluation of the different routing paths of the cognitive radio network. Laplacian matrices are used to compute the connectivity of the different network paths. Gymkhana uses a distributed protocol to collect some key parameters related to candidate paths from an origin to a destination. The parameters are fed into a basic mathematical structure which is used to compute efficient routing paths. Beyond the basic idea of Gymkhana, a further contribution of ours is the use of Laplacian matrices to derive a closed formula measuring path connectivity.
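
The Laplacian idea can be illustrated with the algebraic connectivity (the second-smallest Laplacian eigenvalue), which is zero exactly when a graph is disconnected; a minimal sketch (Gymkhana's closed formula and its distributed parameter collection are more elaborate):

```python
import numpy as np

def algebraic_connectivity(adj):
    """adj: symmetric 0/1 adjacency matrix of a candidate path's subgraph."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1]  # lambda_2 > 0 iff the subgraph is connected

# Higher lambda_2 -> better-connected network zone around the path.
```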

120 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: This work proposes two complementary approaches for social trust establishment, explicit social trust and implicit social trust, that are more robust against manipulation attacks compared to state-of-the-art approaches such as PGP-like certification chains and distributed community detection algorithms.
Abstract: Opportunistic networks enable mobile users to participate in various social interactions with applications such as content distribution and micro-blogs. Because of their distributed nature, securing user interactions relies on trust rather than hard cryptography. Trust is often based on past user interactions, as in reputation systems relying on ratings. Yet a more fundamental trust, social trust - assessing that a user is genuine with honest intentions - must be established beforehand, as many identities can be created easily (i.e., sybils). By leveraging the social network structure and its dynamics (conscious secure pairing and wireless contacts), we propose two complementary approaches for social trust establishment: explicit social trust and implicit social trust. Complexity, trust propagation, and security issues are evaluated using real-world complex graphs, synthetic mobility models, and mobility traces. We show how our approach limits the maximum number of sybils independently of the network size and is more robust against manipulation attacks compared to state-of-the-art approaches such as PGP-like certification chains and distributed community detection algorithms.

118 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: In this paper, the authors propose a naming scheme to name content and other objects that enables verification of data integrity as well as owner authentication and identification, which can also solve some of the main security problems of today's Internet.
Abstract: Several projects propose an information-centric approach to the network of the future. Such an approach makes efficient content distribution possible by making information retrieval host-independent and integrating storage for caching information into the network. Requests for particular content can thus be satisfied by any host or server holding a copy. The current security model based on host authentication is not applicable in this context; basic security functionality must instead be attached directly to the data and its naming scheme. We present a naming scheme for content and other objects that enables verification of data integrity as well as owner authentication and identification. The naming scheme is designed for flexibility and extensibility, e.g., to integrate other security properties like access control. At the same time, the naming scheme offers persistent IDs even when the content, the content owner and/or the owner's organizational structure, or the location changes. The requirements for the naming scheme and an analysis showing how the proposed scheme fulfills them are presented. Experience with prototyping the naming scheme is also discussed. The naming scheme builds the foundation for a secure information-centric network infrastructure that can also solve some of the main security problems of today's Internet.
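
A minimal sketch of a self-certifying name of this flavor (our simplification: the authority part is a hash of the owner's public key and signature verification is elided; the "ni:" prefix and helper names are assumptions, not the paper's exact format):

```python
import hashlib

def make_name(owner_pubkey: bytes, label: str) -> str:
    # Persistent ID: hash of the owner's key (authority) plus a label.
    return f"ni:{hashlib.sha256(owner_pubkey).hexdigest()}:{label}"

def verify(name: str, owner_pubkey: bytes, content: bytes,
           signed_digest: bytes) -> bool:
    # Owner authentication: the name must bind to the claimed key.
    if name.split(":")[1] != hashlib.sha256(owner_pubkey).hexdigest():
        return False
    # Data integrity: the content hash must match the (signed) digest;
    # checking the signature itself is elided in this sketch.
    return hashlib.sha256(content).digest() == signed_digest
```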

117 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: Based on anonymized traces collected in 2009 from a large European ISP connecting more than 20,000 residential DSL customers to the Internet, this paper focuses on the most prominent protocols in this environment - HTTP, BitTorrent (BT), eDonkey, and NNTP - and estimates the potential of caching for traffic reduction.
Abstract: Today's Internet traffic is dominated by users' demand for exchanging content. In particular, multi-media content, i.e., photos, music, and video, as well as software downloads and updates, contributes substantially to today's Internet traffic. One option for reducing network costs is to use caches, exploiting the observation that content popularity is consistent with Zipf's law. Web caching became unprofitable due to the increase in popularity of dynamic Web content; however, since rich content is at this point not very dynamic, caching appears to be worthwhile again. We base our analysis on anonymized traces from a large European ISP connecting more than 20,000 residential DSL customers to the Internet, collected in 2009. We focus on the most prominent protocols in this environment - HTTP, BitTorrent (BT), eDonkey, and NNTP - and estimate the potential of caching for traffic reduction. On the one hand, our results show that the potential for caching most client/server-based applications like HTTP and NNTP is small. On the other hand, P2P-based applications such as BitTorrent and certain HTTP-based applications have high content duplication ratios.
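
To see why Zipf popularity makes caching attractive, here is a back-of-the-envelope sketch estimating the best-case hit rate of a cache holding the top-C of N equally sized objects (our toy model, not the paper's trace-driven methodology):

```python
import numpy as np

def zipf_hit_rate(num_objects, cache_size, alpha=1.0):
    """Best-case (perfect-LFU) hit rate under Zipf(alpha) popularity."""
    ranks = np.arange(1, num_objects + 1)
    popularity = ranks ** -alpha
    popularity /= popularity.sum()
    return popularity[:cache_size].sum()

# e.g. zipf_hit_rate(1_000_000, 10_000): caching 1% of the objects
# already serves a disproportionate share of requests.
```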

94 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: A novel approach to traffic classification - named PortLoad - that takes the advantages of both worlds: the speed, simplicity, and reduced invasiveness of port-based approaches on one side, and the classification accuracy of DPI on the other.
Abstract: Traffic classification approaches based on deep packet inspection (DPI) are considered very accurate; however, two major drawbacks are their invasiveness with respect to users' privacy and their significant computational cost. Both are a consequence of the amount of per-flow payload data - we refer to it as "deepness" - typically inspected by such algorithms. At the opposite extreme, the fastest and least data-eager traffic classification approach is based on transport-level ports, even though today it is mostly considered inaccurate. In this paper we propose a novel approach to traffic classification - named PortLoad - that takes the advantages of both worlds: the speed, simplicity, and reduced invasiveness of port-based approaches on one side, and the classification accuracy of DPI on the other.
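
The gist of a PortLoad-style classifier can be sketched as a lookup on the port plus only the first few payload bytes (the signature table below is illustrative; PortLoad's real signatures and byte offsets differ):

```python
# Illustrative signatures: (destination port, first payload bytes) -> app.
SIGNATURES = {
    (80, b"GET "): "HTTP",
    (80, b"POST"): "HTTP",
    (119, b"ARTI"): "NNTP",
}

def classify(dst_port: int, payload: bytes) -> str:
    label = SIGNATURES.get((dst_port, payload[:4]))
    if label:
        return label
    # Fall back to the port alone when no shallow payload signature matches.
    return {80: "HTTP?", 443: "TLS?", 25: "SMTP?"}.get(dst_port, "unknown")
```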

89 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: The aim is to devise autonomous algorithms for small cells and femtocells to choose spectrum so that they can achieve high data rates without causing interference to users in the traditional macro cells.
Abstract: We consider the problem of sharing spectrum between different base stations in an OFDM network where some cells have a small radius. Such scenarios will become increasingly common in fourth-generation networks, where the need for ubiquitous high-speed coverage will lead to increased use of small cells as well as indoor femtocells. Our aim is to devise autonomous algorithms for small cells and femtocells to choose spectrum so that they can achieve high data rates without causing interference to users in the traditional macro cells. We present a number of algorithms that perform combinations of frequency and time sharing based on the channel conditions reported by the mobile users. Our schemes bear some resemblance to traditional 802.11 MAC algorithms; however, they differ in that they are able to use better information about channel conditions from the mobiles and are allowed to adjust the amount of spectrum they use. We evaluate our schemes using a platform that combines a physical-layer ray-tracing tool for indoor and outdoor environments with an upper-layer OFDM simulation tool. We believe that this type of simulation capability will become increasingly important as cellular networks target the provision of high-speed performance in dense urban environments. Our results suggest that user channel quality measurements can be used to set the level of sharing between femtocells and macrocells and that finding the correct level of sharing is important for optimal network performance.

74 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: Results obtained from the ns-2 network simulator show that the proposed protocols can significantly improve end-to-end throughput; at 1% and 5% packet loss rates, one of the proposed protocols shows about a 21% and 95% increase in end-to-end throughput for a file transfer application.
Abstract: Cognitive radio networks, or CogNets, pose several new challenges to transport layer protocols because of many unique features of the cognitive radio based devices used to build them. CogNets not only inherit all the features of wireless networks; their link connections are also intermittent and discontinuous. Existing transport layer protocols are too slow to respond quickly enough to utilize available link capacity. Furthermore, existing self-timed transport layer protocols are neither designed for nor able to provide efficient, reliable end-to-end transport service in CogNets, where wide round-trip delay variations naturally occur. We (i) identify the requirements of transport layer protocols for CogNets, (ii) propose a generic architecture for implementing a family of protocols that fulfill the desired requirements, and (iii) design, implement, and evaluate a family of best-effort transport protocols for serving delay-tolerant applications. Results obtained from the ns-2 network simulator show that the proposed protocols have the potential to significantly improve end-to-end throughput. For instance, at 1% and 5% packet loss rates, one of the proposed protocols shows about a 21% and 95% increase in end-to-end throughput for a file transfer application.

68 citations


Proceedings ArticleDOI
Jin Wei, Xi Zhang
15 Mar 2010
Abstract: NA

52 citations


Proceedings ArticleDOI
15 Mar 2010
TL;DR: The results indicate that the so-called Geyer saturation model can accurately reproduce the spatial structure of a large variety of wireless network types, arising from both planned and chaotic deployments.
Abstract: While modeling and analysis of network topology has been an active area of research in fixed networks, much less work has been done towards realistic modeling of wireless networks. The graph-based approach that has served as a solid foundation for network science in the fixed domain is not natural for wireless communication networks, since their performance inherently depends on the spatial relationships between nodes. In this paper we apply techniques from the spatial statistics literature to develop models of the spatial structure of a variety of wireless network types. In particular, we construct models of television and radio transmitter distributions that have applications in, for example, cognitive wireless networks. We use a stochastic approach based on fitting parametric location models to empirical data. Our results indicate that the so-called Geyer saturation model can accurately reproduce the spatial structure of a large variety of wireless network types, arising from both planned and chaotic deployments. The resulting models can be used in simulations or as the basis of analytical calculations of different network properties, and we believe that the presented methodology can serve as a solid foundation for the emerging network science of wireless communication networks.
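
For reference, the Geyer saturation model has a simple unnormalized density, log f(x) = n log(beta) + log(gamma) * sum_i min(s, t_i), where t_i counts the points within the interaction radius of point i; a direct sketch (actually fitting beta, gamma, r, s to transmitter data, as the paper does, requires a point-process library):

```python
import math

def geyer_log_density(points, beta, gamma, radius, saturation):
    """Unnormalized log-density of the Geyer saturation point process."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        # t_i: neighbours of point i within the interaction radius.
        t_i = sum(1 for j, (xj, yj) in enumerate(points)
                  if j != i and math.hypot(xi - xj, yi - yj) <= radius)
        total += min(saturation, t_i)  # saturation caps the interaction term
    return len(points) * math.log(beta) + math.log(gamma) * total
```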

Proceedings ArticleDOI
15 Mar 2010
TL;DR: It is found that although e2e routes are quite diverse, they are relatively stable, albeit with high variance between different vantage points and a strong dependency on the network type (academic vs. commercial); longitudinal analysis shows consistency of the diversity and stability.
Abstract: The diversity of end-to-end (e2e) Internet routes has been studied for over a decade, dating back to Paxson's seminal work from 1995. This paper presents a measurement study of this issue and systematically evaluates the diversity of Internet routes, while revisiting some of the conclusions previously made. Two large-scale experiments are used for evaluation, one executed in late 2006 and the second in early 2009, both employing a set of more than 100 broadly distributed vantage points actively measuring between each other. We find that although e2e routes are quite diverse, they are relatively stable, albeit with high variance between different vantage points and a strong dependency on the network type (academic vs. commercial). We show that while routes are mostly asymmetric, at the country level, which serves as a good indication for end-to-end propagation delays, the routes are highly similar. Finally, longitudinal analysis shows consistency of the diversity and stability, indicating trade-offs between the Internet's growth and changing trends in its connectivity.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: A novel structural approach to automatically generate large-scale PoP level maps using traceroute measurements from multiple locations, assigning PoP locations using combined information from several geo-location databases.
Abstract: Inferring PoP level maps is gaining interest due to its importance to many areas, e.g., for tracking the Internet's evolution and studying its properties. In this paper we introduce a novel structural approach to automatically generate large-scale PoP level maps using traceroute measurements from multiple locations. The PoPs are first identified based on their structure, and are then assigned a location using combined information from several geo-location databases. Using this approach, we can evaluate the accuracy of these databases and suggest means to improve it. The PoP-to-PoP edges, which are extracted from the traceroutes, present a fairly rich AS-to-AS connectivity map.
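
The location-assignment step can be illustrated by a simple majority vote across geo-location databases over a PoP's member IPs (our sketch; the paper combines and cross-validates the databases in a more careful way):

```python
from collections import Counter

def locate_pop(member_ips, databases):
    """databases: list of dicts mapping ip -> (lat, lon)."""
    votes = Counter(db[ip] for db in databases
                    for ip in member_ips if ip in db)
    return votes.most_common(1)[0][0] if votes else None
```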

Proceedings ArticleDOI
15 Mar 2010
TL;DR: Simulation results show that the proposed proactive spectrum handoff protocol outperforms the conventional sensing-based reactive spectrum handoff approach in terms of higher throughput and fewer collisions with licensed users.
Abstract: Cognitive radio (CR) is a promising solution to improve spectrum utilization by enabling unlicensed users to exploit the spectrum in an opportunistic manner. However, because unlicensed users are considered temporary visitors to the licensed spectrum, they are required to vacate the spectrum when a licensed user reuses the current spectrum. Due to the randomness of the reappearance of licensed users, disruptions to both licensed and unlicensed communications are difficult to prevent, which leads to high spectrum switching overhead. In this work, a proactive spectrum handoff framework in a CR ad hoc network scenario is proposed. Based on channel usage statistics, proactive spectrum handoff criteria and policies are devised. CR users proactively predict the future spectrum availability status and perform spectrum switching before a licensed user reuses the spectrum. In addition, a channel coordination scheme is investigated and incorporated into the spectrum handoff protocol design. To eliminate collisions among CR users, a novel distributed channel selection scheme for the multi-user scenario is proposed. Simulation results show that the proposed proactive spectrum handoff protocol outperforms the conventional sensing-based reactive spectrum handoff approach in terms of higher throughput and fewer collisions with licensed users. It is also shown that the proposed channel selection scheme outperforms the purely random channel selection scheme in terms of shorter average service time and higher packet delivery rate.
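
A minimal sketch of proactive handoff based on channel usage statistics, assuming memoryless (exponential) OFF periods so a channel's chance of staying idle over a look-ahead horizon has a closed form (the paper's criteria and policies are richer than this):

```python
import math

def stay_idle_prob(mean_off, horizon):
    # Exponential OFF periods are memoryless: elapsed idle time drops out.
    return math.exp(-horizon / mean_off)

def pick_backup_channel(channels, horizon):
    """channels: list of (name, mean_off); pre-select the safest backup."""
    return max(channels, key=lambda c: stay_idle_prob(c[1], horizon))[0]
```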

Proceedings ArticleDOI
15 Mar 2010
TL;DR: In this paper, the status of an m-trail can be monitored by multiple monitoring nodes along the m-trail by tapping the optical supervisory signal, rather than only by the destination node.
Abstract: The concept of monitoring trail (m-trail) has been proposed for achieving Fast and Unambiguous Link-failure Localization (FULL) in all-optical WDM (Wavelength Division Multiplexing) mesh networks. Previous studies on m-trails assumed the presence of alarm dissemination at each node, such that a remote routing entity can collect the flooded alarm bits and form the alarm code to localize the failed link. This obviously leads to additional delay and extra control complexity in the electronic-domain process. In this paper, we propose a novel m-trail-based framework for FULL that avoids any alarm flooding or electronic-domain mechanism, so that each individual monitoring node (MN) can localize a single link failure from locally available alarm bits. To save supervisory wavelength-links, the proposed framework lets the status of an m-trail be monitored by multiple MNs along the m-trail, which tap the optical supervisory signal, rather than only by the m-trail's destination node. An ILP (Integer Linear Program) is formulated and solved in a case study to verify it and show the effectiveness of the proposed framework. We demonstrate that status sharing among the MNs of a common m-trail can effectively suppress the growth of supervisory wavelength-links as the number of MNs increases.
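
The alarm-code principle behind m-trails can be sketched directly: each trail traverses a subset of links, a single link failure darkens exactly the trails crossing it, and unique code words localize the failure (our toy version; the paper's contribution is doing this with locally available bits at multiple MNs):

```python
def build_codebook(trails):
    """trails: dict trail_id -> set of links. Returns link -> alarm code."""
    all_links = set().union(*trails.values())
    return {link: frozenset(t for t, links in trails.items() if link in links)
            for link in all_links}

def localize(alarmed_trails, codebook):
    """Single-failure localization from the set of trails raising alarms."""
    code = frozenset(alarmed_trails)
    hits = [link for link, c in codebook.items() if c == code]
    return hits[0] if len(hits) == 1 else None  # None: code not unique
```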

Proceedings ArticleDOI
15 Mar 2010
TL;DR: Stochastic network calculus is used to analyze a cognitive radio network, where the influences of imperfect spectrum sensing and different retransmission schemes are considered, and numerical results are shown for different types of traffic.
Abstract: In this paper, we use stochastic network calculus to analyze a cognitive radio network, where the influences of imperfect spectrum sensing and different retransmission schemes are considered. In particular, stochastic arrival curves for the spectrum sensing error processes are derived first; based on these, the stochastic service curve for each class of users is obtained under different retransmission schemes, including no retransmission, retransmission until success, and maximum-N-times retransmission. Backlog and delay bounds for primary and secondary users are then derived. Finally, numerical results are shown for different types of traffic, where the influence of the different retransmission schemes is further discussed.
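
For orientation, here are the textbook deterministic network-calculus bounds that such a stochastic analysis generalizes (our illustrative summary, not the paper's notation): with arrival curve alpha and service curve beta,

```latex
% Deterministic forms (our illustrative summary); the paper's stochastic
% versions hold with a violation probability contributed by the sensing
% error and retransmission processes.
B \le \sup_{s \ge 0} \{ \alpha(s) - \beta(s) \},
\qquad
D \le \inf \{ d \ge 0 : \alpha(s) \le \beta(s + d) \ \forall s \ge 0 \}
```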

Proceedings ArticleDOI
15 Mar 2010
TL;DR: A novel scheduling algorithm for multi-hop wireless networks is developed, which optimizes packet delivery for multiple audio, video and data flows according to user perceivable quality metrics and shows that distortion-aware scheduling can significantly increase the perceived quality of multimedia streaming under bandwidth constraints.
Abstract: Distributing multimedia content over wireless networks is challenging due to the limited resource availability and the unpredictability of wireless links. As more and more users demand wireless access to (real-time) multimedia services, the impact of constrained resources is different for different media types. Therefore, understanding this impact and developing mechanisms to optimize content delivery under resource constraints according to user perception will be key in improving user satisfaction. In this paper, we develop a novel scheduling algorithm for multi-hop wireless networks, which optimizes packet delivery for multiple audio, video and data flows according to user perceivable quality metrics. We formulate a multidimensional optimization problem to minimize the overall distortion while satisfying resource constraints for the wireless links. Our Quality-of-Experience (QoE)-optimized scheduler makes use of models to determine the user's perception of quality that are specific to the type of service being provided. Our experimental results, obtained with the NS-2 IEEE 802.16 MESH-mode simulator, show that distortion-aware scheduling can significantly increase the perceived quality of multimedia streaming under bandwidth constraints. As the scheduler allows the modeling of fairness constraints among multiple competing flows, we also demonstrate an improvement in fairness across different flows.
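
The core trade-off can be sketched as a greedy knapsack-style scheduler that ranks packets by distortion reduction per unit of airtime (our stand-in; the paper solves a multidimensional optimization with fairness constraints):

```python
def schedule(packets, airtime_budget):
    """packets: list of (flow_id, distortion_gain, airtime_cost) tuples."""
    chosen = []
    # Greedy: most user-perceived quality gained per unit of airtime first.
    for p in sorted(packets, key=lambda p: p[1] / p[2], reverse=True):
        if p[2] <= airtime_budget:
            chosen.append(p)
            airtime_budget -= p[2]
    return chosen
```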

Proceedings ArticleDOI
15 Mar 2010
TL;DR: A multi-channel MAC scheme for Vehicular Ad Hoc Networks (VANETs), which dynamically adjusts the intervals of the Control Channel and Service Channels, is able to help the IEEE 1609.4 MAC improve the saturation throughput of SCHs significantly, while maintaining the prioritized transmission of critical safety information on the CCH.
Abstract: This paper proposes a multi-channel MAC scheme for Vehicular Ad Hoc Networks (VANETs), which dynamically adjusts the intervals of the Control Channel (CCH) and Service Channels (SCHs). Markov modeling is conducted to optimize the intervals based on the traffic condition. The scheme also introduces a multi-channel coordination mechanism to provide contention-free access in the SCHs. Theoretical analysis and simulation results show that the proposed scheme is able to help the IEEE 1609.4 MAC improve the saturation throughput of the SCHs significantly, while maintaining the prioritized transmission of critical safety information on the CCH.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: This paper proposes a distributed Prediction-based Cognitive Topology Control scheme to provision cognition capability to routing in CR-MANETs and constructs an efficient and reliable topology, which is aimed at mitigating re-routing frequency and improving end-to-end network performance such as throughput and delay.
Abstract: Cognitive radio (CR) technology will have significant impacts on upper-layer performance in mobile ad hoc networks (MANETs). In this paper, we study topology control and routing in CR-MANETs. We propose a distributed Prediction-based Cognitive Topology Control (PCTC) scheme to provision cognition capability to routing in CR-MANETs. PCTC is a middleware-like cross-layer module residing between the CR module and routing. The proposed PCTC scheme uses cognitive link availability prediction, which is aware of the interference to primary users, to predict the available duration of links in CR-MANETs. Based on the link prediction, PCTC constructs an efficient and reliable topology, aimed at mitigating re-routing frequency and improving end-to-end network performance such as throughput and delay. Simulation results are presented to show the effectiveness of the proposed scheme.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: It is shown that for the stationary target scenario the proposed mobility model can achieve a desired detection probability with a significantly lower number of mobile nodes, especially when the detection requirements are highly stringent.
Abstract: In this work, we study the target detection and tracking problem in mobile sensor networks, where the performance metrics of interest are the probability of detection and the tracking coverage, the target can be stationary or mobile, and its duration is finite. We propose a physical coverage-based mobility model, in which the mobile sensor nodes move such that the overlap between the areas covered by different mobile nodes is small. It is shown that for the stationary target scenario the proposed mobility model can achieve a desired detection probability with a significantly lower number of mobile nodes, especially when the detection requirements are highly stringent. Similarly, when the target is mobile, the coverage-based mobility model produces a consistently higher detection probability compared to the other models under investigation.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: In this article, the authors study the activity span of MySpace accounts and its connection to the distribution of the number of friends, showing that the inflection point resembles the double pareto geometric Brownian motion with exponential stopping times model.
Abstract: In this work we study the activity span of MySpace accounts and its connection to the distribution of the number of friends. The activity span is the time elapsed since the creation of the account until the user's last login time. We observe exponentially distributed activity spans. We also observe that the distribution of the number of friends over accounts with the same activity span is well approximated by a lognormal with a fairly light tail. These two findings shed light into the puzzling (yet unexplained) inflection point (knee) in the distribution of friends in MySpace when plotted in log-log scale. We argue that the inflection point resembles the inflection point of Reed's (Double Pareto) Geometric Brownian Motion with Exponential Stopping Times model. We also present evidence against the Dunbar number hypothesis of online social networks, which argues, without proof, that the inflection point is due to the Dunbar number (a theoretical limit on the number of people that a human brain can sustain active social contact with). While we answer many questions, we leave many others open.
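
Reed's mechanism is easy to simulate: geometric Brownian motion observed at an exponentially distributed stopping time yields a double-Pareto variable; a sketch with illustrative parameters (not fitted to the MySpace data):

```python
import math
import random

def reed_sample(mu=0.05, sigma=0.3, stop_rate=0.2, x0=10.0):
    t = random.expovariate(stop_rate)        # exponential "activity span"
    z = random.gauss(0.0, 1.0)
    # GBM observed at time t; over many samples the tails are double-Pareto.
    return x0 * math.exp((mu - sigma**2 / 2) * t + sigma * math.sqrt(t) * z)
```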

Proceedings ArticleDOI
15 Mar 2010
TL;DR: A generalized DCell framework is presented so that structures with different connecting rules can be constructed, and it is shown that these structures still preserve the desirable properties of the original DCell structure.
Abstract: DCell has been proposed as a server-centric network structure for data centers. DCell can support millions of servers with high network capacity and provide good fault tolerance using only commodity mini-switches. However, the traffic in DCell is imbalanced in that links at different layers carry very different numbers of flows. In this paper, we present a generalized DCell framework in which structures with different connecting rules can be constructed. We show that these structures still preserve the desirable properties of the original DCell structure. Furthermore, we show that the new structures are more symmetric and provide much better load-balancing when using shortest-path routing. We demonstrate the load-balancing property of the new structures through extensive simulations.
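
For concreteness, the original DCell connection rule (one instance of the "connecting rules" the paper generalizes) wires g sub-DCells of g - 1 servers each by linking copy i's server j - 1 to copy j's server i for every pair i < j; a short sketch:

```python
def dcell_links(num_subcells):
    """Level-k links among g sub-DCells, each holding g - 1 servers."""
    g = num_subcells
    links = []
    for i in range(g):
        for j in range(i + 1, g):
            # (subcell index, server index) endpoints; each server gets
            # exactly one level-k link.
            links.append(((i, j - 1), (j, i)))
    return links

# e.g. dcell_links(5): 5 sub-DCells of 4 servers each, 10 level-k links.
```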

Proceedings ArticleDOI
15 Mar 2010
TL;DR: A novel game-theoretic framework that uses the potentialities of the new IEEE 802.22 Standard to guarantee self-coexistence among Wireless Regional Area Networks, formulating the channel assignment problem as a multi-player non-cooperative repeated potential game.
Abstract: Although the proliferation of wireless applications operating in unlicensed spectrum bands has resulted in overcrowding, recent analysis has shown that licensed bands are still underutilized. Cognitive Radio is seen as the key enabling technology to address the spectrum shortage problem by opportunistically using the spectrum allocated to TV bands. In this paper, we present a novel game-theoretic framework that uses the potentialities of the new IEEE 802.22 Standard to guarantee self-coexistence among Wireless Regional Area Networks (WRANs). We address this as a channel assignment problem in which each WRAN acquires a chunk of interference-free spectrum in a dynamic and distributed way. Using a novel technique to compute backoff windows, we show that the channel assignment problem can be formulated as a multi-player non-cooperative repeated potential game that converges to a Nash equilibrium. We consider each WRAN a player of our game and use two different types of utility functions to maximize spatial reuse and minimize interference. An extensive simulation study shows that taking interference minimization as the objective is not necessarily the best solution with selfish players.
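
The convergence argument can be illustrated with best-response dynamics for a minimum-interference channel-assignment game, which is an exact potential game (potential = number of conflicting neighbor pairs) and hence reaches a Nash equilibrium; a sketch with an illustrative utility, not the paper's two IEEE 802.22 utility functions:

```python
def best_response_dynamics(neighbors, num_channels, max_rounds=100):
    """neighbors: dict player -> set of interfering players (symmetric)."""
    channel = {p: 0 for p in neighbors}
    for _ in range(max_rounds):
        changed = False
        for p in neighbors:
            def interference(c):
                return sum(channel[q] == c for q in neighbors[p])
            best = min(range(num_channels), key=interference)
            if interference(best) < interference(channel[p]):
                channel[p] = best  # strictly decreases the potential
                changed = True
        if not changed:
            break  # no player can improve: Nash equilibrium reached
    return channel
```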

Proceedings ArticleDOI
15 Mar 2010
TL;DR: This paper investigates the potential benefits of MultiCache, an overlay network architecture aiming at handing control back to network operators, and studies crucial aspects of the architecture, paying special attention to the properties of the distributed caching scheme.
Abstract: It has long been realized that the use of the Internet has moved away from its original end-host-centric model. The vast majority of services and applications are nowadays focused on the information itself rather than on the end-points providing or consuming it. However, the underlying network architecture still focuses on enabling communication between pairs of end-hosts, leading to a series of problems, such as the inefficient utilization of network resources demonstrated by the proliferation of peer-to-peer (P2P) and file-sharing applications. In essence, the prevailing end-to-end nature of the current Internet architecture prohibits network operators from controlling the traffic carried by their networks, leaving this control entirely to end users and their applications. In this paper, we investigate the potential benefits of MultiCache, an overlay network architecture aiming at handing control back to network operators. In MultiCache, proxy overlay routers enable the delivery of data either via direct multicast or via multicast-fed caches residing at the leaves of multicast delivery trees. We study crucial aspects of our architecture, paying special attention to the properties of our distributed caching scheme, and investigate the feasibility of a progressive deployment of the proposed functionality over the existing Internet.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: This paper presents Content-Oriented Network with Indexed Caching (CONIC), a deployable and self-scaling architecture that exploits spare storage and bandwidth from end-systems to eliminate redundant traffic and enable efficient and fast access to content.
Abstract: In this paper, we present Content-Oriented Network with Indexed Caching (CONIC), a deployable and self-scaling architecture that exploits spare storage and bandwidth from end-systems to eliminate redundant traffic and enable efficient and fast access to content. Our trace-driven simulation indicates that CONIC can reduce traffic volume by 25% to 50% and can cumulatively halve the latency of content access in real-world network environments. Our prototype implementation also verifies the deployability of the CONIC architecture.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: A novel solution, FemtoHaul, is proposed, which efficiently exploits the potential of femtocells to bear the macrocell backhaul traffic by using relays, enhancing the data rates of cellular subscribers.
Abstract: The ever-increasing user demand for highly data-intensive applications is motivating cellular operators to provide more data services. However, the operators are suffering from the heavy budgetary burden of upgrading their infrastructure. Most macrocell Base Stations still connect to backhauls with capacities of less than 8 Mbps, much too low to be able to serve all voice and data users in the cell. This so-called macrocell backhaul bandwidth shortage problem is encumbering the growth of cellular data services. In this paper, we propose a novel solution, FemtoHaul, which efficiently exploits the potential of femtocells to bear the macrocell backhaul traffic by using relays, enhancing the data rates of cellular subscribers. We design a system architecture and its related signaling and scheduling strategies. Extensive simulations demonstrate that FemtoHaul can effectively serve more users and support higher data demand with the existing macrocell backhaul capacity.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: In this article, the availability and capacity of FSO/RF ad hoc mesh networks are investigated, and closed-form expressions for system capacity are derived for different availability cases of channel state information.
Abstract: Even if the line-of-sight condition of Free Space Optics (FSO) is satisfied, atmospheric-induced fading, scattering, and attenuation may severely deteriorate the availability of the communication link. This argument holds except in reconfigurable FSO ad hoc networks, where a path reconfiguration scheme replaces a severed FSO link with an operational one. Reconfigurability, a property of our hybrid FSO/RF (Radio Frequency) work in progress, provides connection reliability and network throughput: traffic can be directed to a different FSO link or even to an RF link as backup. Hence, node failure and outage probabilities due to link failure are reduced, resulting in higher availability of nodes. The mathematical investigation and statistical consideration of the availability and capacity of FSO/RF ad hoc mesh networks is the focus of this paper. We assume normalized scintillation fading channels in our analysis and apply different availability cases of channel state information to derive closed-form expressions for system capacity.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: The forwarding architecture inherits the flexibility of MPLS and gives operators the opportunity to offer Multicast VPN services while avoiding the difficult process of fine-tuning the trade-off between bandwidth usage and state.
Abstract: The Multiprotocol Label Switching (MPLS) architecture has become a true success story in the world of telecommunications. However, MPLS becomes cumbersome if multicast communication is needed, as aggregating labels is not easy. Because of that, when providing Multicast VPNs, operators need to trade off bandwidth usage against the amount of multicast state, sacrificing efficiency. Forwarding with Bloom filters in the packet headers offers the opportunity to have quasi-stateless network elements: the amount of forwarding-plane state does not depend on the number of paths/trees the node participates in. In this paper, we propose Multiprotocol Stateless Switching (MPSS), the marriage of MPLS and Bloom filter based forwarding. The forwarding architecture inherits the flexibility of MPLS and gives operators the opportunity to offer Multicast VPN services while avoiding the difficult process of fine-tuning the trade-off between bandwidth usage and state.
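
The Bloom-filter forwarding primitive that MPSS builds on can be sketched in a few lines: every directed link gets a fixed bit pattern, the tree's patterns are OR-ed into the packet header, and a node forwards on each link whose pattern is contained in the header (our sketch; parameters like the filter width are illustrative):

```python
import hashlib

FILTER_BITS = 256

def link_pattern(link_id: str, k: int = 5) -> int:
    """k pseudo-random bit positions derived from the link identifier."""
    bits = 0
    for i in range(k):
        digest = hashlib.sha256(f"{link_id}:{i}".encode()).digest()
        bits |= 1 << (int.from_bytes(digest[:4], "big") % FILTER_BITS)
    return bits

def build_filter(tree_links):
    header = 0
    for link in tree_links:
        header |= link_pattern(link)  # quasi-stateless: state is in-packet
    return header

def forward_on(link_id: str, header: int) -> bool:
    p = link_pattern(link_id)
    return header & p == p  # containment test; false positives possible
```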

Proceedings ArticleDOI
15 Mar 2010
TL;DR: While cooperative routing can achieve considerable energy savings, it results in a sharp reduction in network throughput compared to non-cooperative routing, and some potential causes are identified and two solutions are proposed by exploring recent developments in multi-beam cooperative beamforming to increase parallelism in the network in order to improve throughput.
Abstract: This paper studies the energy and throughput performance of cooperative routing in wireless networks that support cooperative beamforming at the physical layer. Cooperative beamforming is a form of cooperative communication in which multiple nodes, each equipped with a single omnidirectional antenna, coordinate their transmissions in such a way that the individual signals combine constructively at the intended receiver, resulting in significant power gains compared to independent signal transmissions. It has recently been shown that significant energy savings can be achieved by jointly optimizing network-layer routing and physical-layer cooperation, i.e., cooperative routing. While the energy efficiency of cooperative routing has been extensively studied in the literature, its impact on network throughput is surprisingly overlooked. In this paper, we show that while cooperative routing can achieve considerable energy savings, it results in a sharp reduction in network throughput compared to non-cooperative routing. We then identify some potential causes of this problem and propose two solutions by exploring recent developments in multi-beam cooperative beamforming to increase parallelism in the network in order to improve throughput.

Proceedings ArticleDOI
15 Mar 2010
TL;DR: In this paper, a prediction error-aware dynamic spectrum access (DSA) scheme is proposed to improve the performance of DSA by leveraging spectrum usage prediction, after finding that the prediction error can be well approximated by a Beta distribution.
Abstract: To address the scarcity of wireless spectrum, cognitive radio (CR) has been proposed to let unlicensed wireless users (secondary users) dynamically sense and access unused channels without generating interference to licensed users (primary users). The performance of CR-based dynamic spectrum access (DSA) mechanisms can be dramatically improved if wireless spectrum usage is predictable, and much work has been conducted under the assumption that wireless spectrum usage can be predicted by Markov-model-based methods. To verify and study the predictability of real-world wireless spectrum usage, we examine the results of a large-scale measurement and find that wireless spectrum usage is non-stationary, meaning that its underlying probabilistic model varies over time; prediction errors are therefore unavoidable, and to improve the performance of DSA by leveraging spectrum usage prediction, these errors must be taken into account. In this paper, we study the prediction error and find that its distribution can be well approximated by a Beta distribution. We then design a prediction error-aware dynamic spectrum access scheme. Our results show that it outperforms prediction-based DSA schemes that are not aware of prediction errors.
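
A minimal sketch of error-aware channel selection: treat each channel's prediction error as Beta-distributed and rank channels by a Monte-Carlo error-adjusted idle estimate (the parameter names and the adjustment rule are our assumptions; the paper fits the Beta parameters to large-scale measurements):

```python
import random

def adjusted_idle_estimate(predicted_idle, err_a, err_b, samples=1000):
    """Average of the prediction discounted by Beta(err_a, err_b) errors."""
    total = sum(max(0.0, predicted_idle - random.betavariate(err_a, err_b))
                for _ in range(samples))
    return total / samples

def pick_channel(channels):
    """channels: list of (name, predicted_idle, err_a, err_b)."""
    return max(channels,
               key=lambda c: adjusted_idle_estimate(c[1], c[2], c[3]))[0]
```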