Author

Asad Awan

Bio: Asad Awan is an academic researcher from Purdue University. The author has contributed to research in topics: Wireless sensor networks & Scalability. The author has an h-index of 9 and has co-authored 15 publications receiving 1,018 citations.

Papers
Proceedings ArticleDOI
15 Aug 2011
TL;DR: This paper uses a unique dataset that spans different content types, including short video on demand, long VoD, and live content from popular video content providers, to measure quality metrics such as the join time, buffering ratio, average bitrate, rendering quality, and rate of buffering events.
Abstract: As the distribution of video over the Internet becomes mainstream and its consumption moves from the computer to the TV screen, user expectation for high quality is constantly increasing. In this context, it is crucial for content providers to understand if and how video quality affects user engagement and how to best invest their resources to optimize video quality. This paper is a first step towards addressing these questions. We use a unique dataset that spans different content types, including short video on demand (VoD), long VoD, and live content from popular video content providers. Using client-side instrumentation, we measure quality metrics such as the join time, buffering ratio, average bitrate, rendering quality, and rate of buffering events. We quantify user engagement both at a per-video (or view) level and a per-user (or viewer) level. In particular, we find that the percentage of time spent in buffering (buffering ratio) has the largest impact on user engagement across all types of content. However, the magnitude of this impact depends on the content type, with live content being the most impacted. For example, a 1% increase in buffering ratio can reduce user engagement by more than three minutes for a 90-minute live video event. We also see that the average bitrate plays a significantly more important role in the case of live content than VoD content.
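The headline sensitivity result admits a quick back-of-the-envelope check. Below is a minimal sketch assuming a simple linear engagement model in which each percentage point of buffering ratio costs a fixed number of viewing minutes; the 3 min/% slope for live content is the paper's reported lower bound, while the function and parameter names are illustrative:

```python
def engagement_loss_minutes(buffering_increase_pct: float,
                            slope_min_per_pct: float = 3.0) -> float:
    """Viewing time lost under an assumed linear engagement model.

    slope_min_per_pct is the sensitivity: minutes of engagement lost per
    1% increase in buffering ratio (the abstract reports more than three
    minutes per 1% for a 90-minute live event).
    """
    return buffering_increase_pct * slope_min_per_pct

# A 1% rise in buffering ratio during a 90-minute live event:
print(engagement_loss_minutes(1.0))  # 3.0 minutes, the paper's lower bound
```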

687 citations

Proceedings ArticleDOI
31 Aug 2005
TL;DR: The authors prove analytically, and demonstrate experimentally, that this scheme provides high probabilistic guarantees of success while incurring minimal overhead, and that it performs no worse than the best known access-frequency-based protocols.
Abstract: Search is a fundamental service in peer-to-peer (P2P) networks. However, despite numerous research efforts, efficient algorithms for guaranteed location of shared content in unstructured P2P networks are yet to be devised. In this paper, the authors present a simple but highly effective protocol for object location that gives probabilistic guarantees of finding even rare objects independently of the network topology. The protocol relies on randomized techniques for replication of objects (or their references) and for query propagation. The authors prove analytically, and demonstrate experimentally, that this scheme provides high probabilistic guarantees of success while incurring minimal overhead. The performance of the scheme is quantified in terms of network messages, probability of success, and response time. Its robustness is also evaluated in the presence of node failures (departures). Simulations show that the scheme performs no worse than the best known access-frequency-based protocols, without compromising access to rare objects.
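The success guarantee of such randomized replicate-and-probe schemes follows from a standard birthday-paradox argument; the analysis below is that textbook argument, not lifted from the paper, and the parameter choices are illustrative:

```python
import math

def success_probability(n: int, r: int, q: int) -> float:
    """Probability that a query probing q uniform-random nodes hits at
    least one of r uniform-random replicas in an n-node network:
    P(hit) = 1 - (1 - r/n)**q, which is about 1 - exp(-r*q/n).
    """
    return 1.0 - (1.0 - r / n) ** q

n = 100_000
k = int(math.sqrt(n))  # replicate on ~sqrt(n) nodes, probe ~sqrt(n) nodes
print(success_probability(n, r=k, q=k))          # ~0.63 (r*q ~ n)
print(success_probability(n, r=3 * k, q=3 * k))  # ~0.9999, still O(sqrt(n)) cost
```

Because the bound depends only on r*q relative to n, it holds regardless of the network topology, which is the property the abstract highlights.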

82 citations

Proceedings ArticleDOI
04 Jan 2006
TL;DR: This paper prescribes necessary conditions for uniform sampling in such networks, presents distributed algorithms that satisfy these requirements, and empirically evaluates their performance against known algorithms.
Abstract: Uniform sampling in networks is at the core of a wide variety of randomized algorithms. Random sampling can be performed by modeling the system as an undirected graph with associated transition probabilities and defining a corresponding Markov chain (MC). A random walk of prescribed minimum length, performed on this graph, yields a stationary distribution, and the corresponding random sample. This sample, however, is not uniform when network nodes have a non-uniform degree distribution. This poses a significant practical challenge since typical large-scale real-world unstructured networks tend to have non-uniform degree distributions, e.g., power-law degree distribution in unstructured peer-to-peer networks. In this paper, we present a distributed algorithm that enables efficient uniform sampling in large unstructured non-uniform networks. Specifically, we prescribe necessary conditions for uniform sampling in such networks and present distributed algorithms that satisfy these requirements. We empirically evaluate the performance of our algorithm in comparison to known algorithms. The performance parameters include computational complexity, length of random walk, and uniformity of the sampling. Simulation results support our claims of performance improvements due to our algorithm.
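One standard way to meet the uniformity requirement is a Metropolis-Hastings correction of the random walk, which biases transitions by the ratio of node degrees so that the walk's stationary distribution becomes uniform. The sketch below shows that correction; it is a common construction for this problem, not necessarily the paper's exact algorithm, and the graph and walk length are illustrative:

```python
import random

def mh_uniform_walk(adj, start, steps):
    """Random walk whose stationary distribution is uniform over nodes.

    From node u, propose a uniform-random neighbor v and accept the move
    with probability min(1, deg(u)/deg(v)), otherwise stay at u. The
    acceptance step removes the degree bias of a plain random walk.
    """
    u = start
    for _ in range(steps):
        v = random.choice(adj[u])
        if random.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v
    return u

# Star graph: a plain walk visits the hub half the time; the corrected
# walk samples all five nodes roughly uniformly.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
samples = [mh_uniform_walk(adj, start=0, steps=200) for _ in range(5000)]
print({node: samples.count(node) for node in adj})  # ~1000 each
```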

55 citations

Proceedings ArticleDOI
21 Mar 2007
TL;DR: An important and novel aspect of COSMOS is the ability to easily extend its component basis library to add rich macroprogramming abstractions to mPL, tailored to domain and resource constraints, without modifications to the OS.
Abstract: In this paper, we present COSMOS, a novel architecture for macroprogramming heterogeneous sensor network systems. Macroprogramming specifies aggregate system behavior, as opposed to device-specific programs that code distributed behavior using explicit messaging. COSMOS comprises a macroprogramming language, mPL, and an operating system, mOS. mPL macroprograms are statically verifiable compositions of reusable, user-specified or system-supported functional components. The mOS node/network operating system provides component management and a lean execution environment for mPL programs in heterogeneous resource-constrained sensor networks. It provides runtime application instantiation, with over-the-air reprogramming of the network. COSMOS facilitates composition of complex real-world applications that are robust, scalable, and adaptive in dynamic data-driven sensor network environments. An important and novel aspect of COSMOS is the ability to easily extend its component basis library to add rich macroprogramming abstractions to mPL, tailored to domain and resource constraints, without modifications to the OS. Applications built on COSMOS are currently in use at the Bowen Labs for Structural Engineering at Purdue University for high-fidelity structural monitoring. We present a detailed description of the COSMOS architecture, its various components, and a comprehensive experimental evaluation using macro- and micro-benchmarks to demonstrate the performance characteristics of COSMOS.
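To make the macroprogramming idea concrete: a macroprogram wires reusable components into one aggregate dataflow, and the runtime rather than the programmer maps stages onto heterogeneous nodes. The Python sketch below only illustrates that composition pattern; none of the component names are actual mPL or mOS APIs:

```python
# Hypothetical component basis in the spirit of macroprogramming;
# every name here is illustrative, not part of mPL/mOS.
def threshold_filter(cutoff):
    """Component: drop samples whose magnitude is below cutoff."""
    return lambda xs: (x for x in xs if abs(x) >= cutoff)

def moving_average(window):
    """Component: smooth the stream with a sliding-window mean."""
    def run(xs):
        buf = []
        for x in xs:
            buf.append(x)
            if len(buf) == window:
                yield sum(buf) / window
                buf.pop(0)
    return run

def compose(*stages):
    """Wire components into one aggregate dataflow: the macroprogram
    states system-wide behavior instead of per-node message passing."""
    def pipeline(xs):
        for stage in stages:
            xs = stage(xs)
        return xs
    return pipeline

# "Macroprogram": raw vibration samples -> filter -> smooth.
program = compose(threshold_filter(0.1), moving_average(3))
print(list(program([0.05, 0.3, -0.4, 0.2, 0.5, 0.01, 0.25])))
```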

42 citations


Cited by
Proceedings ArticleDOI
24 Oct 2007
TL;DR: This paper examines data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. It reports that the indegree of user nodes tends to match the outdegree, that the networks contain a densely connected core of high-degree nodes, and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network.
Abstract: Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems.
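The structural observations reduce to simple computations over the crawled link graph. A toy sketch under assumed definitions, with link symmetry taken as the fraction of edges whose reverse also exists; the edge list is made up, standing in for the study's 328 million crawled links:

```python
from collections import defaultdict

# Toy directed "follows" edges standing in for the crawled graphs.
edges = [(1, 2), (2, 1), (2, 3), (3, 2), (1, 3), (3, 1), (4, 1)]
edge_set = set(edges)

indeg, outdeg = defaultdict(int), defaultdict(int)
for src, dst in edges:
    outdeg[src] += 1
    indeg[dst] += 1

# Indegree tracks outdegree when many links are reciprocal:
symmetry = sum((dst, src) in edge_set for src, dst in edges) / len(edges)
print("link symmetry:", round(symmetry, 2))  # 0.86 on this toy graph
for user in sorted(set(indeg) | set(outdeg)):
    print("user", user, "in:", indeg[user], "out:", outdeg[user])
```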

3,266 citations

Proceedings ArticleDOI
07 Aug 2017
TL;DR: Pensieve, a system that generates ABR algorithms using reinforcement learning (RL), is proposed; it outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%.
Abstract: Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). Despite the abundance of recently proposed schemes, state-of-the-art ABR algorithms suffer from a key limitation: they use fixed control rules based on simplified or inaccurate models of the deployment environment. As a result, existing schemes inevitably fail to achieve optimal performance across a broad set of network conditions and QoE objectives. We propose Pensieve, a system that generates ABR algorithms using reinforcement learning (RL). Pensieve trains a neural network model that selects bitrates for future video chunks based on observations collected by client video players. Pensieve does not rely on pre-programmed models or assumptions about the environment. Instead, it learns to make ABR decisions solely through observations of the resulting performance of past decisions. As a result, Pensieve automatically learns ABR algorithms that adapt to a wide range of environments and QoE metrics. We compare Pensieve to state-of-the-art ABR algorithms using trace-driven and real-world experiments spanning a wide variety of network conditions, QoE metrics, and video properties. In all considered scenarios, Pensieve outperforms the best state-of-the-art scheme, with improvements in average QoE of 12%--25%. Pensieve also generalizes well, outperforming existing schemes even on networks for which it was not explicitly trained.
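The decision loop Pensieve learns can be sketched compactly: observations already available at the client are fed to a policy that outputs a probability distribution over the bitrate ladder. The sketch below uses a toy linear scorer with a softmax; Pensieve itself trains a neural network with actor-critic RL, and the state features, weights, and ladder here are illustrative placeholders:

```python
import math
import random

BITRATES_KBPS = [300, 750, 1200, 1850, 2850, 4300]  # illustrative ladder

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def choose_bitrate(state, weights):
    """Score each bitrate from client-side observations and sample one.

    state: e.g. [last throughput (kbps), buffer level (s), last quality
    index]. A learned neural policy replaces these hand-set weights.
    """
    scores = [sum(w * s for w, s in zip(weights[i], state))
              for i in range(len(BITRATES_KBPS))]
    return random.choices(range(len(BITRATES_KBPS)),
                          weights=softmax(scores))[0]

state = [2400.0, 12.0, 2]  # throughput, buffer, last quality (made up)
weights = [[0.001, 0.1, 0.5]] * len(BITRATES_KBPS)  # untrained placeholder
print(BITRATES_KBPS[choose_bitrate(state, weights)])
```

With identical untrained weights the choice is uniform; training adjusts them by policy gradient so that actions maximizing the observed QoE reward become more probable.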

946 citations

Proceedings ArticleDOI
17 Aug 2014
TL;DR: This work suggests an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. This approach allows the authors to reduce the rebuffer rate by 10-20% compared to Netflix's then-default ABR algorithm, while delivering a similar average video rate.
Abstract: Existing ABR algorithms face a significant challenge in estimating future capacity: capacity can vary widely over time, a phenomenon commonly observed in commercial services. In this work, we suggest an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. We test the viability of this approach through a series of experiments spanning millions of real users in a commercial service. We start with a simple design that directly chooses the video rate based on the current buffer occupancy. Our investigation reveals that capacity estimation is unnecessary in steady state; however, using simple capacity estimation (based on immediate past throughput) is important during the startup phase, when the buffer itself is growing from empty. This approach allows us to reduce the rebuffer rate by 10-20% compared to Netflix's then-default ABR algorithm, while delivering a similar average video rate, and a higher video rate in steady state.
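The steady-state rule admits a compact sketch: map buffer occupancy onto the rate ladder between a "reservoir" and a "cushion", ignoring throughput entirely. The thresholds and ladder below are illustrative, and the deployed algorithm adds the capacity-informed startup phase the abstract describes:

```python
BITRATES_KBPS = [300, 750, 1200, 1850, 2850, 4300]  # illustrative ladder

def buffer_based_rate(buffer_s, reservoir_s=10.0, cushion_s=50.0):
    """Steady-state, buffer-only rate selection.

    Below the reservoir, stay at the minimum rate to avoid rebuffering;
    above reservoir + cushion, the maximum rate is safe; in between,
    interpolate linearly across the ladder. No throughput estimate used.
    """
    if buffer_s <= reservoir_s:
        return BITRATES_KBPS[0]
    if buffer_s >= reservoir_s + cushion_s:
        return BITRATES_KBPS[-1]
    frac = (buffer_s - reservoir_s) / cushion_s
    return BITRATES_KBPS[int(frac * (len(BITRATES_KBPS) - 1))]

for b in (5, 15, 35, 55, 70):
    print(b, "s ->", buffer_based_rate(b), "kbps")
```

Because the buffer drains only while chunks download slower than real time, occupancy already encodes the capacity signal in steady state, which is why the explicit estimator can be dropped there.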

931 citations

Proceedings ArticleDOI
17 Aug 2015
TL;DR: A principled control-theoretic model of bitrate adaptation is developed, and a novel model predictive control algorithm is proposed that optimally combines throughput and buffer occupancy information to outperform traditional approaches in client-side players.
Abstract: User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-the-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) how best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. buffer occupancy); (2) how well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) how they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations.
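The MPC idea can be sketched as receding-horizon search: enumerate bitrate plans for the next few chunks, simulate buffer evolution under a throughput prediction, and commit only to the first step of the best plan. The QoE weights, horizon, and chunk parameters below are assumptions for illustration, not the paper's tuned values:

```python
import itertools

BITRATES_KBPS = [300, 750, 1200, 2850]  # illustrative ladder
CHUNK_S = 4.0                           # seconds of video per chunk

def qoe(rate, prev_rate, rebuffer_s, lam=1000.0, sigma=1.0):
    # quality - rebuffering penalty - smoothness penalty (assumed weights)
    return rate - lam * rebuffer_s - sigma * abs(rate - prev_rate)

def mpc_choose(buffer_s, prev_rate, predicted_kbps, horizon=3):
    """Exhaustive model predictive control over a short horizon."""
    best_first, best_score = BITRATES_KBPS[0], float("-inf")
    for plan in itertools.product(BITRATES_KBPS, repeat=horizon):
        buf, last, score = buffer_s, prev_rate, 0.0
        for rate in plan:
            download_s = rate * CHUNK_S / predicted_kbps  # fetch time
            rebuf = max(0.0, download_s - buf)            # stall if buffer empties
            buf = max(0.0, buf - download_s) + CHUNK_S    # drain, then refill
            score += qoe(rate, last, rebuf)
            last = rate
        if score > best_score:
            best_first, best_score = plan[0], score
    return best_first

print(mpc_choose(buffer_s=8.0, prev_rate=750, predicted_kbps=1500))
```

Replanning at every chunk lets prediction errors be corrected one step later, which is what gives MPC its robustness over pure rate-based or pure buffer-based rules.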

851 citations

Proceedings ArticleDOI
10 Dec 2012
TL;DR: A principled understanding of bitrate adaptation is presented, and a suite of techniques that can systematically guide the tradeoffs between stability, fairness, and efficiency is developed, leading to a general framework for robust video adaptation.
Abstract: Many commercial video players rely on bitrate adaptation logic to adapt the bitrate in response to changing network conditions. Past measurement studies have identified issues with today's commercial players with respect to three key metrics---efficiency, fairness, and stability---when multiple bitrate-adaptive players share a bottleneck link. Unfortunately, our current understanding of why these effects occur and how they can be mitigated is quite limited. In this paper, we present a principled understanding of bitrate adaptation and analyze several commercial players through the lens of an abstract player model. Through this framework, we identify the root causes of several undesirable interactions that arise as a consequence of overlaying the video bitrate adaptation over HTTP. Building on these insights, we develop a suite of techniques that can systematically guide the tradeoffs between stability, fairness, and efficiency and thus lead to a general framework for robust video adaptation. We pick one concrete instance from this design space and show that it significantly outperforms today's commercial players on all three key metrics across a range of experimental scenarios.
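The three metrics can be made concrete with simple definitions; those below (Jain's index for fairness, utilized share of bottleneck capacity for efficiency, switch frequency for instability) are common choices and may differ from the paper's exact formulations:

```python
def jain_fairness(rates):
    """Jain's fairness index: 1.0 when all players receive equal bitrate."""
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

def efficiency(rates, capacity_kbps):
    """Fraction of the bottleneck capacity the players collectively use."""
    return min(1.0, sum(rates) / capacity_kbps)

def instability(rate_series):
    """Fraction of chunk boundaries at which the player switched bitrate."""
    switches = sum(a != b for a, b in zip(rate_series, rate_series[1:]))
    return switches / max(1, len(rate_series) - 1)

# Three players sharing an assumed 3000 kbps bottleneck:
print(jain_fairness([1200, 750, 750]))      # ~0.95
print(efficiency([1200, 750, 750], 3000))   # 0.9
print(instability([750, 1200, 1200, 750]))  # ~0.67: two switches in three steps
```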

806 citations