Author

Feng Qian

Other affiliations: Indiana University, Association for Computing Machinery, AT&T
Bio: Feng Qian is an academic researcher from the University of Minnesota. He has contributed to research in topics including mobile devices and computer science. He has an h-index of 33 and has co-authored 154 publications receiving 5,042 citations. Previous affiliations of Feng Qian include Indiana University and the Association for Computing Machinery.


Papers
Proceedings ArticleDOI
25 Jun 2012
TL;DR: This paper develops the first empirically derived comprehensive power model of a commercial LTE network, with less than 6% error rate and state transitions matching the specifications, and finds that for web-based applications the performance bottleneck lies less in the network than it did in the authors' previous 3G study.
Abstract: With the recent advent of 4G LTE networks, there has been increasing interest in better understanding their performance and power characteristics compared with 3G/WiFi networks. In this paper, we take one of the first steps in this direction. Using 4GTest, a publicly deployed tool we designed for Android that attracted more than 3,000 users within 2 months, together with extensive local experiments, we study the network performance of LTE and compare it with other types of mobile networks. We observe that LTE generally has significantly higher downlink and uplink throughput than 3G and even WiFi, with median values of 13 Mbps and 6 Mbps, respectively. We develop the first empirically derived comprehensive power model of a commercial LTE network, with less than 6% error rate and state transitions matching the specifications. Using a comprehensive data set consisting of 5-month traces of 20 smartphone users, we carefully investigate the energy usage in 3G, LTE, and WiFi networks and evaluate the impact of configuring LTE-related parameters. Despite several new power-saving improvements, we find that, based on the user traces, LTE is as much as 23 times less power efficient than WiFi, and even less power efficient than 3G; the long high-power tail is a key contributor. In addition, we perform case studies of several popular Android applications on LTE and identify that the performance bottleneck for web-based applications lies less in the network than in our previous 3G study [24]. Instead, the device's processing power, despite significant improvement compared to our analysis two years ago, becomes more of a bottleneck.

1,029 citations
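To make the paper's central finding concrete, the following is a minimal sketch of a tail-based radio power model of the kind derived above. The state structure (a high-power tail after the last packet, then idle) follows the paper's description, but every numeric value is an illustrative placeholder, not a parameter measured in the paper.

```python
# Sketch of a tail-based LTE radio power model. All constants are assumed
# placeholders, NOT the measured values reported in the paper.

TAIL_S = 11.0     # assumed duration of the high-power tail after activity
P_TAIL_W = 1.0    # assumed radio power during the tail (watts)
P_IDLE_W = 0.02   # assumed baseline power once demoted to idle

def radio_energy_j(packet_times_s):
    """Estimate radio energy (J) for a sorted list of packet timestamps (s).

    Packets are treated as instantaneous; each one restarts the tail timer.
    """
    energy = 0.0
    for prev, cur in zip(packet_times_s, packet_times_s[1:]):
        gap = cur - prev
        energy += P_TAIL_W * min(gap, TAIL_S)          # time spent in the tail
        energy += P_IDLE_W * max(gap - TAIL_S, 0.0)    # remainder spent idle
    return energy + P_TAIL_W * TAIL_S                  # tail after last packet

# Two short bursts 60 s apart pay for two full 11 s tails: traffic scattered
# in time is what makes the radio look so power-hungry in user traces.
print(f"{radio_energy_j([0.0, 0.2, 0.5, 60.0, 60.3]):.1f} J")
```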

Proceedings ArticleDOI
27 Aug 2013
TL;DR: LTE is observed to have significantly shorter state promotion delays and lower RTTs than 3G networks, and various inefficiencies in TCP over LTE are discovered, such as undesired slow start and a limited TCP receive window.
Abstract: With lower latency and higher bandwidth than its predecessor 3G networks, the latest cellular technology, 4G LTE, has been attracting many new users. However, the interactions among applications, the network transport protocol, and the radio layer remain unexplored. In this work, we conduct an in-depth study of these interactions and their impact on performance, using a combination of active and passive measurements. We observed that LTE has significantly shorter state promotion delays and lower RTTs than 3G networks. We discovered various inefficiencies in TCP over LTE, such as undesired slow start. We further developed a novel and lightweight passive bandwidth estimation technique for LTE networks. Using this tool, we discovered that many TCP connections significantly under-utilize the available bandwidth: on average, the bandwidth actually used is less than 50% of what is available, causing data downloads to take longer and incur additional energy overhead. We found that this under-utilization can be caused by both application behavior and TCP parameter settings: 52.6% of all downlink TCP flows are throttled by a limited TCP receive window, and the data transfer patterns of some popular applications are both energy- and network-unfriendly. All these findings highlight the need to develop transport protocol mechanisms and applications that are more LTE-friendly.

392 citations
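As a rough illustration of the two findings above, under-utilization and receive-window throttling, here is a sketch of per-flow checks one could run over passively collected flow records. The flow record fields, whatever estimator feeds avail_bw_bps, and the 90% threshold are assumptions for illustration, not the paper's actual estimation technique.

```python
# Two per-flow checks motivated by the paper above: (i) bandwidth
# under-utilization and (ii) receive-window throttling. Field names and the
# 0.9 threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Flow:
    bytes_acked: int        # total payload bytes delivered
    duration_s: float       # flow transfer duration
    avail_bw_bps: float     # available bandwidth (from some estimator)
    max_inflight: int       # peak unacknowledged bytes observed
    min_rwnd: int           # smallest advertised receive window (bytes)

def utilization(f: Flow) -> float:
    achieved_bps = f.bytes_acked * 8 / f.duration_s
    return achieved_bps / f.avail_bw_bps

def rwnd_limited(f: Flow, thresh: float = 0.9) -> bool:
    # If bytes in flight repeatedly push against the advertised window,
    # the receive window (not congestion) is likely the bottleneck.
    return f.max_inflight >= thresh * f.min_rwnd

f = Flow(bytes_acked=2_000_000, duration_s=4.0,
         avail_bw_bps=10_000_000, max_inflight=118_000, min_rwnd=131_072)
print(f"utilization: {utilization(f):.0%}, rwnd-limited: {rwnd_limited(f)}")
```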

Proceedings ArticleDOI
03 Oct 2016
TL;DR: This paper proposes a cellular-friendly streaming scheme that delivers only the visible portion of a 360 video based on head movement prediction, reducing bandwidth consumption by up to 80% in a trace-driven simulation.
Abstract: As an important component of virtual reality (VR) technology, 360-degree videos provide users with a panoramic view and allow them to freely control their viewing direction during playback. A player usually displays only the visible portion of a 360 video, so fetching the entire raw video frame wastes bandwidth. In this paper, we consider the problem of optimizing 360 video delivery over cellular networks. We first conduct a measurement study on commercial 360 video platforms. We then propose a cellular-friendly streaming scheme that delivers only the visible portion of a 360 video, based on head movement prediction. Using viewing data collected from real users, we demonstrate the feasibility of our approach, which reduces bandwidth consumption by up to 80% in a trace-driven simulation.

391 citations
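The following sketch illustrates the viewport-adaptive idea described above: extrapolate head orientation a short horizon ahead, then fetch only the tile columns overlapping the predicted viewport. The linear predictor, the 8-column tiling, and the 100-degree field of view are illustrative assumptions, not the paper's exact scheme.

```python
# Viewport prediction plus tile selection, sketched under assumed parameters.

def predict_yaw(samples, horizon_s):
    """Linear extrapolation of yaw (degrees) from (time_s, yaw_deg) samples."""
    (t0, y0), (t1, y1) = samples[-2], samples[-1]
    rate = (y1 - y0) / (t1 - t0)           # degrees per second
    return (y1 + rate * horizon_s) % 360

def visible_tiles(yaw_deg, fov_deg=100, cols=8):
    """Tile columns of an equirectangular frame overlapping the viewport."""
    width = 360 / cols
    lo = (yaw_deg - fov_deg / 2) % 360     # left edge of the viewport
    hit = []
    for c in range(cols):
        tile_start_in_view = (c * width - lo) % 360 < fov_deg
        view_start_in_tile = (lo - c * width) % 360 < width
        if tile_start_in_view or view_start_in_tile:
            hit.append(c)
    return hit

head_trace = [(0.0, 80.0), (0.5, 95.0)]       # yaw drifting right
yaw = predict_yaw(head_trace, horizon_s=1.0)  # predict 1 s ahead -> 125 deg
print(yaw, visible_tiles(yaw))                # fetch only these tile columns
```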

Proceedings ArticleDOI
28 Jun 2011
TL;DR: ARO, the mobile Application Resource Optimizer, is the first tool that efficiently and accurately exposes the cross-layer interaction among the radio resource channel state, the transport layer, the application layer, and the user interaction layer, enabling the discovery of inefficient resource usage in smartphone applications.
Abstract: Despite the popularity of mobile applications, their performance and energy bottlenecks remain hidden due to a lack of visibility into the resource-constrained mobile execution environment and its potentially complex interaction with application behavior. We design and implement ARO, the mobile Application Resource Optimizer, the first tool that efficiently and accurately exposes the cross-layer interaction among various layers, including the radio resource channel state, transport layer, application layer, and user interaction layer, to enable the discovery of inefficient resource usage for smartphone applications. To realize this, ARO provides three key novel analyses: (i) accurate inference of lower-layer radio resource control states, (ii) quantification of the resource impact of application traffic patterns, and (iii) detection of energy and radio resource bottlenecks by jointly analyzing cross-layer information. We have implemented ARO and demonstrated its benefits on several essential categories of popular Android applications, detecting radio resource and energy inefficiencies such as the unacceptably high (46%) energy overhead of periodic audience measurements and inefficient content prefetching behavior.

310 citations
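One analysis an ARO-style tool performs is grouping packets into bursts and flagging suspiciously periodic background transfers, like the audience measurements called out above. A minimal sketch follows; the burst gap threshold and the periodicity test are illustrative assumptions, not ARO's actual logic.

```python
# Burst grouping and periodicity detection over a packet timestamp trace.

import statistics

def bursts(packet_times, gap_s=1.0):
    """Group sorted packet timestamps into bursts split by idle gaps."""
    groups, cur = [], [packet_times[0]]
    for t in packet_times[1:]:
        if t - cur[-1] > gap_s:
            groups.append(cur)
            cur = []
        cur.append(t)
    groups.append(cur)
    return groups

def looks_periodic(burst_starts, rel_tol=0.1):
    """True if inter-burst intervals are nearly constant (periodic traffic)."""
    gaps = [b - a for a, b in zip(burst_starts, burst_starts[1:])]
    if len(gaps) < 2:
        return False
    return statistics.pstdev(gaps) < rel_tol * statistics.mean(gaps)

trace = [0.0, 0.1, 30.0, 30.2, 60.1, 60.3, 90.0]   # ~30 s heartbeat traffic
starts = [b[0] for b in bursts(trace)]
print(looks_periodic(starts))  # True: a candidate energy-inefficient pattern
```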

Proceedings ArticleDOI
01 Nov 2010
TL;DR: This work is the first to accurately infer, for any UMTS network, the state machine that guides the radio resource allocation policy through a lightweight probing scheme, and it explores the optimal state machine settings in terms of several critical timer values evaluated using real network traces.
Abstract: 3G cellular data networks have recently witnessed explosive growth. In this work, we focus on UMTS, one of the most popular 3G mobile communication technologies. Our work is the first to accurately infer, for any UMTS network, the state machine (both transitions and timer values) that guides the radio resource allocation policy, through a lightweight probing scheme. We systematically characterize the impact of operational state machine settings by analyzing traces collected from a commercial UMTS network, and pinpoint the inefficiencies caused by the interplay between smartphone applications and the state machine behavior. Beyond basic characterizations, we explore the optimal state machine settings in terms of several critical timer values, evaluated using real network traces. Our findings suggest that the fundamental limitation of the current state machine design is its static nature of treating all traffic according to the same inactivity timers, making it difficult to balance tradeoffs among radio resource usage efficiency, network management overhead, device radio energy consumption, and performance. To the best of our knowledge, our work is the first empirical study that employs real cellular traces to investigate the optimality of UMTS state machine configurations. Our analysis also demonstrates that traffic patterns have a significant impact on radio resource and energy consumption. In particular, we propose a simple improvement that reduces YouTube streaming energy by 80% by leveraging fast dormancy, an existing feature supported by the 3GPP specifications.

299 citations
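The lightweight probing idea can be sketched as follows: after promoting the radio with a packet, stay idle for an increasing gap before probing again; a jump in probe RTT reveals that a demotion occurred during the gap, which brackets the inactivity timer. The probe callable and all thresholds below are hypothetical stand-ins for real network probing, not the paper's exact procedure.

```python
# Bracketing an inactivity timer via idle-then-probe measurements.

import time

def infer_inactivity_timer(probe,
                           gaps_s=(1, 2, 3, 4, 5, 6, 8, 10),
                           promo_threshold_s=1.5):
    """Return (lo_s, hi_s) bracketing the high-power-state inactivity timer.

    `probe` sends a packet and returns its RTT in seconds (hypothetical
    helper; a real one would time a UDP echo exchange).
    """
    last_fast = 0.0
    for gap in gaps_s:
        probe()                        # traffic: (re)promote the radio
        time.sleep(gap)                # stay idle; the timer may fire meanwhile
        if probe() > promo_threshold_s:
            return last_fast, gap      # demotion happened in (last_fast, gap]
        last_fast = gap
    return last_fast, float("inf")

def fake_probe(timer_s=4.5, promo_rtt=2.0, fast_rtt=0.1):
    """Stub for a network whose inactivity timer is `timer_s`: replies are
    slow only after an idle gap long enough to have triggered a demotion."""
    state = {"last": float("-inf")}
    def probe():
        now = time.monotonic()
        idle, state["last"] = now - state["last"], now
        return promo_rtt if idle > timer_s else fast_rtt
    return probe

print(infer_inactivity_timer(fake_probe()))   # -> (4, 5): timer is ~4.5 s
```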


Cited by

Journal ArticleDOI
TL;DR: Propagation parameters and channel models for understanding mmWave propagation, such as line-of-sight (LOS) probabilities, large-scale path loss, and building penetration loss, as modeled by various standardization bodies, are compared over the 0.5–100 GHz range.
Abstract: This paper provides an overview of the features of fifth generation (5G) wireless communication systems now being developed for use in the millimeter wave (mmWave) frequency bands. Early results and key concepts of 5G networks are presented, and the channel modeling efforts of many international groups for both licensed and unlicensed applications are described here. Propagation parameters and channel models for understanding mmWave propagation, such as line-of-sight (LOS) probabilities, large-scale path loss, and building penetration loss, as modeled by various standardization bodies, are compared over the 0.5–100 GHz range.

943 citations
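One of the large-scale path loss models such surveys compare is the close-in (CI) free-space reference distance model, PL(f, d) = FSPL(f, 1 m) + 10 n log10(d / 1 m), where FSPL(f, 1 m) = 32.4 + 20 log10(f_GHz) dB and n is the path loss exponent. A worked example follows; the exponent n = 3.0 is an illustrative urban NLOS value, not a figure taken from this paper.

```python
# CI free-space reference distance path loss model.

import math

def ci_path_loss_db(freq_ghz: float, dist_m: float, n: float = 3.0) -> float:
    """CI model path loss in dB for distances >= 1 m."""
    fspl_1m = 32.4 + 20 * math.log10(freq_ghz)   # free-space loss at 1 m
    return fspl_1m + 10 * n * math.log10(dist_m)

# 28 GHz at 100 m with n = 3.0: 32.4 + 28.9 + 60.0 ≈ 121.3 dB
print(f"{ci_path_loss_db(28.0, 100.0):.1f} dB")
```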

Journal ArticleDOI
TL;DR: The paper presents a brief overview of smart cities, followed by their features and characteristics, generic architecture, composition, and real-world implementations, and discusses challenges and opportunities identified through an extensive literature survey on smart cities.

925 citations

Proceedings ArticleDOI
04 Apr 2017
TL;DR: Neurosurgeon, a lightweight scheduler that automatically partitions DNN computation between mobile devices and datacenters at the granularity of neural network layers, is designed based on the finding that a fine-grained, layer-level computation partitioning strategy exploiting the data and computation variations of each layer within a DNN has significant latency and energy advantages over the status quo approach.
Abstract: The computation for today's intelligent personal assistants, such as Apple Siri, Google Now, and Microsoft Cortana, is performed in the cloud. This cloud-only approach requires significant amounts of data to be sent to the cloud over the wireless network and puts significant computational pressure on the datacenter. However, as the computational resources in mobile devices become more powerful and energy efficient, questions arise as to whether this cloud-only processing is desirable moving forward, and what the implications are of pushing some or all of this computation to the mobile devices on the edge. In this paper, we examine the status quo approach of cloud-only processing and investigate computation partitioning strategies that effectively leverage both the cycles in the cloud and on the mobile device to achieve low latency, low energy consumption, and high datacenter throughput for this class of intelligent applications. Our study uses 8 intelligent applications spanning the computer vision, speech, and natural language domains, all employing state-of-the-art Deep Neural Networks (DNNs) as the core machine learning technique. We find that, given the characteristics of DNN algorithms, a fine-grained, layer-level computation partitioning strategy based on the data and computation variations of each layer within a DNN has significant latency and energy advantages over the status quo approach. Using this insight, we design Neurosurgeon, a lightweight scheduler to automatically partition DNN computation between mobile devices and datacenters at the granularity of neural network layers. Neurosurgeon does not require per-application profiling. It adapts to various DNN architectures, hardware platforms, wireless networks, and server load levels, intelligently partitioning computation for best latency or best mobile energy. We evaluate Neurosurgeon on a state-of-the-art mobile development platform and show that it improves end-to-end latency by 3.1X on average and up to 40.7X, reduces mobile energy consumption by 59.5% on average and up to 94.7%, and improves datacenter throughput by 1.5X on average and up to 6.7X.

899 citations
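The partitioning decision Neurosurgeon makes can be sketched as a simple search over split points: run the first k layers on the device, upload the intermediate activation, run the rest in the cloud, and pick the k that minimizes end-to-end latency. The per-layer latencies and activation sizes below are invented for illustration; the real system predicts them from layer configurations and current network and server conditions.

```python
# Layer-level partition-point search under invented profiling numbers.

def best_partition(dev_lat_s, cloud_lat_s, out_bytes, uplink_bps):
    """Return (k, latency_s): run layers [0,k) on device, [k,n) in cloud.

    out_bytes[k] is the data uploaded when splitting after k device layers
    (out_bytes[0] = raw input size); k = n is device-only, nothing uploaded.
    """
    n = len(dev_lat_s)
    best = None
    for k in range(n + 1):
        transfer = out_bytes[k] * 8 / uplink_bps if k < n else 0.0
        total = sum(dev_lat_s[:k]) + transfer + sum(cloud_lat_s[k:])
        if best is None or total < best[1]:
            best = (k, total)
    return best

# Invented 4-layer profile: early layers are cheap on-device and shrink the
# data a lot, so a middle split can beat both cloud-only and device-only.
dev = [0.010, 0.015, 0.060, 0.120]      # per-layer device latency (s)
cloud = [0.002, 0.003, 0.008, 0.015]    # per-layer cloud latency (s)
out = [600_000, 80_000, 40_000, 30_000] # uploaded bytes at each split point
print(best_partition(dev, cloud, out, uplink_bps=4_000_000))  # -> (2, 0.128)
```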

Book ChapterDOI
06 Jan 2000
Methods of Numerical Integration

784 citations