Author

Katrina LaCurts

Bio: Katrina LaCurts is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research in topics including Cloud computing and Bandwidth (computing). The author has an h-index of 6 and has co-authored 9 publications receiving 1,251 citations. Previous affiliations of Katrina LaCurts include the University of Maryland, College Park.

Papers
Proceedings ArticleDOI
04 Nov 2009
TL;DR: It is shown that VTrack can tolerate significant noise and outages in these location estimates while still successfully identifying delay-prone segments and providing delay estimates accurate enough for delay-aware routing algorithms.
Abstract: Traffic delays and congestion are a major source of inefficiency, wasted fuel, and commuter frustration. Measuring and localizing these delays, and routing users around them, is an important step towards reducing the time people spend stuck in traffic. As others have noted, the proliferation of commodity smartphones that can provide location estimates using a variety of sensors---GPS, WiFi, and/or cellular triangulation---opens up the attractive possibility of using position samples from drivers' phones to monitor traffic delays at a fine spatiotemporal granularity. This paper presents VTrack, a system for travel time estimation using this sensor data that addresses two key challenges: energy consumption and sensor unreliability. While GPS provides highly accurate location estimates, it has several limitations: some phones don't have GPS at all, the GPS sensor doesn't work in "urban canyons" (tall buildings and tunnels) or when the phone is inside a pocket, and the GPS on many phones is power-hungry and drains the battery quickly. In these cases, VTrack can use alternative, less energy-hungry but noisier sensors like WiFi to estimate both a user's trajectory and travel time along the route. VTrack uses a hidden Markov model (HMM)-based map matching scheme and travel time estimation method that interpolates sparse data to identify the most probable road segments driven by the user and to attribute travel times to those segments. We present experimental results from real drive data and WiFi access point sightings gathered from a deployment on several cars. We show that VTrack can tolerate significant noise and outages in these location estimates, and still successfully identify delay-prone segments, and provide accurate enough delays for delay-aware routing algorithms. We also study the best sampling strategies for WiFi and GPS sensors for different energy cost regimes.
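
As a rough illustration of the map-matching step described above, the sketch below runs a standard Viterbi dynamic program over candidate road segments. The Gaussian emission model, adjacency-based transition probabilities, and segment-midpoint geometry are simplifying assumptions chosen for illustration, not VTrack's actual models.

```python
import math

def viterbi_map_match(observations, segments, neighbors, sigma=30.0):
    """Minimal HMM map-matching sketch (not VTrack's exact models).

    observations: list of (x, y) noisy position samples
    segments:     dict segment_id -> (cx, cy) segment midpoint (toy geometry)
    neighbors:    dict segment_id -> set of adjacent segment_ids
    Returns the most probable sequence of road segments.
    """
    def log_emission(obs, seg):
        # Gaussian penalty on the distance from the sample to the segment
        dx, dy = obs[0] - seg[0], obs[1] - seg[1]
        return -(dx * dx + dy * dy) / (2 * sigma * sigma)

    def log_transition(prev, cur):
        # Favor staying on the same segment or moving to an adjacent one
        if cur == prev:
            return math.log(0.5)
        if cur in neighbors.get(prev, ()):
            return math.log(0.5 / max(len(neighbors[prev]), 1))
        return float("-inf")

    # Standard Viterbi dynamic program over segments
    states = list(segments)
    score = {s: log_emission(observations[0], segments[s]) for s in states}
    back = []
    for obs in observations[1:]:
        new_score, pointers = {}, {}
        for cur in states:
            best_prev = max(states, key=lambda p: score[p] + log_transition(p, cur))
            new_score[cur] = (score[best_prev]
                              + log_transition(best_prev, cur)
                              + log_emission(obs, segments[cur]))
            pointers[cur] = best_prev
        score, back = new_score, back + [pointers]

    # Backtrack the highest-scoring path
    path = [max(states, key=score.get)]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```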

898 citations

Journal ArticleDOI
17 Aug 2008
TL;DR: An auction-based model is proposed to study and improve upon BitTorrent's incentives. It is proved that a proportional-share client is strategy-proof and, counter-intuitively, that BitTorrent peers have an incentive to intelligently under-report which pieces of the file they have to their neighbors.
Abstract: Incentives play a crucial role in BitTorrent, motivating users to upload to others to achieve fast download times for all peers. Though long believed to be robust to strategic manipulation, recent work has empirically shown that BitTorrent does not provide its users incentive to follow the protocol. We propose an auction-based model to study and improve upon BitTorrent's incentives. The insight behind our model is that BitTorrent uses, not tit-for-tat as widely believed, but an auction to decide which peers to serve. Our model not only captures known, performance-improving strategies, it shapes our thinking toward new, effective strategies. For example, our analysis demonstrates, counter-intuitively, that BitTorrent peers have incentive to intelligently under-report what pieces of the file they have to their neighbors. We implement and evaluate a modification to BitTorrent in which peers reward one another with proportional shares of bandwidth. Within our game-theoretic model, we prove that a proportional-share client is strategy-proof. With experiments on PlanetLab, a local cluster, and live downloads, we show that a proportional-share unchoker yields faster downloads against BitTorrent and BitTyrant clients, and that under-reporting pieces yields prolonged neighbor interest.
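
The proportional-share idea in the abstract can be illustrated with a short sketch: each round, a peer splits its upload capacity among neighbors in proportion to the bandwidth it received from them. The function name, the even-split bootstrapping case, and the example rates are illustrative assumptions rather than the actual client implementation.

```python
def proportional_share_allocation(upload_capacity, received_from):
    """Split upload capacity among peers in proportion to bytes received.

    upload_capacity: total upload rate to allocate (e.g., KB/s)
    received_from:   dict peer_id -> bytes received from that peer last round
    Returns dict peer_id -> allocated upload rate.
    """
    total = sum(received_from.values())
    if total == 0:
        # No contributions yet: split evenly (a simple bootstrapping choice)
        n = len(received_from) or 1
        return {p: upload_capacity / n for p in received_from}
    return {p: upload_capacity * contrib / total
            for p, contrib in received_from.items()}


# Example: a peer with 100 KB/s of upload capacity and three neighbors
rates = proportional_share_allocation(100.0, {"a": 600, "b": 300, "c": 100})
# -> a gets 60 KB/s, b gets 30 KB/s, c gets 10 KB/s
```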

196 citations

Proceedings ArticleDOI
23 Oct 2013
TL;DR: Choreo reduces application completion time by an average of 8%-14% when applications are placed all at once, and 22%-43% when they arrive in real-time, compared to alternative placement schemes.
Abstract: Cloud computing infrastructures are increasingly being used by network-intensive applications that transfer significant amounts of data between the nodes on which they run. This paper shows that tenants can do a better job placing applications by understanding the underlying cloud network as well as the demands of the applications. To do so, tenants must be able to quickly and accurately measure the cloud network and profile their applications, and then use a network-aware placement method to place applications. This paper describes Choreo, a system that solves these problems. Our experiments measure Amazon's EC2 and Rackspace networks and use three weeks of network data from applications running on the HP Cloud network. We find that Choreo reduces application completion time by an average of 8%-14% (max improvement: 61%) when applications are placed all at once, and 22%-43% (max improvement: 79%) when they arrive in real-time, compared to alternative placement schemes.
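
To make the notion of network-aware placement concrete, here is a hedged sketch of a greedy heuristic: map the heaviest-communicating VM pairs onto the best-connected (or identical) hosts, subject to slot capacity. It is a simplified illustration under assumed inputs (pairwise traffic estimates, measured host-to-host rates, and sufficient free slots), not Choreo's actual placement algorithm.

```python
def greedy_place(app_traffic, net_bandwidth, slots):
    """Greedy network-aware placement sketch (not Choreo's exact algorithm).

    app_traffic:   dict (vm_a, vm_b) -> expected bytes transferred
    net_bandwidth: dict (host_x, host_y) -> measured achievable rate
    slots:         dict host -> number of VMs it can still hold
    Greedily maps the heaviest-talking VM pairs onto the best-connected
    host pairs (or the same host, treated as effectively unlimited rate).
    """
    placement = {}

    def best_host_for(peer_host=None):
        candidates = [h for h, free in slots.items() if free > 0]
        if peer_host is None:
            return max(candidates, key=lambda h: slots[h])
        # Prefer the peer's own host, else the host with the fastest path to it
        def rate(h):
            if h == peer_host:
                return float("inf")
            return net_bandwidth.get((h, peer_host),
                                     net_bandwidth.get((peer_host, h), 0.0))
        return max(candidates, key=rate)

    for (a, b), _ in sorted(app_traffic.items(), key=lambda kv: -kv[1]):
        for vm, other in ((a, b), (b, a)):
            if vm not in placement:
                host = best_host_for(placement.get(other))
                placement[vm] = host
                slots[host] -= 1
    return placement
```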

88 citations

Proceedings Article
17 Jun 2014
TL;DR: This work develops a prediction algorithm usable by a cloud provider to suggest an appropriate bandwidth guarantee to a tenant, and finds that the inter-rack network utilization in certain datacenter topologies can be more than doubled.
Abstract: In cloud-computing systems, network-bandwidth guarantees have been shown to improve predictability of application performance and cost [1, 28]. Most previous work on cloud-bandwidth guarantees has assumed that cloud tenants know what bandwidth guarantees they want [1, 17]. However, as we show in this work, application bandwidth demands can be complex and time-varying, and many tenants might lack sufficient information to request a guarantee that is well-matched to their needs, which can lead to over-provisioning (and thus reduced cost-efficiency) or under-provisioning (and thus poor user experience). We analyze traffic traces gathered over six months from an HP Cloud Services datacenter, finding that application bandwidth consumption is both time-varying and spatially inhomogeneous. This variability makes it hard to predict requirements. To solve this problem, we develop a prediction algorithm usable by a cloud provider to suggest an appropriate bandwidth guarantee to a tenant. With tenant VM placement using these predictive guarantees, we find that the inter-rack network utilization in certain datacenter topologies can be more than doubled.
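
A minimal sketch of the kind of guarantee suggestion described above: take a high percentile of a tenant's observed per-interval traffic and add headroom. The 95th-percentile choice and the 10% headroom factor are illustrative assumptions, not the paper's actual predictor.

```python
def suggest_guarantee(observed_rates, percentile=95, headroom=1.1):
    """Toy predictor: suggest a bandwidth guarantee from a tenant's history.

    observed_rates: list of measured rates (e.g., Mb/s per 5-minute interval)
    Returns a suggested guarantee: a high percentile of past usage plus a
    small headroom factor (both values are illustrative assumptions).
    """
    if not observed_rates:
        raise ValueError("need at least one measurement")
    ranked = sorted(observed_rates)
    idx = min(len(ranked) - 1, int(round(percentile / 100 * (len(ranked) - 1))))
    return ranked[idx] * headroom


# Example: a bursty tenant averaging ~40 Mb/s with spikes to 120 Mb/s
history = [35, 42, 38, 120, 41, 39, 44, 37, 110, 40]
print(suggest_guarantee(history))  # ~132 Mb/s: covers the spikes a mean-based rule would miss
```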

56 citations

Proceedings ArticleDOI
01 Nov 2010
TL;DR: Analysis of data collected from 1407 access points in 110 different commercially deployed Meraki wireless mesh networks, constituting perhaps the largest study of real-world 802.11 networks to date, finds that the SNR of a link is a good indicator of the optimal bit rate for that link, but that one cannot make an SNR-to-bit-rate look-up table that is accurate for an entire network.
Abstract: Despite many years of work in wireless mesh networks built using 802.11 radios, the performance and behavior of these networks in the wild is not well-understood. This lack of understanding is due in part to the lack of access to data from a wide range of these networks; most researchers have access to only one or two testbeds at any time. In recent years, however, 802.11 mesh networks have been deployed commercially and have real users who use the networks in a wide range of conditions. This paper analyzes data collected from 1407 access points in 110 different commercially deployed Meraki wireless mesh networks, constituting perhaps the largest study of real-world 802.11 networks to date. After analyzing a 24-hour snapshot of data collected from these networks, we answer questions from a variety of active research topics, such as the accuracy of SNR-based bit rate adaptation, the impact of opportunistic routing, and the prevalence of hidden terminals. The size and diversity of our data set allow us to analyze claims previously made only in small-scale studies. In particular, we find that the SNR of a link is a good indicator of the optimal bit rate for that link, but that one cannot construct an SNR-to-bit-rate look-up table that is accurate for an entire network. We also find that an ideal opportunistic routing protocol provides little to no benefit on most paths, and that "hidden triples"---network topologies that can lead to hidden terminals---are more common than suggested in previous work, and increase in proportion as the bit rate increases.
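
The finding that SNR-to-bit-rate mappings should be learned per link rather than network-wide can be illustrated with a small sketch that builds a separate SNR-bucket-to-rate table for each link from that link's own delivery statistics. The 3 dB bucketing and the throughput metric (rate times delivery ratio) are arbitrary illustrative choices, not the paper's methodology.

```python
from collections import defaultdict

def best_rate_per_link(samples):
    """Per-link rate selection sketch: learn SNR-to-rate tables per link.

    samples: list of (link_id, snr_db, bit_rate_mbps, delivered_bool)
    Returns dict link_id -> {snr_bucket: bit rate with the highest observed
    throughput (rate * delivery ratio) in that bucket}.
    """
    # link -> (snr_bucket, rate) -> [deliveries, attempts]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for link, snr, rate, delivered in samples:
        bucket = int(snr) // 3 * 3          # 3 dB SNR buckets (arbitrary choice)
        counts = stats[link][(bucket, rate)]
        counts[0] += delivered
        counts[1] += 1

    table = {}
    for link, per_key in stats.items():
        best = {}                           # snr_bucket -> (throughput, rate)
        for (bucket, rate), (ok, total) in per_key.items():
            throughput = rate * ok / total
            if throughput > best.get(bucket, (0, 0))[0]:
                best[bucket] = (throughput, rate)
        table[link] = {bucket: rate for bucket, (_, rate) in best.items()}
    return table
```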

49 citations


Cited by
Journal ArticleDOI
TL;DR: This article surveys existing mobile phone sensing algorithms, applications, and systems, discusses the emerging sensing paradigms, and formulates an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research.
Abstract: Mobile phones or smartphones are rapidly becoming the central computer and communication device in people's lives. Application delivery channels such as the Apple AppStore are transforming mobile phones into App Phones, capable of downloading a myriad of applications in an instant. Importantly, today's smartphones are programmable and come with a growing set of cheap, powerful embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera, which are enabling the emergence of personal, group, and community-scale sensing applications. We believe that sensor-equipped mobile phones will revolutionize many sectors of our economy, including business, healthcare, social networks, environmental monitoring, and transportation. In this article we survey existing mobile phone sensing algorithms, applications, and systems. We discuss the emerging sensing paradigms, and formulate an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research.

2,316 citations

Journal ArticleDOI
TL;DR: The concept of urban computing is introduced, discussing its general framework and key challenges from the perspective of computer science, and the typical technologies needed in urban computing are summarized into four categories.
Abstract: Urbanization's rapid progress has modernized many people's lives but has also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service provision in a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer science meets conventional city-related fields, such as transportation, civil engineering, environment, economy, ecology, and sociology, in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer science. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social applications, the economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies needed in urban computing into four categories: urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are still missing from the community's agenda.

1,290 citations

Proceedings ArticleDOI
22 Aug 2012
TL;DR: This work designs an auction-based incentive mechanism for mobile phone sensing that is computationally efficient, individually rational, profitable, and truthful. It also shows how to compute the unique Stackelberg Equilibrium of a platform-centric model, at which the utility of the platform is maximized.
Abstract: Mobile phone sensing is a new paradigm which takes advantage of the pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. In a mobile phone sensing system, the platform recruits smartphone users to provide sensing service. Existing mobile phone sensing applications and systems lack good incentive mechanisms that can attract more user participation. To address this issue, we design incentive mechanisms for mobile phone sensing. We consider two system models: the platform-centric model where the platform provides a reward shared by participating users, and the user-centric model where users have more control over the payment they will receive. For the platform-centric model, we design an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. We show how to compute the unique Stackelberg Equilibrium, at which the utility of the platform is maximized, and none of the users can improve its utility by unilaterally deviating from its current strategy. For the user-centric model, we design an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful. Through extensive simulations, we evaluate the performance and validate the theoretical properties of our incentive mechanisms.
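
For intuition about the platform-centric model, the sketch below runs iterated best responses for a proportional-reward sensing game. The specific utility form u_i = R*t_i/sum_j t_j - k_i*t_i is an assumed illustration and may differ from the paper's exact formulation; only the followers' response to a fixed reward R is computed here, not the leader's optimal choice of R.

```python
import math

def follower_equilibrium(reward, costs, iters=1000):
    """Iterated best response for a proportional-reward sensing game.

    Assumed utility for user i (illustrative, not necessarily the paper's):
        u_i = reward * t_i / sum_j t_j  -  costs[i] * t_i
    The best response to the others' total effort T is
        t_i = sqrt(reward * T / costs[i]) - T, clamped at 0.
    Iterating these responses typically converges to the followers'
    equilibrium for the platform's announced reward.
    """
    t = [1.0] * len(costs)
    for _ in range(iters):
        for i, k in enumerate(costs):
            others = sum(t) - t[i]
            if others <= 0:
                t[i] = 1.0
                continue
            t[i] = max(0.0, math.sqrt(reward * others / k) - others)
    return t


# Example: a reward of 100 shared among users with different sensing costs;
# the highest-cost user ends up contributing (essentially) nothing.
print(follower_equilibrium(100.0, [1.0, 2.0, 4.0]))
```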

967 citations

Posted Content
TL;DR: The InterPlanetary File System (IPFS) is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files; it provides a content-addressed block storage model whose hyperlinks form a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web.
Abstract: The InterPlanetary File System (IPFS) is a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. In some ways, IPFS is similar to the Web, but IPFS could be seen as a single BitTorrent swarm, exchanging objects within one Git repository. In other words, IPFS provides a high throughput content-addressed block storage model, with content-addressed hyper links. This forms a generalized Merkle DAG, a data structure upon which one can build versioned file systems, blockchains, and even a Permanent Web. IPFS combines a distributed hashtable, an incentivized block exchange, and a self-certifying namespace. IPFS has no single point of failure, and nodes do not need to trust each other.
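
The Merkle-DAG idea can be illustrated with a toy content-addressed block store in which a node's identifier is the hash of its data plus the identifiers it links to. This is a conceptual sketch only; real IPFS uses multihashes, IPLD encodings, bitswap, and a distributed hash table rather than the in-memory dictionary assumed here.

```python
import hashlib, json

class BlockStore:
    """Toy content-addressed store illustrating the Merkle-DAG idea."""

    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes, links=()):
        """Store a node whose identity is the hash of its data and its links."""
        node = {"data": data.hex(), "links": sorted(links)}
        encoded = json.dumps(node, sort_keys=True).encode()
        cid = hashlib.sha256(encoded).hexdigest()
        self.blocks[cid] = node
        return cid

    def get(self, cid):
        return self.blocks[cid]


# A two-level "file": leaves are chunks, the root links to them by hash.
store = BlockStore()
chunk1 = store.put(b"hello ")
chunk2 = store.put(b"world")
root = store.put(b"file:greeting.txt", links=[chunk1, chunk2])
print(root)  # changing any chunk changes its hash, and therefore the root's
```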

818 citations

Proceedings ArticleDOI
21 Aug 2011
TL;DR: A cloud-based system computes customized and practically fast driving routes for an end user using historical and real-time traffic conditions and driver behavior; it accurately estimates the travel time of a route for the user and hence finds the fastest route customized for that user.
Abstract: This paper presents a Cloud-based system computing customized and practically fast driving routes for an end user using (historical and real-time) traffic conditions and driver behavior. In this system, GPS-equipped taxicabs are employed as mobile sensors constantly probing the traffic rhythm of a city and taxi drivers' intelligence in choosing driving directions in the physical world. Meanwhile, a Cloud aggregates and mines the information from these taxis and other sources from the Internet, like Web maps and weather forecast. The Cloud builds a model incorporating day of the week, time of day, weather conditions, and individual driving strategies (both of the taxi drivers and of the end user for whom the route is being computed). Using this model, our system predicts the traffic conditions of a future time (when the computed route is actually driven) and performs a self-adaptive driving direction service for a particular user. This service gradually learns a user's driving behavior from the user's GPS logs and customizes the fastest route for the user with the help of the Cloud. We evaluate our service using a real-world dataset generated by over 33,000 taxis over a period of 3 months in Beijing. As a result, our service accurately estimates the travel time of a route for a user; hence finding the fastest route customized for the user.
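
As a simplified illustration of routing on predicted future traffic, the sketch below runs a time-dependent Dijkstra search in which each edge's traversal time depends on the (predicted) time at which it is entered. The toy graph and rush-hour function are assumptions for illustration; this is not the paper's actual algorithm or traffic model.

```python
import heapq

def fastest_route(graph, source, target, depart_time):
    """Time-dependent Dijkstra sketch: edge costs vary with entry time.

    graph: dict node -> list of (next_node, travel_time_fn), where
           travel_time_fn(t) returns the predicted traversal time (minutes)
           if the edge is entered at time t (minutes since midnight).
    Returns (total travel time, path) or None if the target is unreachable.
    """
    best = {source: depart_time}
    queue = [(depart_time, source, [source])]
    while queue:
        now, node, path = heapq.heappop(queue)
        if node == target:
            return now - depart_time, path
        if now > best.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, travel_time_fn in graph.get(node, ()):
            arrival = now + travel_time_fn(now)
            if arrival < best.get(nxt, float("inf")):
                best[nxt] = arrival
                heapq.heappush(queue, (arrival, nxt, path + [nxt]))
    return None


# Toy network: the highway is fast off-peak but slow during the evening rush.
rush = lambda t: 25 if 17 * 60 <= t <= 19 * 60 else 8
graph = {"A": [("B", rush), ("C", lambda t: 12)],
         "B": [("D", lambda t: 5)],
         "C": [("D", lambda t: 6)]}
print(fastest_route(graph, "A", "D", depart_time=18 * 60))  # during rush hour, go via C
```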

758 citations