
Showing papers by "Aman Kansal published in 2015"


Patent
Aman Kansal, Jie Liu
29 Jun 2015
TL;DR: In this patent, an application's computing resource requirements are opportunistically matched, via an optimization analysis, to computing resources temporarily available on private computing devices that are external to, and separate from, the publicly available service the application belongs to.
Abstract: A description of computing resource requirements for execution of an application associated with a publicly available service is obtained. Access to computing resources is opportunistically obtained from a computing entity that includes a private computing device that is external to, and separate from, the publicly available service. The computing resource requirements are intelligently matched to the computing entity's available resources, including private computing resources that are temporarily available from a private computing device source. The intelligent matching is performed using an optimization analysis.
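As a rough illustration only: the Python sketch below shows one way such matching could work, pairing an application's requirements with the tightest feasible private device. The Requirement/Device fields, the best-fit scoring rule, and all names are assumptions of this sketch, not details taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        cpu_cores: int
        mem_gb: float
        duration_s: float  # how long the application needs the resources

    @dataclass
    class Device:
        name: str
        free_cores: int
        free_mem_gb: float
        available_s: float  # how long the private device is expected to stay available

    def match(req, devices):
        """Pick the feasible device with the least leftover capacity
        (a toy stand-in for the patent's optimization analysis)."""
        feasible = [d for d in devices
                    if d.free_cores >= req.cpu_cores
                    and d.free_mem_gb >= req.mem_gb
                    and d.available_s >= req.duration_s]
        if not feasible:
            return None  # fall back to the service's own servers
        # Minimize leftover capacity so large devices stay free for large jobs.
        return min(feasible, key=lambda d: (d.free_cores - req.cpu_cores,
                                            d.free_mem_gb - req.mem_gb))

    req = Requirement(cpu_cores=2, mem_gb=4.0, duration_s=600)
    pool = [Device("desktop-03", 8, 16.0, 3600), Device("laptop-17", 4, 8.0, 900)]
    print(match(req, pool).name)  # laptop-17: the tightest feasible fit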

11 citations


Proceedings Article
18 May 2015
TL;DR: Connected sensing devices, such as cameras, thermostats, and in-home motion, door-window, energy, and water sensors, are rapidly permeating our living environments and enable a wide variety of applications spanning security, efficiency, healthcare, and others.
Abstract: Connected sensing devices, such as cameras, thermostats, in-home motion, door-window, energy, and water sensors [1], collectively dubbed the Internet of Things (IoT), are rapidly permeating our living environments [2], with an estimated 50 billion such devices in use by 2020 [8]. They enable a wide variety of applications spanning security, efficiency, healthcare, and others. Typically, these applications collect data using sensing devices to draw inferences about the environment or the user, and then use those inferences to perform certain actions. For example, Nest [10] uses motion sensor data to infer home occupancy and adjusts the thermostat accordingly.
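Nest's actual algorithm is proprietary; the minimal Python sketch below only illustrates the sense-infer-act pattern the abstract describes, with the 30-minute window and the setpoints being arbitrary assumptions.

    import time

    def infer_occupancy(motion_timestamps, now, window_s=1800):
        """Infer 'home occupied' if any motion event occurred in the last 30 minutes."""
        return any(now - t < window_s for t in motion_timestamps)

    def choose_setpoint(occupied):
        # Comfort temperature when occupied, energy-saving setback when away.
        return 21.0 if occupied else 16.0

    motion_log = [time.time() - 120]  # last motion seen 2 minutes ago
    occupied = infer_occupancy(motion_log, time.time())
    print(f"occupied={occupied}, thermostat setpoint={choose_setpoint(occupied)} C")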

10 citations


Posted Content
TL;DR: The DSN-44 paper is the first to understand how tolerant different data-intensive applications are to memory errors and design a new memory system organization that matches hardware reliability to application tolerance in order to reduce system cost.
Abstract: Recent studies estimate that server cost contributes as much as 57% of the total cost of ownership (TCO) of a datacenter [1]. One key contributor to this high server cost is the procurement of memory devices such as DRAMs, especially for data-intensive datacenter cloud applications that need low latency (such as web search, in-memory caching, and graph traversal). Such memory devices, however, may be prone to hardware errors that occur due to unintended bit flips during device operation [40, 33, 41, 20]. To protect against such errors, traditional systems uniformly employ devices with high-quality chips and error correction techniques, both of which increase device cost. At the same time, we make the observations that 1) data-intensive applications exhibit a diverse spectrum of tolerance to memory errors, and 2) traditional one-size-fits-all memory reliability techniques are inefficient in terms of cost.

Our DSN-44 paper [30] is the first to 1) understand how tolerant different data-intensive applications are to memory errors and 2) design a new memory system organization that matches hardware reliability to application tolerance in order to reduce system cost. The main idea of our approach is to classify applications based on their memory error tolerance, and to map applications to heterogeneous-reliability memory system designs managed cooperatively between hardware and software to reduce system cost. Our DSN-44 paper provides the following contributions:

1. A new methodology to quantify the tolerance of applications to memory errors. Our approach measures the effect of memory errors on application correctness and quantifies an application's ability to mask or recover from memory errors.

2. A comprehensive characterization of the memory error tolerance of three data-intensive workloads: an interactive web search application [30, 39], an in-memory key-value store [30, 3], and a graph mining framework [30, 29]. We find that there exists an order of magnitude difference in memory error tolerance across these three applications.

3. An exploration of the design space of new memory system organizations, called heterogeneous-reliability memory, which combines a heterogeneous mix of reliability techniques that leverage application error tolerance to reduce system cost. We show that our techniques can reduce server hardware cost by 4.7%, while achieving 99.90% single-server availability.
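The paper's methodology (contribution 1) injects errors into running applications' memory on real servers; the toy Python sketch below, with an invented workload and arbitrary parameters, only illustrates the measurement idea: inject bit flips and count how often the output stays correct.

    import random

    def flip_random_bit(buf):
        """Inject a single-bit error into a buffer, mimicking a DRAM bit flip."""
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)

    def measured_tolerance(run, make_input, golden, trials=1000):
        """Fraction of injected errors the application masks (output still correct)."""
        ok = 0
        for _ in range(trials):
            data = make_input()
            flip_random_bit(data)
            if run(data) == golden:
                ok += 1
        return ok / trials

    # Toy workload: summing a buffer. Any flipped bit changes the sum, so its
    # measured tolerance is ~0; the paper found an order-of-magnitude spread
    # across its three real workloads.
    base = bytes(range(256))
    print(measured_tolerance(sum, lambda: bytearray(base), sum(base)))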

2 citations


01 Oct 2015
TL;DR: In this article, the authors present Beam, a framework and runtime for distributed inference-driven applications that breaks down application silos by decoupling their inference logic from other functionality, simplifying applications by letting them specify "what should be sensed or inferred" without worrying about "how it is sensed or inferred."
Abstract: The proliferation of connected sensing devices (or Internet of Things) can in theory enable a range of “smart” applications that make rich inferences about users and their environment. But in practice, developing such applications today is arduous because they are constructed as monolithic silos, tightly coupled to sensing devices, and must implement all data sensing and inference logic, even as devices move or are temporarily disconnected. We present Beam, a framework and runtime for distributed inference-driven applications that breaks down application silos by decoupling their inference logic from other functionality. It simplifies applications by letting them specify “what should be sensed or inferred,” without worrying about “how it is sensed or inferred.” We discuss the challenges and opportunities in building such an inference framework.
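Beam's actual API is not shown in this abstract; the minimal Python sketch below is an assumed publish/subscribe illustration of the decoupling idea: the application names the inference it wants, and the runtime, not the application, decides how and where it is produced.

    class InferenceRuntime:
        """Toy stand-in for a Beam-like runtime: applications subscribe to a
        named inference; the runtime decides which devices and logic feed it."""
        def __init__(self):
            self._subs = {}

        def subscribe(self, inference, callback):
            # The app declares *what* it wants ("occupancy"), not *how* to sense it.
            self._subs.setdefault(inference, []).append(callback)

        def publish(self, inference, value):
            # Invoked by whatever sensing/inference module the runtime selected,
            # which may change as devices move or disconnect.
            for cb in self._subs.get(inference, []):
                cb(value)

    rt = InferenceRuntime()
    rt.subscribe("occupancy", lambda v: print("app received occupancy =", v))
    rt.publish("occupancy", True)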

1 citation