
Showing papers by "Chris Harrison" published in 2018


Proceedings ArticleDOI
21 Apr 2018
TL;DR: Through a series of evaluations, Wall++ is shown to enable robust room-scale interactive and context-aware applications; its wall treatment and sensing hardware can track users' touch and gestures, as well as estimate body pose when users are close.
Abstract: Human environments are typified by walls; homes, offices, schools, museums, hospitals and pretty much every indoor context one can imagine has walls. In many cases, they make up a majority of readily accessible indoor surface area, and yet they are static: their primary function is to be a wall, separating spaces and hiding infrastructure. We present Wall++, a low-cost sensing approach that allows walls to become a smart infrastructure. Instead of merely separating spaces, walls can now enhance rooms with sensing and interactivity. Our wall treatment and sensing hardware can track users' touch and gestures, as well as estimate body pose if they are close. By capturing airborne electromagnetic noise, we can also detect what appliances are active and where they are located. Through a series of evaluations, we demonstrate Wall++ can enable robust room-scale interactive and context-aware applications.

90 citations
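
The abstract above mentions detecting which appliances are active by capturing airborne electromagnetic noise. The paper's pipeline is not reproduced here; as a hedged illustration of the general idea, EM noise picked up by the wall could be fingerprinted by its magnitude spectrum and matched against previously recorded appliance signatures. The window size, signature set, and nearest-neighbour matching below are assumptions, not the authors' implementation.

    # Illustrative sketch (assumed approach): fingerprint airborne EM noise by its
    # log-magnitude spectrum and match it to stored appliance signatures.
    import numpy as np

    def em_fingerprint(samples, n_fft=1024):
        window = samples[:n_fft] * np.hanning(n_fft)
        return np.log1p(np.abs(np.fft.rfft(window)))

    def classify_appliance(samples, signatures):
        """signatures: dict mapping appliance name -> reference fingerprint."""
        probe = em_fingerprint(samples)
        # Nearest neighbour in spectral space.
        return min(signatures, key=lambda name: np.linalg.norm(signatures[name] - probe))

    # Usage with synthetic data standing in for real EM noise captures:
    rng = np.random.default_rng(0)
    signatures = {"microwave": em_fingerprint(rng.normal(size=1024)),
                  "vacuum": em_fingerprint(3 * rng.normal(size=1024))}
    print(classify_appliance(rng.normal(size=1024), signatures))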


Proceedings ArticleDOI
11 Oct 2018
TL;DR: This work describes a novel, real-time, sound-based activity recognition system that starts by taking an existing, state-of-the-art sound labeling model, which is then tuned to classes of interest by drawing data from professional sound effect libraries traditionally used in the entertainment industry.
Abstract: Despite sound being a rich source of information, computing devices with microphones do not leverage audio to glean useful insights about their physical and social context. For example, a smart speaker sitting on a kitchen countertop cannot figure out if it is in a kitchen, let alone know what a user is doing in a kitchen - a missed opportunity. In this work, we describe a novel, real-time, sound-based activity recognition system. We start by taking an existing, state-of-the-art sound labeling model, which we then tune to classes of interest by drawing data from professional sound effect libraries traditionally used in the entertainment industry. These well-labeled and high-quality sounds are the perfect atomic unit for data augmentation, including amplitude, reverb, and mixing, allowing us to exponentially grow our tuning data in realistic ways. We quantify the performance of our approach across a range of environments and device categories and show that microphone-equipped computing devices already have the requisite capability to unlock real-time activity recognition comparable to human accuracy.

80 citations
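
The augmentation step described above (amplitude, reverb, and mixing) can be sketched in a few lines. The sample rate, gain values, and the crude delay-line reverb below are illustrative assumptions, not the authors' augmentation parameters.

    # Hedged sketch of amplitude, reverb, and mixing augmentations on mono clips
    # stored as NumPy float arrays at an assumed 16 kHz sample rate.
    import numpy as np

    SR = 16000  # assumed sample rate

    def scale_amplitude(clip, gain_db):
        return clip * (10.0 ** (gain_db / 20.0))

    def add_reverb(clip, decay=0.4, delays_ms=(30, 60, 90)):
        out = clip.copy()
        for i, d in enumerate(delays_ms, start=1):
            shift = int(SR * d / 1000)
            out[shift:] += (decay ** i) * clip[:-shift]  # decayed, delayed copies
        return out

    def mix(foreground, background, snr_db=6.0):
        n = min(len(foreground), len(background))
        fg, bg = foreground[:n], background[:n]
        bg = bg * (np.std(fg) / (np.std(bg) + 1e-9)) * (10.0 ** (-snr_db / 20.0))
        return fg + bg

    # Composing these per clip multiplies the tuning data in realistic ways.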


Journal ArticleDOI
27 Dec 2018
TL;DR: GymCam advances the field of real-time exercise tracking by filling some crucial gaps, such as tracking whole body motion, handling occlusion, and enabling single-point sensing for a multitude of users.
Abstract: Worn sensors are popular for automatically tracking exercises. However, a wearable is usually attached to one part of the body, tracks only that location, and thus is inadequate for capturing a wide range of exercises, especially when other limbs are involved. Cameras, on the other hand, can fully track a user's body, but suffer from noise and occlusion. We present GymCam, a camera-based system for automatically detecting, recognizing and tracking multiple people and exercises simultaneously in unconstrained environments without any user intervention. We collected data in a varsity gym, correctly segmenting exercises from other activities with an accuracy of 84.6%, recognizing the type of exercise at 93.6% accuracy, and counting the number of repetitions to within ± 1.7 on average. GymCam advances the field of real-time exercise tracking by filling some crucial gaps, such as tracking whole body motion, handling occlusion, and enabling single-point sensing for a multitude of users.

58 citations
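
Counting repetitions from a periodic motion signal is commonly done with peak detection or autocorrelation. The sketch below illustrates that general idea on a tracked point's vertical position over time; it is an assumption for illustration, not GymCam's published algorithm.

    # Hedged sketch: count exercise repetitions in a 1-D motion trajectory.
    import numpy as np
    from scipy.signal import find_peaks

    def count_repetitions(trajectory, fps=30, min_period_s=1.0):
        signal = np.asarray(trajectory, dtype=float)
        signal = signal - signal.mean()                       # remove offset
        peaks, _ = find_peaks(signal, distance=int(fps * min_period_s))
        return len(peaks)

    # Synthetic example: ~8 squat-like cycles over 20 seconds at 30 fps.
    t = np.linspace(0, 20, 20 * 30)
    print(count_repetitions(np.sin(2 * np.pi * 0.4 * t)))     # prints 8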


Proceedings ArticleDOI
19 Apr 2018
TL;DR: This work presents the first fully functional and self-contained projection smartwatch implementation, containing the requisite compute, power, projection and touch-sensing capabilities.
Abstract: Compact, worn computers with projected, on-skin touch interfaces have been a long-standing yet elusive goal, largely written off as science fiction. Such devices offer the potential to mitigate the significant human input/output bottleneck inherent in worn devices with small screens. In this work, we present the first fully functional and self-contained projection smartwatch implementation, containing the requisite compute, power, projection and touch-sensing capabilities. Our watch offers roughly 40 sq. cm of interactive surface area -- more than five times that of a typical smartwatch display. We demonstrate continuous 2D finger tracking with interactive, rectified graphics, transforming the arm into a touchscreen. We discuss our hardware and software implementation, as well as evaluation results regarding touch accuracy and projection visibility.

36 citations
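
Rectifying projected graphics and mapping touch positions between the skin surface and the projector is, in general, a projective-warp problem. The OpenCV snippet below shows that generic mapping, assuming four calibration correspondences; the coordinates are made up and this is not the watch's actual calibration code.

    # Generic illustration: map touch positions on the arm into projector pixels
    # using a homography estimated from four assumed calibration points.
    import numpy as np
    import cv2

    projector_px = np.float32([[0, 0], [639, 0], [639, 479], [0, 479]])
    surface_mm   = np.float32([[12, 8], [88, 15], [83, 60], [10, 55]])  # on-skin coords

    H = cv2.getPerspectiveTransform(surface_mm, projector_px)

    def touch_to_projector(touch_mm):
        pt = np.float32([[touch_mm]])                  # shape (1, 1, 2) for OpenCV
        return cv2.perspectiveTransform(pt, H)[0, 0]

    print(touch_to_projector((50.0, 30.0)))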


Proceedings ArticleDOI
11 Oct 2018
TL;DR: This work presents Vibrosight, a new approach to sense activities across entire rooms using long-range laser vibrometry, which can sense physical vibrations at one specific point, making it robust to interference from other activities and noisy environments.
Abstract: Smart and responsive environments rely on the ability to detect physical events, such as appliance use and human activities. Currently, to sense these types of events, one must either upgrade to "smart" appliances, or attach aftermarket sensors to existing objects. These approaches can be expensive, intrusive and inflexible. In this work, we present Vibrosight, a new approach to sense activities across entire rooms using long-range laser vibrometry. Unlike a microphone, our approach can sense physical vibrations at one specific point, making it robust to interference from other activities and noisy environments. This property enables detection of simultaneous activities, which has proven challenging in prior work. Through a series of evaluations, we show that Vibrosight can offer high accuracies at long range, allowing our sensor to be placed in an inconspicuous location. We also explore a range of additional uses, including data transmission, sensing user input and modes of appliance operation, and detecting human movement and activities on work surfaces.

34 citations
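
Because the laser vibrometer samples one tagged point at a time, a simple way to decide whether each monitored object is active is to threshold band-limited vibration energy per object; pointing at objects in turn then yields independent on/off states, which is what makes simultaneous activities separable. The band limits, sample rate, and threshold below are assumptions, and Vibrosight's actual classification is richer than this.

    # Hedged sketch: per-object activity detection from a vibration signal.
    import numpy as np

    def band_energy(signal, sr, lo_hz, hi_hz):
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        return spectrum[(freqs >= lo_hz) & (freqs <= hi_hz)].sum()

    def is_active(signal, sr=8000, lo_hz=50, hi_hz=1000, threshold=1e3):
        # The threshold would be calibrated per tagged object.
        return band_energy(signal, sr, lo_hz, hi_hz) > threshold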


Proceedings ArticleDOI
19 Apr 2018
TL;DR: A new technical approach is presented for bringing the digital and paper worlds closer together by enabling paper to track finger input as well as drawn input with writing implements; the method is shown to be sufficiently low-cost and accurate to enable new interactive opportunities with this pervasive and venerable material.
Abstract: Paper continues to be a versatile and indispensable material in the 21st century. Of course, paper is a passive medium with no inherent interactivity, precluding us from computationally-enhancing a wide variety of paper-based activities. In this work, we present a new technical approach for bringing the digital and paper worlds closer together, by enabling paper to track finger input and also drawn input with writing implements. Importantly, for paper to still be considered paper, our method had to be very low cost. This necessitated research into materials, fabrication methods and sensing techniques. We describe the outcome of our investigations and show that our method can be sufficiently low-cost and accurate to enable new interactive opportunities with this pervasive and venerable material.

30 citations


Journal ArticleDOI
05 Jul 2018
TL;DR: EyeSpyVR, a software-only eye sensing approach for smartphone-based VR, uses a phone's front-facing camera as a sensor and its display as a passive illuminator, enabling four sensing modalities: detecting when the VR headset is worn, detecting blinks, recognizing the wearer's identity, and coarse gaze tracking.
Abstract: Low cost virtual reality (VR) headsets powered by smartphones are becoming ubiquitous. Their unique position on the user's face opens interesting opportunities for interactive sensing. In this paper, we describe EyeSpyVR, a software-only eye sensing approach for smartphone-based VR, which uses a phone's front-facing camera as a sensor and its display as a passive illuminator. Our proof-of-concept system, using a commodity Apple iPhone, enables four sensing modalities: detecting when the VR headset is worn, detecting blinks, recognizing the wearer's identity, and coarse gaze tracking - features typically found in high-end or specialty VR headsets. We demonstrate the utility and accuracy of EyeSpyVR in a series of studies with 70 participants, finding a worn detection accuracy of 100%, a blink detection rate of 95.3%, a family user identification accuracy of 81.4%, and a mean gaze tracking error of 10.8° when calibrated to the wearer (12.9° without calibration). These sensing abilities can be used by developers to enable new interactive features and more immersive VR experiences on existing, off-the-shelf hardware.

25 citations
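
As a rough illustration of one of the four modalities, blinks can be detected as brief spikes in frame-to-frame change within the eye region of the front-camera image. The threshold and refractory period below are assumptions for illustration, not EyeSpyVR's implementation.

    # Hedged sketch: blink detection by frame differencing over the eye region.
    import numpy as np

    def detect_blinks(eye_frames, diff_threshold=12.0, refractory=5):
        """eye_frames: grayscale eye-region images (2-D uint8 arrays) in time order.
        Returns indices of frames where a blink-like change spike occurs."""
        blinks, cooldown = [], 0
        for i in range(1, len(eye_frames)):
            change = np.mean(np.abs(eye_frames[i].astype(float) -
                                    eye_frames[i - 1].astype(float)))
            if cooldown > 0:
                cooldown -= 1
            elif change > diff_threshold:
                blinks.append(i)
                cooldown = refractory   # ignore the tail end of the same blink
        return blinks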


Journal ArticleDOI
18 Sep 2018
TL;DR: This work describes the iterative development of Zensors++, a full-stack crowd-AI camera-based sensing system that moves significantly beyond prior work in terms of scale, question diversity, accuracy, latency, and economic feasibility.
Abstract: Smart appliances with built-in cameras, such as the Nest Cam and Amazon Echo Look, are becoming pervasive. They hold the promise of bringing high fidelity, contextually rich sensing into our homes, workplaces and other environments. Despite recent and impressive advances, computer vision systems are still limited in the types of sensing questions they can answer, and more importantly, do not easily generalize across diverse human environments. In response, researchers have investigated hybrid crowd- and AI-powered methods that collect human labels to bootstrap automatic processes. However, deployments have been small and mostly confined to institutional settings, leaving open questions about the scalability and generality of the approach. In this work, we describe our iterative development of Zensors++, a full-stack crowd-AI camera-based sensing system that moves significantly beyond prior work in terms of scale, question diversity, accuracy, latency, and economic feasibility. We deployed Zensors++ in the wild, with real users, over many months and environments, generating 1.6 million answers for nearly 200 questions created by our participants, costing roughly 6/10ths of a cent per answer delivered. We share lessons learned, insights gleaned, and implications for future crowd-AI vision systems.

22 citations
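
The crowd-AI pattern referred to above boils down to: answer each camera question with crowd labels until a model trained on those labels becomes reliable, then hand the question over to the model. The sketch below is only a schematic of that loop; crowd_answer, train_model, and the thresholds are hypothetical placeholders, not Zensors++ APIs.

    # Schematic crowd-AI loop (assumed structure, not the Zensors++ implementation).
    def answer_question(frames, question, crowd_answer, train_model,
                        min_labels=200, min_accuracy=0.9):
        """crowd_answer(frame, question) -> label and train_model(xs, ys) ->
        (model, held_out_accuracy) are assumed to be provided by the system."""
        labels, model = [], None
        for frame in frames:
            if model is not None:
                yield model(frame)                  # cheap automatic answer
                continue
            label = crowd_answer(frame, question)   # paid human answer
            labels.append((frame, label))
            yield label
            if len(labels) >= min_labels:
                xs, ys = zip(*labels)
                candidate, accuracy = train_model(xs, ys)
                if accuracy >= min_accuracy:
                    model = candidate               # hand off to the model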


Patent
24 Apr 2018
TL;DR: A virtual sensor is trained as a classifier for an event that is correlated to the data from one or more sensor streams within the featurized sensor data; the virtual sensor can then subscribe to the relevant sensor feeds from the sensor assembly and monitor for subsequent occurrences of the event.
Abstract: A sensing system includes a sensor assembly that is communicably connected to a computer system, such as a server or a cloud computing system. The sensor assembly includes a plurality of sensors that sense a variety of different physical phenomena. The sensor assembly featurizes the raw sensor data and transmits the featurized data to the computer system. Through machine learning, the computer system then trains a classifier to serve as a virtual sensor for an event that is correlated to the data from one or more sensor streams within the featurized sensor data. The virtual sensor can then subscribe to the relevant sensor feeds from the sensor assembly and monitor for subsequent occurrences of the event. Higher order virtual sensors can receive the outputs from lower order virtual sensors to infer nonbinary details about the environment in which the sensor assemblies are located.

15 citations
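
A minimal sketch of the virtual-sensor idea: featurize windows of a raw sensor stream, train a classifier on labeled occurrences of an event, and expose the trained classifier as a new sensor feed. The feature set and the scikit-learn classifier below are illustrative assumptions, not the patented implementation.

    # Hedged sketch: turn a featurized sensor stream into a 'virtual sensor'.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def featurize(window):
        """Summarize one window (assumed >= 256 samples) of one raw sensor stream."""
        w = np.asarray(window, dtype=float)
        spectrum = np.abs(np.fft.rfft(w))
        return np.concatenate([[w.mean(), w.std(), w.min(), w.max()], spectrum[:16]])

    def train_virtual_sensor(windows, labels):
        """labels: 1 where the event of interest (e.g., a door knock) occurred."""
        X = np.stack([featurize(w) for w in windows])
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
        return lambda window: clf.predict(featurize(window)[None, :])[0]

    # The returned callable conceptually subscribes to the featurized feed and
    # reports subsequent occurrences of the learned event.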


Journal ArticleDOI

7 citations


Patent
Chris Harrison, Joshua Dale Stone
01 Feb 2018
TL;DR: An object classification component determines a contact classification for an object with respect to a touch sensitive surface, based on the results of analyzing touch screen data and device motion information, and in response to determining a contact state of the object in relation to that surface.
Abstract: Techniques enabling improved classification of objects against a touch sensitive surface of a touch sensitive device are presented. An object classification component can analyze touch screen data associated with a touch sensitive surface associated with a device and motion information relating to motion along at least one axis associated with the device. The object classification component can determine a contact classification for an object with respect to the touch sensitive surface, based on a result(s) of the analyzing, and in response to determining a contact state of the object with respect to the touch sensitive surface. An event determination component can control a function associated with the touch sensitive surface based on the contact classification for the object, the function comprising enabling or disabling a touch sensitive function of the touch sensitive surface of a display screen of the device or switching the display screen on or off.
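
As a rough, rule-based stand-in for the claim (not the patented classifier), the size of the capacitive contact patch can be combined with recent motion along one axis to label the contact and drive the touch or display function. The thresholds and the display object below are made up for illustration.

    # Hedged illustration: combine touch-patch and motion features to label a contact.
    import numpy as np

    def classify_contact(touch_blob, accel_axis):
        """touch_blob: 2-D capacitance patch around the contact; accel_axis: recent
        accelerometer samples along one axis. Thresholds here are made up."""
        blob = np.asarray(touch_blob, dtype=float)
        motion = float(np.std(np.asarray(accel_axis, dtype=float)))
        if blob.sum() > 50.0 and motion > 1.5:
            return "ear_or_palm"       # large patch while the device is in motion
        return "finger"

    def on_contact(touch_blob, accel_axis, display):
        # 'display' is a hypothetical object exposing turn_on()/turn_off().
        if classify_contact(touch_blob, accel_axis) == "ear_or_palm":
            display.turn_off()         # disable the screen / touch function
        else:
            display.turn_on()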

Patent
15 Nov 2018
TL;DR: Edge contact on a touch screen is treated as a separate class of touch events, so that touches occurring near the edge of the screen are processed by a classifier configured for edge contacts, while contacts occurring in the approximate middle of the touch screen, which may be wholly digitized, are handled by a different classifier.
Abstract: Methods and apparatus of embodiments of the present invention include a classification system configured to treat edge contact of a touch screen as a separate class of touch events such that any touches occurring near the edge of the touch screen are to be processed by a classifier that is configured to process edge contacts as compared to a classifier that is configured to process other contacts that may occur in the approximate middle of the touch screen which may be wholly digitized. An apparatus may employ two separate and distinct classifiers, including a full touch classifier and an edge touch classifier. The touch screen may be configured to have two different sensing regions to determine which of the two classifiers is appropriate for a touch event.
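
The two-classifier arrangement reduces to a region test followed by dispatch. A minimal sketch, with made-up screen dimensions and edge margin:

    # Hedged sketch: route a touch to the edge or full classifier by region.
    EDGE_MARGIN_PX = 40               # assumed width of the edge sensing region
    SCREEN_W, SCREEN_H = 1080, 1920   # assumed display resolution

    def route_touch(x, y, edge_classifier, full_classifier, touch_data):
        near_edge = (x < EDGE_MARGIN_PX or x > SCREEN_W - EDGE_MARGIN_PX or
                     y < EDGE_MARGIN_PX or y > SCREEN_H - EDGE_MARGIN_PX)
        classifier = edge_classifier if near_edge else full_classifier
        return classifier(touch_data)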

Patent
04 Jan 2018
TL;DR: A system for classifying touch events of different interaction layers includes a touch screen configured to display an interactive element, one or more vibro-acoustic sensors coupled to the touch screen, and a touch event detector configured to monitor the sensors and save the vibro-acoustic signals they sense, wherein the touch event detector is further configured to detect touch events in which the interactive element is touched by a first or a second finger part of a user.
Abstract: A system for classifying touch events of different interaction layers includes a touch screen configured to display an interactive element, one or more vibro-acoustic sensors coupled to the touch screen, a touch event detector configured to monitor the one or more vibro-acoustic sensors and to save vibro-acoustic signals sensed by the one or more vibro-acoustic sensors, wherein the touch event detector is further configured to detect touch events in which the interactive element is touched by a first or a second finger part of a user, and wherein the touch events result in generating the vibro-acoustic signals, and a vibro-acoustic classifier configured to classify the vibro-acoustic signals and activate corresponding functions in the different layers dependent upon which finger part is used.
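
As a hedged sketch of the vibro-acoustic idea, one can extract a few spectral features from the signal produced by a tap, classify which finger part made it (for example pad versus knuckle), and invoke the function bound to the corresponding interaction layer. The features and labels below are illustrative assumptions:

    # Hedged sketch: classify a tap's vibro-acoustic signal by finger part.
    import numpy as np

    def vibro_features(signal):
        s = np.asarray(signal, dtype=float)
        spectrum = np.abs(np.fft.rfft(s * np.hanning(len(s))))
        freqs = np.fft.rfftfreq(len(s))
        centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)
        return np.array([s.max(), s.std(), centroid])

    def handle_tap(signal, classifier, layer_actions):
        """classifier maps features -> 'pad' or 'knuckle'; layer_actions maps that
        label to the function bound to the corresponding interaction layer."""
        part = classifier(vibro_features(signal))
        layer_actions[part]()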

Patent
05 Apr 2018
TL;DR: A system for classifying touch events includes a touch screen configured to display an interactive element, one or more acoustic sensors coupled to the touch screen, and a touch event detector configured to monitor the acoustic sensors and to save the acoustic signals they sense.
Abstract: A system for classifying touch events includes a touch screen configured to display an interactive element, one or more acoustic sensors coupled to the touch screen, a touch event detector configured to monitor the one or more acoustic sensors and to save acoustic signals sensed by the one or more acoustic sensors, wherein the touch event detector is further configured to detect touch events in which the interactive element is touched by a first or a second finger part of a user, and wherein the touch events result in generating the acoustic signals, and an acoustic classifier configured to classify the acoustic signals.
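
The "touch event detector" in this patent family monitors the sensors and saves the signal around each touch. One simple way to do that, sketched below under assumed sample-rate and threshold values, is amplitude-threshold onset detection over the incoming samples; the saved windows would then be handed to the acoustic classifier.

    # Hedged sketch: save a short acoustic window whenever a touch onset is detected.
    import numpy as np

    def detect_touch_events(samples, sr=48000, threshold=0.2, window_ms=20):
        samples = np.asarray(samples, dtype=float)
        win = int(sr * window_ms / 1000)
        events, i = [], 0
        while i < len(samples) - win:
            if abs(samples[i]) > threshold:
                events.append(samples[i:i + win])   # saved for the acoustic classifier
                i += win                            # skip past this event
            else:
                i += 1
        return events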