Chris Harrison

Researcher at Carnegie Mellon University

Publications - 176
Citations - 9846

Chris Harrison is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics: Touchscreen & Mobile device. The author has an h-index of 47, co-authored 175 publications receiving 8457 citations. Previous affiliations of Chris Harrison include AT&T & M&Co..

Papers
Proceedings ArticleDOI

Expanding the input expressivity of smartwatches with mechanical pan, twist, tilt and click

TL;DR: This work proposes a complementary input approach, using the watch face as a multi-degree-of-freedom mechanical interface, and develops a proof-of-concept smartwatch that supports continuous 2D panning and twist as well as binary tilt and click.
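
As a rough illustration of this mechanical-interface idea, the sketch below dispatches pan, twist, tilt, and click events to example actions; the event structure and action names are assumptions for the example, not the paper's API.

```python
# Illustrative sketch only: routing mechanical watch-face inputs
# (continuous 2-D pan, continuous twist, binary tilt and click) to actions.
from dataclasses import dataclass

@dataclass
class FaceEvent:
    kind: str                 # "pan", "twist", "tilt", or "click"
    dx: float = 0.0           # pan delta (continuous, 2-D)
    dy: float = 0.0
    angle: float = 0.0        # twist delta in degrees (continuous)
    pressed: bool = False     # tilt state (binary)

def dispatch(event: FaceEvent):
    """Route each mechanical input to an example application action."""
    if event.kind == "pan":
        print(f"scroll map by ({event.dx}, {event.dy})")
    elif event.kind == "twist":
        print(f"adjust volume by {event.angle} degrees of twist")
    elif event.kind == "tilt":
        print("go back" if event.pressed else "tilt released")
    elif event.kind == "click":
        print("confirm selection")

dispatch(FaceEvent(kind="twist", angle=15.0))
```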
Proceedings ArticleDOI

ZoomBoard: a diminutive qwerty soft keyboard using iterative zooming for ultra-small devices

TL;DR: This work presents ZoomBoard, a soft keyboard interaction technique that enables text entry on ultra-small devices; the design is based on a QWERTY layout so that it is immediately familiar to users and leverages existing skill.
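
The iterative-zooming idea can be sketched in a few lines: while many keys are visible, each tap magnifies the region around the finger; once keys are large enough, a tap selects. The 1-D row model, zoom factor, and size threshold below are illustrative assumptions, not values from the paper.

```python
# Minimal 1-D sketch of ZoomBoard-style iterative zooming (illustrative only).
ROW = "qwertyuiop"
ZOOM = 2            # halve the number of visible keys per zooming tap (assumed)
MAX_VISIBLE = 3     # once <= 3 keys fill the screen, a tap selects (assumed)

def iterative_select(tap_fractions):
    """`tap_fractions` are tap positions as fractions of the screen width (0..1)."""
    lo, hi = 0, len(ROW)                                   # visible key range [lo, hi)
    for frac in tap_fractions:
        visible = hi - lo
        key = lo + min(int(frac * visible), visible - 1)   # key under the finger
        if visible <= MAX_VISIBLE:
            return ROW[key]                                # keys are big: this tap selects
        # Keys still too small: zoom in, keeping the tapped key roughly centered.
        visible = max(MAX_VISIBLE, visible // ZOOM)
        lo = max(0, min(key - visible // 2, len(ROW) - visible))
        hi = lo + visible
    return None                                            # ran out of taps before selecting

# Three taps near the left of the screen home in on a key in that region.
print(iterative_select([0.3, 0.4, 0.5]))
```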
Proceedings ArticleDOI

Synthetic Sensors: Towards General-Purpose Sensing

TL;DR: This work explores the notion of general-purpose sensing, wherein a single enhanced sensor can indirectly monitor a large context without direct instrumentation of objects, through what are called Synthetic Sensors.
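
A minimal sketch of this pipeline, under assumed channel names and a made-up example classifier: raw multi-channel data is featurized, and each "synthetic sensor" is a predicate over those features; this is not the paper's pipeline.

```python
# Illustrative sketch of general-purpose sensing: one multi-channel board
# streams raw data, per-channel features are computed, and each "synthetic
# sensor" reports a higher-level fact about the environment.
import statistics

def featurize(window):
    """Compute simple per-channel features over a window of raw samples."""
    return {
        ch: {"mean": statistics.mean(vals), "std": statistics.pstdev(vals)}
        for ch, vals in window.items()
    }

def faucet_running(features):
    """Example synthetic sensor: infer 'faucet running' from sound variability."""
    return features["microphone"]["std"] > 0.2   # threshold is an assumption

window = {
    "microphone": [0.0, 0.5, -0.4, 0.6, -0.5],   # noisy: water sound
    "vibration": [0.01, 0.02, 0.01, 0.02, 0.01],
}
print("faucet running:", faucet_running(featurize(window)))
```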
Proceedings ArticleDOI

Electrick: Low-Cost Touch Sensing Using Electric Field Tomography

TL;DR: It is shown that Electrick can enable new interactive opportunities on a diverse set of objects and surfaces that were previously static, and that it is compatible with commonplace manufacturing methods, such as spray/brush coating, vacuum forming, and casting/molding, enabling a wide range of possible uses and outputs.
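
A rough sketch of an electric-field-tomography sensing loop, with the hardware stubbed out: current is injected across perimeter electrode pairs, voltages are read at the remaining pairs, and a touch is detected as a deviation from a no-touch baseline. The electrode count, scan pattern, and threshold are assumptions for illustration, not the paper's implementation.

```python
# Illustrative scan loop for electric-field-tomography touch sensing.
import random

N_ELECTRODES = 8        # electrodes around the surface perimeter (assumed)
TOUCH_THRESHOLD = 0.05  # deviation (volts) treated as a touch (assumed)

def inject_and_measure(src, sink, sense_a, sense_b):
    """Stub for the hardware: drive current src->sink, read V(sense_a)-V(sense_b)."""
    return random.gauss(1.0, 0.01)      # placeholder reading

def scan():
    """One tomographic frame: every adjacent drive pair x adjacent sense pair."""
    frame = []
    for i in range(N_ELECTRODES):
        src, sink = i, (i + 1) % N_ELECTRODES
        for j in range(N_ELECTRODES):
            a, b = j, (j + 1) % N_ELECTRODES
            if {a, b} & {src, sink}:
                continue                 # skip sense pairs that share a drive electrode
            frame.append(inject_and_measure(src, sink, a, b))
    return frame

def touch_detected(frame, baseline):
    """A touch shunts current to ground, so readings deviate from the baseline."""
    return max(abs(f - b) for f, b in zip(frame, baseline)) > TOUCH_THRESHOLD

baseline = scan()                        # captured with nothing touching the surface
print("touch:", touch_detected(scan(), baseline))
```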
Proceedings ArticleDOI

Air+touch: interweaving touch & in-air gestures

TL;DR: This work presents Air+Touch, a new class of interactions that interweave touch events with in-air gestures, offering a unified input modality with greater expressiveness than either modality alone.
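
One way to picture the interweaving is to pair each in-air gesture with nearby touch events by time, as in the sketch below; the event format, time window, and action labels are assumptions, not the paper's implementation.

```python
# Illustrative pairing of in-air gestures with touch events by timestamp,
# so a gesture can modify the touch that precedes, follows, or surrounds it.
TOUCH_WINDOW_S = 0.5   # assumed max gap for a gesture to pair with a touch

def interpret(touches, gestures):
    """touches/gestures are lists of (timestamp, name); returns combined actions."""
    actions = []
    for g_time, g_name in gestures:
        before = [t for t, _ in touches if 0 <= g_time - t <= TOUCH_WINDOW_S]
        after = [t for t, _ in touches if 0 <= t - g_time <= TOUCH_WINDOW_S]
        if before and after:
            actions.append(f"{g_name} between touches")   # touch, gesture, touch
        elif before:
            actions.append(f"{g_name} after touch")       # touch then in-air gesture
        elif after:
            actions.append(f"{g_name} before touch")      # in-air gesture primes the touch
        else:
            actions.append(f"{g_name} alone")
    return actions

print(interpret(touches=[(1.0, "tap")], gestures=[(1.2, "lift-and-circle")]))
```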