
Showing papers by "Chris Harris published in 2022"


Journal ArticleDOI
TL;DR: In this paper, the authors combined geochemical and mineralogical analyses, rock physical property measurements, drone-based photogrammetry, and geoinformatics to explain the locations of dome instabilities at Merapi volcano, Indonesia, showing that a horseshoe-shaped alteration zone that formed in 2014 was subsequently buried by renewed lava extrusion in 2018.
Abstract: Catastrophic lava dome collapse is considered an unpredictable volcanic hazard because the physical properties, stress conditions, and internal structure of lava domes are not well understood and can change rapidly through time. To explain the locations of dome instabilities at Merapi volcano, Indonesia, we combined geochemical and mineralogical analyses, rock physical property measurements, drone-based photogrammetry, and geoinformatics. We show that a horseshoe-shaped alteration zone that formed in 2014 was subsequently buried by renewed lava extrusion in 2018. Drone data, as well as geomechanical, mineralogical, and oxygen isotope data suggest that this zone is characterized by high-porosity hydrothermally altered materials that are mechanically weak. We additionally show that the new lava dome is currently collapsing along this now-hidden weak alteration zone, highlighting that a detailed understanding of dome architecture, made possible using the monitoring techniques employed here, is essential for assessing hazards associated with dome and edifice failure at volcanoes worldwide.

14 citations


Proceedings ArticleDOI
29 Apr 2022
TL;DR: In this paper, a beamforming array of ultrasonic transducers is used to render haptic effects onto the mouth, including point impulses, swipes, and persistent vibrations, which can be incorporated into new and interesting VR experiences.
Abstract: Today’s consumer virtual reality (VR) systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second in tactile sensitivity to the fingertips, offering a unique opportunity to add fine-grained haptic effects. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers, which can render haptic effects onto the mouth. Importantly, all components are integrated into the headset, meaning the user does not need to wear an additional accessory, or place any external infrastructure in their room. We explored several effects, including point impulses, swipes, and persistent vibrations. Our haptic sensations can be felt on the lips, teeth and tongue, which can be incorporated into new and interesting VR experiences.

11 citations


Journal ArticleDOI


TL;DR: This work first makes use of power and compute-optimized IMUs sampled at 50 Hz to act as a trigger for detecting activity events and uses a multimodal deep learning model that augments the motion data with audio data captured on a smartwatch, achieving recognition accuracy of 92.2% across 26 daily activities in four indoor environments.
Abstract: Despite advances in audio- and motion-based human activity recognition (HAR) systems, a practical, power-efficient, and privacy-sensitive activity recognition system has remained elusive. State-of-the-art activity recognition systems often require power-hungry and privacy-invasive audio data. This is especially challenging for resource-constrained wearables, such as smartwatches. To counter the need for an always-on audio-based activity classification system, we first make use of power and compute-optimized IMUs sampled at 50 Hz to act as a trigger for detecting activity events. Once detected, we use a multimodal deep learning model that augments the motion data with audio data captured on a smartwatch. We subsample this audio to rates ≤ 1 kHz, rendering spoken content unintelligible, while also reducing power consumption on mobile devices. Our multimodal deep learning model achieves a recognition accuracy of 92.2% across 26 daily activities in four indoor environments. Our findings show that subsampling audio from 16 kHz down to 1 kHz, in concert with motion data, does not result in a significant drop in inference accuracy. We also analyze the speech content intelligibility and power requirements of audio sampled at less than 1 kHz and demonstrate that our proposed approach can improve the practicality of human activity recognition systems.
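The privacy-preserving step described above, subsampling audio well below speech rates, can be sketched in a few lines. This is an illustrative toy (hypothetical signal, naive decimation), not the authors' actual pipeline:

```python
import numpy as np

FS_FULL = 16_000   # full-rate capture (Hz)
FS_LOW = 1_000     # privacy-preserving subsampled rate (Hz)

def subsample(audio: np.ndarray, factor: int) -> np.ndarray:
    """Naive decimation: keep every `factor`-th sample.
    (A real pipeline would low-pass filter first to control aliasing.)"""
    return audio[::factor]

t = np.arange(FS_FULL) / FS_FULL             # one second of audio
tone = np.sin(2 * np.pi * 2500 * t)          # 2.5 kHz tone, inside the speech band
low = subsample(tone, FS_FULL // FS_LOW)

# At a 1 kHz rate the Nyquist limit is 500 Hz, so the 2.5 kHz content
# aliases and cannot be recovered -- which is why speech becomes unintelligible.
print(len(low))  # 1000
```

The key point is that a 1 kHz sampling rate caps recoverable content at 500 Hz, below the formant frequencies that carry speech intelligibility, while motion data supplies the discriminative signal for classification.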

7 citations


Proceedings ArticleDOI
27 Apr 2022
TL;DR: A new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation and wireless communication already exist.
Abstract: We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation and wireless communication already exist. By virtue of the hands operating in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. Our pipeline composites multiple camera views together, performs 3D body pose estimation, uses this data to control a rigged human model with inverse kinematics, and exposes the resulting user avatar to end user applications. We developed a series of demo applications illustrating the potential of our approach and more leg-centric interactions, such as balancing games and kicking soccer balls. We describe our proof-of-concept hardware and software, as well as results from our user study, which point to imminent feasibility.

7 citations


DOI
TL;DR: In this paper, the authors examined three exposures of exhumed plate boundary on Kyushu, Japan, which contain subducted sediments and hydrated oceanic crust deformed at ∼300 to ∼500°C.
Abstract: Geophysical observations indicate that patches of localized fracturing occur within otherwise viscous regions of subduction plate boundaries. These observations place uncertainty on the possible down‐dip extent of the seismogenic zone, and as a result the maximum magnitude of subduction thrust earthquakes. However, the processes controlling where and how localized fracturing occurs within otherwise viscous shear zones are unclear. We examined three exposures of exhumed plate boundary on Kyushu, Japan, which contain subducted sediments and hydrated oceanic crust deformed at ∼300 to ∼500°C. These exposures preserve subduction‐related viscous deformation, which in two of the studied exposures has a mutually overprinting relationship with quartz veins, indicating localized cyclical embrittlement. Where observed, fractures are commonly near lithological contacts that form viscosity contrasts. Mineral equilibrium calculations for a metabasalt composition indicate that exposures showing cyclical embrittlement deformed at pressure‐temperature conditions near dehydration reactions that consume prehnite and chlorite. In contrast, dominantly viscous deformation occurred at intervening pressure‐temperature conditions. We infer that at conditions close to metamorphic dehydration reactions, only small stress perturbations are required for transient embrittlement, driven by localized dehydration reactions reducing effective stress, and/or locally increased shear stresses along rheological contrasts. Our results show that the protolith composition of the subducting oceanic lithosphere controls the locations and magnitudes of dehydration reactions, and the viscosity of metamorphosed oceanic crust. Therefore, compositional variations might drive substantial variations in slip style.

6 citations



Journal ArticleDOI
TL;DR: In this paper, the authors report the oxygen isotope compositions of four apatite reference materials (chlorapatite MGMH#133648 and fluorapatite specimens MGMH#128441A, MZ‐TH and ES‐MM).
Abstract: Here we report on the oxygen isotope compositions of four proposed apatite reference materials (chlorapatite MGMH#133648 and fluorapatite specimens MGMH#128441A, MZ‐TH and ES‐MM). The samples were initially screened for 18O/16O homogeneity using secondary ion mass spectrometry (SIMS), followed by δ18O determinations in six gas source isotope ratio mass spectrometry (GS‐IRMS) laboratories using a variety of analytical protocols for determining either phosphate‐bonded or “bulk” oxygen compositions. We also report preliminary δ17O and Δ’17O data, major and trace element compositions collected using EPMA, as well as CO32− and OH− contents in the apatite structure assessed using thermogravimetric analysis and infrared spectroscopy. The repeatability of our SIMS measurements was better than ± 0.25 ‰ (1s) for all four materials, which cover a wide range of 10³δ18O values between +5.8 and +21.7. The GS‐IRMS results show, however, a significant offset of 10³δ18O values between the “phosphate” and “bulk” analyses that could not be correlated with chemical characteristics of the studied samples. Therefore, we provide two sets of working values specific to these two classes of analytical methodologies as well as current working values for SIMS data calibration.
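For readers unfamiliar with δ-notation, the reported values follow the standard permil definition relative to VSMOW. A minimal sketch (the VSMOW ratio below is a commonly cited literature value, not taken from this paper):

```python
R_VSMOW = 0.0020052  # 18O/16O ratio of the VSMOW standard (commonly cited value)

def delta18O(r_sample: float) -> float:
    """Permil (parts-per-thousand) deviation of a sample's 18O/16O from VSMOW."""
    return (r_sample / R_VSMOW - 1) * 1000

# A ratio 2.17% above VSMOW corresponds to the top of the reported range.
print(round(delta18O(R_VSMOW * 1.0217), 1))  # 21.7
```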

5 citations


Journal ArticleDOI
TL;DR: In this paper , the authors explore the possibility of using whole-rock δ18O and δD values and water contents, metrics that can potentially track alteration, to estimate the strength (compressive and tensile) and Young's modulus (i.e. "stiffness") of altered (acid-sulphate) volcanic rocks from La Soufrière de Guadeloupe (Eastern Caribbean).
Abstract: Hydrothermal alteration is considered to increase the likelihood of dome or flank collapse by compromising stability. Understanding how such alteration influences rock properties, and providing independent metrics for alteration that can be used to estimate these parameters, is therefore important to better assess volcanic hazards and mitigate risk. We explore the possibility of using whole-rock δ18O and δD values and water contents, metrics that can potentially track alteration, to estimate the strength (compressive and tensile) and Young’s modulus (i.e. “stiffness”) of altered (acid-sulphate) volcanic rocks from La Soufrière de Guadeloupe (Eastern Caribbean). The δ18O values range from 5.8 to 13.2‰, δD values from −151 to −44‰, and water content from 0.3 to 5.1 wt%. We find that there is a good correlation between δ18O values and laboratory-measured strength and Young’s modulus, but that these parameters do not vary systematically with δD or water content (likely due to their pre-treatment at 200 °C). Empirical linear relationships that allow strength and Young’s modulus to be estimated using δ18O values are provided using our new data and published data for Merapi volcano (Indonesia). Our study highlights that δ18O values can be used to estimate the strength and Young’s modulus of volcanic rocks, and could therefore be used to provide parameters for volcano stability modelling. One advantage of this technique is that δ18O analysis only requires a small amount of material, and can therefore provide rock property estimates in scenarios where material is limited, such as borehole cuttings or when sampling large blocks is impracticable.
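The empirical linear relationships mentioned above can be reproduced in form (not in values) with an ordinary least-squares fit. The data below are invented placeholders that only mimic the reported inverse trend, not the paper's measurements:

```python
import numpy as np

# Invented placeholder pairs (d18O in permil, strength in MPa) -- NOT the
# paper's data; they only illustrate the fitting procedure.
d18O = np.array([5.8, 8.0, 10.5, 13.2])
strength = np.array([60.0, 45.0, 30.0, 15.0])

slope, intercept = np.polyfit(d18O, strength, 1)

def estimate_strength(d18o_value: float) -> float:
    """Estimate strength (MPa) from a whole-rock d18O value via the fitted line."""
    return slope * d18o_value + intercept

# More-altered (higher d18O) rock yields a lower estimated strength.
print(estimate_strength(6.0) > estimate_strength(13.0))  # True
```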

3 citations


Journal ArticleDOI
TL;DR: In this paper, the Miocene Fataga Group in Gran Canaria, a silica-undersaturated to mildly saturated alkaline volcanic sequence consisting of trachytic to phonolitic extra-caldera ignimbrites and lavas, is classified as a new low-δ18O felsic locality.
Abstract: The origins of felsic low-δ18O melts (< +5.5 ‰) are usually attributed to assimilation of high-temperature hydrothermally altered (HTHA) rocks. Very few alkaline (silica undersaturated and/or peralkaline) examples are known. Here, we classify the Miocene Fataga Group in Gran Canaria, a silica-undersaturated to mildly saturated alkaline volcanic sequence consisting of trachytic to phonolitic extra-caldera ignimbrites and lavas, as a new low-δ18O felsic locality. We provide new mineral, glass and bulk geochemical data linked to a well-constrained stratigraphy to assess the processes involved in the magma reservoir that fed the Fataga eruptions. New high-precision single crystal feldspar 40Ar/39Ar ages of the study area span 13.931 ± 0.034 Ma to 10.288 ± 0.016 Ma. Fractional crystallisation at shallow depths of sanidine/anorthoclase, biotite, augite/diopside, titanite, ilmenite and titanomagnetite is the main driving process to produce phonolitic magmas from trachytic melts. Evidence of mafic hotter recharge is not found in the field, but some units exhibit trachytic compositions characterised by positive Eu/Eu* anomalies and high Ba contents, interpreted as melts of feldspar-dominated cumulates, the solid remnants of fractional crystallisation. Hence, recharge magmas halted in the crystal mush and provided the heat needed to sustain cumulate melting and volcanic activity. This cumulate signature might be lost if fractional crystallisation continues before the eruption. The interplay among meteoric water, the caldera-fault system, intra-caldera ignimbrites (Mogán Group) and the Fataga magma reservoir favoured assimilation of up to ca. 30% of HTHA rocks. Such assimilation is variable through time and recorded by δ18Omelt values down to +4.73 ‰. We did not find any direct relation between assimilation and silica saturation of the Fataga volcanic deposits.
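The "up to ca. 30%" assimilation estimate is consistent with simple two-component δ18O mass balance. The end-member values below are assumed for illustration only, not taken from the paper:

```python
def assimilated_fraction(d_melt: float, d_magma: float, d_htha: float) -> float:
    """Fraction f of HTHA material in a two-component mix:
    d_melt = f * d_htha + (1 - f) * d_magma, solved for f."""
    return (d_melt - d_magma) / (d_htha - d_magma)

# Assumed end-members (illustrative only): an unaltered felsic melt near
# +6.3 permil and high-temperature hydrothermally altered rocks near +1.0 permil.
f = assimilated_fraction(d_melt=4.73, d_magma=6.3, d_htha=1.0)
print(round(f, 2))  # 0.3 -- i.e. ca. 30% assimilation, as in the abstract
```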

2 citations


Proceedings ArticleDOI
21 Mar 2022
TL;DR: In this paper, a speaker in a ported enclosure is used to deliver air pulses to the skin for haptic actuation, achieving 10 mN time averaged thrusts at an air velocity of 10.4 m/s (4.3 W input power).
Abstract: We propose a new scalable, non-contact haptic actuation technique based on a speaker in a ported enclosure which can deliver air pulses to the skin. The technique is low cost, low voltage, and uses existing electronics. We detail a prototype device's design and construction, and validate a multiple domain impedance model with current, voltage, and pressure measurements. A non-linear phenomenon at the port creates pulsed zero-net-mass-flux flows, so-called “synthetic jets”. Our prototype is capable of 10 mN time averaged thrusts at an air velocity of 10.4 m/s (4.3W input power). A perception study reveals that tactile effects can be detected 25 mm away with only 380 mVrms applied voltage, and 19 mWrms input power.

Proceedings ArticleDOI
28 Oct 2022
TL;DR: DiscoBand, as discussed by the authors, uses eight distributed depth sensors imaging the hand from different viewpoints, creating a sparse 3D point cloud; an additional eight depth sensors image outwards from the band to track the user’s body and surroundings.
Abstract: Real-time tracking of a user’s hands, arms and environment is valuable in a wide variety of HCI applications, from context awareness to virtual reality. Rather than rely on fixed and external tracking infrastructure, the most flexible and consumer-friendly approaches are mobile, self-contained, and compatible with popular device form factors (e.g., smartwatches). In this vein, we contribute DiscoBand, a thin sensing strap not exceeding 1 cm in thickness. Sensors operating so close to the skin inherently face issues with occlusion. To help overcome this, our strap uses eight distributed depth sensors imaging the hand from different viewpoints, creating a sparse 3D point cloud. An additional eight depth sensors image outwards from the band to track the user’s body and surroundings. In addition to evaluating arm and hand pose tracking, we also describe a series of supplemental applications powered by our band’s data, including held object recognition and environment mapping.

Proceedings ArticleDOI
29 Apr 2022
TL;DR: This research demonstrates how the addition of a thin, 2D micro-patterned surface with 5-micron-spaced features can be used to reduce motor-visual touchscreen latency, with a machine learning model that makes accurate predictions of real-time touch location.
Abstract: Touchscreen tracking latency, often 80ms or more, creates a rubber-banding effect in everyday direct manipulation tasks such as dragging, scrolling, and drawing. This has been shown to decrease system preference, user performance, and overall realism of these interfaces. In this research, we demonstrate how the addition of a thin, 2D micro-patterned surface with 5 micron spaced features can be used to reduce motor-visual touchscreen latency. When a finger, stylus, or tangible is translated across this textured surface frictional forces induce acoustic vibrations which naturally encode sliding velocity. This acoustic signal is sampled at 192kHz using a conventional audio interface pipeline with an average latency of 28ms. When fused with conventional low-speed, but high-spatial-accuracy 2D touch position data, our machine learning model can make accurate predictions of real time touch location.
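The fusion idea, correcting a delayed position report with acoustically derived sliding velocity, can be caricatured as linear dead reckoning. The paper uses a learned model; this sketch only conveys the intuition, and all values are hypothetical:

```python
def predict_touch(pos_xy, vel_xy, latency_s):
    """Dead-reckon the delayed touch position forward using sliding velocity.
    (The paper's machine learning model replaces this simple linear rule.)"""
    x, y = pos_xy
    vx, vy = vel_xy
    return (x + vx * latency_s, y + vy * latency_s)

# A finger reported at (100, 50) px, sliding at 500 px/s in x, under 80 ms
# of tracking latency, is likely already near x = 140.
print(predict_touch((100.0, 50.0), (500.0, 0.0), 0.080))  # (140.0, 50.0)
```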

Proceedings ArticleDOI
28 Oct 2022
TL;DR: EtherPose as mentioned in this paper is a continuous hand pose tracking system employing two wrist-worn antennas, from which they measure the real-time dielectric loading resulting from different hand geometries (i.e., poses).
Abstract: EtherPose is a continuous hand pose tracking system employing two wrist-worn antennas, from which we measure the real-time dielectric loading resulting from different hand geometries (i.e., poses). Unlike worn camera-based methods, our RF approach is more robust to occlusion from clothing and avoids capturing potentially sensitive imagery. Through a series of simulations and empirical studies, we designed a proof-of-concept, worn implementation built around compact vector network analyzers. Sensor data is then interpreted by a machine learning backend, which outputs a fully-posed 3D hand. In a user study, we show how our system can track hand pose with a mean Euclidean joint error of 11.6 mm, even when covered in fabric. We also studied 2DOF wrist angle and micro-gesture tracking. In the future, our approach could be miniaturized and extended to include more and different types of antennas, operating at different self resonances.
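The reported 11.6 mm figure is a mean Euclidean joint error, a standard hand-pose metric. A minimal sketch with toy data and a hypothetical two-joint skeleton:

```python
import numpy as np

def mean_joint_error(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean Euclidean distance over joints; both arrays shaped [n_joints, 3]."""
    return float(np.linalg.norm(pred - truth, axis=1).mean())

# Toy example: two hypothetical joints, each off by a 3-4-0 triangle (5 mm).
truth = np.zeros((2, 3))
pred = np.array([[3.0, 4.0, 0.0],
                 [0.0, 3.0, 4.0]])
print(mean_joint_error(pred, truth))  # 5.0
```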

Proceedings ArticleDOI
28 Sep 2022
TL;DR: This keynote talk will attempt to unpick the forces at play behind this substantive issue of real-world impact of innovation, drawing on both historical examples and my own experiences as an academic and entrepreneur.
Abstract: Human-Computer Interaction (HCI) is very often described as a practical and applied field, tackling real problems faced by real users [1, 2, 5, 7]. And yet, our real-world orientation seems to rarely translate into real-world impact. There are surprisingly few startups coming out of the large HCI community, and very few people among us can rightly claim "that feature [or product] came from my paper!" This naturally begs the question: "Why are we doing all this research if it’s never adopted?!" In this keynote talk, I will attempt to unpick the forces at play behind this substantive issue, drawing on both historical examples and my own experiences as an academic and entrepreneur. In short, there is a constellation of factors – big and small – that contribute to a sort of innovation "fog of war". The fact is, you might be having impact, but never know it or live to see it. The first issue is simple: time. Inventions, and more importantly productization, take a lot of it. With time come many contributors, who evolve ideas and obfuscate provenance. We also tend to mythologize key inventors, elevating them as lone geniuses (a la Thomas Edison), which simplifies and distorts the story of innovation [6]. On top of this, companies rarely speak about where or how ideas started, and who

Journal ArticleDOI


TL;DR: Mites, as discussed by the authors, is a scalable end-to-end hardware-software system for supporting and managing distributed general-purpose sensors in buildings, which includes robust primitives for privacy and security, essential features for scalable data management, as well as machine learning to support diverse applications in buildings.
Abstract: There is increasing interest in deploying building-scale, general-purpose, and high-fidelity sensing to drive emerging smart building applications. However, the real-world deployment of such systems is challenging due to the lack of system and architectural support. Most existing sensing systems are purpose-built, consisting of hardware that senses a limited set of environmental facets, typically at low fidelity and for short-term deployment. Furthermore, prior systems with high-fidelity sensing and machine learning fail to scale effectively and have fewer primitives, if any, for privacy and security. For these reasons, IoT deployments in buildings are generally short-lived or done as a proof of concept. We present the design of Mites, a scalable end-to-end hardware-software system for supporting and managing distributed general-purpose sensors in buildings. Our design includes robust primitives for privacy and security, essential features for scalable data management, as well as machine learning to support diverse applications in buildings. We deployed our Mites system and 314 Mites devices in Tata Consultancy Services (TCS) Hall at Carnegie Mellon University (CMU), a fully occupied, five-story university building. We present a set of comprehensive evaluations of our system using a series of microbenchmarks and end-to-end evaluations to show how we achieved our stated design goals. We include five proof-of-concept applications to demonstrate the extensibility of the Mites system to support compelling IoT applications. Finally, we discuss the real-world challenges we faced and the lessons we learned over the five-year journey of our stack's iterative design, development, and deployment.

Proceedings ArticleDOI
07 Nov 2022
TL;DR: In this article, the authors propose pull gestures, a multimodal interaction combining on-screen touch input with in-air movement, to take advantage of the unique geometry of dual-screen devices.
Abstract: A new class of dual-touchscreen device is beginning to emerge, either constructed as two screens hinged together, or as a single display that can fold. The interactive experience on these devices is simply that of two 2D touchscreens, with little to no synergy between the interactive areas. In this work, we consider how this unique, emerging form factor creates an interesting 3D niche, in which out-of-plane interactions on one screen can be supported with coordinated graphics in the other orthogonal screen. Following insights from an elicitation study, we focus on "pull gestures", a multimodal interaction combining on-screen touch input with in air movement. These naturally complement traditional multitouch gestures such as tap and pinch, and are an intriguing and useful way to take advantage of the unique geometry of dual-screen devices.

Proceedings ArticleDOI
07 Nov 2022
TL;DR: In this article, the authors explore an approach between these two extremes: one that retains the simple, low-cost nature of printed markers, yet has some of the expressive capabilities of dynamic tags.
Abstract: Printed fiducial markers are inexpensive, easy to deploy, robust and deservedly popular. However, their data payload is also static, unable to express any state beyond being present. For this reason, more complex electronic tagging technologies exist, which can sense and change state, but either require special equipment to read or are orders of magnitude more expensive than printed markers. In this work, we explore an approach between these two extremes: one that retains the simple, low-cost nature of printed markers, yet has some of the expressive capabilities of dynamic tags. Our “DynaTags” are simple mechanisms constructed from paper that express multiple payloads, allowing practitioners and researchers to create new and compelling physical-digital experiences. We describe a library of 23 mechanisms that can be read by standard smartphone reader apps. Through a series of demo applications (augmenting reality through e.g., sounds, environmental lighting and graphics) we show how our tags can bring new interactivity to previously static experiences.

Proceedings ArticleDOI
07 Nov 2022
TL;DR: In this paper, the authors present a gaze tracking system that makes use of today's smartphone depth camera technology to adapt to the changes in distance and orientation relative to the user's face.
Abstract: Tracking a user’s gaze on smartphones offers the potential for accessible and powerful multimodal interactions. However, phones are used in a myriad of contexts and state-of-the-art gaze models that use only the front-facing RGB cameras are too coarse and do not adapt adequately to changes in context. While prior research has showcased the efficacy of depth maps for gaze tracking, they have been limited to desktop-grade depth cameras, which are more capable than the types seen in smartphones, that must be thin and low-powered. In this paper, we present a gaze tracking system that makes use of today’s smartphone depth camera technology to adapt to the changes in distance and orientation relative to the user’s face. Unlike prior efforts that used depth sensors, we do not constrain the users to maintain a fixed head position. Our approach works across different use contexts in unconstrained mobile settings. The results show that our multimodal ML model has a mean gaze error of 1.89 cm; a 16.3% improvement over using RGB data alone (2.26 cm error). Our system and dataset offer the first benchmark of gaze tracking on smartphones using RGB+Depth data under different use contexts.
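The quoted improvement follows directly from the two published error figures; a quick arithmetic check (the small discrepancy with the paper's 16.3% presumably reflects the unrounded underlying errors):

```python
def relative_improvement(baseline: float, improved: float) -> float:
    """Percent reduction in error relative to the baseline."""
    return (baseline - improved) / baseline * 100

pct = relative_improvement(baseline=2.26, improved=1.89)
# 16.4% from the rounded figures; the paper reports 16.3%.
print(round(pct, 1))
```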


Proceedings ArticleDOI
29 Apr 2022
TL;DR: The technique is compatible with industrial and hobbyist cutting processes, from die and laser cutting to handheld exacto-knives and scissors, and can create self-actuating 3D objects for just a few cents, opening new uses in low-cost consumer goods.
Abstract: We describe how sheets of metalized mylar can be cut and then “inflated” into complex 3D forms with electrostatic charge for use in digitally-controlled, shape-changing displays. This is achieved by placing and nesting various cuts, slits and holes such that mylar elements repel from one another to reach an equilibrium state. Importantly, our technique is compatible with industrial and hobbyist cutting processes, from die and laser cutting to handheld exacto-knives and scissors. Given that mylar film costs <$1 per m2, we can create self-actuating 3D objects for just a few cents, opening new uses in low-cost consumer goods. We describe a design vocabulary, interactive simulation tool, fabrication guide, and proof-of-concept electrostatic actuation hardware. We detail our technique’s performance metrics along with qualitative feedback from a design study. We present numerous examples generated using our pipeline to illustrate the rich creative potential of our method.