
Showing papers presented at the "International Symposium on Wearable Computers" in 2018


Proceedings ArticleDOI
08 Oct 2018
TL;DR: Wang et al. as mentioned in this paper proposed two attention models for human activity recognition, namely, temporal attention and sensor attention, which adaptively focus on important signals and sensor modalities.
Abstract: Deep neural networks, including recurrent networks, have been successfully applied to human activity recognition. Unfortunately, the final representation learned by recurrent networks might encode some noise (irrelevant signal components, unimportant sensor modalities, etc.). Besides, it is difficult to interpret the recurrent networks to gain insight into the models' behavior. To address these issues, we propose two attention models for human activity recognition: temporal attention and sensor attention. These two mechanisms adaptively focus on important signals and sensor modalities. To further improve the understandability and mean F1 score, we add continuity constraints, considering that continuous sensor signals are more robust than discrete ones. We evaluate the approaches on three datasets and obtain state-of-the-art results. Furthermore, qualitative analysis shows that the attention learned by the models agrees well with human intuition.

136 citations
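
As a rough illustration of the temporal-attention idea described in this abstract, the PyTorch sketch below scores each time step of a recurrent encoding, normalizes the scores with a softmax, and pools the sequence with the resulting weights. Layer sizes, the single-layer LSTM, and the input shape are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of temporal attention over recurrent features for HAR.
# Shapes and hyperparameters are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class TemporalAttentionHAR(nn.Module):
    def __init__(self, n_channels=9, hidden=64, n_classes=6):
        super().__init__()
        self.rnn = nn.LSTM(n_channels, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)       # one relevance score per time step
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, channels)
        h, _ = self.rnn(x)                      # h: (batch, time, hidden)
        alpha = torch.softmax(self.score(h), dim=1)   # (batch, time, 1), sums to 1 over time
        context = (alpha * h).sum(dim=1)        # attention-weighted summary of the window
        return self.classifier(context), alpha  # logits plus attention weights for inspection

model = TemporalAttentionHAR()
logits, weights = model(torch.randn(4, 128, 9))     # 4 windows of 128 samples, 9 sensor channels
print(logits.shape, weights.shape)                  # torch.Size([4, 6]) torch.Size([4, 128, 1])
```

The returned attention weights can be plotted per window to inspect which parts of the signal drive a prediction, mirroring the kind of qualitative analysis the abstract mentions.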


Proceedings ArticleDOI
08 Oct 2018
TL;DR: This paper introduces attention models into HAR research as a data-driven approach for exploring relevant temporal context and constructs attention models for HAR by adding attention layers to a state-of-the-art deep learning HAR model (DeepConvLSTM).
Abstract: Deep Learning methods have become very attractive in the wider, wearables-based human activity recognition (HAR) research community. The majority of models are based on either convolutional or explicitly temporal models, or combinations of both. In this paper we introduce attention models into HAR research as a data-driven approach for exploring relevant temporal context. Attention models learn a set of weights over input data, which we leverage to weight the temporal context being considered to model each sensor reading. We construct attention models for HAR by adding attention layers to a state-of-the-art deep learning HAR model (DeepConvLSTM) and evaluate our approach on benchmark datasets, achieving a significant increase in performance. Finally, we visualize the learned weights to better understand what constitutes relevant temporal context.

86 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: The PhysioHMD is introduced, a software and hardware modular interface built for collecting affect and physiological data from users wearing a head-mounted display that enables researchers and developers to aggregate and interpret signals in real time.
Abstract: Virtual and augmented reality headsets are unique as they have access to our facial area: an area that presents an excellent opportunity for always-available input and insight into the user's state. Their position on the face makes it possible to capture bio-signals as well as facial expressions. This paper introduces the PhysioHMD, a software and hardware modular interface built for collecting affect and physiological data from users wearing a head-mounted display. The PhysioHMD platform is a flexible architecture that enables researchers and developers to aggregate and interpret signals in real time, and to use them to develop novel, personalized interactions and evaluate virtual experiences. It offers an interface that is not only easy to extend but is also complemented by a suite of tools for testing and analysis. We hope that PhysioHMD can become a universal, publicly available testbed for VR and AR researchers.

33 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: It is shown that by visualising each child's engagement over the course of a performance, it is possible to highlight subtle moments of social coordination that might otherwise be lost when reviewing video footage alone.
Abstract: We introduce a method of using wrist-worn accelerometers to measure non-verbal social coordination within a group that includes autistic children. Our goal was to record and chart the children's social engagement - measured using interpersonal movement synchrony - as they took part in a theatrical workshop that was specifically designed to enhance their social skills. Interpersonal synchrony, an important factor of social engagement that is known to be impaired in autism, is calculated using a cross-wavelet similarity comparison between participants' movement data. We evaluate the feasibility of the approach over 3 live performances, each lasting 2 hours, using 6 actors and a total of 10 autistic children. We show that by visualising each child's engagement over the course of a performance, it is possible to highlight subtle moments of social coordination that might otherwise be lost when reviewing video footage alone. This is important because it points the way to a new method for people who work with autistic children to be able to monitor the development of those in their care, and to adapt their therapeutic activities accordingly.

30 citations
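
The synchrony measure above relies on cross-wavelet analysis of paired accelerometer streams. The sketch below, using PyWavelets, computes one simple time-frequency similarity between two movement signals; the Morlet wavelet, scale range, and normalization are assumptions, and this is not the authors' exact cross-wavelet similarity.

```python
# Rough sketch of comparing two wrist-accelerometer signals in the time-frequency
# domain with a continuous wavelet transform. Illustrative only.
import numpy as np
import pywt

def wavelet_similarity(x, y, fs=50.0, scales=np.arange(1, 64)):
    """Per-sample similarity of two equal-length 1-D movement signals."""
    cx, _ = pywt.cwt(x, scales, 'morl', sampling_period=1.0 / fs)   # (scales, time)
    cy, _ = pywt.cwt(y, scales, 'morl', sampling_period=1.0 / fs)
    num = np.abs(cx * cy).sum(axis=0)                  # co-occurring energy across scales
    den = np.sqrt((cx ** 2).sum(axis=0) * (cy ** 2).sum(axis=0)) + 1e-9
    return num / den                                    # values near 1 = similar spectral content

fs = 50.0
t = np.arange(0, 10, 1 / fs)
child = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)   # synthetic movement
actor = np.sin(2 * np.pi * 1.5 * t + 0.3) + 0.1 * np.random.randn(t.size)
sync = wavelet_similarity(child, actor, fs)
print(sync.mean())   # average "synchrony" over the session
```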


Proceedings ArticleDOI
08 Oct 2018
TL;DR: This work demonstrates application examples using the texture-tunable skin overlay as wearable, interactive protection for scenarios including: a carpal tunnel splint for rehabilitation, a protective layer for joints when engaging in high impact activities, and foot pads when wearing uncomfortable shoes.
Abstract: SkinMorph is an on-skin interface which can selectively transition between soft and rigid states to serve as a texture-tunable wearable skin output. This texture change is made possible through the material design of smart hydrophilic gels. These gels are soft in their resting state, yet when activated by heat (>36°C), they undergo a micro-level structural change which results in observable stiffening. These gels are encapsulated in thin silicone patterned with resistive wires through a sew-and-transfer fabrication approach. We demonstrate application examples using the texture-tunable skin overlay as wearable, interactive protection for scenarios including: a carpal tunnel splint for rehabilitation, a protective layer for joints when engaging in high impact activities, and foot pads when wearing uncomfortable shoes. Our evaluation shows that the gel is 10 times stiffer when activated, and that users find the device skin-conformable.

28 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this paper, a multi-sensory haptic device called MISSIVE is proposed, which can be worn on the upper arm and is capable of producing brief cues, sufficient in quantity to encode the full English phoneme set.
Abstract: In our daily lives, we rely heavily on our visual and auditory channels to receive information from others. In the case of impairment, or when large amounts of information are already transmitted visually or aurally, alternative methods of communication are needed. A haptic language offers the potential to provide information to a user when visual and auditory channels are unavailable. Previously created haptic languages include deconstructing acoustic signals into features and displaying them through a haptic device, and haptic adaptations of Braille or Morse code; however, these approaches are unintuitive, slow at presenting language, or require a large surface area. We propose using a multi-sensory haptic device called MISSIVE, which can be worn on the upper arm and is capable of producing brief cues, sufficient in quantity to encode the full English phoneme set. We evaluated our approach by teaching subjects a subset of 23 phonemes, and demonstrated an 86% accuracy in a 50 word identification task after 100 minutes of training.

27 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this paper, the authors explore four variants of adding temporal structure to distribution-based features and demonstrate their potential for statistically significant improvements in activity recognition; the addition of temporal structure comes with a moderate increase in computational complexity, rendering the proposed methods applicable to mobile and embedded scenarios.
Abstract: Feature extraction is a critical step in sliding-window based standard activity recognition chains. Recently, distribution-based features have been introduced that showed excellent generalization capabilities across a wide range of application domains in human activity recognition scenarios based on body-worn sensors. These features capture the data distribution of individual analysis frames, yet they ignore temporal structure inherent to the signal of a frame. We explore four variants of adding temporal structure to distribution-based features and demonstrate their potential for statistically significant improvements in activity recognition in general. The addition of temporal structure comes with a moderate increase in computational complexity, rendering the proposed methods applicable to mobile and embedded scenarios.

25 citations
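
To make the idea concrete, the sketch below computes per-frame distribution features (channel-wise quantiles) and one of many possible ways to add temporal structure: splitting the frame into ordered sub-frames and concatenating their distribution features. The quantile count and sub-frame split are illustrative assumptions, not necessarily one of the paper's four variants.

```python
# Sketch of distribution-based features and a simple temporal extension.
import numpy as np

def distribution_features(frame, n_quantiles=10):
    """frame: (samples, channels) -> flattened per-channel quantiles."""
    qs = np.linspace(0, 1, n_quantiles)
    return np.quantile(frame, qs, axis=0).ravel()

def temporal_distribution_features(frame, n_subframes=4, n_quantiles=10):
    """Concatenate distribution features of consecutive sub-frames to keep temporal order."""
    parts = np.array_split(frame, n_subframes, axis=0)
    return np.concatenate([distribution_features(p, n_quantiles) for p in parts])

frame = np.random.randn(128, 3)                      # one 128-sample window, 3 accel axes
print(distribution_features(frame).shape)            # (30,)  = 10 quantiles x 3 channels
print(temporal_distribution_features(frame).shape)   # (120,) = 4 sub-frames x 30
```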


Proceedings ArticleDOI
08 Oct 2018
TL;DR: A solution for tracking gaits and jumps using a smartphone attached to the horse's saddle is proposed, together with an event detection algorithm based on the Discrete Wavelet Transform and peak detection to detect jumps and canter strides between fences.
Abstract: In modern showjumping and cross-country riding, the success of the horse-rider pair is measured by the ability to finish a given course of obstacles without penalties within a given time. A horse performs a successful (penalty-free) jump if no element of the fence falls during the jump. The success of each jump is determined by the correct take-off point of the horse in front of the fence and the number of strides the horse takes between fences. This paper proposes a solution for tracking gaits and jumps using a smartphone attached to the horse's saddle. We propose an event detection algorithm based on the Discrete Wavelet Transform and peak detection to detect jumps and canter strides between fences. We segment the signal to find gait and jump sections, evaluate statistical and heuristic features, and classify the segments using different machine learning algorithms. We show that horse jumps and canter strides are detected with a precision of 94.6% and a recall of 89.8%. All gaits and jumps are further classified with an accuracy of up to 95.4% and a Kappa coefficient (KC) of up to 93%.

23 citations
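
As a hedged sketch of this kind of pipeline, the code below denoises an acceleration-magnitude signal with a discrete wavelet transform and then flags candidate events with peak detection (SciPy). The wavelet, decomposition level, and threshold are assumptions, not the paper's tuned values.

```python
# Illustrative DWT smoothing followed by peak detection for candidate jump/stride events.
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_events(accel_magnitude, fs=100, wavelet='db4', level=4, min_gap_s=0.8):
    # Keep only the coarse approximation coefficients to suppress high-frequency noise.
    coeffs = pywt.wavedec(accel_magnitude, wavelet, level=level)
    for i in range(1, len(coeffs)):
        coeffs[i] = np.zeros_like(coeffs[i])
    smooth = pywt.waverec(coeffs, wavelet)[: len(accel_magnitude)]
    # Peaks well above the signal's own spread are treated as candidate events.
    height = smooth.mean() + 2 * smooth.std()
    peaks, _ = find_peaks(smooth, height=height, distance=int(min_gap_s * fs))
    return peaks

fs = 100
t = np.arange(0, 20, 1 / fs)
signal = np.random.randn(t.size) * 0.2
signal[500::700] += 5.0                  # synthetic "jump" spikes every 7 s
print(detect_events(signal, fs))
```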


Proceedings ArticleDOI
08 Oct 2018
TL;DR: Touch-Sense as discussed by the authors uses a neural network architecture to classify the finger touches using EMG data and estimate their force on a smartphone in real time based on data recorded from the sensors of an inexpensive and wireless EMG armband.
Abstract: Identifying the finger used for touching and measuring the force of the touch provides valuable information on manual interactions. This information can be inferred from electromyography (EMG) of the forearm, measuring the activation of the muscles controlling the hand and fingers. We present Touch-Sense, which classifies the finger touches using a novel neural network architecture and estimates their force on a smartphone in real time based on data recorded from the sensors of an inexpensive and wireless EMG armband. Using data collected from 18 participants with force ground truth, we evaluate our system's performance and limitations. Our system could allow for new interaction paradigms with appliances and objects, which we exemplarily showcase in four applications.

23 citations
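
A minimal sketch of a multi-task model in this spirit is shown below: a shared 1-D convolutional encoder over EMG windows feeds a finger-classification head and a force-regression head. The channel count (8, as on common consumer EMG armbands), layer sizes, and joint loss are assumptions, not the paper's novel architecture.

```python
# Hedged multi-task sketch: classify the touching finger and regress touch force from EMG.
import torch
import torch.nn as nn

class EMGTouchNet(nn.Module):
    def __init__(self, n_channels=8, n_fingers=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.finger_head = nn.Linear(64, n_fingers)   # which finger touched
        self.force_head = nn.Linear(64, 1)            # how hard (arbitrary units)

    def forward(self, x):                             # x: (batch, channels, samples)
        z = self.encoder(x)
        return self.finger_head(z), self.force_head(z).squeeze(-1)

net = EMGTouchNet()
emg = torch.randn(4, 8, 200)                          # four 200-sample EMG windows
finger_logits, force = net(emg)
loss = nn.CrossEntropyLoss()(finger_logits, torch.randint(0, 5, (4,))) \
     + nn.MSELoss()(force, torch.rand(4))             # joint training objective (sketch)
print(finger_logits.shape, force.shape, float(loss))
```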


Proceedings ArticleDOI
08 Oct 2018
TL;DR: WristWash is presented, a wrist-worn sensing platform that integrates an inertial measurement unit and a Hidden Markov Model-based analysis method that enables automated assessments of handwashing routines according to recommendations provided by the World Health Organization.
Abstract: Washing hands is one of the easiest yet most effective ways to prevent spreading illnesses and diseases. However, not adhering to thorough handwashing routines is a substantial problem worldwide. For example, in hospital operations a lack of hygiene leads to healthcare-associated infections. We present WristWash, a wrist-worn sensing platform that integrates an inertial measurement unit and a Hidden Markov Model-based analysis method that enables automated assessments of handwashing routines according to recommendations provided by the World Health Organization (WHO). We evaluated WristWash in a case study with 12 participants. WristWash is able to successfully recognize the 13 steps of the WHO handwashing procedure with an average accuracy of 92% with user-dependent models, and with 85% for user-independent modeling. We further explored the system's robustness by conducting another case study with six participants, this time in an unconstrained environment, to test variations in the handwashing routine and to show the potential for real-world deployments.

23 citations
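
As a hedged sketch of Hidden Markov Model-based step analysis, the snippet below fits a 13-state Gaussian HMM with hmmlearn on IMU feature sequences and decodes a new sequence into per-frame states. Synthetic data stands in for real features, and aligning hidden states with labeled WHO steps would require supervision beyond this sketch; it is not the paper's exact model.

```python
# Minimal HMM sketch with hmmlearn: 13 hidden states as stand-ins for WHO handwashing steps.
import numpy as np
from hmmlearn import hmm

N_STEPS = 13
model = hmm.GaussianHMM(n_components=N_STEPS, covariance_type='diag', n_iter=50)

# X: IMU feature vectors (frames x features) concatenated over training sequences;
# lengths: number of frames in each sequence. Synthetic data replaces real IMU features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
lengths = [200, 200, 200]
model.fit(X, lengths)

# Decode a new wash: the Viterbi path assigns each frame to one of the 13 hidden states.
states = model.predict(rng.normal(size=(150, 6)))
print(states[:20])
```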


Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this article, a Gaussian Mixture Hidden Markov Models (GMM-HMMs) based spotting network was proposed to detect sparse natural gestures in free living, which achieved an average F1 score of over 74% and clearly outperformed an HMM-based threshold model approach.
Abstract: We present a spotting network composed of Gaussian Mixture Hidden Markov Models (GMM-HMMs) to detect sparse natural gestures in free living. The key technical features of our approach are (1) a method to mine non-gesture patterns that deals with the arbitrary data (Null Class), and (2) an optimisation based on multipopulation genetic programming to approximate the spotting network's parameters across target and non-target models. We evaluate our GMM-HMM spotting network on a novel free-living dataset comprising a total of 35 days of annotated inertial sensor recordings from seven participants. Drinking was chosen as the target gesture. Our method reached an average F1-score of over 74% and clearly outperformed an HMM-based threshold model approach. The results suggest that our spotting network approach is viable for sparse natural pattern spotting.

Proceedings ArticleDOI
George Chernyshov, Benjamin Tag, Cedric Caremel, Feier Cao, Gemma Liu, Kai Kunze
08 Oct 2018
TL;DR: In this article, a new approach to implement wearable haptic devices using Shape Memory Alloy (SMA) wires is presented, which allows building silent, soft, flexible and lightweight wearable devices, capable of producing the sense of pressure on the skin without any bulky mechanical actuators.
Abstract: This paper presents a new approach to implement wearable haptic devices using Shape Memory Alloy (SMA) wires. The proposed concept allows building silent, soft, flexible and lightweight wearable devices, capable of producing the sense of pressure on the skin without any bulky mechanical actuators. We explore possible design considerations and applications for such devices, present user studies proving the feasibility of delivering meaningful information, and use nonlinear autoregressive neural networks to compensate for the SMAs' inherent drawbacks, such as delayed onset, enabling us to characterize and predict the physical behavior of the device.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this article, the authors focus on utilizing motion information of known daily activities gathered from wearable sensors for recognizing the person and demonstrate that different fundamental classification factors have an impact on person recognition success rates.
Abstract: With the fast evolution in the area of processing units and sensors, wearable devices are becoming more popular among people of all ages. Recently, there has been renewed interest in exploiting the capabilities of wearable sensors for person recognition while people undertake their normal daily activities. In this paper, we focus on utilizing motion information of known daily activities gathered from wearable sensors for recognizing the person. The analysis of the results demonstrates that different fundamental classification factors have an impact on person recognition success rates. Furthermore, the comparison among subjects shows that some subjects achieve high classification results and are easily identifiable, whereas others have high confusability rates. Lastly, a significant improvement in subject classification success rate was found for activities with little or no movement, which distinguish among persons more successfully and hence produce higher classification results than activities with large movement.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: The Idle Stripes shirt as discussed by the authors is smart business-wear for office workers that creates awareness of immobility periods during typical sitting-intensive office work, encouraging the wearer to break up their office desk work with walking breaks.
Abstract: We present the design and prototype of the Idle Stripes shirt, which is an aesthetic, clothing-integrated display, reflecting the wearer's physical activity in an ambient manner. The design is targeted to be smart business-wear for an office worker, which creates awareness of immobility periods during typical sitting-intensive office work. Long periods of such sitting are known to be health risks. The Idle Stripes shirt promotes healthy working, encouraging the wearer to break up their office desk work with walking breaks. The design prototype is constructed of a fabric with integrated optical fibers, which are illuminated based on the sitting time detected by an app running on the wearer's mobile phone.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: Movelet as mentioned in this paper is a self-actuated bracelet that can move along the user's forearm to convey feedback via its movement and positioning, which makes it possible to continuously inform the user about the changing state of information utilizing their haptic perception.
Abstract: We present Movelet, a self-actuated bracelet that can move along the user's forearm to convey feedback via its movement and positioning. In contrast to other eyes-free modalities such as vibro-tactile feedback, which only works momentarily, Movelet is able to provide sustained feedback via its spatial position on the forearm, in addition to momentary feedback by movement. This makes it possible to continuously inform the user about the changing state of information utilizing their haptic perception. In a user study using the Movelet prototype, we found that users can blindly estimate the device's position on the forearm with an average deviation of 1.20 cm from the actual position and estimate the length of a movement with an average deviation of 1.44 cm. This shows the applicability of position-based feedback using haptic perception.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: DeepAuth as mentioned in this paper leverages the unique motion patterns exhibited when users enter passwords as behavioural biometrics, and employs a novel loss function to learn deep feature representations that are robust to noise, unseen passwords, and malicious imposters even with limited training data.
Abstract: This paper proposes DeepAuth, an in-situ authentication framework that leverages the unique motion patterns exhibited when users enter passwords as behavioural biometrics. It uses a deep recurrent neural network to capture the subtle motion signatures during password input, and employs a novel loss function to learn deep feature representations that are robust to noise, unseen passwords, and malicious imposters even with limited training data. DeepAuth is by design optimised for resource-constrained platforms, and uses a novel split-RNN architecture to slim inference down to run in real-time on off-the-shelf smartwatches. Extensive experiments with real-world data show that DeepAuth outperforms the state-of-the-art significantly in both authentication performance and cost, offering real-time authentication on a variety of smartwatches.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: This paper proposes a method that optimizes the window length individually for each target activity and, instead of a single multi-class recognition system based on a generic window length, combines the individually optimized activity detectors into an ensemble-based recognition approach.
Abstract: Sliding window based activity recognition chains represent the state-of-the-art for many mobile and embedded scenarios as they are common in wearable computing. The length of the analysis frames is a crucial system parameter that directly influences the effectiveness of the overall approach. In this paper we present a method that optimizes the window length individually for each target activity. Instead of employing a single, multi-class recognition system that is based on a generic window length, we combine individually optimized activity detectors into an ensemble-based recognition approach. We demonstrate the effectiveness of the approach through an experimental evaluation on eight benchmark datasets. The proposed method leads to significant improvements across a range of activity recognition application domains.
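
The core idea, selecting a window length per target activity before training one detector per activity and fusing their outputs, can be pictured with the scikit-learn sketch below. Only the per-activity window selection step is shown; the candidate window lengths, the mean/std features, and the random-forest detectors are placeholders, not the paper's configuration.

```python
# Toy per-activity window-length selection for an ensemble of one-vs-rest detectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def frame_features(signal, labels, win):
    """Mean/std features per non-overlapping window plus a majority label."""
    n = len(signal) // win
    X = np.array([[signal[i*win:(i+1)*win].mean(), signal[i*win:(i+1)*win].std()]
                  for i in range(n)])
    y = np.array([np.bincount(labels[i*win:(i+1)*win]).argmax() for i in range(n)])
    return X, y

def best_window_per_activity(signal, labels, activities, candidates=(50, 100, 200)):
    chosen = {}
    for a in activities:
        scores = {}
        for win in candidates:
            X, y = frame_features(signal, labels, win)
            scores[win] = cross_val_score(RandomForestClassifier(n_estimators=50),
                                          X, (y == a).astype(int), cv=3).mean()
        chosen[a] = max(scores, key=scores.get)   # window length that best detects activity a
    # Next step (not shown): train one detector per activity at its chosen length
    # and fuse the detector outputs into the final multi-class decision.
    return chosen

rng = np.random.default_rng(1)
sig = rng.normal(size=6000)
lab = rng.integers(0, 3, size=6000)
print(best_window_per_activity(sig, lab, activities=[0, 1, 2]))
```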

Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this article, wearable RFID scanners, worn on the wrists, scan passive RFID tags mounted on an item's bin as the item is picked; this method is used in conjunction with a head-up display (HUD) to guide the user to the correct item.
Abstract: Order picking accounts for 55% of the annual $60 billion spent on warehouse operations in the United States. Reducing human-induced errors in the order fulfillment process can save warehouses and distributors significant costs. We investigate a radio-frequency identification (RFID)-based verification method wherein wearable RFID scanners, worn on the wrists, scan passive RFID tags mounted on an item's bin as the item is picked; this method is used in conjunction with a head-up display (HUD) to guide the user to the correct item. We compare this RFID verification method to pick-to-light with button verification, pick-to-paper with barcode verification, and pick-to-paper with no verification. We find that pick-to-HUD with RFID verification enables significantly faster picking, provides the lowest error rate, and provides the lowest task workload.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: YAWN is proposed, a bus-based, modular wearable toolkit that simplifies the interconnection by relying on a pre-fabricated three-wire fabric band that allows quick reconfiguration, ensures washability, and reduces the number of connection problems.
Abstract: Wearable toolkits simplify the integration of micro-electronics into fabric. They require basic knowledge about electronics for part interconnections. This technical aspect might be perceived as a barrier. We propose YAWN, a bus-based, modular wearable toolkit that simplifies the interconnection by relying on a pre-fabricated three-wire fabric band. This allows quick reconfiguration, ensures washability, and reduces the number of connection problems.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this paper, the Sony Smartwatch 3 was used to teach users Morse code while they wore the watch but focused on unrelated tasks, and significant improvements were found in six participants using the technique.
Abstract: Haptic technology can be used as a tool for learning. Can even the haptic elements in a smartwatch teach a new skill? Here we present a case of using a smartwatch for passive tactile learning. We use the Sony Smartwatch 3 to teach users Morse code while they wear the watch but focus on unrelated tasks. An initial hypothesis forecasted that the stimulation from the smartwatch, typically used for message alerts, would be too subtle to enable haptic learning; however, we find significant improvements in six participants using the technique. Furthermore, we expose participants to two different durations of stimulation and find different results.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: This work investigated the subjective comfort and emotional effects of applied on-body compression, specifically on the torso and upper arms, through a pilot user study incorporating a novel, low-profile, and actively-controllable compression garment.
Abstract: The sensation of touch is integral to everyday life. Current haptics research focuses mainly on vibrations, tap, and point pressures, but the sensation of distributed pressures such as compression are often overlooked. We investigated the subjective comfort and emotional effects of applied on-body compression, specifically on the torso and upper arms, through a pilot user study incorporating a novel, low-profile, and actively-controllable compression garment. The active compression garment was embedded with contractile shape memory alloys (SMAs) to create dynamic compression on the body. Qualitative interview data collected (n=8) were used to generate a list of findings to inform the future creation of a computer-mediated compression garment that is wearable, comfortable, and safe for use.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this paper, the authors investigated the effects of using passive haptic learning to train the skill of comprehending text from vibrotactile patterns and found that it offers the possibility to learn in the background while performing another primary task.
Abstract: This paper investigates the effects of using passive haptic learning to train the skill of comprehending text from vibrotactile patterns. The method of transmitting messages, skin-reading, is effective at conveying rich information but its active training method requires full user attention, is demanding, time-consuming, and tedious. Passive haptic learning offers the possibility to learn in the background while performing another primary task. We present a study investigating the use of passive haptic learning to train for skin-reading.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this paper, the authors evaluate the robustness to home laundering of a previously developed cut-and-sew technique for assembling e-textile circuits, which lets functionality, power, and networking be spread over a much larger area of a garment while preserving hand-feel and wearability.
Abstract: E-textiles that enable distribution of electronic components have advantages for wearable technology, in that functionality, power, and networking can be spread over a much larger area while preserving hand-feel and wearability. However, textile-embedded circuitry often must be machine-washable to conform to user expectations for care and maintenance, particularly for garments. In this study, we evaluate the robustness to home laundering of a previously-developed cut-and-sew technique for assembling e-textile circuits. Alternative surface insulation materials, textile substrate properties, and soldered component joints are evaluated. After around 1000 minutes (16.67 hours) of rigorous washing and drying, we measured a best-case 0% failure rate for component solder joints, and a best-case 0.38 ohm/m maximum increase in trace resistance. Liquid silicone seam sealer was effective in protecting 100% of solder joints. Two tape-type alternative surface insulation materials were effective in protecting bare traces and component attachment points respectively. Overall, results demonstrate the feasibility of producing insulated, washable cut-and-sew circuits for smart garment manufacturing.

Proceedings ArticleDOI
Alex Olwal, Bernard C. Kress
08 Oct 2018
TL;DR: This work develops a set of transmissive, reflective, and steerable optical configurations that can be embedded in conventional eyewear designs, enabling high-resolution symbolic display in discreet digital eyewear.
Abstract: 1D Eyewear uses 1D arrays of LEDs and pre-recorded holographic symbols to enable minimal head-worn displays. Our approach uses computer-generated holograms (CGHs) to create diffraction gratings which project a pre-recorded static image when illuminated with coherent light. Specifically, we develop a set of transmissive, reflective, and steerable optical configurations that can be embedded in conventional eyewear designs. This approach enables high resolution symbolic display in discreet digital eyewear.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this paper, a domain adaptation framework is proposed to reduce the training cost for users of wearable photo reflective sensor (PRS) devices by adapting a pre-trained CNN for both inter-user and intra-user setups to maintain high recognition accuracy.
Abstract: The photo reflective sensor (PRS), a tiny distance-measurement module, is a popular electronic component widely used in wearable user interfaces. An unavoidable issue of such wearable PRS devices in practical use is the training required to achieve high gesture recognition accuracy. Each new user has to re-train a device by providing new training data (we call this the inter-user setup). Even worse, re-training is ideally also necessary every time the same user re-wears the device (we call this the intra-user setup). In this paper, we propose a domain adaptation framework to reduce this training cost for users. Specifically, we adapt a pre-trained convolutional neural network (CNN) for both inter-user and intra-user setups to keep the recognition accuracy high. We demonstrate, with an actual PRS device, that our framework significantly improves the average classification accuracy of the intra-user and inter-user setups up to 87.43% and 80.06%, compared with baseline (non-adapted) accuracies of 68.96% and 63.26%, respectively.
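
One standard way to realize such adaptation, shown here as a hedged sketch rather than the paper's exact framework, is to freeze the early layers of a pre-trained gesture CNN and fine-tune the remaining layers on a handful of samples from the new user or new wearing session. The architecture, channel count, and training schedule below are assumptions.

```python
# Hedged domain-adaptation sketch: freeze early conv layers, fine-tune the rest on target data.
import torch
import torch.nn as nn

class PRSGestureCNN(nn.Module):
    def __init__(self, n_sensors=8, n_gestures=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(32, n_gestures)

    def forward(self, x):                  # x: (batch, sensors, time)
        return self.head(self.features(x).squeeze(-1))

model = PRSGestureCNN()                    # imagine this was pre-trained on source users
for p in model.features[:2].parameters():  # freeze the first conv block
    p.requires_grad = False

opt = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
x_new, y_new = torch.randn(16, 8, 100), torch.randint(0, 5, (16,))   # few target samples
for _ in range(20):                        # brief adaptation pass
    opt.zero_grad()
    loss_fn(model(x_new), y_new).backward()
    opt.step()
```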

Proceedings ArticleDOI
08 Oct 2018
TL;DR: Buccal is presented, a simple yet effective approach to inferring continuous lip and jaw motions by measuring deformations of the cheeks and temples with only 5 infrared proximity sensors embedded in a mobile VR headset.
Abstract: Teleconferencing is touted to be one of the main and most powerful uses of virtual reality (VR). While subtle facial movements play a large role in human-to-human interactions, current work in the VR space has focused on identifying discrete emotions and expressions through coarse facial cues and gestures. By tracking and representing the fluid movements of facial elements as continuous range values, users are able to more fully express themselves. In this work, we present Buccal, a simple yet effective approach to inferring continuous lip and jaw motions by measuring deformations of the cheeks and temples with only 5 infrared proximity sensors embedded in a mobile VR headset. The signals from these sensors are mapped to facial movements through a regression model trained with ground truth labels recorded from a webcam. For a streamlined user experience, we train a user independent model that requires no setup process. Finally, we demonstrate the use of our technique to manipulate the lips and jaw of a 3D face model in real-time.
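
The regression step can be pictured with the short scikit-learn sketch below, which maps five proximity-sensor readings to continuous facial parameters using labels that would come from the webcam. The ridge regressor and synthetic data are stand-ins; the paper's actual regression model and facial parameterization are not specified here.

```python
# Hedged sketch: regress continuous lip/jaw parameters from 5 IR proximity channels.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sensors = rng.normal(size=(2000, 5))                                # 5 IR channels per frame
true_map = rng.normal(size=(5, 2))
targets = sensors @ true_map + 0.05 * rng.normal(size=(2000, 2))    # e.g. lip opening, jaw drop

X_train, X_test, y_train, y_test = train_test_split(sensors, targets, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print('R^2 on held-out frames:', model.score(X_test, y_test))

# At runtime, each new sensor frame is mapped to facial parameters that drive a 3D face model.
print(model.predict(X_test[:1]))
```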

Proceedings ArticleDOI
08 Oct 2018
TL;DR: In this article, a 3D printed cover was used to alter the sensitivity of the microphone and the characteristics of the obtained Doppler effect to improve in-air gesture recognition.
Abstract: We propose a method to improve ultrasound-based in-air gesture recognition by altering the acoustic characteristics of a microphone. The Doppler effect is often utilized to recognize ultrasound-based gestures. However, increasing the number of gestures is difficult because of the limited information obtained from the Doppler effect. In this study, we partially shield a microphone with a 3D-printed cover. The cover alters the sensitivity of the microphone and the characteristics of the obtained Doppler effect. Since the proposed method utilizes a 3D-printed cover with a single microphone and speaker embedded in a device, it does not require additional electronic devices to improve gesture recognition. We design four different microphone covers and evaluate the performance of the proposed method on six gestures with eight participants. The evaluation results confirm that recognition accuracy is increased by 15.3% by utilizing the proposed method.
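
For intuition, the sketch below shows how Doppler-shift features around an emitted ultrasonic tone can be extracted from a microphone recording with an STFT (SciPy). The tone frequency, sample rate, analysis band, and centroid feature are assumptions for illustration, not the study's parameters.

```python
# Illustrative Doppler-shift feature extraction around an ultrasonic pilot tone.
import numpy as np
from scipy.signal import stft

fs = 48_000
f_tone = 20_000                          # ultrasonic tone emitted by the device's speaker
t = np.arange(0, 2, 1 / fs)
mic = np.cos(2 * np.pi * f_tone * t) + 0.01 * np.random.randn(t.size)   # stand-in recording

f, times, Z = stft(mic, fs=fs, nperseg=2048)
band = (f > f_tone - 500) & (f < f_tone + 500)       # +/-500 Hz around the carrier
doppler_profile = np.abs(Z[band, :])                 # energy of shifted components over time

# A simple per-frame feature: spectral centroid of the band, i.e. the net Doppler shift.
f_band = f[band][:, None]
centroid = (f_band * doppler_profile).sum(axis=0) / (doppler_profile.sum(axis=0) + 1e-12)
shift_hz = centroid - f_tone
print(shift_hz[:5])
```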

Proceedings ArticleDOI
08 Oct 2018
TL;DR: This paper performs an extensive parameter characterization for through-body power transfer and, based on the empirical findings, presents a design trade-off visualization to aid designers looking to integrate the CASPER system.
Abstract: We present CASPER, a charging solution to enable a future of wearable devices that are much more distributed on the body. Instead of having to charge every device we want to adorn our bodies with, be it distributed health sensors or digital jewelry, we can instead augment everyday objects such as beds, seats, and frequently worn clothing to provide convenient charging base stations that will charge devices on our body serendipitously as we go about our day. Our system works by treating the human body as a conductor and capacitively charging devices worn on the body whenever a well-coupled electrical path is created during natural use of everyday objects. In this paper, we performed an extensive parameter characterization for through-body power transfer and, based on our empirical findings, we present a design trade-off visualization to aid designers looking to integrate our system. Furthermore, we demonstrate how we utilized this design process in the development of our own smart bandage device and an LED-adorned temporary tattoo that charges at hundreds of microwatts using our system.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: It is empirically demonstrated that interacting with the smartwatch on the wrist leads to fatigue after only a few minutes, placing an upper bound for smartwatch usage that needs to be considered in application and interaction design.
Abstract: Glanceability and low access time are arguably the key assets of a smartwatch. Smartwatches are designed for, and excel at, micro-interactions: simple tasks that only take seconds to complete. However, if a user desires to transition to a task requiring sustained usage, we show that additional factors prevent longer usage of the smartwatch. In this paper, we conduct a study with 18 participants to empirically demonstrate that interacting with the smartwatch on the wrist leads to fatigue after only a few minutes. In our study, users performed three tasks in two different poses while using a smartwatch. We demonstrate that after only three minutes of use, the change in perceived exertion of the user was anchored as "somewhat strong" on the Borg CR10 survey scale. These results place an upper bound on smartwatch usage that needs to be considered in application and interaction design.

Proceedings ArticleDOI
08 Oct 2018
TL;DR: A system is developed to acquire the temperature in the nostrils using small temperature sensors connected to glasses; it can detect workload with an accuracy of 96.4%.
Abstract: We can benefit from various services through context recognition using wearable sensors. In this study, we focus on contexts acquired from sensor data in the nostrils. The nostrils can provide various contexts on breathing and nasal congestion, as well as higher-level contexts including psychological and health states. In this paper, we propose a context recognition method using information from the nostrils. We develop a system to acquire the temperature in the nostrils using small temperature sensors connected to glasses. In our evaluations, the proposed system detects workload with an accuracy of 96.4%.
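
As a toy illustration of the sensing principle, exhalation warms a nostril temperature sensor, so breaths appear as peaks in the signal; the sketch below estimates a breathing rate with SciPy peak detection and applies a placeholder workload threshold. All constants are assumptions, not the paper's method.

```python
# Toy breathing-rate estimate from a synthetic nostril temperature signal.
import numpy as np
from scipy.signal import find_peaks

fs = 10                                        # Hz, temperature samples per second
t = np.arange(0, 60, 1 / fs)
breaths_per_min = 22
temp = 34 + 0.4 * np.sin(2 * np.pi * (breaths_per_min / 60) * t) + 0.02 * np.random.randn(t.size)

peaks, _ = find_peaks(temp, distance=int(fs * 1.5), prominence=0.1)   # at most ~1 breath per 1.5 s
rate = len(peaks) / (t[-1] / 60)                                      # breaths per minute
print(f'estimated breathing rate: {rate:.1f} breaths/min')
print('elevated workload?', rate > 20)         # toy threshold, not the paper's classifier
```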