
Showing papers by "Jamie A. Ward" published in 2018


Proceedings ArticleDOI
08 Oct 2018
TL;DR: It is shown that by visualising each child's engagement over the course of a performance, it is possible to highlight subtle moments of social coordination that might otherwise be lost when reviewing video footage alone.
Abstract: We introduce a method of using wrist-worn accelerometers to measure non-verbal social coordination within a group that includes autistic children. Our goal was to record and chart the children's social engagement - measured using interpersonal movement synchrony - as they took part in a theatrical workshop that was specifically designed to enhance their social skills. Interpersonal synchrony, an important factor of social engagement that is known to be impaired in autism, is calculated using a cross-wavelet similarity comparison between participants' movement data. We evaluate the feasibility of the approach over 3 live performances, each lasting 2 hours, using 6 actors and a total of 10 autistic children. We show that by visualising each child's engagement over the course of a performance, it is possible to highlight subtle moments of social coordination that might otherwise be lost when reviewing video footage alone. This is important because it points the way to a new method for people who work with autistic children to be able to monitor the development of those in their care, and to adapt their therapeutic activities accordingly.
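
As a rough illustration of the synchrony measure described above, the sketch below computes a time-smoothed wavelet coherence between two wrist-accelerometer magnitude signals using PyWavelets. It is a minimal approximation of cross-wavelet similarity, not the authors' pipeline; the function name, scale range, smoothing window, and sampling rate are all assumptions.

import numpy as np
import pywt

def _smooth(x, win):
    # Moving-average smoothing along the time axis (handles complex input).
    kernel = np.ones(win) / win
    if np.iscomplexobj(x):
        return (np.apply_along_axis(np.convolve, -1, x.real, kernel, mode='same')
                + 1j * np.apply_along_axis(np.convolve, -1, x.imag, kernel, mode='same'))
    return np.apply_along_axis(np.convolve, -1, x, kernel, mode='same')

def movement_synchrony(acc_a, acc_b, fs=50.0, scales=None, smooth_win=25):
    """Per-sample coordination score (0..1) between two movement signals."""
    if scales is None:
        scales = np.arange(2, 64)                     # assumed scale range
    # Continuous wavelet transform of each signal with a complex Morlet wavelet.
    wa, _ = pywt.cwt(acc_a, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)
    wb, _ = pywt.cwt(acc_b, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)
    # Cross-wavelet spectrum, smoothed in time, normalised by the smoothed
    # power of each signal: high values mean both wearers move with similar
    # energy at the same scale and time.
    wxy = _smooth(wa * np.conj(wb), smooth_win)
    paa = _smooth(np.abs(wa) ** 2, smooth_win)
    pbb = _smooth(np.abs(wb) ** 2, smooth_win)
    coherence = np.abs(wxy) ** 2 / (paa * pbb + 1e-12)
    # Collapse over scales to one engagement trace that can be plotted
    # over the course of a performance.
    return coherence.mean(axis=0)

# Toy usage: two noisy, phase-shifted movement streams at 50 Hz.
t = np.arange(0, 60, 1 / 50.0)
child = np.sin(2 * np.pi * t) + 0.3 * np.random.randn(t.size)
actor = np.sin(2 * np.pi * t + 0.5) + 0.3 * np.random.randn(t.size)
print(movement_synchrony(child, actor).mean())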

30 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: This work investigates what happens to a person's looking behavior when the person with whom they are speaking is also wearing an eye-tracker, and shows that people tend to look less to the eyes of people who are wearing a tracker than they do to the eyes of those who are not.
Abstract: Looking is a two-way process: we use our eyes to perceive the world around us, but we also use our eyes to signal to others. Eye contact in particular reveals much about our social interactions, and as such can be a rich source of information for context-aware wearable applications. But when designing these applications, it is useful to understand the effects that the head-worn eye-trackers might have on our looking behavior. Previous studies have shown that we moderate our gaze when we know our eyes are being tracked, but what happens to our gaze when we see others wearing eye trackers? Using gaze recordings from 30 dyads, we investigate what happens to a person's looking behavior when the person with whom they are speaking is also wearing an eye-tracker. In the preliminary findings reported here, we show that people tend to look less to the eyes of people who are wearing a tracker, than they do to the eyes of those who are not. We discuss possible reasons for this and suggest future directions of study.
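
As a simple illustration of how a "looks to the eyes" measure like the one above could be derived from a wearable eye-tracker, the sketch below counts the fraction of gaze samples that land inside an eye-region bounding box detected in the scene-camera image for each frame. This is an assumed formulation, not the study's analysis code; the coordinate convention, box source, and names are illustrative.

import numpy as np

def gaze_on_eyes_ratio(gaze_xy, eye_boxes):
    """gaze_xy   : (N, 2) gaze points in scene-camera image coordinates.
    eye_boxes : (N, 4) per-frame eye-region boxes (x0, y0, x1, y1),
                e.g. from a face/landmark detector on the same frames.
    Returns the proportion of samples whose gaze falls on the partner's eyes."""
    x, y = gaze_xy[:, 0], gaze_xy[:, 1]
    x0, y0, x1, y1 = eye_boxes.T
    on_eyes = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    return on_eyes.mean()

# Toy comparison of the two conditions (partner wearing a tracker or not),
# here with synthetic data in normalised image coordinates.
rng = np.random.default_rng(0)
box = np.tile([0.40, 0.30, 0.60, 0.45], (1000, 1))      # fixed eye region
gaze_tracker = rng.uniform(0, 1, (1000, 2))             # partner wears tracker
gaze_no_tracker = rng.uniform(0, 1, (1000, 2))          # partner does not
print(gaze_on_eyes_ratio(gaze_tracker, box),
      gaze_on_eyes_ratio(gaze_no_tracker, box))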

8 citations


Proceedings ArticleDOI
08 Oct 2018
TL;DR: This exploration shows that the semi-structured and repeatable nature of theatre can provide a useful laboratory for neuroscience, and that wearable sensing is a promising method to achieve this; it points to a new way of researching the brain in a more natural, and social, environment than traditional lab-based methods.
Abstract: This paper introduces the idea of using wearable, multi-modal body and brain sensing, in a theatrical setting, for neuroscientific research. Wearable motion capture suits are used to track the body movements of two actors while they enact a sequence of scenes together. One actor additionally wears a functional near-infrared spectroscopy (fNIRS)-based headgear to record the activation patterns on his prefrontal cortex. Repetitions in the movement data are then used to automatically segment the fNIRS data for further analysis. This exploration reveals that the semi-structured and repeatable nature of theatre can provide a useful laboratory for neuroscience, and that wearable sensing is a promising method to achieve this. This is important because it points to a new way of researching the brain in a more natural, and social, environment than traditional lab-based methods.
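
To make the segmentation step above concrete, here is a rough sketch of one way repeated scenes could be located in the motion-capture stream and used to cut matching epochs out of the fNIRS recording. It is an assumed approach (sliding-window template matching with hand-picked parameters), not the paper's actual method, and all names, thresholds, and sample rates are illustrative.

import numpy as np

def find_repetitions(motion, template, step=10, threshold=0.5):
    """motion : (T, D) motion-capture features; template : (L, D) one scene.
    Returns start indices (in motion samples) where the scene repeats."""
    L = len(template)
    tmpl = (template - template.mean(0)) / (template.std(0) + 1e-9)
    starts, dists = [], []
    for s in range(0, len(motion) - L, step):
        win = motion[s:s + L]
        win = (win - win.mean(0)) / (win.std(0) + 1e-9)
        dists.append(np.mean((win - tmpl) ** 2))
        starts.append(s)
    dists = np.asarray(dists)
    return [starts[i] for i in np.where(dists < threshold)[0]]

def segment_fnirs(fnirs, rep_starts, scene_len, fs_motion=60.0, fs_fnirs=10.0):
    """Map motion-sample windows onto the fNIRS timeline and return epochs."""
    ratio = fs_fnirs / fs_motion
    return [fnirs[int(s * ratio):int((s + scene_len) * ratio)] for s in rep_starts]

# Toy usage with synthetic data: 100 s of mocap features at 60 Hz and
# 100 s of a single fNIRS channel at 10 Hz.
rng = np.random.default_rng(1)
motion = rng.standard_normal((6000, 9))
template = motion[600:1200]                   # first enactment of a scene
fnirs = rng.standard_normal(1000)
reps = find_repetitions(motion, template)
epochs = segment_fnirs(fnirs, reps, scene_len=len(template))
print(len(epochs))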

5 citations


Journal Article
TL;DR: This paper explores the use of wearable eye-tracking to detect physical activities and location information during assembly and construction tasks involving small groups of up to four people, and applies state-of-the-art computer vision methods such as object recognition, scene recognition, and face detection to generate features from the eye-trackers’ egocentric videos.
Abstract: This paper explores the use of wearable eye-tracking to detect physical activities and location information during assembly and construction tasks involving small groups of up to four people. Large physical activities, like carrying heavy items and walking, are analysed alongside more precise, hand-tool activities, like using a drill, or a screwdriver. In a first analysis, gaze-invariant features from the eye-tracker are classified (using Naive Bayes) alongside features obtained from wrist-worn accelerometers and microphones. An evaluation is presented using data from an 8-person dataset containing over 600 physical activity events, performed under real-world (noisy) conditions. Despite the challenges of working with complex, and sometimes unreliable, data we show that event-based precision and recall of 0.66 and 0.81 respectively can be achieved by combining all three sensing modalities (using experiment-independent training, and temporal smoothing). In a further analysis, we apply state-of-the-art computer vision methods like object recognition, scene recognition, and face detection, to generate features from the eye-trackers’ egocentric videos. Activity recognition trained on the output of an object recognition model (e.g., VGG16 trained on ImageNet) could predict Precise activities with an (overall average) f-measure of 0.45. Location of participants was similarly obtained using visual scene recognition, with average precision and recall of 0.58 and 0.56.
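
As a hedged illustration of the second, vision-based analysis mentioned above, the sketch below pools per-frame class scores from an ImageNet-trained VGG16 over each activity event and trains a simple classifier on them. Frame extraction, event boundaries, and the choice of GaussianNB are assumptions made for illustration, not the paper's exact setup.

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.naive_bayes import GaussianNB

vgg = VGG16(weights='imagenet', include_top=True)   # outputs 1000 class scores

def event_feature(frames):
    """frames : (N, 224, 224, 3) RGB frames from one activity event.
    Returns one 1000-D feature: the mean VGG16 class-score vector."""
    scores = vgg.predict(preprocess_input(frames.astype('float32')), verbose=0)
    return scores.mean(axis=0)

def train_activity_model(event_frame_list, labels):
    # One pooled feature vector per event, fed to a simple Naive Bayes model.
    X = np.stack([event_feature(f) for f in event_frame_list])
    return GaussianNB().fit(X, labels)

# Toy usage: two fake events of 4 frames each, labelled with hand-tool activities.
rng = np.random.default_rng(0)
events = [rng.integers(0, 255, (4, 224, 224, 3)) for _ in range(2)]
clf = train_activity_model(events, labels=["drill", "screwdriver"])
print(clf.predict([event_feature(events[0])]))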

1 citation

