Author
Dagmar S. Fraser
Other affiliations: University of Stirling
Bio: Dagmar S. Fraser is an academic researcher at the University of Birmingham, contributing to research on psychology and facial expression. The author has an h-index of 4, having co-authored 9 publications that have received 63 citations. Previous affiliations of Dagmar S. Fraser include the University of Stirling.
Papers
TL;DR: A biologically inspired technique for detecting onsets in sound: outputs from a cochlea-like filter are spike coded, in a way similar to the auditory nerve (AN), and onsets are detected with essentially zero latency relative to these AN spikes.
Abstract: A biologically inspired technique for detecting onsets in sound is presented. Outputs from a cochlea-like filter are spike coded, in a way similar to the auditory nerve (AN). These AN-like spikes are presented to a leaky integrate-and-fire neuron through a depressing synapse. Onsets are detected with essentially zero latency relative to these AN spikes. Onset detection results for a tone burst, musical sounds and the DARPA/NIST TIMIT speech corpus are presented.
42 citations
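The detection scheme in this abstract (cochlea-like filtering, AN-like spikes, a depressing synapse, a leaky integrate-and-fire neuron) can be sketched in a few lines. The following is a minimal single-channel illustration, with made-up parameters and a synthetic spike train rather than the paper's cochlear front end:

```python
import math

def detect_onsets(spike_times, tau_m=0.01, tau_rec=0.1, u=0.8, w=2.0,
                  threshold=1.0, dt=0.001, t_end=1.0):
    # AN-like input spikes drive a leaky integrate-and-fire (LIF)
    # neuron through a depressing synapse: the first spikes of a burst
    # find the synapse fully recovered and push the neuron over
    # threshold, while sustained input depletes the synapse so the
    # response dies away until the next onset.
    v = 0.0            # LIF membrane potential
    r = 1.0            # fraction of recovered synaptic resources
    spikes = sorted(spike_times)
    onsets, i, t = [], 0, 0.0
    while t < t_end:
        v *= math.exp(-dt / tau_m)        # membrane leak
        r += (1.0 - r) * dt / tau_rec     # synaptic recovery
        while i < len(spikes) and spikes[i] < t + dt:
            v += w * u * r                # release scaled by resources
            r *= 1.0 - u                  # deplete the synapse
            i += 1
        if v >= threshold:
            onsets.append(t)              # onset detected
            v = 0.0                       # reset after firing
        t += dt
    return onsets

# Two 200 ms tone-burst-like spike trains starting at 0.1 s and 0.5 s:
# the detector reports one onset per burst, near each burst's start.
burst1 = [0.1 + 0.002 * k for k in range(100)]
burst2 = [0.5 + 0.002 * k for k in range(100)]
onsets = detect_onsets(burst1 + burst2)
```

The depressing synapse is what yields the low-latency behaviour: the first spike of a burst arrives while synaptic resources are fully recovered, so threshold is crossed on the same time step, and sustained firing then depletes the synapse so no further onsets are reported within the burst.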
TL;DR: In this article, the authors demonstrate that the kinematics of facial movements provide added value and an independent contribution to emotion recognition, by quantifying the speed of changes in distance between key facial landmarks.
Abstract: The kinematics of people's body movements provide useful cues about emotional states: for example, angry movements are typically fast and sad movements slow. Unlike the body movement literature, studies of facial expressions have focused on spatial, rather than kinematic, cues. This series of experiments demonstrates that speed is an important facial emotion expression cue. In Experiments 1a-1c we developed (N = 47) and validated (N = 27) an emotion-induction procedure, and recorded (N = 42) posed and spontaneous facial expressions of happy, angry, and sad emotional states. Our novel analysis pipeline quantified the speed of changes in distance between key facial landmarks. We observed that happy expressions were fastest, sad were slowest, and angry expressions were intermediate. In Experiment 2 (N = 67) we replicated our results for posed expressions and introduced a novel paradigm to index communicative emotional expressions. Across Experiments 1 and 2, we demonstrate differences between posed, spontaneous, and communicative expression contexts. Whereas mouth and eyebrow movements reliably distinguished emotions for posed and communicative expressions, only eyebrow movements were reliable for spontaneous expressions. In Experiments 3 and 4 we manipulated facial expression speed and demonstrated a quantifiable change in emotion recognition accuracy. That is, in a discovery (N = 29) and replication sample (N = 41), we showed that speeding up facial expressions promotes anger and happiness judgments, and slowing down expressions encourages sad judgments. This influence of kinematics on emotion recognition is dissociable from the influence of spatial cues. These studies demonstrate that the kinematics of facial movements provide added value and make an independent contribution to emotion recognition. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
24 citations
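The core measure in this pipeline, the speed of change in the distance between facial landmarks, can be illustrated with a short sketch. The function name, the 30 fps frame rate, and the toy trajectories below are assumptions for illustration, not the authors' code:

```python
import math

def landmark_speed(traj_a, traj_b, fps=30.0):
    # For two tracked landmarks (lists of (x, y) positions per frame),
    # compute the frame-to-frame speed of the change in the distance
    # between them, approximated by finite differences.
    dists = [math.dist(a, b) for a, b in zip(traj_a, traj_b)]
    return [abs(d1 - d0) * fps for d0, d1 in zip(dists, dists[1:])]

# e.g. mouth corners moving apart quickly (a fast, "happy-like" opening)
left  = [(0.0, 0.0), (-0.5, 0.0), (-1.0, 0.0)]
right = [(4.0, 0.0), (4.5, 0.0), (5.0, 0.0)]
speeds = landmark_speed(left, right)   # distance grows 1.0 unit per frame
mean_speed = sum(speeds) / len(speeds)
```

Summary statistics of such speed traces (means, peaks) are the kind of kinematic feature that can then be compared across happy, angry, and sad expressions.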
TL;DR: In this paper, autistic and non-autistic adults matched on age, gender, non-verbal reasoning ability and alexithymia, completed an emotion recognition task, which employed dynamic point light displays of emotional facial expressions manipulated in terms of speed and spatial exaggeration.
Abstract: To date, studies have not established whether autistic and non-autistic individuals differ in emotion recognition from facial motion cues when matched in terms of alexithymia. Here, autistic and non-autistic adults (N = 60) matched on age, gender, non-verbal reasoning ability and alexithymia, completed an emotion recognition task, which employed dynamic point light displays of emotional facial expressions manipulated in terms of speed and spatial exaggeration. Autistic participants exhibited significantly lower accuracy for angry, but not happy or sad, facial motion with unmanipulated speed and spatial exaggeration. Autistic, and not alexithymic, traits were predictive of accuracy for angry facial motion with unmanipulated speed and spatial exaggeration. Alexithymic traits, in contrast, were predictive of the magnitude of both correct and incorrect emotion ratings.
15 citations
TL;DR: Patients with parietal or cerebellar lesions showed some similar and some contrasting deficits in reach-to-grasp coordination; the cerebellum was more dominant in controlling temporal coupling between the transport and grasp components.
Abstract: Background: The differential contributions of the cerebellum and parietal lobe to coordination between hand transport and hand shaping to an object have not been clearly identified. Objective: To contrast impairments in reach-to-grasp coordination, in response to object location perturbation, in patients with right parietal and cerebellar lesions, in order to further elucidate the role of each area in reach-to-grasp coordination. Method: A two-factor design with one between-subject factor (right parietal stroke; cerebellar stroke; controls) and one within-subject factor (presence or absence of object location perturbation) examined correction processes used to maintain coordination between transport and grasp in the presence of perturbation. Sixteen chronic stroke participants (8 with right parietal lesions and 8 with cerebellar lesions) were matched in age (mean = 61 years; standard deviation = 12) and hand dominance with 16 healthy controls. Hand and arm movements were recorded during unperturbed baseline trials (10) and unpredictable trials (60) in which the target was displaced to the left (10) or right (10) or remained fixed (40). Results: Cerebellar patients had a slowed response to perturbation, with anticipatory hand opening, an increased number of aperture peaks, disruption to temporal coordination, and greater variability. Parietal participants also exhibited slowed movements with an increased number of aperture peaks, but in addition showed an increased number of velocity peaks and a longer wrist path trajectory, due to difficulties planning the new transport goal and thus greater reliance on feedback control. Conclusion: Patients with parietal or cerebellar lesions showed some similar and some contrasting deficits. The cerebellum was more dominant in controlling temporal coupling between the transport and grasp components, whereas the parietal area was more concerned with using sensation to relate arm and hand state to target position.
5 citations
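One of the kinematic measures reported here, the number of aperture peaks, can be illustrated with a simple sketch. This strict local-maximum count is an illustrative simplification, not the study's exact criterion:

```python
def aperture_peaks(aperture):
    # Count local maxima in a grip-aperture trace. A healthy reach
    # typically shows a single smooth opening (one peak); corrective
    # re-openings after a perturbation add extra peaks.
    return sum(1 for i in range(1, len(aperture) - 1)
               if aperture[i - 1] < aperture[i] > aperture[i + 1])

single    = [2, 4, 6, 8, 9, 8, 6, 4]        # one smooth opening
corrected = [2, 5, 7, 6, 5, 7, 9, 8, 5]     # re-opening after perturbation
```

In practice the aperture trace would be smoothed first, so that measurement noise does not inflate the peak count.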
27 Mar 2018
TL;DR: This chapter provides a comprehensive overview of methods developed to capture, process, analyse, and model individual and group timing in music performance.
Abstract: Accurate timing of movement in the hundreds of milliseconds range is a hallmark of human activities such as music and dance. Its study requires accurate measurement of the times of events (often called responses) based on the movement or acoustic record. This chapter provides a comprehensive overview of methods developed to capture, process, analyse, and model individual and group timing [...] This chapter is structured in five main sections, as follows. We start with a review of data capture methods, working, in turn, through a low-cost system to research simple tapping, complex movements, use of video, inertial measurement units, and dedicated sensorimotor synchronisation software. This is followed by a section on music performance, which includes topics on the selection of music materials, sound recording, and system latency. The identification of events in the data stream can be challenging, and this topic is treated in the next section, first for movement and then for music. Finally, we cover methods of analysis, including alignment of the channels, computation of between-channel asynchrony errors, and modelling of the data set.
4 citations
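The between-channel asynchrony computation mentioned in this chapter can be sketched as nearest-neighbour alignment of two event streams. The 120 ms matching window below is an illustrative choice, not a value from the chapter:

```python
def asynchronies(responses, targets, window=0.120):
    # Align each response event (e.g. a tap onset, in seconds) with the
    # nearest target event (e.g. a metronome beat) and report the
    # signed error response - target; negative values mean anticipation.
    out = []
    for r in responses:
        nearest = min(targets, key=lambda t: abs(r - t))
        if abs(r - nearest) <= window:
            out.append(r - nearest)
    return out

beats = [0.5, 1.0, 1.5, 2.0]
taps  = [0.47, 1.02, 1.46, 1.99]    # typical slight anticipation of the beat
errs  = asynchronies(taps, beats)
mean_async = sum(errs) / len(errs)
```

The mean of the signed errors summarises the tapper's anticipation or lag, while their spread indexes timing variability.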
Cited by
27 Jul 1996
TL;DR: In this paper, a beat tracking system that processes musical acoustic signals and recognizes temporal positions of beats in real time is presented, which is able to deal with audio signals that contain sounds of various instruments, especially drums.
Abstract: This paper presents a beat tracking system that processes musical acoustic signals and recognizes temporal positions of beats in real time. Most previous systems were not able to deal with audio signals that contained sounds of various instruments, especially drums. Our system deals with popular music in which drums maintain the beat. To examine multiple hypotheses of beats in parallel, our system manages multiple agents that predict beats by using autocorrelation and cross-correlation according to different strategies. In our experiment with eight pre-registered drum patterns, the system implemented on a parallel computer correctly tracked beats in 42 out of 44 commercially distributed popular songs.
95 citations
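The autocorrelation idea underlying this beat tracker can be sketched for a single hypothesis. The full system maintains multiple agents with different strategies in parallel; this toy version scores candidate beat periods on a binary onset train, with illustrative data:

```python
def estimate_beat_period(onsets, min_lag, max_lag):
    # Score each candidate beat period (in frames) by autocorrelation
    # of the onset train, and return the best-scoring period. A full
    # agent-based tracker would keep several such hypotheses alive.
    n = len(onsets)
    def score(lag):
        return sum(onsets[i] * onsets[i + lag] for i in range(n - lag))
    return max(range(min_lag, max_lag + 1), key=score)

# Drum-like onset train with a beat every 8 frames plus one off-beat hit.
frames = [0] * 64
for i in range(0, 64, 8):
    frames[i] = 1
frames[4] = 1
period = estimate_beat_period(frames, 2, 16)
```

Autocorrelation of this kind recovers the beat period; cross-correlation against predicted beat times is then what lets an agent lock onto the beat phase as well.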
TL;DR: The overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops; the system's performance is therefore evaluated by simulating a model of the cerebellum, where emulating the temporal dynamics of the synaptic integration process is important.
Abstract: A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
94 citations
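The gradual injection of charge that makes this neuron model expensive for event-driven software can be illustrated with a standard double-exponential postsynaptic-potential kernel, a common choice for spike response models. The time constants and weight here are illustrative, not the paper's hardware parameters:

```python
import math

def srm_potential(spike_times, t, tau_s=0.005, tau_m=0.020, w=1.0):
    # Membrane potential at time t for a spike response model (SRM)
    # neuron: each input spike contributes a postsynaptic potential
    # shaped by a synaptic time constant tau_s and a membrane time
    # constant tau_m, so charge is injected gradually rather than as
    # an instantaneous jump.
    v = 0.0
    for ts in spike_times:
        s = t - ts
        if s > 0:
            v += w * (math.exp(-s / tau_m) - math.exp(-s / tau_s))
    return v

# A single spike at t = 0: the potential rises gradually, peaks, decays.
peak_t = max((k * 0.0005 for k in range(1, 200)),
             key=lambda t: srm_potential([0.0], t))
```

Because the potential between spikes follows these continuous kernels, a time-stepped (clock-driven) scheme maps naturally onto parallel hardware, which is the design choice the paper motivates.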
TL;DR: The proposed model can be used both to further the understanding of the mechanisms of a neural network of the auditory system and for sound source lateralization tasks in technical applications, e.g., with the Darmstadt robotic head.
Abstract: In this paper, a binaural sound source lateralization spiking neural network (NN) will be presented which is inspired by most recent neurophysiological studies on the role of certain nuclei in the superior olivary complex (SOC) and the inferior colliculus (IC). The binaural sound source lateralization neural network (BiSoLaNN) is a spiking NN based on neural mechanisms, utilizing complex neural models, and attempting to simulate certain parts of nuclei of the auditory system in detail. The BiSoLaNN utilizes both excitatory and inhibitory ipsilateral and contralateral influences arrayed in only one delay line originating in the contralateral side to achieve a sharp azimuthal localization. It will be shown that the proposed model can be used both for purposes of understanding the mechanisms of an NN of the auditory system and for sound source lateralization tasks in technical applications, e.g., its use with the Darmstadt robotic head (DRH).
55 citations
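The BiSoLaNN refines the classic delay-line coincidence idea with spiking neurons and contralateral inhibition. As background, that underlying idea can be sketched as a plain cross-correlation over candidate interaural delays; this is the generic textbook scheme, not the paper's network, and the click-train signals are illustrative:

```python
def best_itd(left, right, max_lag):
    # Score each candidate interaural delay (in samples) by
    # cross-correlating the two ear signals; the winning lag is the
    # interaural time difference, the main azimuthal localization cue.
    def corr(lag):
        pairs = zip(left, right[lag:]) if lag >= 0 else zip(left[-lag:], right)
        return sum(a * b for a, b in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

left = [0.0] * 32
for i in (5, 13, 21):
    left[i] = 1.0                  # click train reaching the left ear
right = [0.0] * 3 + left[:-3]      # the right ear hears it 3 samples later
lag = best_itd(left, right, 8)     # positive lag: source on the left
```

The paper's contribution is to achieve a sharper azimuthal estimate than this purely excitatory scheme by combining excitatory and inhibitory influences along a single contralateral delay line.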
TL;DR: This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation.
30 citations