Ronan Flynn
Researcher at Athlone Institute of Technology
Publications - 66
Citations - 370
Ronan Flynn is an academic researcher at Athlone Institute of Technology. He has contributed to research on the topics of quality of experience and computer science. He has an h-index of 9 and has co-authored 50 publications receiving 237 citations.
Papers
Proceedings ArticleDOI
A QoE evaluation of immersive augmented and virtual reality speech & language assessment applications
TL;DR: This is the first work to compare user QoE of VR and AR applications, with a particular focus on applications in the speech and language domain. The results suggest that users acclimatized to the AR environment more quickly than to the VR environment.
Journal ArticleDOI
A Physiology-Based QoE Comparison of Interactive Augmented Reality, Virtual Reality and Tablet-Based Applications
TL;DR: The results indicate comparatively higher levels of QoE for users of the augmented reality and tablet platforms.
Proceedings ArticleDOI
A QoE assessment method based on EDA, heart rate and EEG of a virtual reality assistive technology system
Débora Pereira Salgado,Felipe Roque Martins,Thiago Braga Rodrigues,Conor Keighrey,Ronan Flynn,Eduardo Lázaro Martins Naves,Niall Murray +6 more
TL;DR: This demo captures and presents the user's EEG, heart rate, EDA and head motion during the use of an assistive technology (AT) VR application. The system is composed of a sensor system, in which wearable sensors acquire the biological signals, and a presentation system, a virtual wheelchair simulator that interfaces to a typical LCD display.
Journal ArticleDOI
Combined speech enhancement and auditory modelling for robust distributed speech recognition
Ronan Flynn,Edward Jones +1 more
TL;DR: Results indicate that the combination of speech enhancement pre-processing and the auditory model front-end provides an improvement in recognition performance in noisy conditions over the ETSI front-ends.
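The paper pairs a speech enhancement pre-processing stage with an auditory-model front-end. As an illustration of what such a pre-processing stage can look like, the sketch below implements classic magnitude-domain spectral subtraction on a single frame; this is a generic stand-in, not necessarily the enhancement algorithm used in the paper, and the `floor` parameter and frame handling are assumptions for the example.

```python
import cmath

def dft(x):
    """Naive DFT of a real-valued frame (illustrative; an FFT would be used in practice)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each reconstructed sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def spectral_subtraction(frame, noise_mag, floor=0.01):
    """Subtract a noise-magnitude estimate from the frame spectrum,
    keeping the noisy phase (classic spectral subtraction).
    `noise_mag` is a per-bin noise magnitude estimate, e.g. averaged
    over speech-free frames."""
    X = dft(frame)
    cleaned = []
    for Xk, Nk in zip(X, noise_mag):
        # Half-wave rectify the subtracted magnitude, with a small
        # spectral floor to avoid negative magnitudes.
        mag = max(abs(Xk) - Nk, floor * abs(Xk))
        cleaned.append(cmath.rect(mag, cmath.phase(Xk)))
    return idft(cleaned)
```

In a full enhancement chain, frames would be windowed, overlapped and re-synthesised; the enhanced frames would then feed the auditory-model front-end that produces recognition features.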
Journal ArticleDOI
Robust distributed speech recognition using speech enhancement
Ronan Flynn,Edward Jones +1 more
TL;DR: This paper examines the use of an auditory model combined with a speech enhancement algorithm as a robust front-end for a distributed speech recognition (DSR) system, in which front-end functionality is implemented on a limited-resource consumer device such as a mobile phone, while back-end classifier functionality is carried out by a remote server.
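The client/server split described above can be sketched in miniature: the device reduces raw audio to compact features and transmits only those, and the server classifies the feature stream. This is a toy illustration of the DSR architecture, assuming log frame energies as features and nearest-template matching as the classifier; a real system would use MFCC-style features and an HMM or neural back end.

```python
import math

def extract_features(samples, frame_len=8):
    """Client side: reduce raw audio to per-frame log-energy features,
    the only data transmitted to the server (a stand-in for MFCCs)."""
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        energy = sum(s * s for s in samples[i:i + frame_len])
        feats.append(math.log(energy + 1e-10))  # small offset avoids log(0)
    return feats

def recognise(features, templates):
    """Server side: classify the received feature stream against stored
    templates by minimum squared Euclidean distance (a stand-in for a
    real back-end classifier)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda label: dist(features, templates[label]))

# Hypothetical usage: enrol two "words", then recognise a noisy query.
templates = {
    "loud": extract_features([1.0] * 16),
    "quiet": extract_features([0.1] * 16),
}
print(recognise(extract_features([0.9] * 16), templates))
```

The design point the paper targets is that robustness processing (enhancement plus the auditory model) lives in `extract_features` on the device, so the bandwidth and server-side classifier are unchanged.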