scispace - formally typeset
Author

Flavia De Simone

Bio: Flavia De Simone is an academic researcher at Università degli Studi Suor Orsola Benincasa. The author has contributed to research in topics: Human–computer interaction & Noun. The author has an h-index of 4 and has co-authored 5 publications receiving 32 citations.

Papers
Proceedings ArticleDOI
14 Sep 2016
TL;DR: This work investigates a method for classifying different levels of cognitive workload from synchronized EEG and eye-tracking information, with the classifier intended to perform well enough to serve as a reference gauge for unobtrusive monitoring systems working with lower-quality data.
Abstract: It has been shown that an increased mental workload in pilots can lead to a decrease in their situation awareness, which can in turn lead to worse piloting performance and ultimately to critical human errors. Assessing the pilot's current psycho-physiological state is a topic of strong interest for developing advanced embedded cockpit systems capable of adapting their behavior to the state and performance of the pilot. In this work, we investigate a method for classifying different levels of cognitive workload from synchronized EEG and eye-tracking information. The classifier under investigation is intended to achieve performance high enough to serve as a reference gauge for unobtrusive monitoring systems working with lower-quality data.

22 citations

Journal ArticleDOI
TL;DR: Psycholinguistic experiments conducted with the picture-word interference paradigm are typically preceded by a phase in which participants learn the words they will have to produce; participants who went through this learning phase showed semantic interference, whereas participants who had not been familiarized with the materials showed facilitation.
Abstract: Psycholinguistic experiments conducted with the picture-word interference paradigm are typically preceded by a phase during which participants learn the words they will have to produce in the experiment. In Experiment 1, the pictures (e.g., a frog) were to be named and were presented with a categorically related (e.g., cat) or unrelated distractor (e.g., pen). In the related condition, responses were slower than in the unrelated condition for the participants who had gone through the learning phase. In contrast, participants who had not been previously familiarized with the materials showed facilitation. In Experiment 2, one group of participants, as usual, learned to produce the targets upon presentation of the corresponding pictures (e.g., a frog). The other group learned to produce the same targets upon presentation of unrelated pictures (e.g., a clock). The two groups showed very similar semantic effects. The implications of the findings for the study of word production are discussed.

17 citations

Journal ArticleDOI
TL;DR: The results of four picture–word interference experiments supported the hypothesis that grammatical class information plays a crucial role in lexical production.
Abstract: Four picture-word interference experiments tested the role of grammatical class in lexical production. In Experiment 1, target nouns and verbs were produced in the presence of semantically unrelated distractors that could also be nouns or verbs. Participants were slower when the distractor belonged to the same grammatical category as the target. To rule out the semantic hypothesis that the effects were due to an objects-versus-actions semantic dichotomy rather than to grammatical class, Experiment 2 was conducted: participants named target verbs in the presence of unrelated action nouns and verbs. The results showed a grammatical category effect. Finally, in Experiments 3 and 4, morphologically underived materials were used to verify the role of morphological information. The results showed a syntactic effect independent of morphology. Taken together, the results support the hypothesis that grammatical class information plays a crucial role in lexical production.

10 citations

Proceedings ArticleDOI
01 Sep 2014
TL;DR: This position paper introduces the concept of an “artificial co-pilot” (that is, a driver model), with a focus on driver-oriented cognitive cars, to illustrate a new approach for future intelligent vehicles that overcomes the limitations of current models.
Abstract: This position paper introduces the concept of an “artificial co-pilot” (that is, a driver model), with a focus on driver-oriented cognitive cars, in order to illustrate a new approach for future intelligent vehicles that overcomes the limitations of current models. The core idea is to adopt the human cognitive framework for vehicles, following an artificial-intelligence approach to decision making. This paper illustrates these concepts in detail, as they are under development in the EU co-funded project HOLIDES.

4 citations

Book ChapterDOI
02 Sep 2021
TL;DR: In this article, the role of non-verbal cues in eliciting emotions was investigated, evaluating participants' ability to detect the emotional content of a communication by concentrating only on non-verbal signals, together with a corresponding unconscious emotional activation.
Abstract: The present study pursued a dual purpose: the primary aim was to test the role of non-verbal cues in eliciting emotions; the second was to verify whether participants’ ability in communication decoding was related to a coherent unconscious emotional activation. Participants were invited to observe a silent video of a job interview. Three conditions were considered: one clip in which the interview was characterized by racial prejudice, one by sexual prejudice, and a control condition. An ad hoc questionnaire was administered to assess participants’ observations. Participants’ facial expressions were also recorded. Results showed a marked ability of participants to detect the emotional content of a communication by concentrating only on non-verbal signals, together with a corresponding unconscious emotional activation. Both findings are framed within the most recent theoretical debates.

Cited by
Journal ArticleDOI
TL;DR: This article reviews the published studies related to multimodal data fusion for estimating cognitive workload and identifies opportunities for designing better multimodal fusion systems for cognitive workload modeling.
Abstract: Considerable progress has been made in improving the estimation accuracy of cognitive workload using various sensor technologies. However, the overall performance of different algorithms and methods remains suboptimal in real-world applications. Some studies in the literature demonstrate that a single modality is sufficient to estimate cognitive workload. These studies are limited to controlled settings, a scenario significantly different from the real world, where data gets corrupted, interrupted, and delayed. In such situations, the use of multiple modalities is needed. Multimodal fusion approaches have been successful in other domains, such as wireless sensor networks, in addressing single-sensor weaknesses and improving information quality and accuracy. These approaches are inherently more reliable when a data source is lost. In the cognitive workload literature, sensors such as electroencephalography (EEG), electrocardiography (ECG), and eye tracking have shown success in estimating aspects of cognitive workload. Multimodal approaches that combine data from several sensors can be more robust for real-time measurement of cognitive workload. In this article, we review the published studies related to multimodal data fusion for estimating cognitive workload and synthesize their main findings. We identify opportunities for designing better multimodal fusion systems for cognitive workload modeling.

66 citations
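The robustness argument in the abstract above, that multimodal approaches remain usable when one sensor stream is lost, can be sketched with a simple feature-level fusion scheme. This is an illustrative example only, not code from any of the reviewed studies; the modality names, feature dimensions, and zero-fill fallback are assumptions for the sketch.

```python
import numpy as np

# Expected feature dimensionality per modality (hypothetical values).
expected_dims = {"eeg": 4, "ecg": 2, "eye": 3}

def fuse_features(modalities):
    """Concatenate z-scored features from each modality into one vector.

    modalities: dict mapping modality name -> 1-D feature array, or None
    when that sensor stream has dropped out. A lost modality is replaced
    by neutral zeros so downstream classifiers still receive a vector of
    fixed length.
    """
    fused = []
    for name in sorted(expected_dims):
        feats = modalities.get(name)
        if feats is None:
            # Sensor lost: substitute a neutral placeholder of the right size.
            fused.append(np.zeros(expected_dims[name]))
        else:
            # Z-score within the modality so scales are comparable.
            z = (feats - feats.mean()) / (feats.std() + 1e-8)
            fused.append(z)
    return np.concatenate(fused)
```

A single classifier trained on the fused vector then degrades gracefully rather than failing outright when, say, the eye tracker loses the pupil.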

Journal ArticleDOI
TL;DR: This paper provides a comprehensive theoretical review of the existing evidence to date, together with several Bayesian meta-analyses and meta-regressions, to determine the size of the effect and explore the experimental conditions in which the effect surfaces.

47 citations

Proceedings ArticleDOI
14 Jun 2018
TL;DR: This work presents a novel method to generalize from a set of trained classifiers to new and unknown subjects, which uses normalized features and a similarity function to match a new subject with similar subjects, for which classifiers have been previously trained.
Abstract: Real-time evaluation of a person's cognitive load can be desirable in many situations. It can be employed to automatically assess or adjust the difficulty of a task, as a safety measure, or in psychological research. Eye-related measures, such as the pupil diameter or blink rate, provide a non-intrusive way to assess the cognitive load of a subject and have therefore been used in a variety of applications. Usually, workload classifiers trained on these measures are highly subject-dependent and transfer poorly to other subjects. We present a novel method to generalize from a set of trained classifiers to new and unknown subjects. We use normalized features and a similarity function to match a new subject with similar subjects, for which classifiers have been previously trained. These classifiers are then used in a weighted voting system to detect workload for an unknown subject. For real-time workload classification, our method performs at 70.4% accuracy. Higher accuracy of 76.8% can be achieved in an offline classification setting.

34 citations
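The scheme described in the abstract above, matching a new subject to previously trained subject-specific classifiers via a similarity function over normalized features and combining their outputs in a weighted vote, can be sketched as follows. This is a hypothetical reconstruction, not the authors' implementation; the baseline z-scoring, the inverse-distance similarity, and the 0.5 decision threshold are assumptions.

```python
import numpy as np

def normalize(features, baseline_mean, baseline_std):
    """Z-score a subject's features against their own baseline recording."""
    return (features - baseline_mean) / baseline_std

def similarity(profile_a, profile_b):
    """Toy similarity: inverse of Euclidean distance between feature profiles."""
    return 1.0 / (1.0 + np.linalg.norm(profile_a - profile_b))

def weighted_vote(sample, new_profile, subjects):
    """Detect workload for an unknown subject via similarity-weighted voting.

    subjects: list of (profile, classifier) pairs, where classifier(sample)
    returns 0 (low workload) or 1 (high workload). Each known subject's vote
    is weighted by how similar their feature profile is to the new subject's.
    """
    weights = np.array([similarity(new_profile, p) for p, _ in subjects])
    votes = np.array([clf(sample) for _, clf in subjects])
    # High workload if the similarity-weighted majority says so.
    return int(np.dot(weights, votes) / weights.sum() >= 0.5)
```

In this sketch, classifiers trained on subjects unlike the new one contribute little to the decision, which is what lets the ensemble transfer to unseen subjects.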

Proceedings ArticleDOI
14 Oct 2019
TL;DR: Results indicate that a combination of behavioral and physiological indicators allows for reliable prediction of cognitive load in an emergency simulation game, opening up new avenues for adaptivity and interaction.
Abstract: The reliable estimation of cognitive load is an integral step towards real-time adaptivity of learning or gaming environments. We introduce a novel and robust machine learning method for cognitive load assessment based on behavioral and physiological measures in a combined within- and cross-participant approach. 47 participants completed different scenarios of a commercially available emergency personnel simulation game realizing several levels of difficulty based on cognitive load. Using interaction metrics, pupil dilation, eye-fixation behavior, and heart rate data, we trained individual, participant-specific forests of extremely randomized trees differentiating between low and high cognitive load. We achieved an average classification accuracy of 72%. We then apply these participant-specific classifiers in a novel way, using similarity between participants, normalization, and relative importance of individual features to successfully achieve the same level of classification accuracy in cross-participant classification. These results indicate that a combination of behavioral and physiological indicators allows for reliable prediction of cognitive load in an emergency simulation game, opening up new avenues for adaptivity and interaction.

34 citations
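The per-participant classifiers named in the abstract above, forests of extremely randomized trees over behavioral and physiological features, can be sketched with scikit-learn's `ExtraTreesClassifier`. This is a minimal illustration under assumed feature choices (pupil dilation, fixation duration, heart rate) and synthetic toy data; it is not the study's code or data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def train_participant_forest(features, labels, seed=0):
    """Train one participant-specific forest of extremely randomized trees.

    features: (n_samples, n_features) array of that participant's measures;
    labels: 0 for low and 1 for high cognitive load.
    """
    forest = ExtraTreesClassifier(n_estimators=100, random_state=seed)
    forest.fit(features, labels)
    return forest

# Synthetic toy data for one participant:
# columns are [pupil_dilation_mm, mean_fixation_ms, heart_rate_bpm].
rng = np.random.default_rng(0)
low = rng.normal([3.0, 250.0, 70.0], [0.2, 20.0, 3.0], size=(40, 3))
high = rng.normal([4.5, 180.0, 90.0], [0.2, 20.0, 3.0], size=(40, 3))
X = np.vstack([low, high])
y = np.array([0] * 40 + [1] * 40)
forest = train_participant_forest(X, y)
```

Cross-participant use, as the paper describes, would then reweight such forests by participant similarity rather than training a new one for each unseen participant.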

Journal ArticleDOI
TL;DR: Results suggest that conceptual knowledge was perceptually grounded in expressive vocabulary in TFA versus receptive vocabulary in AWS, and that lemma/word-form connections were weaker in AWS.

30 citations