
Showing papers by Jean-Claude Martin published in 2014


Journal ArticleDOI
TL;DR: Empirical data is reported suggesting that users only have partial awareness of controlling gaze-contingent displays, and the advantages of simulating joint attention for improving interpersonal skills and user engagement are discussed.
Abstract: This article analyses the issues pertaining to the simulation of joint attention with virtual humans. Gaze represents a powerful communication channel, as illustrated by the pivotal role of joint attention in social interactions. To our knowledge, there have been only a few attempts to simulate the gazing patterns associated with joint attention as a means of developing empathic virtual agents. Eye-tracking technologies now enable creating non-invasive gaze-contingent systems that empower the user with the ability to lead a virtual human's focus of attention in real time. Although gaze control can be deliberate, most of our visual behaviors in everyday life are not. This article reports empirical data suggesting that users have only partial awareness of controlling gaze-contingent displays. The technical challenges involved in detecting the user's focus of attention in virtual reality are reviewed and several solutions are compared. We designed and tested a platform for creating virtual humans endowed with the ability to follow the user's attention. The article discusses the advantages of simulating joint attention for improving interpersonal skills and user engagement. Joint attention plays a major role in the development of autism. The platform we designed is intended for research on and treatment of autism, and the tests included participants with this disorder.
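As a rough, purely illustrative sketch of the gaze-contingent mechanism described above (hypothetical names; a simple angular test stands in for a full ray/mesh intersection), the following Python snippet resolves which scene object the user's eye-tracked gaze is directed at, so that a virtual human could redirect its own gaze to the same target:

import numpy as np

class SceneObject:
    def __init__(self, name, center, radius):
        self.name = name
        self.center = np.asarray(center, dtype=float)
        self.radius = float(radius)

def attended_object(gaze_origin, gaze_direction, objects, max_angle_deg=3.0):
    """Return the scene object closest to the gaze ray, or None.

    A simple angular test replaces a full ray/mesh intersection: an object is a
    candidate if the angle between the gaze ray and the ray to its center is small.
    """
    gaze_origin = np.asarray(gaze_origin, dtype=float)
    d = np.asarray(gaze_direction, dtype=float)
    d /= np.linalg.norm(d)
    best, best_angle = None, np.radians(max_angle_deg)
    for obj in objects:
        to_obj = obj.center - gaze_origin
        dist = np.linalg.norm(to_obj)
        if dist == 0:
            continue
        angle = np.arccos(np.clip(np.dot(d, to_obj / dist), -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best

# Usage: the virtual human follows the user's attention to the red cube.
objects = [SceneObject("red_cube", (1.0, 0.0, 2.0), 0.2),
           SceneObject("blue_ball", (-1.0, 0.5, 2.5), 0.2)]
target = attended_object(gaze_origin=(0, 0, 0), gaze_direction=(0.45, 0.0, 1.0), objects=objects)
if target is not None:
    print(f"Agent orients head/gaze toward: {target.name}")

In a real system the gaze ray would come from the eye tracker's calibrated output and the intersection test would run against the rendered scene geometry.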

40 citations


Journal ArticleDOI
TL;DR: Regression analyses revealed that the increased HPA activity was related to only one alexithymia subfactor: the difficulty in differentiating feelings and distinguishing them from the bodily sensations of emotional arousal.

35 citations


Journal ArticleDOI
TL;DR: This article investigates whether visuo-haptic feedback improves the recognition rate of emotions compared to facial or haptic expressions alone, and highlights the finding that participants are not equally aided by each modality when recognizing emotions.
Abstract: Several studies have investigated the relevance of haptics to convey various types of emotions physically. This article investigates the improvement of the recognition rate of emotions using visuo-haptic feedback compared to facial and haptic expressions alone. Four experiments were conducted in which the recognition rates of emotions using facial, haptic and visuo-haptic expressions were tested. The first experiment evaluates the recognition rate of emotions using facial expressions. The second experiment collects a large corpus of 3D haptic expressions of certain emotions and subsequently identifies the relevant haptic expression for each emotion. The third experiment evaluates the selected haptic expressions through statistical and perceptive tests to retain the ones that result in the most accurate identification of the corresponding emotion. Finally, the fourth experiment studies the effect of visuo-haptic coupling on the recognition of the investigated emotions. Generally, emotions with high amplitu...
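As a concrete illustration of how such per-emotion recognition rates could be tallied for each condition, here is a minimal Python sketch using invented participant responses rather than the paper's corpus:

from collections import Counter

def recognition_rates(responses):
    """responses: list of (intended_emotion, perceived_emotion) pairs."""
    totals, correct = Counter(), Counter()
    for intended, perceived in responses:
        totals[intended] += 1
        if perceived == intended:
            correct[intended] += 1
    return {emotion: correct[emotion] / totals[emotion] for emotion in totals}

# Hypothetical answers from a handful of participants, per presentation condition.
conditions = {
    "facial":       [("joy", "joy"), ("anger", "anger"), ("sadness", "fear"), ("joy", "joy")],
    "haptic":       [("joy", "surprise"), ("anger", "anger"), ("sadness", "sadness"), ("joy", "joy")],
    "visuo-haptic": [("joy", "joy"), ("anger", "anger"), ("sadness", "sadness"), ("joy", "joy")],
}
for name, responses in conditions.items():
    print(name, recognition_rates(responses))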

17 citations


Proceedings ArticleDOI
05 May 2014
TL;DR: This paper presents an interactive facial animation system based on the Component Process Model that generates facial signs of appraisal during a real-time interactive game, and describes a study comparing this model to a categorical approach to the facial animation of emotion.
Abstract: Interactive virtual characters are expected to lead to an intuitive interaction via multiple communicative modalities such as the expression of emotions. Generating facial expressions that are consistent with the interaction context is a challenge. This paper presents our interactive facial animation system based on the Component Process Model, which generates facial signs of appraisal during a real-time interactive game. We describe a study comparing our model to a categorical approach to the facial animation of emotion. Participants interacted with a virtual character in three conditions: no expression of emotion, expression of a categorical emotion, and expression of sequential signs of appraisal. The character in the appraisal condition was reported as being more expressive than in the other two conditions and as experiencing more mental states. In addition, using appraisal signs modified the way participants interacted with the character: participants played more slowly after certain emotions (pride and sadness) were expressed by the agent.
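The sketch below illustrates the sequential-appraisal idea in schematic Python; the appraisal checks and action-unit labels are illustrative placeholders, not the mapping used by the authors' system:

import time

# Illustrative appraisal-check -> facial-sign mapping (hypothetical values).
APPRAISAL_SIGNS = {
    ("novelty", "high"): ["AU1+AU2 (brow raise)", "AU5 (upper lid raise)"],
    ("intrinsic_pleasantness", "low"): ["AU4 (brow lower)", "AU9 (nose wrinkle)"],
    ("goal_conduciveness", "low"): ["AU4 (brow lower)", "AU23 (lip tighten)"],
    ("coping_potential", "low"): ["AU1+AU4 (inner brow raise)", "AU15 (lip corner depress)"],
}

def play_appraisal_sequence(appraisals, send_to_face, delay=0.3):
    """Send the facial signs of each appraisal check to the animation engine in order."""
    for check in appraisals:
        for sign in APPRAISAL_SIGNS.get(check, []):
            send_to_face(sign)
        time.sleep(delay)  # signs unfold over time, one check after another

# Usage: an event appraised as novel, unpleasant and obstructive, with low coping power.
play_appraisal_sequence(
    [("novelty", "high"), ("intrinsic_pleasantness", "low"),
     ("goal_conduciveness", "low"), ("coping_potential", "low")],
    send_to_face=print,
)

The point of contrast with the categorical approach is that the expression is built incrementally from the outcome of each appraisal check rather than triggered as a single prototypical emotion display.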

10 citations


Proceedings ArticleDOI
08 Aug 2014
TL;DR: Haptic expression of emotions has received less attention than other modalities; prior work highlights only non-significant tendencies of complementarity between the visual and haptic modalities.
Abstract: Haptic expression of emotions has received less attention than other modalities. Bonnet et al. [2011] combine visuo-haptic modalities to improve the recognition and discrimination of some emotions. However, few works have investigated how these modalities complement each other. For instance, Bickmore et al. [2010] highlight some non-significant tendencies of complementarity between the visual and haptic modalities.

4 citations


Proceedings Article
26 May 2014
TL;DR: A protocol for collecting a database of expressive full-body dyadic interactions between a user and an autonomous virtual agent is described, enabling the study of full-body expressivity and interaction patterns via avatars.
Abstract: Recent technologies enable the exploitation of full-body expressions in applications such as interactive arts, but remain limited in terms of subtle dyadic interaction patterns. Our project aims at full-body expressive interactions between a user and an autonomous virtual agent. Currently available databases do not contain full-body expressivity and interaction patterns via avatars. In this paper, we describe a protocol defined to collect a database for studying expressive full-body dyadic interactions. We detail the coding scheme used to manually annotate the collected videos. Reliability measures for global annotations of expressivity and interaction are also provided.
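One common chance-corrected reliability measure for such annotations is Cohen's kappa; the minimal Python sketch below computes it over hypothetical labels from two coders (the paper does not state that this exact statistic was used for every annotation):

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two annotators over the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in set(coder_a) | set(coder_b)) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Two coders rating the same clips for overall expressivity (hypothetical labels).
a = ["high", "high", "low", "medium", "high", "low"]
b = ["high", "medium", "low", "medium", "high", "low"]
print(round(cohens_kappa(a, b), 3))  # 0.75 for this toy example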

2 citations