scispace - formally typeset
Author

Nicolas Tsapatsoulis

Bio: Nicolas Tsapatsoulis is an academic researcher at the Cyprus University of Technology. His research focuses on image retrieval and automatic image annotation. He has an h-index of 23 and has co-authored 154 publications receiving 3,770 citations. His previous affiliations include the University of Cyprus and the National and Kapodistrian University of Athens.


Papers
Journal ArticleDOI
TL;DR: Examines basic issues in the signal processing and analysis techniques needed to recognize emotion, consolidating psychological and linguistic analyses along the way; motivated by the PHYSTA project, which aims to develop a hybrid system that uses information from faces and voices to recognize people's emotions.
Abstract: Two channels have been distinguished in human interaction: one transmits explicit messages, which may be about anything or nothing; the other transmits implicit messages about the speakers themselves. Both linguistics and technology have invested enormous efforts in understanding the first, explicit channel, but the second is not as well understood. Understanding the other party's emotions is one of the key tasks associated with the second, implicit channel. To tackle that task, signal processing and analysis techniques have to be developed, while, at the same time, consolidating psychological and linguistic analyses of emotion. This article examines basic issues in those areas. It is motivated by the PHYSTA project, in which we aim to develop a hybrid system capable of using information from faces and voices to recognize people's emotions.

2,255 citations

Journal ArticleDOI
TL;DR: This paper utilizes facial animation parameters (FAPs) to model primary expressions and describes a rule-based technique for handling intermediate ones, leading to parameterized facial expression analysis and synthesis notions, compatible with the MPEG-4 standard.
Abstract: In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become accustomed to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details remaining an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen (1978)). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions, compatible with the MPEG-4 standard.

117 citations

Journal ArticleDOI
TL;DR: The hypothesis is that Instagram hashtags, especially those provided by the photo owner/creator, express the content of a photo more accurately than tags assigned during explicit image annotation processes such as crowdsourcing.

73 citations

Book ChapterDOI
01 Jan 2002
TL;DR: This chapter describes a method for generating emotionally enriched human–computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions, using both MPEG-4 facial definition parameters (FDPs) and facial animation parameters (FAPs).
Abstract: In the framework of MPEG-4 hybrid coding of natural and synthetic data streams, one can include teleconferencing and telepresence applications, in which a synthetic proxy or a virtual agent is capable of substituting the actual user. Such agents can interact with each other, analyzing input textual data entered by the user and multisensory data, including human emotions, facial expressions and nonverbal speech. This not only enhances interactivity, by replacing single media representations with dynamic multimedia renderings, but also assists human–computer interaction issues, letting the system become accustomed to the current needs and feelings of the user. Actual application of this technology [1] is expected in educational environments, 3-D videoconferencing and collaborative workplaces, online shopping and gaming, virtual communities and interactive entertainment. Facial expression synthesis and animation has gained much interest within the MPEG-4 framework; explicit facial animation parameters (FAPs) have been dedicated to this purpose. However, FAP implementation is an open research area [2]. In this chapter we describe a method for generating emotionally enriched human–computer interaction, focusing on analysis and synthesis of primary [3] and intermediate facial expressions [4]. To achieve this goal we utilize both MPEG-4 facial definition parameters (FDPs) and FAPs. The contribution of the work is twofold: it proposes a way of modeling primary expressions using FAPs and it describes a rule-based technique for analyzing both archetypal and intermediate expressions; for the latter we propose an innovative model generation framework. In particular, a relation between FAPs and the activation parameter proposed in classical psychological studies is established, extending the archetypal expression studies that the computer society has concentrated on. 
The overall scheme leads to a parameterized approach to facial expression synthesis that is compatible with the MPEG-4 standard and can be used for emotion understanding.

66 citations

Book
01 Jan 2010
TL;DR: Affective, Interactive, and Cognitive Methods for E-Learning Design: Creating an Optimal Education Experience bridges a current gap in e-learning literature through focus on the study and application of human computer interaction principles in the design of online education.
Abstract: Affective, Interactive, and Cognitive Methods for E-Learning Design: Creating an Optimal Education Experience bridges a current gap in e-learning literature through focus on the study and application of human computer interaction principles in the design of online education in order to offer students the optimal learning experience.

55 citations


Cited by
Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: A reflection on the ethereal nature of i, the square root of minus one, which at first seemed an odd beast: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

01 Jan 2002

9,314 citations

Journal ArticleDOI
TL;DR: In this paper, the authors discuss human emotion perception from a psychological perspective, examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data.
Abstract: Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology.

2,503 citations