Journal ArticleDOI

Affective understanding in film

TL;DR: A systematic approach grounded upon psychology and cinematography is developed to address several important issues in affective understanding, and a holistic method of extracting affective information from the multifaceted audio stream is introduced.
Abstract: Affective understanding of film plays an important role in sophisticated movie analysis, ranking and indexing. However, due to the seemingly inscrutable nature of emotions and the broad affective gap from low-level features, this problem is seldom addressed. In this paper, we develop a systematic approach grounded upon psychology and cinematography to address several important issues in affective understanding. An appropriate set of affective categories is identified, and steps for their classification are developed. A number of effective audiovisual cues are formulated to help bridge the affective gap. In particular, a holistic method of extracting affective information from the multifaceted audio stream has been introduced. Besides classifying every scene in Hollywood-domain movies probabilistically into the affective categories, some exciting applications are demonstrated. The experimental results validate the proposed approach and the efficacy of the audiovisual cues.
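The scene-level pipeline the abstract describes (low-level audiovisual cues combined into a probabilistic assignment over affective categories) can be sketched roughly as follows. The category names, cue list, and softmax combination here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Illustrative affective categories; the paper derives its own set
# from psychology, so these names are assumptions.
CATEGORIES = ["Anger", "Fear", "Happy", "Sad", "Neutral"]

def classify_scene(cue_scores, weights):
    """Combine per-scene cue scores into category probabilities.

    cue_scores: (n_cues,) low-level audiovisual measurements.
    weights:    (n_categories, n_cues) cue-to-category weights,
                standing in for a trained model.
    """
    logits = weights @ cue_scores
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    probs = exp / exp.sum()
    return dict(zip(CATEGORIES, probs))

# Toy scene with three cues (say motion, audio energy, brightness)
# and random weights in place of learned ones.
rng = np.random.default_rng(0)
cues = np.array([0.8, 0.6, 0.2])
probs = classify_scene(cues, rng.normal(size=(5, 3)))
```

The softmax guarantees a proper probability distribution over the categories, matching the paper's probabilistic (rather than hard) scene labeling.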


Citations
Journal ArticleDOI
TL;DR: A multimodal data set for the analysis of human affective states is presented, and a novel method for stimuli selection is proposed using retrieval by affective tags from the last.fm website, video highlight detection, and an online assessment tool.
Abstract: We present a multimodal data set for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute long excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance, and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimuli selection is proposed using retrieval by affective tags from the last.fm website, video highlight detection, and an online assessment tool. An extensive analysis of the participants' ratings during the experiment is presented. Correlates between the EEG signal frequencies and the participants' ratings are investigated. Methods and results are presented for single-trial classification of arousal, valence, and like/dislike ratings using the modalities of EEG, peripheral physiological signals, and multimedia content analysis. Finally, decision fusion of the classification results from different modalities is performed. The data set is made publicly available and we encourage other researchers to use it for testing their own affective state estimation methods.
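The decision fusion step mentioned at the end of the abstract can be sketched as a simple late fusion: each modality's classifier outputs class probabilities, which are averaged. The equal weighting and two-class setup below are assumptions for illustration, not the dataset paper's exact scheme.

```python
import numpy as np

def fuse_decisions(modality_probs, weights=None):
    """Late fusion: weighted average of per-modality class probabilities.

    modality_probs: list of arrays, each (n_classes,) and summing to 1.
    weights: optional per-modality weights; defaults to equal weighting.
    """
    probs = np.asarray(modality_probs, dtype=float)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()

# Toy high/low-arousal probabilities from three modalities:
# EEG, peripheral physiological signals, multimedia content analysis.
eeg  = np.array([0.70, 0.30])
phys = np.array([0.55, 0.45])
mca  = np.array([0.60, 0.40])
fused = fuse_decisions([eeg, phys, mca])
```

Averaging probabilities (rather than hard votes) lets a confident modality outweigh uncertain ones while still producing a valid distribution.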

3,013 citations


Cites background from "Affective understanding in film"


  • ...Last.fm offers an API, allowing one to retrieve tags and tagged songs....


Proceedings ArticleDOI
25 Oct 2010
TL;DR: This work investigates and develops methods to extract and combine low-level features that represent the emotional content of an image, and uses these for image emotion classification.
Abstract: Images can affect people on an emotional level. Since the emotions that arise in the viewer of an image are highly subjective, they are rarely indexed. However there are situations when it would be helpful if images could be retrieved based on their emotional content. We investigate and develop methods to extract and combine low-level features that represent the emotional content of an image, and use these for image emotion classification. Specifically, we exploit theoretical and empirical concepts from psychology and art theory to extract image features that are specific to the domain of artworks with emotional expression. For testing and training, we use three data sets: the International Affective Picture System (IAPS); a set of artistic photography from a photo sharing site (to investigate whether the conscious use of colors and textures displayed by the artists improves the classification); and a set of peer rated abstract paintings to investigate the influence of the features and ratings on pictures without contextual content. Improved classification results are obtained on the International Affective Picture System (IAPS), compared to state of the art work.
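One family of low-level emotional features used in this line of work maps brightness and saturation to pleasure, arousal, and dominance via the linear relations reported by Valdez and Mehrabian (1994). The sketch below computes image-level means with the standard-library `colorsys` module; treating per-pixel HLS lightness as "brightness" is a simplifying assumption.

```python
import colorsys

def emotion_coordinates(rgb_pixels):
    """Mean pleasure/arousal/dominance of an image from its average
    brightness (Y) and saturation (S), using Valdez & Mehrabian's
    linear coefficients. rgb_pixels: iterable of (r, g, b) in 0..255."""
    n = 0
    y = s = 0.0
    for r, g, b in rgb_pixels:
        _, lightness, saturation = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        y += lightness
        s += saturation
        n += 1
    y, s = y / n, s / n
    pleasure  =  0.69 * y + 0.22 * s
    arousal   = -0.31 * y + 0.60 * s
    dominance = -0.76 * y + 0.32 * s
    return pleasure, arousal, dominance

# A bright, desaturated patch should score pleasant but calm.
p, a, d = emotion_coordinates([(250, 250, 250)] * 4)
```

These three scalars are only a starting point; the paper combines many such features (texture, composition, content) before classification.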

734 citations


Cites background from "Affective understanding in film"

  • ...in [34] or [29] are at the higher, emotional level....


  • ...Work on the affective content analysis in movies is presented in [11] and [29]....


  • ...As was shown in [29] and many psychological studies [22, 21], choosing meaningful emotional categories is not an easy task and requires thorough consideration....


Journal ArticleDOI
01 Nov 2011
TL;DR: Methods are analyzed for video structure analysis (shot boundary detection, key frame extraction, and scene segmentation), feature extraction (static key frame features, object features, and motion features), video data mining, video annotation, and video retrieval, including query interfaces.
Abstract: Video indexing and retrieval have a wide spectrum of promising applications, motivating the interest of researchers worldwide. This paper offers a tutorial and an overview of the landscape of general strategies in visual content-based video indexing and retrieval, focusing on methods for video structure analysis, including shot boundary detection, key frame extraction and scene segmentation, extraction of features including static key frame features, object features and motion features, video data mining, video annotation, video retrieval including query interfaces, similarity measure and relevance feedback, and video browsing. Finally, we analyze future research directions.
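Of the structure-analysis steps the survey covers, shot boundary detection is the most basic. A classic baseline detects hard cuts as large jumps in histogram distance between consecutive frames; the fixed threshold below is a simplification of the adaptive schemes real systems use.

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Detect hard cuts via grayscale histogram differences.

    frames: iterable of 2-D uint8 arrays (grayscale frames).
    Returns indices i where a cut occurs between frame i-1 and i.
    """
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()            # normalize to a distribution
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            cuts.append(i)                  # total-variation distance spike
        prev = hist
    return cuts

# Toy sequence: 5 dark frames, then 5 bright frames -> one cut at index 5.
dark   = [np.full((8, 8), 10,  np.uint8)] * 5
bright = [np.full((8, 8), 240, np.uint8)] * 5
cuts = shot_boundaries(dark + bright)
```

Gradual transitions (fades, dissolves) need more machinery, e.g. twin-comparison thresholds or edge-change ratios, which the survey discusses.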

606 citations

Journal ArticleDOI
TL;DR: This article provides a comprehensive review of the methods that have been proposed for music emotion recognition and concludes with suggestions for further research.
Abstract: The proliferation of MP3 players and the exploding amount of digital music content call for novel ways of music organization and retrieval to meet the ever-increasing demand for easy and effective information access. As almost every music piece is created to convey emotion, music organization and retrieval by emotion is a reasonable way of accessing music information. A good deal of effort has been made in the music information retrieval community to train a machine to automatically recognize the emotion of a music signal. A central issue of machine recognition of music emotion is the conceptualization of emotion and the associated emotion taxonomy. Different viewpoints on this issue have led to the proposal of different ways of emotion annotation, model training, and result visualization. This article provides a comprehensive review of the methods that have been proposed for music emotion recognition. Moreover, as music emotion recognition is still in its infancy, there are many open issues. We review the solutions that have been proposed to address these issues and conclude with suggestions for further research.

340 citations

Journal ArticleDOI
TL;DR: A large video database, namely LIRIS-ACCEDE, is proposed, which consists of 9,800 good quality video excerpts with a large content diversity and provides four experimental protocols and a baseline for prediction of emotions using a large set of both visual and audio features.
Abstract: Research in affective computing requires ground truth data for training and benchmarking computational models for machine-based emotion understanding. In this paper, we propose a large video database, namely LIRIS-ACCEDE, for affective content analysis and related applications, including video indexing, summarization or browsing. In contrast to existing datasets with very few video resources and limited accessibility due to copyright constraints, LIRIS-ACCEDE consists of 9,800 good quality video excerpts with a large content diversity. All excerpts are shared under creative commons licenses and can thus be freely distributed without copyright issues. Affective annotations were achieved using crowdsourcing through a pair-wise video comparison protocol, thereby ensuring that annotations are fully consistent, as testified by a high inter-annotator agreement, despite the large diversity of raters’ cultural backgrounds. In addition, to enable fair comparison and to benchmark the progress of future affective computational models, we further provide four experimental protocols and a baseline for prediction of emotions using a large set of both visual and audio features. The dataset (the video clips, annotations, features and protocols) is publicly available at: http://liris-accede.ec-lyon.fr/.
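The pair-wise comparison protocol described above ultimately has to be turned into a ranking of clips. A minimal way to do that is to order items by how often they win comparisons; this win-count sketch is far simpler than the dataset's actual protocol and is shown only to make the idea concrete.

```python
from collections import defaultdict

def rank_from_pairs(comparisons):
    """Order items by number of pairwise wins (ties broken by name).

    comparisons: list of (winner, loser) pairs, e.g. from crowdsourced
    'which clip feels more positive?' judgments.
    """
    wins = defaultdict(int)
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        items.update((winner, loser))
    return sorted(items, key=lambda item: (-wins[item], item))

# Toy judgments over three clips.
pairs = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
ranked = rank_from_pairs(pairs)
```

More principled models (e.g. Bradley-Terry) infer latent scores from the same pairwise data and handle noisy or intransitive judgments better.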

270 citations


Cites background or methods from "Affective understanding in film"

  • ...The other features in this set, from the third best performing feature to the least efficient feature are: audio zero-crossing rate, entropy complexity [56], disparity of most salient points (standard deviation of normalized coordinates), audio asymmetry envelope, number of scene cuts per frame, depth of field (using the blur map computed in [58]), compositional balance [57], audio flatness, orientation of the most harmonious template [53], normalized number of white frames, the color energy and color contrast [21], scene complexity (area of the bounding box that encloses the top 96....


  • ...In the same way, Wang and Cheong introduced features inspired from psychology and film-making rules [21]....

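Several of the audio features listed in the snippet above are straightforward to compute; for instance, a per-frame zero-crossing rate. The frame length and normalization below are arbitrary illustrative choices.

```python
import numpy as np

def zero_crossing_rate(signal, frame_len=1024):
    """Per-frame zero-crossing rate of a 1-D sample array.

    A crossing is counted whenever the sign changes between adjacent
    samples; the count is normalized by the frame length.
    """
    n_frames = len(signal) // frame_len
    rates = []
    for k in range(n_frames):
        frame = signal[k * frame_len:(k + 1) * frame_len]
        crossings = np.sum(np.abs(np.diff(np.sign(frame))) > 0)
        rates.append(crossings / frame_len)
    return np.array(rates)

# A 440 Hz sine at 44.1 kHz crosses zero about 880 times per second,
# i.e. roughly 0.02 crossings per sample.
t = np.arange(44100) / 44100.0
zcr = zero_crossing_rate(np.sin(2 * np.pi * 440 * t))
```

ZCR is a cheap proxy for spectral brightness and noisiness, which is why it appears among low-level affective audio cues.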

References
Journal ArticleDOI
TL;DR: Several arguments that support the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Abstract: The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
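A minimal working instance of the linear SVM the tutorial describes can be trained with the Pegasos subgradient method; this is an illustrative sketch (Pegasos postdates the tutorial and is not its algorithm), using a tiny separable data set and no bias term.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny linear SVM (no bias) via Pegasos-style subgradient descent.

    Minimizes lam/2 * ||w||^2 + mean hinge loss.
    X: (n, d) features; y: labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            if y[i] * (w @ X[i]) < 1:        # margin violated: hinge active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only the regularizer acts
                w = (1 - eta * lam) * w
    return w

# Linearly separable toy data in two dimensions.
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = train_linear_svm(X, y)
preds = np.sign(X @ w)
```

The kernel trick the tutorial develops replaces the inner products `w @ X[i]` with kernel evaluations, yielding nonlinear decision boundaries from the same machinery.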

15,696 citations

Book
01 Jan 1957
TL;DR: In this study, the authors deal with the nature and theory of meaning and present a new, objective method for its measurement which they call the semantic differential, which can be adapted to a wide variety of problems in such areas as clinical psychology, social psychology, linguistics, mass communications, esthetics, and political science.
Abstract: In this pioneering study, the authors deal with the nature and theory of meaning and present a new, objective method for its measurement which they call the semantic differential. This instrument is not a specific test, but rather a general technique of measurement that can be adapted to a wide variety of problems in such areas as clinical psychology, social psychology, linguistics, mass communications, esthetics, and political science. The core of the book is the authors' description, application, and evaluation of this important tool and its far-reaching implications for empirical research.
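Scoring a semantic differential instrument amounts to averaging bipolar adjective-scale ratings within the classical factors (evaluation, potency, activity). The scale names and factor assignments below are illustrative assumptions, not Osgood's published item set.

```python
# Hypothetical scale-to-factor assignments for a semantic differential.
FACTORS = {
    "evaluation": ["good-bad", "pleasant-unpleasant"],
    "potency":    ["strong-weak", "heavy-light"],
    "activity":   ["active-passive", "fast-slow"],
}

def factor_scores(ratings):
    """Average 1..7 bipolar ratings within each factor.

    ratings: dict mapping scale name -> rating for one concept.
    Returns a dict of per-factor mean scores.
    """
    return {
        factor: sum(ratings[scale] for scale in scales) / len(scales)
        for factor, scales in FACTORS.items()
    }

ratings = {"good-bad": 6, "pleasant-unpleasant": 7,
           "strong-weak": 3, "heavy-light": 4,
           "active-passive": 5, "fast-slow": 5}
scores = factor_scores(ratings)
```

The resulting evaluation/potency/activity coordinates are a direct ancestor of the valence-arousal(-dominance) spaces used throughout affective computing, including the film work above.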

9,476 citations

Book
01 Jan 1872
TL;DR: An annotated edition of Darwin's The Expression of the Emotions in Man and Animals, with prefaces, commentaries, and an afterword by Paul Ekman and appendices by Phillip Prodger.
Abstract (table of contents):
Acknowledgments
List of Illustrations: Figures; Plates
Preface to the Anniversary Edition, by Paul Ekman
Preface to the Third Edition, by Paul Ekman
Preface to the Second Edition, by Francis Darwin
Introduction to the Third Edition, by Paul Ekman
The Expression of the Emotions in Man and Animals
Introduction to the First Edition
1. General Principles of Expression
2. General Principles of Expression (continued)
3. General Principles of Expression (continued)
4. Means of Expression in Animals
5. Special Expressions of Animals
6. Special Expressions of Man: Suffering and Weeping
7. Low Spirits, Anxiety, Grief, Dejection, Despair
8. Joy, High Spirits, Love, Tender Feelings, Devotion
9. Reflection, Meditation, Ill-temper, Sulkiness, Determination
10. Hatred and Anger
11. Disdain, Contempt, Disgust, Guilt, Pride, Etc., Helplessness, Patience, Affirmation and Negation
12. Surprise, Astonishment, Fear, Horror
13. Self-attention, Shame, Shyness, Modesty: Blushing
14. Concluding Remarks and Summary
Afterword, by Paul Ekman
Appendix I: Charles Darwin's Obituary, by T. H. Huxley
Appendix II: Changes to the Text, by Paul Ekman
Appendix III: Photography and The Expression of the Emotions, by Phillip Prodger
Appendix IV: A Note on the Orientation of the Plates, by Phillip Prodger and Paul Ekman
Appendix V: Concordance of Illustrations, by Phillip Prodger
Appendix VI: List of Head Words from the Index to the First Edition
Notes
Notes to the Commentaries
Index

9,342 citations


"Affective understanding in film" refers background in this paper

  • ...The Darwinian perspective offers Ekman’s List which has been proven through substantial experimental backing to be universally identifiable and distinguishable across cultural borders [40]....


  • ...Thus some may, in attempting to “unify” matters, force features arising naturally from all perspectives (e.g., the underlying physiological basis of speech audio features causes it to map more naturally in the Darwinian perspective) to map to the VA representation before mapping to the output emotions....


  • ...Since the dimension and nature of the SAV is solely dependent on the exact output emotions chosen, it is intimately related to the Darwinian perspective....


  • ...The Darwinian perspective provides the theoretical basis on how to categorize emotions meaningfully, but says nothing about other rich information residing in the film domain....


  • ...The Darwinian perspective provides another motivation to subdivide “Happy”, which is observed to contain the most diversity in affective states [35], and hence able to yield sufficiently distinctive finer partitions....


01 Jan 1999

4,584 citations


"Affective understanding in film" refers background in this paper

  • ...A posteriori sigmoidals fitted to the decision values of the SVMs are then learnt for each class-pair [30], where the sigmoidals are of the form p(f) = 1/(1 + exp(A·f + B)), with A and B as adjustable parameters,...

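The calibration described in the snippet above is Platt scaling: fit a sigmoid over the SVM decision values. The sketch below fits the two parameters by plain gradient descent on the log-loss; Platt's paper uses a more careful Newton-style optimizer with regularized target probabilities, so this is a simplified stand-in.

```python
import math

def fit_platt(decision_values, labels, lr=0.1, steps=2000):
    """Fit p(y=1 | f) = 1 / (1 + exp(A*f + B)) by gradient descent.

    decision_values: raw SVM outputs f(x); labels: 0/1 class labels.
    Returns the fitted (A, B). Note A comes out negative when larger
    decision values indicate the positive class.
    """
    A, B = 0.0, 0.0
    for _ in range(steps):
        gA = gB = 0.0
        for f, y in zip(decision_values, labels):
            p = 1.0 / (1.0 + math.exp(A * f + B))
            gA += (p - y) * (-f)   # d(log-loss)/dA for this parameterization
            gB += (p - y) * (-1.0) # d(log-loss)/dB
        A -= lr * gA / len(labels)
        B -= lr * gB / len(labels)
    return A, B

A, B = fit_platt([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])

def platt_prob(f):
    return 1.0 / (1.0 + math.exp(A * f + B))
```

Once each class-pair has its sigmoid, the pairwise probabilities can be combined into multi-class posteriors, which is what enables the probabilistic scene labels reported in the paper.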

01 Jan 1995
TL;DR: The International Affective Picture System (IAPS) provides a set of normative emotional stimuli for experimental investigations of emotion and attention, developed by the NIMH Center for Emotion and Attention.
Abstract: The International Affective Picture System (IAPS) is being developed to provide a set of normative emotional stimuli for experimental investigations of emotion and attention. The goal is to develop a large set of standardized, emotionally-evocative, internationally accessible, color photographs that includes contents across a wide range of semantic categories. The IAPS (pronounced EYE-APS), along with the International Affective Digitized Sound system (IADS), the Affective Lexicon of English Words (ANEW), as well as other collections of affective stimuli, are being developed and distributed by the NIMH Center for Emotion and Attention (CSEA) at the University of Florida in order to provide standardized materials that are available to researchers in the study of emotion and attention. The existence of these collections of normatively rated affective stimuli should: 1) allow better experimental control in the selection of emotional stimuli, 2) facilitate the comparison of results across different studies conducted in the same or different laboratory, and 3) encourage and allow exact replications within and across research labs who are assessing basic and applied problems in psychological science.
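The "better experimental control" the abstract mentions comes from selecting stimuli by their normative ratings. A minimal sketch: pick pictures whose valence and arousal norms fall in target windows. The picture IDs and rating values below are made up for illustration, not real IAPS norms.

```python
def select_stimuli(norms, valence_range, arousal_range):
    """Pick stimulus IDs whose normative ratings fall in target windows.

    norms: dict id -> (valence, arousal), each on the 1..9 SAM scale.
    Returns a sorted list of matching IDs. Real selections typically
    also balance semantic category, familiarity, etc.
    """
    v_lo, v_hi = valence_range
    a_lo, a_hi = arousal_range
    return sorted(pid for pid, (v, a) in norms.items()
                  if v_lo <= v <= v_hi and a_lo <= a <= a_hi)

# Hypothetical norms: (valence, arousal) per picture ID.
norms = {"1050": (3.5, 6.9),
         "2070": (8.2, 4.5),
         "7010": (4.9, 2.8)}
selected = select_stimuli(norms, valence_range=(1, 4), arousal_range=(6, 9))
```

Selecting from shared norms is what makes results comparable across labs, the key benefit the abstract lists.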

2,795 citations


"Affective understanding in film" refers background in this paper

  • ...Worry does not clearly belong to any of the categories; thinking in terms of VA axes suggests that the scene falls within the low A-V- part of the VA space; hence these scenes are categorized under Sad (see also [47] for using VA space to categorize images emotionally)....

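The VA-space reasoning in the snippet above (low arousal plus negative valence maps to Sad) amounts to partitioning the valence-arousal plane into labeled regions. A quadrant-based sketch, with illustrative category names and the A-V- quadrant mapped to Sad as in the snippet:

```python
def va_to_category(valence, arousal):
    """Map a (valence, arousal) point, each in [-1, 1] with 0 as the
    neutral midpoint, to a coarse emotion label by quadrant."""
    if valence >= 0 and arousal >= 0:
        return "Happy/Excited"   # V+ A+
    if valence < 0 and arousal >= 0:
        return "Angry/Fearful"   # V- A+
    if valence < 0 and arousal < 0:
        return "Sad"             # V- A-, as in the snippet above
    return "Calm/Content"        # V+ A-
```

Real category schemes subdivide these quadrants further (the film paper, for instance, splits "Happy"), but the quadrant view is the usual starting point.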