
Showing papers on "Valence (psychology)" published in 2019


Journal ArticleDOI
TL;DR: In this paper, the authors collected, annotated, and prepared for public distribution a new database of facial emotions in the wild (called AffectNet), which contains more than 1,000,000 facial images collected from the Internet by querying three major search engines with 1,250 emotion-related keywords in six different languages.
Abstract: Automated affective computing in the wild setting is a challenging problem in computer vision. Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (aka the categorical model). There are very limited annotated facial databases for affective computing in the continuous dimensional model (e.g., valence and arousal). To meet this need, we collected, annotated, and prepared for public distribution a new database of facial emotions in the wild (called AffectNet). AffectNet contains more than 1,000,000 facial images collected from the Internet by querying three major search engines using 1,250 emotion-related keywords in six different languages. About half of the retrieved images were manually annotated for the presence of seven discrete facial expressions and the intensity of valence and arousal. AffectNet is by far the largest database of facial expression, valence, and arousal in the wild, enabling research in automated facial expression recognition in two different emotion models. Two baseline deep neural networks are used to classify images in the categorical model and predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines can perform better than conventional machine learning methods and off-the-shelf facial expression recognition systems.
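The two baseline objectives described in the abstract (seven-way discrete expression classification, plus continuous valence/arousal prediction) can be sketched as loss functions. This is a minimal illustrative sketch, not AffectNet's actual training code; the expression names and the [-1, 1] valence/arousal range are assumptions, since the abstract does not enumerate them.

```python
import numpy as np

# Illustrative label set; the abstract mentions seven discrete expressions
# but does not enumerate them, so these names are an assumption.
EXPRESSIONS = ["neutral", "happy", "sad", "surprise", "fear", "disgust", "anger"]

def categorical_loss(logits, label_idx):
    """Softmax cross-entropy for the categorical-model baseline."""
    z = logits - logits.max()                     # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())       # log-softmax
    return float(-log_probs[label_idx])

def dimensional_loss(pred_va, true_va):
    """Mean squared error for the dimensional-model baseline, where
    pred_va/true_va are (valence, arousal) pairs (assumed in [-1, 1])."""
    pred_va, true_va = np.asarray(pred_va), np.asarray(true_va)
    return float(((pred_va - true_va) ** 2).mean())
```

In the paper, two separate deep networks serve as baselines, one per emotion model; these functions only illustrate the two training objectives, not the network architectures.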

432 citations


Journal ArticleDOI
20 Dec 2019-Science
TL;DR: Analysis of the terms used for emotions across a sample of 2474 spoken languages reveals low similarity across cultures, and finds evidence of universal structure in emotion colexification networks, with all families differentiating emotions primarily on the basis of hedonic valence and physiological activation.
Abstract: Many human languages have words for emotions such as "anger" and "fear," yet it is not clear whether these emotions have similar meanings across languages, or why their meanings might vary. We estimate emotion semantics across a sample of 2474 spoken languages using "colexification"-a phenomenon in which languages name semantically related concepts with the same word. Analyses show significant variation in networks of emotion concept colexification, which is predicted by the geographic proximity of language families. We also find evidence of universal structure in emotion colexification networks, with all families differentiating emotions primarily on the basis of hedonic valence and physiological activation. Our findings contribute to debates about universality and diversity in how humans understand and experience emotion.

174 citations


Journal ArticleDOI
TL;DR: This framework builds on a constructionist theory of emotion to explain how instances involving diverse physiological and behavioral patterns can be conceptualized as belonging to the same emotion category.

124 citations


Journal ArticleDOI
TL;DR: It is found that vocal bursts convey at least 24 distinct kinds of emotion, more so than affective appraisals (including valence and arousal), and the emotion categories conveyed by vocal bursts are bridged by smooth gradients with continuously varying meaning.
Abstract: Emotional vocalizations are central to human social life. Recent studies have documented that people recognize at least 13 emotions in brief vocalizations. This capacity emerges early in development, is preserved in some form across cultures, and informs how people respond emotionally to music. What is poorly understood is how emotion recognition from vocalization is structured within what we call a semantic space, the study of which addresses questions critical to the field: How many distinct kinds of emotions can be expressed? Do expressions convey emotion categories or affective appraisals (e.g., valence, arousal)? Is the recognition of emotion expressions discrete or continuous? Guided by a new theoretical approach to emotion taxonomies, we apply large-scale data collection and analysis techniques to judgments of 2,032 emotional vocal bursts produced in laboratory settings (Study 1) and 48 found in the real world (Study 2) by U.S. English speakers (N = 1,105). We find that vocal bursts convey at least 24 distinct kinds of emotion. Emotion categories (sympathy, awe), more so than affective appraisals (including valence and arousal), organize emotion recognition. In contrast to discrete emotion theories, the emotion categories conveyed by vocal bursts are bridged by smooth gradients with continuously varying meaning. We visualize the complex, high-dimensional space of emotion conveyed by brief human vocalization within an online interactive map.

105 citations


Journal ArticleDOI
TL;DR: Models advanced to explain hemispheric asymmetries in representation of emotions will be discussed following their historical progression and a review of recent studies which have documented laterality effects within specific brain structures known to play a critical role in different components of emotions.
Abstract: Models advanced to explain hemispheric asymmetries in the representation of emotions will be discussed following their historical progression. First, the clinical observations that have suggested a general dominance of the right hemisphere for all kinds of emotions will be reviewed. Then the experimental investigations that have led to the proposal of a different hemispheric specialization for positive versus negative emotions (valence hypothesis) or, alternatively, for approach versus avoidance tendencies (motivational hypothesis) will be surveyed. The discussion of these general models will be followed by a review of recent studies which have documented laterality effects within specific brain structures known to play a critical role in different components of emotions, namely the amygdala in the computation of emotionally laden stimuli, the ventromedial prefrontal cortex in the integration between cognition and emotion and in the control of impulsive reactions, and the anterior insula in the conscious experience of emotion. Results of these recent investigations support and provide an updated, integrated version of early models assuming a general right-hemisphere dominance for all kinds of emotions.

103 citations


Journal ArticleDOI
TL;DR: Research on surprise relevant to the cognitive-evolutionary model of surprise proposed by Meyer, Reisenzein, and Schützwohl (1997) is reviewed and the majority of the assumptions of the model are found empirically supported.
Abstract: Research on surprise relevant to the cognitive-evolutionary model of surprise proposed by Meyer, Reisenzein, and Schützwohl (1997) is reviewed. The majority of the assumptions of the model are found empirically supported. Surprise is evoked by unexpected (schema-discrepant) events and its intensity is determined by the degree of schema-discrepancy, whereas the novelty and the valence of the eliciting events probably do not have an independent effect. Unexpected events cause an automatic interruption of ongoing mental processes that is followed by an attentional shift and attentional binding to the events, which is often followed by causal and other event analysis processes and by schema revision. The facial expression of surprise postulated by evolutionary emotion psychologists has been found to occur rarely in surprise, for as yet unknown reasons. A physiological orienting response marked by skin conductance increase, heart rate deceleration, and pupil dilation has been observed to occur regularly in the standard version of the repetition-change paradigm of surprise induction, but the specificity of these reactions as indicators of surprise is controversial. There is indirect evidence for the assumption that the feeling of surprise consists of the direct awareness of the schema-discrepancy signal, but this feeling, or at least the self-report of surprise, is also influenced by experienced interference. In contrast, facial feedback probably does not contribute substantially to the feeling of surprise, and the evidence for the hypothesis that surprise is affected by the difficulty of explaining an unexpected event is, in our view, inconclusive. Regardless of how the surprise feeling is constituted, there is evidence that it has both motivational and informational effects. Finally, the prediction failure implied by unexpected events sometimes causes a negative feeling, but there is no convincing evidence that this is always the case, and we argue that even if it were so, this would not be a sufficient reason for regarding this feeling as a component, rather than an effect, of surprise.

90 citations


Journal ArticleDOI
TL;DR: A convolutional neural network is developed that accurately decodes images into 11 distinct emotion categories and is validated using more than 25,000 images and movies and shows that image content is sufficient to predict the category and valence of human emotion ratings.
Abstract: Theorists have suggested that emotions are canonical responses to situations ancestrally linked to survival. If so, then emotions may be afforded by features of the sensory environment. However, few computational models describe how combinations of stimulus features evoke different emotions. Here, we develop a convolutional neural network that accurately decodes images into 11 distinct emotion categories. We validate the model using more than 25,000 images and movies and show that image content is sufficient to predict the category and valence of human emotion ratings. In two functional magnetic resonance imaging studies, we demonstrate that patterns of human visual cortex activity encode emotion category-related model output and can decode multiple categories of emotional experience. These results suggest that rich, category-specific visual features can be reliably mapped to distinct emotions, and they are coded in distributed representations within the human visual system.

89 citations


Journal ArticleDOI
TL;DR: The results show that verbal instructions can readily overwrite the intrinsic meaning of facial emotions, with clear benefits for social communication as learning and anticipation of threat and safety readjusted to accurately track environmental changes.
Abstract: Facial expressions inform about other people's emotions and motivations and are thus central for social communication. However, the meaning of facial expressions may change depending on what we have learned about the related consequences. For instance, a smile might easily become threatening when displayed by a person who is known to be dangerous. The present study examined the malleability of emotional facial valence by means of social learning. To this end, facial expressions served as cues for verbally instructed threat-of-shock or safety (e.g., "happy faces cue shocks"). Moreover, reversal instructions tested the flexibility of threat/safety associations (e.g., "now happy faces cue safety"). Throughout the experiment, happy, neutral, and angry facial expressions were presented and auditory startle probes elicited defensive reflex activity. Results show that self-reported ratings and physiological reactions to threat/safety cues dissociate. Regarding threat and valence ratings, happy facial expressions tended to be more resistant to becoming a threat cue, and angry faces remained threatening even when instructed as a safety cue. For physiological response systems, however, we observed threat-potentiated startle reflex and enhanced skin conductance responses for threat compared to safety cues, regardless of whether threat was cued by happy or angry faces. Thus, the incongruity of visual and verbal threat/safety information modulates conscious perception, but not the activation of physiological response systems. These results show that verbal instructions can readily overwrite the intrinsic meaning of facial emotions, with clear benefits for social communication, as learning and anticipation of threat and safety readjusted to accurately track environmental changes.

89 citations


Journal ArticleDOI
TL;DR: The authors discuss why tests of a basic-six model of emotion are not tests of the diagnostic value of facial expression more generally, and offer an alternative conceptual and methodological approach that reveals a richer taxonomy of emotion.
Abstract: What would a comprehensive atlas of human emotions include? For 50 years, scientists have sought to map emotion-related experience, expression, physiology, and recognition in terms of the "basic six": anger, disgust, fear, happiness, sadness, and surprise. Claims about the relationships between these six emotions and prototypical facial configurations have provided the basis for a long-standing debate over the diagnostic value of expression (for a review and the latest installment in this debate, see Barrett et al., p. 1). Building on recent empirical findings and methodologies, we offer an alternative conceptual and methodological approach that reveals a richer taxonomy of emotion. Dozens of distinct varieties of emotion are reliably distinguished by language, evoked in distinct circumstances, and perceived in distinct expressions of the face, body, and voice. Traditional models, both the basic six and the affective-circumplex model (valence and arousal), capture a fraction of the systematic variability in emotional response. In contrast, emotion-related responses (e.g., the smile of embarrassment, triumphant postures, sympathetic vocalizations, blends of distinct expressions) can be explained by richer models of emotion. Given these developments, we discuss why tests of a basic-six model of emotion are not tests of the diagnostic value of facial expression more generally. Determining the full extent of what facial expressions can tell us, marginally and in conjunction with other behavioral and contextual cues, will require mapping the high-dimensional, continuous space of facial, bodily, and vocal signals onto richly multifaceted experiences using large-scale statistical modeling and machine-learning methods.

80 citations


Journal ArticleDOI
TL;DR: Children exhibit a clear positivity advantage for both word and face processing, indicating similar processing biases in both modalities; there is a need for future research that systematically analyses the impact of age and modality on the emergence of these valence effects.
Abstract: Emotional valence is predominately conveyed in social interactions by words and facial expressions. The existence of broad biases which favor more efficient processing of positive or negative emotions is still a controversial matter. While so far this question has been investigated separately for each modality, in this narrative review of the literature we focus on valence effects in processing both words and facial expressions. In order to identify the factors underlying positivity and negativity effects, and to uncover whether these effects depend on modality and age, we present and analyze three representative overviews of the literature concerning valence effects in word processing, face processing, and combinations of word and face processing. Our analysis of word processing studies points to a positivity bias or a balanced processing of positive and negative words, whereas the analysis of face processing studies showed the existence of separate positivity and negativity biases depending on the experimental paradigm. The mixed results seem to be a product of the different methods and types of stimuli being used. Interestingly, we found that children exhibit a clear positivity advantage for both word and face processing, indicating similar processing biases in both modalities. Over the course of development, the initial positivity advantage gradually disappears, and in some face processing studies even reverses into a negativity bias. We therefore conclude that there is a need for future research that systematically analyses the impact of age and modality on the emergence of these valence effects. Finally, we discuss possible explanations for the presence of the early positivity advantage and its subsequent decrease.

76 citations


Journal ArticleDOI
TL;DR: It is shown that speech prosody can communicate at least 12 emotions that are recognized across two different cultures, and that emotion categories drive the recognition of emotions more so than affective features, including Valence.
Abstract: Central to emotion science is the degree to which categories, such as Awe, or broader affective features, such as Valence, underlie the recognition of emotional expression. To explore the processes by which people recognize emotion from prosody, US and Indian participants were asked to judge the emotion categories or affective features communicated by 2,519 speech samples produced by 100 actors from 5 cultures. With large-scale statistical inference methods, we find that prosody can communicate at least 12 distinct kinds of emotion that are preserved across the 2 cultures. Analyses of the semantic and acoustic structure of the recognition of emotions reveal that emotion categories drive the recognition of emotions more so than affective features, including Valence. In contrast to discrete emotion theories, however, emotion categories are bridged by gradients representing blends of emotions. Our findings, visualized within an interactive map, reveal a complex, high-dimensional space of emotional states recognized cross-culturally in speech prosody. Whether emotions are universal across cultures is a central question in psychological research. In this study, Cowen et al. show that speech prosody can communicate at least 12 emotions that are recognized across two different cultures.

Journal ArticleDOI
TL;DR: This article examined the effect of review valence, emotional expression, and language complexity on perceived poster, website, and firm trustworthiness and subsequent behavioral intentions, using a mixed-method approach that combines the qualitative critical incident technique (CIT) with a quantitative experimental design.
Abstract: This paper aims to examine the underlying motivations, attitudes and behaviors of exaggerated review posters and readers by examining the effect of review valence, emotional expression and language complexity on perceived poster, website and firm trustworthiness and subsequent behavioral intentions. This research uses a mixed-method approach combining the qualitative critical incident technique (CIT) with a quantitative experimental design. Study 1 uses CIT to examine exaggerated online reviews from the poster perspective, while Study 2 uses CIT to examine readers' perceptions of exaggerated reviews. Study 3 conducts a between-subjects experimental design examining the impact of valence (positive vs negative) × emotion (low vs high) × language (vague vs detailed) on trustworthiness and behavioral intention. Results of the two qualitative studies (Studies 1 and 2) find that posters and readers use language complexity and emotions in exaggerated reviews. The results from the quantitative experimental design study (Study 3) find that language style and emotions influence customer perceptions of poster, website and firm trustworthiness, which also mediate the relationship between the qualitative aspects of review text and behavioral intentions. The findings provide multiple practical implications on the prevalence of exaggerated online reviews and the importance of language and emotion in determining customer perceptions and behavioral intentions. By focusing on both readers and posters in exaggerated eWOM, and on specific motivations, emotions and language, this research contributes to the literature on online reviews, customer misbehavior, trustworthiness, language use and value co-destruction in online environments.

Journal ArticleDOI
TL;DR: In this article, two case studies investigating Chinese university EFL students' emotional reactions to teacher written corrective feedback (WCF) were conducted, and they found that while both students reported being emotionally undisturbed by WCF, they in fact experienced different discrete emotions with varying object foci, valence, and activation.

Journal ArticleDOI
TL;DR: Recent anatomo-clinical and activation studies that have investigated emotional and behavioral disorders of patients with asymmetrical forms of fronto-temporal degeneration and laterality effects in specific brain structures playing a critical role in different components of emotions support the hypothesis of a right hemisphere dominance for all components of the emotional system.
Abstract: Different models of emotional lateralization, advanced since the first clinical observations raised this issue, will be reviewed following their historical progression. The clinical investigations that have suggested a general dominance of the right hemisphere for all kinds of emotions and the experimental studies that have proposed a different hemispheric specialization for positive vs. negative emotions (valence hypothesis) or for approach vs. withdrawal tendencies (motivational hypothesis) will be reviewed first and extensively. This historical review will be followed by a short discussion of recent anatomo-clinical and activation studies that have investigated (a) emotional and behavioral disorders of patients with asymmetrical forms of fronto-temporal degeneration and (b) laterality effects in specific brain structures (amygdala, ventro-medial prefrontal cortex, and anterior insula) playing a critical role in different components of emotions. Overall, these studies support the hypothesis of a right hemisphere dominance for all components of the emotional system.

Journal ArticleDOI
TL;DR: This model argues that basic emotions are not incompatible with dimensional accounts of emotion (core affects), and proposes that each basic emotion is located on an axis of the dimensional space of emotion and represents one typical core affect (arousal or valence).
Abstract: How emotions are represented in the nervous system is a crucial unsolved problem in affective neuroscience. Many studies have striven to find the localization of basic emotions in the brain but have failed. Thus, many psychologists doubt that there are specific neural loci for basic emotions; some have proposed instead that there are specific neural structures for the core affects, such as arousal and hedonic value. The reason for this widespread disagreement might be that the basic emotions used previously can be further divided into more "basic" emotions. Here we review brain imaging data and neuropsychological data, and try to address this question with an integrative model. In this model, we argue that basic emotions are not incompatible with the dimensional studies of emotions (core affects). We propose that each basic emotion is located on an axis of the dimensional space of emotion and represents one typical core affect (arousal or valence). We therefore propose four basic emotions: joy, on the positive axis of the hedonic dimension; sadness, on the negative axis of the hedonic dimension; and fear and anger, at the top of the vertical dimension. This new model, which bridges basic emotions and the construction model of emotions, promises to improve and reformulate neurobiological models of basic emotions.

Journal ArticleDOI
11 Apr 2019-Sensors
TL;DR: A comparative study between various machine and deep learning techniques, with and without feature selection, for recognizing and classifying fear levels based on the electroencephalogram (EEG) and peripheral data from the DEAP (Database for Emotion Analysis using Physiological signals) database.
Abstract: There has been steady progress in the field of affective computing over the last two decades that has integrated artificial intelligence techniques in the construction of computational models of emotion. Having, as a purpose, the development of a system for treating phobias that would automatically determine fear levels and adapt exposure intensity based on the user’s current affective state, we propose a comparative study between various machine and deep learning techniques (four deep neural network models, a stochastic configuration network, Support Vector Machine, Linear Discriminant Analysis, Random Forest and k-Nearest Neighbors), with and without feature selection, for recognizing and classifying fear levels based on the electroencephalogram (EEG) and peripheral data from the DEAP (Database for Emotion Analysis using Physiological signals) database. Fear was considered an emotion eliciting low valence, high arousal and low dominance. By dividing the ratings of the valence/arousal/dominance emotion dimensions, we propose two paradigms for fear level estimation: the two-level (0 = no fear, 1 = fear) and the four-level (0 = no fear, 1 = low fear, 2 = medium fear, 3 = high fear) paradigms. Although all the methods provide good classification accuracies, the highest F scores were obtained using the Random Forest classifier: 89.96% and 85.33% for the two-level and four-level fear evaluation modalities, respectively.
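The two labeling paradigms described above can be sketched as a small function. The rule below is hypothetical: DEAP self-ratings run from 1 to 9, but the abstract does not give the paper's exact split of the valence/arousal/dominance ratings into fear levels, so the midpoint threshold and the "one level per condition met" scheme are assumptions for illustration.

```python
def fear_level_4(valence, arousal, dominance, mid=5.0):
    """Four-level paradigm (0 = no fear, 1 = low, 2 = medium, 3 = high).

    Fear is treated as low valence + high arousal + low dominance
    (as in the study); each condition that holds raises the level by
    one (a hypothetical rule, not the paper's exact one).
    """
    return sum([valence < mid, arousal > mid, dominance < mid])

def fear_level_2(valence, arousal, dominance, mid=5.0):
    """Two-level paradigm: 1 (fear) only when all three conditions hold."""
    return int(fear_level_4(valence, arousal, dominance, mid) == 3)
```

With labels derived this way, any classifier from the study's comparison (e.g., a Random Forest over EEG and peripheral features) could be trained under either paradigm.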

Journal ArticleDOI
TL;DR: Current progress in identifying the valence coding properties of neural populations in different nuclei of the amygdala, based on their activity, connectivity, and gene expression profiles, is examined.
Abstract: The neural mechanisms underlying emotional valence are at the interface between perception and action, integrating inputs from the external environment with past experiences to guide the behavior of an organism. Depending on the positive or negative valence assigned to an environmental stimulus, the organism will approach or avoid the source of the stimulus. Multiple convergent studies have demonstrated that the amygdala complex is a critical node of the circuits assigning valence. Here we examine the current progress in identifying valence coding properties of neural populations in different nuclei of the amygdala, based on their activity, connectivity, and gene expression profile.

Journal ArticleDOI
TL;DR: Light is shed on the role of emotions in affecting user judgments, and the relationship between various aesthetic subdimensions (classical and expressive) and emotional subcomponents (valence and arousal) is explored.

Journal ArticleDOI
TL;DR: This study clearly supports the model of a general dominance of the right hemisphere for all emotions, regardless of affective valence, in patients with frontotemporal lobar degeneration.
Abstract: Background: Two main models have been advanced to explain the asymmetries observed in the representation and processing of emotions. The first model, labeled "the right hemisphere hypothesis," assumes a general dominance of the right hemisphere for all emotions, regardless of affective valence. The second model, named "the valence hypothesis," assumes an opposite dominance of the left hemisphere for positive emotions and the right hemisphere for negative emotions. Patients with frontotemporal lobar degeneration (FTLD) could contribute to clarifying this issue, because disorders of emotional and social behavior are very common in FTLD and because atrophy, which affects the antero-ventral part of the frontal and temporal lobes, can be clearly asymmetric in the early stages of this disease. Objective: The main scope of the present review is therefore to evaluate whether the results of investigations conducted on the emotional and behavioral disorders of patients with right and left FTLD support the "right hemisphere" or the "valence" hypothesis. Method: A thorough review of behavioral and emotional disorders in FTLD patients found 177 possible studies, but only 32 papers met the required criteria for inclusion in our review. Results: Almost all (25 out of 26) of the studies relevant to the "right hemisphere hypothesis" supported the assumption of a general dominance of the right hemisphere for emotional functions, whereas only one of the six investigations relevant to the "valence hypothesis" was in part consistent with that hypothesis, and these findings are also open to interpretation in terms of the "right hemisphere" hypothesis. Conclusions: This study, therefore, clearly supports the model of a general dominance of the right hemisphere for all emotions, regardless of affective valence.

Journal ArticleDOI
01 Mar 2019
TL;DR: It is proposed that arousal-related locus coeruleus-norepinephrine (LC-NE) system activation promotes the prioritization of the most salient features of an emotional experience in memory, which may drive lower-level sensory cortical activity and a stronger sense of recollection for arousing events.
Abstract: We tend to re-live emotional experiences more richly in memory than more mundane experiences. According to one recent neurocognitive model of emotional memory, negative events may be encoded with a larger amount of sensory information than neutral and positive events. As a result, there may be more perceptual information available to reconstruct these events at retrieval, leading to memory reinstatement patterns that correspond with greater memory vividness and sense of recollection for negative events. In this commentary, we offer an alternative perspective on how emotion may influence such sensory cortex reinstatement that focuses on engagement of the noradrenergic (NE) and dopaminergic (DA) systems rather than valence. Specifically, we propose that arousal-related locus coeruleus-norepinephrine (LC-NE) system activation promotes the prioritization of the most salient features of an emotional experience in memory. Thus, a select few details may drive lower-level sensory cortical activity and a stronger sense of recollection for arousing events. By contrast, states of high behavioral activation, including novelty-seeking and exploration, may recruit the DA system to broaden the scope of cognitive processing and integrate multiple event aspects in memory. These more integrated memory representations may be reflected in higher-order cortical reinstatement at retrieval. Thus, the balance between activation in these neuromodulatory systems at encoding, rather than the valence of the event, may ultimately determine the quality of emotional memory recollection and neural reinstatement.

Journal ArticleDOI
TL;DR: It is argued that neutral affect is a felt experience that provides important valence-relevant information, which influences cognition and behavior; the authors aim to provide novel theoretical and methodological perspectives that help advance the understanding of the affective landscape.
Abstract: Researchers interested in affect have often questioned the existence of neutral affective states. In this paper, we review and challenge three beliefs that researchers might hold about neutral affect. These beliefs are: (1) it is not possible to feel neutral because people are always feeling something, (2) neutrality is not an affective state because affect must be positively or negatively valenced, and (3) neutral affect is unimportant because it does not influence cognition or behavior. We review the reasons these beliefs might exist and provide empirical evidence that questions them. Specifically, we argue that neutral affect is a felt experience that provides important valence-relevant information, which influences cognition and behavior. By dispelling these beliefs about neutral affect, we hope to shine a light on the assumptions that researchers hold about the nature of affect and to provide novel theoretical and methodological perspectives that help advance our understanding of the affective landscape.

Journal ArticleDOI
TL;DR: In this paper, the influence of affective commitment, high-sacrifice commitment, and satisfaction on the customers' word-of-mouth concerning an online retailer was evaluated using structural equation modeling techniques.
Abstract: The goal of this research was to build a model that evaluates the influence of affective commitment, high-sacrifice commitment, and satisfaction on the customers’ word-of-mouth concerning an online retailer. Two word-of-mouth dimensions were considered: volume and valence. A survey was administered to 282 respondents and structural equation modeling techniques were used to process the data and test the hypotheses. Our findings show that satisfaction and high-sacrifice commitment have an important impact on both word-of-mouth volume and valence, while affective commitment only influences word-of-mouth valence. This paper offers detailed explanations of these results in light of other theories and studies in the field.

Journal ArticleDOI
TL;DR: It is suggested that naturalistic worry reduces the likelihood of a sharp increase in negative affect and does so by increasing and sustaining anxious activation, findings that support the prospective ecological validity of CAM.
Abstract: The contrast avoidance model (CAM) suggests that worry increases and sustains negative emotion to prevent a negative emotional contrast (sharp upward shift in negative emotion) and increase the probability of a positive contrast (shift toward positive emotion). In Study 1, we experimentally validated momentary assessment items (N = 25). In Study 2, participants with generalized anxiety disorder (N = 31) and controls (N = 37) were prompted once per hour regarding their worry, thought valence, and arousal 10 times a day for 8 days. Higher worry duration, negative thought valence, and uncontrollable train of thoughts predicted feeling more keyed up concurrently and sustained anxious activation 1 hr later. More worry, feeling keyed up, and uncontrollable train of thoughts predicted lower likelihood of a negative emotional contrast in thought valence and higher likelihood of a positive emotional contrast in thought valence 1 hr later. Findings support the prospective ecological validity of CAM. Our findings suggest that naturalistic worry reduces the likelihood of a sharp increase in negative affect and does so by increasing and sustaining anxious activation.

Journal ArticleDOI
TL;DR: It is concluded that spontaneous physiological and behavioral responses typically reflect arousal, whereas learned responses can be valuable when investigating valence, and that the assessment of affective states can be furthered using mood assessments.

Posted Content
TL;DR: This work presents a new data-driven model and algorithm to identify perceived emotions of individuals based on their walking styles and presents an EWalk (Emotion Walk) dataset that consists of videos of walking individuals with gaits and labeled emotions.
Abstract: We present a new data-driven model and algorithm to identify the perceived emotions of individuals based on their walking styles. Given an RGB video of an individual walking, we extract his/her walking gait in the form of a series of 3D poses. Our goal is to exploit the gait features to classify the emotional state of the human into one of four emotions: happy, sad, angry, or neutral. Our perceived emotion recognition approach uses deep features learned via LSTM on labeled emotion datasets. Furthermore, we combine these features with affective features computed from gaits using posture and movement cues. These features are classified using a Random Forest Classifier. We show that our mapping between the combined feature space and the perceived emotional state provides 80.07% accuracy in identifying the perceived emotions. In addition to classifying discrete categories of emotions, our algorithm also predicts the values of perceived valence and arousal from gaits. We also present an EWalk (Emotion Walk) dataset that consists of videos of walking individuals with gaits and labeled emotions. To the best of our knowledge, this is the first gait-based model to identify perceived emotions from videos of walking individuals.
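The pipeline above combines deep gait features with handcrafted affective features before classification. The following is a minimal sketch of that two-stream idea using synthetic data; the feature dimensions, the stand-in arrays, and the label encoding are assumptions for illustration, not the paper's actual LSTM embeddings or posture/movement cues.

```python
# Hypothetical sketch: concatenate deep gait features with affective
# (posture/movement) features and classify with a Random Forest, in the
# spirit of the approach described above. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples = 200
deep_features = rng.normal(size=(n_samples, 32))      # stand-in for LSTM gait embeddings
affective_features = rng.normal(size=(n_samples, 8))  # stand-in for posture/movement cues
X = np.hstack([deep_features, affective_features])    # combined feature space
y = rng.integers(0, 4, size=n_samples)                # 0=happy, 1=sad, 2=angry, 3=neutral

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict(X)
```

With real data, the deep features would come from an LSTM trained on labeled 3D pose sequences, and the Random Forest would be evaluated on held-out walkers rather than the training set.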

Journal ArticleDOI
TL;DR: Differences in facial expression in emotionally ambiguous contexts may be used to help infer emotional states of different valence within a single controlled experimental setting.
Abstract: Facial expressions are considered sensitive indicators of emotional states in humans and many animals. Identifying facial indicators of emotion is a major challenge and little systematic research has been done in non-primate species. In dogs, such research is important not only to address fundamental and applied scientific questions but also for practical reasons, since many problem behaviours are assumed to have an emotional basis, e.g. aggression based on frustration. Frustration responses can occur in superficially similar contexts as the emotional state of positive anticipation. For instance, the anticipated delivery of a food reward may induce the state of positive anticipation, but over time, if the food is not delivered, this will be replaced by frustration. We examined dogs’ facial expressions in contexts presumed to induce positive anticipation and frustration, respectively, within a single controlled experimental setting. Using DogFACS, an anatomically-based method for coding facial expressions of dogs, we found that the “Ears adductor” action was more common in the positive condition and “Blink”, “Lips part”, “Jaw drop”, “Nose lick”, and “Ears flattener” were more common in the negative condition. This study demonstrates how differences in facial expression in emotionally ambiguous contexts may be used to help infer emotional states of different valence.

Journal ArticleDOI
TL;DR: The evolution of specific emotions for Twitter users is tracked by analysing the emotional content of their tweets before and after they explicitly report experiencing a positive or negative emotion, confirming previous affect labeling studies showing that putting one’s feelings into words can alleviate their intensity.
Abstract: Putting one’s feelings into words (also called affect labeling) can attenuate positive and negative emotions. Here, we track the evolution of specific emotions for 74,487 Twitter users by analysing the emotional content of their tweets before and after they explicitly report experiencing a positive or negative emotion. Our results describe the evolution of emotions and their expression at the temporal resolution of one minute. The expression of positive emotions is preceded by a short, steep increase in positive valence and followed by a short decay to normal levels. Negative emotions, however, build up more slowly and are followed by a sharp reversal to previous levels, consistent with previous studies demonstrating the attenuating effects of affect labeling. We estimate that positive and negative emotions last approximately 1.25 and 1.5 h, respectively, from onset to evanescence. A separate analysis for male and female individuals suggests the potential for gender-specific differences in emotional dynamics. Bollen et al. tracked changes in the emotions of Twitter users before and after they expressed a feeling online. Emotions grow quickly before—and decrease rapidly after—their expression, confirming previous affect labeling studies showing that putting one’s feelings into words can alleviate their intensity.
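The minute-resolution analysis described above amounts to averaging a valence signal in one-minute bins relative to each self-report (t = 0). Below is a hedged sketch of that binning step on synthetic data; the valence scores, offsets, and the decay shape are assumptions for illustration, not the study's lexicon-based measurements.

```python
# Hypothetical sketch: compute a mean-valence profile in one-minute bins
# around the moment a user explicitly reports an emotion (t = 0).
import numpy as np

rng = np.random.default_rng(2)
minutes = rng.integers(-120, 121, size=5000)  # tweet offsets from the report, in minutes
# synthetic valence with a peak near the report time
valence = rng.normal(size=5000) + np.exp(-np.abs(minutes) / 30)

bins = np.arange(-120, 122)          # one bin edge per minute
idx = np.digitize(minutes, bins) - 1 # bin index for each tweet
profile = np.array([valence[idx == i].mean() for i in range(len(bins) - 1)])
# profile[120] is the mean valence in the minute of the report itself
```

On real data, separate profiles for positive and negative reports would reveal the asymmetric build-up and decay times the study estimates.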

Journal ArticleDOI
20 Dec 2019-Symmetry
TL;DR: The approach to emotion classification has future applicability in the field of affective computing, which includes all the methods used for the automatic assessment of emotions and their applications in healthcare, education, marketing, website personalization, recommender systems, video games, and social media.
Abstract: Emotions constitute an indispensable component of our everyday life. They consist of conscious mental reactions towards objects or situations and are associated with various physiological, behavioral, and cognitive changes. In this paper, we propose a comparative analysis between different machine learning and deep learning techniques, with and without feature selection, for binarily classifying the six basic emotions, namely anger, disgust, fear, joy, sadness, and surprise, into two symmetrical categorical classes (emotion and no emotion), using the physiological recordings and subjective ratings of valence, arousal, and dominance from the DEAP (Dataset for Emotion Analysis using EEG, Physiological and Video Signals) database. The results showed that the maximum classification accuracies for each emotion were: anger: 98.02%, joy: 100%, surprise: 96%, disgust: 95%, fear: 90.75%, and sadness: 90.08%. In the case of four emotions (anger, disgust, fear, and sadness), the classification accuracies were higher without feature selection. Our approach to emotion classification has future applicability in the field of affective computing, which includes all the methods used for the automatic assessment of emotions and their applications in healthcare, education, marketing, website personalization, recommender systems, video games, and social media.
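The with/without-feature-selection comparison above can be sketched as two cross-validated pipelines, one with a univariate filter and one without. The data, feature count, and labels below are synthetic stand-ins; the DEAP physiological channels and the paper's specific classifiers are not reproduced.

```python
# Hypothetical sketch: compare binary (emotion vs. no-emotion) classification
# accuracy with and without feature selection, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 40))           # stand-in for physiological features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic "emotion present" labels

full = RandomForestClassifier(random_state=0)
selected = make_pipeline(SelectKBest(f_classif, k=10),
                         RandomForestClassifier(random_state=0))

acc_full = cross_val_score(full, X, y, cv=5).mean()
acc_sel = cross_val_score(selected, X, y, cv=5).mean()
```

Keeping the selector inside the pipeline ensures features are chosen only from each training fold, avoiding the leakage that would inflate the "with selection" accuracy.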

Journal ArticleDOI
TL;DR: In this article, the effect of emotional content within online reviews on product attitudes varies across consumers with different self-construal levels (i.e., independent or interdependent selfconstruality) perceive emotions in different ways.

Journal ArticleDOI
01 Jun 2019-Emotion
TL;DR: Interestingly, enhanced association-memory is observed in pairs composed of two positive words, but not in pairings of one positive and one neutral word, indicating that this enhancement may occur only when a sufficient amount of positive emotion is present.
Abstract: The influence of emotion on association-memory is often attributed to arousal, but negative stimuli are typically used to test for these effects. While prior studies of negative emotion on association-memory have found impairments, theories suggest that positive emotion may have a distinct effect on memory, and may lead to enhanced association-memory. Here we tested participants' memory for pairs of positive and neutral words using cued recall, supplemented with a mathematical modeling approach designed to disentangle item- versus association-memory effects that may otherwise confound cued-recall performance. In our main experiment, as well as in additional supplemental experiments, we consistently found enhanced association-memory due to positive emotion. Interestingly, we observed enhanced association-memory in pairs composed of two positive words, but not in pairings of one positive and one neutral word, indicating that this enhancement may occur only when a sufficient amount of positive emotion is present. These results provide further evidence that positive information is processed differently than negative and that, when examining association formation, valence as well as arousal must be considered. (PsycINFO Database Record (c) 2019 APA, all rights reserved).