Journal ArticleDOI

Improved Danmaku Emotion Analysis and Its Application Based on Bi-LSTM Model

10 Jun 2020-IEEE Access (IEEE)-Vol. 8, pp 114123-114134
TL;DR: An improved emotion analysis model based on Bi-LSTM is proposed to classify danmaku into the four emotion dimensions of Pleasure, Anger, Sorrow, and Joy, and tags such as comment time and user name are added to the danmaku information.
Abstract: With the rapid development of social media, danmaku video provides a platform for users to communicate online. To some extent, danmaku video provides emotional timing information and an innovative way to analyze video data. In the age of big data, studying the characteristics of danmaku and its emotional tendencies can not only help us understand the psychological characteristics of users but also feed effective user information back to video platforms, helping the platforms optimize related short video recommendations and providing a more accurate basis for audience selection during video production. However, danmaku differs from traditional comments. Current emotion classification methods support only two-dimensional classification, which is not suitable for danmaku emotion analysis. Aiming at problems such as the colloquialism, diversity, spelling errors, structural non-linearity, and informal Internet language of danmaku data, the diversity of social topics, and the context dependency of emotion analysis, this paper proposes an improved emotion analysis model based on the Bi-LSTM model to classify the four-dimensional emotions of Pleasure, Anger, Sorrow, and Joy. Furthermore, we add tags such as comment time and user name to the danmaku information. Experimental results show that the improved model achieves higher Accuracy, Recall, Precision, and F1-Score under the same conditions than CNN and SVM baselines, with a classification effect close to the SOTA. Experimental results also show that the improved model can be effectively applied to the analysis of irregular danmaku emotion.
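The paper's exact architecture is not reproduced here; as a minimal self-contained sketch of the core idea it describes, a bidirectional LSTM whose concatenated final states feed a four-way softmax over Pleasure, Anger, Sorrow, and Joy, with toy dimensions and random weights standing in for trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(xs, W, U, b, hidden):
    # Single-direction LSTM over a sequence of input vectors.
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = W @ x + U @ h + b               # all four gate pre-activations at once
        i = sigmoid(z[0 * hidden:1 * hidden])   # input gate
        f = sigmoid(z[1 * hidden:2 * hidden])   # forget gate
        o = sigmoid(z[2 * hidden:3 * hidden])   # output gate
        g = np.tanh(z[3 * hidden:4 * hidden])   # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

def bilstm_classify(xs, params, hidden):
    # Run the sequence forward and reversed, concatenate the final hidden
    # states, then apply a linear layer and softmax over the four emotions.
    Wf, Uf, bf, Wb, Ub, bb, Wout, bout = params
    h_fwd = lstm_pass(xs, Wf, Uf, bf, hidden)
    h_bwd = lstm_pass(xs[::-1], Wb, Ub, bb, hidden)
    logits = Wout @ np.concatenate([h_fwd, h_bwd]) + bout
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
emb, hidden = 8, 16   # toy embedding and hidden sizes (assumptions, not from the paper)
params = (
    rng.standard_normal((4 * hidden, emb)) * 0.1,
    rng.standard_normal((4 * hidden, hidden)) * 0.1,
    np.zeros(4 * hidden),
    rng.standard_normal((4 * hidden, emb)) * 0.1,
    rng.standard_normal((4 * hidden, hidden)) * 0.1,
    np.zeros(4 * hidden),
    rng.standard_normal((4, 2 * hidden)) * 0.1,
    np.zeros(4),
)
xs = [rng.standard_normal(emb) for _ in range(5)]  # stand-in for danmaku word embeddings
probs = bilstm_classify(xs, params, hidden)        # probabilities over the four emotions
print(probs.shape)
```

In practice the weights would be learned by minimizing cross-entropy on labeled danmaku, and the input vectors would come from a trained embedding layer rather than random noise.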


Citations
Proceedings ArticleDOI
09 May 2018
TL;DR: USee provides users a free and safe social place in which they express high and unique enthusiasm and engagement; location sensitivity is a feature designed to lower the cost for users to participate in offline activities.
Abstract: This paper presents a notion of a location-sensitive online-offline hybrid social system. What makes it distinct from other mobile social media is that it supports not simply online interaction but also online-offline hybrid interaction, and location sensitivity is a feature designed to lower the cost for users to participate in offline activities. We designed and implemented a prototype mobile application called USee, and danmaku, an emerging socio-digital paradigm, was employed as the main user interface. We introduce the design and implementation of USee and conduct a preliminary study to understand how it works in reality. Findings of the preliminary study suggest promise for the notion of a location-sensitive online-offline hybrid social system. We highlight that USee provides users a free and safe social place, in which users express high and unique enthusiasm and engagement. This paper contributes meaningful insights and design implications for mobile social media.

2 citations

Journal ArticleDOI
TL;DR: A comprehensive overview of the AVRS domain is provided, including various traditional recommendation algorithms and advanced deep learning-based algorithms, the commonly used affective video recommendation databases, audience response categories, and evaluation methods, and the potential future research directions.
Abstract: Traditional video recommendation provides viewers with customized media content according to their historical records (e.g., ratings, reviews). However, such systems tend to produce poor results when data is insufficient, which leads to a cold-start problem. An affective video recommender system (AVRS) is a multidiscipline and multimodal human-robot interaction (HRI) system; it incorporates physical, physiological, neuroscience, and computer science subjects and multimedia resources, including text, audio, and video. As a promising research domain, AVRS employs advanced affective analysis technologies on video resources and can therefore solve the cold-start problem. In AVRS, viewers' emotional responses can be obtained with various techniques, including physical signals (e.g., facial expression, gestures, and speech) and internal signals (e.g., physiological signals). Changes in these signals can be detected when viewers face specific situations. The physiological signals reflect the central and autonomic nervous systems and are mostly involuntarily activated and cannot easily be controlled; they are therefore suitable for reliable emotion analysis. Physical signals can be recorded by a webcam or recorder. In contrast, physiological signals can be collected by various equipment, e.g., heart rate (HR) signals calculated from an electrocardiogram (ECG), electro-dermal activity (EDA), brain activity from electroencephalography (EEG) signals, skin conductance response (SCR) from a galvanic skin response (GSR) sensor, and photoplethysmography (PPG) estimating users' pulse. This survey aims to provide a comprehensive overview of the AVRS domain. To analyze recent efforts in the field of affective video recommendation, we collected 92 relevant published articles from Google Scholar and summarized the articles and their key findings.
In this survey, we feature these articles concerning AVRS from different perspectives, including various traditional recommendation algorithms and advanced deep learning-based algorithms, the commonly used affective video recommendation databases, audience response categories, and evaluation methods. Finally, we conclude the challenge of AVRS and provide the potential future research directions.

2 citations

Proceedings ArticleDOI
28 May 2021
TL;DR: In this article, an enhanced ERNIE Deep Recurrent Neural Networks model (EE-RNN) is employed to complete the five-dimensional live comments emotional analysis, which first obtains the general semantic embedding of the text through ERNs and introduces external emotional knowledge to further enhance the semantic coding representation and then uses improved RNN structure as well as attention mechanism to get an emotional enhanced high-level semantic feature representation.
Abstract: Live comments, also known as video Danmaku, are a medium through which audiences can express their real-time feelings and opinions with rich emotional information. Emotional analysis of live comments can more truly reflect the overall characteristics of a video, while users' feedback can be further exploited by extensive applications. Most existing live comments emotion classification methods do not fully consider either the real fine granularity or the explicit emotional knowledge of the on-screen comments text. Besides, existing machine learning methods and deep learning methods, such as models based on Long Short-Term Memory networks and Convolutional Neural Networks, do not make full use of the semantic-layer representation and emotional features of the text. In this paper, an Enhanced ERNIE Deep Recurrent Neural Networks model (EE-RNN) is employed to perform five-dimensional live comments emotional analysis. The model first obtains the general semantic embedding of the text through ERNIE and introduces external emotional knowledge to further enhance the semantic coding representation, and then uses an improved RNN structure together with an attention mechanism to obtain an emotion-enhanced high-level semantic feature representation. Experimental results on the live comments emotional classification dataset and the NLPCC2014 emotional classification dataset show that the proposed model greatly improves classification performance compared with existing methods and can be used in real applications.
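The EE-RNN internals are in the cited paper; as a rough, self-contained illustration of the attention-pooling step such models use, scoring each RNN hidden state and taking a softmax-weighted sum, with toy dimensions and a random scoring vector as stand-ins:

```python
import numpy as np

def attention_pool(H, w):
    # H: (T, d) hidden states from an RNN; w: (d,) learned scoring vector.
    scores = H @ w                      # one relevance score per time step
    a = np.exp(scores - scores.max())
    a = a / a.sum()                     # softmax attention weights over time steps
    return a @ H, a                     # weighted sum of hidden states, plus the weights

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 4))         # 6 time steps, hidden size 4 (toy assumptions)
context, weights = attention_pool(H, rng.standard_normal(4))
print(context.shape)                    # the pooled vector then feeds the classifier head
```

The pooled context vector replaces simple last-state or mean pooling, letting the classifier weight emotionally salient tokens more heavily.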

1 citation

Journal ArticleDOI
TL;DR: Wang et al. investigated how barrage system fluctuation characteristics, namely barrage fluctuation amplitude and frequency, impact user interaction, and provided important theoretical contributions and managerial implications regarding user interaction on video sharing platforms.
Abstract: Purpose: In the present world of constant connectivity, the barrage system, as a system of real-time dynamic comments coupled with video content, has become a popular interactive technology for video sharing platforms. This study investigates how barrage system fluctuation characteristics, namely barrage fluctuation amplitude and frequency, impact user interaction. Design/methodology/approach: The research model was estimated with a fixed-effects regression applied to a longitudinal panel dataset collected from one of the most popular video sharing platforms in China (Bilibili.com). Findings: Barrage fluctuation frequency has positive effects on users' real-time (synchronous) barrage interaction and on traditional (asynchronous) comment interaction. Barrage fluctuation amplitude has a positive effect on users' real-time (synchronous) barrage interaction but a negative effect on traditional (asynchronous) comment interaction. In addition, the interaction effects of barrage fluctuation frequency and barrage fluctuation amplitude on user interaction show adverse effects. Originality/value: The results reveal the impact of different barrage fluctuation characteristics on different forms of interaction and provide important theoretical contributions and managerial implications for user interaction on video sharing platforms.

1 citation

Proceedings ArticleDOI
01 Mar 2021
TL;DR: This article used the Albert model to build a classifier, and used the "movie review dataset" issued by Stanford University for network training, achieving an accuracy of 89.05% when performing sentiment analysis of movie reviews.
Abstract: Movie reviews contain the public's real evaluation of a movie. Through these reviews, audiences can better judge whether a movie is worth watching. However, as the amount of movie review data continues to grow, manually analyzing the emotional tendency of each review takes considerable manpower and material resources. As an important research field of machine learning, sentiment analysis focuses on extracting topic information from text reviews. The field is closely related to natural language processing and text mining, and it can be used to determine a reviewer's attitude towards various topics or the overall polarity of a review. For movie reviews in particular, beyond numerical scores, sentiment analysis can quantitatively reveal a movie's perceived strengths and weaknesses. This article uses the Albert model to build a classifier and uses the "movie review dataset" issued by Stanford University for network training. Experiments show that the trained Albert model reaches an accuracy of 89.05% on sentiment analysis of movie reviews. Compared with traditional LSTM and GRU models, the accuracy of the Albert model is improved by 3%.

1 citation

References
Journal ArticleDOI
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.

72,897 citations


"Improved Danmaku Emotion Analysis a..." refers background in this paper

  • ...The forgetting gate ft is obtained by the following formula [51]....

    [...]
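The excerpt above refers to a forget-gate formula. In the standard LSTM formulation (the citing paper's exact notation may differ), the forget gate at time step t is:

```latex
f_t = \sigma\!\left(W_f x_t + U_f h_{t-1} + b_f\right)
```

where \sigma is the logistic sigmoid, x_t is the current input, h_{t-1} is the previous hidden state, and W_f, U_f, b_f are learned parameters; f_t scales how much of the previous cell state is retained.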

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviews deep supervised learning, unsupervised learning, reinforcement learning & evolutionary computation, and covers indirect search for short programs encoding deep and large networks.

14,635 citations


"Improved Danmaku Emotion Analysis a..." refers background in this paper

  • ...Perform processing such as mixing and repetition on the read data and return the processed data set [49]....

    [...]

Journal ArticleDOI
TL;DR: This work was supported in part by the Royal Society of the UK, the National Natural Science Foundation of China, and the Alexander von Humboldt Foundation of Germany.

2,404 citations


"Improved Danmaku Emotion Analysis a..." refers background in this paper

  • ...When recommending, long videos make it difficult for the audience to grasp the video theme and emotion, which is not conducive to recommendation to users [46]–[48]....

    [...]

Journal ArticleDOI
TL;DR: A rigorous survey on sentiment analysis is presented, which portrays views presented by over one hundred articles published in the last decade regarding necessary tasks, approaches, and applications of sentiment analysis.
Abstract: With the advent of Web 2.0, people became more eager to express and share their opinions on the web regarding day-to-day activities and global issues as well. The evolution of social media has also contributed immensely to these activities, providing a transparent platform to share views across the world. These electronic Word of Mouth (eWOM) statements expressed on the web are prevalent in the business and service industries, enabling customers to share their points of view. In the last one and a half decades, research communities, academia, the public, and service industries have been working rigorously on sentiment analysis, also known as opinion mining, to extract and analyze public mood and views. In this regard, this paper presents a rigorous survey on sentiment analysis, which portrays views presented by over one hundred articles published in the last decade regarding necessary tasks, approaches, and applications of sentiment analysis. Several sub-tasks need to be performed for sentiment analysis, which in turn can be accomplished using various approaches and techniques. This survey, covering published literature during 2002-2015, is organized on the basis of sub-tasks to be performed, machine learning and natural language processing techniques used, and applications of sentiment analysis. The paper also presents open issues, along with a summary table of a hundred and sixty-one articles.

1,011 citations


"Improved Danmaku Emotion Analysis a..." refers background in this paper

  • ...When there are a large number of danmaku in a certain period of time and the emotion of the danmaku content is strong enough, the video content in this period is regarded as being more wonderful than others [39]–[41]....

    [...]

Journal ArticleDOI
TL;DR: A state-of-the-art sentiment analysis system that detects (a) the sentiment of short informal textual messages such as tweets and SMS (message-level task) and (b) the Sentiment of a word or a phrase within a message (term- level task).
Abstract: We describe a state-of-the-art sentiment analysis system that detects (a) the sentiment of short informal textual messages such as tweets and SMS (message-level task) and (b) the sentiment of a word or a phrase within a message (term-level task). The system is based on a supervised statistical text classification approach leveraging a variety of surface-form, semantic, and sentiment features. The sentiment features are primarily derived from novel high-coverage tweet-specific sentiment lexicons. These lexicons are automatically generated from tweets with sentiment-word hashtags and from tweets with emoticons. To adequately capture the sentiment of words in negated contexts, a separate sentiment lexicon is generated for negated words. The system ranked first in the SemEval-2013 shared task 'Sentiment Analysis in Twitter' (Task 2), obtaining an F-score of 69.02 in the message-level task and 88.93 in the term-level task. Post-competition improvements boost the performance to an F-score of 70.45 (message-level task) and 89.50 (term-level task). The system also obtains state-of-the-art performance on two additional datasets: the SemEval-2013 SMS test set and a corpus of movie review excerpts. The ablation experiments demonstrate that the use of the automatically generated lexicons results in performance gains of up to 6.5 absolute percentage points.
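The actual system's lexicons are large and automatically generated; as a minimal sketch of the lexicon-plus-negation idea the abstract describes, using a tiny hypothetical lexicon (the words and scores below are illustrative, not from the real lexicons):

```python
# Toy lexicon scores (hypothetical values, not the actual tweet-derived lexicons).
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.8}
NEGATORS = {"not", "never", "no"}

def score(tokens):
    # Sum per-token lexicon scores; a negator flips the sign of the next
    # sentiment word, mimicking a separate lexicon for negated contexts.
    total, negate = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
            continue
        s = LEXICON.get(tok, 0.0)
        total += -s if negate else s
        negate = False
    return total

print(score("not bad at all".split()))   # negation flips "bad" to a positive score
print(score("a terrible movie".split()))
```

In the real system such lexicon scores are one feature family among many (surface-form, semantic) fed into a supervised classifier, rather than a scoring rule used on its own.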

799 citations


"Improved Danmaku Emotion Analysis a..." refers background in this paper

  • ...Calculate the loss function, construct evaluation indicators, train the network in training mode, and evaluate the specified indicators in test mode [50]....

    [...]