Author
S Maghilnan
Bio: S Maghilnan is an academic researcher from VIT University. The author has contributed to research in the topics of speaker recognition and pixel analysis. The author has an h-index of 2 and has co-authored 2 publications receiving 22 citations.
Papers
23 Jun 2017
TL;DR: This paper performs sentiment analysis on speaker-discriminated speech transcripts to detect the emotions of the individual speakers involved in a conversation, and compares different speaker-discrimination and sentiment-analysis techniques to find efficient algorithms for the task.
Abstract: Sentiment analysis has evolved over the past few decades; most of the work has revolved around textual sentiment analysis using text-mining techniques, while audio sentiment analysis is still at a nascent stage in the research community. In this proposed research, we perform sentiment analysis on speaker-discriminated speech transcripts to detect the emotions of the individual speakers involved in the conversation. We analyze different techniques for speaker discrimination and sentiment analysis to find efficient algorithms for this task.
27 citations
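The pipeline the paper describes, discriminating speakers first and then scoring each speaker's utterances, can be sketched in miniature. The lexicon, speaker labels, and scoring rule below are toy stand-ins for illustration, not the techniques the paper actually evaluates:

```python
# Toy sketch of the speaker-discriminated sentiment pipeline: segment a
# transcript by speaker (as a diarization step would), then accumulate a
# crude lexicon-based polarity score per speaker.

POSITIVE = {"great", "good", "happy", "love"}
NEGATIVE = {"bad", "terrible", "sad", "hate"}

def utterance_sentiment(text):
    """+1 for each positive word, -1 for each negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def speaker_sentiments(transcript):
    """transcript: list of (speaker_id, utterance) pairs, e.g. the output
    of a speaker-discrimination step. Returns total polarity per speaker."""
    totals = {}
    for speaker, utterance in transcript:
        totals[speaker] = totals.get(speaker, 0) + utterance_sentiment(utterance)
    return totals

transcript = [
    ("A", "I love this idea and it is great"),
    ("B", "I think it is a terrible plan"),
    ("A", "I am happy we agree"),
]
print(speaker_sentiments(transcript))  # {'A': 3, 'B': -1}
```

A real system would replace the lexicon lookup with a trained classifier and obtain the speaker labels from an audio diarization front end.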
01 Jun 2017
TL;DR: This work analyses PRNU estimation and enhances the PRNU content for more accurate identification of the source camera from the sensor pattern noise associated with digital images.
Abstract: Sensor pattern noise is associated with digital images due to imperfections in the image-sensor manufacturing process, which cause pixel-sensitivity variation in the imaging sensor. The distinct properties of these pattern noises make them unique to the image sensor; the noise therefore acts as a ‘fingerprint’ of that particular imaging sensor. The main contributor to sensor pattern noise is Photo Response Non-Uniformity (PRNU) noise. In this proposed work, we analyse the PRNU estimation and enhance the content of the PRNU for more accurate identification of the source camera. The PRNU extraction consists of three stages: filtering, estimation and enhancement. Each stage incorporates various techniques for the PRNU extraction. The experiments were conducted on natural images taken from different camera models. For our experiment, 300 images from 6 different camera models are used, and the source camera of a given image is identified by correlating the PRNU reference pattern with the noise residual obtained from the test image.
5 citations
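The filtering, estimation and correlation steps from the abstract can be sketched as follows. The mean filter here merely stands in for the wavelet denoiser commonly used in PRNU work, and the fingerprints and scenes are synthetic; this does not reproduce the paper's enhancement stage:

```python
import numpy as np

def denoise(img, k=3):
    """Crude k x k mean filter (a stand-in for the wavelet denoiser
    usually used in PRNU extraction)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    """Filtering stage: the residual left after denoising carries the PRNU."""
    return img - denoise(img)

def prnu_reference(images):
    """Estimation stage: average residuals of many images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    """Normalized correlation between a test residual and a reference pattern."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic demo: two cameras with different fingerprints, smooth scenes.
rng = np.random.default_rng(0)
K_a = rng.normal(0.0, 1.0, (32, 32))  # fingerprint of camera A (synthetic)
K_b = rng.normal(0.0, 1.0, (32, 32))  # fingerprint of camera B (synthetic)
base = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
scenes = [base * s for s in (50.0, 80.0, 120.0)]

ref_a = prnu_reference([sc + K_a for sc in scenes])
match = correlation(ref_a, noise_residual(scenes[0] + K_a))
mismatch = correlation(ref_a, noise_residual(scenes[0] + K_b))
print(match > mismatch)
```

The test image from camera A correlates with A's reference pattern far more strongly than an image carrying a different fingerprint, which is the decision rule the paper's identification step relies on.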
Cited by
TL;DR: This work presents a survey of recent developments in analyzing multimodal sentiments (involving text, audio, and video/image) in human–machine interaction, and the challenges involved in analyzing them.
Abstract: The analysis of sentiments is essential in identifying and classifying opinions regarding a source material, that is, a product or service. The analysis of these sentiments finds a variety of applications, such as product reviews, opinion polls, movie reviews on YouTube, news video analysis, and health-care applications including stress and depression analysis. The traditional, text-based approach to sentiment analysis involves the collection of large textual data and different algorithms to extract the sentiment information from it. Multimodal sentiment analysis, by contrast, provides methods to carry out opinion analysis based on the combination of video, audio, and text, which goes well beyond conventional text-based sentiment analysis in understanding human behaviors. The remarkable increase in the use of social media provides a large collection of multimodal data that reflects users' sentiments on certain aspects. This multimodal sentiment analysis approach helps in classifying the polarity (positive, negative, or neutral) of individual sentiments. Our work aims to present a survey of recent developments in analyzing multimodal sentiments (involving text, audio, and video/image) in human–machine interaction, and the challenges involved in analyzing them. A detailed survey of sentiment datasets, feature-extraction algorithms, data-fusion methods, and the efficiency of different classification techniques is presented in this work.
47 citations
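One of the data-fusion methods such surveys cover is decision-level ("late") fusion, where each modality produces its own polarity distribution and a weighted average picks the final label. The weights and scores below are illustrative values, not taken from the survey:

```python
# Decision-level ("late") fusion: each modality emits its own polarity
# distribution; a weighted average of the distributions picks the label.

def late_fusion(scores_by_modality, weights):
    labels = ("negative", "neutral", "positive")
    fused = [0.0, 0.0, 0.0]
    for modality, scores in scores_by_modality.items():
        for i, s in enumerate(scores):
            fused[i] += weights[modality] * s
    total = sum(fused)
    fused = [f / total for f in fused]          # renormalize
    return labels[fused.index(max(fused))], fused

# Illustrative per-modality distributions: P(neg), P(neu), P(pos).
scores = {
    "text":  [0.1, 0.2, 0.7],
    "audio": [0.2, 0.5, 0.3],
    "video": [0.1, 0.3, 0.6],
}
weights = {"text": 0.5, "audio": 0.25, "video": 0.25}

label, fused = late_fusion(scores, weights)
print(label)  # positive
```

Early (feature-level) fusion would instead concatenate per-modality feature vectors before a single classifier; late fusion trades some cross-modal interaction for modularity, since each modality's model can be trained and swapped independently.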
TL;DR: A novel and comprehensive framework for multimodal sentiment analysis in conversations is proposed, called a quantum-like multi-modal network (QMN), which leverages the mathematical formalism of quantum theory (QT) and a long short-term memory (LSTM) network.
46 citations
TL;DR: A new conversational dataset, named ScenarioSA, is presented, and an interactive long short-term memory network is proposed for conversational sentiment analysis to model interactions between speakers in a conversation; it outperforms a wide range of strong baselines and achieves results competitive with state-of-the-art approaches.
42 citations
06 Dec 2018
TL;DR: An utterance-based deep neural network model with a parallel combination of CNN- and LSTM-based networks is proposed to obtain representative features, termed the Audio Sentiment Vector (ASV), that maximally reflect the sentiment information in an audio clip.
Abstract: Audio sentiment analysis is a popular research area which extends text-based sentiment analysis to depend on the effectiveness of acoustic features extracted from speech. However, current progress on audio sentiment analysis mainly focuses on extracting homogeneous acoustic features or does not fuse heterogeneous features effectively. In this paper, we propose an utterance-based deep neural network model, which has a parallel combination of CNN- and LSTM-based networks, to obtain representative features termed the Audio Sentiment Vector (ASV), which can maximally reflect the sentiment information in an audio clip. Specifically, our model is trained with utterance-level labels, and the ASV can be extracted and fused creatively from the two branches. In the CNN branch, spectrograms produced from the signals are fed as inputs, while in the LSTM branch, the inputs include the spectral centroid, MFCCs and other recognized traditional acoustic features extracted from the dependent utterances in an audio clip. In addition, a BiLSTM with an attention mechanism is used for feature fusion. Extensive experiments show that our model can recognize audio sentiment precisely and that our ASVs are better than traditional acoustic features or vectors extracted from other deep learning models. Furthermore, experimental results indicate that the proposed model outperforms state-of-the-art approaches by 9.33% on MOSI.
20 citations
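Among the hand-crafted features the abstract names for the LSTM branch is the spectral centroid, the magnitude-weighted mean frequency of a frame's spectrum. A minimal sketch, where the frame length, sample rate, and pure-tone test signal are illustrative choices rather than the paper's settings:

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))

sr = 8000                           # illustrative sample rate (Hz)
t = np.arange(sr) / sr              # one second of audio
tone = np.sin(2 * np.pi * 440 * t)  # pure 440 Hz tone
print(round(spectral_centroid(tone, sr)))  # 440: a pure tone's centroid is its frequency
```

In a full pipeline this value would be computed per frame over short windows and stacked with MFCCs and the other traditional features as the LSTM branch's input sequence.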
23 Jul 2018
TL;DR: The conducted systematic literature review sheds some light on the starting point for research on SA for the Malay language, as well as on the domains and sources of content.
Abstract: Recent research and developments in Sentiment Analysis (SA) have simplified sentiment detection and classification from textual content. The related domains for these studies are diverse and comprise fields such as tourism, customer reviews, finance, software engineering, speech conversation, social media content, news and so on. SA research and development have been carried out for various languages, such as Chinese and English. However, SA research on other languages, such as the Malay language, is still scarce; thus, there is a need for SA research specifically for the Malay language. To understand the trends and to support practitioners and researchers with comprehensive information regarding SA for the Malay language, this study reviews published articles on SA for the Malay language. From five online databases, including ACM, Emerald Insight, IEEE Xplore, Science Direct, and Scopus, 2433 scientific articles were obtained. Through the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement, 10 articles were then chosen for the review process. Those articles were reviewed according to several categories: the aim of the study, the SA classification techniques, and the domain and source of content. As a result, the conducted systematic literature review sheds some light on the starting point for SA research for the Malay language.
7 citations