Soujanya Poria

Researcher at Singapore University of Technology and Design

Publications: 208
Citations: 20,256

Soujanya Poria is an academic researcher from the Singapore University of Technology and Design. His research focuses mainly on sentiment analysis and, more broadly, computer science. He has an h-index of 57 and has co-authored 175 publications that have received 13,352 citations. His previous affiliations include the Indian Institute of Technology Kharagpur and the Agency for Science, Technology and Research.

Papers
Journal Article (DOI)

Recent Trends in Deep Learning Based Natural Language Processing [Review Article]

TL;DR: This paper reviews significant deep-learning models and methods that have been employed for numerous NLP tasks and provides a walk-through of their evolution.
Posted Content

Recent Trends in Deep Learning Based Natural Language Processing

TL;DR: Deep learning methods employ multiple processing layers to learn hierarchical representations of data and have produced state-of-the-art results in many domains, including natural language processing (NLP).
Journal Article (DOI)

A review of affective computing

TL;DR: This first-of-its-kind, comprehensive literature review of the diverse field of affective computing focuses mainly on the use of audio, visual, and text information for multimodal affect analysis, and it outlines existing methods for fusing information from different modalities.
Journal Article (DOI)

Aspect extraction for opinion mining with a deep convolutional neural network

TL;DR: This paper used a 7-layer deep convolutional neural network to tag each word in opinionated sentences as either an aspect or a non-aspect word; it also developed a set of linguistic patterns for the same purpose and combined them with the neural network. A minimal sketch of such a per-token tagger follows.
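
As a rough illustration of the tagging setup described in that TL;DR, here is a minimal PyTorch sketch of a stacked 1-D convolutional network that labels each word as aspect or non-aspect. The class name, number of layers, layer widths, kernel size, and embedding dimension are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of per-token aspect tagging with a stacked 1-D CNN.
# Hyperparameters below are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class AspectTaggerCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_channels=100,
                 num_conv_layers=7, kernel_size=3, num_tags=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        layers = []
        in_channels = embed_dim
        # Stack of same-padding 1-D convolutions so every word keeps a label slot.
        for _ in range(num_conv_layers):
            layers += [nn.Conv1d(in_channels, hidden_channels, kernel_size,
                                 padding=kernel_size // 2),
                       nn.ReLU()]
            in_channels = hidden_channels
        self.convs = nn.Sequential(*layers)
        # Per-token classifier: aspect vs. non-aspect.
        self.classifier = nn.Linear(hidden_channels, num_tags)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)           # (batch, embed_dim, seq_len) for Conv1d
        x = self.convs(x)               # (batch, hidden_channels, seq_len)
        x = x.transpose(1, 2)           # (batch, seq_len, hidden_channels)
        return self.classifier(x)       # (batch, seq_len, num_tags) logits

# Example usage with random token ids:
# logits = AspectTaggerCNN(vocab_size=20000)(torch.randint(0, 20000, (4, 25)))
```

In the paper, the network's predictions are further combined with hand-crafted linguistic patterns; that rule component is not shown here.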
Proceedings Article (DOI)

Context-Dependent Sentiment Analysis in User-Generated Videos

TL;DR: An LSTM-based model is proposed that enables utterances to capture contextual information from their surroundings in the same video, thus aiding the classification process; it shows a 5-10% performance improvement over the state of the art and generalizes robustly. A minimal sketch of the contextual idea follows.
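
To illustrate the core idea in that TL;DR, here is a minimal PyTorch sketch in which a bidirectional LSTM runs over the sequence of utterance-level feature vectors from one video, so each utterance's sentiment prediction can draw on its neighbours as context. The class name, feature dimension, hidden size, and use of a bidirectional layer are illustrative assumptions rather than the paper's exact architecture.

```python
# Hypothetical sketch of context-dependent utterance classification with an LSTM.
# Feature size, hidden width, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class ContextualUtteranceLSTM(nn.Module):
    def __init__(self, feature_dim=300, hidden_dim=128, num_classes=2):
        super().__init__()
        # BiLSTM over the utterances of a single video provides surrounding context.
        self.lstm = nn.LSTM(feature_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, utterance_feats):
        # utterance_feats: (batch, num_utterances, feature_dim),
        # e.g. per-utterance text/audio/visual features extracted beforehand.
        context, _ = self.lstm(utterance_feats)  # (batch, num_utterances, 2*hidden_dim)
        return self.classifier(context)          # per-utterance sentiment logits

# Example usage: 2 videos, 10 utterances each, 300-d features per utterance.
# logits = ContextualUtteranceLSTM()(torch.randn(2, 10, 300))
```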