
Amir Hussain

Researcher at Edinburgh Napier University

Publications: 582
Citations: 16,665

Amir Hussain is an academic researcher from Edinburgh Napier University. The author has contributed to research topics including Computer science and Sentiment analysis. The author has an h-index of 55 and has co-authored 506 publications receiving 11,944 citations. Previous affiliations of Amir Hussain include Jinnah University for Women and Universities UK.

Papers
Journal Article (DOI)

A review of affective computing

TL;DR: This first-of-its-kind, comprehensive literature review of the diverse field of affective computing focuses mainly on the use of audio, visual, and text information for multimodal affect analysis, and outlines existing methods for fusing information from different modalities.
Journal Article (DOI)

Applications of Deep Learning and Reinforcement Learning to Biological Data

TL;DR: This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data, and compares the performance of DL techniques when applied to different datasets across various application domains.
Journal Article (DOI)

Enhanced SenticNet with Affective Labels for Concept-Based Opinion Mining

TL;DR: The presented methodology enriches SenticNet concepts with affective information by assigning an emotion label by way of concept-based opinion mining.
Proceedings Article (DOI)

Convolutional MKL Based Multimodal Emotion Recognition and Sentiment Analysis

TL;DR: A novel method is presented that extracts features from visual and textual modalities using deep convolutional neural networks and significantly outperforms the state of the art in multimodal emotion recognition and sentiment analysis on different datasets.
Journal Article (DOI)

Fusing audio, visual and textual clues for sentiment analysis from multimodal content

TL;DR: This paper proposes a novel methodology for multimodal sentiment analysis that harvests sentiments from Web videos, demonstrating a model that uses the audio, visual, and textual modalities as sources of information.