
Newton Howard

Researcher at University of Oxford

Publications: 120
Citations: 3536

Newton Howard is an academic researcher from the University of Oxford. The author has contributed to research on topics including sentiment analysis and artificial neural networks. The author has an h-index of 24 and has co-authored 118 publications receiving 2664 citations. Previous affiliations of Newton Howard include the University of Toulouse and the Massachusetts Institute of Technology.

Papers
Journal Article

Enhanced SenticNet with Affective Labels for Concept-Based Opinion Mining

TL;DR: The presented methodology enriches SenticNet concepts with affective information by assigning each concept an emotion label, supporting concept-based opinion mining.
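To make the idea of attaching affective labels to concepts concrete, here is a minimal, purely illustrative sketch: a toy concept lexicon (not the actual SenticNet resource, and not the paper's labelling methodology) maps multiword concepts to an assumed polarity score and emotion label. The dictionary entries and label set are invented for illustration.

```python
# Illustrative sketch only: a toy concept-level affect lookup, not the
# actual SenticNet resource or the paper's labelling methodology.
# The concept entries and emotion labels below are invented examples.

AFFECTIVE_LEXICON = {
    "celebrate birthday": {"polarity": 0.8, "emotion": "joy"},
    "lose job":           {"polarity": -0.7, "emotion": "sadness"},
    "small room":         {"polarity": -0.2, "emotion": "disgust"},
}

def label_concepts(concepts):
    """Attach a polarity score and an emotion label to each known concept."""
    return {c: AFFECTIVE_LEXICON.get(c, {"polarity": 0.0, "emotion": "neutral"})
            for c in concepts}

if __name__ == "__main__":
    print(label_concepts(["celebrate birthday", "lose job", "watch tv"]))
```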
Journal Article

Fusing audio, visual and textual clues for sentiment analysis from multimodal content

TL;DR: This paper proposes a novel methodology for multimodal sentiment analysis that harvests sentiment from Web videos using a model that draws on audio, visual and textual modalities as sources of information.
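As a rough illustration of fusing modalities for sentiment analysis, the sketch below performs simple feature-level (early) fusion: per-utterance audio, visual and textual feature vectors are concatenated and fed to a linear classifier. The feature dimensions, random data and classifier choice are assumptions made for the example, not the paper's actual pipeline.

```python
# Illustrative sketch only: feature-level fusion of audio, visual and textual
# descriptors with a linear classifier. Dimensions and data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
audio_feats  = rng.normal(size=(n, 32))   # e.g. prosodic / spectral features
visual_feats = rng.normal(size=(n, 64))   # e.g. facial-expression descriptors
text_feats   = rng.normal(size=(n, 100))  # e.g. utterance embeddings
labels = rng.integers(0, 2, size=n)       # 0 = negative, 1 = positive sentiment

# Early (feature-level) fusion: concatenate modality features per utterance.
fused = np.hstack([audio_feats, visual_feats, text_feats])

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```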
Journal Article

The use of photoplethysmography for assessing hypertension

TL;DR: Although the technology is not yet mature, it is anticipated that accurate, continuous blood pressure (BP) measurements may become available from mobile and wearable devices in the near future, given their vast potential.
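As a small, hedged illustration of working with photoplethysmography (PPG) signals, the sketch below detects pulse peaks in a synthetic waveform and derives a heart-rate estimate with SciPy. Actual BP assessment from PPG relies on far richer waveform features and calibration; this example only shows basic signal handling and is not the paper's method.

```python
# Illustrative sketch only: detect pulse peaks in a synthetic PPG signal and
# estimate heart rate. Not a blood-pressure estimation method.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s of signal
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # ~72 bpm pulse

peaks, _ = find_peaks(ppg, distance=fs * 0.5)   # at most one peak per 0.5 s
heart_rate_bpm = 60 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fs)
print(f"estimated heart rate: {heart_rate_bpm:.1f} bpm")
```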
Journal Article

Comparing Oversampling Techniques to Handle the Class Imbalance Problem: A Customer Churn Prediction Case Study

TL;DR: The empirical results demonstrate that MTDF combined with genetic-algorithm-based rule generation achieved the best overall predictive performance compared with the other evaluated oversampling methods and rule-generation algorithms.
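To clarify what oversampling a minority class means in a churn-prediction setting, the sketch below applies naive random oversampling to a toy imbalanced dataset. It is not MTDF or the genetic-algorithm-based rule generation evaluated in the paper; the data and balancing strategy are assumptions chosen only for illustration.

```python
# Illustrative sketch only: naive random oversampling of the minority class.
# Not MTDF or any method evaluated in the paper.
import numpy as np

rng = np.random.default_rng(0)

def random_oversample(X, y, minority_label):
    """Duplicate minority-class rows at random until both classes are balanced."""
    minority_idx = np.flatnonzero(y == minority_label)
    majority_idx = np.flatnonzero(y != minority_label)
    extra = rng.choice(minority_idx,
                       size=len(majority_idx) - len(minority_idx),
                       replace=True)
    keep = np.concatenate([majority_idx, minority_idx, extra])
    return X[keep], y[keep]

# Toy churn-like data: 95 non-churners (0) vs 5 churners (1).
X = rng.normal(size=(100, 4))
y = np.array([0] * 95 + [1] * 5)
X_bal, y_bal = random_oversample(X, y, minority_label=1)
print(np.bincount(y_bal))   # -> [95 95]
```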
Journal Article

Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis

TL;DR: A multimodal affective data analysis framework is proposed to extract user opinions and emotions from video content; it outperforms the state-of-the-art model in multimodal sentiment analysis research by margins of 10–13% and 3–5% accuracy on polarity detection and emotion recognition, respectively.
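To show one common way of combining modality-specific predictions, the sketch below performs decision-level fusion by weighted averaging of per-modality sentiment scores. The paper's framework ensembles convolutional neural networks with multiple kernel learning; the weights, scores and threshold here are assumptions used only to illustrate the fusion step.

```python
# Illustrative sketch only: decision-level fusion of per-modality classifier
# scores by weighted averaging. Weights and scores are assumed values.
import numpy as np

def fuse_decisions(audio_scores, visual_scores, text_scores,
                   weights=(0.3, 0.3, 0.4)):
    """Combine per-modality positive-class probabilities into one decision."""
    stacked = np.vstack([audio_scores, visual_scores, text_scores])
    fused = np.average(stacked, axis=0, weights=weights)
    return (fused >= 0.5).astype(int)       # 1 = positive sentiment

# Example: three utterances scored independently by each modality.
audio  = np.array([0.62, 0.40, 0.55])
visual = np.array([0.70, 0.35, 0.48])
text   = np.array([0.80, 0.20, 0.60])
print(fuse_decisions(audio, visual, text))  # -> [1 0 1]
```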