
Peixiang Zhong

Researcher at Nanyang Technological University

Publications -  17
Citations -  767

Peixiang Zhong is an academic researcher from Nanyang Technological University. The author has contributed to research on the topics of commonsense knowledge and persona, has an h-index of 8, and has co-authored 17 publications receiving 308 citations. Previous affiliations of Peixiang Zhong include the Association for Computing Machinery.

Papers
Posted Content

EEG-Based Emotion Recognition Using Regularized Graph Neural Networks

TL;DR: A regularized graph neural network for EEG-based emotion recognition that considers the biological topology among different brain regions to capture both local and global relations among EEG channels; ablation studies show that the proposed adjacency matrix and two regularizers contribute consistent and significant gains to the model's performance.
Journal ArticleDOI

EEG-Based Emotion Recognition Using Regularized Graph Neural Networks

TL;DR: This paper proposes a regularized graph neural network (RGNN) for EEG-based emotion recognition, which considers the biological topology among different brain regions to capture both local and global relations among different EEG channels.
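To make these summaries more concrete, the following is a minimal, hypothetical PyTorch sketch of a graph-convolution classifier over EEG channels with a learnable adjacency matrix; the channel count, feature size, all-ones initialization, and the sparsity penalty are illustrative assumptions, not the authors' actual RGNN implementation or its two regularizers.

```python
# Hypothetical sketch (not the authors' code): a single graph-convolution step
# over EEG channels with a learnable adjacency matrix, illustrating the idea of
# capturing relations among channels. Sizes and initialization are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleEEGGraphNet(nn.Module):
    def __init__(self, num_channels=62, in_features=5, hidden=32, num_classes=3):
        super().__init__()
        # Learnable adjacency over EEG channels; the paper initializes it from the
        # biological topology of brain regions (assumed here as all-ones).
        self.adj = nn.Parameter(torch.ones(num_channels, num_channels))
        self.proj = nn.Linear(in_features, hidden)
        self.out = nn.Linear(num_channels * hidden, num_classes)

    def forward(self, x):
        # x: (batch, num_channels, in_features), e.g. band-power features per channel
        a = F.relu(self.adj)                                      # non-negative edge weights
        a_norm = a / a.sum(dim=1, keepdim=True).clamp(min=1e-6)   # row-normalize
        h = F.relu(self.proj(torch.matmul(a_norm, x)))            # aggregate neighbor features
        return self.out(h.flatten(start_dim=1))                   # emotion logits

def adjacency_sparsity(model, weight=1e-3):
    # One plausible regularizer: an L1 penalty encouraging a sparse adjacency.
    # The paper's two regularizers differ; this is only illustrative.
    return weight * model.adj.abs().sum()
```

In training, such a penalty would simply be added to the classification loss, e.g. `loss = F.cross_entropy(logits, labels) + adjacency_sparsity(model)`.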
Proceedings ArticleDOI

Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations

TL;DR: This paper proposes a knowledge-enriched transformer (KET) that combines context and commonsense knowledge for emotion detection in textual conversations: contextual utterances are interpreted using hierarchical self-attention, and external commonsense knowledge is dynamically leveraged using a context-aware affective graph attention mechanism.
Posted Content

Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations

TL;DR: A Knowledge-Enriched Transformer (KET) is proposed, where contextual utterances are interpreted using hierarchical self-attention and external commonsense knowledge is dynamically leveraged using a context-aware affective graph attention mechanism.
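As a rough illustration of the two mechanisms named in these summaries, here is a hypothetical PyTorch sketch pairing hierarchical self-attention over contextual utterances with a context-aware attention over retrieved commonsense concept embeddings; all dimensions, pooling choices, and the concept-retrieval interface are assumptions rather than the published KET architecture.

```python
# Hypothetical sketch (not the authors' code) of (1) hierarchical self-attention
# over contextual utterances and (2) context-aware attention over commonsense
# concept embeddings. How concepts are retrieved and scored is assumed here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalContextEncoder(nn.Module):
    def __init__(self, d_model=128, nhead=4):
        super().__init__()
        self.word_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.utt_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, utterances):
        # utterances: (batch, num_utts, num_words, d_model) word embeddings
        b, u, w, d = utterances.shape
        words = self.word_layer(utterances.reshape(b * u, w, d))  # word-level self-attention
        utt_vecs = words.mean(dim=1).reshape(b, u, d)             # pool words per utterance
        return self.utt_layer(utt_vecs)                           # utterance-level self-attention

class ConceptAttention(nn.Module):
    """Context-aware attention over commonsense concept embeddings."""
    def __init__(self, d_model=128):
        super().__init__()
        self.score = nn.Linear(2 * d_model, 1)

    def forward(self, context_vec, concept_embs):
        # context_vec: (batch, d_model); concept_embs: (batch, num_concepts, d_model)
        ctx = context_vec.unsqueeze(1).expand_as(concept_embs)
        scores = self.score(torch.cat([ctx, concept_embs], dim=-1))
        weights = F.softmax(scores, dim=1)                        # context-aware weights
        return (weights * concept_embs).sum(dim=1)                # knowledge-enriched vector
```

The knowledge-enriched vector from `ConceptAttention` would be combined with the utterance representation before the final emotion classifier; how that combination is done is another assumption of this sketch.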
Journal ArticleDOI

An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss

TL;DR: This paper proposes an end-to-end affect-rich open-domain neural conversational model that produces responses that are not only appropriate in syntax and semantics but also rich in affect; the authors extend the Seq2Seq model and adopt VAD (Valence, Arousal, and Dominance) affective notations to embed each word with affect.
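As a minimal, hypothetical PyTorch sketch of two ingredients named in the title and summary, the following augments word embeddings with VAD values and weights the cross-entropy loss by word affect strength; the VAD lookup table and the weighting formula are illustrative assumptions rather than the paper's exact formulation, and the biased attention mechanism is omitted.

```python
# Hypothetical sketch (not the authors' code): VAD-augmented word embeddings and
# an affect-weighted cross-entropy loss. The VAD table and weighting are assumed.
import torch
import torch.nn as nn

class VADEmbedding(nn.Module):
    def __init__(self, vocab_size, d_word=300, vad_table=None):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_word)
        # vad_table: (vocab_size, 3) tensor of Valence/Arousal/Dominance scores;
        # words missing from the affect lexicon are assumed neutral (zeros).
        vad = vad_table if vad_table is not None else torch.zeros(vocab_size, 3)
        self.vad_emb = nn.Embedding.from_pretrained(vad, freeze=True)

    def forward(self, token_ids):
        # Concatenate the semantic and affective views of each word.
        return torch.cat([self.word_emb(token_ids), self.vad_emb(token_ids)], dim=-1)

def affect_weighted_cross_entropy(logits, targets, vad_table, base=1.0, scale=0.5):
    # logits: (batch, seq, vocab); targets: (batch, seq) token ids
    # Up-weight emotionally charged target words by their distance from the
    # neutral VAD point (here the origin); the exact scheme is an assumption.
    affect_strength = vad_table[targets].norm(dim=-1)             # (batch, seq)
    weights = base + scale * affect_strength
    token_loss = nn.functional.cross_entropy(
        logits.flatten(0, 1), targets.flatten(), reduction="none"
    ).view_as(targets)
    return (weights * token_loss).mean()
```

During training, this loss would replace the standard token-level cross-entropy so that errors on affect-rich target words are penalized more heavily, without changing the decoder itself.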