
Heng Yang

Researcher at South China Normal University

Publications: 16
Citations: 256

Heng Yang is an academic researcher from South China Normal University. He has contributed to research on topics including computer science and sentiment analysis, has an h-index of 3, and has co-authored 10 publications receiving 79 citations.

Papers
Journal Article

Augmentor or Filter? Reconsider the Role of Pre-trained Language Model in Text Classification Augmentation

Heng Yang, +1 more
06 Oct 2022
TL;DR: This work proposes BoostAug, which reconsiders the role of the pre-trained language model in text augmentation, emphasizing the filtering of augmentation instances rather than their generation, and releases code that can help improve existing augmentation methods.
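As a rough illustration of the filter-rather-than-generate idea summarized above, the sketch below keeps only augmented sentences that a pre-trained classifier labels consistently with the original. This is a minimal, hypothetical sketch, not the released BoostAug code; the model name, confidence threshold, and helper function are illustrative assumptions.

```python
from transformers import pipeline

# Pre-trained classifier used as a filter; the model name and the 0.9
# confidence threshold are illustrative choices, not taken from the paper.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def filter_augmentations(original, candidates, threshold=0.9):
    """Keep only augmented sentences whose predicted label matches the
    original sentence's label with at least `threshold` confidence."""
    original_label = classifier(original)[0]["label"]
    kept = []
    for text in candidates:
        pred = classifier(text)[0]
        if pred["label"] == original_label and pred["score"] >= threshold:
            kept.append(text)
    return kept

print(filter_augmentations(
    "The movie was wonderful.",
    ["The film was wonderful.", "The movie was terrible."],
))
```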

PyABSA: A Modularized Framework for Reproducible Aspect-based Sentiment Analysis

Heng Yang, +1 more
TL;DR: This paper presents PyABSA, a modularized framework built on PyTorch for reproducible aspect-based sentiment analysis (ABSA). The framework supports several ABSA subtasks, including aspect term extraction, aspect sentiment classification, and end-to-end aspect-based sentiment analysis.
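To give a sense of the aspect sentiment classification subtask mentioned in the summary, the sketch below pairs a sentence with an aspect term and scores sentiment labels with a generic zero-shot NLI model. This is only a stand-in illustration of the task, not PyABSA's own interface; the model and hypothesis template are assumptions.

```python
from transformers import pipeline

# Generic zero-shot NLI model as a stand-in; this is NOT PyABSA's interface.
zero_shot = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def aspect_sentiment(sentence, aspect):
    """Score sentiment labels for one aspect term mentioned in the sentence."""
    result = zero_shot(
        sentence,
        candidate_labels=["positive", "negative", "neutral"],
        hypothesis_template=f"The sentiment toward the {aspect} is {{}}.",
    )
    return result["labels"][0]  # highest-scoring label

print(aspect_sentiment("The food was great but the service was slow.", "service"))
```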
Posted Content

Back to Reality: Leveraging Pattern-driven Modeling to Enable Affordable Sentiment Dependency Learning

TL;DR: This paper proposes sentiment patterns (SP) to guide the model's dependency learning and introduces a local sentiment aggregation (LSA) mechanism that focuses on learning sentiment dependencies within a sentiment cluster.
Journal Article

Reactive Perturbation Defocusing for Textual Adversarial Defense

Heng Yang, +1 more
06 May 2023
TL;DR: The authors propose reactive perturbation defocusing (RPD), which injects safe perturbations into adversarial examples to distract the objective models from the malicious perturbations. The proposed framework repairs up to approximately 97% of correctly identified adversarial examples with only about a 2% performance decrease on natural examples.
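The general repair idea can be sketched as follows. This is a hypothetical, heavily simplified illustration: the detector stub, the random token-masking scheme, the model name, and the mask ratio are all assumptions for demonstration and do not reflect the authors' implementation.

```python
import random
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def looks_adversarial(text):
    # Placeholder: assume an upstream adversarial-example detector exists.
    return True

def repair_and_predict(text, mask_ratio=0.15, seed=0):
    """If the text is flagged as adversarial, inject 'safe' perturbations
    (randomly masking a few tokens) before re-classifying it."""
    if not looks_adversarial(text):
        return classifier(text)[0]
    random.seed(seed)
    tokens = text.split()
    n_mask = max(1, int(len(tokens) * mask_ratio))
    for i in random.sample(range(len(tokens)), n_mask):
        tokens[i] = "[MASK]"  # the injected perturbation
    return classifier(" ".join(tokens))[0]

print(repair_and_predict("The plot was surprisingly dull and lifeless."))
```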