Xingxu Yao
Researcher at Nankai University
Publications - 13
Citations - 114
Xingxu Yao is an academic researcher from Nankai University. The author has contributed to research in the topics of image retrieval and metric learning, has an h-index of 3, and has co-authored 12 publications receiving 37 citations.
Papers
Proceedings ArticleDOI
Attention-Aware Polarity Sensitive Embedding for Affective Image Retrieval
TL;DR: An Attention-aware Polarity Sensitive Embedding (APSE) network is introduced to learn affective representations in an end-to-end manner, and a weighted emotion-pair loss is presented that takes the inter- and intra-polarity relationships of the emotion labels into consideration.
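The paper does not give the loss formula in this summary, but the idea of a polarity-weighted pair loss can be sketched as a contrastive loss whose pair weights depend on whether the two images share emotion polarity. All names, weights, and the exact functional form below are assumptions for illustration, not the authors' actual definition:

```python
import numpy as np

def weighted_emotion_pair_loss(dists, same_emotion, same_polarity,
                               margin=1.0, w_inter=2.0, w_intra=1.0):
    """Hypothetical sketch of a weighted contrastive pair loss.

    Pairs with the same emotion label are pulled together; pairs with
    different labels are pushed apart up to a margin. Cross-polarity
    pairs (e.g. "amusement" vs. "sadness") get a larger weight than
    same-polarity pairs (e.g. "anger" vs. "sadness").
    """
    dists = np.asarray(dists, dtype=float)
    same_emotion = np.asarray(same_emotion, dtype=bool)
    same_polarity = np.asarray(same_polarity, dtype=bool)

    pull = same_emotion * dists ** 2                                # attract matching pairs
    push = (~same_emotion) * np.maximum(margin - dists, 0.0) ** 2   # repel mismatched pairs
    weights = np.where(same_polarity, w_intra, w_inter)             # polarity-sensitive weighting
    return float(np.mean(weights * (pull + push)))
```

A matching pair at distance 0.2 and a cross-polarity mismatched pair already beyond the margin would, for example, yield a small nonzero loss dominated by the pull term.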
Journal ArticleDOI
Affective Image Content Analysis: Two Decades Review and New Perspectives.
Sicheng Zhao, Xingxu Yao, Jufeng Yang, Guoli Jia, Guiguang Ding, Tat-Seng Chua, Björn Schuller, Kurt Keutzer +7 more
TL;DR: Recently, with the rapid advancement of emotional intelligence and the explosive growth of visual data, extensive research efforts have been dedicated to affective image content analysis (AICA); this paper reviews two decades of that work and outlines new perspectives.
Journal ArticleDOI
Adaptive Deep Metric Learning for Affective Image Retrieval and Classification
TL;DR: An adaptive sentiment similarity loss is designed, which is able to embed affective images considering the emotion polarity and adaptively adjust the margin between different image pairs, and a unified multi-task deep framework is developed to simultaneously optimize both retrieval and classification goals.
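The summary describes a margin that adapts to the emotional relationship between image pairs. A minimal triplet-style sketch of that idea, with all names and the margin schedule being illustrative assumptions rather than the paper's actual loss, could look like:

```python
import numpy as np

def adaptive_sentiment_similarity_loss(d_pos, d_neg, emo_gap,
                                       base_margin=0.5, scale=0.5):
    """Hypothetical triplet loss with an adaptive margin.

    d_pos / d_neg: anchor-positive and anchor-negative embedding distances.
    emo_gap: a per-triplet measure of how emotionally different the
    negative is from the anchor (e.g. larger across polarities), which
    enlarges the margin so cross-polarity negatives are pushed further.
    """
    margin = base_margin + scale * np.asarray(emo_gap, dtype=float)
    hinge = np.maximum(np.asarray(d_pos, float) - np.asarray(d_neg, float) + margin, 0.0)
    return float(np.mean(hinge))
```

In a multi-task setup like the one described, this retrieval loss would be optimized jointly with a standard classification loss on the same embedding.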
Posted Content
Emotion-Based End-to-End Matching Between Image and Music in Valence-Arousal Space
TL;DR: End-to-end matching between image and music based on emotions in the continuous valence-arousal (VA) space is studied, demonstrating the superiority of CDCML for emotion-based image and music matching compared to state-of-the-art approaches.
Proceedings ArticleDOI
Emotion-Based End-to-End Matching Between Image and Music in Valence-Arousal Space
TL;DR: Cross-modal deep continuous metric learning (CDCML) is proposed to learn a shared latent embedding space that preserves the cross-modal similarity relationship in the continuous matching space.
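The core mechanism described, projecting two modalities into one shared embedding space and matching by distance there, can be sketched with toy linear projections. The feature dimensions, projection matrices, and 2-D shared space below are illustrative assumptions standing in for the paper's learned deep networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pretrained image and music feature extractors.
img_feats = rng.normal(size=(4, 128))   # 4 images, 128-D visual features
mus_feats = rng.normal(size=(4, 64))    # 4 music clips, 64-D audio features

# Per-modality projections into a shared 2-D embedding space
# (valence-arousal-like); in CDCML these would be trained so that
# cross-modal distances reflect emotional similarity.
W_img = rng.normal(size=(128, 2)) * 0.1
W_mus = rng.normal(size=(64, 2)) * 0.1

z_img = img_feats @ W_img
z_mus = mus_feats @ W_mus

# Cross-modal retrieval: match each image to its nearest music clip
# by Euclidean distance in the shared space.
dists = np.linalg.norm(z_img[:, None, :] - z_mus[None, :, :], axis=-1)
best = dists.argmin(axis=1)
```

With trained projections, `best[i]` would index the music clip whose emotion best matches image `i`; here the untrained random weights only demonstrate the retrieval mechanics.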