Yang Wang
Researcher at University of Manitoba
Publications - 138
Citations - 7244
Yang Wang is an academic researcher at the University of Manitoba. He has contributed to research on topics including image segmentation. The author has an h-index of 36 and has co-authored 134 publications receiving 5,276 citations. Previous affiliations of Yang Wang include the University of Illinois at Urbana–Champaign and Huawei.
Papers
Journal ArticleDOI
Applications of Support Vector Machine (SVM) Learning in Cancer Genomics.
TL;DR: The recent progress of SVMs in cancer genomic studies is reviewed, and the strengths of SVM learning and its future perspectives in cancer genomic applications are discussed.
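To make the SVM learning discussed above concrete, here is a minimal, illustrative linear SVM trained with Pegasos-style subgradient descent on toy two-feature data (a stand-in for, e.g., two gene-expression features). This is a generic sketch, not a method from the reviewed paper; the function name and toy data are invented for illustration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent for a linear SVM.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Minimizes the hinge loss plus an L2 penalty on w.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # point inside the margin: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:           # correctly classified with margin: shrink only
                w = (1 - eta * lam) * w
    return w, b

# Toy linearly separable data: two features, two classes.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
```

In practice a cancer-genomics study would use an established library (e.g. scikit-learn's `SVC`) with kernels and cross-validation rather than this bare-bones trainer.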
Book ChapterDOI
Optimizing Intersection-Over-Union in Deep Neural Networks for Image Segmentation
Atiqur Rahman, Yang Wang +1 more
TL;DR: This paper proposes an approach for directly optimizing this intersection-over-union (IoU) measure in deep neural networks and demonstrates that this approach outperforms DNNs trained with standard softmax loss.
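The key idea of optimizing IoU directly can be sketched with a soft (differentiable) IoU surrogate, where predicted probabilities replace hard set membership. This is an illustrative surrogate under that assumption, not necessarily the paper's exact formulation.

```python
import numpy as np

def soft_iou_loss(pred, target, eps=1e-8):
    """Differentiable IoU surrogate for binary segmentation.

    pred: probabilities in [0, 1]; target: binary ground-truth mask.
    Intersection and union become sums of element-wise products, so the
    loss 1 - IoU can be minimized by gradient descent in a DNN.
    """
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - inter / (union + eps)

target = np.array([[0.0, 1.0], [1.0, 1.0]])
loss_perfect = soft_iou_loss(target, target)        # near 0
loss_inverted = soft_iou_loss(1.0 - target, target)  # near 1
```

Unlike a per-pixel softmax loss, this objective couples all pixels through the shared union term, which is what lets it target the IoU evaluation measure directly.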
Journal ArticleDOI
Human Action Recognition by Semilatent Topic Models
Yang Wang, Greg Mori +1 more
TL;DR: Two new models for human action recognition from video sequences using topic models are presented. They differ from previous latent topic models for visual recognition in two major aspects: first, the latent topics in the models directly correspond to class labels; second, some of the latent variables in previous topic models become observed in this case.
Journal ArticleDOI
Discriminative Latent Models for Recognizing Contextual Group Activities
TL;DR: This paper proposes a novel framework for recognizing group activities which jointly captures the group activity, the individual person actions, and the interactions among them and introduces a new feature representation called the action context (AC) descriptor.
Proceedings ArticleDOI
Cross-Modal Self-Attention Network for Referring Image Segmentation
TL;DR: A cross-modal self-attention (CMSA) module that effectively captures the long-range dependencies between linguistic and visual features, and a gated multi-level fusion module to selectively integrate self-attentive cross-modal features corresponding to different levels in the image.
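The cross-modal self-attention idea can be sketched with plain scaled dot-product self-attention over concatenated visual and linguistic features, so every spatial position can attend to every other (the long-range dependencies mentioned above). This is a generic sketch with random projection weights, not the CMSA module's exact design, which also includes gated multi-level fusion.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_self_attention(visual, linguistic, seed=0):
    """Sketch: fuse per-position visual features with linguistic features
    by concatenation, then apply scaled dot-product self-attention.

    visual: (N, d_v) features for N positions; linguistic: (N, d_l).
    Returns attended features of shape (N, d_v + d_l).
    """
    rng = np.random.default_rng(seed)
    x = np.concatenate([visual, linguistic], axis=-1)  # (N, d_v + d_l)
    d = x.shape[-1]
    # Random query/key/value projections stand in for learned weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))  # (N, N): each row sums to 1
    return attn @ v

visual = np.ones((4, 3))
linguistic = np.zeros((4, 2))
out = cross_modal_self_attention(visual, linguistic)
```

In the actual referring-segmentation setting, the linguistic features come from the referring expression and are tiled over spatial positions before fusion.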