Jung-Woo Ha

Researcher at Naver Corporation

Publications: 109
Citations: 10,104

Jung-Woo Ha is an academic researcher at Naver Corporation. His research spans topics including artificial neural networks and computer science. He has an h-index of 23 and has co-authored 109 publications receiving 6,796 citations. His previous affiliations include New York University and Seoul National University.

Papers
Proceedings Article

StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation

TL;DR: StarGAN proposes a unified model architecture that performs image-to-image translation across multiple domains using only a single model, yielding translated images of superior quality compared to existing models as well as the capability to flexibly translate an input image to any desired target domain.
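To make the single-model idea concrete, below is a minimal PyTorch sketch (an illustration, not the authors' implementation) of StarGAN-style conditioning: a one-hot target-domain label is replicated spatially and concatenated to the input image's channels, so one generator can translate to any domain. All layer sizes and names here are assumptions.

```python
import torch
import torch.nn as nn

class StarGANStyleGenerator(nn.Module):
    """Hypothetical minimal generator: the target-domain label is
    appended channel-wise so a single network serves every domain."""

    def __init__(self, img_channels=3, num_domains=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, hidden, 7, 1, 3),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, img_channels, 7, 1, 3),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, x, target_label):
        # target_label: (batch, num_domains) one-hot vector
        b, _, h, w = x.shape
        label_map = target_label.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, label_map], dim=1))

# Usage: translate a batch of images to domain 2.
g = StarGANStyleGenerator()
imgs = torch.randn(4, 3, 128, 128)
labels = torch.eye(5)[torch.tensor([2, 2, 2, 2])]
out = g(imgs, labels)  # (4, 3, 128, 128)
```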
Posted Content

StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

TL;DR: StarGAN's unified model architecture allows simultaneous training on multiple datasets with different domains within a single network, leading to superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain.
Posted Content

StarGAN v2: Diverse Image Synthesis for Multiple Domains

TL;DR: StarGAN v2 is proposed as a single framework that addresses two limitations of existing image-to-image translation models, namely limited diversity of generated images and the need for multiple models to cover all domains, and it shows significantly improved results over the baselines.
Proceedings Article

StarGAN v2: Diverse Image Synthesis for Multiple Domains

TL;DR: StarGAN v2 proposes a single framework that learns a mapping between different visual domains while satisfying two properties: (1) diversity of generated images and (2) scalability over multiple domains.
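As a hedged sketch of the mechanism the TL;DR describes (module names and dimensions are illustrative assumptions, not the paper's exact configuration): a mapping network turns a latent code into a domain-specific style code via one output head per domain, and that style code modulates generator features, for example through adaptive instance normalization.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a latent code to a domain-specific style code
    (one output head per domain), enabling diverse outputs."""

    def __init__(self, latent_dim=16, style_dim=64, num_domains=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Linear(256, style_dim) for _ in range(num_domains)
        )

    def forward(self, z, domain):
        h = self.shared(z)
        styles = torch.stack([head(h) for head in self.heads], dim=1)
        return styles[torch.arange(z.size(0)), domain]  # (batch, style_dim)

class AdaIN(nn.Module):
    """Adaptive instance norm: the style code sets per-channel scale/shift."""

    def __init__(self, style_dim, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.affine = nn.Linear(style_dim, channels * 2)

    def forward(self, x, s):
        gamma, beta = self.affine(s).chunk(2, dim=1)
        return (1 + gamma[:, :, None, None]) * self.norm(x) + beta[:, :, None, None]

# Usage: different latent codes (or domains) yield different style codes,
# which restyle the same generator feature map.
m = MappingNetwork()
z = torch.randn(4, 16)
domain = torch.tensor([0, 1, 2, 0])
s = m(z, domain)                    # (4, 64) style codes
feat = torch.randn(4, 128, 32, 32)  # a generator feature map
out = AdaIN(64, 128)(feat, s)
```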
Proceedings Article

Dual Attention Networks for Multimodal Reasoning and Matching

TL;DR: The authors propose Dual Attention Networks (DANs), which jointly leverage visual and textual attention mechanisms to capture the fine-grained interplay between vision and language for visual question answering (VQA) and image-text matching.
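Below is a minimal, illustrative sketch of one dual-attention step (assumed names and dimensions, not the paper's code): a shared memory vector guides soft attention over image regions and over word embeddings, and the two attended vectors jointly update the memory across a few reasoning steps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attend(features, memory, proj_f, proj_m):
    """Soft attention over a set of features (image regions or word
    embeddings), guided by a shared memory vector."""
    # features: (batch, n, d); memory: (batch, d)
    scores = torch.tanh(proj_f(features)) * torch.tanh(proj_m(memory)).unsqueeze(1)
    alpha = F.softmax(scores.sum(-1), dim=1)        # attention weights (batch, n)
    return (alpha.unsqueeze(-1) * features).sum(1)  # attended context (batch, d)

# Usage: alternate visual and textual attention while refining a joint memory.
d, k = 512, 256
proj_v, proj_vm = nn.Linear(d, k), nn.Linear(d, k)
proj_t, proj_tm = nn.Linear(d, k), nn.Linear(d, k)
regions = torch.randn(2, 49, d)  # e.g., a 7x7 CNN feature grid
words = torch.randn(2, 12, d)    # word embeddings
memory = torch.randn(2, d)
for _ in range(2):               # two reasoning steps
    v = attend(regions, memory, proj_v, proj_vm)
    t = attend(words, memory, proj_t, proj_tm)
    memory = memory + v * t      # multiplicative fusion (an assumption)
```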