Chen Fang
Researcher at Adobe Systems
Publications - 109
Citations - 8242
Chen Fang is an academic researcher at Adobe Systems. The author has contributed to research on topics including image retrieval and features (computer vision). The author has an h-index of 30, has co-authored 108 publications, and has received 5,727 citations. Previous affiliations of Chen Fang include Tencent and Dartmouth College.
Papers
Proceedings ArticleDOI
Image Captioning with Semantic Attention
TL;DR: The authors propose a model of semantic attention for image caption generation that selectively attends to semantic concept proposals and fuses them into the hidden states and outputs of recurrent neural networks.
Posted Content
Image Captioning with Semantic Attention
TL;DR: This paper proposes a new algorithm that combines top-down and bottom-up approaches to natural language description through a model of semantic attention, and significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.
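The semantic-attention idea summarized above — scoring concept proposals against the decoder state and fusing them as a weighted sum — can be sketched as follows. This is a minimal illustration, not the paper's exact parameterization: the bilinear scoring matrix `W` and the vector shapes are assumptions introduced here for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def semantic_attention_step(hidden, concept_embs, W):
    """One illustrative attention step: score each semantic concept
    proposal against the current RNN hidden state, normalize the
    scores with softmax, and fuse the concepts as a weighted sum.

    hidden:       (d,)   current decoder hidden state
    concept_embs: (k, d) embeddings of k concept proposals
    W:            (d, d) hypothetical bilinear scoring matrix
    """
    scores = concept_embs @ (W @ hidden)   # one relevance score per concept
    weights = softmax(scores)              # attention distribution over concepts
    context = weights @ concept_embs       # fused concept vector, shape (d,)
    return context, weights
```

In the paper's framing, a fused vector like `context` would then be combined with the hidden state and the word input at each decoding step; here it simply demonstrates the attend-and-fuse mechanism.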
Journal ArticleDOI
EnlightenGAN: Deep Light Enhancement Without Paired Supervision
Yifan Jiang,Xinyu Gong,Ding Liu,Yu Cheng,Chen Fang,Xiaohui Shen,Jianchao Yang,Pan Zhou,Zhangyang Wang +8 more
TL;DR: This paper proposes EnlightenGAN, a highly effective unsupervised generative adversarial network that can be trained without low/normal-light image pairs, yet generalizes very well to various real-world test images.
Posted Content
EnlightenGAN: Deep Light Enhancement without Paired Supervision
Yifan Jiang,Xinyu Gong,Ding Liu,Yu Cheng,Chen Fang,Xiaohui Shen,Jianchao Yang,Pan Zhou,Zhangyang Wang +8 more
TL;DR: This paper proposes a highly effective unsupervised generative adversarial network, dubbed EnlightenGAN, that can be trained without low/normal-light image pairs, yet proves to generalize very well on various real-world test images.
Proceedings Article
Universal Style Transfer via Feature Transforms
TL;DR: In this paper, a pair of feature transforms, whitening and coloring, is embedded into an image reconstruction network to directly match the feature covariance of the content image to that of a given style image, which shares a similar spirit with the optimization of the Gram-matrix-based cost in neural style transfer.
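The whitening and coloring transform described above has a compact closed form: whiten the content features so their covariance becomes the identity, then color them with the style covariance. The NumPy sketch below illustrates that math on flattened feature maps; the `(C, N)` layout and the `eps` regularizer are assumptions for this illustration, and the paper applies the transform to VGG features inside a trained encoder-decoder rather than to raw arrays.

```python
import numpy as np

def whitening_coloring_transform(content_feat, style_feat, eps=1e-5):
    """Sketch of the whitening-and-coloring transform (WCT).

    content_feat, style_feat: arrays of shape (C, N) -- C channels,
    N spatial positions (e.g. flattened feature maps).
    """
    # Whitening: center the content features and rescale along the
    # eigenbasis of their covariance so the result has ~identity covariance.
    fc = content_feat - content_feat.mean(axis=1, keepdims=True)
    cov_c = fc @ fc.T / (fc.shape[1] - 1)
    wc, vc = np.linalg.eigh(cov_c)
    whitened = vc @ np.diag((wc + eps) ** -0.5) @ vc.T @ fc

    # Coloring: impose the style covariance on the whitened features,
    # then re-add the style mean.
    ms = style_feat.mean(axis=1, keepdims=True)
    fs = style_feat - ms
    cov_s = fs @ fs.T / (fs.shape[1] - 1)
    ws, vs = np.linalg.eigh(cov_s)
    colored = vs @ np.diag((ws + eps) ** 0.5) @ vs.T @ whitened
    return colored + ms
```

The output keeps the content features' spatial arrangement while its first- and second-order statistics (mean and covariance) match those of the style features, which is exactly the "direct matching of feature covariance" the TL;DR refers to.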