Jianlong Fu
Researcher at Microsoft
Publications - 146
Citations - 8432
Jianlong Fu is an academic researcher at Microsoft who has contributed to research in topics including computer science and feature learning (computer vision). He has an h-index of 29 and has co-authored 114 publications receiving 4,119 citations. Previous affiliations of Jianlong Fu include the Chinese Academy of Sciences and the University of Science and Technology of China.
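For reference, the h-index quoted above is defined as the largest h such that the author has at least h papers with at least h citations each. A minimal sketch of that computation (the function name and sample citation counts are illustrative, not this author's data):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i          # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
```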
Papers
Proceedings Article
Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-Grained Image Recognition
TL;DR: This paper proposes a recurrent attention convolutional neural network (RA-CNN) that recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutually reinforced way.
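The coarse-to-fine idea behind recurrent attention can be illustrated very roughly: an attention map proposes the most discriminative region, which is cropped and zoomed for the next, finer scale. The sketch below is a numpy toy, not the paper's implementation (RA-CNN learns crop parameters with an attention proposal network and uses differentiable bilinear sampling; the names and the nearest-neighbor zoom here are illustrative assumptions):

```python
import numpy as np

def crop_and_zoom(image, attention, crop_frac=0.5):
    """Crop a square region around the attention peak and zoom it
    back to the original resolution (nearest-neighbor) -- a toy
    version of one coarse-to-fine attention step."""
    H, W = image.shape[:2]
    cy, cx = np.unravel_index(np.argmax(attention), attention.shape)
    # map attention-map coordinates to image coordinates
    cy = int(cy * H / attention.shape[0])
    cx = int(cx * W / attention.shape[1])
    half = int(min(H, W) * crop_frac / 2)
    y0 = np.clip(cy - half, 0, H - 2 * half)
    x0 = np.clip(cx - half, 0, W - 2 * half)
    crop = image[y0:y0 + 2 * half, x0:x0 + 2 * half]
    # nearest-neighbor zoom back to (H, W)
    ys = np.arange(H) * crop.shape[0] // H
    xs = np.arange(W) * crop.shape[1] // W
    return crop[np.ix_(ys, xs)]

img = np.arange(64).reshape(8, 8).astype(float)
att = np.zeros((8, 8)); att[2, 5] = 1.0   # hypothetical attention peak
zoomed = crop_and_zoom(img, att)
print(zoomed.shape)  # (8, 8)
```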
Proceedings Article
Learning Multi-attention Convolutional Neural Network for Fine-Grained Image Recognition
TL;DR: This paper proposes a novel part learning approach via a multi-attention convolutional neural network (MA-CNN), in which part generation and feature learning reinforce each other, achieving the best performance on three challenging published fine-grained datasets.
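One way to see how parts can emerge without part annotations: channels of a convolutional feature map tend to respond at consistent spatial locations, so channels whose responses peak near each other can be grouped into a part attention map. The following is a simplified numpy sketch of that channel-grouping intuition (MA-CNN learns the grouping end-to-end; the k-means clustering and all names here are assumptions for illustration):

```python
import numpy as np

def part_attention_maps(features, n_parts=2, seed=0):
    """Group channels whose responses peak at nearby locations and sum
    each group into one part attention map (simplified channel grouping)."""
    C, H, W = features.shape
    # peak location (row, col) of each channel's response
    peaks = np.array(
        [np.unravel_index(np.argmax(features[c]), (H, W)) for c in range(C)],
        dtype=float)
    # naive k-means over the peak coordinates
    rng = np.random.default_rng(seed)
    centers = peaks[rng.choice(C, n_parts, replace=False)]
    for _ in range(10):
        labels = np.argmin(((peaks[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_parts):
            if (labels == k).any():
                centers[k] = peaks[labels == k].mean(0)
    # each part map = sum of its group's channels, scaled to [0, 1]
    maps = np.stack([features[labels == k].sum(0) for k in range(n_parts)])
    maps -= maps.min(axis=(1, 2), keepdims=True)
    maps /= maps.max(axis=(1, 2), keepdims=True) + 1e-8
    return maps

feats = np.zeros((4, 6, 6))
feats[0, 1, 1] = feats[1, 1, 2] = 1.0   # channels peaking top-left
feats[2, 4, 4] = feats[3, 4, 5] = 1.0   # channels peaking bottom-right
maps = part_attention_maps(feats)
print(maps.shape)  # (2, 6, 6)
```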
Proceedings Article
Learning Texture Transformer Network for Image Super-Resolution
TL;DR: This paper proposes a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively; it achieves significant improvements over state-of-the-art approaches in both quantitative and qualitative evaluations.
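The query/key formulation above can be sketched as a cross-attention lookup: each low-resolution (LR) feature is a query, each reference (Ref) feature is a key, and the most relevant reference texture is transferred along with a confidence score. This is a minimal numpy illustration of that idea, not the TTSR implementation (function names, shapes, and the cosine-similarity choice are assumptions):

```python
import numpy as np

def texture_transfer(q_lr, k_ref, v_ref):
    """Hard attention: for each LR feature (query), pick the most similar
    Ref feature (key) and transfer the corresponding texture (value)."""
    # normalized inner product = cosine similarity between queries and keys
    qn = q_lr / (np.linalg.norm(q_lr, axis=1, keepdims=True) + 1e-8)
    kn = k_ref / (np.linalg.norm(k_ref, axis=1, keepdims=True) + 1e-8)
    sim = qn @ kn.T                 # (n_queries, n_keys) relevance map
    best = sim.argmax(axis=1)       # hard-attention index per query
    conf = sim.max(axis=1)          # confidence used to weight the transfer
    return v_ref[best], conf

rng = np.random.default_rng(0)
q = rng.normal(size=(5, 8))     # 5 LR positions, 8-dim features
k = rng.normal(size=(12, 8))    # 12 reference positions
v = rng.normal(size=(12, 16))   # richer texture features at those positions
t, conf = texture_transfer(q, k, v)
print(t.shape, conf.shape)  # (5, 16) (5,)
```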
Proceedings Article
The Seventh Visual Object Tracking VOT2019 Challenge Results
Matej Kristan, Amanda Berg, Linyu Zheng, Litu Rout, Luc Van Gool, Luca Bertinetto, Martin Danelljan, Matteo Dunnhofer, Meng Ni, Min Young Kim, Ming Tang, Ming-Hsuan Yang, Abdelrahman Eldesokey, Naveen Paluru, Niki Martinel, Pengfei Xu, Pengfei Zhang, Pengkun Zheng, Pengyu Zhang, Philip H. S. Torr, Qi Zhang, Qiang Wang, Qing Guo, Radu Timofte, Jani Käpylä, Rama Krishna Sai Subrahmanyam Gorthi, Richard M. Everson, Ruize Han, Ruohan Zhang, Shan You, Shaochuan Zhao, Shengwei Zhao, Shihu Li, Shikun Li, Shiming Ge, Gustavo Fernandez, Shuai Bai, Shuosen Guan, Tengfei Xing, Tianyang Xu, Tianyu Yang, Ting Zhang, Tomas Vojir, Wei Feng, Weiming Hu, Weizhao Wang, Abel Gonzalez-Garcia, Wenjie Tang, Wenjun Zeng, Wenyu Liu, Xi Chen, Xi Qiu, Xiang Bai, Xiaojun Wu, Xiaoyun Yang, Xier Chen, Xin Li, Alireza Memarmoghadam, Xing Sun, Xingyu Chen, Xinmei Tian, Xu Tang, Xue-Feng Zhu, Yan Huang, Yanan Chen, Yanchao Lian, Yang Gu, Yang Liu, Andong Lu, Chen Yanjie, Yi Zhang, Yinda Xu, Yingming Wang, Yingping Li, Yu Zhou, Yuan Dong, Yufei Xu, Yunhua Zhang, Yunkun Li, Anfeng He, Zeyu Wang, Zhao Luo, Zhaoliang Zhang, Zhen-Hua Feng, Zhenyu He, Zhichao Song, Zhihao Chen, Zhipeng Zhang, Zhirong Wu, Zhiwei Xiong, Zhongjian Huang, Anton Varfolomieiev, Zhu Teng, Zihan Ni, Antoni Chan, Jiri Matas, Ardhendu Shekhar Tripathi, Arnold W. M. Smeulders, Bala Suraj Pedasingu, Bao Xin Chen, Baopeng Zhang, Baoyuan Wu, Bi Li, Bin He, Bin Yan, Bing Bai, Ales Leonardis, Bing Li, Bo Li, Byeong Hak Kim, Chao Ma, Chen Fang, Chen Qian, Cheng Chen, Chenglong Li, Chengquan Zhang, Chi-Yi Tsai, Michael Felsberg, Chong Luo, Christian Micheloni, Chunhui Zhang, Dacheng Tao, Deepak K. Gupta, Dejia Song, Dong Wang, Efstratios Gavves, Eunu Yi, Fahad Shahbaz Khan, Roman Pflugfelder, Fangyi Zhang, Fei Wang, Fei Zhao, George De Ath, Goutam Bhat, Guangqi Chen, Guangting Wang, Guoxuan Li, Hakan Cevikalp, Hao Du, Joni-Kristian Kamarainen, Haojie Zhao, Hasan Saribas, Ho Min Jung, Hongliang Bai, Hongyuan Yu, Houwen Peng, Huchuan Lu, Hui Li, Jiakun Li, Luka Čehovin Zajc, Jianhua Li, Jianlong Fu, Jie Chen, Jie Gao, Jie Zhao, Jin Tang, Jing Li, Jingjing Wu, Jingtuo Liu, Jinqiao Wang, Ondrej Drbohlav, Jinqing Qi, Jinyue Zhang, John K. Tsotsos, Jong Hyuk Lee, Joost van de Weijer, Josef Kittler, Jun Ha Lee, Junfei Zhuang, Kangkai Zhang, Kangkang Wang, Alan Lukezic, Kenan Dai, Lei Chen, Lei Liu, Leida Guo, Li Zhang, Liang Wang, Liangliang Wang, Lichao Zhang, Lijun Wang, Lijun Zhou, +179 more
TL;DR: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative; results of 81 trackers are presented, many of them state-of-the-art trackers published at major computer vision conferences or in journals in recent years.
Proceedings Article
Learning Pyramid-Context Encoder Network for High-Quality Image Inpainting
TL;DR: This paper proposes a Pyramid-context Encoder Network for image inpainting with deep generative models, built upon a U-Net structure with three tailored components.
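A core operation in attention-based inpainting encoders is filling each masked (hole) position with features borrowed from the most similar visible positions. The toy numpy sketch below shows that lookup in its simplest form, under stated assumptions (flattened feature positions, cosine similarity, a single hard match per hole); the real network performs learned, patch-wise attention transfer across pyramid levels, and all names here are hypothetical:

```python
import numpy as np

def attention_fill(features, mask):
    """Fill each masked position with the most similar unmasked feature,
    a simplified stand-in for patch-wise attention transfer."""
    filled = features.copy()
    known = features[~mask]   # features at visible (unmasked) positions
    kn = known / (np.linalg.norm(known, axis=1, keepdims=True) + 1e-8)
    for i in np.where(mask)[0]:
        q = features[i] / (np.linalg.norm(features[i]) + 1e-8)
        filled[i] = known[(kn @ q).argmax()]   # copy the best-matching feature
    return filled

# last position is the hole; its coarse feature resembles the third one
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.8, 0.2]])
mask = np.array([False, False, False, True])
out = attention_fill(feats, mask)
print(out[3])  # [0.9 0.1]
```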