Abel Gonzalez-Garcia
Researcher at Autonomous University of Barcelona
Publications - 39
Citations - 2783
Abel Gonzalez-Garcia is an academic researcher at the Autonomous University of Barcelona. He has contributed to research on topics including convolutional neural networks and image translation. He has an h-index of 18 and has co-authored 38 publications receiving 1,664 citations. His previous affiliations include the University at Buffalo and the University of Edinburgh.
Papers
Book Chapter
The sixth visual object tracking VOT2018 challenge results
Matej Kristan,Ales Leonardis,Jiří Matas,Michael Felsberg,Roman Pflugfelder,Roman Pflugfelder,Luka Čehovin Zajc,Tomas Vojir,Goutam Bhat,Alan Lukežič,Abdelrahman Eldesokey,Gustavo Fernandez,Alvaro Garcia-Martin,Álvaro Iglesias-Arias,A. Aydin Alatan,Abel Gonzalez-Garcia,Alfredo Petrosino,Alireza Memarmoghadam,Andrea Vedaldi,Andrej Muhič,Anfeng He,Arnold W. M. Smeulders,Asanka G. Perera,Bo Li,Boyu Chen,Changick Kim,Changsheng Xu,Changzhen Xiong,Cheng Tian,Chong Luo,Chong Sun,Cong Hao,Daijin Kim,Deepak Mishra,Deming Chen,Dong Wang,Dongyoon Wee,Efstratios Gavves,Erhan Gundogdu,Erik Velasco-Salido,Fahad Shahbaz Khan,Fan Yang,Fei Zhao,Feng Li,Francesco Battistone,George De Ath,Gorthi R. K. Sai Subrahmanyam,Guilherme Sousa Bastos,Haibin Ling,Hamed Kiani Galoogahi,Hankyeol Lee,Haojie Li,Haojie Zhao,Heng Fan,Honggang Zhang,Horst Possegger,Houqiang Li,Huchuan Lu,Hui Zhi,Huiyun Li,Hyemin Lee,Hyung Jin Chang,Isabela Drummond,Jack Valmadre,Jaime Spencer Martin,Javaan Chahl,Jin-Young Choi,Jing Li,Jinqiao Wang,Jinqing Qi,Jinyoung Sung,Joakim Johnander,João F. Henriques,Jongwon Choi,Joost van de Weijer,Jorge Rodríguez Herranz,Jorge Rodríguez Herranz,José M. Martínez,Josef Kittler,Junfei Zhuang,Junyu Gao,Klemen Grm,Lichao Zhang,Lijun Wang,Lingxiao Yang,Litu Rout,Liu Si,Luca Bertinetto,Lutao Chu,Manqiang Che,Mario Edoardo Maresca,Martin Danelljan,Ming-Hsuan Yang,Mohamed H. Abdelpakey,Mohamed Shehata,Myunggu Kang,Namhoon Lee,Ning Wang,Ondrej Miksik,Payman Moallem,Pablo Vicente-Moñivar,Pedro Senna,Peixia Li,Philip H. S. Torr,Priya Mariam Raju,Qian Ruihe,Qiang Wang,Qin Zhou,Qing Guo,Rafael Martin-Nieto,Rama Krishna Sai Subrahmanyam Gorthi,Ran Tao,Richard Bowden,Richard M. Everson,Runling Wang,Sangdoo Yun,Seokeon Choi,Sergio Vivas,Shuai Bai,Shuangping Huang,Sihang Wu,Simon Hadfield,Siwen Wang,Stuart Golodetz,Tang Ming,Tianyang Xu,Tianzhu Zhang,Tobias Fischer,Vincenzo Santopietro,Vitomir Struc,Wang Wei,Wangmeng Zuo,Wei Feng,Wei Wu,Wei Zou,Weiming Hu,Wengang Zhou,Wenjun Zeng,Xiaofan Zhang,Xiaohe Wu,Xiaojun Wu,Xinmei Tian,Yan Li,Yan Lu,Yee Wei Law,Yi Wu,Yi Wu,Yiannis Demiris,Yicai Yang,Yifan Jiao,Yuhong Li,Yuhong Li,Yunhua Zhang,Yuxuan Sun,Zheng Zhang,Zheng Zhu,Zhen-Hua Feng,Zhihui Wang,Zhiqun He +158 more
TL;DR: The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results for over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years.
Proceedings Article
The Seventh Visual Object Tracking VOT2019 Challenge Results
Matej Kristan,Amanda Berg,Linyu Zheng,Litu Rout,Luc Van Gool,Luca Bertinetto,Martin Danelljan,Matteo Dunnhofer,Meng Ni,Min Young Kim,Ming Tang,Ming-Hsuan Yang,Abdelrahman Eldesokey,Naveen Paluru,Niki Martinel,Pengfei Xu,Pengfei Zhang,Pengkun Zheng,Pengyu Zhang,Philip H. S. Torr,Qi Zhang,Qiang Wang,Qing Guo,Radu Timofte,Jani Käpylä,Rama Krishna Sai Subrahmanyam Gorthi,Richard M. Everson,Ruize Han,Ruohan Zhang,Shan You,Shaochuan Zhao,Shengwei Zhao,Shihu Li,Shikun Li,Shiming Ge,Gustavo Fernandez,Shuai Bai,Shuosen Guan,Tengfei Xing,Tianyang Xu,Tianyu Yang,Ting Zhang,Tomas Vojir,Wei Feng,Weiming Hu,Weizhao Wang,Abel Gonzalez-Garcia,Wenjie Tang,Wenjun Zeng,Wenyu Liu,Xi Chen,Xi Qiu,Xiang Bai,Xiaojun Wu,Xiaoyun Yang,Xier Chen,Xin Li,Alireza Memarmoghadam,Xing Sun,Xingyu Chen,Xinmei Tian,Xu Tang,Xue-Feng Zhu,Yan Huang,Yanan Chen,Yanchao Lian,Yang Gu,Yang Liu,Andong Lu,Chen Yanjie,Yi Zhang,Yinda Xu,Yingming Wang,Yingping Li,Yu Zhou,Yuan Dong,Yufei Xu,Yunhua Zhang,Yunkun Li,Anfeng He,Zeyu Wang,Zhao Luo,Zhaoliang Zhang,Zhen-Hua Feng,Zhenyu He,Zhichao Song,Zhihao Chen,Zhipeng Zhang,Zhirong Wu,Zhiwei Xiong,Zhongjian Huang,Anton Varfolomieiev,Zhu Teng,Zihan Ni,Antoni Chan,Jiri Matas,Ardhendu Shekhar Tripathi,Arnold W. M. Smeulders,Bala Suraj Pedasingu,Bao Xin Chen,Baopeng Zhang,Baoyuan Wu,Bi Li,Bin He,Bin Yan,Bing Bai,Ales Leonardis,Bing Li,Bo Li,Byeong Hak Kim,Chao Ma,Chen Fang,Chen Qian,Cheng Chen,Chenglong Li,Chengquan Zhang,Chi-Yi Tsai,Michael Felsberg,Chong Luo,Christian Micheloni,Chunhui Zhang,Dacheng Tao,Deepak K. Gupta,Dejia Song,Dong Wang,Efstratios Gavves,Eunu Yi,Fahad Shahbaz Khan,Roman Pflugfelder,Fangyi Zhang,Fei Wang,Fei Zhao,George De Ath,Goutam Bhat,Guangqi Chen,Guangting Wang,Guoxuan Li,Hakan Cevikalp,Hao Du,Joni-Kristian Kamarainen,Haojie Zhao,Hasan Saribas,Ho Min Jung,Hongliang Bai,Hongyuan Yu,Houwen Peng,Huchuan Lu,Hui Li,Jiakun Li,Luka Čehovin Zajc,Jianhua Li,Jianlong Fu,Jie Chen,Jie Gao,Jie Zhao,Jin Tang,Jing Li,Jingjing Wu,Jingtuo Liu,Jinqiao Wang,Ondrej Drbohlav,Jinqing Qi,Jinyue Zhang,John K. Tsotsos,Jong Hyuk Lee,Joost van de Weijer,Josef Kittler,Jun Ha Lee,Junfei Zhuang,Kangkai Zhang,Kangkang Wang,Alan Lukezic,Kenan Dai,Lei Chen,Lei Liu,Leida Guo,Li Zhang,Liang Wang,Liangliang Wang,Lichao Zhang,Lijun Wang,Lijun Zhou +179 more
TL;DR: The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results for 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years.
Proceedings Article
Learning the Model Update for Siamese Trackers
TL;DR: This work uses a convolutional neural network, called UpdateNet, which, given the initial template, the accumulated template, and the template of the current frame, estimates the optimal template for the next frame. The generality of the approach is demonstrated by applying it to two Siamese trackers.
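The template-update idea can be contrasted with the simple linear (running-average) update that such trackers commonly use as a fixed rule. The plain-Python sketch below is only an illustrative toy, not the paper's implementation: representing templates as flat feature vectors and the blending rate `gamma` are assumptions.

```python
def linear_update(accumulated, current, gamma=0.01):
    """Running-average template update: the fixed hand-crafted rule that a
    learned update network (e.g. UpdateNet) replaces with a function
    new = f(initial, accumulated, current) trained on tracking data.
    Templates are modeled here as flat lists of feature values."""
    return [(1 - gamma) * a + gamma * c for a, c in zip(accumulated, current)]

# Toy usage: the template gradually drifts toward recent frame features.
template = [1.0, 0.0]                       # hypothetical initial template
for frame_features in ([0.0, 1.0], [0.0, 1.0]):
    template = linear_update(template, frame_features, gamma=0.5)
print(template)  # blend of the initial template and recent frames
```

A fixed `gamma` cannot adapt to occlusion or appearance change, which is the motivation for learning the update function instead.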
Proceedings Article
Image-to-image translation for cross-domain disentanglement
TL;DR: This paper achieves better results for translation on challenging datasets, as well as for cross-domain retrieval on realistic datasets, and compares the model to the state of the art in multi-modal image translation.
Book Chapter
Transferring GANs: generating images from limited data
Yaxing Wang,Chenshen Wu,Luis Herranz,Joost van de Weijer,Abel Gonzalez-Garcia,Bogdan Raducanu +5 more
TL;DR: The results show that using knowledge from pretrained networks can shorten convergence time and can significantly improve the quality of the generated images, especially when the target data is limited; they also suggest that density may be more important than diversity.
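The core finding, that initializing from pretrained parameters shortens convergence, can be illustrated with a framework-free toy: gradient descent on a one-dimensional quadratic loss, started far from the optimum ("from scratch") versus near it ("warm-started from a pretrained model"). This is only an analogy, not GAN training code; the loss, learning rate, and tolerance are all assumptions for illustration.

```python
def steps_to_converge(theta, target=3.0, lr=0.1, tol=1e-3):
    """Gradient descent on the toy loss (theta - target)**2; returns how many
    iterations are needed to get within tol of the optimum."""
    steps = 0
    while abs(theta - target) > tol:
        theta -= lr * 2 * (theta - target)  # gradient of (theta - target)**2
        steps += 1
    return steps

random_init = steps_to_converge(theta=50.0)  # analogue of training from scratch
warm_start = steps_to_converge(theta=4.0)    # analogue of a pretrained start
print(random_init, warm_start)  # the warm start needs far fewer steps
```

The same intuition carries over to transferring a pretrained generator and discriminator: the optimization begins closer to a good solution, which matters most when the limited target data cannot support long training.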