Bohyung Han
Researcher at Seoul National University
Publications - 185
Citations - 20,239
Bohyung Han is an academic researcher at Seoul National University. His research focuses on computer science, particularly visual (video) tracking. He has an h-index of 49 and has co-authored 161 publications receiving 16,187 citations. Previous affiliations of Bohyung Han include Google and Pohang University of Science and Technology.
Papers
Proceedings ArticleDOI
Learning Deconvolution Network for Semantic Segmentation
TL;DR: A novel semantic segmentation algorithm that learns a deep deconvolution network on top of the convolutional layers adopted from the VGG 16-layer net, demonstrating outstanding performance on the PASCAL VOC 2012 dataset.
Posted Content
Learning Deconvolution Network for Semantic Segmentation
TL;DR: In this paper, a deconvolution network is proposed to identify pixel-wise class labels and predict segmentation masks for an input image; the final semantic segmentation map is constructed by combining the results from all proposals.
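The core operation behind the deconvolution network described above is transposed ("deconvolution") convolution, which upsamples a coarse feature map back toward input resolution. The following is a minimal single-channel NumPy sketch of that operation only, not the paper's full architecture; the function name and dimensions are illustrative.

```python
import numpy as np

def transposed_conv2d(x, w, stride=2):
    """Naive 2D transposed convolution (single channel, no padding).

    Each input activation "paints" a scaled copy of the kernel onto an
    enlarged output grid, which is how a deconvolution layer upsamples.
    """
    H, W = x.shape
    k = w.shape[0]
    out = np.zeros(((H - 1) * stride + k, (W - 1) * stride + k))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * w
    return out

# A 2x2 map with a 2x2 kernel at stride 2 upsamples to 4x4.
up = transposed_conv2d(np.ones((2, 2)), np.ones((2, 2)), stride=2)
```

With stride equal to the kernel size, the painted kernel copies tile the output without overlap; smaller strides make them overlap and sum, which is what produces smooth learned upsampling in practice.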
Proceedings ArticleDOI
Learning Multi-domain Convolutional Neural Networks for Visual Tracking
Hyeonseob Nam, Bohyung Han +1 more
TL;DR: A novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network, pretrained on a large set of videos with tracking ground-truths to obtain a generic target representation.
Posted Content
Learning Multi-Domain Convolutional Neural Networks for Visual Tracking
Hyeonseob Nam, Bohyung Han +1 more
TL;DR: In this paper, the authors propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN), pretraining the CNN on a large set of videos with tracking ground-truths to obtain a generic target representation.
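The multi-domain idea summarized above (shared layers learn a generic target representation, while each training video gets its own target-vs-background branch) can be sketched in a few lines of NumPy. This is a toy illustration of the training structure only, assuming a single shared linear layer and per-domain logistic heads; all names, dimensions, and learning rates are made up, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 training videos (domains), 8-d inputs, 4-d shared features.
n_domains, in_dim, feat_dim = 3, 8, 4
W_shared = rng.normal(size=(in_dim, feat_dim)) * 0.1          # shared layers
heads = [rng.normal(size=(feat_dim, 1)) * 0.1 for _ in range(n_domains)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, d, lr=0.1):
    """One SGD step on domain d: updates the shared weights and ONLY head d."""
    global W_shared
    f = x @ W_shared                      # shared, generic representation
    p = sigmoid(f @ heads[d])             # domain-specific target score
    g = (p - y) / len(x)                  # logistic-loss gradient at the logits
    grad_W = x.T @ (g @ heads[d].T)       # backprop through the shared layer
    heads[d] -= lr * f.T @ g              # this domain's branch only
    W_shared -= lr * grad_W               # shared layers learn from every domain
```

Iterating `train_step` over domains in turn mirrors the key design choice: binary target/background labels conflict across videos, so they are isolated in per-domain branches while the shared layers accumulate what is common to all of them.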
Book ChapterDOI
The Visual Object Tracking VOT2016 Challenge Results
Matej Kristan, Ales Leonardis, Jiří Matas, Michael Felsberg, Roman Pflugfelder, Luka Cehovin, Tomas Vojir, Gustav Häger, Alan Lukežič, Gustavo Fernandez, Abhinav Gupta, Alfredo Petrosino, Alireza Memarmoghadam, Alvaro Garcia-Martin, Andres Solis Montero, Andrea Vedaldi, Andreas Robinson, Andy J. Ma, Anton Varfolomieiev, A. Aydin Alatan, Aykut Erdem, Bernard Ghanem, Bin Liu, Bohyung Han, Brais Martinez, Chang-Ming Chang, Changsheng Xu, Chong Sun, Daijin Kim, Dapeng Chen, Dawei Du, Deepak Mishra, Dit-Yan Yeung, Erhan Gundogdu, Erkut Erdem, Fahad Shahbaz Khan, Fatih Porikli, Fei Zhao, Filiz Bunyak, Francesco Battistone, Gao Zhu, Giorgio Roffo, Gorthi R. K. Sai Subrahmanyam, Guilherme Sousa Bastos, Guna Seetharaman, Henry Medeiros, Hongdong Li, Honggang Qi, Horst Bischof, Horst Possegger, Huchuan Lu, Hyemin Lee, Hyeonseob Nam, Hyung Jin Chang, Isabela Drummond, Jack Valmadre, Jae-chan Jeong, Jaeil Cho, Jae-Yeong Lee, Jianke Zhu, Jiayi Feng, Jin Gao, Jin-Young Choi, Jingjing Xiao, Ji-Wan Kim, Jiyeoup Jeong, João F. Henriques, Jochen Lang, Jongwon Choi, José M. Martínez, Junliang Xing, Junyu Gao, Kannappan Palaniappan, Karel Lebeda, Ke Gao, Krystian Mikolajczyk, Lei Qin, Lijun Wang, Longyin Wen, Luca Bertinetto, Madan Kumar Rapuru, Mahdieh Poostchi, Mario Edoardo Maresca, Martin Danelljan, Matthias Mueller, Mengdan Zhang, Michael Arens, Michel Valstar, Ming Tang, Mooyeol Baek, Muhammad Haris Khan, Naiyan Wang, Nana Fan, Noor M. Al-Shakarji, Ondrej Miksik, Osman Akin, Payman Moallem, Pedro Senna, Philip H. S. Torr, Pong C. Yuen, Qingming Huang, Rafael Martin-Nieto, Rengarajan Pelapur, Richard Bowden, Robert Laganiere, Rustam Stolkin, Ryan Walsh, Sebastian B. Krah, Shengkun Li, Shengping Zhang, Shizeng Yao, Simon Hadfield, Simone Melzi, Siwei Lyu, Siyi Li, Stefan Becker, Stuart Golodetz, Sumithra Kakanuru, Sunglok Choi, Tao Hu, Thomas Mauthner, Tianzhu Zhang, Tony P. Pridmore, Vincenzo Santopietro, Weiming Hu, Wenbo Li, Wolfgang Hübner, Xiangyuan Lan, Xiaomeng Wang, Xin Li, Yang Li, Yiannis Demiris, Yifan Wang, Yuankai Qi, Zejian Yuan, Zexiong Cai, Zhan Xu, Zhenyu He, Zhizhen Chi +140 more
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.