Mooyeol Baek
Researcher at Pohang University of Science and Technology
Publications - 9
Citations - 1803
Mooyeol Baek is an academic researcher from Pohang University of Science and Technology. The author has contributed to research on video tracking and convolutional neural networks. The author has an h-index of 7 and has co-authored 9 publications receiving 1467 citations.
Papers
Book Chapter
The Visual Object Tracking VOT2016 Challenge Results
Matej Kristan,Ales Leonardis,Jiří Matas,Michael Felsberg,Roman Pflugfelder,Luka Cehovin,Tomas Vojir,Gustav Häger,Alan Lukežič,Gustavo Fernandez,Abhinav Gupta,Alfredo Petrosino,Alireza Memarmoghadam,Alvaro Garcia-Martin,Andres Solis Montero,Andrea Vedaldi,Andreas Robinson,Andy J. Ma,Anton Varfolomieiev,A. Aydin Alatan,Aykut Erdem,Bernard Ghanem,Bin Liu,Bohyung Han,Brais Martinez,Chang-Ming Chang,Changsheng Xu,Chong Sun,Daijin Kim,Dapeng Chen,Dawei Du,Deepak Mishra,Dit-Yan Yeung,Erhan Gundogdu,Erkut Erdem,Fahad Shahbaz Khan,Fatih Porikli,Fatih Porikli,Fei Zhao,Filiz Bunyak,Francesco Battistone,Gao Zhu,Giorgio Roffo,Gorthi R. K. Sai Subrahmanyam,Guilherme Sousa Bastos,Guna Seetharaman,Henry Medeiros,Hongdong Li,Honggang Qi,Horst Bischof,Horst Possegger,Huchuan Lu,Hyemin Lee,Hyeonseob Nam,Hyung Jin Chang,Isabela Drummond,Jack Valmadre,Jae-chan Jeong,Jaeil Cho,Jae-Yeong Lee,Jianke Zhu,Jiayi Feng,Jin Gao,Jin-Young Choi,Jingjing Xiao,Ji-Wan Kim,Jiyeoup Jeong,João F. Henriques,Jochen Lang,Jongwon Choi,José M. Martínez,Junliang Xing,Junyu Gao,Kannappan Palaniappan,Karel Lebeda,Ke Gao,Krystian Mikolajczyk,Lei Qin,Lijun Wang,Longyin Wen,Luca Bertinetto,Madan Kumar Rapuru,Mahdieh Poostchi,Mario Edoardo Maresca,Martin Danelljan,Matthias Mueller,Mengdan Zhang,Michael Arens,Michel Valstar,Ming Tang,Mooyeol Baek,Muhammad Haris Khan,Naiyan Wang,Nana Fan,Noor M. Al-Shakarji,Ondrej Miksik,Osman Akin,Payman Moallem,Pedro Senna,Philip H. S. Torr,Pong C. Yuen,Qingming Huang,Qingming Huang,Rafael Martin-Nieto,Rengarajan Pelapur,Richard Bowden,Robert Laganiere,Rustam Stolkin,Ryan Walsh,Sebastian B. Krah,Shengkun Li,Shengping Zhang,Shizeng Yao,Simon Hadfield,Simone Melzi,Siwei Lyu,Siyi Li,Stefan Becker,Stuart Golodetz,Sumithra Kakanuru,Sunglok Choi,Tao Hu,Thomas Mauthner,Tianzhu Zhang,Tony P. 
Pridmore,Vincenzo Santopietro,Weiming Hu,Wenbo Li,Wolfgang Hübner,Xiangyuan Lan,Xiaomeng Wang,Xin Li,Yang Li,Yiannis Demiris,Yifan Wang,Yuankai Qi,Zejian Yuan,Zexiong Cai,Zhan Xu,Zhenyu He,Zhizhen Chi +140 more
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.
Posted Content
Modeling and Propagating CNNs in a Tree Structure for Visual Tracking.
TL;DR: An online visual tracking algorithm that manages multiple target appearance models in a tree structure, using convolutional neural networks (CNNs) to represent target appearances; multiple CNNs collaborate to estimate target states and to determine the desirable paths for online model updates in the tree.
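The tree-structured model management described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the class name `ModelTree`, the per-node reliability scores, and the path-scoring rule are all assumptions made for illustration; real nodes would hold CNN snapshots rather than bare scores.

```python
class ModelTree:
    """Hypothetical sketch of managing model snapshots in a tree.

    Each node stands in for a CNN appearance-model snapshot and carries a
    reliability score; the tracker follows the most reliable root-to-leaf
    path when choosing which model to update online.
    """

    def __init__(self):
        # node 0 is the root (initial model), with full reliability
        self.nodes = {0: {"parent": None, "score": 1.0}}
        self.next_id = 1

    def add_snapshot(self, parent_id, score):
        # branch a new model snapshot off an existing node
        nid = self.next_id
        self.nodes[nid] = {"parent": parent_id, "score": score}
        self.next_id += 1
        return nid

    def best_path(self):
        # return the leaf whose root-to-leaf path accumulates the
        # highest total reliability (illustrative scoring rule)
        def path_score(nid):
            total = 0.0
            while nid is not None:
                total += self.nodes[nid]["score"]
                nid = self.nodes[nid]["parent"]
            return total

        leaves = [n for n in self.nodes
                  if not any(v["parent"] == n for v in self.nodes.values())]
        return max(leaves, key=path_score)
```

For example, after branching a reliable snapshot and a drifting one off the root, `best_path()` selects the leaf descending through the reliable branch.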
Proceedings Article
Multi-object Tracking with Quadruplet Convolutional Neural Networks
TL;DR: This work proposes Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses and employ a multi-task loss to jointly learn object association and bounding box regression for better localization.
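A quadruplet loss extends the familiar triplet margin loss with a second margin term over an extra negative sample. The sketch below is a generic quadruplet-style metric loss, not the paper's exact formulation: the function name, the squared-distance form, and the margin values are assumptions for illustration.

```python
import numpy as np

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    """Illustrative quadruplet-style metric loss (hypothetical
    simplification; margins and distance choice are assumptions).

    Term 1 is the usual triplet margin: pull the positive closer to the
    anchor than negative1. Term 2 adds a second constraint against the
    distance between two negatives from different identities, further
    enlarging inter-class gaps.
    """
    d_ap = np.sum((anchor - positive) ** 2)      # anchor-positive distance
    d_an = np.sum((anchor - negative1) ** 2)     # anchor-negative distance
    d_nn = np.sum((negative1 - negative2) ** 2)  # negative-negative distance
    return (max(d_ap - d_an + margin1, 0.0)
            + max(d_ap - d_nn + margin2, 0.0))
```

With well-separated embeddings the loss vanishes; it grows as the positive drifts away from the anchor or the negatives collapse together.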
Book Chapter
Real-Time MDNet
TL;DR: This work presents a fast and accurate visual tracking algorithm based on the multi-domain convolutional neural network (MDNet) that accelerates the feature extraction procedure and learns more discriminative models for instance classification; it enhances the representation quality of target and background by maintaining a high-resolution feature map with a large receptive field per activation.
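The multi-domain idea behind MDNet can be sketched as a shared feature extractor plus one binary (target vs. background) head per training sequence, where only the shared part transfers across domains. This is a minimal illustrative sketch, assuming linear layers in place of the actual convolutional backbone; the class name `MultiDomainClassifier` and all dimensions are assumptions, and the real method additionally uses fast RoI-based feature extraction that is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiDomainClassifier:
    """Hypothetical sketch of multi-domain learning: one shared feature
    layer plus one per-domain binary head (target vs. background)."""

    def __init__(self, in_dim, feat_dim, n_domains):
        # shared representation, trained across all sequences (domains)
        self.shared = rng.standard_normal((in_dim, feat_dim)) * 0.1
        # one lightweight fg/bg head per training sequence
        self.heads = [rng.standard_normal((feat_dim, 2)) * 0.1
                      for _ in range(n_domains)]

    def score(self, x, domain):
        feat = np.maximum(x @ self.shared, 0.0)  # shared features (ReLU)
        return feat @ self.heads[domain]         # domain-specific logits
```

At test time on a new video, the shared part is kept and a fresh head is trained online for the new target, which is what makes the shared representation the transferable component.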
Posted Content
Real-Time MDNet
TL;DR: In this paper, a multi-domain convolutional neural network (MDNet) is proposed that accelerates the feature extraction procedure and learns more discriminative models for instance classification; it enhances the representation quality of target and background by maintaining a high-resolution feature map with a large receptive field per activation.