Junliang Xing
Researcher at Chinese Academy of Sciences
Publications - 155
Citations - 14447
Junliang Xing is an academic researcher at the Chinese Academy of Sciences. His research focuses on video tracking and object detection. He has an h-index of 43 and has co-authored 155 publications that have received 10,175 citations. His previous affiliations include the Center for Excellence in Education and Tsinghua University.
Papers
Proceedings ArticleDOI
SiamRPN++: Evolution of Siamese Visual Tracking With Very Deep Networks
TL;DR: This work shows that the remaining accuracy gap of Siamese trackers stems from their lack of strict translation invariance, and proposes a new model architecture that performs depth-wise and layer-wise aggregations, which not only improves accuracy but also reduces model size.
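The depth-wise aggregation mentioned above correlates each channel of the template features with the matching channel of the search-region features. The sketch below (plain Python, hypothetical function name, nested lists standing in for real backbone feature maps) illustrates the operation only, not the paper's implementation:

```python
def depthwise_xcorr(z, x):
    """Depth-wise cross-correlation: channel c of template `z` is slid over
    channel c of search feature `x`, producing one response map per channel.
    z: C lists of hz x wz values; x: C lists of hx x wx values.
    Returns C response maps of size (hx-hz+1) x (wx-wz+1)."""
    hz, wz = len(z[0]), len(z[0][0])
    hx, wx = len(x[0]), len(x[0][0])
    out = []
    for zc, xc in zip(z, x):  # each channel is correlated independently
        resp = [[sum(zc[i][j] * xc[u + i][v + j]
                     for i in range(hz) for j in range(wz))
                 for v in range(wx - wz + 1)]
                for u in range(hx - hz + 1)]
        out.append(resp)
    return out
```

In a deep-learning framework this corresponds to a grouped convolution with one group per channel, which keeps the correlation head lightweight compared with a full cross-channel correlation.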
Book ChapterDOI
The Visual Object Tracking VOT2016 Challenge Results
Matej Kristan, Ales Leonardis, Jiří Matas, Michael Felsberg, Roman Pflugfelder, Luka Cehovin, Tomas Vojir, Gustav Häger, Alan Lukežič, Gustavo Fernandez, Abhinav Gupta, Alfredo Petrosino, Alireza Memarmoghadam, Alvaro Garcia-Martin, Andres Solis Montero, Andrea Vedaldi, Andreas Robinson, Andy J. Ma, Anton Varfolomieiev, A. Aydin Alatan, Aykut Erdem, Bernard Ghanem, Bin Liu, Bohyung Han, Brais Martinez, Chang-Ming Chang, Changsheng Xu, Chong Sun, Daijin Kim, Dapeng Chen, Dawei Du, Deepak Mishra, Dit-Yan Yeung, Erhan Gundogdu, Erkut Erdem, Fahad Shahbaz Khan, Fatih Porikli, Fei Zhao, Filiz Bunyak, Francesco Battistone, Gao Zhu, Giorgio Roffo, Gorthi R. K. Sai Subrahmanyam, Guilherme Sousa Bastos, Guna Seetharaman, Henry Medeiros, Hongdong Li, Honggang Qi, Horst Bischof, Horst Possegger, Huchuan Lu, Hyemin Lee, Hyeonseob Nam, Hyung Jin Chang, Isabela Drummond, Jack Valmadre, Jae-chan Jeong, Jaeil Cho, Jae-Yeong Lee, Jianke Zhu, Jiayi Feng, Jin Gao, Jin-Young Choi, Jingjing Xiao, Ji-Wan Kim, Jiyeoup Jeong, João F. Henriques, Jochen Lang, Jongwon Choi, José M. Martínez, Junliang Xing, Junyu Gao, Kannappan Palaniappan, Karel Lebeda, Ke Gao, Krystian Mikolajczyk, Lei Qin, Lijun Wang, Longyin Wen, Luca Bertinetto, Madan Kumar Rapuru, Mahdieh Poostchi, Mario Edoardo Maresca, Martin Danelljan, Matthias Mueller, Mengdan Zhang, Michael Arens, Michel Valstar, Ming Tang, Mooyeol Baek, Muhammad Haris Khan, Naiyan Wang, Nana Fan, Noor M. Al-Shakarji, Ondrej Miksik, Osman Akin, Payman Moallem, Pedro Senna, Philip H. S. Torr, Pong C. Yuen, Qingming Huang, Rafael Martin-Nieto, Rengarajan Pelapur, Richard Bowden, Robert Laganiere, Rustam Stolkin, Ryan Walsh, Sebastian B. Krah, Shengkun Li, Shengping Zhang, Shizeng Yao, Simon Hadfield, Simone Melzi, Siwei Lyu, Siyi Li, Stefan Becker, Stuart Golodetz, Sumithra Kakanuru, Sunglok Choi, Tao Hu, Thomas Mauthner, Tianzhu Zhang, Tony P. Pridmore, Vincenzo Santopietro, Weiming Hu, Wenbo Li, Wolfgang Hübner, Xiangyuan Lan, Xiaomeng Wang, Xin Li, Yang Li, Yiannis Demiris, Yifan Wang, Yuankai Qi, Zejian Yuan, Zexiong Cai, Zhan Xu, Zhenyu He, Zhizhen Chi +140 more
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.
Posted Content
Co-occurrence Feature Learning for Skeleton based Action Recognition using Regularized Deep LSTM Networks
TL;DR: This work takes the skeleton as the input at each time slot and introduces a novel regularization scheme to learn the co-occurrence features of skeleton joints, and proposes a new dropout algorithm which simultaneously operates on the gates, cells, and output responses of the LSTM neurons.
Proceedings ArticleDOI
Pose-Driven Deep Convolutional Model for Person Re-identification
TL;DR: This work proposes a Pose-driven Deep Convolutional (PDC) model that learns improved feature extraction and matching models end to end, explicitly leveraging human part cues to alleviate pose variations and to learn robust feature representations from both the global image and different local parts.
Proceedings Article
An end-to-end spatio-temporal attention model for human action recognition from skeleton data
TL;DR: This work proposes an end-to-end spatio-temporal attention model for human action recognition from skeleton data, which learns to selectively focus on the discriminative joints of the skeleton within each input frame and to pay different levels of attention to the outputs of different frames.
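The two-level weighting described above (attention over joints within a frame, then attention over frames) can be sketched with softmax weights. The toy below (plain Python, hypothetical function names, scalar joint features and precomputed attention scores instead of the paper's learned LSTM sub-networks) shows the pooling structure only:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def spatio_temporal_attention(frames, joint_scores, frame_scores):
    """frames: T frames, each a list of J scalar joint features.
    joint_scores: T x J spatial attention logits (one per joint per frame).
    frame_scores: T temporal attention logits (one per frame).
    Returns a single attended feature value."""
    pooled = []
    for feats, scores in zip(frames, joint_scores):
        alpha = softmax(scores)  # spatial attention: weight joints in a frame
        pooled.append(sum(a * f for a, f in zip(alpha, feats)))
    beta = softmax(frame_scores)  # temporal attention: weight whole frames
    return sum(b * p for b, p in zip(beta, pooled))
```

In the actual model the attention logits are produced by recurrent sub-networks conditioned on the sequence, so discriminative joints and key frames receive larger weights than the uniform ones used in this sketch.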