Brais Martinez
Researcher at Samsung
Publications - 74
Citations - 3405
Brais Martinez is an academic researcher from Samsung. The author has contributed to research in topics: Computer science & Kernel (image processing). The author has an h-index of 23, co-authored 61 publications receiving 2565 citations. Previous affiliations of Brais Martinez include University of Nottingham & Amazon.com.
Papers
Book Chapter
The Visual Object Tracking VOT2016 Challenge Results
Matej Kristan, Ales Leonardis, Jiří Matas, Michael Felsberg, Roman Pflugfelder, Luka Cehovin, Tomas Vojir, Gustav Häger, Alan Lukežič, Gustavo Fernandez, Abhinav Gupta, Alfredo Petrosino, Alireza Memarmoghadam, Alvaro Garcia-Martin, Andres Solis Montero, Andrea Vedaldi, Andreas Robinson, Andy J. Ma, Anton Varfolomieiev, A. Aydin Alatan, Aykut Erdem, Bernard Ghanem, Bin Liu, Bohyung Han, Brais Martinez, Chang-Ming Chang, Changsheng Xu, Chong Sun, Daijin Kim, Dapeng Chen, Dawei Du, Deepak Mishra, Dit-Yan Yeung, Erhan Gundogdu, Erkut Erdem, Fahad Shahbaz Khan, Fatih Porikli, Fei Zhao, Filiz Bunyak, Francesco Battistone, Gao Zhu, Giorgio Roffo, Gorthi R. K. Sai Subrahmanyam, Guilherme Sousa Bastos, Guna Seetharaman, Henry Medeiros, Hongdong Li, Honggang Qi, Horst Bischof, Horst Possegger, Huchuan Lu, Hyemin Lee, Hyeonseob Nam, Hyung Jin Chang, Isabela Drummond, Jack Valmadre, Jae-chan Jeong, Jaeil Cho, Jae-Yeong Lee, Jianke Zhu, Jiayi Feng, Jin Gao, Jin-Young Choi, Jingjing Xiao, Ji-Wan Kim, Jiyeoup Jeong, João F. Henriques, Jochen Lang, Jongwon Choi, José M. Martínez, Junliang Xing, Junyu Gao, Kannappan Palaniappan, Karel Lebeda, Ke Gao, Krystian Mikolajczyk, Lei Qin, Lijun Wang, Longyin Wen, Luca Bertinetto, Madan Kumar Rapuru, Mahdieh Poostchi, Mario Edoardo Maresca, Martin Danelljan, Matthias Mueller, Mengdan Zhang, Michael Arens, Michel Valstar, Ming Tang, Mooyeol Baek, Muhammad Haris Khan, Naiyan Wang, Nana Fan, Noor M. Al-Shakarji, Ondrej Miksik, Osman Akin, Payman Moallem, Pedro Senna, Philip H. S. Torr, Pong C. Yuen, Qingming Huang, Rafael Martin-Nieto, Rengarajan Pelapur, Richard Bowden, Robert Laganiere, Rustam Stolkin, Ryan Walsh, Sebastian B. Krah, Shengkun Li, Shengping Zhang, Shizeng Yao, Simon Hadfield, Simone Melzi, Siwei Lyu, Siyi Li, Stefan Becker, Stuart Golodetz, Sumithra Kakanuru, Sunglok Choi, Tao Hu, Thomas Mauthner, Tianzhu Zhang, Tony P. Pridmore, Vincenzo Santopietro, Weiming Hu, Wenbo Li, Wolfgang Hübner, Xiangyuan Lan, Xiaomeng Wang, Xin Li, Yang Li, Yiannis Demiris, Yifan Wang, Yuankai Qi, Zejian Yuan, Zexiong Cai, Zhan Xu, Zhenyu He, Zhizhen Chi +140 more
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.
Proceedings Article
Facial point detection using boosted regression and graph models
TL;DR: A method based on a combination of Support Vector Regression and Markov Random Fields that drastically reduces the time needed to search for a point's location while increasing the accuracy and robustness of the algorithm.
Journal Article
Automatic Analysis of Facial Actions: A Survey
TL;DR: This paper systematically reviews all components of such systems: pre-processing, feature extraction, and machine coding of facial actions; the existing FACS-coded facial expression databases are also summarised.
Journal Article
The Automatic Detection of Chronic Pain-Related Expression: Requirements, Challenges and the Multimodal EmoPain Dataset
M.S.H. Aung, Sebastian Kaltwang, Bernardino Romera-Paredes, Brais Martinez, Aneesha Singh, Matteo Cella, Michel Valstar, Hongying Meng, Andrew Kemp, Moshen Shafizadeh, Aaron C. Elkins, Natalie Kanakam, Amschel de Rothschild, Nick Tyler, Paul J. Watson, Amanda C de C Williams, Maja Pantic, Nadia Bianchi-Berthouze +17 more
TL;DR: The factors and challenges in the automated recognition of such expressions and behaviour are described, and potential avenues for the development of such systems are discussed in the context of these findings.
Journal Article
A Dynamic Appearance Descriptor Approach to Facial Actions Temporal Modeling
TL;DR: The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.