Peijun Zhao
Researcher at University of Oxford
Publications - 33
Citations - 951
Peijun Zhao is an academic researcher at the University of Oxford. The author has contributed to research on the topics of radar and odometry, has an h-index of 11, and has co-authored 30 publications receiving 361 citations. Previous affiliations of Peijun Zhao include Tsinghua University.
Papers
Proceedings ArticleDOI
mID: Tracking and Identifying People with Millimeter Wave Radar
Peijun Zhao, Chris Xiaoxuan Lu, Jianan Wang, Changhao Chen, Wei Wang, Niki Trigoni, Andrew Markham, et al.
TL;DR: This work proposes mID, a human tracking and identification system based on millimeter wave radar that achieves high tracking accuracy without capturing visually compromising imagery, and is capable of tracking and identifying multiple people simultaneously.
Proceedings ArticleDOI
See through smoke: robust indoor mapping with low-cost mmWave radar
Chris Xiaoxuan Lu, Stefano Rosa, Peijun Zhao, Bing Wang, Changhao Chen, John A. Stankovic, Niki Trigoni, Andrew Markham, et al.
TL;DR: In this article, a single-chip millimetre wave (mmWave) radar-based indoor mapping system is proposed for low-visibility environments to assist in emergency response.
Journal ArticleDOI
Deep-Learning-Based Pedestrian Inertial Navigation: Methods, Data Set, and On-Device Inference
TL;DR: In this paper, the authors present the Oxford Inertial Odometry Data Set (OxIOD), a first-of-its-kind public data set for deep learning-based inertial navigation research with fine-grained ground truth on all sequences.
Posted Content
milliEgo: Single-chip mmWave Radar Aided Egomotion Estimation via Deep Sensor Fusion
Chris Xiaoxuan Lu, Muhamad Risqi U. Saputra, Peijun Zhao, Yasin Almalioglu, Pedro P. B. de Gusmao, Changhao Chen, Ke Sun, Niki Trigoni, Andrew Markham, et al.
TL;DR: In this paper, the authors propose milliEgo, a novel deep learning approach to robust egomotion estimation that exploits the capabilities of low-cost mmWave radar and fuses mmWave pose estimates with additional sensors, e.g. inertial or visual sensors.
Posted Content
AtLoc: Attention Guided Camera Localization
TL;DR: This work shows that attention can be used to force the network to focus on more geometrically robust objects and features, achieving state-of-the-art performance on common benchmarks, even when using only a single image as input.