scispace - formally typeset
Author

Huanlei Chen

Bio: Huanlei Chen is an academic researcher from Tongji University. The author has contributed to research in topics: Radar & Man-portable radar. The author has an h-index of 2, has co-authored 5 publications, and has received 11 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors developed a robust multi-object detection and tracking method for moving objects based on radar and camera data fusion, which can accurately estimate the number and state of targets under object occlusion and measurement loss, achieving robust continuous multi-object tracking.

Abstract: Obtaining the accurate and real-time state of surrounding objects is essential for automated vehicle planning and decision-making to ensure safe driving. In complex traffic scenarios, object occlusion, clutter interference, and limited sensor detection capabilities lead to false alarms and missed object detections, making it challenging to ensure the stability of tracking and state prediction. To address these challenges, in this study, we developed a robust multi-object detection and tracking method for moving objects based on radar and camera data fusion. First, the radar and camera perform target detection independently, and the detection results are correlated in the image plane to generate a random finite set with an object type. Then, based on the Gaussian mixture probability hypothesis density algorithm framework, the tracking process is improved using elliptic discriminant thresholds, an attenuation function, and simplified pruning methods. The experimental results demonstrate that the improved algorithm can accurately estimate the number and state of targets in object-occlusion and measurement-loss scenarios and achieve robust continuous multi-object tracking. The proposed method could guide the design of safer and more efficient intelligent driving systems.
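The elliptic discriminant threshold mentioned above is, in essence, a Mahalanobis-distance gate applied to candidate measurements. A minimal sketch of such a gate, not the authors' implementation (the threshold value and variable names are illustrative):

```python
import numpy as np

def elliptic_gate(z, z_pred, S, gamma=9.21):
    """Return True if measurement z lies inside the elliptical gate of a
    predicted measurement z_pred with innovation covariance S.
    gamma is a chi-square threshold (9.21 ~ 99% for 2 degrees of freedom)."""
    nu = z - z_pred                    # innovation
    d2 = nu @ np.linalg.solve(S, nu)   # squared Mahalanobis distance
    return d2 <= gamma

# A predicted object at (10, 5) with 1 m standard deviation per axis:
S = np.eye(2)
print(elliptic_gate(np.array([10.5, 5.2]), np.array([10.0, 5.0]), S))  # True
print(elliptic_gate(np.array([20.0, 5.0]), np.array([10.0, 5.0]), S))  # False
```

Measurements failing the gate are excluded from the Gaussian-mixture update, which keeps clutter from spawning spurious components.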

31 citations

Proceedings ArticleDOI
16 May 2016
TL;DR: This paper presents a method to numerically generate a radar signal with a given phase-noise level; the generated signal can be used in a time-domain radar system simulator to study the effect of phase noise on multiple-target detection and measurement.

Abstract: Advanced driver assistance systems and autonomous driving systems are now widespread. Millimeter-wave automotive radar, which can detect objects around the ego vehicle with high accuracy at acceptable cost, is one of the key components of an advanced driver assistance system. Radar detection performance is strongly influenced by the phase noise of the radar signal source. This paper presents a method to numerically generate a radar signal with a given phase-noise level. This numerically generated radar signal with phase noise can be used in a time-domain radar system simulator to simulate the effect of phase noise on multiple-target detection and measurement. The time-domain radar system simulator is based on SystemVue, which can model the whole automotive radar system, including the radio-frequency frontend, analog circuits, analog-to-digital converter, and radar signal processing algorithms.
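A rough sketch of generating a carrier with a specified phase-noise level, using a simplified white-phase-noise approximation rather than the paper's SystemVue model (the flat-spectrum assumption and all parameter values are illustrative):

```python
import numpy as np

def carrier_with_phase_noise(fc, fs, n, pn_dbc_hz, rng=None):
    """Complex carrier at fc Hz, sampled at fs Hz, perturbed by white phase
    noise with a flat single-sideband level of pn_dbc_hz (dBc/Hz).
    For a flat spectrum, the total phase variance is approximately
    2 * 10**(pn_dbc_hz / 10) * (fs / 2)  [rad^2]."""
    rng = rng or np.random.default_rng(0)
    sigma = np.sqrt(2 * 10 ** (pn_dbc_hz / 10) * (fs / 2))
    phi = rng.normal(0.0, sigma, n)        # phase-noise samples
    t = np.arange(n) / fs
    return np.exp(1j * (2 * np.pi * fc * t + phi))

x = carrier_with_phase_noise(1e6, 10e6, 1024, -90)
print(np.allclose(np.abs(x), 1.0))  # phase noise leaves the envelope constant
```

A realistic oscillator has a shaped (e.g. 1/f^2, 1/f^3) phase-noise spectrum, which would be obtained by filtering the white phase samples; the white case above only illustrates the mechanism.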

5 citations

Proceedings ArticleDOI
01 Feb 2016
TL;DR: A complete 24 GHz automotive radar is modelled using SystemVue and MATLAB, allowing the influence of a real RF signal source on object-detection performance to be analyzed.

Abstract: Millimeter-wave automotive radar is one of the most important components of advanced driver assistance systems (ADAS), detecting objects around the ego vehicle at moderate cost. Because an automotive radar comprises a wide variety of components, including RF, analog, and digital circuits as well as radar signal processing software, modelling the whole radar system is challenging. This paper presents the modelling of a complete 24 GHz automotive radar using SystemVue and MATLAB. The frequency ramps of the radar are generated by a phase-locked loop. The radar radio-frequency frontend and the radar target echo signal are modelled in SystemVue, and the radar signal processing is carried out in MATLAB. The influence of a real RF signal source on object-detection performance can thus be analyzed.
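The core of the signal-processing stage in such a simulator is FMCW dechirping: a target at range R produces a beat tone at f_b = 2·R·slope/c, which an FFT maps back to range. A minimal numerical sketch (the parameters are illustrative, not the paper's 24 GHz configuration):

```python
import numpy as np

c = 3e8
B, T, fs = 250e6, 1e-3, 1e6      # sweep bandwidth, chirp duration, ADC rate
slope = B / T                    # Hz per second
R_true = 60.0                    # target range in metres

# Ideal beat signal after dechirping: a tone at f_b = 2 * R * slope / c
n = int(T * fs)
t = np.arange(n) / fs
f_b = 2 * R_true * slope / c
beat = np.exp(2j * np.pi * f_b * t)

# Range estimate from the FFT peak
spec = np.abs(np.fft.fft(beat))
k = np.argmax(spec[: n // 2])
R_est = k * (fs / n) * c / (2 * slope)
print(round(R_est, 1))  # 60.0
```

A phase-noisy signal source would smear this FFT peak and raise the noise floor, which is exactly the degradation the SystemVue/MATLAB co-simulation is built to quantify.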

2 citations


Cited by
Journal ArticleDOI
Ushemadzoro Chipengo
TL;DR: Results from this paper show that guardrails with low RCS improve the visibility of adjacent stationary targets, and thus have the potential to reduce accidents and possibly save lives.
Abstract: Radar is one of the primary active safety sensors for advanced driver assistance systems. Autonomous vehicles will heavily rely on the ability of automotive radar systems to accurately identify crucial targets while filtering out false targets. Road guardrails present a unique corner-case challenge to automotive radar sensors because their large radar cross section (RCS) can lead to false-target alerts. This paper presents a full-physics, full-scale electromagnetic simulation-based study of the radar returns of road guardrails. Results from this paper demonstrate how guardrails can obfuscate crucial targets, such as pedestrians and nearby stationary vehicles. A novel guardrail system for high-pedestrian-density areas is proposed. Further RCS reduction of this design is achieved through a proposed diffraction mitigation technique. Simulations using this proposed guardrail system predict over 25-dB reduction in guardrail RCS. Results from this paper show that guardrails with low RCS improve the visibility of adjacent stationary targets, and thus have the potential to reduce accidents and possibly save lives.
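For scale, a 25-dB RCS reduction corresponds to a linear power ratio of 10^(25/10), i.e. the modified guardrail reflects roughly 1/316 of the power of the original:

```python
def rcs_reduction_factor(delta_db):
    """Linear power ratio corresponding to an RCS reduction of delta_db dB."""
    return 10 ** (delta_db / 10)

print(round(rcs_reduction_factor(25)))  # 316
```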

32 citations

Journal ArticleDOI
TL;DR: This paper presents a high-fidelity physics simulation of a 77 GHz, frequency-modulated continuous-wave (FMCW) 128-channel MIMO radar sensor, and compares the detection and angular-separation performance of 16-, 64- and 128-channel radar sensors in two complex driving scenarios.

Abstract: Automotive radar is one of the enabling technologies for advanced driver assistance systems (ADAS) and, subsequently, fully autonomous vehicles. Along with determining the range and velocity of targets with fairly high resolution, autonomous vehicles navigating complex urban environments need radar sensors with high azimuth and elevation resolution. Size and cost constraints limit the physical number of antennas that can be used to achieve high-resolution direction-of-arrival (DoA) estimation. Multiple-input/multiple-output (MIMO) schemes achieve larger virtual arrays using fewer physical antennas than would be needed for a single-input/multiple-output (SIMO) system. This paper presents a high-fidelity physics simulation of a 77 GHz, frequency-modulated continuous-wave (FMCW) 128-channel (8 transmitters (Tx), 16 receivers (Rx)) MIMO radar sensor. The 77 GHz synthetic radar returns from full-scale traffic scenes are obtained using a high-fidelity-physics, shooting-and-bouncing-ray electromagnetics solver. A fast Fourier transform (FFT) based signal processing scheme is applied across slow time (chirps) and space (channels) to obtain range-Doppler and DoA maps, respectively. Detection and angular-separation performance comparisons of 16-, 64- and 128-channel MIMO radar sensors are made for two complex driving scenarios.
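The MIMO virtual-array idea can be sketched as follows: each Tx/Rx pair behaves like a single receive element located at the sum of the Tx and Rx positions, so 8 Tx and 16 Rx antennas yield 128 virtual channels from only 24 physical elements. The specific layout below is a common textbook arrangement, not necessarily the paper's:

```python
import numpy as np

lam = 3e8 / 77e9                 # wavelength at 77 GHz
d = lam / 2                      # half-wavelength element spacing

# Illustrative layout: 16 Rx spaced d apart, 8 Tx spaced 16*d apart
rx = np.arange(16) * d
tx = np.arange(8) * 16 * d

# Each Tx/Rx pair contributes one virtual element at position tx + rx
virtual = (tx[:, None] + rx[None, :]).ravel()
print(virtual.size)                               # 128 virtual channels
print(np.allclose(np.diff(np.sort(virtual)), d))  # uniform lambda/2 array
```

An FFT across these 128 uniformly spaced virtual channels then gives the DoA map, with roughly 8x finer angular resolution than the 16-element physical receive array alone.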

23 citations

Journal ArticleDOI
TL;DR: In this paper, a Deep Reinforcement Learning framework is proposed to guide the scene-adaptive choice of radar tracking parameters towards improved multi-target tracking performance.
Abstract: Multi-target tracking with radars is a highly challenging problem due to detection artifacts, sensor noise, and interference sources. The traditional signal processing chain is, therefore, a complex combination of various algorithms with several tunable tracking-parameters. Usually, these are initially set by engineers and are independent of the scene tracked. For this reason, they are often non-optimal and generate poorly performing tracking. In this context, scene-adaptive radar processing refers to algorithms that can sense, understand and learn information related to detected targets as well as the environment and adapt its tracking-parameters to optimize the desired goal. In this paper, we propose a Deep Reinforcement Learning framework that guides the scene-adaptive choice of radar tracking-parameters towards an improved performance on multi-target tracking.
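As a toy illustration of scene-adaptive parameter selection, the loop below uses a simple epsilon-greedy bandit, far simpler than the Deep Reinforcement Learning framework the paper proposes; the candidate gate sizes and the reward model are invented for illustration:

```python
import numpy as np

# Candidate values of one tunable tracking-parameter (e.g. a gate size)
actions = [4.6, 9.2, 13.8]

def simulated_reward(a, rng):
    # Stand-in for a tracking score (e.g. negative OSPA error); here the
    # middle gate size is best on average for the simulated scene.
    return -abs(a - 9.2) + rng.normal(0, 0.1)

rng = np.random.default_rng(0)
q = np.zeros(len(actions))   # action-value estimates
n = np.zeros(len(actions))   # visit counts
for step in range(500):
    # Explore with probability 0.1, otherwise exploit the best estimate
    i = rng.integers(len(actions)) if rng.random() < 0.1 else int(np.argmax(q))
    r = simulated_reward(actions[i], rng)
    n[i] += 1
    q[i] += (r - q[i]) / n[i]   # incremental mean update
print(actions[int(np.argmax(q))])  # converges to 9.2
```

The paper's framework additionally conditions the choice on a learned representation of the tracked scene, which is what makes it "scene-adaptive" rather than a fixed per-deployment tuning.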

11 citations

Journal ArticleDOI
TL;DR: A complete implementation of camera–mmW radar late feature fusion to improve the camera’s velocity estimation performance is presented, implementing a lightweight ML model that successfully maps the mmW radar features to the camera, allowing it to perceive and estimate the dynamics of a target object without any calibration.
Abstract: Perception in terms of object detection, classification, and dynamic estimation (position and velocity) comprises fundamental functionalities that autonomous agents (unmanned ground vehicles, unmanned aerial vehicles, or robots) need in order to navigate safely and autonomously. To date, various sensors have been used individually or in combination to achieve this goal. In this paper, we present a novel method for leveraging the millimeter wave radar's (mmW radar's) ability to accurately measure position and velocity in order to improve and optimize velocity estimation using a monocular camera (via optical flow) and machine learning techniques. The proposed method eliminates ambiguity in optical-flow velocity estimation when the object of interest is at the edge of the frame or far away from the camera, without requiring camera–radar calibration. Moreover, algorithms of varying complexity were implemented using a custom dataset, and each of them successfully detected the object and estimated its velocity accurately, independently of the object's distance and location in the frame. Here, we present a complete implementation of camera–mmW radar late feature fusion to improve the camera's velocity estimation performance. It includes setup design, data acquisition, dataset development, and, finally, the implementation of a lightweight ML model that successfully maps the mmW radar features to the camera, allowing it to perceive and estimate the dynamics of a target object without any calibration.
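The idea of using radar-measured velocity to supervise a camera-side estimator can be sketched with a toy linear model; the paper uses a lightweight ML model trained on a custom dataset, and the features, relation, and values below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training pairs: camera features (optical-flow magnitude and a
# distance proxy) -> radar-measured velocity used as the training target.
flow = rng.uniform(1, 20, 200)        # pixels per frame
dist = rng.uniform(5, 50, 200)        # metres
v_radar = 0.3 * flow * dist / 10      # assumed ground-truth relation (m/s)

# Fit a linear map from the camera feature to the radar velocity
X = np.column_stack([flow * dist, np.ones(200)])
w, *_ = np.linalg.lstsq(X, v_radar, rcond=None)

# Camera-only velocity estimate for a new object (flow=12 px, dist=30 m)
v_est = w @ np.array([12.0 * 30.0, 1.0])
print(round(v_est, 2))  # 10.8
```

Once trained, the camera alone produces metric velocity estimates, which is the practical payoff of the late-fusion scheme: the radar is needed for data collection, not at inference time.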

6 citations

Proceedings ArticleDOI
08 Jul 2022
TL;DR: Experimental results show that combining radar and optical sensors achieves accurate and coherent target detection and tracking.

Abstract: Target detection and tracking using optical and radar sensors have many applications in surveillance. As the optical sensor helps to visualize the target and the radar can provide its range and velocity, their combination yields useful information for continuous monitoring and coherence. This paper presents a radar–camera experimental setup to detect and track intruders in a restricted area. A real-time experiment was conducted with different target speeds and various radar cross sections (RCS): (1) a person running, (2) a cyclist, and (3) a motorcyclist. We deployed a 77 GHz IWR1642BOOST FMCW (frequency-modulated continuous-wave) radar module as the radar unit and a phone camera with an aperture of f/1.79 as the optical sensor. The data collected from the radar and camera sensors are fed to detection and tracking modules to obtain target tracks. The radar provides observations of range, Doppler, and angle. These observations are used to estimate the state of the target via extended Kalman filtering (EKF), DBSCAN clustering, and global nearest neighbor (GNN) association, followed by track maintenance. The optical sensor takes video frames as input and outputs tracks via foreground detection, blob analysis, motion-based detection, Kalman filtering, and track maintenance. The experimental results show that combining radar and optical sensors achieves accurate and coherent target detection and tracking.
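In a chain like this, DBSCAN groups raw radar point detections into per-object clusters before state estimation. A minimal numpy-only sketch of the algorithm (the eps/min_pts values and the toy detections are illustrative, not the paper's configuration):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        seed = np.flatnonzero(d[i] <= eps)
        if seed.size < min_pts:
            continue                       # not a core point; stays noise
        labels[i] = cluster
        queue = list(seed)
        while queue:                       # grow the cluster outward
            j = queue.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            nbrs = np.flatnonzero(d[j] <= eps)
            if nbrs.size >= min_pts:       # j is also a core point
                queue.extend(nbrs)
        cluster += 1
    return labels

# Two well-separated groups of radar point detections plus one outlier:
pts = np.array([[0, 0], [0.2, 0.1], [0.1, 0.3],
                [5, 5], [5.1, 5.2], [5.3, 5.0],
                [20, 20]], float)
print(dbscan(pts, eps=1.0, min_pts=2).tolist())  # [0, 0, 0, 1, 1, 1, -1]
```

Each resulting cluster is then treated as one target observation, which the EKF and GNN stages track and associate over time.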

5 citations