Author

Rui Qin

Bio: Rui Qin is an academic researcher from Chongqing University of Posts and Telecommunications. The author has contributed to research on the topics of Computer science and Feature (linguistics), has an h-index of 1, and has co-authored 2 publications receiving 12 citations.

Papers
Posted Content
TL;DR: This paper aims at helping researchers and practitioners to better understand the application of ML techniques to RSP-related problems by providing a comprehensive, structured and reasoned literature overview of ML-based RSP techniques.
Abstract: Modern radar systems have high requirements in terms of accuracy, robustness and real-time capability when operating in increasingly complex electromagnetic environments. Traditional radar signal processing (RSP) methods have shown some limitations in meeting such requirements, particularly in matters of target classification. With the rapid development of machine learning (ML), especially deep learning, radar researchers have started integrating these new methods when solving RSP-related problems. This paper aims at helping researchers and practitioners to better understand the application of ML techniques to RSP-related problems by providing a comprehensive, structured and reasoned literature overview of ML-based RSP techniques. The work is introduced by presenting general elements of ML-based RSP and the motivations behind them. The main applications of ML-based RSP are then analysed and structured by application field. The paper concludes with a series of open questions and proposed research directions, in order to indicate current gaps and potential future solutions and trends.

22 citations

Journal ArticleDOI
TL;DR: Zhang et al. proposed an end-to-end lightweight network called morphological feature-pyramid Yolo v4-tiny for SAR ship detection, in which a morphological network preprocesses the SAR images for speckle noise suppression and edge enhancement, providing spatial high-frequency information for target detection.
Abstract: Intelligent ship detection based on high-precision synthetic aperture radar (SAR) images plays a vital role in ocean monitoring and maritime management. Denoising is an effective preprocessing step for target detection. Morphological network-based denoising can effectively remove speckle noise, but its smoothing effect blurs the edges of the image and reduces detection accuracy. Fusing edge extraction with the morphological network can improve detection accuracy by compensating for the loss of edge information caused by smoothing. This article proposes an end-to-end lightweight network called morphological feature-pyramid Yolo v4-tiny for SAR ship detection. First, a morphological network is introduced to preprocess the SAR images for speckle noise suppression and edge enhancement, providing spatial high-frequency information for target detection. Then, the original and preprocessed images are combined into a multichannel input for the convolutional layers of the network. A feature pyramid fusion structure is used to extract high-level semantic features and shallow detailed features from the image, improving the performance of multiscale target detection. Experiments on the public SAR ship detection dataset and AIR SARShip-1.0 show that the proposed method performs better than other convolutional neural network-based methods.

12 citations
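The preprocessing pipeline described in the abstract above can be illustrated with a minimal sketch: classical grayscale morphological operations stand in for the paper's learnable morphological network, and the despeckled and edge-enhanced results are stacked with the original image as a multichannel detector input. Function names, kernel sizes and channel layout here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: classical (non-learnable) stand-ins for the paper's
# morphological preprocessing, stacked with the original SAR image as a
# multichannel input for a detector such as YOLOv4-tiny.
import cv2
import numpy as np

def build_multichannel_input(sar_image: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Return an (H, W, 3) array: original, despeckled, edge-enhanced channels."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    # Grayscale opening suppresses bright speckle spikes.
    despeckled = cv2.morphologyEx(sar_image, cv2.MORPH_OPEN, kernel)
    # Morphological gradient (dilation minus erosion) highlights edges.
    edges = cv2.morphologyEx(sar_image, cv2.MORPH_GRADIENT, kernel)
    return np.dstack([sar_image, despeckled, edges]).astype(np.float32)

# Usage with a random stand-in for a SAR image chip:
chip = (np.random.rand(256, 256) * 255).astype(np.uint8)
x = build_multichannel_input(chip)   # shape (256, 256, 3), ready for a CNN detector
```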

Journal ArticleDOI
TL;DR: In this paper, a novel network based on meta-transfer learning, called RRSARNet, was proposed to achieve effective adaptive RRS recognition in the context of low signal-to-noise ratio (SNR).
Abstract: Radar radio source (RRS) recognition plays an important role in the fields of military electronic support measures (ESM) systems and civilian autonomous driving. The rapid development of machine learning technology, especially deep learning, has effectively and efficiently improved intelligent RRS recognition performance in increasingly complex electromagnetic environments. However, limited data samples and computational cost are still severe challenges in real RRS recognition scenarios. In this paper, we propose a novel network based on meta-transfer learning, called RRSARNet, to achieve effective adaptive RRS recognition in the context of low signal-to-noise ratio (SNR). First, using the short-time Fourier transform, a small-sample RRS simulation dataset covering six signal types and different SNR levels is constructed. Then, a novel RRSARNet based on metric learning is proposed, which consists of a four-layer embedding module and a four-layer relation module. Finally, the RRS dataset is divided into training, support and testing subsets, which are used to train and test the RRSARNet in a meta-transfer learning manner. Experiments on the RRS dataset show that the proposed RRSARNet can achieve an overall accuracy (OA) above 96% and 99% when the SNR is above −15 dB and −10 dB, respectively. Even when the SNR is −30 dB, the OA can reach more than 70%. For 5-way 1-shot and 5-way 5-shot experiments, the inference time per image is about 0.043 and 0.140 milliseconds, respectively. In addition, in experiments on the RRS simulation dataset and two benchmark datasets, RRSARNet performs better than or competitively with many existing state-of-the-art techniques in terms of recognition accuracy.

9 citations
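The few-shot recognition scheme summarized above can be sketched as a generic relation-network-style metric learner: an embedding CNN encodes STFT spectrograms and a relation module scores query-support pairs. The layer widths, input size and 5-way 1-shot usage below are assumptions for illustration and do not reproduce RRSARNet's exact configuration or its meta-transfer training procedure.

```python
# Hypothetical sketch of a relation-network-style metric learner in PyTorch:
# an embedding CNN encodes STFT spectrograms, and a relation module scores
# how well a query embedding matches each support-class embedding.
# Layer counts/widths are assumptions, not the paper's exact RRSARNet design.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class Embedding(nn.Module):           # four conv blocks, as in the abstract
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(*[conv_block(c_in, 64) for c_in in (1, 64, 64, 64)])
    def forward(self, x):
        return self.net(x)

class RelationModule(nn.Module):      # scores concatenated (support, query) features
    def __init__(self, feat_ch=64):
        super().__init__()
        self.conv = nn.Sequential(conv_block(2 * feat_ch, 64), conv_block(64, 64))
        self.fc = nn.Sequential(nn.Flatten(), nn.LazyLinear(8), nn.ReLU(),
                                nn.Linear(8, 1), nn.Sigmoid())
    def forward(self, support_feat, query_feat):
        return self.fc(self.conv(torch.cat([support_feat, query_feat], dim=1)))

# 5-way 1-shot toy usage with 64x64 spectrogram patches:
emb, rel = Embedding(), RelationModule()
support = emb(torch.randn(5, 1, 64, 64))          # one prototype per class
query = emb(torch.randn(1, 1, 64, 64)).expand(5, -1, -1, -1)
scores = rel(support, query)                      # (5, 1) relation scores; argmax = predicted class
```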

Journal ArticleDOI
TL;DR: Experimental results show that the proposed multidomain feature subspace fusion representation learning method achieves performance better than or competitive with that of many existing state-of-the-art methods in terms of recognition accuracy and computational cost.
Abstract: Deep-learning-based synthetic aperture radar automatic target recognition (SAR-ATR) plays a significant role in the military and civilian fields. However, data limitation and large computational cost are still severe challenges in the actual application of SAR-ATR. To improve the performance of a convolutional neural network (CNN) model with limited data samples in SAR-ATR, this article proposes a novel multidomain feature subspace fusion representation learning method, namely a lightweight cascaded multidomain attention network (LW-CMDANet). First, we design a four-layer CNN model to perform hierarchical feature representation learning via the hinge loss function, which can efficiently alleviate the overfitting problem of the CNN model through a nongreedy training style on a small dataset. Then, a cascaded multidomain attention module, based on the discrete cosine transform and discrete wavelet transform, is embedded into the previous CNN to further perform class-specific feature extraction from both the frequency and wavelet transform domains of the input feature maps. Thus, the multidomain attention enhances the feature extraction ability of the preceding nongreedy learning scheme and effectively improves the recognition accuracy of the CNN model. Experimental results on small SAR datasets show that our proposed method achieves performance better than or competitive with that of many existing state-of-the-art methods in terms of recognition accuracy and computational cost.

5 citations
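A rough sketch of cascading frequency-domain and wavelet-domain channel attention is given below. For brevity the DCT branch keeps only the DC basis (equivalent to global average pooling) and the wavelet branch uses a one-level Haar decomposition, so this is an assumed simplification of the idea described in the abstract, not LW-CMDANet itself.

```python
# Hypothetical sketch of a cascaded "frequency + wavelet" channel attention in
# PyTorch. The DCT branch keeps only the DC basis (global average pooling) and
# the wavelet branch uses a one-level Haar split; the paper's full DCT/DWT
# basis selection is not reproduced here.
import torch
import torch.nn as nn

class HaarEnergy(nn.Module):
    """Per-channel energy of the Haar high-frequency sub-bands (one level)."""
    def forward(self, x):
        a = x[..., 0::2, 0::2]; b = x[..., 0::2, 1::2]
        c = x[..., 1::2, 0::2]; d = x[..., 1::2, 1::2]
        s1, s2, s3 = (a - b + c - d) / 2, (a + b - c - d) / 2, (a - b - c + d) / 2
        return (s1.pow(2) + s2.pow(2) + s3.pow(2)).mean(dim=(2, 3))   # (N, C)

class CascadedAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        def gate():
            return nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.dct_gate, self.dwt_gate = gate(), gate()
        self.haar = HaarEnergy()

    def forward(self, x):
        n, c, _, _ = x.shape
        x = x * self.dct_gate(x.mean(dim=(2, 3))).view(n, c, 1, 1)    # frequency-domain gate
        x = x * self.dwt_gate(self.haar(x)).view(n, c, 1, 1)          # wavelet-domain gate
        return x

# Usage on a toy feature map:
attn = CascadedAttention(channels=32)
y = attn(torch.randn(2, 32, 16, 16))   # same shape, channel-reweighted twice
```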

Journal ArticleDOI
TL;DR: Wang et al. proposed a lightweight YOLO-based arbitrary-oriented vehicle detector using precise positional information encoding and bidirectional feature fusion to address vehicle targets in dynamic scenarios with uncertain backgrounds, dramatically varying arrangement density, multiple scales, and arbitrary orientations.
Abstract: Unmanned aerial vehicles (UAVs) open up new opportunities for transportation monitoring. However, vehicle targets in UAV images appear in dynamic scenarios, with uncertain backgrounds, dramatically varying arrangement density, multiple scales, and arbitrary orientations. Most strategies for UAV-based monitoring require complex manoeuvring and still lack accuracy and lightweight structures. Consequently, designing detection methods that offer both speed and accuracy is challenging. This paper proposes a lightweight YOLO-based arbitrary-oriented vehicle detector with precise positional information encoding and bidirectional feature fusion to address the above issues. First, an additional angular classification prediction branch is added to the YOLO head network to significantly improve the detection performance for arbitrary-oriented vehicles without incurring extra computational complexity or burden. Second, a C3 module with embedded coordinate attention (C3CA) is presented to capture long-range dependencies and preserve vehicles’ precise positional information in the feature maps. Then, a fully connected bidirectional feature fusion module (FC-BiFPN) is applied at the neck of the YOLO detection framework, which is helpful for multi-scale vehicle detection. This module can efficiently aggregate features at different resolutions and automatically enhance information interaction. Finally, experiments and comparisons on vehicle and remote sensing datasets demonstrate that our approach outperforms state-of-the-art methods in balancing precision and efficiency. In addition, the overall network design follows a lightweight concept, which better meets the real-time requirements of a UAV urban traffic monitoring platform in realistic scenarios.
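The angular classification branch mentioned above is commonly realized by discretizing the box angle into bins and predicting a (smoothed) angle class per anchor. The sketch below shows such an encoding and a minimal extra 1x1-convolution head; the bin count, smoothing width and head layout are assumptions and may differ from the paper's design.

```python
# Hypothetical sketch: treating box orientation as a classification problem by
# discretizing the angle into bins (a common trick for arbitrary-oriented
# detectors); the paper's exact head layout and loss are not reproduced here.
import torch
import torch.nn as nn

NUM_ANGLE_BINS = 180  # 1-degree resolution, an assumption for illustration

def angle_to_smooth_label(angle_deg: torch.Tensor, sigma: float = 4.0) -> torch.Tensor:
    """Gaussian 'circular smooth label' over angle bins for each ground-truth angle."""
    bins = torch.arange(NUM_ANGLE_BINS, dtype=torch.float32)
    diff = (bins[None, :] - angle_deg[:, None] + NUM_ANGLE_BINS / 2) % NUM_ANGLE_BINS - NUM_ANGLE_BINS / 2
    return torch.exp(-diff.pow(2) / (2 * sigma ** 2))          # (N, NUM_ANGLE_BINS)

class AngleBranch(nn.Module):
    """Extra 1x1-conv head appended to a YOLO-style neck feature map."""
    def __init__(self, in_channels: int, anchors_per_cell: int = 3):
        super().__init__()
        self.pred = nn.Conv2d(in_channels, anchors_per_cell * NUM_ANGLE_BINS, kernel_size=1)
    def forward(self, feat):                                    # feat: (N, C, H, W)
        n, _, h, w = feat.shape
        return self.pred(feat).view(n, -1, NUM_ANGLE_BINS, h, w)

# Usage: encode two ground-truth angles and run the branch on a dummy neck feature.
targets = angle_to_smooth_label(torch.tensor([30.0, 155.0]))    # (2, 180) soft labels
branch = AngleBranch(in_channels=128)
logits = branch(torch.randn(1, 128, 20, 20))                    # (1, 3, 180, 20, 20)
# Training would apply BCEWithLogitsLoss between per-anchor logits and smooth labels.
```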

Cited by
Journal ArticleDOI
TL;DR: In this paper, a comprehensive and well-structured review of the application of deep learning (DL) based algorithms, such as convolutional neural networks (CNN) and long short-term memory (LSTM), in radar signal processing is given.
Abstract: A comprehensive and well-structured review of the application of deep learning (DL) based algorithms, such as convolutional neural networks (CNN) and long short-term memory (LSTM), in radar signal processing is given. The following DL application areas are covered: i) radar waveform and antenna array design; ii) passive or low probability of interception (LPI) radar waveform recognition; iii) automatic target recognition (ATR) based on high range resolution profiles (HRRPs), Doppler signatures, and synthetic aperture radar (SAR) images; and iv) radar jamming/clutter recognition and suppression. Although DL is unanimously praised as the ultimate solution to many bottleneck problems in most existing works on similar topics, both the positive and negative sides of DL are examined in this work. Specifically, two factors limiting the real-life performance of deep neural networks (DNNs), limited training samples and adversarial examples, are thoroughly examined. By investigating the relationship between the DL-based algorithms proposed in various papers and linking them together to form a full picture, this work serves as a valuable source for researchers who are seeking potential research opportunities in this promising research field.

45 citations

DOI
TL;DR: In this paper, SAR image formation is treated as a class of ill-posed linear inverse problems, and for traditional imaging techniques based on the matched filter (MF) the resolution is limited by the data bandwidth.
Abstract: Synthetic aperture radar (SAR) image formation can be treated as a class of ill-posed linear inverse problems, and for traditional imaging techniques based on the matched filter (MF) the resolution is limited by the data bandwidth. Sparse SAR imaging technology using compressed sensing (CS) has been developed for enhanced performance, such as superresolution, feature enhancement, etc. More recently, sparse SAR imaging based on machine learning (ML), including deep learning (DL), has been further studied, showing great potential in the imaging area. However, there are still gaps between the two groups of methods for sparse SAR imaging, and their connections have not been established.

22 citations
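The contrast drawn above between matched-filter imaging and sparse (compressed-sensing) reconstruction can be illustrated on a toy linear model y = Ax + n: the matched filter forms A^H y, while an l1-regularized estimate is obtained with ISTA. The random measurement matrix and parameter choices below are placeholders, not a SAR observation operator, and the paper's ML-based imaging methods are not reproduced.

```python
# Toy illustration (not the paper's method): SAR imaging as y = A x + n, with a
# matched-filter estimate A^H y versus an l1-regularized ISTA reconstruction.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 128, 5                      # measurements, scene cells, sparse scatterers
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = A @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

x_mf = A.conj().T @ y                     # matched filter: not sparse, resolution-limited

def ista(A, y, lam=0.05, n_iter=300):
    """Iterative shrinkage-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x + A.conj().T @ (y - A @ x) / L
        x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam / L, 0)  # complex soft-threshold
    return x

x_cs = ista(A, y)
print("significant MF coefficients:", int(np.sum(np.abs(x_mf) > 0.3 * np.abs(x_mf).max())),
      "vs true sparsity", k)
print("relative CS error:", np.linalg.norm(x_cs - x_true) / np.linalg.norm(x_true))
```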

Journal ArticleDOI
TL;DR: A system that can effectively detect fall/collapse and classify other discrete daily living activities such as sitting, standing, walking, drinking, and bending is developed using a publicly accessible dataset.
Abstract: Human activity monitoring is essential for a variety of applications in many fields, particularly healthcare. The goal of this research work is to develop a system that can effectively detect falls/collapses and classify other discrete daily living activities such as sitting, standing, walking, drinking, and bending. For this paper, a publicly accessible dataset is employed, which was captured at various geographical locations using a 5.8 GHz Frequency-Modulated Continuous-Wave (FMCW) RADAR. A total of ninety-nine participants, including young and elderly individuals, took part in the experimental campaign. During data acquisition, each aforementioned activity was recorded for 5–10 s. From the obtained data, we generated micro-Doppler signatures using the short-time Fourier transform with MATLAB tools. Subsequently, the micro-Doppler signatures are used to train, validate, and test a state-of-the-art deep learning algorithm, the Residual Neural Network (ResNet). The ResNet classifier is developed in Python and is utilised to classify six distinct human activities in this study. Furthermore, the metrics used to analyse the trained model’s performance are precision, recall, F1-score, classification accuracy, and the confusion matrix. To test the resilience of the proposed method, two separate experiments are carried out. The trained ResNet models are tested in subject-independent scenarios and on unseen data of the above-mentioned human activities recorded at diverse geographical locations. The experimental results show that ResNet detected falls and the rest of the daily living human activities with decent accuracy.

22 citations
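The processing chain described above (STFT-based micro-Doppler signatures fed to a ResNet classifier) can be sketched as follows; the slow-time signal is random stand-in data, the sampling rate and STFT parameters are assumptions, and only the six-class head size follows the abstract.

```python
# Hypothetical sketch of the chain above: a micro-Doppler spectrogram is formed
# with a short-time Fourier transform and fed to a ResNet classifier; the radar
# signal here is random placeholder data, not real FMCW returns.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft
from torchvision.models import resnet18

# 1) STFT of a slow-time radar return (random placeholder for real FMCW data).
fs = 500.0                                     # assumed slow-time sampling rate in Hz
signal = np.random.randn(int(10 * fs)) + 1j * np.random.randn(int(10 * fs))
_, _, Z = stft(signal, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
spectrogram = 20 * np.log10(np.abs(np.fft.fftshift(Z, axes=0)) + 1e-12)   # dB scale

# 2) Normalize and replicate to 3 channels so a stock ResNet can consume it.
img = (spectrogram - spectrogram.min()) / (np.ptp(spectrogram) + 1e-12)
x = torch.tensor(img, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)

# 3) Six-class ResNet-18 head for the activities listed in the abstract.
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 6)
logits = model(x)                              # (1, 6) activity scores
```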