Author

Hai N. Nguyen

Bio: Hai N. Nguyen is an academic researcher from Northeastern University. The author has contributed to research in the topics of computer science and object detection, has an h-index of 1, and has co-authored 3 publications receiving 3 citations.

Papers
Proceedings ArticleDOI
22 Nov 2021
TL;DR: In this article, WRIST, a wideband, real-time RF identification system with spectro-temporal detection, is presented. It can detect, classify, and precisely locate RF emissions in time and frequency using RF samples of a 100 MHz spectrum in real time (over 6 Gbps of incoming I&Q streams).
Abstract: RF emissions' detection, classification, and spectro-temporal localization are crucial not only for tasks relating to understanding, managing, and protecting the RF spectrum, but also for safety and security applications such as detecting intruding drones or jammers. Achieving this goal for a wideband spectrum in real time is a challenging problem. Existing methods are limited to a small bandwidth and lack the capability to detect and classify multiple RF emissions in every part of a wide spectrum with a unified detection and classification solution. We present WRIST, a Wideband, Real-time RF Identification system with Spectro-Temporal detection framework and system. Our resulting deep learning (DL) model is capable of detecting, classifying, and precisely locating RF emissions in time and frequency using RF samples of a 100 MHz spectrum in real time (over 6 Gbps of incoming I&Q streams). Such capabilities are made feasible by leveraging a deep learning-based one-stage object detection framework, and transfer learning to a multi-channel, visual RF signal representation. We also introduce an iterative training approach which leverages synthesized and augmented RF data to efficiently build large labelled datasets of RF emissions. WRIST's detector achieves a 90 mean Average Precision even in extremely congested in-the-wild environments. The WRIST model classifies five technologies (Bluetooth, Lightbridge, Wi-Fi, XPD, and ZigBee) and is easily extendable to others.
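The multi-channel visual representation above rests on turning raw I&Q samples into a time-frequency image an object detector can consume. A minimal, dependency-free sketch of that basic step is shown below; WRIST's actual representation is more elaborate and not reproduced here, and the function name and parameters are illustrative assumptions:

```python
import numpy as np

def iq_to_spectrogram(iq, n_fft=256, hop=128):
    """Turn a 1-D complex I/Q stream into a 2-D time-frequency magnitude map.

    Illustrative sketch only: WRIST's multi-channel visual representation
    is richer than this single-channel STFT magnitude image.
    """
    n_frames = (len(iq) - n_fft) // hop + 1
    # Slice the stream into overlapping windows of n_fft samples each.
    frames = np.stack([iq[i * hop : i * hop + n_fft] for i in range(n_frames)])
    # Hann window each frame, FFT along frequency, center DC with fftshift.
    spec = np.fft.fftshift(np.fft.fft(frames * np.hanning(n_fft), axis=1), axes=1)
    # dB magnitude image, shape (time, frequency).
    return 20 * np.log10(np.abs(spec) + 1e-12)

# Example: a pure tone appears as one bright frequency bin across all frames.
fs, f0 = 100e6, 10e6                       # 100 MHz sample rate, 10 MHz tone
t = np.arange(4096) / fs
spec = iq_to_spectrogram(np.exp(2j * np.pi * f0 * t))
```

A detector then treats `spec` (or several such channels) exactly like an image, which is what makes transfer learning from computer-vision object detectors possible.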

8 citations

Proceedings ArticleDOI
15 May 2019
TL;DR: This paper introduces a set of techniques to achieve transfer learning from computer vision to RF spectrum analysis and demonstrates the usefulness of this approach to scale the learning, accuracy, and efficiency of detection of adversarial and unintentional communications collisions using VGG-16.
Abstract: We introduce a set of techniques to achieve transfer learning from computer vision to RF spectrum analysis. In this paper, we demonstrate the usefulness of this approach to scale the learning, accuracy, and efficiency of detection of adversarial and unintentional communications collisions using VGG-16. We achieve high accuracy (94% collisions detected) on a DARPA Spectrum Collaboration Challenge (SC2) dataset.

4 citations

Proceedings ArticleDOI
24 Oct 2022
TL;DR: This paper introduces DEFORM, a Deep Learning (DL)-based RX beamforming approach and system that achieves significant gain for multi-antenna RF receivers while being agnostic to the transmitted signal's features, and that is specifically designed to address the unique characteristics of wireless signals' complex samples.
Abstract: We introduce, design, and evaluate a set of universal receiver beamforming techniques. Our approach and system, DEFORM, a Deep Learning (DL)-based RX beamformer, achieves significant gain for multi-antenna RF receivers while being agnostic to the transmitted signal's features (e.g., modulation or bandwidth). It is well known that combining coherent RF signals from multiple antennas results in a beamforming gain proportional to the number of receiving elements. In practice, however, this approach relies heavily on explicit channel estimation techniques, which are link-specific and require significant communication overhead to be transmitted to the receiver. DEFORM addresses this challenge by leveraging a Convolutional Neural Network to estimate the channel characteristics, in particular the relative phase across antenna elements. It is specifically designed to address the unique features of wireless signals' complex samples, such as the ambiguous 2π phase discontinuity and the high sensitivity of the link's Bit Error Rate. The channel prediction is subsequently used in the Maximum Ratio Combining algorithm to achieve an optimal combination of the received signals. While trained on a fixed, basic RF setting, we show that DEFORM's DL model is universal, achieving up to 3 dB of SNR gain for a two-antenna receiver in extensive evaluations spanning various modulations and bandwidths.
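The Maximum Ratio Combining step mentioned above can be sketched in a few lines: each antenna branch is weighted by the conjugate of its channel estimate and summed, which maximizes the output SNR. In the toy example below the channel is simply known, whereas DEFORM predicts its relative phase with a CNN; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mrc_combine(rx, channel):
    """Maximum Ratio Combining.

    With per-antenna receptions r_k = h_k * s + n_k, the combiner
    y = sum_k conj(h_k) * r_k co-phases the branches and weights each
    by its channel gain, so the output SNR is the sum of branch SNRs.
    """
    return np.sum(np.conj(channel)[:, None] * rx, axis=0)

# Two-antenna toy example (channel assumed known here; DEFORM instead
# estimates the relative phase from the complex samples).
s = np.exp(2j * np.pi * 0.05 * np.arange(1000))          # unit-power signal
h = np.array([1.0 * np.exp(1j * 0.7), 0.8 * np.exp(-1j * 1.9)])
noise = 0.1 * (rng.normal(size=(2, 1000)) + 1j * rng.normal(size=(2, 1000)))
rx = h[:, None] * s[None, :] + noise
y = mrc_combine(rx, h)
```

This is why an accurate phase estimate matters: a phase error rotates the branches out of alignment, and the coherent-sum gain over a single antenna is lost.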

1 citation

Journal ArticleDOI
TL;DR: WRIST as discussed by the authors is a wideband, real-time, spectro-temporal RF identification system that can detect, classify, and precisely locate RF emissions in time and frequency using RF samples of 100 MHz spectrum in real time.
Abstract: RF emissions' detection, classification, and spectro-temporal localization are essential not only for understanding, managing, and protecting radio frequency resources, but also for countering today's security threats such as jammers. Achieving this goal for wideband, real-time operation remains challenging. In this paper, we present WRIST, a Wideband, Real-time, Spectro-Temporal RF Identification system. WRIST can detect, classify, and precisely locate RF emissions in time and frequency using RF samples of a 100 MHz spectrum in real time. The system leverages a one-stage object detection Deep Learning framework, and transfer learning to a multi-channel, visual spectral representation. Towards developing WRIST, we devised an iterative training approach which leverages synthesized and augmented RF data to efficiently build a large dataset with high-quality labels. WRIST achieves over 99% class detection accuracy and 94% emission precision and recall, with less than 0.08 bandwidth and time offset ratios, in a large anechoic-chamber over-the-air environment. In an extremely congested in-the-wild environment, WRIST still achieves over 80% precision and recall. WRIST currently supports five 2.4 GHz technologies (Bluetooth, Lightbridge, Wi-Fi, XPD, and ZigBee) and is easily extendable to others. We are making our curated dataset available to the whole community. It comprises over 10 million labelled RF emissions from off-the-shelf wireless radios spanning the five classes of technologies.

1 citation

Journal ArticleDOI
TL;DR: The obtained results, and the comparison with other works designed to accelerate the same types of architectures, show the efficiency and competitiveness of the proposed accelerator design through significantly improved performance and resource utilization.
Abstract: We present a new efficient OpenCL-based accelerator for large-scale Convolutional Neural Networks called “Fast Inference on FPGAs for Convolution Neural Network” (FFCNN). FFCNN is based on a deeply pipelined OpenCL kernel architecture. As pointed out before, high-level synthesis tools such as the OpenCL framework can easily port codes originally designed for CPUs/GPUs to FPGAs, but it is still difficult to make OpenCL codes run efficiently on FPGAs. This work aims to propose an efficient FPGA implementation of OpenCL high-performance computing applications. To this end, data reuse and task mapping techniques are also presented to improve design efficiency. In addition, the following motivations were taken into account when developing FFCNN: FFCNN has been designed to be easily implemented with the Intel OpenCL SDK-based FPGA design flow, and different techniques have been integrated into FFCNN to improve memory bandwidth and throughput. A performance analysis is conducted on two deep CNNs for large-scale image classification. The obtained results, and the comparison with other works designed to accelerate the same types of architectures, show the efficiency and competitiveness of the proposed accelerator design through significantly improved performance and resource utilization.

1 citation


Cited by
Posted Content
TL;DR: Motivated by research directions surveyed in the context of ML for wireless security, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified along with a roadmap to foster research efforts in bridging ML and wireless security.
Abstract: Wireless systems are vulnerable to various attacks such as jamming and eavesdropping due to the shared and broadcast nature of wireless medium. To support both attack and defense strategies, machine learning (ML) provides automated means to learn from and adapt to wireless communication characteristics that are hard to capture by hand-crafted features and models. This article discusses motivation, background, and scope of research efforts that bridge ML and wireless security. Motivated by research directions surveyed in the context of ML for wireless security, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified along with a roadmap to foster research efforts in bridging ML and wireless security.

35 citations

Proceedings ArticleDOI
01 Nov 2020
TL;DR: In this paper, a deep learning-based CAA detection framework is proposed to overcome the difficulties encountered in existing methods, such as long latency, high data collection overhead, and limited applicable range.
Abstract: Coping with diverse channel access attacks (CAAs) has been a major obstacle to realizing the full potential of wireless networks as a basic building block of smart applications. Identifying and classifying different types of CAAs in a timely manner is a great challenge because of the inherently shared nature and randomness of the wireless medium. To overcome the difficulties encountered in existing methods, such as long latency, high data collection overhead, and limited applicable range, a deep learning-based CAA detection framework is proposed in this paper. First, we show the challenges of CAA classification by analyzing the impacts of CAAs on wireless network performance using an event-driven network simulator. Second, a state-transition model is built for the channel access process at a node, whose output sequences characterize the changing patterns of the node's transmission status in different CAA scenarios. Third, a deep learning-based CAA classification framework is presented, which takes state-transition sequences of a node as input and outputs predicted CAA types. The performance of three deep neural networks, i.e., fully-connected, convolutional, and Long Short-Term Memory (LSTM) networks, for classifying CAAs is evaluated under our CAA classification framework in five CAA scenarios and the normal scenario without CAAs. Experimental results show that the LSTM outperforms the other two neural network architectures, and its CAA classification accuracy is higher than 95%. We successfully transferred the learned LSTM model to classify CAAs on other nodes in the same network and on nodes in other networks, which verifies the generality of our proposed framework.
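The key input above is a per-node sequence of channel-access states. The paper feeds such sequences to an LSTM; as a dependency-free illustration of the encoding idea only, the sketch below turns a status sequence into a fixed-length bigram-frequency feature vector (the LSTM itself is not reproduced, and the state names are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical channel-access states for a node; the paper's state-transition
# model defines its own states, which are not reproduced here.
STATES = ["idle", "backoff", "transmit", "collision"]
IDX = {s: i for i, s in enumerate(STATES)}

def bigram_features(seq):
    """Normalized counts of consecutive state pairs, flattened to shape (16,).

    Different CAAs perturb which transitions occur (e.g. many
    backoff->collision transitions under jamming), so these frequencies
    carry class-discriminative information.
    """
    counts = np.zeros((len(STATES), len(STATES)))
    for a, b in zip(seq, seq[1:]):
        counts[IDX[a], IDX[b]] += 1
    total = counts.sum()
    return (counts / total).ravel() if total else counts.ravel()

feats = bigram_features(
    ["idle", "backoff", "transmit", "idle", "backoff", "collision"]
)
```

An LSTM, as used in the paper, goes further by modeling long-range order in the sequence rather than just pairwise transition frequencies.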

5 citations

Posted Content
TL;DR: Interestingly, the efficacy of randomization in improving detection accuracy and the generalization capability of certain deep neural network architectures with Bootstrap Aggregating (Bagging) is unveiled.
Abstract: We demonstrate a first example for employing deep learning in predicting frame errors for a Collaborative Intelligent Radio Network (CIRN) using a dataset collected during participation in the final scrimmages of the DARPA SC2 challenge. Four scenarios are considered based on randomizing or fixing the strategy for bandwidth and channel allocation, and either training and testing with different links or using a pilot phase for each link to train the deep neural network. We also investigate the effect of latency constraints, and uncover interesting characteristics of the predictor over different Signal to Noise Ratio (SNR) ranges. The obtained insights open the door for implementing a deep-learning-based strategy that is scalable to large heterogeneous networks, generalizable to diverse wireless environments, and suitable for predicting frame error instances and rates within a congested shared spectrum.

4 citations

Journal ArticleDOI
TL;DR: The development history and application fields of some representative neural networks are introduced and the importance of studying deep learning technology is pointed out, as well as the reasons and advantages of using FPGA to accelerate deep learning.
Abstract: Deep learning based on neural networks has been widely used in image recognition, speech recognition, natural language processing, automatic driving, and other fields, and has made breakthrough progress. FPGAs stand out in the field of accelerated deep learning thanks to advantages such as a flexible architecture and logic units, a high energy-efficiency ratio, strong compatibility, and low delay. In order to track the latest research results of FPGA-based neural network optimization technology and to keep abreast of current research hotspots and application fields, the related technologies and research contents are reviewed. This paper introduces the development history and application fields of some representative neural networks and points out the importance of studying deep learning technology, as well as the reasons and advantages of using FPGAs to accelerate deep learning. Several common neural network models are introduced. Moreover, this paper reviews the current mainstream FPGA-based neural network acceleration technologies, methods, accelerators, and acceleration framework designs and their latest research status, pointing out the difficulties currently facing FPGA-based neural network applications and the corresponding solutions, as well as prospective future research directions. We hope that this work can provide insightful research ideas for researchers engaged in the field of FPGA-based neural network acceleration.

3 citations

Proceedings ArticleDOI
16 May 2022
TL;DR: This work moves beyond initial successes in RF ML toward expanding the solution for eventual edge operation in a larger system, showing new results on how to handle difficult cases of signal interference, how to efficiently improve performance with the use of geolocation information, and ways to further reduce computational constraints for edge operation.
Abstract: Great progress has been made recently in radio frequency (RF) machine learning (ML), including RF fingerprinting. Much of this work to date, however, has been limited in scope to proof-of-concept demonstrations or narrowly defined and tested under circumstances that only address part of the actual operational considerations. In this paper we expand this consideration for an RF fingerprinting application. In doing so, we build on our previous work developing our RiftNet™ deep learning classifier to consider realistic operational and systems aspects to ensure the solution is robust to real-world signal environments and other tasks of an RF receiver. In particular, we show new results on how to handle difficult cases of signal interference, how to efficiently improve performance with the use of geolocation information, and ways to further reduce computational constraints for edge operation. In summary, this work moves beyond initial successes in RF ML toward expanding the solution for eventual edge operation in a larger system.

2 citations