
Showing papers by "Linga Reddy Cenkeramaddi published in 2022"


Journal ArticleDOI
TL;DR: The spoofing scenario results show that using the predicted fused state provides the same accuracy as a GPS receiver in a clean environment, and because the innovation is calculated using the predicted fused state, the number of satellite signals has no effect on PRMSE.
Abstract: In today's world, Global Positioning System (GPS)-based navigation is inexpensive for providing position, velocity, and time (PVT) information. GPS receivers are widely used on unmanned aerial vehicles (UAVs), and these targets are vulnerable to deliberate interference such as spoofing. In this paper, GPS spoofing detection and mitigation for UAVs are proposed using distributed radar ground stations equipped with a local tracker. In the proposed approach, UAVs and local trackers are linked to the fusion node. The UAVs estimate their position and covariance using the extended Kalman filter framework and send it to a fusion node as primary data. Simultaneously, the time-varying kinematics of the UAVs are estimated using the extended Kalman filter and global nearest neighbor association tracker frameworks, and this data is transmitted to the central fusion node as secondary data. A track-to-track association is proposed to detect spoofing attacks using the available primary and secondary data. After detecting the spoofing attack, the secondary data is subjected to a correlation-free fusion. We propose using this fused state as a control input to the UAVs to mitigate the spoofing attack. The spoofing scenario results show that using the predicted fused state provides the same accuracy as a GPS receiver in a clean environment. Furthermore, because the innovation is calculated using the predicted fused state, the number of satellite signals has no effect on PRMSE. Additionally, in terms of PRMSE, radars with low measurement noise outperform radars with high measurement noise. The proposed algorithm is best suited for use in drone swarm applications.
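The track-to-track consistency test described above can be sketched as a chi-square check on the difference between the GPS-based estimate and the radar tracker's estimate. Everything below (2-D position states, the combined-covariance form for uncorrelated tracks, the 99% gate value) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

def spoofing_flag(x_primary, P_primary, x_secondary, P_secondary, gate=9.21):
    """Flag spoofing when two tracks of the same UAV are inconsistent.

    gate=9.21 is the 99% chi-square threshold for 2 degrees of freedom.
    """
    d = x_primary - x_secondary        # track difference
    S = P_primary + P_secondary        # combined covariance (uncorrelated tracks)
    m2 = d @ np.linalg.solve(S, d)     # squared Mahalanobis distance
    return bool(m2 > gate)

# Example: consistent tracks vs. a spoofed (offset) GPS position
P = np.eye(2) * 4.0
print(spoofing_flag(np.array([100.0, 50.0]), P, np.array([101.0, 49.0]), P))  # False
print(spoofing_flag(np.array([300.0, 50.0]), P, np.array([101.0, 49.0]), P))  # True
```

When the flag is raised, the paper's scheme switches the UAV's control input to the radar-derived fused state instead of the (possibly spoofed) GPS solution.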

39 citations


Journal ArticleDOI
TL;DR: This paper provides a comprehensive review on the state-of-the-art embedded sensors, communication technologies, computing platforms and machine learning techniques used in autonomous UAVs.
Abstract: Unmanned aerial vehicles (UAVs) are increasingly becoming popular due to their use in many commercial and military applications, and their affordability. The UAVs are equipped with various sensors, hardware platforms and software technologies which enable them to support the diverse application portfolio. Sensors include vision-based sensors such as RGB-D cameras, thermal cameras, light detection and ranging (LiDAR), mmWave radars, ultrasonic sensors, and an inertial measurement unit (IMU) which enable UAVs for autonomous navigation, obstacle detection, collision avoidance, object tracking and aerial inspection. To enable smooth operation, UAVs utilize a number of communication technologies such as wireless fidelity (Wi-Fi), long range (LoRa), long-term evolution for machine-type communication (LTE-M), etc., along with various machine learning algorithms. However, each of these technologies comes with its own set of advantages and challenges. Hence, it is essential to have an overview of the different types of sensors, computing and communication modules and algorithms used for UAVs. This paper provides a comprehensive review of the state-of-the-art embedded sensors, communication technologies, computing platforms and machine learning techniques used in autonomous UAVs. The key performance metrics along with operating principles and a detailed comparative study of the various technologies are also studied and presented. The information gathered in this paper aims to serve as a practical reference guide for designing smart sensing applications, low-latency and energy-efficient communication strategies, power-efficient computing modules and machine learning algorithms for autonomous UAVs. Finally, some of the open issues and challenges for future research and development are also discussed.

32 citations


Journal ArticleDOI
TL;DR: This work presents video-based hand gesture recognition using a depth camera and a lightweight convolutional neural network (CNN) model, and compares the accuracy of the proposed lightweight CNN model with state-of-the-art hand gesture classification models.
Abstract: Hand gestures are a well-known and intuitive method of human-computer interaction. The majority of the research has concentrated on hand gesture recognition from RGB images; however, little work has been done on recognition from videos. In addition, RGB cameras are not robust in varying lighting conditions. Motivated by this, we present video-based hand gesture recognition using a depth camera and a lightweight convolutional neural network (CNN) model. We constructed a dataset and then used a lightweight CNN model to detect and classify hand movements efficiently. We also examined the classification accuracy with a limited number of frames in a video gesture. We compare the depth camera's video gesture recognition performance to that of the RGB camera. We evaluate the proposed model's performance on edge computing devices and compare it to benchmark models in terms of accuracy and inference time. The proposed model results in an accuracy of 99.48% on the RGB version of the test dataset and 99.18% on the depth version of the test dataset. Finally, we compare the accuracy of the proposed lightweight CNN model with state-of-the-art hand gesture classification models.

11 citations


Journal ArticleDOI
TL;DR: The design objectives and the methodologies used by LPWAN to provide extensive coverage for low-power devices, as well as their system architectures and standards, are discussed.
Abstract: Low-power wide-area networks (LPWANs) are gaining popularity in the research community due to their low power consumption, low cost, and wide geographical coverage. LPWAN technologies complement and outperform short-range and traditional cellular wireless technologies in a variety of applications, including smart city development, machine-to-machine (M2M) communications, healthcare, intelligent transportation, industrial applications, climate-smart agriculture, and asset tracking. This review paper discusses the design objectives and the methodologies used by LPWAN to provide extensive coverage for low-power devices. We also explore how the presented LPWAN architecture employs various topologies such as star and mesh. We examine many current and emerging LPWAN technologies, as well as their system architectures and standards, and evaluate their ability to meet each design objective. In addition, the possible coexistence of LPWAN with other technologies, combining the best attributes to provide an optimum solution, is also explored and reported in the current overview. Following that, a comparison of various LPWAN technologies is performed and their market opportunities are also investigated. Furthermore, an analysis of various LPWAN use cases is performed, highlighting their benefits and drawbacks. This aids in the selection of the best LPWAN technology for various applications. Before concluding the work, the open research issues and challenges in designing LPWANs are presented.

8 citations


Journal ArticleDOI
TL;DR: A novel method is proposed that computes the optimum height at which a UAV should hover, resulting in the maximum coverage radius with a sufficiently small outage probability; the results validate the use of the proposed method in large-scale network applications.
Abstract: With millions of devices connected together, the Internet of Things (IoT) has become an emerging technology for future wireless networks. The ever-increasing number of smart devices and data-hungry applications demand a high Quality-of-Service (QoS) for IoT. In conventional networks, sending data to the cloud for computation leads to poor QoS. In order to address QoS challenges, mobile edge networks have emerged as a promising solution. In edge networks, bringing the network resources closer to the end devices results in improved QoS. The maneuverability and the ease of versatile deployment coupled with cost efficiency make unmanned aerial vehicles (UAVs) a promising candidate for future edge networks. The UAVs can act as edge servers to provide computational capabilities and improved services to the edge devices. Due to their flying ability, UAVs can establish a better line-of-sight link with the ground devices. In this paper, we consider that the edge devices in the area of interest have to be facilitated with a certain desired QoS, which is based on the notion of the outage probability of the wireless link between the UAV and the edge devices. In this context, we first propose a novel method that computes the optimum height at which a UAV should hover, resulting in the maximum coverage radius with a sufficiently small outage probability. Then the geographical area is divided into an optimal number of clusters using a novel algorithm based on K-means clustering. The method computes the optimum number of UAVs required for covering the area of interest. Each of the UAVs utilizes 3D beamforming in order to cover its own coverage area. For this purpose, we apply a coordinate transformation to the original area and form a wide beam to cover the desired area. The obtained results demonstrate the effectiveness of the proposed method when compared to existing methods, which validates its use in large-scale network applications.
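The height-versus-coverage trade-off can be illustrated with a simple numeric search: raising the UAV improves the line-of-sight probability but lengthens the link, so an intermediate hover height maximizes the coverage radius for a given path-loss budget. The sketch below uses the widely cited air-to-ground LoS-probability model with assumed urban constants, carrier frequency, and budget; it is not the paper's exact derivation:

```python
import math

A, B = 9.61, 0.16              # assumed urban environment constants
ETA_LOS, ETA_NLOS = 1.0, 20.0  # assumed excess path losses (dB)
FREQ = 2.0e9                   # carrier frequency (Hz)
PL_MAX = 103.0                 # assumed path-loss budget for the QoS target (dB)

def path_loss_db(h, r):
    """Mean air-to-ground path loss at hover height h and ground radius r (m)."""
    d = math.hypot(h, r)
    theta = math.degrees(math.atan2(h, r))                  # elevation angle
    p_los = 1.0 / (1.0 + A * math.exp(-B * (theta - A)))    # LoS probability
    fspl = 20.0 * math.log10(4.0 * math.pi * FREQ * d / 3.0e8)
    return fspl + p_los * ETA_LOS + (1.0 - p_los) * ETA_NLOS

def coverage_radius(h):
    """Largest ground radius still meeting the path-loss budget (1 m steps)."""
    r = 1.0
    while path_loss_db(h, r) <= PL_MAX:
        r += 1.0
    return r - 1.0

# Grid search over hover heights for the radius-maximizing height
best_h = max(range(50, 2001, 10), key=coverage_radius)
print(best_h, coverage_radius(best_h))
```

Too low a height leaves most ground users in NLoS conditions; too high a height wastes the budget on free-space loss, which is why the search lands at an interior optimum.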

6 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of state-of-the-art spectrum cartography techniques for the construction of various radio environment maps (REMs), including the channel gain map, power spectral density (PSD) map, spectrum map, and power propagation map.

5 citations


Journal ArticleDOI
TL;DR: In this paper, a detailed description of the specialized hardware-based accelerators used in the training and/or inference of DNNs is presented, along with a comparative study of the various accelerators based on factors like power, area, and throughput.
Abstract: In the modern-day era of technology, a paradigm shift has been witnessed in the areas involving applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Specifically, Deep Neural Networks (DNNs) have emerged as a popular field of interest in most AI applications such as computer vision, image and video processing, robotics, etc. In the context of developed digital technologies and the availability of authentic data and data handling infrastructure, DNNs have been a credible choice for solving more complex real-life problems. The performance and accuracy of a DNN can be far better than human intelligence in certain situations. However, it is noteworthy that DNNs are computationally cumbersome in terms of the resources and time required to handle these computations. Furthermore, general-purpose architectures like CPUs have issues in handling such computationally intensive algorithms. Therefore, a lot of interest and efforts have been invested by the research fraternity in specialized hardware architectures such as the Graphics Processing Unit (GPU), Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), and Coarse Grained Reconfigurable Array (CGRA) in the context of effective implementation of computationally intensive algorithms. This paper brings forward the various research works on the development and deployment of DNNs using the aforementioned specialized hardware architectures and embedded AI accelerators. The review discusses the detailed description of the specialized hardware-based accelerators used in the training and/or inference of DNNs. A comparative study based on factors like power, area, and throughput is also made on the various accelerators discussed. Finally, future research and development directions, such as future trends in DNN implementation on specialized hardware accelerators, are discussed.
This review article is intended to guide hardware architects to accelerate and improve the effectiveness of deep learning research.

4 citations


Journal ArticleDOI
TL;DR: This work proposes a lightweight deep convolutional neural network (CNN) in conjunction with spectrograms for an efficient background sound classification with practical human speech signals and outperforms the benchmark models in terms of both accuracy and inference time when evaluated on edge computing devices.
Abstract: Recognizing background information in human speech signals is a task that is extremely useful in a wide range of practical applications, and many articles on background sound classification have been published. It has not, however, been addressed with background sounds embedded in real-world human speech signals. Thus, this work proposes a lightweight deep convolutional neural network (CNN) in conjunction with spectrograms for efficient background sound classification with practical human speech signals. The proposed model classifies 11 different background sounds such as airplane, airport, babble, car, drone, exhibition, helicopter, restaurant, station, street, and train sounds embedded in human speech signals. The proposed deep CNN model consists of four convolution layers, four max-pooling layers, and one fully connected layer. The model is tested on human speech signals with varying signal-to-noise ratios (SNRs). Based on the results, the proposed deep CNN model utilizing spectrograms achieves an overall background sound classification accuracy of 95.2% using human speech signals with a wide range of SNRs. It is also observed that the proposed model outperforms the benchmark models in terms of both accuracy and inference time when evaluated on edge computing devices.
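The spectrogram front end mentioned above can be sketched as a short-time Fourier transform of the speech signal; the window length, hop, and log compression below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Log-magnitude spectrogram, shape (n_frames, n_fft // 2 + 1)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    return np.log1p(spec)  # compress dynamic range for use as CNN input

# Example: a 440 Hz tone sampled at 8 kHz concentrates in one frequency bin
fs = 8000
t = np.arange(fs) / fs
S = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = S.mean(axis=0).argmax()
print(peak_bin)  # bin 14, i.e. roughly 440 Hz at 31.25 Hz/bin resolution
```

Such a 2-D time-frequency image is what the CNN then treats like an ordinary picture for classification.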

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a reward criterion based on the percentage of positive and negative rewards received by an agent, which gives rise to three different reward classes: balanced class, skewed positive class, and skewed negative class.

4 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an implicit Wiener filter-based algorithm for speech enhancement using an edge computing system. The algorithm is evaluated on two speech utterances, one uttered by a male speaker and the other by a female speaker; both utterances are degraded by different types of non-stationary noise (exhibition, station, drone, helicopter, and airplane) and by white Gaussian stationary noise at different signal-to-noise ratios.
Abstract: Speech enables easy human-to-human communication as well as human-to-machine interaction. However, the quality of speech degrades due to background noise in the environment, such as drone noise embedded in speech during search and rescue operations. Similarly, helicopter noise, airplane noise, and station noise reduce the quality of speech. Speech enhancement algorithms reduce background noise, resulting in a crystal clear and noise-free conversation. For many applications, it is also necessary to process these noisy speech signals at the edge node level. Thus, we propose an implicit Wiener filter-based algorithm for speech enhancement using an edge computing system. In the proposed algorithm, a first-order recursive equation is used to estimate the noise. The performance of the proposed algorithm is evaluated for two speech utterances, one uttered by a male speaker and the other by a female speaker. Both utterances are degraded by different types of non-stationary noises such as exhibition, station, drone, helicopter, and airplane noise, and by white Gaussian stationary noise with different signal-to-noise ratios. Further, we compare the performance of the proposed speech enhancement algorithm with the conventional spectral subtraction algorithm. Performance evaluations using objective speech quality measures demonstrate that the proposed speech enhancement algorithm outperforms the spectral subtraction algorithm in estimating the clean speech from the noisy speech. Finally, we implement the proposed speech enhancement algorithm, in addition to the spectral subtraction algorithm, on the Raspberry Pi 4 Model B, which is a low-power edge computing device.
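A spectral Wiener-type enhancer with a first-order recursive noise estimate, as the abstract describes, can be sketched as follows. The smoothing factor, frame size, and the decision to update the noise estimate in every frame are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def enhance(noisy_frames, alpha=0.98):
    """noisy_frames: (n_frames, n_fft) real frames; returns enhanced frames."""
    spec = np.fft.rfft(noisy_frames, axis=1)
    power = np.abs(spec) ** 2
    noise = power[0].copy()                 # initialize from the first frame
    out = np.empty_like(spec)
    for i, p in enumerate(power):
        # first-order recursive noise power estimate
        noise = alpha * noise + (1.0 - alpha) * p
        # SNR proxy and the classic Wiener gain H = SNR / (SNR + 1)
        snr = np.maximum(p / np.maximum(noise, 1e-12) - 1.0, 0.0)
        gain = snr / (snr + 1.0)
        out[i] = gain * spec[i]
    return np.fft.irfft(out, n=noisy_frames.shape[1], axis=1)

# Example: frames containing only white noise are strongly attenuated
rng = np.random.default_rng(0)
noise_only = rng.standard_normal((50, 256))
cleaned = enhance(noise_only)
print(np.mean(cleaned ** 2) < np.mean(noise_only ** 2))  # True
```

Because the gain is always below one and collapses toward zero when the frame power matches the tracked noise floor, noise-dominated regions are suppressed while high-SNR speech regions pass nearly unchanged.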

4 citations


Proceedings ArticleDOI
10 Jan 2022
TL;DR: This work proposes two kinds of user grouping and pairing schemes that differ in the order in which CoMP and NOMA are performed for a group of users, showing that the proposed schemes can be used to achieve a suitable coverage-throughput trade-off in UDNs.
Abstract: Non-orthogonal multiple access (NOMA) is a next-generation multiple access technology to improve users' throughput and spectral efficiency for 5G and beyond cellular networks. Similarly, coordinated multi-point transmission and reception (CoMP) is an existing technology to improve the coverage of cell-edge users. Hence, NOMA with CoMP can potentially enhance the throughput and coverage of the users. However, the order of implementation of CoMP and NOMA can significantly impact the system performance of ultra-dense networks (UDNs). Motivated by this, we study the performance of the CoMP and NOMA-based UDN by proposing two kinds of user grouping and pairing schemes that differ in the order in which CoMP and NOMA are performed for a group of users. Detailed simulation results are presented, comparing the proposed schemes with state-of-the-art systems with varying user and base station densities. Through numerical results, we show that the proposed schemes can be used to achieve a suitable coverage-throughput trade-off in UDNs.
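The basic NOMA rate mechanics behind such pairing schemes can be illustrated for a single two-user downlink pair: the far (weak) user decodes its own signal treating the near user's signal as interference, while the near user removes the far user's signal via successive interference cancellation (SIC) before decoding. The power split, channel gains, and transmit SNR below are assumed example values, not the paper's UDN simulation setup:

```python
import math

def noma_pair_rates(g_near, g_far, a_far=0.8, snr_tx=100.0):
    """Per-user achievable rates (bits/s/Hz) for one two-user NOMA pair.

    g_near > g_far are normalized channel gains; a_far is the power
    fraction allocated to the far user; snr_tx is transmit SNR (linear).
    """
    a_near = 1.0 - a_far
    # Far user: the near user's superimposed signal acts as interference
    r_far = math.log2(1 + a_far * g_far * snr_tx / (a_near * g_far * snr_tx + 1))
    # Near user: cancels the far user's signal (SIC), then decodes its own
    r_near = math.log2(1 + a_near * g_near * snr_tx)
    return r_near, r_far

r_near, r_far = noma_pair_rates(g_near=1.0, g_far=0.1)
print(round(r_near, 2), round(r_far, 2))
```

How users are grouped into such pairs (before or after CoMP set selection) is exactly the degree of freedom the paper's two schemes explore.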

DOI
TL;DR: In this article, a frequency-modulated continuous-wave (FMCW) mmWave radar is integrated with a pan, tilt, and zoom (PTZ) camera to automate camera steering and direct the radar toward the person facing the camera.
Abstract: The demand for noncontact breathing and heart rate measurement is increasing. In addition, because of the high demand for medical services and the scarcity of on-site personnel, the measurement process must be automated in unsupervised conditions with high reliability and accuracy. In this article, we propose a novel automated process for measuring breathing rate and heart rate with mmWave radar and classifying these two vital signs with machine learning. A frequency-modulated continuous-wave (FMCW) mmWave radar is integrated with a pan, tilt, and zoom (PTZ) camera to automate camera steering and direct the radar toward the person facing the camera. The obtained signals are then fed into a deep convolutional neural network to classify them into breathing and heart signals, each categorized as low, normal, or high, yielding six classes in combination. This classification can be used in medical diagnostics by medical personnel. The average classification accuracy obtained is 87%, with a precision, recall, and F1 score of 0.93.
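The separation of breathing and heart rates from a radar chest-displacement signal can be illustrated by band-limiting the signal and reading each rate off the FFT peak. The band edges and the synthetic two-tone "chest" signal below are assumptions for illustration, not the paper's processing chain:

```python
import numpy as np

def rate_bpm(x, fs, f_lo, f_hi):
    """Dominant rate (per minute) of x within the band [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec = np.abs(np.fft.rfft(x))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 60.0 * freqs[band][spec[band].argmax()]

# Synthetic chest motion: slow, large breathing + fast, small heartbeat
fs = 20.0
t = np.arange(0, 60, 1 / fs)                 # 60 s of data at 20 Hz
chest = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.2 * t)
print(rate_bpm(chest, fs, 0.1, 0.5))   # breathing band: ~15 breaths/min
print(rate_bpm(chest, fs, 0.8, 2.0))   # heart band: ~72 beats/min
```

The paper's CNN then only has to decide whether each extracted rate falls in a low, normal, or high range.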

DOI
TL;DR: In this paper, the authors presented the design and fabrication of a novel seesaw device incorporating a diaphragm and a Fiber Bragg Grating (FBG) sensor to measure the pressure of liquids.
Abstract: Pressure sensors are used in various industrial applications, assisting in preventing unintended disasters. This paper presents the design and fabrication of a novel seesaw device incorporating a diaphragm and a Fiber Bragg Grating (FBG) sensor to measure the pressure of liquids. The designed sensor has been tested in a static water column. The proposed design enables the user to easily make and modify the diaphragm based on the required pressure range without interfering with the FBG sensor. The developed pressure sensor produces improved accuracy and sensitivity to applied liquid pressure in both low- and high-pressure ranges without requiring sophisticated sensor construction. A finite element analysis has been performed on the diaphragm and on the entire structure at 10 bar pressure. The deformation of the diaphragm is comparable to theoretical deformation levels. A copper diaphragm with a thickness of 0.25 mm is used in the experiments. All experiments are performed in the elastic region of the diaphragm. A sensitivity of 19.244 nm/MPa with a linearity of 99.64% is obtained from the experiments. Also, the proposed sensor's performance is compared with recently reported pressure sensors.

Journal ArticleDOI
TL;DR: In this article, the authors presented a dataset containing low-resolution thermal images corresponding to various sign language digits represented by hand, captured using the Omron D6T thermal camera, which has a resolution of 32×32 pixels.

Journal ArticleDOI
TL;DR: A Reinforcement Learning based Fault-Tolerant Routing (RL-FTR) algorithm is proposed to tackle the routing issues caused by link and router faults in the mesh-based NoC architecture and provides an optimal routing path from the source router to the destination router.
Abstract: Network-on-Chip (NoC) has emerged as the most promising on-chip interconnection framework in Multi-Processor System-on-Chips (MPSoCs) due to its efficiency and scalability. At the deep sub-micron level, NoCs are vulnerable to faults, which lead to the failure of network components such as links and routers. Failures in NoC components diminish system efficiency and reliability. This paper proposes a Reinforcement Learning based Fault-Tolerant Routing (RL-FTR) algorithm to tackle the routing issues caused by link and router faults in the mesh-based NoC architecture. The efficiency of the proposed RL-FTR algorithm is examined using a System-C based cycle-accurate NoC simulator. Simulations are carried out by increasing the number of link and router faults in various sizes of mesh. Following the simulations, real-time functioning of the proposed RL-FTR algorithm is observed using an FPGA implementation. Results of the simulation and hardware show that the proposed RL-FTR algorithm provides an optimal routing path from the source router to the destination router.
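The core idea of learning routes around faulty links can be illustrated with tabular Q-learning on a small mesh. The mesh size, fault set, rewards, and training schedule below are toy assumptions, not the paper's RL-FTR parameters or its hardware realization:

```python
import random

SIZE = 4
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
# Faulty links, stored in both directions
FAULTY = {((1, 0), (2, 0)), ((2, 0), (1, 0)), ((1, 1), (1, 2)), ((1, 2), (1, 1))}
DEST = (3, 3)

def step(state, action):
    """One hop in the mesh; blocked hops (wall or faulty link) stay put."""
    dx, dy = MOVES[action]
    nxt = (state[0] + dx, state[1] + dy)
    if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE) or (state, nxt) in FAULTY:
        return state, -1.0
    return nxt, (10.0 if nxt == DEST else -0.1)

random.seed(1)
Q = {(x, y): {a: 0.0 for a in MOVES} for x in range(SIZE) for y in range(SIZE)}
for _ in range(3000):                       # epsilon-greedy training episodes
    s = (0, 0)
    for _ in range(50):
        a = random.choice(list(MOVES)) if random.random() < 0.3 else max(Q[s], key=Q[s].get)
        s2, r = step(s, a)
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2].values()) - Q[s][a])
        s = s2
        if s == DEST:
            break

# Greedy rollout of the learned policy: a route that avoids the faulty links
s, path = (0, 0), [(0, 0)]
while s != DEST and len(path) < 20:
    s, _ = step(s, max(Q[s], key=Q[s].get))
    path.append(s)
print(path)
```

In the paper's setting the same principle is realized per router in hardware, so routing decisions adapt online as links and routers fail.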

Journal ArticleDOI
TL;DR: In this article, an update to the previously published low-resolution thermal imaging dataset is presented; the update contains high-resolution thermal images corresponding to various hand gestures captured using the FLIR Lepton 3.5 thermal camera and the PureThermal 2 breakout board.

Proceedings ArticleDOI
20 Jun 2022
TL;DR: This work investigates the impacts of RIS elements' amplitude and practical phase-shift model on the overall network sum rate and required transmit power and demonstrates a significant performance improvement using the proposed method when compared to ideal phase shift and random active elements selection models.
Abstract: Reflective intelligent surfaces (RIS) are emerging as a promising solution to alleviate the spectral efficiency challenges and energy consumption issues of edge networks. RIS are comprised of programmable, passive, and low-cost electromagnetic elements that assist blockage reduction during signal propagation over wireless channels. The quality-of-service (QoS) challenges over RIS-assisted edge networks can further be improved by the efficient utilization of RIS elements. In particular, in this work, the impacts of the RIS elements' amplitude and practical phase-shift model are investigated on the overall network sum rate and required transmit power. Moreover, considering the practical reflection model of RIS, computation of the optimal number of active RIS elements is also performed by putting constraints over QoS conditions. Extensive experimental results demonstrate a significant performance improvement using the proposed method when compared to ideal phase-shift and random active element selection models.
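The distinction between ideal and practical phase-shift models can be sketched with a phase-dependent amplitude response of the kind commonly used for RIS elements: an ideal model assumes unit reflection amplitude at every phase, while a practical element reflects more weakly at some phase shifts. The functional form and the constants beta_min, alpha, and phi below are assumed illustrative values, not necessarily the model or parameters used in the paper:

```python
import math

def reflection_amplitude(theta, beta_min=0.2, alpha=1.6, phi=0.43 * math.pi):
    """Reflection amplitude of one RIS element at phase shift theta (radians).

    A practical model: amplitude dips toward beta_min near unfavorable
    phases instead of staying at 1 as the ideal model assumes.
    """
    return (1 - beta_min) * ((math.sin(theta - phi) + 1) / 2) ** alpha + beta_min

# Some phase shifts reflect strongly, others much more weakly
strong = reflection_amplitude(0.43 * math.pi + math.pi / 2)  # sine peak
weak = reflection_amplitude(0.43 * math.pi - math.pi / 2)    # sine trough
print(round(strong, 3), round(weak, 3))
```

This amplitude loss is what makes the choice of which elements to keep active a nontrivial optimization under QoS constraints.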

DOI
TL;DR: In this article , a UAV classification method that utilizes fixed boundary empirical wavelet sub-bands of radio frequency (RF) fingerprints and a deep convolutional neural network (CNN) is proposed.
Abstract: Unmanned aerial vehicle (UAV) classification and identification have many applications in a variety of fields, including UAV tracking systems, antidrone systems, intrusion detection systems, military, space research, product delivery, agriculture, search and rescue, and internet carriers. It is challenging to identify a specific drone and/or type in critical scenarios, such as intrusion. In this article, a UAV classification method that utilizes fixed boundary empirical wavelet sub-bands of radio frequency (RF) fingerprints and a deep convolutional neural network (CNN) is proposed. In the proposed method, RF fingerprints collected from UAV receivers are decomposed into 16 fixed boundary empirical wavelet sub-band signals. These sub-band signals are then fed into a lightweight deep CNN model to classify various types of UAVs. Using the proposed method, we classify a total of 15 different commercially available UAVs with an average testing accuracy of 97.25%. The proposed model is also tested with various numbers of sampling points in the signal. Furthermore, the proposed method is compared with recently reported works for classifying UAVs utilizing remote controller RF signals.
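The fixed-boundary sub-band idea can be sketched by splitting the spectrum into equal-width bands and inverting each band back to the time domain. Using 16 bands follows the abstract, but the rectangular (ideal) masks below are a simplifying assumption in place of empirical wavelet filters:

```python
import numpy as np

def fixed_subbands(x, n_bands=16):
    """Decompose x into n_bands time-domain signals with fixed spectral edges."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1, dtype=int)  # fixed boundaries
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(X)
        mask[lo:hi] = X[lo:hi]               # keep only this band's bins
        bands.append(np.fft.irfft(mask, n=len(x)))
    return np.array(bands)

# The sub-bands partition the spectrum, so they sum back to the original
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
bands = fixed_subbands(x)
print(np.allclose(bands.sum(axis=0), x))  # True
```

Each sub-band then serves as one input channel (or feature map) for the lightweight CNN.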


Journal ArticleDOI
TL;DR: In this article, a large dataset is created for accurate classification of hand gestures under complex backgrounds, made up of 29,718 RGB and depth frames corresponding to various hand gestures from different people, collected at different time instances.

DOI
TL;DR: In this paper, the authors used a combination of the gating technique within the Kalman filter framework and logic-based track management to maintain the track and navigate the GNSS's time-varying kinematics.
Abstract: Global navigation satellite system (GNSS) provides reliable positioning across the globe. However, GNSS is vulnerable to deliberate interference problems such as spoofing, which can cause false navigation. This article proposes navigation in a GNSS spoofing environment by taking the received power, correlation distortion function, and pseudorange measurement observation space into account. In the proposed approach, both actual and interference measurements are considered a set. Machine learning screens the authentic measurements from the accessible set using parameters such as received power and correlation function distortion. To maintain the track and navigate the GNSS's time-varying kinematics, we used a combination of the gating technique within the Kalman filter framework and logic-based track management. Machine learning classifiers such as support vector machines (SVMs), neural networks (NNs), ensemble, nearest neighbor, and decision trees are explored, and we observe that the linear SVM and NN provide a test accuracy of 98.20%. A time-varying position pull-off strategy is considered, and metrics such as position RMSE and track failure are compared with the conventional M-best algorithm. The results show that for four authentic measurements and spoof injections, there are only a few track failures. In contrast, even with an increase in spoof injections, track failures are zero in the case of six authentic measurements.
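The gating step referred to above can be sketched as a chi-square screen: each candidate measurement is compared against the Kalman filter's predicted measurement, normalized by the innovation covariance, and candidates outside the gate are rejected. The 2-D measurement space, covariance, and gate value below are illustrative assumptions:

```python
import numpy as np

def gate_measurements(z_pred, S, candidates, gate=9.21):
    """Keep candidates whose squared Mahalanobis distance lies inside the gate.

    gate=9.21 is the 99% chi-square threshold for 2 degrees of freedom.
    """
    S_inv = np.linalg.inv(S)
    kept = []
    for z in candidates:
        nu = z - z_pred                      # innovation
        if nu @ S_inv @ nu <= gate:          # chi-square gating test
            kept.append(z)
    return kept

z_pred = np.array([0.0, 0.0])                # predicted measurement
S = np.eye(2) * 2.0                          # innovation covariance
candidates = [np.array([1.0, 1.0]),          # plausible: inside the gate
              np.array([10.0, -8.0])]        # far off: likely spoofed
print(len(gate_measurements(z_pred, S, candidates)))  # 1
```

Measurements that survive the gate feed the filter update, while the track-management logic decides when too many rejections mean the track itself should be dropped.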


Journal ArticleDOI
TL;DR: A semi-automated method for extracting bit-slices from the Innovus SDP flow and it has been demonstrated that the proposed method results in 17% less density or use for a pixel buffer design.
Abstract: State-of-the-art modern microprocessor and domain-specific accelerator designs are dominated by data-paths composed of regular structures, also known as bit-slices. Random logic placement and routing techniques may not result in an optimal layout for these data-path-dominated designs. As a result, implementation tools such as Cadence’s Innovus include a Structured Data-Path (SDP) feature that allows data-path placement to be completely customized by constraining the placement engine. A relative placement file is used to provide these constraints to the tool. However, the tool neither extracts nor automatically places the regular data-path structures. In other words, the relative placement file is not automatically generated. In this paper, we propose a semi-automated method for extracting bit-slices from the Innovus SDP flow. It has been demonstrated that the proposed method results in 17% less density or use for a pixel buffer design. At the same time, the other performance metrics are unchanged when compared to the traditional place and route flow.

Proceedings ArticleDOI
01 Dec 2022
TL;DR: In this paper, the authors proposed a method for classifying UAVs from radio frequency (RF) fingerprints using time-frequency transformation and convolutional neural networks (CNN), which achieved an accuracy of 99.09% at a 387 kilobyte (KB) size and can run on a Raspberry Pi in 25.54 milliseconds.
Abstract: Unmanned aerial vehicles (UAVs) have recently gained significant interest in the research community owing to their unrivaled commercial chances in wireless communications, search and rescue, surveillance, logistics, delivery, and intelligent agriculture. In safety-critical applications such as intrusions, identifying the type of drone enhances the countermeasures. This paper proposes classifying UAVs from radio frequency (RF) fingerprints using time-frequency transformation and convolutional neural networks (CNN). The proposed methodology involves the wavelet synchrosqueezed transform (WSST) of the RF fingerprints followed by a proposed lightweight CNN model. The methodology is verified on a dataset containing fifteen different classes of drone RF fingerprints. The proposed CNN model size, Raspberry Pi deployment feasibility, and accuracy are compared with existing pre-trained state-of-the-art deep learning models. The proposed model achieves a testing accuracy of 99.09% at a 387 kilobyte (KB) size and runs on a Raspberry Pi in 25.54 milliseconds.

Proceedings ArticleDOI
01 Dec 2022
TL;DR: In this article, the average power and distortion correlation features are extracted from the multi-correlation output to classify the received GNSS signals in a multi-correlation GNSS receiver as interference-free, multipath, jamming, or spoofing.
Abstract: The global navigation satellite system (GNSS) provides accurate position, velocity, and time data all over the world. However, GNSS are susceptible to multipath effects in suburban, urban, and indoor environments. Furthermore, it is vulnerable to intentional interference such as jamming and spoofing, which can result in either no or false position estimates. This study focuses on classifying received GNSS signals in a multi-correlation GNSS receiver as interference-free, multipath, jamming, or spoofing. To classify the GNSS signal, the average power and distortion correlation features are extracted from the multi-correlation output. Various machine learning algorithms such as neural networks, support vector machines, nearest neighbors, kernel approximation, decision trees, discriminant analysis, naive Bayes, and ensemble classifiers are investigated and quantified in terms of test accuracy and confusion matrices.
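The kind of features involved can be illustrated with a toy multi-correlator model: a clean code-correlation triangle versus one distorted by a delayed multipath replica, from which an average-power feature and a simple early/late asymmetry "distortion" feature are computed. The tap spacing, multipath delay, and exact feature definitions below are illustrative assumptions, not the paper's features.

```python
import numpy as np

def corr_triangle(taps, delay=0.0, amp=1.0):
    """Ideal code-correlation triangle sampled at tap offsets (in chips)."""
    return amp * np.maximum(0.0, 1.0 - np.abs(taps - delay))

def features(corr):
    """Average correlator power and an early/late asymmetry 'distortion'."""
    power = np.mean(corr ** 2)
    distortion = np.sum(np.abs(corr - corr[::-1]))  # ~0 for a symmetric peak
    return power, distortion

taps = np.linspace(-1.0, 1.0, 21)            # symmetric early/prompt/late taps
clean = corr_triangle(taps)                   # interference-free signal
multipath = clean + corr_triangle(taps, delay=0.4, amp=0.5)  # delayed echo

print(features(clean), features(multipath))
```

A classifier then operates on such feature pairs: the multipath case shows both higher power and a nonzero asymmetry, while the clean case is symmetric.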

Proceedings ArticleDOI
01 Dec 2022
TL;DR: In this article, the authors classify 348 real images captured from a UAV and 696 synthetic images into three classes, simple, real, and Duplo lines, based on the geometry of the line.
Abstract: Identifying the type of a power transmission line from images is a fascinating field. It can lead to applications such as tower detection, line inspection, location detection, multi-fitting detection, fault detection, and foreign object detection. The data collection comprises 348 real-time images captured from a UAV and 696 synthetic images. Based on the geometry of the line, the whole data set is classified into three classes: simple, real, and Duplo lines. This paper feeds real and synthetic images to pretrained deep neural networks (DNN) for the line inspection application. In total, the top 32 pretrained models were tested on the dataset and their classification performance was evaluated. In addition, the DNN algorithms were also tested on the Raspberry Pi hardware platform to assess the run-time feasibility of this real-time application.
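The evaluation stage, overall and per-class accuracy from a confusion matrix over the three line classes, can be sketched generically. The labels and predictions below are fabricated for illustration; only the metric computation reflects standard practice.

```python
import numpy as np

CLASSES = ["simple", "real", "Duplo"]

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Fabricated labels for illustration (0=simple, 1=real, 2=Duplo).
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 1, 2, 2, 0, 2]

cm = confusion_matrix(y_true, y_pred, len(CLASSES))
accuracy = np.trace(cm) / cm.sum()
per_class = cm.diagonal() / cm.sum(axis=1)   # per-class recall
print(accuracy, dict(zip(CLASSES, per_class)))
```

The same computation applies unchanged to each of the 32 pretrained models being compared.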

Proceedings ArticleDOI
10 Apr 2022
TL;DR: Through extensive simulations, it is shown that the proposed SIC-RSRA mechanism outperforms the other mechanisms in terms of number of RACH successes and number of supported devices.
Abstract: The inclusion of massive machine-type-communication (mMTC) devices in 5G cellular Internet of Things (IoT) has significantly raised the issue of network congestion. To address this challenge, a successive interference cancellation-rate splitting random access (SIC-RSRA) mechanism is proposed in this paper. Unlike traditional mechanisms, all selected mMTC devices are allowed to make a finite number of repeated message requests in randomly selected time slots within a radio frame. The gNodeB, on the other hand, applies both intra-slot SIC (utilizing RSRA) and inter-slot SIC to decode messages from a larger number of devices. For the proposed mechanism, the impact of increasing the number of devices as well as the received power difference is investigated. Through extensive simulations, we show that the proposed mechanism outperforms the other mechanisms in terms of the number of RACH successes and the number of supported devices.
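The inter-slot SIC part of the mechanism can be sketched with a toy frame-level simulation: each device repeats its request in a few random slots, and the receiver iteratively decodes singleton slots and cancels the decoded devices' replicas everywhere else. The intra-slot rate-splitting SIC, power levels, and all parameter values here are omitted or invented; this only illustrates the inter-slot cancellation loop.

```python
import random

def irsa_round(n_devices, n_slots, reps=2, seed=0):
    """One frame of repetition slotted ALOHA with inter-slot SIC decoding.
    Intra-slot rate-splitting SIC from the paper is NOT modeled here."""
    rng = random.Random(seed)
    # Each device transmits 'reps' replicas in distinct random slots.
    slots = [set() for _ in range(n_slots)]
    for d in range(n_devices):
        for s in rng.sample(range(n_slots), reps):
            slots[s].add(d)
    decoded, progress = set(), True
    while progress:
        progress = False
        for s in slots:
            live = s - decoded
            if len(live) == 1:      # singleton slot: decode that device...
                decoded |= live     # ...and cancel its replicas everywhere
                progress = True
    return len(decoded)

print(irsa_round(n_devices=40, n_slots=100))  # RACH successes in this frame
```

Sweeping `n_devices` in such a loop is how curves like "number of RACH successes versus number of devices" are typically produced.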

Proceedings ArticleDOI
24 Nov 2022
TL;DR: In this article, a generalized pseudorange measurement model is presented by combining the authentic and spoofed pseudorange measurements, and the GNSS receiver's state is estimated by mitigating the spoofed pseudoranges; this is formulated as a Least Absolute Shrinkage and Selection Operator (LASSO) optimization problem.
Abstract: The Global Navigation Satellite Systems (GNSS) are widespread for providing Position, Velocity, and Time (PVT) information across the globe. A GNSS receiver usually employs the Extended Kalman Filter (EKF) framework to estimate its PVT information. Falsifying the receiver’s PVT information using mimicked GNSS signals is called a spoofing attack. This paper focuses mainly on combating spoofing attacks using sparse estimation theory. A generalized mathematical model is proposed for authentic and spoofed pseudoranges at the GNSS receiver. A generalized pseudorange measurement model is then presented by combining the authentic and spoofed pseudorange measurements. It is assumed that only a subset of the satellite signals is spoofed. The GNSS receiver’s state is then estimated while mitigating the spoofed pseudoranges, which is formulated as a Least Absolute Shrinkage and Selection Operator (LASSO) optimization problem. The simulation results compare the proposed LASSO-based EKF algorithm with the traditional EKF framework. It is observed that the proposed algorithm suppresses the spoofing effect. Moreover, the Position Root Mean Square Error (PRMSE) of the proposed algorithm decreases as the number of spoofed measurements increases.
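The flavor of the formulation can be sketched on a linearized snapshot model, ρ = Hx + b + n, where the sparse spoofing bias b is penalized with an L1 norm. The alternating least-squares / soft-thresholding solver, the geometry matrix, and all numbers below are our own illustrative assumptions, not the paper's algorithm (which embeds the estimator in an EKF).

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_state_estimate(H, rho, lam=5.0, iters=200):
    """Minimize 0.5*||rho - H x - b||^2 + lam*||b||_1 by alternating exact
    minimization over x (least squares) and b (soft thresholding)."""
    b = np.zeros(len(rho))
    for _ in range(iters):
        x, *_ = np.linalg.lstsq(H, rho - b, rcond=None)
        b = soft(rho - H @ x, lam)
    return x, b

rng = np.random.default_rng(1)
H = np.hstack([rng.standard_normal((8, 3)), np.ones((8, 1))])  # geometry + clock
x_true = np.array([10.0, -4.0, 7.0, 2.0])
b_true = np.zeros(8); b_true[[1, 5]] = 60.0                    # two spoofed ranges
rho = H @ x_true + b_true + 0.1 * rng.standard_normal(8)

x_est, b_est = lasso_state_estimate(H, rho)
spoofed = np.abs(b_est) > 0.5 * np.abs(b_est).max()            # flag large biases
x_refit, *_ = np.linalg.lstsq(H[~spoofed], rho[~spoofed], rcond=None)  # debias
```

The final refit over the unflagged measurements removes the shrinkage bias that the L1 penalty introduces into the estimated spoofing magnitudes.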

Proceedings ArticleDOI
24 Nov 2022
TL;DR: In this paper, an epoch-by-epoch robust positioning algorithm followed by the Grubbs outlier test is proposed to address the GPS spoofing problem; the robust positioning considers all possible combinations of measurements and generates several position estimates, which contain the actual position, the spoofed position, and biased positions.
Abstract: The Global Positioning System (GPS) is widely used to provide position, velocity, and time (PVT) details across the globe. This paper proposes an epoch-by-epoch robust positioning algorithm followed by the Grubbs outlier test to address the GPS spoofing problem. We propose accepting both authentic and spoofed GPS signals to compute the robust positions. The robust positioning considers all possible combinations of measurements and generates several position estimates, which contain the actual position, the spoofed position, and biased positions. In this case, the positions that evolved from spoofed pseudorange measurements must be removed. Hence, we model the elimination of spoofed locations as an outlier problem, which is addressed using the Grubbs outlier test. The median of the processed data after the Grubbs test is the positional information at that epoch. Moreover, this approach is also extended to the Kalman filter (KF) framework to address the time-varying kinematics of the target. Simulations are carried out for various numbers of actual and spoofed pseudorange measurements. To illustrate the robustness of the proposed technique, the position root mean square error (PRMSE) is taken as a metric.
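The outlier-rejection step can be sketched in one dimension: apply Grubbs' test repeatedly, removing the most extreme position estimate until the test no longer flags one, then take the median of what remains. The critical value uses the standard two-sided Grubbs formula (via SciPy's t-distribution); the sample position estimates are invented for illustration.

```python
import numpy as np
from scipy import stats

def grubbs_filter(x, alpha=0.05):
    """Iteratively remove the most extreme value while Grubbs' test flags it."""
    x = np.asarray(x, float)
    while len(x) > 2:
        mu, s = x.mean(), x.std(ddof=1)
        i = np.argmax(np.abs(x - mu))
        G = abs(x[i] - mu) / s
        n = len(x)
        t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
        if G <= g_crit:
            break
        x = np.delete(x, i)          # drop the flagged outlier and re-test
    return x

# Position estimates (1-D, metres) from measurement subsets: most near the
# true position, two driven by spoofed pseudoranges.
est = np.array([100.1, 99.8, 100.3, 99.9, 100.0, 100.2, 99.7, 100.1,
                150.0, 400.0])
kept = grubbs_filter(est)
print(np.median(kept))               # robust position for this epoch
```

Note that Grubbs' test removes one point per pass, so heavily clustered spoofed estimates can mask each other; the demo keeps the spoofed points well separated.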

DOI
01 Jun 2022
TL;DR: The proposed Tsetlin Machine model for aerial vehicle activity classification using mm-Wave frequency modulated continuous wave (FMCW) radar data is based on propositional logic, which makes it much more transparent and lighter than the existing models.
Abstract: Activity classification for aerial vehicles plays a vital role in privacy monitoring and security surveillance applications, which is crucial and valuable in modern times. This paper presents a Tsetlin Machine model for aerial vehicle activity classification using mm-Wave frequency modulated continuous wave (FMCW) radar data. The proposed Tsetlin Machine (TM) model is based on propositional logic, which makes it much more transparent and lighter than the existing models. It can also be easily transferred to hardware, making it useful even in practical circumstances. The model achieves 92.5% accuracy in activity classification, which is close to other lightweight classification models such as logistic regression, light gradient boosting machine (GBM), and support vector machine (SVM). Moreover, the proposed model’s accuracy is much better than that of pre-trained models such as VGG16, ResNet50, and InceptionResNet, with at least a $98 \times$ reduction in memory size.
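The transparency claim comes from the TM's inference rule: each clause is a plain conjunction of literals over booleanized features, and classification sums the votes of positive and negative clauses. The sketch below shows inference only; the clause structure, feature names, and classes are invented for illustration (a real TM learns its clauses from data).

```python
# Hand-crafted illustration of Tsetlin Machine *inference*: a clause is an
# AND over included literals; class score = (positive clause votes) minus
# (negative clause votes); the class with the highest score wins.

def clause(x, include, include_neg):
    """AND of x[i] for i in include and NOT x[i] for i in include_neg."""
    return all(x[i] for i in include) and all(not x[i] for i in include_neg)

def class_sum(x, pos_clauses, neg_clauses):
    return (sum(clause(x, inc, neg) for inc, neg in pos_clauses)
            - sum(clause(x, inc, neg) for inc, neg in neg_clauses))

# Two toy classes over 3 booleanized radar features (e.g. thresholded
# range-Doppler statistics -- purely illustrative).
hover  = ([([0], [2])], [([1], [])])   # (positive clauses, negative clauses)
flight = ([([1], [])],  [([0], [2])])

x = [True, False, False]               # one booleanized feature vector
scores = {"hover": class_sum(x, *hover), "flight": class_sum(x, *flight)}
print(max(scores, key=scores.get))     # -> hover
```

Because each decision reduces to which conjunctions fired, the model is directly inspectable and maps naturally onto hardware logic, which is the point the abstract makes.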