
Showing papers in "Sensors in 2022"


Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: This paper investigates different versions of the YOLO object detection method and compares their performances for the specific application of detecting a safe landing location for a UAV that has suffered an in-flight failure, and confirms the feasibility of utilizing these algorithms for effective emergency landing spot detection.
Abstract: In-flight system failure is one of the major safety concerns in the operation of unmanned aerial vehicles (UAVs) in urban environments. To address this concern, a safety framework consisting of the following three main tasks can be utilized: (1) Monitoring the health of the UAV and detecting failures, (2) Finding potential safe landing spots in case a critical failure is detected in step 1, and (3) Steering the UAV to a safe landing spot found in step 2. In this paper, we specifically look at the second task, where we investigate the feasibility of utilizing object detection methods to identify safe landing spots in case the UAV suffers an in-flight failure. Particularly, we investigate different versions of the YOLO object detection method and compare their performances for the specific application of detecting a safe landing location for a UAV that has suffered an in-flight failure. We compare the performance of YOLOv3, YOLOv4, and YOLOv5l while training them on a large aerial image dataset called DOTA on both a Personal Computer (PC) and a Companion Computer (CC). We plan to use the chosen algorithm on a CC that can be attached to a UAV, and the PC is used to verify the trends that we see between the algorithms on the CC. We confirm the feasibility of utilizing these algorithms for effective emergency landing spot detection and report their accuracy and speed for that specific application. Our investigation also shows that the YOLOv5l algorithm outperforms YOLOv4 and YOLOv3 in terms of detection accuracy, albeit with a slightly slower inference speed.
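As a hedged illustration of the detection step described above, the sketch below runs the publicly released YOLOv5l checkpoint via torch.hub on a single aerial frame and lists the detected objects, which a landing-spot search would treat as obstacles. The file name aerial.jpg, the confidence threshold, and the use of COCO-pretrained weights (rather than the paper's DOTA-trained weights) are assumptions for illustration only; the call also needs internet access the first time.

```python
# Minimal sketch: run a pretrained YOLOv5l model on one aerial frame and list
# detections that would count as obstacles when searching for a landing spot.
# Assumes internet access for torch.hub and a hypothetical file "aerial.jpg";
# the paper's DOTA-trained weights are not reproduced here.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5l", pretrained=True)
model.conf = 0.4                       # confidence threshold for detections

results = model("aerial.jpg")          # inference on one image
detections = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name

# Treat every detected object as an obstacle; a landing-spot search would then
# look for large image regions free of these boxes.
print(detections[["xmin", "ymin", "xmax", "ymax", "name"]])
```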

195 citations


Journal ArticleDOI
01 Mar 2022-Sensors
TL;DR: This systematic literature review offers a wide range of information on Industry 4.0 from the designing phase to security needs, from the deployment stage to the classification of the network, the difficulties, challenges, and future directions.
Abstract: The 21st century has seen rapid changes in technology, industry, and social patterns. Most industries have moved towards automation, and human intervention has decreased, which has led to a revolution in industries, named the fourth industrial revolution (Industry 4.0). Industry 4.0 or the fourth industrial revolution (IR 4.0) relies heavily on the Internet of Things (IoT) and wireless sensor networks (WSN). IoT and WSN are used in various control systems, including environmental monitoring, home automation, and chemical/biological attack detection. IoT devices and applications are used to process extracted data from WSN devices and transmit them to remote locations. This systematic literature review offers a wide range of information on Industry 4.0, finds research gaps, and recommends future directions. Seven research questions are addressed in this article: (i) What are the contributions of WSN in IR 4.0? (ii) What are the contributions of IoT in IR 4.0? (iii) What are the types of WSN coverage areas for IR 4.0? (iv) What are the major types of network intruders in WSN and IoT systems? (v) What are the prominent network security attacks in WSN and IoT? (vi) What are the significant issues in IoT and WSN frameworks? and (vii) What are the limitations and research gaps in the existing work? This study mainly focuses on research solutions and new techniques to automate Industry 4.0. In this research, we analyzed over 130 articles from 2014 until 2021. This paper covers several aspects of Industry 4.0, from the designing phase to security needs, from the deployment stage to the classification of the network, the difficulties, challenges, and future directions.

152 citations


Journal ArticleDOI
28 Aug 2022-Sensors
TL;DR: In this article, an ultra-narrow band graphene refractive index sensor, consisting of a patterned graphene layer on the top, a dielectric layer of SiO2 in the middle, and a bottom Au layer, was proposed.
Abstract: The paper proposes an ultra-narrow band graphene refractive index sensor, consisting of a patterned graphene layer on the top, a dielectric layer of SiO2 in the middle, and a bottom Au layer. The absorption sensor achieves absorption efficiencies of 99.41% and 99.22% at 5.664 THz and 8.062 THz, with absorption bandwidths of 0.0171 THz and 0.0152 THz, respectively. Compared with noble metal absorbers, our graphene absorber can achieve tunability by adjusting the Fermi level and relaxation time of the graphene layer with the geometry of the absorber unchanged, which greatly saves manufacturing cost. The results show that the sensor has the properties of polarization independence and large-angle insensitivity due to the symmetric structure. In addition, a practical application of testing the content of hemoglobin biomolecules was conducted: the frequency of the first resonance mode shows a shift of 0.017 THz, and the second resonance mode shows a shift of 0.016 THz, demonstrating the good frequency sensitivity of our sensor. The sensitivities (S) of the sensor were calculated as 875 GHz/RIU and 775 GHz/RIU, the quality factors (FOM, Figure of Merit) are 26.51 and 18.90, respectively, and the minimum limit of detection is 0.04. Compared with previous similar sensors, our sensor has better sensing performance and can be applied to photon detection in the terahertz band, biochemical sensing, and other fields.

131 citations


Journal ArticleDOI
23 Jan 2022-Sensors
TL;DR: Simulation results and analysis show that the Pelican Optimization Algorithm achieves better and more competitive performance than eight competitor algorithms in providing optimal solutions for optimization problems, by striking a proportional balance between exploration and exploitation.
Abstract: Optimization problems are an important and fundamental challenge in different scientific disciplines. In this paper, a new stochastic nature-inspired optimization algorithm called the Pelican Optimization Algorithm (POA) is introduced. The main idea in designing the proposed POA is the simulation of the natural behavior of pelicans during hunting. In POA, search agents are pelicans that search for food sources. The mathematical model of the POA is presented for use in solving optimization issues. The performance of POA is evaluated on twenty-three objective functions of different unimodal and multimodal types. The optimization results of unimodal functions show the high exploitation ability of POA to approach the optimal solution, while the optimization results of multimodal functions indicate the high exploration ability of POA to find the main optimal area of the search space. Moreover, four engineering design problems are employed to estimate the efficacy of the POA in optimizing real-world applications. The findings of POA are compared with eight well-known metaheuristic algorithms to assess its competence in optimization. The simulation results and their analysis show that POA achieves better and more competitive performance than the eight competitor algorithms by striking a proportional balance between exploration and exploitation when providing optimal solutions for optimization problems.
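The abstract does not give POA's update equations, so the sketch below is not an implementation of POA. It only illustrates how population-based metaheuristics of this kind are typically benchmarked on unimodal (sphere) and multimodal (Rastrigin) test functions, using a plain random-perturbation baseline as a stand-in optimizer; the population size, iteration count, and bounds are illustrative assumptions.

```python
# Illustrative benchmark harness only: a random-perturbation baseline stands in
# for the optimizer, scored on a unimodal (sphere) and a multimodal (Rastrigin)
# test function of the kind used in the paper's evaluation.
import numpy as np

def sphere(x):                        # unimodal: minimum 0 at the origin
    return float(np.sum(x ** 2))

def rastrigin(x):                     # multimodal: many local minima, global minimum 0
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def run_optimizer(objective, dim=10, pop_size=30, iters=500, bound=5.12, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-bound, bound, size=(pop_size, dim))   # initial population
    best = min(pop, key=objective).copy()
    for _ in range(iters):
        # Stand-in update rule: random perturbation of the current population.
        pop = np.clip(pop + rng.normal(0, 0.1, pop.shape), -bound, bound)
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand.copy()
    return objective(best)

for name, fn in [("sphere", sphere), ("rastrigin", rastrigin)]:
    print(name, run_optimizer(fn))
```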

130 citations


Journal ArticleDOI
21 Jan 2022-Sensors
TL;DR: A new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features is proposed, which outperforms recent techniques.
Abstract: After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
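Step (iii) of the framework, extracting features from the global average pooling layer of a fine-tuned backbone, can be sketched as follows. DarkNet-53 is not bundled with torchvision, so a pretrained ResNet-50 stands in purely to show the mechanics, and the input file name is hypothetical; the feature selection and fusion stages are not reproduced.

```python
# Sketch of global-average-pooled feature extraction from a pretrained CNN.
# ResNet-50 is a stand-in for DarkNet-53; "ultrasound_sample.png" is hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop the classifier; output = pooled feature vector
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("ultrasound_sample.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    features = backbone(img)               # shape (1, 2048): global-average-pooled features
print(features.shape)
```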

91 citations


Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: An improved metaheuristics-based clustering with multihop routing protocol for underwater wireless sensor networks, named the IMCMR-UWSN technique, is introduced, which helps to significantly boost the energy efficiency and lifetime of the UWSN.
Abstract: Underwater wireless sensor networks (UWSNs) comprise numerous underwater wireless sensor nodes dispersed in the marine environment, which find applicability in several areas like data collection, navigation, resource investigation, surveillance, and disaster prediction. Because of the restricted battery capacity and the difficulty of replacing or charging the inbuilt batteries, energy efficiency becomes a challenging issue in the design of UWSNs. Earlier studies reported that clustering and routing are considered effective ways of attaining energy efficacy in the UWSN. Clustering and routing processes can be treated as nondeterministic polynomial-time (NP) hard optimization problems, and they can be addressed by the use of metaheuristics. This study introduces an improved metaheuristics-based clustering with multihop routing protocol for underwater wireless sensor networks, named the IMCMR-UWSN technique. The major aim of the IMCMR-UWSN technique is to choose cluster heads (CHs) and optimal routes to a destination. The IMCMR-UWSN technique incorporates two major processes, namely chaotic krill herd algorithm (CKHA)-based clustering and self-adaptive glow worm swarm optimization algorithm (SA-GSO)-based multihop routing. The CKHA technique selects CHs and organizes clusters based on different parameters such as residual energy, intra-cluster distance, and inter-cluster distance. Similarly, the SA-GSO algorithm derives a fitness function involving four parameters, namely residual energy, delay, distance, and trust. Utilization of the IMCMR-UWSN technique helps to significantly boost the energy efficiency and lifetime of the UWSN. To ensure the improved performance of the IMCMR-UWSN technique, a series of simulations were carried out, and the comparative results reported the supremacy of the IMCMR-UWSN technique in terms of different measures.
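The SA-GSO routing step is described as deriving a fitness function from residual energy, delay, distance, and trust. A minimal sketch of such a weighted fitness is given below; the weights, the [0, 1] normalisation, and the candidate node values are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of a multi-criteria routing fitness of the kind described for
# SA-GSO: residual energy and trust should be high, delay and distance low.
def route_fitness(residual_energy, delay, distance, trust,
                  w_energy=0.4, w_delay=0.2, w_distance=0.2, w_trust=0.2):
    """All inputs are assumed normalised to [0, 1]; higher fitness is better."""
    return (w_energy * residual_energy
            + w_trust * trust
            - w_delay * delay
            - w_distance * distance)

# Example: pick the best next hop among candidate relay nodes (values are made up).
candidates = {
    "node_a": route_fitness(0.9, 0.2, 0.3, 0.8),
    "node_b": route_fitness(0.6, 0.1, 0.1, 0.9),
}
print(max(candidates, key=candidates.get))
```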

88 citations


Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: The experimental results highlighted an enhanced performance of the MCR-UWSN technique over recent state-of-the-art techniques; the multi-hop routing technique alongside grasshopper optimization (MHR-GOA) is derived using multiple input parameters.
Abstract: In recent years, the underwater wireless sensor network (UWSN) has received significant interest among research communities for several applications, such as disaster management, water quality prediction, environmental observance, underwater navigation, etc. The UWSN comprises a massive number of sensors placed in rivers and oceans for observing the underwater environment. However, the underwater sensors are energy-constrained, and it is tedious to recharge or replace their batteries, so energy efficiency becomes a major challenge. Clustering and multi-hop routing protocols are considered energy-efficient solutions for UWSN. However, cluster-based routing protocols designed for traditional wireless networks are not feasible for UWSN owing to the underwater current, low bandwidth, high water pressure, propagation delay, and error probability. To resolve these issues and achieve energy efficiency in UWSN, this study focuses on designing a metaheuristics-based clustering and routing protocol for UWSN, named MCR-UWSN. The goal of the MCR-UWSN technique is to elect an efficient set of cluster heads (CHs) and routes to the destination. The MCR-UWSN technique involves the design of a cultural emperor penguin optimizer-based clustering (CEPOC) technique to construct clusters. Besides, the multi-hop routing technique, alongside the grasshopper optimization (MHR-GOA) technique, is derived using multiple input parameters. The performance of the MCR-UWSN technique was validated, and the results are inspected in terms of different measures. The experimental results highlighted an enhanced performance of the MCR-UWSN technique over recent state-of-the-art techniques.

84 citations


Journal ArticleDOI
01 Apr 2022-Sensors
TL;DR: The present work introduces a novel method for the automated diagnosis and detection of metastases from whole slide images using the Fast AI framework and the 1-cycle policy and indicates that the suggested model may assist general practitioners in accurately analyzing breast cancer situations, hence preventing future complications and mortality.
Abstract: Lymph node metastasis in breast cancer may be accurately predicted using a DenseNet-169 model. However, the current system for identifying metastases in a lymph node is manual and tedious. A pathologist well-versed in the detection and characterization of lymph nodes spends hours investigating histological slides. Furthermore, because of the massive size of most whole-slide images (WSI), it is wise to divide a slide into batches of small image patches and apply methods independently on each patch. The present work introduces a novel method for the automated diagnosis and detection of metastases from whole slide images using the Fast AI framework and the 1-cycle policy. Additionally, it compares this new approach to previous methods. The proposed model has surpassed other state-of-art methods with more than 97.4% accuracy. In addition, a mobile application is developed for prompt and quick response. It collects user information and applies the model to diagnose metastases in the early stages of cancer. These results indicate that the suggested model may assist general practitioners in accurately analyzing breast cancer situations, hence preventing future complications and mortality. With digital image processing, histopathologic interpretation and diagnostic accuracy have improved considerably.
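The 1-cycle training policy named above has a direct counterpart in plain PyTorch's OneCycleLR scheduler; the sketch below wires that schedule to a stand-in DenseNet-169 patch classifier. The epoch count, learning rate, and placeholder training loop are illustrative assumptions, not the paper's Fast AI settings.

```python
# Sketch of the 1-cycle learning-rate policy using PyTorch's OneCycleLR on a
# stand-in DenseNet-169 with a two-class (tumor / normal patch) head.
import torch
from torchvision import models

model = models.densenet169(weights=None)                              # stand-in backbone
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)   # tumor / normal patch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.01, epochs=5, steps_per_epoch=100)             # the 1-cycle schedule

for epoch in range(5):
    for step in range(100):            # placeholder for a real WSI-patch dataloader
        optimizer.zero_grad()
        # forward pass, loss computation, and loss.backward() would go here
        optimizer.step()
        scheduler.step()               # learning rate follows the 1-cycle curve
```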

84 citations


Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: A systematic survey of the literature on the implementation of FL in EC environments with a taxonomy to identify advanced solutions and other open problems is provided to help researchers better understand the connection between FL and EC enabling technologies and concepts.
Abstract: Edge Computing (EC) is a new architecture that extends Cloud Computing (CC) services closer to data sources. EC combined with Deep Learning (DL) is a promising technology and is widely used in several applications. However, in conventional DL architectures with EC enabled, data producers must frequently send and share data with third parties, edge or cloud servers, to train their models. This architecture is often impractical due to high bandwidth requirements, legal constraints, and privacy vulnerabilities. The Federated Learning (FL) concept has recently emerged as a promising solution for mitigating the problems of unwanted bandwidth loss, data privacy, and legal compliance. FL can co-train models across distributed clients, such as mobile phones, automobiles, hospitals, and more, through a centralized server, while maintaining data localization. FL can therefore be viewed as a stimulating factor in the EC paradigm as it enables collaborative learning and model optimization. Although the existing surveys have taken into account applications of FL in EC environments, there has not been any systematic survey discussing FL implementation and challenges in the EC paradigm. This paper aims to provide a systematic survey of the literature on the implementation of FL in EC environments with a taxonomy to identify advanced solutions and other open problems. In this survey, we review the fundamentals of EC and FL, then we review the existing related works on FL in EC. Furthermore, we describe the protocols, architecture, framework, and hardware requirements for FL implementation in the EC environment. Moreover, we discuss the applications, challenges, and related existing solutions in edge FL. Finally, we detail two relevant case studies of applying FL in EC, and we identify open issues and potential directions for future research. We believe this survey will help researchers better understand the connection between FL and EC enabling technologies and concepts.
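The core FL mechanic the survey builds on, clients training locally and only model weights (never raw data) travelling to the server for weighted averaging, can be sketched minimally as below. The toy linear model, client count, and synthetic data are illustrative assumptions, not any particular framework's implementation.

```python
# Minimal FedAvg-style sketch: local updates on each client, then a server-side
# weighted average of the returned model parameters.
import numpy as np

def local_update(weights, client_data, lr=0.1, epochs=1):
    """Toy local training: one gradient step of linear regression per epoch."""
    X, y = client_data
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for rnd in range(5):                                    # communication rounds
    updates = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print(global_w)
```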

62 citations


Journal ArticleDOI
27 Sep 2022-Sensors
TL;DR: A Genetic Algorithm inspired method to strengthen weak keys obtained from Random DNA-based Key Generators instead of completely discarding them is proposed.
Abstract: DNA (Deoxyribonucleic Acid) Cryptography has revolutionized information security by combining rigorous biological and mathematical concepts to encode original information in terms of a DNA sequence. Such schemes are crucially dependent on corresponding DNA-based cryptographic keys. However, owing to redundancy or observable patterns, some of the keys are rendered weak as they are prone to intrusions. This paper proposes a Genetic Algorithm inspired method to strengthen weak keys obtained from Random DNA-based Key Generators instead of completely discarding them. Fitness functions and the application of genetic operators have been chosen and modified to suit DNA cryptography fundamentals, in contrast to fitness functions for traditional cryptographic schemes. The crossover and mutation rates decrease with each new population as more keys pass the fitness tests and no longer need to be strengthened. Moreover, with increasing size of the initial key population, the key space becomes highly exhaustive and less prone to brute-force attacks. The paper demonstrates that out of an initial 25 × 25 population of DNA keys, 14 keys are rendered weak. Complete results and calculations of how each weak key can be strengthened by generating 4 new populations are illustrated. The analysis of the proposed scheme for different initial populations shows that a maximum of 8 new populations has to be generated to strengthen all 500 weak keys of a 500 × 500 initial population.
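A hedged sketch of the idea follows: weak DNA keys (strings over A/C/G/T with visible repetition) are evolved with crossover and mutation, and both rates decay with each new population. The fitness criterion (3-mer diversity), the rate schedule, and the toy key length are illustrative assumptions, not the paper's exact operators or thresholds.

```python
# Illustrative GA-style strengthening of weak DNA keys; not the paper's scheme.
import random

BASES = "ACGT"

def fitness(key):
    """Simple redundancy score: fraction of distinct overlapping 3-mers (higher = stronger)."""
    trimers = {key[i:i + 3] for i in range(len(key) - 2)}
    return len(trimers) / max(1, len(key) - 2)

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(key, rate):
    return "".join(random.choice(BASES) if random.random() < rate else c for c in key)

random.seed(1)
population = ["ATATATATATATATAT", "GCGCGCGCGCGCGCGC"]        # two deliberately weak keys
crossover_rate, mutation_rate = 0.9, 0.3

for generation in range(4):                                  # successive new populations
    population = [
        mutate(crossover(k, random.choice(population)), mutation_rate)
        if random.random() < crossover_rate else k
        for k in population
    ]
    crossover_rate *= 0.8                                     # rates reduce as more keys pass
    mutation_rate *= 0.8
    print(generation, [round(fitness(k), 2) for k in population])
```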

62 citations


Journal ArticleDOI
21 Jan 2022-Sensors
TL;DR: In this article, the authors present the physics of guided surface acoustic waves and the piezoelectric materials used for designing SAW sensors and discuss the applications of these sensors and their progress in the fields of biomedical, microfluidics, chemical, and mechano-biological applications.
Abstract: Surface acoustic waves (SAWs) are the guided waves that propagate along the top surface of a material with wave vectors orthogonal to the normal direction to the surface. Based on these waves, SAW sensors are conceptualized by employing piezoelectric crystals where the guided elastodynamic waves are generated through an electromechanical coupling. Electromechanical coupling in both active and passive modes is achieved by integrating interdigitated electrode transducers (IDT) with the piezoelectric crystals. Innovative meta-designs of the periodic IDTs define the functionality and application of SAW sensors. This review article presents the physics of guided surface acoustic waves and the piezoelectric materials used for designing SAW sensors. Then, how the piezoelectric materials and cuts could alter the functionality of the sensors is explained. The article summarizes a few key configurations of the electrodes and respective guidelines for generating different guided wave patterns such that new applications can be foreseen. Finally, the article explores the applications of SAW sensors and their progress in the fields of biomedical, microfluidics, chemical, and mechano-biological applications along with their crucial roles and potential plans for improvements in the long-term future in the field of science and technology.

Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: Deep learning is investigated to intelligently detect road cracks, and Faster R-CNN and Mask R-CNN are compared and analyzed, and the results show that the joint training strategy is very effective.
Abstract: The intelligent crack detection method is an important guarantee for the realization of intelligent operation and maintenance, and it is of great significance to traffic safety. In recent years, the recognition of road pavement cracks based on computer vision has attracted increasing attention. With the technological breakthroughs of general deep learning algorithms in recent years, detection algorithms based on deep learning and convolutional neural networks have achieved better results in the field of crack recognition. In this paper, deep learning is investigated to intelligently detect road cracks, and Faster R-CNN and Mask R-CNN are compared and analyzed. The results show that the joint training strategy is very effective, and we are able to ensure that both Faster R-CNN and Mask R-CNN complete the crack detection task when trained with only 130+ images and can outperform YOLOv3. However, the joint training strategy causes a degradation in the effectiveness of the bounding box detected by Mask R-CNN.
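For reference, both detector families compared above are available off the shelf in torchvision; the sketch below runs COCO-pretrained Faster R-CNN and Mask R-CNN on a single image. The crack-specific weights and the joint training strategy from the paper are not reproduced, and the file name pavement.jpg is a placeholder.

```python
# Sketch: side-by-side inference with torchvision's Faster R-CNN and Mask R-CNN
# on one image, using COCO-pretrained checkpoints as stand-ins.
import torch
from torchvision import io
from torchvision.models import detection
from torchvision.transforms.functional import convert_image_dtype

image = convert_image_dtype(io.read_image("pavement.jpg"), torch.float)

faster = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
mask = detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

with torch.no_grad():
    box_out = faster([image])[0]    # dict with boxes, labels, scores
    mask_out = mask([image])[0]     # dict with boxes, labels, scores, masks

print(box_out["boxes"].shape, mask_out["masks"].shape)
```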

Journal ArticleDOI
14 Feb 2022-Sensors
TL;DR: A comprehensive analysis of the current advancements, developing trends, and major challenges for wearable-based human activity recognition (HAR) can be found in this paper, where the authors also present cutting-edge frontiers and future directions for deep learning-based HAR.
Abstract: Mobile and wearable devices have enabled numerous applications, including activity tracking, wellness monitoring, and human-computer interaction, that measure and improve our daily lives. Many of these applications are made possible by leveraging the rich collection of low-power sensors found in many mobile and wearable devices to perform human activity recognition (HAR). Recently, deep learning has greatly pushed the boundaries of HAR on mobile and wearable devices. This paper systematically categorizes and summarizes existing work that introduces deep learning methods for wearables-based HAR and provides a comprehensive analysis of the current advancements, developing trends, and major challenges. We also present cutting-edge frontiers and future directions for deep learning-based HAR.

Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: A hybrid model is developed by incorporating Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) for activity recognition where CNN is used for spatial features extraction and LSTM network is utilized for learning temporal information.
Abstract: In recent years, Human Activity Recognition (HAR) has become one of the most important research topics in the domains of health and human-machine interaction. Many artificial intelligence-based models have been developed for activity recognition; however, these algorithms fail to extract spatial and temporal features and consequently show poor performance on real-world, long-term HAR. Furthermore, only a limited number of datasets are publicly available for physical activity recognition, and they contain a small number of activities. Considering these limitations, we develop a hybrid model by incorporating a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) for activity recognition, where the CNN is used for spatial feature extraction and the LSTM network is utilized for learning temporal information. Additionally, a new challenging dataset is generated that is collected from 20 participants using the Kinect V2 sensor and contains 12 different classes of human physical activities. An extensive ablation study is performed over different traditional machine learning and deep learning models to obtain the optimum solution for HAR. An accuracy of 90.89% is achieved via the CNN-LSTM technique, which shows that the proposed model is suitable for HAR applications.
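A minimal sketch of the hybrid idea described above is given below: a 1-D CNN extracts per-frame spatial features and an LSTM models the temporal structure before a 12-class head. The input shape (60 frames of 75 skeleton coordinates) and the layer sizes are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal CNN-LSTM sketch for activity recognition on skeleton sequences.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=75, n_classes=12):
        super().__init__()
        self.cnn = nn.Sequential(                       # spatial features per time step
            nn.Conv1d(n_features, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(128, 64, batch_first=True)  # temporal modelling
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                               # x: (batch, time, features)
        z = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])

model = CNNLSTM()
logits = model(torch.randn(8, 60, 75))                  # 8 sequences of 60 frames
print(logits.shape)                                     # (8, 12) class scores
```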

Journal ArticleDOI
01 Nov 2022-Sensors
TL;DR: In this paper, the authors present a novel and enhanced deep-learning-based Mask R-CNN model for the identification of laryngeal cancer and its related symptoms by utilizing diverse image datasets and CT images in real time.
Abstract: Recently, laryngeal cancer cases have increased drastically across the globe. Accurate treatment for laryngeal cancer is intricate, especially in the later stages. This type of cancer is an intricate malignancy inside the head and neck area of patients. In recent years, diverse diagnosis approaches and tools have been developed by researchers to help clinical experts identify laryngeal cancer effectively. However, these existing tools and approaches have diverse issues related to performance constraints, such as lower accuracy in identifying laryngeal cancer in the initial stage, higher computational complexity, and long screening times for patients. In this paper, the authors present a novel and enhanced deep-learning-based Mask R-CNN model for the identification of laryngeal cancer and its related symptoms by utilizing diverse image datasets and CT images in real time. Furthermore, the suggested model is capable of capturing and detecting minor malignancies of the larynx quickly during real-time screening of patients, saving time for clinicians and allowing more patients to be screened every day. The suggested model obtained an accuracy of 98.99%, a precision of 98.99%, an F1 score of 97.99%, and a recall of 96.79% on the ImageNet dataset. Several studies on laryngeal cancer detection using diverse approaches have been performed in recent years, and there remain ample opportunities for further research into new approaches that utilize diverse and large image datasets.

Journal ArticleDOI
21 Jan 2022-Sensors
TL;DR: A new method for multiclass skin lesion classification using best deep learning feature fusion and an extreme learning machine is proposed; the method's accuracy is improved over recent techniques and it is computationally efficient.
Abstract: The detection and classification of skin cancer is a difficult task owing to the variation in skin textures and injuries. Manually detecting skin lesions from dermoscopy images is a difficult and time-consuming process. Recent advancements in the domains of the internet of things (IoT) and artificial intelligence for medical applications demonstrated improvements in both accuracy and computational time. In this paper, a new method for multiclass skin lesion classification using best deep learning feature fusion and an extreme learning machine is proposed. The proposed method includes five primary steps: image acquisition and contrast enhancement; deep learning feature extraction using transfer learning; best feature selection using a hybrid whale optimization and entropy-mutual information (EMI) approach; fusion of selected features using a modified canonical correlation based approach; and, finally, extreme learning machine based classification. The feature selection step improves the system's computational efficiency and accuracy. The experiment is carried out on two publicly available datasets, HAM10000 and ISIC2018. The achieved accuracies on the two datasets are 93.40 and 94.36 percent, respectively. When compared to state-of-the-art (SOTA) techniques, the proposed method's accuracy is improved. Furthermore, the proposed method is computationally efficient.

Journal ArticleDOI
27 Jan 2022-Sensors
TL;DR: A compressed sensing reconstruction method is proposed that combines total variation regularization and a non-local self-similarity constraint, and it permits a gain of up to 25% in terms of denoising efficiency and visual quality using two metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Abstract: In remote sensing applications and medical imaging, one of the key points is the acquisition, real-time preprocessing and storage of information. Due to the large amount of information present in the form of images or videos, compression of these data is necessary. Compressed sensing is an efficient technique to meet this challenge. It consists of acquiring a signal, assuming that it has a sparse representation, by using a minimum number of nonadaptive linear measurements. After this compressed sensing process, a reconstruction of the original signal must be performed at the receiver. Reconstruction techniques are often unable to preserve the texture of the image and tend to smooth out its details. To overcome this problem, we propose, in this work, a compressed sensing reconstruction method that combines total variation regularization and a non-local self-similarity constraint. The optimization of this method is performed by using an augmented Lagrangian that avoids the difficult problem of nonlinearity and nondifferentiability of the regularization terms. The proposed algorithm, called denoising-compressed sensing by regularization (DCSR), performs not only image reconstruction but also denoising. To evaluate the performance of the proposed algorithm, we compare its performance with state-of-the-art methods, such as Nesterov's algorithm, group-based sparse representation and wavelet-based methods, in terms of denoising and preservation of edges, texture and image details, as well as from the point of view of computational complexity. Our approach permits a gain of up to 25% in terms of denoising efficiency and visual quality using two metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
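Two of the ingredients named above are easy to reproduce in isolation: a total-variation prior and the PSNR/SSIM scoring. The sketch below uses scikit-image's Chambolle TV denoiser as a stand-in for the full DCSR solver (the non-local self-similarity term and augmented Lagrangian are not implemented) and evaluates the result with the same two metrics.

```python
# Illustration only: TV-regularised denoising plus PSNR/SSIM evaluation,
# standing in for the paper's full DCSR reconstruction pipeline.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(data.camera())
noisy = np.clip(clean + np.random.default_rng(0).normal(0, 0.1, clean.shape), 0, 1)

recon = denoise_tv_chambolle(noisy, weight=0.1)        # TV-regularised estimate

print("PSNR:", peak_signal_noise_ratio(clean, recon))
print("SSIM:", structural_similarity(clean, recon, data_range=1.0))
```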

Journal ArticleDOI
01 Jul 2022-Sensors
TL;DR: The Random Forest Ensemble Method had the best accuracy (97%), whereas the AdaBoost and Bagging algorithms had lower accuracy, precision, recall, and F1-scores.
Abstract: Diabetes is a long-lasting disease triggered by elevated blood sugar levels and can affect various organs if left untreated. It contributes to heart disease, kidney issues, damaged nerves, damaged blood vessels, and blindness. Timely disease prediction can save precious lives and enable healthcare advisors to manage the condition. Most diabetic patients know little about the risk factors they face before diagnosis. Nowadays, hospitals deploy basic information systems, which generate vast amounts of data that cannot be converted into proper/useful information and cannot be used to support decision making for clinical purposes. There are different automated techniques available for the earlier prediction of disease. Ensemble learning is a data analysis technique that combines multiple models into a single optimal predictive system to reduce bias and variance and to improve predictions. Diabetes data, which included 17 variables, were gathered from the UCI repository of various datasets. The predictive models used in this study include AdaBoost, Bagging, and Random Forest, to compare the precision, recall, classification accuracy, and F1-score. Finally, the Random Forest Ensemble Method had the best accuracy (97%), whereas the AdaBoost and Bagging algorithms had lower accuracy, precision, recall, and F1-scores.
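A sketch of the ensemble comparison described above, using scikit-learn's AdaBoost, Bagging, and Random Forest classifiers, is given below. A synthetic 17-feature dataset stands in for the UCI diabetes data, and the default hyperparameters are an assumption; the reported 97% figure is the paper's result, not this sketch's.

```python
# Sketch: compare AdaBoost, Bagging, and Random Forest on a synthetic stand-in
# for the 17-variable diabetes dataset, reporting precision, recall, and F1.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, n_features=17, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for name, clf in [
    ("AdaBoost", AdaBoostClassifier(random_state=42)),
    ("Bagging", BaggingClassifier(random_state=42)),
    ("RandomForest", RandomForestClassifier(random_state=42)),
]:
    clf.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(X_test), digits=3))
```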

Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: A Deep Learning (DL)-based approach, namely EfficientDet-D0 with EfficientNet-B0 as the backbone, is presented; it outperforms the newest frameworks in glaucoma classification, and its robustness is confirmed by evaluation on a challenging dataset.
Abstract: Glaucoma is an eye disease initiated by excessive intraocular pressure, and it causes complete blindness at its advanced stage, whereas timely screening-based treatment can save the patient from complete vision loss. Accurate screening procedures are dependent on the availability of human experts who perform the manual analysis of retinal samples to identify the glaucomatous-affected regions. However, due to complex glaucoma screening procedures and a shortage of human resources, we often face delays which can increase the vision loss ratio around the globe. To cope with the challenges of manual systems, there is an urgent demand for designing an effective automated framework that can accurately identify Optic Disc (OD) and Optic Cup (OC) lesions at the earliest stage. Efficient and effective identification and classification of glaucomatous regions is a complicated job due to the wide variations in the mass, shade, orientation, and shape of lesions. Furthermore, the extensive similarity between the lesion and eye color further complicates the classification process. To overcome the aforementioned challenges, we present a Deep Learning (DL)-based approach, namely EfficientDet-D0 with EfficientNet-B0 as the backbone. The presented framework comprises three steps for glaucoma localization and classification. Initially, the deep features from the suspected samples are computed with the EfficientNet-B0 feature extractor. Then, the Bi-directional Feature Pyramid Network (BiFPN) module of EfficientDet-D0 takes the computed features from EfficientNet-B0 and performs top-down and bottom-up keypoint fusion several times. In the last step, the localized area containing the glaucoma lesion, together with its associated class, is predicted. We have confirmed the robustness of our work by evaluating it on a challenging dataset, namely an online retinal fundus image database for glaucoma analysis (ORIGA). Furthermore, we have performed cross-dataset validation on the High-Resolution Fundus (HRF) and Retinal Image database for Optic Nerve Evaluation (RIM ONE DL) datasets to show the generalization ability of our work. Both the numeric and visual evaluations confirm that EfficientDet-D0 outperforms the newest frameworks and is more proficient in glaucoma classification.

Journal ArticleDOI
25 Jan 2022-Sensors
TL;DR: The work identified that the heterogeneity of such an ecosystem introduces issues and poses a great setback to the deployment of security and privacy mechanisms to counter security attacks and privacy leakages.
Abstract: The field of information security and privacy is currently attracting a lot of research interest. Simultaneously, different computing paradigms from Cloud computing to Edge computing are already forming a unique ecosystem with different architectures, storage, and processing capabilities. The heterogeneity of this ecosystem comes with certain limitations, particularly security and privacy challenges. This systematic literature review aims to identify similarities, differences, main attacks, and countermeasures in the various paradigms mentioned. The main outcome points out the essential security and privacy threats. The presented results also outline important similarities and differences in the Cloud, Edge, and Fog computing paradigms. Finally, the work identified that the heterogeneity of such an ecosystem introduces issues and poses a great setback to the deployment of security and privacy mechanisms to counter security attacks and privacy leakages. Different deployment techniques were found in the reviewed studies as ways to mitigate security and privacy shortcomings.

Journal ArticleDOI
01 Apr 2022-Sensors
TL;DR: The EEG-based sleep stage prediction approach is expected to be utilized in a wearable sleep monitoring system, and the delta wave power ratios may be considered biomarkers for their attenuation in NREM sleep and subsequent increase in REM sleep.
Abstract: Electroencephalography (EEG) is immediate and sensitive to neurological changes resulting from sleep stages and is considered a computing tool for understanding the association between neurological outcomes and sleep stages. EEG is expected to be an efficient approach for sleep stage prediction outside a highly equipped clinical setting compared with multimodal physiological signal-based polysomnography. This study aims to quantify the neurological EEG-biomarkers and predict five-class sleep stages using sleep EEG data. We investigated the three-channel EEG sleep recordings of 154 individuals (mean age of 53.8 ± 15.4 years) from the Haaglanden Medisch Centrum (HMC, The Hague, The Netherlands) open-access public dataset of PhysioNet. The power of fast-wave alpha, beta, and gamma rhythms decreases, and the power of slow-wave delta and theta oscillations gradually increases, as sleep becomes deeper. Delta wave power ratios (DAR, DTR, and DTABR) may be considered biomarkers for their characteristic attenuation in NREM sleep and subsequent increase in REM sleep. The overall accuracies of the C5.0, Neural Network, and CHAID machine-learning models are 91%, 89%, and 84%, respectively, for multi-class classification of the sleep stages. The EEG-based sleep stage prediction approach is expected to be utilized in a wearable sleep monitoring system.
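The band-power ratios mentioned above can be computed from a single EEG channel with Welch's method, as sketched below. The ratio definitions (DAR = delta/alpha, DTR = delta/theta, DTABR = (delta+theta)/(alpha+beta)) follow common EEG usage and are assumptions here, as are the sampling rate and the random signal standing in for a real 30-second epoch.

```python
# Sketch: band powers via Welch PSD and the delta-based ratios used as features.
import numpy as np
from scipy.signal import welch

fs = 256
eeg = np.random.default_rng(0).normal(size=30 * fs)      # placeholder 30-s epoch

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

delta, theta = band_power(0.5, 4), band_power(4, 8)
alpha, beta = band_power(8, 13), band_power(13, 30)

print("DAR:", delta / alpha)                              # delta/alpha ratio
print("DTR:", delta / theta)                              # delta/theta ratio
print("DTABR:", (delta + theta) / (alpha + beta))         # (delta+theta)/(alpha+beta)
```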

Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: In this paper , the authors present a general framework of digital twins in soil, irrigation, robotics, farm machineries, and food post-harvest processing in agricultural field, which can support farmers as a next generation of digitalization paradigm by continuous and real-time monitoring of physical world (farm) and updating the state of virtual world.
Abstract: Digitalization has impacted agricultural and food production systems, and makes application of technologies and advanced data processing techniques in agricultural field possible. Digital farming aims to use available information from agricultural assets to solve several existing challenges for addressing food security, climate protection, and resource management. However, the agricultural sector is complex, dynamic, and requires sophisticated management systems. The digital approaches are expected to provide more optimization and further decision-making supports. Digital twin in agriculture is a virtual representation of a farm with great potential for enhancing productivity and efficiency while declining energy usage and losses. This review describes the state-of-the-art of digital twin concepts along with different digital technologies and techniques in agricultural contexts. It presents a general framework of digital twins in soil, irrigation, robotics, farm machineries, and food post-harvest processing in agricultural field. Data recording, modeling including artificial intelligence, big data, simulation, analysis, prediction, and communication aspects (e.g., Internet of Things, wireless technologies) of digital twin in agriculture are discussed. Digital twin systems can support farmers as a next generation of digitalization paradigm by continuous and real-time monitoring of physical world (farm) and updating the state of virtual world.

Journal ArticleDOI
01 Jan 2022-Sensors
TL;DR: A newly developed transfer deep-learning model is proposed for the early diagnosis of brain tumors into their subclasses, such as pituitary, meningioma, and glioma.
Abstract: With the advancement in technology, machine learning can be applied to diagnose the mass/tumor in the brain using magnetic resonance imaging (MRI). This work proposes a newly developed transfer deep-learning model for the early diagnosis of brain tumors into their subclasses, such as pituitary, meningioma, and glioma. First, various layers of isolated convolutional-neural-network (CNN) models are built from scratch to check their performances for brain MRI images. Then, the 22-layer, binary-classification (tumor or no tumor) isolated-CNN model is re-utilized to re-adjust the neurons' weights for classifying brain MRI images into tumor subclasses using the transfer-learning concept. As a result, the developed transfer-learned model has a high accuracy of 95.75% for the MRI images of the same MRI machine. Furthermore, the developed transfer-learned model has also been tested using the brain MRI images of another machine to validate its adaptability, general capability, and reliability for real-time application in the future. The results showed that the proposed model has a high accuracy of 96.89% for an unseen brain MRI dataset. Thus, the proposed deep-learning framework can help doctors and radiologists diagnose brain tumors early.

Journal ArticleDOI
01 May 2022-Sensors
TL;DR: Wang et al. proposed the MSFT-YOLO model for industrial scenarios in which the image background interference is great, the defect category is easily confused, the defect scale changes a great deal, and the detection results for small defects are poor.
Abstract: With the development of artificial intelligence technology and the popularity of intelligent production projects, intelligent inspection systems have gradually become a hot topic in the industrial field. As a fundamental problem in the field of computer vision, how to achieve object detection in industry while taking into account both accuracy and real-time performance is an important challenge in the development of intelligent detection systems. The detection of defects on steel surfaces is an important application of object detection in industry. Correct and fast detection of surface defects can greatly improve productivity and product quality. To this end, this paper introduces the MSFT-YOLO model, which is improved based on a one-stage detector. The MSFT-YOLO model is proposed for industrial scenarios in which the image background interference is great, the defect category is easily confused, the defect scale changes a great deal, and the detection results for small defects are poor. By adding the TRANS module, which is designed based on the Transformer, to the backbone and detection heads, the features can be combined with global information. The fusion of features at different scales by combining multi-scale feature fusion structures enhances the dynamic adjustment of the detector to objects at different scales. To further improve the performance of MSFT-YOLO, we also introduce plenty of effective strategies, such as data augmentation and multi-step training methods. The test results on the NEU-DET dataset show that MSFT-YOLO can achieve real-time detection, and the average detection accuracy of MSFT-YOLO is 75.2, improving by about 7% compared to the baseline model (YOLOv5) and by 18% compared to Faster R-CNN, which is advantageous and inspiring.

Journal ArticleDOI
01 Aug 2022-Sensors
TL;DR: In this paper, a temperature measurement method based on tunable diode laser absorption spectroscopy (TDLAS) was demonstrated, which could cover two water vapor (H2O) absorption lines located at 7153.749 cm−1 and 7154.354 cm−1 simultaneously.
Abstract: The rapidly changing and wide dynamic range of combustion temperature in scramjet engines presents a major challenge to existing test techniques. Tunable diode laser absorption spectroscopy (TDLAS) based temperature measurement has the advantages of high sensitivity, fast response, and compact structure. In this invited paper, a temperature measurement method based on the TDLAS technique with a single diode laser was demonstrated. A continuous-wave (CW), distributed feedback (DFB) diode laser with an emission wavelength near 1.4 μm was used for temperature measurement, which could cover two water vapor (H2O) absorption lines located at 7153.749 cm−1 and 7154.354 cm−1 simultaneously. The output wavelength of the diode laser was calibrated according to the two absorption peaks in the time domain. Using this strategy, the TDLAS system has the advantages of immunity to laser wavelength shift, a simple system structure, reduced cost, and increased robustness. The line intensities of the two target absorption lines at room temperature were about one-thousandth of those at high temperature, which avoids measurement errors caused by ambient H2O. The system was tested on a McKenna flat flame burner and a scramjet model engine, respectively. It was found that, compared to the results measured by the CARS technique and theoretical calculation, this TDLAS system had less than 4% temperature error on the McKenna flat flame burner. When a scramjet model engine was adopted, the measured results showed that such a TDLAS system had an excellent dynamic range and fast response. The TDLAS system reported here could be used in real engines in the future.
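The underlying principle, two-line ratio thermometry, can be sketched as below: the ratio of the two integrated absorbances of the same species depends only on temperature, so temperature can be inverted from a measured ratio. The line strengths and lower-state energies here are placeholders (real values would come from a spectroscopic database such as HITRAN for the two H2O lines near 7153.7/7154.4 cm−1), and the expression neglects the partition-function and stimulated-emission corrections.

```python
# Sketch of two-line ratio thermometry with placeholder spectroscopic parameters.
import numpy as np

C2 = 1.4388          # second radiation constant, cm*K
T0 = 296.0           # reference temperature, K

# Placeholder line parameters (line strength at T0, lower-state energy in cm^-1);
# real values would be taken from HITRAN for the two H2O lines used in the paper.
S1_T0, E1 = 1.0e-21, 1800.0
S2_T0, E2 = 5.0e-22, 1050.0

def temperature_from_ratio(R):
    """Invert R = A1/A2 = S1(T)/S2(T) for temperature T (simplified two-line relation)."""
    return (C2 * (E2 - E1)) / (np.log(R) + np.log(S2_T0 / S1_T0) + C2 * (E2 - E1) / T0)

# Example measured absorbance ratio; gives roughly 1500 K with these placeholders.
print(temperature_from_ratio(R=37.3))
```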

Journal ArticleDOI
20 Jan 2022-Sensors
TL;DR: 6G mobile technology is reviewed, including its vision, requirements, enabling technologies, and challenges, and a total of 11 communication technologies, including terahertz communication, visible light communication, multiple access, coding, cell-free massive multiple-input multiple-output (CF-mMIMO) zero-energy interface, intelligent reflecting surface (IRS), and infusion of AI/machine learning in wireless transmission techniques, are presented.
Abstract: Ever since the introduction of fifth generation (5G) mobile communications, the mobile telecommunications industry has been debating whether 5G is an "evolution" or "revolution" from the previous legacy mobile networks, but now that 5G has been commercially available for the past few years, the research direction has recently shifted towards the upcoming generation of mobile communication systems, known as the sixth generation (6G), which is expected to provide significant evolutionary, if not revolutionary, improvements in mobile networks. The promise of extremely high data rates (in terabits), artificial intelligence (AI), ultra-low latency, near-zero/low energy, and massive numbers of connected devices is expected to enhance connectivity, sustainability, and trustworthiness and to provide new services, such as truly immersive "extended reality" (XR), high-fidelity mobile holograms, and a new generation of entertainment. The sixth generation and its vision are still under research and open for developers and researchers to establish and develop their directions toward realizing future 6G technology, which is expected to be ready as early as 2028. This paper reviews 6G mobile technology, including its vision, requirements, enabling technologies, and challenges. Meanwhile, a total of 11 communication technologies, including terahertz (THz) communication, visible light communication (VLC), multiple access, coding, cell-free massive multiple-input multiple-output (CF-mMIMO) zero-energy interface, intelligent reflecting surface (IRS), and infusion of AI/machine learning (ML) in wireless transmission techniques, are presented. Moreover, this paper compares 5G and 6G in terms of services, key technologies, and enabling communications techniques. Finally, it discusses the crucial future directions and technology developments in 6G.

Journal ArticleDOI
01 Jun 2022-Sensors
TL;DR: The paper focuses on the current research challenges and future research directions related to integrating FL and blockchain for vehicular networks, and sheds light on the blockchain and FL with real-world implementations.
Abstract: The Internet of Things (IoT) revitalizes the world with tremendous capabilities and potential to be utilized in vehicular networks. The Smart Transport Infrastructure (STI) era depends mainly on the IoT. Advanced machine learning (ML) techniques are being used to strengthen the STI smartness further. However, some decisions are very challenging due to the vast number of STI components and big data generated from STIs. Computation cost, communication overheads, and privacy issues are significant concerns for wide-scale ML adoption within STI. These issues can be addressed using Federated Learning (FL) and blockchain. FL can be used to address the issues of privacy preservation and handling big data generated in STI management and control. Blockchain is a distributed ledger that can store data while providing trust and integrity assurance. Blockchain can be a solution to data integrity and can add more security to the STI. This survey initially explores the vehicular network and STI in detail and sheds light on the blockchain and FL with real-world implementations. Then, FL and blockchain applications in the Vehicular Ad Hoc Network (VANET) environment from security and privacy perspectives are discussed in detail. In the end, the paper focuses on the current research challenges and future research directions related to integrating FL and blockchain for vehicular networks.

Journal ArticleDOI
01 Mar 2022-Sensors
TL;DR: The paper summarizes the security evolution in legacy mobile networks and concludes with their security problems and the most essential 6G application services and their security requirements.
Abstract: After the implementation of 5G technology, academia and industry started researching sixth-generation wireless network technology (6G). 6G is expected to be implemented around the year 2030. It will offer a significant experience for everyone by enabling hyper-connectivity between people and everything. In addition, it is expected to extend mobile communication possibilities beyond what earlier generations could develop. Several potential technologies are predicted to serve as the foundation of 6G networks. These include upcoming and current technologies such as post-quantum cryptography, artificial intelligence (AI), machine learning (ML), enhanced edge computing, molecular communication, THz, visible light communication (VLC), and distributed ledger (DL) technologies such as blockchain. From a security and privacy perspective, these developments need a reconsideration of prior traditional security methods. Novel authentication, encryption, access control, communication, and malicious activity detection methods must satisfy the more stringent requirements of future networks. In addition, new security approaches are necessary to ensure trustworthiness and privacy. This paper provides insights into the critical problems and difficulties related to the security, privacy, and trust issues of 6G networks. Moreover, the standard technologies and the security challenges of each technology are clarified. This paper introduces the 6G security architecture and its improvements over the 5G architecture. We also introduce the security issues and challenges of the 6G physical layer. In addition, the AI/ML layers and the proposed security solution in each layer are studied. The paper summarizes the security evolution of legacy mobile networks, their security problems, and the most essential 6G application services together with their security requirements. Finally, this paper provides a complete discussion of 6G networks' trustworthiness and solutions.

Journal ArticleDOI
28 Jan 2022-Sensors
TL;DR: In this paper, a detailed review of the development of distributed acoustic sensors and their newest scientific applications is presented, covering most areas of human activities, such as the engineering, material, and humanitarian sciences, geophysics, culture, biology, and applied mechanics.
Abstract: This work presents a detailed review of the development of distributed acoustic sensors (DAS) and their newest scientific applications. It covers most areas of human activities, such as the engineering, material, and humanitarian sciences, geophysics, culture, biology, and applied mechanics. It also provides the theoretical basis for most well-known DAS techniques and unveils the features that characterize each particular group of applications. After providing a summary of research achievements, the paper develops an initial perspective of the future work and determines the most promising DAS technologies that should be improved.

Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: A lightweight DCNN, less complex than other state-of-the-art methods, classifies melanoma skin cancer with high efficiency and could provide an advanced framework for automating the melanoma diagnostic process and expediting identification to save lives.
Abstract: Automatic melanoma detection from dermoscopic skin samples is a very challenging task. However, using a deep learning approach as a machine vision tool can overcome some challenges. This research proposes an automated melanoma classifier based on a deep convolutional neural network (DCNN) to accurately classify malignant vs. benign melanoma. The structure of the DCNN is carefully designed by organizing many layers that are responsible for extracting low to high-level features of the skin images in a unique fashion. Other vital criteria in the design of DCNN are the selection of multiple filters and their sizes, employing proper deep learning layers, choosing the depth of the network, and optimizing hyperparameters. The primary objective is to propose a lightweight and less complex DCNN than other state-of-the-art methods to classify melanoma skin cancer with high efficiency. For this study, dermoscopic images containing different cancer samples were obtained from the International Skin Imaging Collaboration datastores (ISIC 2016, ISIC2017, and ISIC 2020). We evaluated the model based on accuracy, precision, recall, specificity, and F1-score. The proposed DCNN classifier achieved accuracies of 81.41%, 88.23%, and 90.42% on the ISIC 2016, 2017, and 2020 datasets, respectively, demonstrating high performance compared with the other state-of-the-art networks. Therefore, this proposed approach could provide a less complex and advanced framework for automating the melanoma diagnostic process and expediting the identification process to save a life.