Showing papers in "Sensors in 2021"
[...]
TL;DR: In this paper, a vision-based real-time system that can detect social distancing violations and send nonintrusive audio-visual cues using state-of-the-art deep learning models is proposed.
Abstract: Social distancing (SD) is an effective measure to prevent the spread of the infectious Coronavirus Disease 2019 (COVID-19). However, a lack of spatial awareness may cause unintentional violations of this new measure. Against this backdrop, we propose an active surveillance system to slow the spread of COVID-19 by warning individuals in a region-of-interest. Our contribution is twofold. First, we introduce a vision-based real-time system that can detect SD violations and send non-intrusive audio-visual cues using state-of-the-art deep-learning models. Second, we define a novel critical social density value and show that the chance of SD violation occurrence can be held near zero if the pedestrian density is kept under this value. The proposed system is also ethically fair: it does not record data nor target individuals, and no human supervisor is present during the operation. The proposed system was evaluated across real-world datasets.
39 citations
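The density-threshold idea can be illustrated with a short sketch: given pedestrian positions projected onto ground-plane coordinates, count the pairwise violations of a minimum separation and compare the pedestrian density in the region of interest against a critical value. This is a minimal sketch under our own assumptions; the function names, the 2 m separation, and the numerical critical density are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist

def count_sd_violations(positions_m, min_dist_m=2.0):
    """Count pedestrian pairs closer than min_dist_m.

    positions_m: (N, 2) ground-plane coordinates in metres, e.g. obtained
    by projecting detector bounding-box foot points with a homography.
    """
    if len(positions_m) < 2:
        return 0
    return int(np.sum(pdist(positions_m) < min_dist_m))

def density_warning(positions_m, roi_area_m2, critical_density):
    """Return True when the pedestrian density exceeds the critical value."""
    return len(positions_m) / roi_area_m2 > critical_density

# Example: 5 pedestrians in a 100 m^2 region of interest, with an assumed
# critical density of 0.04 persons/m^2.
pts = np.array([[0, 0], [1.5, 0], [5, 5], [8, 2], [9, 9]], dtype=float)
print(count_sd_violations(pts), density_warning(pts, 100.0, 0.04))
```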
[...]
TL;DR: In this paper, Wang et al. propose a local–global multiple correlation filters (LGCF) tracking algorithm for edge computing systems capturing moving targets, such as vehicles and pedestrians.
Abstract: Visual object tracking is a significant technology for camera-based sensor network applications. Multilayer convolutional features comprehensively used in correlation filter (CF)-based tracking algorithms have achieved excellent performance. However, there are tracking failures in some challenging situations, because ordinary features are not able to represent the object appearance variations well and the correlation filters are updated irrationally. In this paper, we propose a local–global multiple correlation filters (LGCF) tracking algorithm for edge computing systems capturing moving targets, such as vehicles and pedestrians. First, we construct a global correlation filter model with deep convolutional features, and choose a horizontal or vertical division according to the aspect ratio to build two local filters with hand-crafted features. Then, we propose a local–global collaborative strategy to exchange information between the local and global correlation filters. This strategy can avoid wrong learning of the object appearance model. Finally, we propose a time-space peak-to-sidelobe ratio (TSPSR) to evaluate the stability of the current CF. When the estimated results of the current CF are not reliable, the Kalman filter redetection (KFR) model is enabled to recapture the object. The experimental results show that our presented algorithm achieves better performance on OTB-2013 and OTB-2015 compared with the other 12 latest tracking algorithms. Moreover, our algorithm handles various challenges in object tracking well.
32 citations
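The peak-to-sidelobe ratio (PSR) underlying the TSPSR criterion can be computed from a correlation filter response map as sketched below; the size of the excluded window around the peak is an assumption, and the temporal averaging that makes it a time-space PSR is not reproduced here.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - mean(sidelobe)) / std(sidelobe) of a 2-D response map."""
    r, c = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r, c]
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, r - exclude):r + exclude + 1,
         max(0, c - exclude):c + exclude + 1] = False   # drop the peak region
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

# A low ratio signals an unreliable CF estimate, which in the paper's scheme
# would trigger the Kalman filter redetection (KFR) branch.
resp = np.random.rand(64, 64) * 0.1
resp[32, 32] = 1.0
print(peak_to_sidelobe_ratio(resp))
```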
[...]
TL;DR: In this article, an innovative method called BCAoMID-F (Binarized Common Areas of Maximum Image Differences-Fusion) is proposed to extract features of thermal images of three angle grinders.
Abstract: The paper presents an analysis and classification method to evaluate the working condition of angle grinders by means of infrared (IR) thermography and IR image processing. An innovative method called BCAoMID-F (Binarized Common Areas of Maximum Image Differences—Fusion) is proposed in this paper. This method is used to extract features of thermal images of three angle grinders. The computed features are 1-element or 256-element vectors. Feature vectors are the sum of pixels of matrix V or PCA of matrix V or histogram of matrix V. Three different cases of thermal images were considered: healthy angle grinder, angle grinder with 1 blocked air inlet, angle grinder with 2 blocked air inlets. The classification of feature vectors was carried out using two classifiers: Support Vector Machine and Nearest Neighbor. Total recognition efficiency for 3 classes (TRAG) was in the range of 98.5–100%. The presented technique is efficient for fault diagnosis of electrical devices and electric power tools.
31 citations
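Our reading of the binarize-and-intersect idea can be sketched roughly as follows: take absolute differences between thermal images, binarize them, intersect the binary maps into a common-areas matrix V, and reduce V to a 1-element (sum of pixels) or 256-element (histogram) feature vector. The threshold and the exact fusion rule are assumptions, not the published BCAoMID-F algorithm.

```python
import numpy as np

def bcaomid_like_features(images, threshold=30):
    """Loose illustration of BCAoMID-F-style feature extraction.

    images: two or more 2-D uint8 thermal images of the same device state.
    """
    diffs = [np.abs(images[i].astype(int) - images[j].astype(int))
             for i in range(len(images)) for j in range(i + 1, len(images))]
    binary = [d > threshold for d in diffs]          # binarized differences
    common = np.logical_and.reduce(binary)           # common areas
    V = np.where(common, np.mean(diffs, axis=0), 0).astype(np.uint8)
    feat_sum = np.array([V.sum()])                               # 1 element
    feat_hist, _ = np.histogram(V, bins=256, range=(0, 256))     # 256 elements
    return feat_sum, feat_hist

imgs = [np.random.randint(0, 256, (120, 160), dtype=np.uint8) for _ in range(3)]
s, h = bcaomid_like_features(imgs)
print(s.shape, h.shape)   # feature vectors for an SVM or nearest-neighbor classifier
```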
[...]
TL;DR: In this article, a proof-of-concept label-free electrochemical immunoassay for the rapid detection of SARS-CoV-2 virus via the spike surface protein was presented.
Abstract: The outbreak of the coronavirus disease (COVID-19) pandemic caused by the novel coronavirus (SARS-CoV-2) has been declared an international public health crisis. It is essential to develop diagnostic tests that can quickly identify infected individuals to limit the spread of the virus and assign treatment options. Herein, we report a proof-of-concept label-free electrochemical immunoassay for the rapid detection of SARS-CoV-2 virus via the spike surface protein. The assay consists of a graphene working electrode functionalized with anti-spike antibodies. The concept of the immunosensor is to detect the signal perturbation obtained from ferri/ferrocyanide measurements after binding of the antigen during 45 min of incubation with a sample. The absolute change in the [Fe(CN)6]3-/4- current upon increasing antigen concentrations on the immunosensor surface was used to determine the detection range of the spike protein. The sensor was able to detect a specific signal above 260 nM (20 µg/mL) of subunit 1 of recombinant spike protein. Additionally, it was able to detect SARS-CoV-2 at a concentration of 5.5 × 105 PFU/mL, which is within the physiologically relevant concentration range. The novel immunosensor has a significantly faster analysis time than the standard qPCR and is operated by a portable device which can enable on-site diagnosis of infection.
30 citations
[...]
TL;DR: In this paper, a new machine learning-based infrastructure is introduced to analyze and monitor the output data of smart meters and determine whether that data is genuine or fake; the proposed infrastructure also validates the amount of data lost via communication channels and the internet connection.
Abstract: The modern control infrastructure that manages and monitors the communication between smart machines represents the most effective way to increase the efficiency of industrial environments, such as smart grids. Cyber-physical systems utilize embedded software and the internet to connect and control the smart machines that are addressed by the internet of things (IoT). These cyber-physical systems are the basis of the fourth industrial revolution, known as Industry 4.0. In particular, Industry 4.0 relies heavily on the IoT and smart sensors such as smart energy meters. Reliability and security represent the main challenges facing Industry 4.0 implementation. This paper introduces a new infrastructure based on machine learning to analyze and monitor the output data of smart meters and to investigate whether this data is real or fake. Fake data can result from hacking or from inefficient meters, since the industrial environment affects meter efficiency through temperature, humidity, and noise signals. Furthermore, the proposed infrastructure validates the amount of data lost via communication channels and the internet connection. A decision tree is utilized as an effective machine learning algorithm to carry out both regression and classification of the meters' data. Data monitoring is carried out on an industrial digital twin platform. The results of the proposed infrastructure provide reliable and effective industrial decisions that enhance investments in Industry 4.0.
28 citations
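The decision-tree step described above can be illustrated with a minimal scikit-learn sketch that labels meter readings as genuine or fake from a few simple features; the feature set and the synthetic data are assumptions for demonstration only, not the paper's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical features per reading: [kWh, temperature, humidity, noise level].
n = 2000
X_real = rng.normal([5.0, 25.0, 40.0, 0.1], [1.0, 3.0, 5.0, 0.05], (n, 4))
X_fake = rng.normal([9.0, 25.0, 40.0, 0.4], [3.0, 3.0, 5.0, 0.20], (n, 4))
X = np.vstack([X_real, X_fake])
y = np.array([0] * n + [1] * n)          # 0 = genuine reading, 1 = fake reading

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The same estimator family (DecisionTreeRegressor) covers the regression task mentioned in the abstract, e.g. predicting the expected consumption so that deviations can be flagged.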
[...]
TL;DR: In this paper, the authors proposed a deep learning-based people detection system utilizing the YOLOv3 algorithm to count the number of persons in a specific area; the number of persons and the status of the air conditioners are published via the internet to the dashboard of the IoT platform.
Abstract: Worldwide, energy consumption and saving represent the main challenges for all sectors, most importantly in the industrial and domestic sectors. The internet of things (IoT) is a new technology that establishes the core of Industry 4.0. The IoT enables the sharing of signals between devices and machines via the internet. In addition, the IoT system enables the utilization of artificial intelligence (AI) techniques to manage and control the signals between different machines based on intelligent decisions. The paper's innovation is to introduce a deep learning- and IoT-based approach to control the operation of air conditioners in order to reduce energy consumption. To achieve such an ambitious target, we have proposed a deep learning-based people detection system utilizing the YOLOv3 algorithm to count the number of persons in a specific area. Accordingly, the operation of the air conditioners could be optimally managed in a smart building. Furthermore, the number of persons and the status of the air conditioners are published via the internet to the dashboard of the IoT platform. The proposed system enhances decision making about energy consumption. To affirm the efficacy and effectiveness of the proposed approach, intensive test scenarios are simulated in a specific smart building considering the existence of air conditioners. The simulation results emphasize that the proposed deep learning-based recognition algorithm can accurately detect the number of persons in the specified area, thanks to its ability to model highly non-linear relationships in data. The detection status can also be successfully published on the dashboard of the IoT platform. Another vital application of the proposed promising approach is in the remote management of diverse controllable devices.
27 citations
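The count-and-publish loop can be sketched as below. The detector itself is not shown; `detections` stands in for the output of any YOLOv3-style person detector, and the dashboard URL and payload fields are placeholders, not the paper's IoT platform.

```python
import requests   # a plain HTTP POST stands in for the IoT platform client

def publish_occupancy(detections, dashboard_url, room="room-1"):
    """Count persons from detector output and push the status to a dashboard.

    detections: iterable of (class_name, confidence) pairs from a
    YOLOv3-style detector run on the latest camera frame.
    dashboard_url: hypothetical REST endpoint of the IoT platform.
    """
    count = sum(1 for cls, conf in detections if cls == "person" and conf > 0.5)
    ac_on = count > 0                       # illustrative occupancy rule
    payload = {"room": room, "persons": count, "ac_on": ac_on}
    requests.post(dashboard_url, json=payload, timeout=5)
    return count, ac_on

# Example with mocked detections (network call not executed here):
# publish_occupancy([("person", 0.9), ("person", 0.7), ("chair", 0.8)],
#                   "https://iot.example.com/api/occupancy")
```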
[...]
TL;DR: As discussed in this paper, a biosensor is an integrated receptor-transducer device that converts a biological response into an electrical signal; the transduction process can transform biological recognition signals into electrochemical, electrical, optical, gravimetric, or acoustic signals.
Abstract: A biosensor is an integrated receptor-transducer device, which can convert a biological response into an electrical signal. The design and development of biosensors have taken center stage for researchers and scientists in the recent decade owing to the wide range of biosensor applications, such as health care and disease diagnosis, environmental monitoring, water and food quality monitoring, and drug delivery. The main challenges involved in biosensor progress are (i) the efficient capturing of biorecognition signals and the transformation of these signals into electrochemical, electrical, optical, gravimetric, or acoustic signals (transduction process), (ii) enhancing transducer performance, i.e., increasing sensitivity, shortening response time, improving reproducibility, and lowering detection limits even to detect individual molecules, and (iii) miniaturization of the biosensing devices using micro- and nano-fabrication technologies. Those challenges can be met through the integration of sensing technology with nanomaterials, which range from zero- to three-dimensional, possessing a high surface-to-volume ratio, good conductivities, shock-bearing abilities, and color tunability. Nanomaterials (NMs) employed in the fabrication of nanobiosensors include nanoparticles (NPs) (high stability and high carrier capacity), nanowires (NWs) and nanorods (NRs) (capable of high detection sensitivity), carbon nanotubes (CNTs) (large surface area, high electrical and thermal conductivity), and quantum dots (QDs) (color tunability). Furthermore, these nanomaterials can themselves act as transduction elements. This review summarizes the evolution of biosensors, the types of biosensors based on their receptors and transducers, and modern approaches employed in biosensors using nanomaterials such as NPs (e.g., noble metal NPs and metal oxide NPs), NWs, NRs, CNTs, QDs, and dendrimers, and their recent advancement in biosensing technology with the expansion of nanotechnology.
27 citations
[...]
TL;DR: This paper explores how well deep learning models trained on chest CT images can diagnose COVID-19-infected people in a fast and automated process, and proposes a transfer learning strategy using custom-sized inputs tailored to each deep architecture to achieve the best performance.
Abstract: This paper explores how well deep learning models trained on chest CT images can diagnose COVID-19 infected people in a fast and automated process. To this end, we adopted advanced deep network architectures and proposed a transfer learning strategy using custom-sized input tailored for each deep architecture to achieve the best performance. We conducted extensive sets of experiments on two CT image datasets, namely, the SARS-CoV-2 CT-scan and the COVID19-CT. The results show superior performances for our models compared with previous studies. Our best models achieved average accuracy, precision, sensitivity, specificity, and F1-score values of 99.4%, 99.6%, 99.8%, 99.6%, and 99.4% on the SARS-CoV-2 dataset, and 92.9%, 91.3%, 93.7%, 92.2%, and 92.5% on the COVID19-CT dataset, respectively. For better interpretability of the results, we applied visualization techniques to provide visual explanations for the models' predictions. Feature visualizations of the learned features show well-separated clusters representing CT images of COVID-19 and non-COVID-19 cases. Moreover, the visualizations indicate that our models are not only capable of identifying COVID-19 cases but also provide accurate localization of the COVID-19-associated regions, as indicated by well-trained radiologists.
27 citations
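The custom-sized-input transfer learning idea can be sketched in Keras as below; the backbone (ResNet50), the 448x448 input size, and the classification head are illustrative assumptions rather than the authors' exact architectures or hyperparameters.

```python
import tensorflow as tf

def build_covid_ct_classifier(input_size=(448, 448)):
    """ImageNet-pretrained backbone with a custom-sized input and a binary
    COVID / non-COVID head (a sketch, not the paper's exact model)."""
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=(*input_size, 3))
    base.trainable = True                       # fine-tune the whole backbone
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

build_covid_ct_classifier().summary()
```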
[...]
TL;DR: In this paper, a fault detection and identification (FDI) method for quadcopter blades based on airframe vibration signals is proposed using the airborne acceleration sensor; it integrates multi-axis data information and effectively detects and identifies quadcopter blade faults through Long Short-Term Memory (LSTM) network models.
Abstract: Quadcopters are widely used in a variety of military and civilian mission scenarios. Real-time online detection of the abnormal state of a quadcopter is vital to the safety of the aircraft. Existing data-driven fault detection methods usually require numerous sensors to collect data. However, quadcopter airframe space is limited; a large number of sensors cannot be loaded, meaning that it is difficult to use additional sensors to capture fault signals for quadcopters. In this paper, without additional sensors, a Fault Detection and Identification (FDI) method for quadcopter blades based on airframe vibration signals is proposed using the airborne acceleration sensor. This method integrates multi-axis data information and effectively detects and identifies quadcopter blade faults through Long Short-Term Memory (LSTM) network models. Through flight experiments, the quadcopter triaxial accelerometer data are first collected as airframe vibration signals. Then, the wavelet packet decomposition method is employed to extract data features, and the standard deviations of the wavelet packet coefficients are used to form the feature vector. Finally, the LSTM-based FDI model is constructed for quadcopter blade FDI. The results show that the method can effectively detect and identify quadcopter blade faults with better FDI performance and higher model accuracy compared with the Back Propagation (BP) neural network-based FDI model.
27 citations
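The wavelet-packet feature step can be illustrated with PyWavelets: decompose each accelerometer axis, take the standard deviation of each terminal node's coefficients, and stack the results into a per-window feature vector for the LSTM classifier. The wavelet family, decomposition level, and window length below are assumptions.

```python
import numpy as np
import pywt

def wavelet_packet_features(signal, wavelet="db4", level=3):
    """Standard deviation of the coefficients of each terminal node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.std(node.data)
                     for node in wp.get_level(level, order="natural")])

def window_features(acc_xyz, wavelet="db4", level=3):
    """Concatenate the features of the three accelerometer axes of a window."""
    return np.concatenate([wavelet_packet_features(axis, wavelet, level)
                           for axis in acc_xyz])

# Example: one window of triaxial vibration data (3 axes x 400 samples).
window = np.random.randn(3, 400)
print(window_features(window).shape)   # 3 axes x 2**3 nodes = 24 features
```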
[...]
TL;DR: In this article, the authors propose a generic HAR framework for smartphone sensor data based on Long Short-Term Memory (LSTM) networks for time-series domains, together with a hybrid LSTM network to improve recognition performance.
Abstract: Human Activity Recognition (HAR) employing inertial motion data has gained considerable momentum in recent years, both in research and industrial applications. From the abstract perspective, this has been driven by an acceleration in the building of intelligent and smart environments and systems that cover all aspects of human life including healthcare, sports, manufacturing, commerce, etc. Such environments and systems necessitate and subsume activity recognition, aimed at recognizing the actions, characteristics, and goals of one or more individuals from a temporal series of observations streamed from one or more sensors. Due to the reliance of conventional Machine Learning (ML) techniques on handcrafted features in the extraction process, current research suggests that deep-learning approaches are more applicable to automated feature extraction from raw sensor data. In this work, a generic HAR framework for smartphone sensor data is proposed, based on Long Short-Term Memory (LSTM) networks for time-series domains. Four baseline LSTM networks are comparatively studied to analyze the impact of using different kinds of smartphone sensor data. In addition, a hybrid LSTM network called 4-layer CNN-LSTM is proposed to improve recognition performance. The HAR method is evaluated on the public UCI-HAR smartphone dataset through various combinations of sample generation processes (OW and NOW) and validation protocols (10-fold and LOSO cross validation). Moreover, Bayesian optimization techniques are used in this study since they are advantageous for tuning the hyperparameters of each LSTM network. The experimental results indicate that the proposed 4-layer CNN-LSTM network performs well in activity recognition, enhancing the average accuracy by up to 2.24% compared to prior state-of-the-art approaches.
25 citations
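A minimal Keras sketch of a CNN-LSTM stack for windowed smartphone sensor data is shown below; the layer sizes, the 128-sample window, the 9 input channels, and the 6 activity classes mirror common UCI-HAR setups but are assumptions, not the paper's Bayesian-optimized configuration.

```python
import tensorflow as tf

def build_cnn_lstm(window=128, channels=9, n_classes=6):
    """Convolutional front-end followed by an LSTM for HAR windows."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, channels)),
        tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.Conv1D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

build_cnn_lstm().summary()
```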
[...]
TL;DR: In this article, the authors propose a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM), which proved efficient, delivered better accuracy, and can run on lightweight computational devices.
Abstract: Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved to be efficient, with better accuracy, and can work on lightweight computational devices. The proposed model is efficient in maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. The HAM10000 dataset is used, and the proposed method outperformed the other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
[...]
TL;DR: In this paper, the authors present a review of the current literature concerning predictive maintenance and intelligent sensors in smart factories, focusing on contemporary trends to provide an overview of future research challenges and classification, using burst analysis, systematic review methodology, co-occurrence analysis of keywords, and cluster analysis.
Abstract: With the arrival of new technologies in modern smart factories, automated predictive maintenance is also related to production robotisation. Intelligent sensors make it possible to obtain an ever-increasing amount of data, which must be analysed efficiently and effectively to support increasingly complex systems' decision-making and management. The paper aims to review the current literature concerning predictive maintenance and intelligent sensors in smart factories. We focused on contemporary trends to provide an overview of future research challenges and classification. The paper used burst analysis, systematic review methodology, co-occurrence analysis of keywords, and cluster analysis. The results show the increasing number of papers related to key researched concepts. The importance of predictive maintenance is growing over time in relation to Industry 4.0 technologies. We proposed Smart and Intelligent Predictive Maintenance (SIPM) based on the full-text analysis of relevant papers. The paper's main contribution is the summary and overview of current trends in intelligent sensors used for predictive maintenance in smart factories.
[...]
TL;DR: The Azure Kinect, the successor of Kinect v1 and Kinect v2, is thoroughly evaluated in this paper, including its performance in indoor and outdoor environments under direct and indirect sun conditions.
Abstract: The Azure Kinect is the successor of Kinect v1 and Kinect v2. In this paper we perform a brief data analysis and comparison of all Kinect versions with a focus on precision (repeatability) and various aspects of noise of these three sensors. Then we thoroughly evaluate the new Azure Kinect; namely its warm-up time, precision (and sources of its variability), accuracy (thoroughly, using a robotic arm), reflectivity (using 18 different materials), and the multipath and flying-pixel phenomena. Furthermore, we validate its performance in both indoor and outdoor environments, including direct and indirect sun conditions. We conclude with a discussion of its improvements in the context of the evolution of the Kinect sensor. It was shown that it is crucial to choose well-designed experiments to measure accuracy, since the RGB and depth cameras are not aligned. Our measurements confirm the officially stated values, namely standard deviation ≤17 mm and distance error <11 mm at distances of up to 3.5 m from the sensor in all four supported modes. The device, however, has to be warmed up for at least 40-50 min to give stable results. Due to the time-of-flight technology, the Azure Kinect cannot be reliably used in direct sunlight. Therefore, it is suitable mostly for indoor applications.
[...]
TL;DR: In this article, two artificial intelligence-based maximum power point tracking systems are proposed for grid-connected photovoltaic units, one based on an optimized fuzzy logic control using genetic algorithm and particle swarm optimization, and the other based on the genetic algorithm-based artificial neural network.
Abstract: This paper addresses the improvement of maximum power point tracking under variations of the environmental conditions, and hence the improvement of photovoltaic efficiency. Rather than the traditional methods of maximum power point tracking, artificial intelligence is utilized to design a high-performance maximum power point tracking control system. In this paper, two artificial intelligence-based maximum power point tracking systems are proposed for grid-connected photovoltaic units. The first design is based on an optimized fuzzy logic control using a genetic algorithm and particle swarm optimization for the maximum power point tracking system. In turn, the second design depends on a genetic algorithm-based artificial neural network. Each of the two artificial intelligence-based systems has its privileged response according to the solar radiation and temperature levels. Then, a novel combination of the two designs is introduced to maximize the efficiency of the maximum power point tracking system. The novelty of this paper is to employ a metaheuristic optimization technique with well-known artificial intelligence techniques to provide a better tracking system for harvesting the maximum possible power from photovoltaic (PV) arrays. To affirm the efficiency of the proposed tracking systems, their simulation results are compared with some conventional tracking methods from the literature under different conditions. The findings emphasize their superiority in terms of tracking speed and output DC power, which also improves photovoltaic system efficiency.
[...]
TL;DR: In this paper, a nonlinear dynamics and robust positioning control of the over-actuated autonomous underwater vehicle (AUV) under the effects of ocean current and model uncertainties is presented.
Abstract: Underwater vehicles (UVs) are subjected to various environmental disturbances due to ocean currents, propulsion systems, and un-modeled disturbances. In practice, it is very challenging to design a control system that keeps UVs at the desired static position permanently under these conditions. Therefore, in this study, the nonlinear dynamics and robust positioning control of an over-actuated autonomous underwater vehicle (AUV) under the effects of ocean current and model uncertainties are presented. First, a motion equation of the over-actuated AUV under the effects of ocean current disturbances is established, and a trajectory generation of the over-actuated AUV heading angle is constructed based on the line-of-sight (LOS) algorithm. Second, a dynamic positioning (DP) control system based on motion control and allocation control is proposed. For this, motion control of the over-actuated AUV based on dynamic sliding mode control (DSMC) theory is adopted to improve the system robustness under the effects of the ocean current and model uncertainties. In addition, the stability of the system is proved based on Lyapunov criteria. Then, using the generalized forces generated by the motion control module, two different methods for the optimal allocation control module, the least squares (LS) method and the quadratic programming (QP) method, are developed to distribute a proper thrust to each thruster of the over-actuated AUV. Simulation studies are conducted to examine the effectiveness and robustness of the proposed DP controller. The results show that the proposed DP controller using the QP algorithm provides higher stability with smaller steady-state error and stronger robustness.
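The least-squares branch of the allocation module can be stated compactly: with a thruster configuration matrix B mapping individual thruster forces u to generalized forces tau (tau = B u), the minimum-norm allocation is u = pinv(B) tau. The 4-thruster planar configuration below is a made-up example, not the vehicle studied in the paper; the QP variant would add per-thruster saturation bounds as constraints.

```python
import numpy as np

def allocate_thrust_ls(B, tau):
    """Least-squares (minimum-norm) thrust allocation: solve tau = B @ u."""
    return np.linalg.pinv(B) @ tau

# Hypothetical planar configuration: 4 thrusters contributing to surge (X),
# sway (Y) and yaw moment (N).
B = np.array([
    [1.0,  1.0, 0.0,  0.0],    # surge force from thrusters 1 and 2
    [0.0,  0.0, 1.0,  1.0],    # sway force from thrusters 3 and 4
    [0.3, -0.3, 0.5, -0.5],    # yaw moment lever arms (m)
])
tau = np.array([20.0, 5.0, 2.0])   # generalized forces commanded by the DSMC
u = allocate_thrust_ls(B, tau)
print(u, np.allclose(B @ u, tau))
```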
[...]
TL;DR: In this article, a comprehensive review on system design for battery-free and energy-aware WSNs, making use of ambient energy or wireless energy transmission, is presented, which gives a deep insight in energy management methods as well as possibilities for energy saving on node and network level.
Abstract: Nowadays, wireless sensor networks are becoming increasingly important in several sectors including industry, transportation, environment and medicine. This trend is reinforced by the spread of Internet of Things (IoT) technologies in almost all sectors. Autonomous energy supply is thereby an essential aspect, as it determines the flexible positioning and easy maintenance that are decisive for the acceptance of this technology, its wide use and sustainability. Significant improvements made in recent years have shown interesting possibilities for realizing energy-aware wireless sensor nodes (WSNs) by designing manifold and highly efficient energy converters and reducing the energy consumption of hardware, software and communication protocols. Using only a few of these techniques or focusing on only one aspect is not sufficient to realize practicable and market-relevant solutions. This paper therefore provides a comprehensive review of system design for battery-free and energy-aware WSNs, making use of ambient energy or wireless energy transmission. It addresses energy supply strategies and gives deep insight into energy management methods as well as possibilities for energy saving at the node and network level. The aim is to provide deep insight into system design and increase awareness of suitable techniques for realizing battery-free and energy-aware wireless sensor nodes.
[...]
TL;DR: In this paper, an efficient cyberphysical platform for the smart management of smart territories is presented, which facilitates the implementation of data acquisition and data management methods, as well as data representation and dashboard configuration.
Abstract: This paper presents an efficient cyberphysical platform for the smart management of smart territories. It is efficient because it facilitates the implementation of data acquisition and data management methods, as well as data representation and dashboard configuration. The platform allows for the use of any type of data source, ranging from the measurements of multi-functional IoT sensing devices to relational and non-relational databases. It is also smart because it incorporates a complete artificial intelligence suite for data analysis; it includes techniques for data classification, clustering, forecasting, optimization, visualization, etc. It is also compatible with the edge computing concept, allowing for the distribution of intelligence and the use of intelligent sensors. The concept of smart cities is evolving and adapting to new applications; the trend to create intelligent neighbourhoods, districts or territories is becoming increasingly popular, as opposed to the previous approach of managing an entire megacity. In this paper, the platform is presented, and its architecture and functionalities are described. Moreover, its operation has been validated in a case study where the bike renting service of Paris—Velib’ Metropole has been managed. This platform could enable smart territories to develop adapted knowledge management systems, adapt them to new requirements, use multiple types of data, and execute efficient computational and artificial intelligence algorithms. The platform optimizes the decisions taken by human experts through explainable artificial intelligence models that obtain data from IoT sensors, databases, the Internet, etc. The global intelligence of the platform could potentially coordinate its decision-making processes with intelligent nodes installed in the edge, which would use the most advanced data processing techniques.
[...]
TL;DR: In this article, the authors reviewed the development of state estimation and future development trends and provided a more detailed overview of model-driven, data-driven and hybrid-driven approaches.
Abstract: State estimation is widely used in various automated systems, including IoT systems, unmanned systems, robots, etc. In traditional state estimation, measurement data are instantaneous and processed in real time. With modern systems' development, sensors can obtain more and more signals and store them. Therefore, how to use these measurement big data to improve the performance of state estimation has become a hot research issue in this field. This paper reviews the development of state estimation and future development trends. First, we review the model-based state estimation methods, including the Kalman filter, such as the extended Kalman filter (EKF), unscented Kalman filter (UKF), cubature Kalman filter (CKF), etc. Particle filters and Gaussian mixture filters that can handle mixed Gaussian noise are discussed, too. These methods have high requirements for models, while it is not easy to obtain accurate system models in practice. The emergence of robust filters, the interacting multiple model (IMM), and adaptive filters are also mentioned here. Secondly, the current research status of data-driven state estimation methods is introduced based on network learning. Finally, the main research results for hybrid filters obtained in recent years are summarized and discussed, which combine model-based methods and data-driven methods. This paper is based on state estimation research results and provides a more detailed overview of model-driven, data-driven, and hybrid-driven approaches. The main algorithm of each method is provided so that beginners can have a clearer understanding. Additionally, it discusses the future development trends for researchers in state estimation.
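For readers new to the model-based family listed above, a minimal linear Kalman filter predict/update cycle for a 1-D constant-velocity model is sketched below; the EKF, UKF and CKF variants replace the linear propagation with linearized or sigma-point/cubature propagation. The model and noise values are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter."""
    x_pred = F @ x                              # state prediction
    P_pred = F @ P @ F.T + Q                    # covariance prediction
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)       # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# 1-D constant-velocity example: state = [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                      # only position is measured
Q = 1e-3 * np.eye(2)
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
for k in range(50):
    z = np.array([0.5 * k * dt + 0.2 * np.random.randn()])
    x, P = kalman_step(x, P, z, F, H, Q, R)
print(x)   # estimated position and velocity after 50 measurements
```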
[...]
TL;DR: In this paper, a modified sparrow search algorithm named CASSA has been presented to deal with the problem of UAV route planning in complex three-dimensional (3D) flight environment.
Abstract: The unmanned aerial vehicle (UAV) route planning problem mainly centers on the process of calculating the best route between the departure point and target point, as well as avoiding obstructions en route to avoid collisions within a given flight area. A highly efficient route planning approach is required for this complex high-dimensional optimization problem. However, many algorithms are infeasible or have low efficiency, particularly in the complex three-dimensional (3D) flight environment. In this paper, a modified sparrow search algorithm named CASSA is presented to deal with this problem. Firstly, the 3D task space model and the UAV route planning cost functions are established, and the problem of route planning is transformed into a multi-dimensional function optimization problem. Secondly, a chaotic strategy is introduced to enhance the diversity of the population of the algorithm, and an adaptive inertia weight is used to balance the convergence rate and exploration capabilities of the algorithm. Finally, the Cauchy-Gaussian mutation strategy is adopted to enhance the capability of the algorithm to escape stagnation. The results of simulation demonstrate that the routes generated by CASSA are preferable to those of the sparrow search algorithm (SSA), particle swarm optimization (PSO), artificial bee colony (ABC), and whale optimization algorithm (WOA) under identical environments, which means that CASSA is more efficient for solving the UAV route planning problem when taking all kinds of constraints into consideration.
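Two of the ingredients named in the abstract can be sketched generically: a decaying adaptive inertia weight and a Cauchy-Gaussian mutation applied to a candidate waypoint. The decay schedule and the mixing rule below are illustrative assumptions, not CASSA's exact formulas.

```python
import numpy as np

def adaptive_inertia(iteration, max_iter, w_max=0.9, w_min=0.4):
    """Linearly decaying inertia weight: explore early, converge late."""
    return w_max - (w_max - w_min) * iteration / max_iter

def cauchy_gaussian_mutation(position, iteration, max_iter, scale=1.0):
    """Perturb a candidate with a mixture of Cauchy and Gaussian noise;
    heavier Cauchy tails early in the search help escape stagnation."""
    lam = iteration / max_iter                 # shifts weight toward Gaussian
    cauchy = np.random.standard_cauchy(position.shape)
    gauss = np.random.standard_normal(position.shape)
    return position + scale * ((1 - lam) * cauchy + lam * gauss)

# Example: mutate one 3D waypoint (x, y, altitude) of a candidate route.
waypoint = np.array([120.0, 80.0, 35.0])
print(adaptive_inertia(10, 100), cauchy_gaussian_mutation(waypoint, 10, 100))
```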
[...]
TL;DR: In this article, the authors provide an end-to-end review of the hardware and software methods required for sensor fusion object detection in autonomous driving applications, and conclude by highlighting some of the challenges in the sensor fusion field and proposing possible future research directions for automated driving systems.
Abstract: With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.
[...]
TL;DR: In this article, the authors present a comprehensive study of AV technologies and identify the main advantages, disadvantages, and challenges of AV communication technologies based on three main categories: long range, medium range, and short range.
Abstract: The Department of Transport in the United Kingdom recorded 25,080 motor vehicle fatalities in 2019. This situation stresses the need for an intelligent transport system (ITS) that improves road safety and security by avoiding human errors with the use of autonomous vehicles (AVs). Therefore, this survey discusses the current development of two main components of an ITS: (1) gathering of AVs' surrounding data using sensors; and (2) enabling vehicular communication technologies. First, the paper discusses various sensors and their role in AVs. Then, various communication technologies for AVs to facilitate vehicle-to-everything (V2X) communication are discussed. Based on the transmission range, these technologies are grouped into three main categories: long-range, medium-range and short-range. The short-range group presents the development of Bluetooth, ZigBee and ultra-wide band communication for AVs. The medium-range group examines the properties of dedicated short-range communications (DSRC). Finally, the long-range group presents cellular vehicle-to-everything (C-V2X) and 5G new radio (5G-NR). An important characteristic which differentiates each category and its suitable application is latency. This research presents a comprehensive study of AV technologies and identifies the main advantages, disadvantages, and challenges.
[...]
TL;DR: In this paper, the authors provide a panoramic view of the enabling technologies proposed to facilitate 6G and introduce emerging 6G applications such as multi-sensory-extended reality, digital replica, and more.
Abstract: The 5G wireless communication network is currently faced with the challenge of limited data speed exacerbated by the proliferation of billions of data-intensive applications. To address this problem, researchers are developing cutting-edge technologies for the envisioned 6G wireless communication standards to satisfy the escalating wireless services demands. Though some of the candidate technologies in the 5G standards will apply to 6G wireless networks, key disruptive technologies that will guarantee the desired quality of physical experience to achieve ubiquitous wireless connectivity are expected in 6G. This article first provides a foundational background on the evolution of different wireless communication standards to have a proper insight into the vision and requirements of 6G. Second, we provide a panoramic view of the enabling technologies proposed to facilitate 6G and introduce emerging 6G applications such as multi-sensory–extended reality, digital replica, and more. Next, the technology-driven challenges, social, psychological, health and commercialization issues posed to actualizing 6G, and the probable solutions to tackle these challenges are discussed extensively. Additionally, we present new use cases of the 6G technology in agriculture, education, media and entertainment, logistics and transportation, and tourism. Furthermore, we discuss the multi-faceted communication capabilities of 6G that will contribute significantly to global sustainability and how 6G will bring about a dramatic change in the business arena. Finally, we highlight the research trends, open research issues, and key take-away lessons for future research exploration in 6G wireless communication.
[...]
TL;DR: In this paper, feature fusion using a deep learning technique assured satisfactory performance in identifying COVID-19 compared to the immediately relevant works, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%.
Abstract: Currently, COVID-19 is considered to be the most dangerous and deadly disease for the human body caused by the novel coronavirus. In December 2019, the coronavirus spread rapidly around the world, thought to have originated in Wuhan, China, and it is responsible for a large number of deaths. Earlier detection of COVID-19 through accurate diagnosis, particularly for cases with no obvious symptoms, may decrease the patient's death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted by the histogram of oriented gradients (HOG) and a convolutional neural network (CNN) from X-ray images were fused to develop the classification model through training by a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and reduced noise in the images. A watershed segmentation algorithm was used in order to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion using the deep learning technique assured satisfactory performance in terms of identifying COVID-19 compared to the immediately relevant works, with a testing accuracy of 99.49%, specificity of 95.7% and sensitivity of 93.65%. When compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%).
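The fusion step can be illustrated with scikit-image and Keras: extract HOG features and deep CNN features from the same chest X-ray and concatenate them into one vector for the downstream classifier. The backbone (VGG16), HOG parameters, and image size are assumptions for demonstration, not the paper's exact pipeline.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import preprocess_input
from skimage.feature import hog
from skimage.transform import resize

cnn = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  pooling="avg", input_shape=(224, 224, 3))

def fused_features(gray_image):
    """Concatenate HOG and CNN feature vectors of one grayscale X-ray."""
    img = resize(gray_image, (224, 224), anti_aliasing=True)
    hog_vec = hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    rgb = np.repeat(img[..., None], 3, axis=-1)[None, ...] * 255.0
    cnn_vec = cnn.predict(preprocess_input(rgb), verbose=0)[0]
    return np.concatenate([hog_vec, cnn_vec])   # fed to the fusion classifier

print(fused_features(np.random.rand(512, 512)).shape)
```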
[...]
TL;DR: In this article, the authors analyze noncontact body temperature measurement issues from both clinical and metrological points of view, with the aim of improving body temperature measurement accuracy, estimating the uncertainty of body temperature measurement in the field, and proposing a screening decision rule for the prevention of the spread of COVID-19.
Abstract: The need to measure body temperature contactlessly and quickly during the COVID-19 pandemic emergency has led to the widespread use of infrared thermometers, thermal imaging cameras and thermal scanners as an alternative to traditional contact clinical thermometers. However, the limits and issues of noncontact temperature measurement devices are not well known, and the technical-scientific literature itself sometimes provides conflicting reference values for the body and skin temperature of healthy subjects. To limit the risk of contagion, national authorities have set the obligation to measure the body temperature of workers at the entrance to the workplace. In this paper, the authors analyze noncontact body temperature measurement issues from both clinical and metrological points of view with the aim to (i) improve body temperature measurement accuracy; (ii) estimate the uncertainty of body temperature measurement in the field; and (iii) propose a screening decision rule for the prevention of the spread of COVID-19. The approach adopted in this paper takes into account both the traditional instrumental uncertainty sources and the clinical-medical ones related to the subjectivity of the measurand. A proper screening protocol for body temperature measurement considering the role of uncertainty is essential to correctly choose the threshold temperature value and measurement method for access to critical places during the COVID-19 pandemic emergency.
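The role of uncertainty in the screening rule can be made concrete with a small worked example: combine the instrumental and clinical standard uncertainties in quadrature, expand by a coverage factor, and flag a subject only when the reading exceeds the threshold by more than the expanded uncertainty. The numerical budget below is illustrative, not the paper's estimate, and other decision rules (e.g. flagging whenever the reading plus the uncertainty exceeds the threshold) are equally possible.

```python
import math

def expanded_uncertainty(u_components, k=2):
    """Root-sum-of-squares combination times the coverage factor k
    (k = 2 corresponds to roughly 95% coverage)."""
    return k * math.sqrt(sum(u * u for u in u_components))

def screening_decision(measured_c, threshold_c, u_components):
    """Flag the subject only if the reading exceeds the threshold even after
    subtracting the expanded uncertainty (minimizes false alarms)."""
    U = expanded_uncertainty(u_components)
    return measured_c - U > threshold_c, U

# Illustrative uncertainty budget in degrees Celsius: instrument, emissivity
# setting, distance/ambient effects, intra-subject variability.
u = [0.2, 0.1, 0.15, 0.3]
print(screening_decision(37.9, 37.5, u))   # (False, U is about 0.81 degC)
```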
[...]
TL;DR: In this paper, the authors present a comprehensive collection of recently published research articles on Structural Health Monitoring (SHM) campaigns performed by means of Distributed Optical Fiber Sensors (DOFS).
Abstract: The present work is a comprehensive collection of recently published research articles on Structural Health Monitoring (SHM) campaigns performed by means of Distributed Optical Fiber Sensors (DOFS). The latter are cutting-edge strain, temperature and vibration monitoring tools with a large potential pool, namely their minimal intrusiveness, accuracy, ease of deployment and more. Its most state-of-the-art feature, though, is the ability to perform measurements with very small spatial resolutions (as small as 0.63 mm). This review article intends to introduce, inform and advise the readers on various DOFS deployment methodologies for the assessment of the residual ability of a structure to continue serving its intended purpose. By collecting in a single place these recent efforts, advancements and findings, the authors intend to contribute to the goal of collective growth towards an efficient SHM. The current work is structured in a manner that allows for the single consultation of any specific DOFS application field, i.e., laboratory experimentation, the built environment (bridges, buildings, roads, etc.), geotechnical constructions, tunnels, pipelines and wind turbines. Beforehand, a brief section was constructed around the recent progress on the study of the strain transfer mechanisms occurring in the multi-layered sensing system inherent to any DOFS deployment (different kinds of fiber claddings, coatings and bonding adhesives). Finally, a section is also dedicated to ideas and concepts for those novel DOFS applications which may very well represent the future of SHM.
[...]
TL;DR: In this article, the authors review current advances in this field with a special focus on polymer/carbon nanotubes (CNTs) based sensors and explain underlying principles for pressure and strain sensors, highlighting the influence of the manufacturing processes on the achieved sensing properties and the manifold possibilities to realize sensors using different shapes, dimensions and measurement procedures.
Abstract: In the last decade, significant developments of flexible and stretchable force sensors have been witnessed in order to satisfy the demands of several applications in robotics, prosthetics, wearables and structural health monitoring, bringing decisive advantages due to their manifold customizability, easy integration and outstanding performance in terms of sensor properties and low-cost realization. In this paper, we review current advances in this field with a special focus on polymer/carbon nanotubes (CNTs) based sensors. Based on the electrical properties of polymer/CNTs nanocomposites, we explain the underlying principles of pressure and strain sensors. We highlight the influence of the manufacturing processes on the achieved sensing properties and the manifold possibilities to realize sensors using different shapes, dimensions and measurement procedures. After an intensive review of the realized sensor performances in terms of sensitivity, stretchability, stability and durability, we describe perspectives and provide novel trends for future developments in this intriguing field.
[...]
TL;DR: In this paper, a deep learning approach based on an attentional convolutional network is proposed that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE.
Abstract: Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to the high intra-class variation. Traditional approaches for this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured in a controlled condition but fail to perform as well on more challenging datasets with more image variation and partial faces. In recent years, several works proposed an end-to-end framework for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
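A minimal PyTorch sketch of the spatial-attention idea, i.e. letting the network weight facial regions before classification, is shown below; the layer sizes and the way the attention map is applied are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Produce a per-pixel attention map and reweight the feature map."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.attn(x)           # emphasize informative face regions

class AttentionalCNN(nn.Module):
    """Small convolutional classifier with one attention stage."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.attention = SpatialAttention(64)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):
        return self.classifier(self.attention(self.features(x)))

logits = AttentionalCNN()(torch.randn(2, 1, 48, 48))   # 48x48 grayscale faces
print(logits.shape)                                     # torch.Size([2, 7])
```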
[...]
TL;DR: In this article, the temporal context of the time series data is chosen as the useful aspect of the data that is passed through the network for learning by exploiting the compositional locality of the temporal data at each level of the network.
Abstract: A neural network that matches with a complex data function is likely to boost the classification performance as it is able to learn the useful aspect of the highly varying data. In this work, the temporal context of the time series data is chosen as the useful aspect of the data that is passed through the network for learning. By exploiting the compositional locality of the time series data at each level of the network, shift-invariant features can be extracted layer by layer at different time scales. The temporal context is made available to the deeper layers of the network by a set of data processing operations based on the concatenation operation. A matching learning algorithm for the revised network is described in this paper. It uses gradient routing in the backpropagation path. The framework as proposed in this work attains better generalization without overfitting the network to the data, as the weights can be pretrained appropriately. It can be used end-to-end with multivariate time series data in their raw form, without the need for manual feature crafting or data transformation. Data experiments with electroencephalogram signals and human activity signals show that with the right amount of concatenation in the deeper layers of the proposed network, it can improve the performance in signal classification.
[...]
TL;DR: Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that the MB-C-BSIF achieves superior and competitive results in unconStrained situations when compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion.
Abstract: Single-Sample Face Recognition (SSFR) is a computer vision challenge. In this scenario, there is only one example from each individual on which to train the system, making it difficult to identify persons in unconstrained environments, mainly when dealing with changes in facial expression, posture, lighting, and occlusion. This paper discusses the relevance of an original method for SSFR, called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF), which exploits several kinds of features, namely, local, regional, global, and textured-color characteristics. First, the MB-C-BSIF method decomposes a facial image into three channels (e.g., red, green, and blue), then it divides each channel into equal non-overlapping blocks to select the local facial characteristics that are consequently employed in the classification phase. Finally, the identity is determined by calculating the similarities among the characteristic vectors adopting a distance measurement of the K-nearest neighbors (K-NN) classifier. Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that the MB-C-BSIF achieves superior and competitive results in unconstrained situations when compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion. The average classification accuracies are 96.17% and 99% for the AR database with two specific protocols (i.e., Protocols I and II, respectively), and 38.01% for the challenging LFW database. These performances are clearly superior to those obtained by state-of-the-art methods. Furthermore, the proposed method uses algorithms based only on simple and elementary image processing operations that do not imply higher computational costs as in holistic, sparse or deep learning methods, making it ideal for real-time identification.
[...]
TL;DR: In this paper, Wang et al. present the mechanism and correlation of pain and stress, and their assessment and detection approaches with medical devices and wearable sensors; various physiological signals (i.e., heart activity, brain activity, muscle activity, electrodermal activity, respiration, blood volume pulse, and skin temperature) and behavioral signals are organized for wearable sensor detection.
Abstract: Pain is a subjective feeling; it is a sensation that every human being must have experienced all their life. Yet, its mechanism and the way to become immune to it are still questions to be answered. This review presents the mechanism and correlation of pain and stress, and their assessment and detection approaches with medical devices and wearable sensors. Various physiological signals (i.e., heart activity, brain activity, muscle activity, electrodermal activity, respiration, blood volume pulse, and skin temperature) and behavioral signals are organized for wearable sensor detection. By reviewing the wearable sensors used in the healthcare domain, we hope to find a way for a wearable healthcare-monitoring system to be applied to pain and stress detection. Since pain leads to multiple consequences or symptoms, such as muscle tension and depression, that are stress related, there is a chance to find a new approach for chronic pain detection using daily life sensors or devices. Then, by integrating modern computing techniques, there is a chance to handle pain and stress management issues.