
Showing papers in "Sensors in 2021"


Journal ArticleDOI
05 Feb 2021-Sensors
TL;DR: A biosensor is an integrated receptor-transducer device that converts a biological response into an electrical signal; as discussed in this paper, biorecognition signals can be transformed into electrochemical, electrical, optical, gravimetric, or acoustic signals.
Abstract: A biosensor is an integrated receptor-transducer device, which can convert a biological response into an electrical signal. The design and development of biosensors have taken center stage for researchers in the recent decade owing to the wide range of biosensor applications, such as health care and disease diagnosis, environmental monitoring, water and food quality monitoring, and drug delivery. The main challenges involved in biosensor progress are (i) the efficient capturing of biorecognition signals and the transformation of these signals into electrochemical, electrical, optical, gravimetric, or acoustic signals (transduction process), (ii) enhancing transducer performance, i.e., increasing sensitivity, shortening response time, improving reproducibility, and lowering detection limits even to the level of individual molecules, and (iii) miniaturization of the biosensing devices using micro- and nano-fabrication technologies. These challenges can be met through the integration of sensing technology with nanomaterials, which range from zero- to three-dimensional and possess a high surface-to-volume ratio, good conductivity, shock-bearing ability, and color tunability. Nanomaterials (NMs) employed in the fabrication of nanobiosensors include nanoparticles (NPs) (high stability and high carrier capacity), nanowires (NWs) and nanorods (NRs) (capable of high detection sensitivity), carbon nanotubes (CNTs) (large surface area, high electrical and thermal conductivity), and quantum dots (QDs) (color tunability). Furthermore, these nanomaterials can themselves act as transduction elements. This review summarizes the evolution of biosensors, the types of biosensors based on their receptors and transducers, and modern approaches employed in biosensors using nanomaterials such as NPs (e.g., noble metal NPs and metal oxide NPs), NWs, NRs, CNTs, QDs, and dendrimers, and their recent advancement in biosensing technology with the expansion of nanotechnology.

401 citations


Journal ArticleDOI
18 Apr 2021-Sensors
TL;DR: In this article, the authors proposed a computerized process of classifying skin disease through deep learning based MobileNet V2 and Long Short Term Memory (LSTM), which proved to be efficient with better accuracy that can work on lightweight computational devices.
Abstract: Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposed a computerized process of classifying skin disease through deep-learning-based MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved to be efficient, achieving better accuracy while being able to run on lightweight computational devices. The proposed model is efficient in maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Network (CNN), Very Deep Convolutional Networks for Large-Scale Image Recognition developed by Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. The HAM10000 dataset is used, and the proposed method has outperformed other methods with more than 85% accuracy. Its robustness in recognizing the affected region much faster, with almost 2× fewer computations than the conventional MobileNet model, results in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action. It helps the patient and dermatologists identify the type of disease from the affected region's image at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
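The grey-level co-occurrence matrix mentioned in the abstract is straightforward to sketch. This minimal NumPy version (the offset, number of grey levels, and the contrast descriptor are illustrative choices, not the paper's exact configuration) counts co-occurring pixel pairs and derives a texture feature:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0, symmetric=True, normed=True):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    if symmetric:
        m = m + m.T          # count each pair in both directions
    if normed:
        m /= m.sum()         # turn counts into joint probabilities
    return m

def contrast(p):
    """Classic GLCM contrast descriptor: sum of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

img = np.array([[0, 0, 1],
                [0, 1, 2],
                [2, 3, 3]])
p = glcm(img, levels=4)
print(round(contrast(p), 3))  # → 0.667
```

In practice, several offsets and descriptors (contrast, homogeneity, energy, correlation) would be stacked into one texture feature vector.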

251 citations


Journal ArticleDOI
01 Oct 2021-Sensors
TL;DR: The use of nanomaterials such as nanoparticles, nanotubes, nanowires, and nanocomposites provided catalytic activity, enhanced sensing elements immobilization, promoted faster electron transfer, and increased reliability and accuracy of the reported EIS sensors as discussed by the authors.
Abstract: Electrochemical impedance spectroscopy (EIS) is a powerful technique used for the analysis of interfacial properties related to bio-recognition events occurring at the electrode surface, such as antibody-antigen recognition, substrate-enzyme interaction, or whole-cell capturing. Thus, EIS could be exploited in several important biomedical diagnosis and environmental applications. However, EIS is one of the most complex electrochemical methods; therefore, this review introduces the basic concepts and the theoretical background of the impedimetric technique along with the state of the art of impedimetric biosensors and the impact of nanomaterials on EIS performance. The use of nanomaterials such as nanoparticles, nanotubes, nanowires, and nanocomposites provided catalytic activity, enhanced the immobilization of sensing elements, promoted faster electron transfer, and increased the reliability and accuracy of the reported EIS sensors. Thus, EIS was used for the effective quantitative and qualitative detection of pathogens, DNA, cancer-associated biomarkers, etc. Through this review article, an intensive literature review is provided to highlight the impact of nanomaterials on enhancing the analytical features of impedimetric biosensors.
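As a hedged illustration of the theoretical background such reviews cover, the impedance of a simplified Randles equivalent circuit can be evaluated numerically. The circuit and all parameter values below are textbook assumptions, not figures from the paper, and the Warburg diffusion element is omitted for brevity:

```python
import numpy as np

def randles_impedance(freq_hz, r_s=100.0, r_ct=1_000.0, c_dl=1e-6):
    """Complex impedance of a simplified Randles cell: solution
    resistance R_s in series with the parallel combination of a
    charge-transfer resistance R_ct and a double-layer capacitance C_dl."""
    w = 2 * np.pi * np.asarray(freq_hz)
    z_c = 1 / (1j * w * c_dl)                 # capacitor impedance
    return r_s + (r_ct * z_c) / (r_ct + z_c)  # series + parallel combination

f = np.logspace(-1, 5, 7)        # 0.1 Hz ... 100 kHz sweep
z = randles_impedance(f)
# At high frequency the capacitor shorts out R_ct, so |Z| approaches R_s;
# at low frequency |Z| approaches R_s + R_ct.
print(abs(z[-1]))  # ≈ 100 (R_s)
```

A biorecognition event (e.g., antigen binding) typically shows up as a change in the fitted R_ct, which is why the Nyquist semicircle diameter is the quantity most impedimetric biosensors track.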

178 citations


Journal ArticleDOI
18 Mar 2021-Sensors
TL;DR: In this article, the authors provide an end-to-end review of the hardware and software methods required for sensor fusion object detection in autonomous driving applications, and conclude by highlighting some of the challenges in the sensor fusion field and proposing possible future research directions for automated driving systems.
Abstract: With the significant advancement of sensor and communication technology and the reliable application of obstacle detection techniques and algorithms, automated driving is becoming a pivotal technology that can revolutionize the future of transportation and mobility. Sensors are fundamental to the perception of vehicle surroundings in an automated driving system, and the use and performance of multiple integrated sensors can directly determine the safety and feasibility of automated driving vehicles. Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented. This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles, primarily focusing on a large selection of vision cameras, LiDAR sensors, and radar sensors and the various conditions in which such sensors may operate in practice. We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial sensors. We also summarize the three main approaches to sensor fusion and review current state-of-the-art multi-sensor fusion techniques and algorithms for object detection in autonomous driving applications. The current paper, therefore, provides an end-to-end review of the hardware and software methods required for sensor fusion object detection. We conclude by highlighting some of the challenges in the sensor fusion field and propose possible future research directions for automated driving systems.

162 citations


Journal ArticleDOI
27 Apr 2021-Sensors
TL;DR: In this paper, the authors propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE.
Abstract: Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to the high intra-class variation. Traditional approaches for this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured in a controlled condition but fail to perform as well on more challenging datasets with more image variation and partial faces. In recent years, several works proposed an end-to-end framework for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
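A minimal sketch of the spatial-attention idea (not the paper's actual module): score each spatial location of a feature map, softmax the scores into an attention map, and pool the features with it, so that "important" regions dominate the pooled representation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

def spatial_attention_pool(features, scores):
    """Weight an H×W×C feature map by a softmax attention map over the
    H×W grid and pool it to a single C-dimensional vector."""
    h, w, c = features.shape
    attn = softmax(scores.ravel()).reshape(h, w, 1)  # sums to 1 over H×W
    return (features * attn).sum(axis=(0, 1))

feat = np.random.default_rng(0).normal(size=(4, 4, 8))
scores = np.zeros((4, 4))
scores[1, 1] = 10.0                       # one "important" region
v = spatial_attention_pool(feat, scores)
print(v.shape)                            # → (8,)
```

With a near-one-hot attention map like this, the pooled vector is essentially the feature at the attended location; in a trained network the scores would come from a learned sub-network rather than being set by hand.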

162 citations


Journal ArticleDOI
28 May 2021-Sensors
TL;DR: In this paper, a review of the recent literature on machine learning in agriculture is presented, where a plethora of machine learning algorithms are used, with those belonging to Artificial Neural Networks being more efficient.
Abstract: The digital transformation of agriculture has evolved various aspects of management into artificial intelligent systems for the sake of making value from the ever-increasing data originated from numerous sources. A subset of artificial intelligence, namely machine learning, has a considerable potential to handle numerous challenges in the establishment of knowledge-based farming systems. The present study aims at shedding light on machine learning in agriculture by thoroughly reviewing the recent scholarly literature based on keywords’ combinations of “machine learning” along with “crop management”, “water management”, “soil management”, and “livestock management”, and in accordance with PRISMA guidelines. Only journal papers published within 2018–2020 were considered eligible. The results indicated that this topic pertains to different disciplines that favour convergence research at the international level. Furthermore, crop management was observed to be at the centre of attention. A plethora of machine learning algorithms were used, with those belonging to Artificial Neural Networks being more efficient. In addition, maize and wheat as well as cattle and sheep were the most investigated crops and animals, respectively. Finally, a variety of sensors, mounted on satellites and unmanned ground and aerial vehicles, have been utilized as a means of getting reliable input data for the data analyses. It is anticipated that this study will constitute a beneficial guide to all stakeholders towards enhancing awareness of the potential advantages of using machine learning in agriculture and contributing to a more systematic research on this topic.

138 citations


Journal ArticleDOI
08 Jan 2021-Sensors
TL;DR: In this article, a proof-of-concept label-free electrochemical immunoassay for the rapid detection of SARS-CoV-2 virus via the spike surface protein was presented.
Abstract: The outbreak of the coronavirus disease (COVID-19) pandemic caused by the novel coronavirus (SARS-CoV-2) has been declared an international public health crisis. It is essential to develop diagnostic tests that can quickly identify infected individuals to limit the spread of the virus and assign treatment options. Herein, we report a proof-of-concept label-free electrochemical immunoassay for the rapid detection of SARS-CoV-2 virus via the spike surface protein. The assay consists of a graphene working electrode functionalized with anti-spike antibodies. The concept of the immunosensor is to detect the signal perturbation obtained from ferri/ferrocyanide measurements after binding of the antigen during 45 min of incubation with a sample. The absolute change in the [Fe(CN)6]3-/4- current upon increasing antigen concentrations on the immunosensor surface was used to determine the detection range of the spike protein. The sensor was able to detect a specific signal above 260 nM (20 µg/mL) of subunit 1 of recombinant spike protein. Additionally, it was able to detect SARS-CoV-2 at a concentration of 5.5 × 105 PFU/mL, which is within the physiologically relevant concentration range. The novel immunosensor has a significantly faster analysis time than the standard qPCR and is operated by a portable device which can enable on-site diagnosis of infection.
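The 260 nM ≈ 20 µg/mL pairing quoted above is a plain unit conversion. The molar mass used below (~76.9 kDa) is simply the value implied by that pairing, not a figure stated in the abstract:

```python
def nM_to_ug_per_mL(c_nM, molar_mass_g_per_mol):
    """Mass concentration from molar concentration:
    c[g/L] = c[mol/L] * M[g/mol], and 1 g/L = 1000 µg/mL,
    so c[µg/mL] = c[nM] * M * 1e-6."""
    return c_nM * molar_mass_g_per_mol * 1e-6

# Molar mass implied by the paper's 260 nM ≈ 20 µg/mL pairing
# (hypothetical value for the recombinant S1 construct used).
S1_MOLAR_MASS = 76_900  # g/mol

print(round(nM_to_ug_per_mL(260, S1_MOLAR_MASS), 1))  # → 20.0
```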

132 citations


Journal ArticleDOI
22 Mar 2021-Sensors
TL;DR: In this paper, the authors proposed a method for brain tumor classification using an ensemble of deep features and machine learning classifiers, where the top three deep features which perform well on several machine-learning classifiers are selected and concatenated as an ensemble-of-deep features which is then fed into several machine learning classes to predict the final output.
Abstract: Brain tumor classification plays an important role in clinical diagnosis and effective treatment. In this work, we propose a method for brain tumor classification using an ensemble of deep features and machine learning classifiers. In our proposed framework, we adopt the concept of transfer learning and use several pre-trained deep convolutional neural networks to extract deep features from brain magnetic resonance (MR) images. The extracted deep features are then evaluated by several machine learning classifiers. The top three deep features which perform well on several machine learning classifiers are selected and concatenated as an ensemble of deep features, which is then fed into several machine learning classifiers to predict the final output. To evaluate the different kinds of pre-trained models as a deep feature extractor, machine learning classifiers, and the effectiveness of an ensemble of deep features for brain tumor classification, we use three different brain magnetic resonance imaging (MRI) datasets that are openly accessible from the web. Experimental results demonstrate that an ensemble of deep features can help improve performance significantly, and in most cases, support vector machine (SVM) with radial basis function (RBF) kernel outperforms other machine learning classifiers, especially for large datasets.
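The concatenate-then-classify structure can be sketched with scikit-learn. Here the three "backbone" feature blocks are synthetic stand-ins for the pre-trained CNN features (the paper extracts them from real MR images), so only the ensemble-of-features-into-SVM-RBF pipeline mirrors the method:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for deep features from three pre-trained backbones.
X, y = make_classification(n_samples=300, n_features=96,
                           n_informative=30, random_state=0)
feat_a, feat_b, feat_c = X[:, :32], X[:, 32:64], X[:, 64:]

# "Ensemble of deep features": concatenate the per-backbone vectors.
X_ens = np.hstack([feat_a, feat_b, feat_c])

X_tr, X_te, y_tr, y_te = train_test_split(X_ens, y, test_size=0.3,
                                          random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
acc = clf.score(scaler.transform(X_te), y_te)
print(f"{acc:.2f}")
```

In the paper's framework, each backbone's features are first scored individually across several classifiers, and only the top three feature sets are concatenated; that selection step is omitted here for brevity.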

128 citations


Journal ArticleDOI
11 Jan 2021-Sensors
TL;DR: How well deep learning models trained on chest CT images can diagnose COVID-19 infected people in a fast and automated process is explored and a transfer learning strategy using custom-sized input tailored for each deep architecture to achieve the best performance is proposed.
Abstract: This paper explores how well deep learning models trained on chest CT images can diagnose COVID-19 infected people in a fast and automated process. To this end, we adopted advanced deep network architectures and proposed a transfer learning strategy using custom-sized input tailored for each deep architecture to achieve the best performance. We conducted extensive sets of experiments on two CT image datasets, namely, the SARS-CoV-2 CT-scan and the COVID19-CT. The results show superior performances for our models compared with previous studies. Our best models achieved average accuracy, precision, sensitivity, specificity, and F1-score values of 99.4%, 99.6%, 99.8%, 99.6%, and 99.4% on the SARS-CoV-2 dataset, and 92.9%, 91.3%, 93.7%, 92.2%, and 92.5% on the COVID19-CT dataset, respectively. For better interpretability of the results, we applied visualization techniques to provide visual explanations for the models' predictions. Feature visualizations of the learned features show well-separated clusters representing CT images of COVID-19 and non-COVID-19 cases. Moreover, the visualizations indicate that our models are not only capable of identifying COVID-19 cases but also provide accurate localization of the COVID-19-associated regions, as indicated by well-trained radiologists.

127 citations


Journal ArticleDOI
30 Jul 2021-Sensors
TL;DR: The Lateral Flow Immunoassay (LFIA) is by far one of the most successful analytical platforms to perform the on-site detection of target substances as mentioned in this paper, which can be considered as a sort of lab-in-a-hand and, together with other point-of-need tests, has represented a paradigm shift from sample-to-lab to lab-to-sample aiming to improve decision making and turnaround time.
Abstract: The Lateral Flow Immunoassay (LFIA) is by far one of the most successful analytical platforms to perform the on-site detection of target substances. LFIA can be considered as a sort of lab-in-a-hand and, together with other point-of-need tests, has represented a paradigm shift from sample-to-lab to lab-to-sample aiming to improve decision making and turnaround time. The features of LFIAs made them a very attractive tool in clinical diagnostic where they can improve patient care by enabling more prompt diagnosis and treatment decisions. The rapidity, simplicity, relative cost-effectiveness, and the possibility to be used by nonskilled personnel contributed to the wide acceptance of LFIAs. As a consequence, from the detection of molecules, organisms, and (bio)markers for clinical purposes, the LFIA application has been rapidly extended to other fields, including food and feed safety, veterinary medicine, environmental control, and many others. This review aims to provide readers with a 10-years overview of applications, outlining the trends for the main application fields and the relative compounded annual growth rates. Moreover, future perspectives and challenges are discussed.

125 citations


Journal ArticleDOI
18 Apr 2021-Sensors
TL;DR: In this article, an innovative method called BCAoMID-F (Binarized Common Areas of Maximum Image Differences-Fusion) is proposed to extract features of thermal images of three angle grinders.
Abstract: The paper presents an analysis and classification method to evaluate the working condition of angle grinders by means of infrared (IR) thermography and IR image processing. An innovative method called BCAoMID-F (Binarized Common Areas of Maximum Image Differences—Fusion) is proposed in this paper. This method is used to extract features of thermal images of three angle grinders. The computed features are 1-element or 256-element vectors. Feature vectors are the sum of pixels of matrix V or PCA of matrix V or histogram of matrix V. Three different cases of thermal images were considered: healthy angle grinder, angle grinder with 1 blocked air inlet, angle grinder with 2 blocked air inlets. The classification of feature vectors was carried out using two classifiers: Support Vector Machine and Nearest Neighbor. Total recognition efficiency for 3 classes (TRAG) was in the range of 98.5–100%. The presented technique is efficient for fault diagnosis of electrical devices and electric power tools.
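The abstract does not spell out the BCAoMID-F algorithm, so the following is only a loose guess at the flavor of its feature extraction: binarize the pixelwise maximum of absolute differences across a set of thermal images and sum the pixels into a 1-element feature (the threshold and all details are assumptions):

```python
import numpy as np

def binarized_max_diff_feature(images, threshold=30):
    """Loose sketch: binarize the pixelwise maximum of absolute
    pairwise image differences, then return the pixel sum as a
    1-element feature."""
    stack = np.stack([img.astype(np.int32) for img in images])
    n = len(stack)
    diffs = [np.abs(stack[i] - stack[j])
             for i in range(n) for j in range(i + 1, n)]
    max_diff = np.maximum.reduce(diffs)          # common maximal differences
    binary = (max_diff > threshold).astype(np.uint8)
    return int(binary.sum())

a = np.zeros((4, 4), dtype=np.uint8)             # baseline thermal frame
b = a.copy(); b[0, 0] = 100                      # strong hot-spot difference
c = a.copy(); c[1, 1] = 10                       # weak difference, below threshold
print(binarized_max_diff_feature([a, b, c]))     # → 1
```

Such a scalar (or a 256-bin histogram of `max_diff`) would then be fed to the SVM or nearest-neighbor classifier, matching the 1-element / 256-element feature vectors the abstract describes.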

Journal ArticleDOI
08 Jan 2021-Sensors
TL;DR: The Azure Kinect as discussed by the authors is the successor of Kinect v1 and Kinect v2 and has been shown to have better performance in both indoor and outdoor environments, including direct and indirect sun conditions.
Abstract: The Azure Kinect is the successor of Kinect v1 and Kinect v2. In this paper we perform brief data analysis and comparison of all Kinect versions with focus on precision (repeatability) and various aspects of noise of these three sensors. Then we thoroughly evaluate the new Azure Kinect; namely its warm-up time, precision (and sources of its variability), accuracy (thoroughly, using a robotic arm), reflectivity (using 18 different materials), and the multipath and flying pixel phenomenon. Furthermore, we validate its performance in both indoor and outdoor environments, including direct and indirect sun conditions. We conclude with a discussion on its improvements in the context of the evolution of the Kinect sensor. It was shown that it is crucial to choose well-designed experiments to measure accuracy, since the RGB and depth camera are not aligned. Our measurements confirm the officially stated values, namely standard deviation ≤17 mm, and distance error <11 mm at distances of up to 3.5 m from the sensor in all four supported modes. The device, however, has to be warmed up for at least 40–50 min to give stable results. Due to the time-of-flight technology, the Azure Kinect cannot be reliably used in direct sunlight. Therefore, it is convenient mostly for indoor applications.

Journal ArticleDOI
08 May 2021-Sensors
TL;DR: In this article, an improved CSPDarkNet53 is introduced into the trunk feature extraction network, which reduces the computing cost of the network and improves the learning ability of the model.
Abstract: To solve the problems of low accuracy, low real-time performance, poor robustness and others caused by the complex environment, this paper proposes a face mask recognition and standard wear detection algorithm based on the improved YOLO-v4. Firstly, an improved CSPDarkNet53 is introduced into the trunk feature extraction network, which reduces the computing cost of the network and improves the learning ability of the model. Secondly, the adaptive image scaling algorithm can reduce computation and redundancy effectively. Thirdly, the improved PANet structure is introduced so that the network has more semantic information in the feature layer. Finally, a face mask detection data set is made according to the standard wearing of masks. Based on the object detection algorithm of deep learning, a variety of evaluation indexes are compared to evaluate the effectiveness of the model. The results of the comparisons show that the mAP of face mask recognition can reach 98.3% and the frame rate is high at 54.57 FPS, which is more accurate compared with existing algorithms.

Journal ArticleDOI
26 Feb 2021-Sensors
TL;DR: In this article, the authors proposed a generic HAR framework for smartphone sensor data, based on Long Short-Term Memory (LSTM) networks for time-series domains, and a hybrid LSTM network was proposed to improve recognition performance.
Abstract: Human Activity Recognition (HAR) employing inertial motion data has gained considerable momentum in recent years, both in research and industrial applications. From the abstract perspective, this has been driven by an acceleration in the building of intelligent and smart environments and systems that cover all aspects of human life including healthcare, sports, manufacturing, commerce, etc. Such environments and systems necessitate and subsume activity recognition, aimed at recognizing the actions, characteristics, and goals of one or more individuals from a temporal series of observations streamed from one or more sensors. Due to the reliance of conventional Machine Learning (ML) techniques on handcrafted features in the extraction process, current research suggests that deep-learning approaches are more applicable to automated feature extraction from raw sensor data. In this work, a generic HAR framework for smartphone sensor data is proposed, based on Long Short-Term Memory (LSTM) networks for time-series domains. Four baseline LSTM networks are comparatively studied to analyze the impact of using different kinds of smartphone sensor data. In addition, a hybrid LSTM network called 4-layer CNN-LSTM is proposed to improve recognition performance. The HAR method is evaluated on UCI-HAR, a public smartphone-based dataset, through various combinations of sample generation processes (OW and NOW) and validation protocols (10-fold and LOSO cross-validation). Moreover, Bayesian optimization techniques are used in this study since they are advantageous for tuning the hyperparameters of each LSTM network. The experimental results indicate that the proposed 4-layer CNN-LSTM network performs well in activity recognition, enhancing the average accuracy by up to 2.24% compared to prior state-of-the-art approaches.
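The OW/NOW sample-generation processes mentioned in the abstract amount to overlapping versus non-overlapping sliding windows over the sensor stream. A minimal sketch (the window size and step below are arbitrary illustrative values, not the paper's settings):

```python
import numpy as np

def windows(signal, size, step):
    """Segment a 1-D time series into fixed-size windows.
    step < size gives overlapping windows (OW);
    step == size gives non-overlapping windows (NOW)."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

acc = np.arange(10.0)               # toy single-axis accelerometer stream
ow  = windows(acc, size=4, step=2)  # overlapping: windows share 50% of samples
now = windows(acc, size=4, step=4)  # non-overlapping: disjoint windows
print(ow.shape, now.shape)          # → (4, 4) (2, 4)
```

Each window would then become one training sample for the CNN-LSTM, with the window's activity label assigned to it.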

Journal ArticleDOI
12 Jul 2021-Sensors
TL;DR: In this article, the authors present a survey of the existing literature in applying deep convolutional neural networks to predict plant diseases from leaf images, and highlight the advantages and disadvantages of different techniques and models.
Abstract: In the modern era, deep learning techniques have emerged as powerful tools in image recognition. Convolutional Neural Networks, one of the deep learning tools, have attained an impressive outcome in this area. Applications such as identifying objects, faces, bones, handwritten digits, and traffic signs signify the importance of Convolutional Neural Networks in the real world. The effectiveness of Convolutional Neural Networks in image recognition has motivated researchers to extend their application in the field of agriculture for recognition of plant species, yield management, weed detection, soil and water management, fruit counting, disease and pest detection, evaluating the nutrient status of plants, and much more. The availability of voluminous research works in applying deep learning models in agriculture leads to difficulty in selecting a suitable model according to the type of dataset and experimental environment. In this manuscript, the authors present a survey of the existing literature in applying deep Convolutional Neural Networks to predict plant diseases from leaf images. This manuscript presents an exemplary comparison of the pre-processing techniques, Convolutional Neural Network models, frameworks, and optimization techniques applied to detect and classify plant diseases using leaf images as a data set. This manuscript also presents a survey of the datasets and performance metrics used to evaluate the efficacy of models. The manuscript highlights the advantages and disadvantages of different techniques and models proposed in the existing literature. This survey will ease the task of researchers working in the field of applying deep learning techniques for the identification and classification of plant leaf diseases.

Journal ArticleDOI
05 Mar 2021-Sensors
TL;DR: In this paper, the authors present a comprehensive collection of recently published research articles on Structural Health Monitoring (SHM) campaigns performed by means of Distributed Optical Fiber Sensors (DOFS).
Abstract: The present work is a comprehensive collection of recently published research articles on Structural Health Monitoring (SHM) campaigns performed by means of Distributed Optical Fiber Sensors (DOFS). The latter are cutting-edge strain, temperature and vibration monitoring tools with a large pool of potential advantages, namely minimal intrusiveness, accuracy, ease of deployment, and more. Their most state-of-the-art feature, though, is the ability to perform measurements with very small spatial resolutions (as small as 0.63 mm). This review article intends to introduce, inform and advise the readers on various DOFS deployment methodologies for the assessment of the residual ability of a structure to continue serving its intended purpose. By collecting in a single place these recent efforts, advancements and findings, the authors intend to contribute to the goal of collective growth towards an efficient SHM. The current work is structured in a manner that allows for the single consultation of any specific DOFS application field, i.e., laboratory experimentation, the built environment (bridges, buildings, roads, etc.), geotechnical constructions, tunnels, pipelines and wind turbines. Beforehand, a brief section was constructed around the recent progress on the study of the strain transfer mechanisms occurring in the multi-layered sensing system inherent to any DOFS deployment (different kinds of fiber claddings, coatings and bonding adhesives). Finally, a section is also dedicated to ideas and concepts for those novel DOFS applications which may very well represent the future of SHM.

Journal ArticleDOI
Guiyun Liu, Shu Cong, Zhongwei Liang, Baihao Peng, Lefeng Cheng
09 Feb 2021-Sensors
TL;DR: In this paper, a modified sparrow search algorithm named CASSA has been presented to deal with the problem of UAV route planning in complex three-dimensional (3D) flight environment.
Abstract: The unmanned aerial vehicle (UAV) route planning problem mainly centers on the process of calculating the best route between the departure point and target point, as well as avoiding obstructions en route to avoid collisions within a given flight area. A highly efficient route planning approach is required for this complex high-dimensional optimization problem. However, many algorithms are infeasible or have low efficiency, particularly in the complex three-dimensional (3D) flight environment. In this paper, a modified sparrow search algorithm named CASSA is presented to deal with this problem. Firstly, the 3D task space model and the UAV route planning cost functions are established, and the problem of route planning is transformed into a multi-dimensional function optimization problem. Secondly, a chaotic strategy is introduced to enhance the diversity of the population of the algorithm, and an adaptive inertia weight is used to balance the convergence rate and exploration capabilities of the algorithm. Finally, the Cauchy-Gaussian mutation strategy is adopted to enhance the capability of the algorithm to get rid of stagnation. The results of simulation demonstrate that the routes generated by CASSA are preferable to those of the sparrow search algorithm (SSA), particle swarm optimization (PSO), artificial bee colony (ABC), and whale optimization algorithm (WOA) under identical environments, which means that CASSA is more efficient for solving the UAV route planning problem when taking all kinds of constraints into consideration.
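Two of the CASSA ingredients can be sketched generically. The linear inertia-weight schedule and the additive Cauchy-Gaussian perturbation below are common textbook forms offered as assumptions, not the paper's exact formulas:

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Inertia weight decaying linearly from w_max to w_min over the run:
    large early (exploration), small late (faster convergence)."""
    return w_max - (w_max - w_min) * t / t_max

def cauchy_gaussian_mutation(x, sigma=0.1):
    """Perturb a candidate with mixed noise: Cauchy (heavy-tailed, helps
    escape stagnation) plus Gaussian (local refinement)."""
    cauchy = rng.standard_cauchy(x.shape)
    gauss = rng.standard_normal(x.shape)
    return x + sigma * (cauchy + gauss)

x = np.array([1.0, 2.0, 3.0])                 # a toy 3-D route waypoint
print(adaptive_inertia(0, 100))               # → 0.9 (start of run)
print(cauchy_gaussian_mutation(x).shape)      # → (3,)
```

In the full algorithm these pieces act inside the sparrow search update loop, with the chaotic strategy used once at initialization to spread the population over the search space.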

Journal ArticleDOI
08 Jul 2021-Sensors
TL;DR: In this paper, the authors discuss the mechanisms of electrochemical glucose sensing with a focus on the different generations of enzymatic-based sensors, their recent advances, and provide an overview of the next generation of non-enzymatic sensors.
Abstract: The detection of glucose is crucial in the management of diabetes and other medical conditions but also crucial in a wide range of industries such as food and beverages. The development of glucose sensors in the past century has allowed diabetic patients to effectively manage their disease and has saved lives. First-generation glucose sensors have considerable limitations in sensitivity and selectivity which has spurred the development of more advanced approaches for both the medical and industrial sectors. The wide range of application areas has resulted in a range of materials and fabrication techniques to produce novel glucose sensors that have higher sensitivity and selectivity, lower cost, and are simpler to use. A major focus has been on the development of enzymatic electrochemical sensors, typically using glucose oxidase. However, non-enzymatic approaches using direct electrochemistry of glucose on noble metals are now a viable approach in glucose biosensor design. This review discusses the mechanisms of electrochemical glucose sensing with a focus on the different generations of enzymatic-based sensors, their recent advances, and provides an overview of the next generation of non-enzymatic sensors. Advancements in manufacturing techniques and materials are key in propelling the field of glucose sensing; however, significant limitations remain, which are highlighted in this review and require addressing to obtain a more stable, sensitive, selective, cost-efficient, real-time glucose sensor.

Journal ArticleDOI
15 Mar 2021-Sensors
TL;DR: In this paper, the authors propose mixed reality education and training for Boeing 737 aircraft maintenance in smart glasses, enhanced with a deep learning speech interaction module that lets trainee engineers control virtual assets and workflow using speech commands, enabling them to operate with both hands.
Abstract: Metaverses embedded in our lives create virtual experiences inside of the physical world. Moving towards metaverses in aircraft maintenance, mixed reality (MR) creates enormous opportunities for interaction with virtual airplanes (digital twins) that deliver a near-real experience while keeping physical distancing during pandemics. 3D twins of modern machines exported to MR can be easily manipulated, shared, and updated, which creates colossal benefits for aviation colleges that still rely on retired models for practice. Therefore, we propose mixed reality education and training of aircraft maintenance for the Boeing 737 in smart glasses, enhanced with a deep learning speech interaction module that allows trainee engineers to control virtual assets and workflow using speech commands, enabling them to operate with both hands. Using a convolutional neural network (CNN) architecture for audio features, with learning and classification parts for command and language identification, the speech module handles intermixed requests in English and Korean, giving corresponding feedback. Evaluation with test data showed high prediction accuracy, averaging 95.7% and 99.6% on the F1-score metric for command and language prediction, respectively. The proposed speech interaction module in the aircraft maintenance metaverse further improved education and training, giving intuitive and efficient control over the operation and enhancing interaction with virtual objects in mixed reality.

Journal ArticleDOI
20 Feb 2021-Sensors
TL;DR: In this paper, the authors present a review of the current literature concerning predictive maintenance and intelligent sensors in smart factories, focusing on contemporary trends to provide an overview of future research challenges and classification, using burst analysis, systematic review methodology, co-occurrence analysis of keywords, and cluster analysis.
Abstract: With the arrival of new technologies in modern smart factories, automated predictive maintenance is also related to production robotisation. Intelligent sensors make it possible to obtain an ever-increasing amount of data, which must be analysed efficiently and effectively to support increasingly complex systems' decision-making and management. The paper aims to review the current literature concerning predictive maintenance and intelligent sensors in smart factories. We focused on contemporary trends to provide an overview of future research challenges and classification. The paper used burst analysis, systematic review methodology, co-occurrence analysis of keywords, and cluster analysis. The results show the increasing number of papers related to key researched concepts. The importance of predictive maintenance is growing over time in relation to Industry 4.0 technologies. We proposed Smart and Intelligent Predictive Maintenance (SIPM) based on the full-text analysis of relevant papers. The paper's main contribution is the summary and overview of current trends in intelligent sensors used for predictive maintenance in smart factories.
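The keyword co-occurrence analysis mentioned above boils down to counting how often keyword pairs appear together across the reviewed papers; that count matrix is what clustering then operates on. A minimal sketch, where the list-of-keyword-lists input format is an assumption for illustration:

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(papers):
    # `papers` is a list of keyword lists, one per paper (illustrative
    # input format). Returns a Counter mapping sorted keyword pairs to
    # the number of papers in which both keywords appear.
    pairs = Counter()
    for keywords in papers:
        # `set` deduplicates within a paper; `sorted` gives a canonical
        # pair ordering so ("a", "b") and ("b", "a") are counted together.
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs
```

Frequently co-occurring pairs form the edges of the keyword network from which clusters (such as the SIPM grouping the paper proposes) can be read off.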

Journal ArticleDOI
05 Jul 2021-Sensors
TL;DR: In this paper, a vision-based real-time system that can detect social distancing violations and send nonintrusive audio-visual cues using state-of-the-art deep learning models is proposed.
Abstract: Social distancing (SD) is an effective measure to prevent the spread of the infectious Coronavirus Disease 2019 (COVID-19). However, a lack of spatial awareness may cause unintentional violations of this new measure. Against this backdrop, we propose an active surveillance system to slow the spread of COVID-19 by warning individuals in a region-of-interest. Our contribution is twofold. First, we introduce a vision-based real-time system that can detect SD violations and send non-intrusive audio-visual cues using state-of-the-art deep-learning models. Second, we define a novel critical social density value and show that the chance of SD violation occurrence can be held near zero if the pedestrian density is kept under this value. The proposed system is also ethically fair: it does not record data nor target individuals, and no human supervisor is present during the operation. The proposed system was evaluated across real-world datasets.
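Once pedestrians are localized on the ground plane, checking for SD violations reduces to a pairwise distance test, and the violation count per unit area is what the paper's critical social density criterion builds on. A minimal sketch, where the 2 m threshold and the (x, y) position format are illustrative assumptions:

```python
import math

def sd_violations(pedestrians, min_dist=2.0):
    # `pedestrians` is a list of (x, y) ground-plane positions, e.g.
    # obtained by projecting detected bounding-box foot points.
    # Returns index pairs of pedestrians closer than `min_dist` meters.
    violations = []
    for i in range(len(pedestrians)):
        for j in range(i + 1, len(pedestrians)):
            if math.dist(pedestrians[i], pedestrians[j]) < min_dist:
                violations.append((i, j))
    return violations
```

In the paper's terms, keeping pedestrian density below the critical value drives the expected size of this violation list toward zero.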

Journal ArticleDOI
10 Feb 2021-Sensors
TL;DR: In this paper, a survey of the field of discrete speech emotion recognition is presented: the authors review available datasets and deep learning approaches, cover conventional machine learning techniques, and then present a multi-aspect comparison between practical neural network approaches to speech emotion classification.
Abstract: The advancements in neural networks and the on-demand need for accurate and near real-time Speech Emotion Recognition (SER) in human–computer interactions make it mandatory to compare available methods and databases in SER to achieve feasible solutions and a firmer understanding of this open-ended problem. The current study reviews deep learning approaches for SER with available datasets, followed by conventional machine learning techniques for speech emotion recognition. Ultimately, we present a multi-aspect comparison between practical neural network approaches in speech emotion recognition. The goal of this study is to provide a survey of the field of discrete speech emotion recognition.

Journal ArticleDOI
06 Jan 2021-Sensors
TL;DR: In this article, the authors review current advances in this field with a special focus on polymer/carbon nanotubes (CNTs) based sensors and explain underlying principles for pressure and strain sensors, highlighting the influence of the manufacturing processes on the achieved sensing properties and the manifold possibilities to realize sensors using different shapes, dimensions and measurement procedures.
Abstract: In the last decade, significant developments in flexible and stretchable force sensors have been witnessed, satisfying the demands of applications in robotics, prosthetics, wearables, and structural health monitoring; these sensors bring decisive advantages owing to their manifold customizability, easy integration, outstanding sensing properties, and low-cost realization. In this paper, we review current advances in this field with a special focus on polymer/carbon nanotube (CNT) based sensors. Based on the electrical properties of polymer/CNT nanocomposites, we explain the underlying principles for pressure and strain sensors. We highlight the influence of the manufacturing processes on the achieved sensing properties and the manifold possibilities to realize sensors using different shapes, dimensions, and measurement procedures. After an intensive review of the realized sensor performances in terms of sensitivity, stretchability, stability, and durability, we describe perspectives and provide novel trends for future developments in this intriguing field.

Journal ArticleDOI
14 Jun 2021-Sensors
TL;DR: The most recent advances in terahertz (THz) imaging with particular attention paid to the optimization and miniaturization of the THz imaging systems are discussed in this article.
Abstract: In this roadmap article, we have focused on the most recent advances in terahertz (THz) imaging, with particular attention paid to the optimization and miniaturization of THz imaging systems. Such systems entail enhanced functionality, reduced power consumption, and increased convenience, thus being geared toward the implementation of THz imaging systems in real operational conditions. The article will touch upon advanced solid-state-based THz imaging systems, including room temperature THz sensors and arrays, as well as their on-chip integration with diffractive THz optical components. We will cover the current state of compact room temperature THz emission sources, both optoelectronic and electrically driven; particular emphasis is attributed to the beam-forming role in THz imaging, THz holography and spatial filtering, THz nano-imaging, and computational imaging. A number of advanced THz techniques, such as light-field THz imaging, homodyne spectroscopy and phase-sensitive spectrometry, THz modulated continuous wave imaging, room temperature THz frequency combs, and passive THz imaging, as well as the use of artificial intelligence in THz data processing and optics development, will be reviewed. This roadmap presents a structured snapshot of current advances in THz imaging as of 2021 and provides an opinion on contemporary scientific and technological challenges in this field, as well as extrapolations of possible further evolution in THz imaging.

Journal ArticleDOI
21 Jan 2021-Sensors
TL;DR: In this article, the authors present a comprehensive study of AV technologies and identify the main advantages, disadvantages, and challenges of AV communication technologies based on three main categories: long range, medium range, and short range.
Abstract: The Department of Transport in the United Kingdom recorded 25,080 motor vehicle fatalities in 2019. This situation stresses the need for an intelligent transport system (ITS) that improves road safety and security by avoiding human errors with the use of autonomous vehicles (AVs). Therefore, this survey discusses the current development of two main components of an ITS: (1) gathering of an AV's surrounding data using sensors; and (2) enabling vehicular communication technologies. First, the paper discusses various sensors and their role in AVs. Then, various communication technologies that facilitate vehicle-to-everything (V2X) communication for AVs are discussed. Based on the transmission range, these technologies are grouped into three main categories: long-range, medium-range, and short-range. The short-range group presents the development of Bluetooth, ZigBee, and ultra-wideband communication for AVs. The medium-range group examines the properties of dedicated short-range communications (DSRC). Finally, the long-range group presents cellular vehicle-to-everything (C-V2X) and 5G new radio (5G-NR). An important characteristic that differentiates each category and its suitable applications is latency. This research presents a comprehensive study of AV technologies and identifies the main advantages, disadvantages, and challenges.

Journal ArticleDOI
03 Feb 2021-Sensors
TL;DR: In this paper, the authors proposed a deep learning-based people detection system utilizing the YOLOv3 algorithm to count the number of persons in a specific area, and the status of the air conditioners are published via the internet to the dashboard of the IoT platform.
Abstract: Worldwide, energy consumption and saving represent the main challenges for all sectors, most importantly the industrial and domestic sectors. The internet of things (IoT) is a new technology that establishes the core of Industry 4.0. The IoT enables the sharing of signals between devices and machines via the internet. In addition, the IoT system enables the utilization of artificial intelligence (AI) techniques to manage and control the signals between different machines based on intelligent decisions. The paper's innovation is to introduce a deep learning and IoT-based approach to control the operation of air conditioners in order to reduce energy consumption. To achieve such an ambitious target, we have proposed a deep learning-based people detection system utilizing the YOLOv3 algorithm to count the number of persons in a specific area. Accordingly, the operation of the air conditioners could be optimally managed in a smart building. Furthermore, the number of persons and the status of the air conditioners are published via the internet to the dashboard of the IoT platform. The proposed system enhances decision making about energy consumption. To affirm the efficacy and effectiveness of the proposed approach, intensive test scenarios are simulated in a specific smart building considering the existence of air conditioners. The simulation results emphasize that the proposed deep learning-based recognition algorithm can accurately detect the number of persons in the specified area, thanks to its ability to model highly non-linear relationships in data. The detection status can also be successfully published on the dashboard of the IoT platform. Another vital application of the proposed promising approach is in the remote management of diverse controllable devices.
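The occupancy-driven control described above reduces to two steps: counting "person" detections from the object detector and mapping the count to an air-conditioner state. A minimal sketch, where the detection tuple format, confidence threshold, and mode thresholds/names are illustrative assumptions rather than the paper's exact logic:

```python
def count_persons(detections, conf_threshold=0.5):
    # `detections` is a list of (class_name, confidence, bbox) tuples,
    # the kind of output a YOLOv3 inference pass typically yields.
    return sum(1 for cls, conf, _ in detections
               if cls == "person" and conf >= conf_threshold)

def ac_setpoint(person_count, on_threshold=1, eco_threshold=5):
    # Toy control policy: off for an empty zone, economy mode for light
    # occupancy, full mode above `eco_threshold` occupants. The resulting
    # state is what would be published to the IoT dashboard.
    if person_count < on_threshold:
        return "off"
    if person_count <= eco_threshold:
        return "eco"
    return "full"
```

Separating counting from control keeps the detector swappable: any model that emits class/confidence tuples can drive the same policy.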

Journal ArticleDOI
20 Feb 2021-Sensors
TL;DR: In this paper, a feature fusion using the deep learning technique assured a satisfactory performance in terms of identifying COVID-19 compared to the immediate, relevant works with a testing accuracy of 99.49%, specificity of 95.7% and sensitivity of 93.65%.
Abstract: Currently, COVID-19 is considered to be the most dangerous and deadly disease for the human body, caused by the novel coronavirus. In December 2019, the coronavirus spread rapidly around the world, thought to have originated in Wuhan, China, and is responsible for a large number of deaths. Early detection of COVID-19 through accurate diagnosis, particularly for cases with no obvious symptoms, may decrease the patient's death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research has proposed a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted by the histogram-oriented gradient (HOG) and convolutional neural network (CNN) from X-ray images were fused to develop the classification model through training by CNN (VGGNet). The modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and reduced noise in the images. A watershed segmentation algorithm was used in order to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. This proposed feature fusion using the deep learning technique assured a satisfactory performance in terms of identifying COVID-19 compared to the immediate, relevant works, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%. When compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%).
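The feature-fusion step described above is, at its core, a concatenation of the handcrafted HOG vector and the learned CNN vector before the classification head. A minimal sketch; the per-vector L2 normalization is an assumption added here to keep the two feature types on a comparable scale, and the paper may normalize differently:

```python
def fuse_features(hog_vec, cnn_vec):
    # Concatenate handcrafted (HOG) and learned (CNN) feature vectors.
    # Each vector is L2-normalized first so neither representation
    # dominates purely by magnitude (an illustrative choice).
    def l2_normalize(v):
        norm = sum(x * x for x in v) ** 0.5 or 1.0  # avoid div-by-zero
        return [x / norm for x in v]
    return l2_normalize(hog_vec) + l2_normalize(cnn_vec)
```

The fused vector then feeds a single classifier, which is why the reported fused accuracy (98.36%) can exceed either feature family alone.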

Journal ArticleDOI
13 Apr 2021-Sensors
TL;DR: In this paper, a systematic review of wearable sensors and techniques used in real-time gait analysis, and their application to pathological gait was presented, and they found that heel strike and toe off are the most sought-after gait events.
Abstract: Gait analysis has traditionally been carried out in a laboratory environment using expensive equipment, but, recently, reliable, affordable, and wearable sensors have enabled integration into clinical applications as well as use during activities of daily living. Real-time gait analysis is key to the development of gait rehabilitation techniques and assistive devices such as neuroprostheses. This article presents a systematic review of wearable sensors and techniques used in real-time gait analysis, and their application to pathological gait. From four major scientific databases, we identified 1262 articles of which 113 were analyzed in full-text. We found that heel strike and toe off are the most sought-after gait events. Inertial measurement units (IMU) are the most widely used wearable sensors and the shank and foot are the preferred placements. Insole pressure sensors are the most common sensors for ground-truth validation for IMU-based gait detection. Rule-based techniques relying on threshold or peak detection are the most widely used gait detection method. The heterogeneity of evaluation criteria prevented quantitative performance comparison of all methods. Although most studies predicted that the proposed methods would work on pathological gait, less than one third were validated on such data. Clinical applications of gait detection algorithms were considered, and we recommend a combination of IMU and rule-based methods as an optimal solution.
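The threshold/peak-detection family of rule-based methods that the review found most common can be sketched as follows for heel-strike detection on a shank angular-velocity signal. The threshold value and the raw-sample input are illustrative; real pipelines add filtering, refractory periods, and per-subject calibration:

```python
def detect_heel_strikes(gyro, threshold=1.0):
    # Flag a heel strike at each local maximum of the (already
    # filtered) shank angular-velocity signal that exceeds `threshold`.
    # Returns sample indices of detected events.
    events = []
    for i in range(1, len(gyro) - 1):
        if (gyro[i] > threshold
                and gyro[i] >= gyro[i - 1]
                and gyro[i] > gyro[i + 1]):
            events.append(i)
    return events
```

Against insole pressure sensors as ground truth, such rule-based detectors are evaluated by how closely these indices match the true initial-contact samples.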

Journal ArticleDOI
26 May 2021-Sensors
TL;DR: In this article, two deep learning-based models, CNN512 and YOLOv3, were fused to classify diabetic retinopathy (DR) images and localize DR lesions.
Abstract: Diabetic retinopathy (DR) is a disease resulting from diabetes complications, causing non-reversible damage to retina blood vessels. DR is a leading cause of blindness if not detected early. The currently available DR treatments are limited to stopping or delaying the deterioration of sight, highlighting the importance of regular scanning using high-efficiency computer-based systems to diagnose cases early. The current work presents fully automatic diagnosis systems that exceed manual techniques to avoid misdiagnosis, reducing time, effort, and cost. The proposed system classifies DR images into five stages-no-DR, mild, moderate, severe, and proliferative DR-as well as localizing the affected lesions on the retina surface. The system comprises two deep learning-based models. The first model (CNN512) used the whole image as an input to the CNN model to classify it into one of the five DR stages. It achieved an accuracy of 88.6% and 84.1% on the DDR and the APTOS Kaggle 2019 public datasets, respectively, compared to the state-of-the-art results. Simultaneously, the second model used an adapted YOLOv3 model to detect and localize the DR lesions, achieving a 0.216 mAP in lesion localization on the DDR dataset, which improves the current state-of-the-art results. Finally, both of the proposed structures, CNN512 and YOLOv3, were fused to classify DR images and localize DR lesions, obtaining an accuracy of 89% with 89% sensitivity and 97.3% specificity, which exceeds the current state-of-the-art results.

Journal ArticleDOI
01 Jun 2021-Sensors
TL;DR: In this article, the authors discuss the potential impact of the pandemic on the adoption of the Internet of Things (IoT) in various broad sectors, namely healthcare, smart homes, smart buildings, smart cities, transportation and industrial IoT.
Abstract: COVID-19 has disrupted normal life and has enforced a substantial change in the policies, priorities and activities of individuals, organisations and governments. These changes are proving to be a catalyst for technology and innovation. In this paper, we discuss the pandemic’s potential impact on the adoption of the Internet of Things (IoT) in various broad sectors, namely healthcare, smart homes, smart buildings, smart cities, transportation and industrial IoT. Our perspective and forecast of this impact on IoT adoption is based on a thorough research literature review, a careful examination of reports from leading consulting firms and interactions with several industry experts. For each of these sectors, we also provide the details of notable IoT initiatives taken in the wake of COVID-19. We also highlight the challenges that need to be addressed and important research directions that will facilitate accelerated IoT adoption.