
Showing papers in "Sensors in 2020"


Journal ArticleDOI
11 Aug 2020-Sensors
TL;DR: This work evaluates the speed–accuracy tradeoff of three popular deep-learning-based face detectors on the WIDER Face and UFDD data sets on several CPUs and GPUs, and develops a regression model capable of estimating performance in terms of both processing time and accuracy.
Abstract: Face recognition is a valuable forensic tool for criminal investigators since it helps in identifying individuals in scenarios of criminal activity, such as fugitive tracking or child sexual abuse cases. It is, however, a very challenging task, as it must be able to handle low-quality images of real-world settings and fulfill real-time requirements. Deep learning approaches for face detection have proven to be very successful, but they require large computation power and processing time. In this work, we evaluate the speed-accuracy tradeoff of three popular deep-learning-based face detectors on the WIDER Face and UFDD data sets on several CPUs and GPUs. We also develop a regression model capable of estimating performance, both in terms of processing time and accuracy. We expect this to become a very useful tool for end users in forensic laboratories when estimating the performance of different face detection options. Experimental results showed that the best speed-accuracy tradeoff is achieved with images resized to 50% of the original size on GPUs and images resized to 25% of the original size on CPUs. Moreover, performance can be estimated using multiple linear regression models with a Mean Absolute Error (MAE) of 0.113, which is very promising for the forensic field.
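As background for the performance-estimation idea, a minimal sketch of a multiple linear regression estimator is shown below. The feature set (image scale, detector id, device throughput) and the data are illustrative assumptions, not the variables or measurements used in the paper.

```python
# Minimal sketch: estimating face-detector processing time with multiple
# linear regression (scikit-learn). Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical features: [image scale (0-1), detector id, device GFLOPS]
X = np.array([
    [1.00, 0, 110.0],
    [0.50, 0, 110.0],
    [0.25, 1, 480.0],
    [0.50, 2, 480.0],
])
y = np.array([0.92, 0.31, 0.05, 0.12])  # measured seconds per image (made up)

model = LinearRegression().fit(X, y)
pred = model.predict(X)
print("MAE:", mean_absolute_error(y, pred))
```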

267 citations


Journal ArticleDOI
14 Feb 2020-Sensors
TL;DR: A survey summarizing the current state of the art in smart irrigation systems, identifying the parameters monitored in irrigation systems regarding water quantity and quality, soil characteristics, and weather conditions.
Abstract: Water management is paramount in countries with water scarcity. This also affects agriculture, as a large amount of water is dedicated to that use. The possible consequences of global warming lead to the consideration of creating water adaptation measures to ensure the availability of water for food production and consumption. Thus, studies aimed at saving water usage in the irrigation process have increased over the years. Typical commercial sensors for agriculture irrigation systems are very expensive, making it impossible for smaller farmers to implement this type of system. However, manufacturers are currently offering low-cost sensors that can be connected to nodes to implement affordable systems for irrigation management and agriculture monitoring. Due to the recent advances in IoT and WSN technologies that can be applied in the development of these systems, we present a survey aimed at summarizing the current state of the art regarding smart irrigation systems. We determine the parameters that are monitored in irrigation systems regarding water quantity and quality, soil characteristics and weather conditions. We provide an overview of the most utilized nodes and wireless technologies. Lastly, we will discuss the challenges and the best practices for the implementation of sensor-based irrigation systems.

264 citations


Journal ArticleDOI
07 Jan 2020-Sensors
TL;DR: This survey reviews some well-known techniques for each approach, gives a taxonomy of their categories, and provides a solid discussion of future directions in terms of techniques to be used for face recognition.
Abstract: Over the past few decades, interest in theories and algorithms for face recognition has been growing rapidly. Video surveillance, criminal identification, building access control, and unmanned and autonomous vehicles are just a few examples of concrete applications that are gaining traction in industry. Various techniques are being developed, including local, holistic, and hybrid approaches, which provide a face image description using only a few face image features or the whole facial features. The main contribution of this survey is to review some well-known techniques for each approach and to give the taxonomy of their categories. In the paper, a detailed comparison between these techniques is presented by listing the advantages and the disadvantages of their schemes in terms of robustness, accuracy, complexity, and discrimination. One interesting aspect covered in the paper is the databases used for face recognition. An overview of the most commonly used databases, including those for supervised and unsupervised learning, is given. Numerical results of the most interesting techniques are given along with the context of experiments and challenges handled by these techniques. Finally, a solid discussion is given in the paper about future directions in terms of techniques to be used for face recognition.

257 citations


Journal ArticleDOI
21 Jun 2020-Sensors
TL;DR: The goal of this paper is to review current methods of energy harvesting, focusing on piezoelectric energy harvesting, and to present several circuits used to maximize the energy harvested.
Abstract: The goal of this paper is to review current methods of energy harvesting, while focusing on piezoelectric energy harvesting. The piezoelectric energy harvesting technique is based on the materials' property of generating an electric field when a mechanical force is applied. This phenomenon is known as the direct piezoelectric effect. Piezoelectric transducers can be of different shapes and materials, making them suitable for a multitude of applications. To optimize the use of piezoelectric devices in applications, a model is needed to observe the behavior in the time and frequency domain. In addition to different aspects of piezoelectric modeling, this paper also presents several circuits used to maximize the energy harvested.
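For reference, the direct and converse piezoelectric effects mentioned above are commonly written in the standard strain-charge (d-form) constitutive relations; this is textbook background, not necessarily the exact model used in the paper:

```latex
% Strain-charge form of the piezoelectric constitutive equations:
% S: strain, T: stress, E: electric field, D: electric displacement,
% s^E: compliance at constant field, d: piezoelectric coefficients,
% eps^T: permittivity at constant stress.
\begin{aligned}
S &= s^{E}\,T + d^{\mathsf{t}}\,E \\
D &= d\,T + \varepsilon^{T}\,E
\end{aligned}
```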

244 citations


Journal ArticleDOI
07 Sep 2020-Sensors
TL;DR: The history of how the 3D CNN was developed from its machine learning roots is traced, a brief mathematical description of 3D CNNs is given, and the preprocessing steps required for medical images before feeding them to 3D CNNs are described.
Abstract: The rapid advancements in machine learning, graphics processing technologies and the availability of medical imaging data have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, provide a brief mathematical description of 3D CNNs, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical imaging analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
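To make the 3D CNN idea concrete, here is a minimal sketch of a volumetric classifier in tf.keras; the input shape, layer widths, and two-class setup are illustrative assumptions, not an architecture from the paper.

```python
# Minimal sketch of a 3D CNN for volumetric medical images (e.g. CT patches).
# Shapes and layer sizes are illustrative, not taken from the reviewed work.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_3d_cnn(input_shape=(64, 64, 64, 1), num_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv3D(16, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=2),
        layers.Conv3D(32, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling3D(pool_size=2),
        layers.GlobalAveragePooling3D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_3d_cnn()
model.summary()
```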

238 citations


Journal ArticleDOI
13 May 2020-Sensors
TL;DR: The procedures and applications of vibration-based and vision-based monitoring are discussed, along with some of the recent technologies used for SHM, such as sensors and unmanned aerial vehicles (UAVs).
Abstract: Data-driven methods in structural health monitoring (SHM) are gaining popularity due to recent technological advancements in sensors, as well as high-speed internet and cloud-based computation. Since the introduction of deep learning (DL) in civil engineering, particularly in SHM, this emerging and promising tool has attracted significant attention among researchers. The main goal of this paper is to review the latest publications in SHM using emerging DL-based methods and provide readers with an overall understanding of various SHM applications. After a brief introduction, an overview of various DL methods (e.g., deep neural networks, transfer learning, etc.) is presented. The procedures and applications of vibration-based and vision-based monitoring are discussed, along with some of the recent technologies used for SHM, such as sensors and unmanned aerial vehicles (UAVs). The review concludes with prospects and potential limitations of DL-based methods in SHM applications.

232 citations


Journal ArticleDOI
12 May 2020-Sensors
TL;DR: This paper presents a comprehensive overview of the key enabling technologies required for 5G and 6G networks, highlighting massive MIMO systems, and discusses all the fundamental challenges related to pilot contamination, channel estimation, precoding, user scheduling, energy efficiency, and signal detection.
Abstract: The global bandwidth shortage in the wireless communication sector has motivated the study and exploration of the wireless access technology known as massive Multiple-Input Multiple-Output (MIMO). Massive MIMO is one of the key enabling technologies for next-generation networks, which groups together antennas at both the transmitter and the receiver to provide high spectral and energy efficiency using relatively simple processing. Obtaining a better understanding of the massive MIMO system to overcome the fundamental issues of this technology is vital for the successful deployment of 5G—and beyond—networks to realize various applications of the intelligent sensing system. In this paper, we present a comprehensive overview of the key enabling technologies required for 5G and 6G networks, highlighting massive MIMO systems. We discuss all the fundamental challenges related to pilot contamination, channel estimation, precoding, user scheduling, energy efficiency, and signal detection in a massive MIMO system and discuss some state-of-the-art mitigation techniques. We outline recent trends such as terahertz communication, ultra-massive MIMO (UM-MIMO), visible light communication (VLC), machine learning, and deep learning for massive MIMO systems. Additionally, we discuss crucial open research issues that direct future research in massive MIMO systems for 5G and beyond networks.
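As a small illustration of the precoding challenge mentioned above, the sketch below implements zero-forcing precoding, one of the simple linear schemes commonly discussed in massive MIMO surveys. It assumes perfect channel knowledge and uses random channels; it is not a method or result from this paper.

```python
# Zero-forcing (ZF) precoding sketch for a massive MIMO downlink:
# W = H^H (H H^H)^(-1), normalized to a total power constraint.
# Antenna/user counts and the random channel are illustrative assumptions.
import numpy as np

M, K = 64, 8                      # base-station antennas, single-antenna users
H = (np.random.randn(K, M) + 1j * np.random.randn(K, M)) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # ZF precoder (M x K)
W /= np.linalg.norm(W, "fro")                    # unit total transmit power

s = (np.random.randn(K) + 1j * np.random.randn(K)) / np.sqrt(2)  # user symbols
x = W @ s                                        # precoded transmit signal
y = H @ x                                        # noise-free received signal
print(np.round(np.abs(y / s), 3))                # equal effective per-user gains
```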

228 citations


Journal ArticleDOI
21 Jan 2020-Sensors
TL;DR: This paper covers several classes of sensors, using contactless methods as well as contact and skin-penetrating electrodes, for human emotion detection and the measurement of emotion intensity, and proposes a classification of these sensors.
Abstract: Automated emotion recognition (AEE) is an important issue in various fields of activity that use human emotional reactions as a signal for marketing, technical equipment, or human–robot interaction. This paper analyzes scientific research and technical papers for sensor use analysis, among various methods implemented or researched. This paper covers several classes of sensors, using contactless methods as well as contact and skin-penetrating electrodes, for human emotion detection and the measurement of emotion intensity. The results of the analysis performed in this paper present applicable methods for each type of emotion and its intensity and propose their classification. The classification of emotion sensors is presented to reveal areas of application and expected outcomes from each method, as well as their limitations. This paper should be relevant for researchers using human emotion evaluation and analysis, when there is a need to choose a proper method for their purposes or to find alternative solutions. Based on the analyzed human emotion recognition sensors and methods, we developed some practical applications for humanizing the Internet of Things (IoT) and affective computing systems.

227 citations


Journal ArticleDOI
31 May 2020-Sensors
TL;DR: The authors have critically studied how the advances in sensor technology, IoT and machine learning methods make environment monitoring a truly smart monitoring system.
Abstract: Air quality, water pollution, and radiation pollution are major factors that pose genuine challenges in the environment. Suitable monitoring is necessary so that the world can achieve sustainable growth, by maintaining a healthy society. In recent years, environment monitoring has turned into a smart environment monitoring (SEM) system, with the advances in the internet of things (IoT) and the development of modern sensors. Under this scenario, the present manuscript aims to accomplish a critical review of noteworthy contributions and research studies on SEM, involving the monitoring of air quality, water quality, radiation pollution, and agriculture systems. The review is divided on the basis of the purposes where SEM methods are applied, and then each purpose is further analyzed in terms of the sensors used, machine learning techniques involved, and classification methods used. The extensive review is followed by a detailed analysis that suggests major recommendations and impacts of SEM research on the basis of the discussion results and the research trends analyzed. The authors critically study how advances in sensor technology, IoT, and machine learning methods make environment monitoring a truly smart monitoring system. Finally, a framework of robust machine learning methods, denoising methods, and the development of suitable standards for wireless sensor networks (WSNs) is suggested.

220 citations


Journal ArticleDOI
27 Aug 2020-Sensors
TL;DR: It is shown that doping leads not only to a decrease in the concentration of manganese in model solutions, but also to an increase in the efficiency of adsorption from 11% to 75%.
Abstract: The main purpose of this work is to study the effectiveness of using FeCeOx nanocomposites doped with Nb2O5 for the purification of aqueous solutions from manganese. X-ray diffraction, energy-dispersive analysis, scanning electron microscopy, vibrational magnetic spectroscopy, and Mössbauer spectroscopy were used as research methods. It is shown that an increase in the dopant concentration leads to the transformation of the shape of nanoparticles from spherical to cubic and rhombic, followed by an increase in the size of the nanoparticles. The spherical shape of the nanoparticles is characteristic of a structure consisting of a mixture of two phases of hematite (Fe2O3) and cerium oxide CeO2. The cubic shape of nanoparticles is typical for spinel-type FeNbO4 structures, the phase contribution of which increases with increasing dopant concentration. It is shown that doping leads not only to a decrease in the concentration of manganese in model solutions, but also to an increase in the efficiency of adsorption from 11% to 75%.

211 citations


Journal ArticleDOI
10 Apr 2020-Sensors
TL;DR: A dense architecture is incorporated into YOLOv3 to facilitate the reuse of features and help to learn a more compact and accurate model for tomato detection, and it had the best detection performance.
Abstract: Automatic fruit detection is a very important capability for harvesting robots. However, complicated environmental conditions, such as illumination variation, branch and leaf occlusion, as well as tomato overlap, have made fruit detection very challenging. In this study, an improved tomato detection model called YOLO-Tomato is proposed for dealing with these problems, based on YOLOv3. A dense architecture is incorporated into YOLOv3 to facilitate the reuse of features and help to learn a more compact and accurate model. Moreover, the model replaces the traditional rectangular bounding box (R-Bbox) with a circular bounding box (C-Bbox) for tomato localization. The new bounding boxes can then match the tomatoes more precisely, and thus improve the Intersection-over-Union (IoU) calculation for the Non-Maximum Suppression (NMS). They also reduce the number of predicted coordinates. An ablation study demonstrated the efficacy of these modifications. YOLO-Tomato was compared to several state-of-the-art detection methods and it had the best detection performance.
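To illustrate the circular-bounding-box idea, the sketch below computes IoU between two circles using standard circle-intersection geometry, the kind of quantity a circle-based NMS would compare; it is not the authors' implementation.

```python
# Sketch of IoU between two circular bounding boxes (x, y, r), using the
# standard lens-area formula for circle-circle intersection.
import math

def circle_iou(c1, c2):
    x1, y1, r1 = c1
    x2, y2, r2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    a1, a2 = math.pi * r1 ** 2, math.pi * r2 ** 2
    if d >= r1 + r2:                     # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):              # one circle fully inside the other
        inter = min(a1, a2)
    else:                                # partial overlap (lens area)
        alpha = math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
        beta = math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
        inter = (r1**2 * (alpha - math.sin(2 * alpha) / 2)
                 + r2**2 * (beta - math.sin(2 * beta) / 2))
    return inter / (a1 + a2 - inter)

print(circle_iou((0, 0, 10), (5, 0, 10)))   # two overlapping example circles
```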

Journal ArticleDOI
29 Apr 2020-Sensors
TL;DR: A detailed review of models, architectures, and requirements for solutions that implement edge machine learning on Internet of Things devices is presented, with the main goal of defining the state of the art and envisioning development requirements.
Abstract: In a few years, the world will be populated by billions of connected devices that will be placed in our homes, cities, vehicles, and industries. Devices with limited resources will interact with the surrounding environment and users. Many of these devices will be based on machine learning models to decode meaning and behavior behind sensors’ data, to implement accurate predictions and make decisions. The bottleneck will be the high number of connected things that could congest the network. Hence the need arises to incorporate intelligence on end devices using machine learning algorithms. Deploying machine learning on such edge devices reduces network congestion by allowing computations to be performed close to the data sources. The aim of this work is to provide a review of the main techniques that guarantee the execution of machine learning models on low-performance hardware in the Internet of Things paradigm, paving the way to the Internet of Conscious Things. In this work, a detailed review of models, architectures, and requirements for solutions that implement edge machine learning on Internet of Things devices is presented, with the main goal of defining the state of the art and envisioning development requirements. Furthermore, an example of an edge machine learning implementation on a microcontroller is provided, commonly regarded as the machine learning “Hello World”.
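One common route to running models on microcontroller-class devices is post-training quantization to an 8-bit TensorFlow Lite model. The sketch below shows that route on a toy sine-fitting task (the usual "Hello World" style example); the model, data, and settings are illustrative assumptions, not the example used in the paper.

```python
# Minimal sketch: train a tiny Keras model and apply post-training
# quantization with the TensorFlow Lite converter, producing a small .tflite
# file of the kind deployed on microcontrollers. Model and data are toy.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
x = np.random.uniform(0, 2 * np.pi, (1000, 1)).astype(np.float32)
model.fit(x, np.sin(x), epochs=5, verbose=0)     # toy "sine" task

def representative_data():                        # calibration samples
    for sample in x[:100]:
        yield [sample.reshape(1, 1)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()
open("model_int8.tflite", "wb").write(tflite_model)
print(len(tflite_model), "bytes")
```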

Journal ArticleDOI
31 May 2020-Sensors
TL;DR: The proof-of-concept development of a biosensor able to detect the SARS-CoV-2 S1 spike protein expressed on the surface of the virus is reported, offering a possible solution for the timely monitoring and eventual control of the global coronavirus pandemic.
Abstract: One of the key challenges of the recent COVID-19 pandemic is the ability to accurately estimate the number of infected individuals, particularly asymptomatic and/or early-stage patients. We herewith report the proof-of-concept development of a biosensor able to detect the SARS-CoV-2 S1 spike protein expressed on the surface of the virus. The biosensor is based on membrane-engineered mammalian cells bearing the human chimeric spike S1 antibody. We demonstrate that the attachment of the protein to the membrane-bound antibodies resulted in a selective and considerable change in the cellular bioelectric properties measured by means of a Bioelectric Recognition Assay. The novel biosensor provided results in an ultra-rapid manner (3 min), with a detection limit of 1 fg/mL and a semi-linear range of response between 10 fg/mL and 1 μg/mL. In addition, no cross-reactivity was observed against the SARS-CoV-2 nucleocapsid protein. Furthermore, the biosensor was configured as a ready-to-use platform, including a portable read-out device operated via smartphone/tablet. In this way, we demonstrate that the novel biosensor can be potentially applied for the mass screening of SARS-CoV-2 surface antigens without prior sample processing, therefore offering a possible solution for the timely monitoring and eventual control of the global coronavirus pandemic.

Journal ArticleDOI
07 Apr 2020-Sensors
TL;DR: A complete review of the state-of-the-art of SLAM research, focusing on solutions using vision, LiDAR, and a sensor fusion of both modalities is given.
Abstract: Autonomous navigation requires both a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a very well-suited solution. SLAM is used for many applications including mobile robotics, self-driving cars, unmanned aerial vehicles, or autonomous underwater vehicles. In these domains, both visual and visual-IMU SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to have remained much the same as they were ten or twenty years ago. Moreover, few research works focus on vision-LiDAR approaches, whereas such a fusion would have many advantages. Indeed, hybridized solutions offer improvements in the performance of SLAM, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey on visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state-of-the-art of SLAM research, focusing on solutions using vision, LiDAR, and a sensor fusion of both modalities.

Journal ArticleDOI
29 Jul 2020-Sensors
TL;DR: This article provides a comprehensive review of the state-of-the-art methods utilized to improve the performance of AV systems in short-range or local vehicle environments and focuses on recent studies that use deep learning sensor fusion algorithms for perception, localization, and mapping.
Abstract: Autonomous vehicles (AV) are expected to improve, reshape, and revolutionize the future of ground transportation. It is anticipated that ordinary vehicles will one day be replaced with smart vehicles that are able to make decisions and perform driving tasks on their own. In order to achieve this objective, self-driving vehicles are equipped with sensors that are used to sense and perceive both their surroundings and the faraway environment, using further advances in communication technologies, such as 5G. In the meantime, local perception, as with human beings, will continue to be an effective means for controlling the vehicle at short range. On the other hand, extended perception allows for anticipation of distant events and produces smarter behavior to guide the vehicle to its destination while respecting a set of criteria (safety, energy management, traffic optimization, comfort). In spite of the remarkable advancements of sensor technologies in terms of their effectiveness and applicability for AV systems in recent years, sensors can still fail because of noise, ambient conditions, or manufacturing defects, among other factors; hence, it is not advisable to rely on a single sensor for any of the autonomous driving tasks. The practical solution is to incorporate multiple competitive and complementary sensors that work synergistically to overcome their individual shortcomings. This article provides a comprehensive review of the state-of-the-art methods utilized to improve the performance of AV systems in short-range or local vehicle environments. Specifically, it focuses on recent studies that use deep learning sensor fusion algorithms for perception, localization, and mapping. The article concludes by highlighting some of the current trends and possible future research directions.

Journal ArticleDOI
11 Nov 2020-Sensors
TL;DR: An overview of the optical remote sensing technologies used in early fire warning systems is presented and an extensive survey on both flame and smoke detection algorithms employed by each technology is provided.
Abstract: The environmental challenges the world faces nowadays have never been greater or more complex. Global areas covered by forests and urban woodlands are threatened by natural disasters that have increased dramatically during the last decades, in terms of both frequency and magnitude. Large-scale forest fires are one of the most harmful natural hazards affecting climate change and life around the world. Thus, to minimize their impacts on people and nature, the adoption of well-planned and closely coordinated effective prevention, early warning, and response approaches is necessary. This paper presents an overview of the optical remote sensing technologies used in early fire warning systems and provides an extensive survey on both flame and smoke detection algorithms employed by each technology. Three types of systems are identified, namely terrestrial, airborne, and spaceborne-based systems, while various models aiming to detect fire occurrences with high accuracy in challenging environments are studied. Finally, the strengths and weaknesses of fire detection systems based on optical remote sensing are discussed aiming to contribute to future research projects for the development of early warning fire systems.

Journal ArticleDOI
06 Mar 2020-Sensors
TL;DR: This paper reviews automated visual-based defect detection approaches applicable to various materials, such as metals, ceramics, and textiles, and describes artificial visual processing techniques aimed at understanding the captured scene in a mathematical/logical way.
Abstract: This paper reviews automated visual-based defect detection approaches applicable to various materials, such as metals, ceramics and textiles. In the first part of the paper, we present a general taxonomy of the different defects that fall into two classes: visible (e.g., scratches, shape error, etc.) and palpable (e.g., crack, bump, etc.) defects. Then, we describe artificial visual processing techniques that are aimed at understanding the captured scene in a mathematical/logical way. We continue with a survey of textural defect detection based on statistical, structural and other approaches. Finally, we report the state of the art for approaching the detection and classification of defects through supervised and non-supervised classifiers and deep learning.

Journal ArticleDOI
15 Jan 2020-Sensors
TL;DR: With many researchers from the social sciences, engineering, medicine, and other areas recently working with EDA, it is timely to summarize and review the recent developments and provide an updated and synthesized framework for all researchers interested in incorporating EDA into their research.
Abstract: The electrodermal activity (EDA) signal is an electrical manifestation of the sympathetic innervation of the sweat glands. EDA has a history in psychophysiological (including emotional or cognitive stress) research since 1879, but it was not until recent years that researchers began using EDA for pathophysiological applications like the assessment of fatigue, pain, sleepiness, exercise recovery, diagnosis of epilepsy, neuropathies, depression, and so forth. The advent of new devices and applications for EDA has increased the development of novel signal processing techniques, creating a growing pool of measures derived mathematically from the EDA. For many years, simply computing the mean of EDA values over a period was used to assess arousal. Much later, researchers found that EDA contains information not only in the slow changes (tonic component) that the mean value represents, but also in the rapid or phasic changes of the signal. The techniques that have ensued have intended to provide a more sophisticated analysis of EDA, beyond the traditional tonic/phasic decomposition of the signal. With many researchers from the social sciences, engineering, medicine, and other areas recently working with EDA, it is timely to summarize and review the recent developments and provide an updated and synthesized framework for all researchers interested in incorporating EDA into their research.
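As a concrete baseline for the tonic/phasic decomposition mentioned above, a very simple approach estimates the tonic level with a slow moving median and treats the residual as the phasic signal. The sketch below illustrates only that idea; the sampling rate, window, and synthetic signal are assumptions, and it is not one of the specific methods reviewed in the paper.

```python
# Minimal sketch of a simple tonic/phasic EDA decomposition: a slow moving
# median approximates the tonic (baseline) level; the residual captures
# phasic responses. Parameters and the synthetic signal are illustrative.
import numpy as np
from scipy.signal import medfilt

fs = 4                                   # Hz, a typical low EDA sampling rate
t = np.arange(0, 120, 1 / fs)            # two minutes of signal
eda = 2.0 + 0.002 * t                    # slowly drifting tonic level (µS)
for onset in (20, 55, 90):               # add a few synthetic phasic responses
    eda += 0.3 * np.exp(-np.maximum(t - onset, 0) / 5) * (t >= onset)

window = int(15 * fs) | 1                # ~15 s odd-length median window
tonic = medfilt(eda, kernel_size=window)
phasic = eda - tonic

print("mean tonic level: %.2f µS" % tonic.mean())
print("largest phasic peak: %.2f µS" % phasic.max())
```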

Journal ArticleDOI
21 May 2020-Sensors
TL;DR: TENG and PENG, as effective mechanical-to-electricity energy conversion technologies, have been used not only as power sources but also as active sensing devices in many application fields, including physical sensors, wearable devices, biomedical and health care, human–machine interface, chemical and environmental monitoring, smart traffic, smart cities, robotics, and fiber and fabric sensors.
Abstract: Sensor networks are essential for the development of the Internet of Things and the smart city. A general sensor, especially a mobile sensor, has to be driven by a power unit. When considering the high mobility, wide distribution and wireless operation of the sensors, their sustainable operation remains a critical challenge owing to the limited lifetime of an energy storage unit. In 2006, Wang proposed the concept of self-powered sensors/system, which harvests ambient energy to continuously drive a sensor without the use of an external power source. Based on the piezoelectric nanogenerator (PENG) and triboelectric nanogenerator (TENG), extensive studies have focused on self-powered sensors. TENG and PENG, as effective mechanical-to-electricity energy conversion technologies, have been used not only as power sources but also as active sensing devices in many application fields, including physical sensors, wearable devices, biomedical and health care, human-machine interface, chemical and environmental monitoring, smart traffic, smart cities, robotics, and fiber and fabric sensors. In this review, we systematically summarize the progress made by TENG and PENG in those application fields. A perspective will be given about the future of self-powered sensors.

Journal ArticleDOI
28 Apr 2020-Sensors
TL;DR: Two emerging trends in IoT are reviewed: the combination of RFID and WSNs in order to exploit their advantages and complement their limitations, and wearable sensors, which enable new promising IoT applications.
Abstract: Radio frequency identification (RFID) and wireless sensor networks (WSNs) are two fundamental pillars that enable the Internet of Things (IoT). RFID systems are able to identify and track devices, whilst WSNs cooperate to gather and provide information from interconnected sensors. This involves challenges, for example, in transforming RFID systems with identification capabilities into sensing and computational platforms, as well as considering them as architectures of wirelessly connected sensing tags. This, together with the latest advances in WSNs and with the integration of both technologies, has resulted in the opportunity to develop novel IoT applications. This paper presents a review of these two technologies and the obstacles and challenges that need to be overcome. Some of these challenges are the efficiency of energy harvesting, communication interference, fault tolerance, higher capacities for handling data processing, cost feasibility, and an appropriate integration of these factors. Additionally, two emerging trends in IoT are reviewed: the combination of RFID and WSNs in order to exploit their advantages and complement their limitations, and wearable sensors, which enable new promising IoT applications.

Journal ArticleDOI
13 Apr 2020-Sensors
TL;DR: A novel platform for monitoring patient vital signs using blockchain-based smart contracts is proposed, built on Hyperledger Fabric, an enterprise-distributed ledger framework for developing blockchain-based applications; it provides several benefits to patients, such as an extensive, immutable history log and global access to medical information from anywhere at any time.
Abstract: Over the past several years, many healthcare applications have been developed to enhance the healthcare industry. Recent advancements in information technology and blockchain technology have revolutionized electronic healthcare research and industry. The innovation of miniaturized healthcare sensors for monitoring patient vital signs has improved and secured the human healthcare system. The increase in portable health devices has enhanced the quality of health-monitoring status both at an activity/fitness level for self-health tracking and at a medical level, providing more data to clinicians with potential for earlier diagnosis and guidance of treatment. When sharing personal medical information, data security and comfort are essential requirements for interaction with and collection of electronic medical records. However, it is hard for current systems to meet these requirements because they have inconsistent security policies and access control structures. The new solutions should be directed towards improving data access, and should be managed by the government in terms of privacy and security requirements to ensure the reliability of data for medical purposes. Blockchain paves the way for a revolution in the traditional pharmaceutical industry and benefits from unique features such as privacy and transparency of data. In this paper, we propose a novel platform for monitoring patient vital signs using smart contracts based on blockchain. The proposed system is designed and developed using Hyperledger Fabric, which is an enterprise-distributed ledger framework for developing blockchain-based applications. This approach provides several benefits to the patients, such as an extensive, immutable history log, and global access to medical information from anywhere at any time. The Libelium e-Health toolkit is used to acquire physiological data. The performance of the designed and developed system is evaluated in terms of transactions per second, transaction latency, and resource utilization using a standard benchmark tool known as Hyperledger Caliper. It is found that the proposed system outperforms the traditional health care system for monitoring patient data.

Journal ArticleDOI
04 Dec 2020-Sensors
TL;DR: The research progress and major challenges of non-invasive blood glucose detection technology in recent years are reviewed, and the technology is divided into three categories based on the detection principle: optics, microwave, and electrochemistry.
Abstract: In recent years, with the rise of global diabetes, a growing number of subjects are suffering from pain and infections caused by the invasive nature of mainstream commercial glucose meters. Non-invasive blood glucose monitoring technology has become an international research topic and a new method which could bring relief to a vast number of patients. This paper reviews the research progress and major challenges of non-invasive blood glucose detection technology in recent years, and divides it into three categories based on the detection principle: optics, microwave and electrochemistry. The technology covers medical, materials, optics, electromagnetic wave, chemistry, biology, computational science and other related fields. The advantages and limitations of non-invasive and invasive technologies, as well as of electrochemical and optical approaches among non-invasive methods, are compared side by side in this paper. In addition, the current research achievements and limitations of non-invasive electrochemical glucose sensing systems in continuous monitoring, point-of-care and clinical settings are highlighted, so as to discuss the development tendency in future research. With the rapid development of wearable technology and transdermal biosensors, non-invasive blood glucose monitoring will become more efficient, affordable, robust, and more competitive on the market.

Journal ArticleDOI
15 May 2020-Sensors
TL;DR: The present work proposes a cervical cancer prediction model (CCPM) that offers early prediction of cervical cancer using risk factors as inputs and employs random forest (RF) as a classifier.
Abstract: Globally, cervical cancer remains the foremost prevailing cancer in females. Hence, it is necessary to distinguish the importance of risk factors of cervical cancer to classify potential patients. The present work proposes a cervical cancer prediction model (CCPM) that offers early prediction of cervical cancer using risk factors as inputs. The CCPM first removes outliers by using outlier detection methods such as density-based spatial clustering of applications with noise (DBSCAN) and isolation forest (iForest), then increases the number of cases in the dataset in a balanced way, for example, through the synthetic minority over-sampling technique (SMOTE) and SMOTE with Tomek link (SMOTETomek). Finally, it employs random forest (RF) as a classifier. Thus, CCPM relies on four scenarios: (1) DBSCAN + SMOTETomek + RF, (2) DBSCAN + SMOTE + RF, (3) iForest + SMOTETomek + RF, and (4) iForest + SMOTE + RF. A dataset of 858 potential patients was used to validate the performance of the proposed method. We found that combinations of iForest with SMOTE and iForest with SMOTETomek provided better performances than those of DBSCAN with SMOTE and DBSCAN with SMOTETomek. We also observed that RF performed the best among several popular machine learning classifiers. Furthermore, the proposed CCPM showed better accuracy than previously proposed methods for forecasting cervical cancer. In addition, a mobile application that can collect cervical cancer risk factor data and provide results from the CCPM has been developed for instant and proper action at the initial stage of cervical cancer.
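One of the four scenarios (iForest outlier removal, SMOTE oversampling, random forest classification) could be sketched with scikit-learn and imbalanced-learn roughly as below. The synthetic data and hyperparameters are placeholders, not the paper's dataset or settings.

```python
# Sketch of a CCPM-style scenario: iForest outlier removal + SMOTE
# oversampling + random forest. Data and hyperparameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=858, n_features=20, weights=[0.9, 0.1],
                           random_state=0)   # stand-in for the risk-factor data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

mask = IsolationForest(random_state=0).fit_predict(X_tr) == 1   # drop outliers
X_tr, y_tr = X_tr[mask], y_tr[mask]

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # rebalance

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print("test accuracy: %.3f" % accuracy_score(y_te, clf.predict(X_te)))
```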

Journal ArticleDOI
12 Jun 2020-Sensors
TL;DR: A CNN architecture is proposed in order to achieve accuracy even better than that of ensemble architectures, along with reduced operational complexity and cost.
Abstract: Traditional systems of handwriting recognition have relied on handcrafted features and a large amount of prior knowledge. Training an optical character recognition (OCR) system based on these prerequisites is a challenging task. Research in the handwriting recognition field is focused on deep learning techniques and has achieved breakthrough performance in the last few years. Still, the rapid growth in the amount of handwritten data and the availability of massive processing power demand improvement in recognition accuracy and deserve further investigation. Convolutional neural networks (CNNs) are very effective in perceiving the structure of handwritten characters/words in ways that help in the automatic extraction of distinct features and make CNNs the most suitable approach for solving handwriting recognition problems. Our aim in the proposed work is to explore the various design options like number of layers, stride size, receptive field, kernel size, padding and dilation for CNN-based handwritten digit recognition. In addition, we aim to evaluate various SGD optimization algorithms in improving the performance of handwritten digit recognition. A network's recognition accuracy increases by incorporating ensemble architectures. Here, our objective is to achieve comparable accuracy by using a pure CNN architecture without ensemble architecture, as ensemble architectures introduce increased computational cost and high testing complexity. Thus, a CNN architecture is proposed in order to achieve accuracy even better than that of ensemble architectures, along with reduced operational complexity and cost. Moreover, we also present an appropriate combination of learning parameters in designing a CNN that leads us to reach a new absolute record in classifying MNIST handwritten digits. We carried out extensive experiments and achieved a recognition accuracy of 99.87% for the MNIST dataset.
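A minimal pure-CNN baseline for MNIST with SGD could look like the sketch below; the layer counts, kernel sizes, and optimizer settings are illustrative choices, not the exact architecture or the configuration that reached 99.87% in the paper.

```python
# Minimal pure-CNN sketch for MNIST digit classification with tf.keras and SGD.
# Architecture and hyperparameters are illustrative, not the paper's design.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr = x_tr[..., None].astype("float32") / 255.0
x_te = x_te[..., None].astype("float32") / 255.0

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=5, batch_size=128, validation_split=0.1)
print("test accuracy:", model.evaluate(x_te, y_te, verbose=0)[1])
```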

Journal ArticleDOI
14 May 2020-Sensors
TL;DR: A new facemask-wearing condition identification method by combining image super-resolution and classification networks (SRCNet), which quantifies a three-category classification problem based on unconstrained 2D facial images, thus having potential applications in epidemic prevention involving COVID-19.
Abstract: The rapid worldwide spread of Coronavirus Disease 2019 (COVID-19) has resulted in a global pandemic. Correct facemask wearing is valuable for infectious disease control, but the effectiveness of facemasks has been diminished, mostly due to improper wearing. However, there have not been any published reports on the automatic identification of facemask-wearing conditions. In this study, we develop a new facemask-wearing condition identification method by combining image super-resolution and classification networks (SRCNet), which quantifies a three-category classification problem based on unconstrained 2D facial images. The proposed algorithm contains four main steps: Image pre-processing, facial detection and cropping, image super-resolution, and facemask-wearing condition identification. Our method was trained and evaluated on the public dataset Medical Masks Dataset containing 3835 images with 671 images of no facemask-wearing, 134 images of incorrect facemask-wearing, and 3030 images of correct facemask-wearing. Finally, the proposed SRCNet achieved 98.70% accuracy and outperformed traditional end-to-end image classification methods using deep learning without image super-resolution by over 1.5% in kappa. Our findings indicate that the proposed SRCNet can achieve high-accuracy identification of facemask-wearing conditions, thus having potential applications in epidemic prevention involving COVID-19.

Journal ArticleDOI
07 Apr 2020-Sensors
TL;DR: An IoT-based WSN framework is proposed as an application for smart agriculture, comprising different design levels, and it is shown that the proposed framework significantly enhances communication performance as well as energy consumption and routing overheads for smart agriculture, compared to other solutions.
Abstract: Wireless sensor networks (WSNs) have attracted research and development interest in numerous fields, like communication, agriculture, industry, smart health, monitoring, and surveillance. In the area of agriculture production, IoT-based WSNs have been used to observe crop conditions and automate precision agriculture using various sensors. These sensors are deployed in the agricultural environment to improve production yields through intelligent farming decisions and to obtain information regarding crops, plants, temperature measurement, humidity, and irrigation systems. However, sensors have limited resources concerning processing, energy, transmitting, and memory capabilities that can negatively impact agriculture production. Besides efficiency, the protection and security of these IoT-based agricultural sensors against malicious adversaries are also important. In this article, we propose an IoT-based WSN framework as an application for smart agriculture comprising different design levels. Firstly, agricultural sensors capture relevant data and determine a set of cluster heads based on a multi-criteria decision function. Additionally, the strength of the signals on the transmission links is measured using the signal-to-noise ratio (SNR) to achieve consistent and efficient data transmissions. Secondly, security is provided for data transmission from agricultural sensors towards base stations (BS) using the recurrence of the linear congruential generator. The simulation results show that the proposed framework significantly enhanced communication performance, by an average of 13.5% in network throughput, 38.5% in the packet drop ratio, 13.5% in network latency, 16% in energy consumption, and 26% in routing overheads for smart agriculture, compared to other solutions.
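To illustrate what a multi-criteria cluster-head decision function can look like in general, the sketch below scores nodes by a weighted combination of residual energy, link SNR, and distance to the base station. The criteria, weights, and data are assumptions chosen for illustration; the paper's actual decision function may use different criteria and weighting.

```python
# Illustrative multi-criteria cluster-head selection for a WSN: nodes are
# scored by normalized residual energy, link SNR, and (inverted) distance
# to the base station. Criteria, weights, and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_clusters = 50, 5

energy = rng.uniform(0.2, 1.0, n_nodes)        # residual energy (normalized)
snr_db = rng.uniform(5, 30, n_nodes)           # measured link SNR in dB
dist = rng.uniform(10, 100, n_nodes)           # distance to base station (m)

norm = lambda v: (v - v.min()) / (v.max() - v.min())   # scale to [0, 1]
score = 0.5 * norm(energy) + 0.3 * norm(snr_db) + 0.2 * (1 - norm(dist))

cluster_heads = np.argsort(score)[-n_clusters:]        # top-scoring nodes
print("selected cluster heads:", sorted(cluster_heads.tolist()))
```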

Journal ArticleDOI
16 May 2020-Sensors
TL;DR: The current state-of-the-art, as well as trends and effective practices for the future of effective, accessible, and acceptable eldercare with technology are outlined.
Abstract: The increasingly ageing global population is causing an upsurge in ailments related to old age, primarily dementia and Alzheimer’s disease, frailty, Parkinson’s, and cardiovascular disease, but also a general need for eldercare as well as for active and healthy ageing. In turn, there is a need for constant monitoring and assistance, intervention, and support, causing a considerable financial and human burden on individuals and their caregivers. Interconnected sensing technology, such as IoT wearables and devices, presents a promising solution for objective, reliable, and remote monitoring, assessment, and support through ambient assisted living. This paper presents a review of such solutions, including both earlier review studies and individual case studies, which have evolved rapidly in the last decade. In doing so, it examines and categorizes them according to common aspects of interest such as health focus, from specific ailments to general eldercare; IoT technologies, from wearables to smart home sensors; aims, from assessment to fall detection and indoor positioning to intervention; and experimental evaluation, covering participants, duration, and outcome measures, from acceptability to accuracy. Statistics drawn from this categorization aim to outline the current state of the art, as well as trends and effective practices for the future of effective, accessible, and acceptable eldercare with technology.

Journal ArticleDOI
16 Apr 2020-Sensors
TL;DR: The paper is aimed at those who want to undertake studies on UOWC and offers an overview on the current technologies and those potentially available soon, especially on the use of single-photon receivers.
Abstract: Underwater Optical Wireless Communication (UOWC) is not a new idea, but it has recently attracted renewed interest since seawater presents a reduced absorption window for blue-green light. Due to its higher bandwidth, underwater optical wireless communication can support higher data rates at low latency levels compared to acoustic and RF counterparts. The paper is aimed at those who want to undertake studies on UOWC. It offers an overview of the current technologies and those potentially available soon. Particular attention has been given to offering a recent bibliography, especially on the use of single-photon receivers.
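As general background to the absorption window mentioned above, received optical power along an underwater path is commonly modeled with Beer-Lambert attenuation; this is standard textbook material, not a formula taken from the paper:

```latex
% Beer-Lambert attenuation of optical power over an underwater path of
% length d, with wavelength-dependent extinction coefficient
% c(lambda) = a(lambda) + b(lambda) (absorption plus scattering).
P_r(d) = P_t \, \exp\!\bigl(-c(\lambda)\, d\bigr), \qquad
c(\lambda) = a(\lambda) + b(\lambda)
```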

Journal ArticleDOI
29 Jul 2020-Sensors
TL;DR: The review aims to identify the main devices, platforms, network protocols, processing data technologies and the applicability of smart farming with IoT to agriculture and shows an evolution in the way data is processed in recent years.
Abstract: The world population growth is increasing the demand for food production. Furthermore, the reduction of the workforce in rural areas and the increase in production costs are challenges for food production nowadays. Smart farming is a farm management concept that may use Internet of Things (IoT) to overcome the current challenges of food production. This work uses the preferred reporting items for systematic reviews (PRISMA) methodology to systematically review the existing literature on smart farming with IoT. The review aims to identify the main devices, platforms, network protocols, processing data technologies and the applicability of smart farming with IoT to agriculture. The review shows an evolution in the way data is processed in recent years. Traditional approaches mostly used data in a reactive manner. In more recent approaches, however, new technological developments allowed the use of data to prevent crop problems and to improve the accuracy of crop diagnosis.

Journal ArticleDOI
04 Feb 2020-Sensors
TL;DR: This paper identifies current research challenges and solutions in relation to 5G-enabled Industrial IoT, based on the initial requirements and promises of both domains, and provides meaningful comparisons for each of these areas to draw conclusions on current research gaps.
Abstract: Industrial IoT has special communication requirements, including high reliability, low latency, flexibility, and security. These are instinctively provided by the 5G mobile technology, making it a successful candidate for supporting Industrial IoT (IIoT) scenarios. The aim of this paper is to identify current research challenges and solutions in relation to 5G-enabled Industrial IoT, based on the initial requirements and promises of both domains. The methodology of the paper follows the steps of surveying the state of the art, comparing results to identify further challenges, and drawing conclusions as lessons learned for each research domain. These areas include IIoT applications and their requirements; mobile edge cloud; back-end performance tuning; network function virtualization; and security, blockchains for IIoT, Artificial Intelligence support for 5G, and private campus networks. Besides surveying the current challenges and solutions, the paper aims to provide meaningful comparisons for each of these areas (in relation to 5G-enabled IIoT) to draw conclusions on current research gaps.