
Showing papers in "Sensors in 2023"


Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: A survey of explainable AI techniques used in healthcare and related medical imaging applications can be found in this paper, where the authors provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis.
Abstract: Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.

24 citations


Journal ArticleDOI
22 Jan 2023-Sensors
TL;DR: In this paper, the authors present a possible solution for reducing the time needed for quality management by using a 3D Laser Vibrometry System, which is well suited to a factory quality management system based on the Industry 4.0 concept.
Abstract: In the current economic situation of many companies, the need to reduce production time is a critical element. However, this usually cannot be achieved at the cost of a decrease in the quality of the final product. This article presents a possible solution for reducing the time needed for quality management. With the use of modern solutions such as optical measurement systems, quality control can be performed without additional stoppage time. In the case of single-point measurement with the Laser Doppler Vibrometer, the measurement can be performed quickly, in a matter of milliseconds for each product. This article presents an example of such quality assurance measurements, with the use of fully non-contact methods, together with a proposed evaluation criterion for quality assessment. The proposed quality assurance algorithm allows the comparison of each of the products’ modal responses with the ideal template and stores this information in the cloud, e.g., in the company’s supervisory system. This makes the presented 3D Laser Vibrometry System an advanced instrumentation and data acquisition system that is well suited to a factory quality management system based on the Industry 4.0 concept.
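The template-comparison idea described in this abstract can be sketched in a few lines. The signals, sampling rate, and tolerance below are illustrative assumptions, not the paper's actual acceptance criterion: a real system would compare full modal responses, while this sketch compares only the dominant mode frequency.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the dominant frequency (Hz) of a vibration signal via a windowed FFT."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

def passes_quality_check(measured, template, fs, tol_hz=5.0):
    """Pass if the measured dominant mode matches the ideal template within tolerance."""
    return abs(dominant_frequency(measured, fs) - dominant_frequency(template, fs)) <= tol_hz

fs = 10_000  # Hz, sampling rate of the vibrometer output (assumed)
t = np.arange(0, 0.2, 1.0 / fs)
template = np.exp(-20 * t) * np.sin(2 * np.pi * 500 * t)  # "good" product: 500 Hz mode
shifted = np.exp(-20 * t) * np.sin(2 * np.pi * 530 * t)   # defective: mode shifted to 530 Hz
print(passes_quality_check(template, template, fs))  # True
print(passes_quality_check(shifted, template, fs))   # False
```

In a deployment following the paper's scheme, the pass/fail result and the measured response would then be pushed to the cloud-based supervisory system.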

13 citations


Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this paper, an algorithm is presented for equalizing FBMC signals with offset-QAM (OQAM) modulation in a MIMO channel with memory, with self-interference compensation that exploits the interference's correlation properties.
Abstract: Increasing the data transfer rate is an urgent task in cellular, high-frequency (HF) and special communication systems. The most common way to increase the data rate is to expand the bandwidth of the transmitted signal, which is often achieved through the use of multitone systems. One such system is the filter bank multicarrier (FBMC). In addition, speed improvements are achieved using multi-input–multi-output (MIMO) systems. In this study, we developed an algorithm for equalizing FBMC signals with offset-QAM modulation (OQAM) with self-interference compensation due to its correlation properties in a MIMO channel with memory. An analytical derivation of the proposed algorithm and an analysis of the computational complexity are given. According to the results of simulation modeling and a comparative analysis of performance in terms of the bit error rate and error vector magnitude with solutions with similar computational complexity, a similar level of performance was shown compared to a more complex parallel multistage algorithm, and a better performance was demonstrated compared to a one-tap algorithm.
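For context, the one-tap baseline that the proposed algorithm is compared against amounts to a per-subcarrier division by the channel estimate. The following is a minimal noiseless single-antenna sketch of that baseline, not the paper's algorithm; subcarrier count and channel model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64  # number of subcarriers (illustrative)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = rng.choice(qpsk, size=n_sub)  # one symbol per subcarrier
# Per-subcarrier flat fading (Rayleigh), i.e. a frequency-domain channel estimate:
H = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) / np.sqrt(2)
received = H * symbols      # noiseless channel, for clarity
equalized = received / H    # one-tap zero-forcing equalizer
print(np.allclose(equalized, symbols))  # True
```

The paper's contribution lies beyond this baseline: handling the intrinsic OQAM self-interference and MIMO channel memory that a single complex tap per subcarrier cannot compensate.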

12 citations


Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, a real-time view of blockchain-IoT-based applications for Industry 4.0 and Society 5.0 is presented, and open issues, challenges, and research opportunities are discussed.
Abstract: Today, blockchain is becoming more popular in academia and industry because it is a distributed, decentralised technology which is changing many industries in terms of security, building trust, etc. A few blockchain applications are banking, insurance, logistics, transportation, etc. Many insurance companies have been thinking about how blockchain could help them be more efficient. There is still a lot of hype about this immutable technology, even though it has not been utilised to its full potential. Insurers have to decide whether or not to use blockchain, just like many other businesses do. This technology keeps a distributed ledger on each blockchain node, making it more secure and transparent. The blockchain network can execute smart contracts and reach consensus among its participants, leaving malicious actors little room to tamper with records. On the other hand, the Internet of Things (IoT) might make a real-time application work faster through its automation. With the integration of blockchain and IoT, there will always be a problem with technology regarding IoT devices and mining the blockchain. This paper gives a real-time view of blockchain-IoT-based applications for Industry 4.0 and Society 5.0. The last few sections discuss essential topics such as open issues, challenges, and research opportunities for future researchers to expand research in blockchain-IoT-based applications.

12 citations


Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this article, the authors reviewed recent advances in sensor technologies for non-destructive testing (NDT) and structural health monitoring (SHM) of civil structures, highlighting limitations, advantages, and disadvantages.
Abstract: This paper reviews recent advances in sensor technologies for non-destructive testing (NDT) and structural health monitoring (SHM) of civil structures. The article is motivated by the rapid developments in sensor technologies and data analytics leading to ever-advancing systems for assessing and monitoring structures. Conventional and advanced sensor technologies are systematically reviewed and evaluated in the context of providing input parameters for NDT and SHM systems and for their suitability to determine the health state of structures. The presented sensing technologies and monitoring systems are selected based on their capabilities, reliability, maturity, affordability, popularity, ease of use, resilience, and innovation. A significant focus is placed on evaluating the selected technologies and associated data analytics, highlighting limitations, advantages, and disadvantages. The paper presents sensing techniques such as fiber optics, laser vibrometry, acoustic emission, ultrasonics, thermography, drones, microelectromechanical systems (MEMS), magnetostrictive sensors, and next-generation technologies.

12 citations


Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: An overview of the recent progress in piezoelectric materials and sensors for structural health monitoring can be found in this article, where several challenges along with opportunities for future research and development of high-performance piezoelectric materials and sensors are highlighted.
Abstract: Structural health monitoring technology can assess the status and integrity of structures in real time using advanced sensors, evaluate the remaining life of a structure, and inform maintenance decisions. Piezoelectric materials, which can yield electrical output in response to mechanical strain/stress, are at the heart of structural health monitoring. Here, we present an overview of the recent progress in piezoelectric materials and sensors for structural health monitoring. The article commences with a brief introduction to the fundamental physical science of the piezoelectric effect. Emphases are placed on piezoelectric materials engineered by various strategies and on the applications of piezoelectric sensors for structural health monitoring. Finally, challenges along with opportunities for future research and development of high-performance piezoelectric materials and sensors for structural health monitoring are highlighted.
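The direct piezoelectric effect described in this abstract can be illustrated with a back-of-the-envelope calculation. The charge coefficient and sensor capacitance below are assumed typical values for a soft-PZT element, not figures taken from the article.

```python
d33 = 500e-12  # C/N, piezoelectric charge coefficient (assumed typical soft-PZT value)
force = 10.0   # N, applied mechanical load
cap = 1e-9     # F, sensor capacitance (assumed)

charge = d33 * force    # direct piezoelectric effect: charge generated by stress
voltage = charge / cap  # open-circuit voltage seen by the readout electronics
print(charge, voltage)  # 5e-09 C, 5.0 V
```

Even a modest 10 N load thus produces a volt-scale signal, which is why piezoelectric sensors need no external excitation for dynamic strain measurements.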

10 citations


Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this paper, a hybrid function learning model was developed by customizing the transfer learning model together with hyperparameters to detect monkeypox disease through skin lesions in a fast and safe way in case of a possible pandemic.
Abstract: Monkeypox disease is caused by a virus that causes lesions on the skin and has been observed on the African continent in the past years. The fatal consequences caused by virus infections after the COVID pandemic have caused fear and panic among the public. As COVID reached pandemic proportions, the development and implementation of rapid detection methods became important. In this context, our study aims to detect monkeypox disease through skin lesions with deep-learning methods in a fast and safe way in case of a possible pandemic. Deep-learning methods were supported with transfer learning tools, and hyperparameter optimization was provided. In the CNN structure, a hybrid function learning model was developed by customizing the transfer learning model together with hyperparameters. The approach was implemented on custom MobileNetV3-s, EfficientNetV2, ResNet50, VGG19, DenseNet121, and Xception models. In our study, AUC, accuracy, recall, loss, and F1-score metrics were used for evaluation and comparison. The optimized hybrid MobileNetV3-s model achieved the best score, with an average F1-score of 0.98, AUC of 0.99, accuracy of 0.96, and recall of 0.97. In this study, convolutional neural networks were used in conjunction with hyperparameter optimization and a customized hybrid transfer learning model to achieve striking results with a custom CNN model. The custom CNN model design we have proposed is proof of how successfully and quickly deep learning methods can achieve results in classification and discrimination.
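The hyperparameter optimization step described above can be sketched as a simple grid search over fine-tuning settings. The search space and the scoring stub below are illustrative assumptions; the stub stands in for actually fine-tuning a pretrained backbone and measuring validation F1.

```python
import itertools

# Candidate hyperparameters for fine-tuning a pretrained backbone; the actual
# search space used in the paper is not specified, so these are assumptions.
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout": [0.2, 0.3, 0.5],
    "frozen_fraction": [0.5, 0.8, 1.0],
}

def evaluate(config):
    """Stand-in for fine-tuning the model and returning a validation score;
    replace this stub with a real training/validation loop."""
    return 1.0 - abs(config["learning_rate"] - 1e-3) - 0.1 * config["dropout"]

best = max(
    (dict(zip(search_space, values)) for values in itertools.product(*search_space.values())),
    key=evaluate,
)
print(best["learning_rate"], best["dropout"])  # 0.001 0.2
```

In practice, exhaustive grids grow quickly; random search or Bayesian optimization over the same dictionary is a common drop-in replacement.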

10 citations


Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this article, the authors proposed the amalgamation of artificial intelligence and blockchain in the metaverse to provide better, faster, and more secure healthcare facilities in digital space with a realistic experience.
Abstract: Digitization and automation have always had an immense impact on healthcare. It embraces every new and advanced technology. Recently the world has witnessed the prominence of the metaverse which is an emerging technology in digital space. The metaverse has huge potential to provide a plethora of health services seamlessly to patients and medical professionals with an immersive experience. This paper proposes the amalgamation of artificial intelligence and blockchain in the metaverse to provide better, faster, and more secure healthcare facilities in digital space with a realistic experience. Our proposed architecture can be summarized as follows. It consists of three environments, namely the doctor’s environment, the patient’s environment, and the metaverse environment. The doctors and patients interact in a metaverse environment assisted by blockchain technology which ensures the safety, security, and privacy of data. The metaverse environment is the main part of our proposed architecture. The doctors, patients, and nurses enter this environment by registering on the blockchain and they are represented by avatars in the metaverse environment. All the consultation activities between the doctor and the patient will be recorded and the data, i.e., images, speech, text, videos, clinical data, etc., will be gathered, transferred, and stored on the blockchain. These data are used for disease prediction and diagnosis by explainable artificial intelligence (XAI) models. The GradCAM and LIME approaches of XAI provide logical reasoning for the prediction of diseases and ensure trust, explainability, interpretability, and transparency regarding the diagnosis and prediction of diseases. Blockchain technology provides data security for patients while enabling transparency, traceability, and immutability regarding their data. These features of blockchain ensure trust among the patients regarding their data. 
Consequently, this proposed architecture ensures transparency and trust regarding both the diagnosis of diseases and the data security of the patient. We also explored the building block technologies of the metaverse. Furthermore, we also investigated the advantages and challenges of a metaverse in healthcare.
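The perturbation-based idea behind the LIME approach mentioned above can be sketched in a few lines. The black-box model below is a toy stand-in, not the architecture's actual diagnostic model, and the feature dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(x):
    """Toy stand-in for a diagnostic model: risk score driven mainly by feature 0."""
    return 3.0 * x[..., 0] + 0.5 * x[..., 1]

instance = np.array([1.0, 1.0, 1.0])  # the patient record being explained
# LIME-style local explanation: perturb the instance, query the model, and fit
# a linear surrogate whose coefficients act as local feature importances.
perturbations = instance + 0.1 * rng.normal(size=(500, 3))
responses = black_box(perturbations)
design = np.column_stack([perturbations, np.ones(len(perturbations))])
coefs, *_ = np.linalg.lstsq(design, responses, rcond=None)
importance = np.abs(coefs[:3])
print(importance.argmax())  # 0: feature 0 dominates the local explanation
```

The real LIME library adds locality weighting and sparse feature selection on top of this surrogate-fitting idea, but the core mechanism is the same.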

10 citations


Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, the authors provide a review of state-of-the-art approaches to blockchain and IoT integration, specifically in order to solve certain security- and privacy-related issues.
Abstract: As the Internet of Things (IoT) concept materialized worldwide in complex ecosystems, the related data security and privacy issues became apparent. While the system elements and their communication paths could be protected individually, generic, ecosystem-wide approaches were sought after as well. On a parallel timeline to IoT, the concept of distributed ledgers and blockchains came into the technological limelight. Blockchains offer many advantageous features in relation to enhanced security, anonymity, increased capacity, and peer-to-peer capabilities. Although blockchain technology can provide IoT with effective and efficient solutions, there are many challenges related to various aspects of integrating these technologies. While security, anonymity/data privacy, and smart contract-related features are apparently advantageous for blockchain technologies (BCT), there are challenges in relation to storage capacity/scalability, resource utilization, transaction rate scalability, predictability, and legal issues. This paper provides a systematic review on state-of-the-art approaches of BCT and IoT integration, specifically in order to solve certain security- and privacy-related issues. The paper first provides a brief overview of BCT and IoT’s basic principles, including their architecture, protocols and consensus algorithms, characteristics, and the challenges of integrating them. Afterwards, it describes the survey methodology, including the search strategy, eligibility criteria, selection results, and characteristics of the included articles. 
Later, we highlight the findings of this study, which illustrate different works that addressed the integration of blockchain technology and IoT to tackle various aspects of privacy and security, followed by a categorization of applications that have been investigated with different characteristics, such as their primary information, objective, development level, target application, type of blockchain and platform, consensus algorithm, evaluation environment and metrics, future works or open issues (if any), and further notes for consideration. Furthermore, a detailed discussion of all articles is included from an architectural and operational perspective. Finally, we cover major gaps and future considerations that can be taken into account when integrating blockchain technology with IoT.

10 citations


Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this article, two deep neural network-based model architectures are proposed for audio-visual speech and gesture recognition, achieving state-of-the-art performance on two large-scale corpora (LRW and AUTSL).
Abstract: Audio-visual speech recognition (AVSR) is one of the most promising solutions for reliable speech recognition, particularly when audio is corrupted by noise. Additional visual information can be used for both automatic lip-reading and gesture recognition. Hand gestures are a form of non-verbal communication and can be used as a very important part of modern human–computer interaction systems. Currently, audio and video modalities are easily accessible by sensors of mobile devices. However, there is no out-of-the-box solution for automatic audio-visual speech and gesture recognition. This study introduces two deep neural network-based model architectures: one for AVSR and one for gesture recognition. The main novelty regarding audio-visual speech recognition lies in fine-tuning strategies for both visual and acoustic features and in the proposed end-to-end model, which considers three modality fusion approaches: prediction-level, feature-level, and model-level. The main novelty in gesture recognition lies in a unique set of spatio-temporal features, including those that consider lip articulation information. As there are no available datasets for the combined task, we evaluated our methods on two different large-scale corpora—LRW and AUTSL—and outperformed existing methods on both audio-visual speech recognition and gesture recognition tasks. We achieved AVSR accuracy for the LRW dataset equal to 98.76% and gesture recognition rate for the AUTSL dataset equal to 98.56%. The results obtained demonstrate not only the high performance of the proposed methodology, but also the fundamental possibility of recognizing audio-visual speech and gestures by sensors of mobile devices.
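Two of the three fusion strategies named in this abstract (feature-level and prediction-level) can be sketched schematically. The dimensions, class count, and random weights below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def feature_level_fusion(audio_feat, visual_feat, W, b):
    """Feature-level fusion: concatenate modality embeddings, then classify."""
    fused = np.concatenate([audio_feat, visual_feat], axis=-1)
    return np.argmax(fused @ W + b, axis=-1)

def prediction_level_fusion(audio_logits, visual_logits, alpha=0.5):
    """Prediction-level fusion: weighted average of per-modality scores."""
    return np.argmax(alpha * audio_logits + (1 - alpha) * visual_logits, axis=-1)

rng = np.random.default_rng(0)
audio, visual = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))  # 4 samples, 8-dim embeddings
W, b = rng.normal(size=(16, 5)), np.zeros(5)  # 5 word classes, toy classifier weights
feat_preds = feature_level_fusion(audio, visual, W, b)
pred_preds = prediction_level_fusion(rng.normal(size=(4, 5)), rng.normal(size=(4, 5)))
print(feat_preds.shape, pred_preds.shape)  # (4,) (4,)
```

Model-level fusion, the third strategy, instead exchanges information inside the network (e.g. via cross-modal attention) and does not reduce to a one-line combination like the two above.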

9 citations


Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, a comprehensive review of artificial intelligence is presented, with specific attention to machine learning, deep learning, image processing, object detection, image segmentation, and few-shot learning studies utilized in several tasks related to the COVID-19 pandemic.
Abstract: Artificial intelligence has significantly enhanced the research paradigm and spectrum with a substantiated promise of continuous applicability in the real world domain. Artificial intelligence, the driving force of the current technological revolution, has been used in many frontiers, including education, security, gaming, finance, robotics, autonomous systems, entertainment, and most importantly the healthcare sector. With the rise of the COVID-19 pandemic, several prediction and detection methods using artificial intelligence have been employed to understand, forecast, handle, and curtail the ensuing threats. In this study, the most recent related publications, methodologies and medical reports were investigated with the purpose of studying artificial intelligence’s role in the pandemic. This study presents a comprehensive review of artificial intelligence with specific attention to machine learning, deep learning, image processing, object detection, image segmentation, and few-shot learning studies that were utilized in several tasks related to COVID-19. In particular, genetic analysis, medical image analysis, clinical data analysis, sound analysis, biomedical data classification, socio-demographic data analysis, anomaly detection, health monitoring, personal protective equipment (PPE) observation, social control, and COVID-19 patients’ mortality risk approaches were used in this study to forecast the threatening factors of COVID-19. This study demonstrates that artificial-intelligence-based algorithms integrated into Internet of Things wearable devices were quite effective and efficient in COVID-19 detection and forecasting insights which were actionable through wide usage. The results produced by the study prove that artificial intelligence is a promising arena of research that can be applied for disease prognosis, disease forecasting, drug discovery, and to the development of the healthcare sector on a global scale. 
We prove that artificial intelligence indeed played a significant role in helping to fight against COVID-19, and the insightful knowledge provided here could be extremely beneficial for practitioners and research experts in the healthcare domain when implementing artificial-intelligence-based systems to curb the next pandemic or healthcare disaster.

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, an improved lightweight BECA attention mechanism module was added to the backbone feature extraction network, an improved dense SPP network was added to the enhanced feature extraction network, a YOLO detection layer was added to the detection layer, and k-means++ clustering was used to obtain prior boxes better suited to traffic sign detection.
Abstract: Recognizing traffic signs is an essential component of intelligent driving systems’ environment perception technology. In real-world applications, traffic sign recognition is easily influenced by variables such as light intensity, extreme weather, and distance, which increase the safety risks associated with intelligent vehicles. A Chinese traffic sign detection algorithm based on YOLOv4-tiny is proposed to overcome these challenges. An improved lightweight BECA attention mechanism module was added to the backbone feature extraction network, and an improved dense SPP network was added to the enhanced feature extraction network. A YOLO detection layer was added to the detection layer, and k-means++ clustering was used to obtain prior boxes that were better suited for traffic sign detection. The improved algorithm, TSR-YOLO, was tested and assessed with the CCTSDB2021 dataset and showed a detection accuracy of 96.62%, a recall rate of 79.73%, an F1-score of 87.37%, and a mAP value of 92.77%, which outperformed the original YOLOv4-tiny network, and its FPS value remained around 81 f/s. Therefore, the proposed method can improve the accuracy of recognizing traffic signs in complex scenarios and can meet the real-time requirements of intelligent vehicles for traffic sign recognition tasks.
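The k-means++ prior-box step described above can be sketched as follows, using the co-centered IoU distance common in YOLO anchor clustering. The toy box sizes, cluster count, and iteration budget are assumptions, not the paper's configuration.

```python
import numpy as np

def iou_wh(boxes, centers):
    """IoU between (w, h) pairs, treating boxes as co-centered (YOLO anchor style)."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = boxes[:, None].prod(-1) + centers[None, :].prod(-1) - inter
    return inter / union

def kmeans_pp_anchors(boxes, k, iters=50, seed=0):
    """k-means++ seeding followed by Lloyd updates, with 1 - IoU as the distance."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), 1)]
    while len(centers) < k:  # k-means++: sample new seeds proportionally to distance
        d = (1 - iou_wh(boxes, centers)).min(axis=1)
        centers = np.vstack([centers, boxes[rng.choice(len(boxes), p=d / d.sum())]])
    for _ in range(iters):   # standard Lloyd refinement
        assign = (1 - iou_wh(boxes, centers)).argmin(axis=1)
        centers = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                            else centers[i] for i in range(k)])
    return centers

# Toy ground-truth box sizes (w, h) in pixels, forming three obvious size groups:
boxes = np.array([[10, 12], [11, 11], [50, 60], [55, 58], [120, 100], [118, 105]], float)
anchors = kmeans_pp_anchors(boxes, k=3)
print(anchors.shape)  # (3, 2): three prior boxes
```

Using 1 - IoU rather than Euclidean distance makes the clustering scale-aware, which is why it is favored for anchor selection over plain k-means.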

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, Wang et al. proposed a sequential convolution LSTM network for gait recognition using multimodal wearable inertial sensors, called SConvLSTM.
Abstract: Some recent studies use a convolutional neural network (CNN) or long short-term memory (LSTM) to extract gait features, but methods based on the CNN and the LSTM have a high loss rate of time-series and spatial information, respectively. Gait has obvious time-series characteristics, while a CNN only captures waveform characteristics, so using a CNN alone for gait recognition leads to a certain loss of time-series information. An LSTM can capture time-series characteristics, but its performance degrades when processing long sequences; a CNN, however, can compress the length of the feature vectors. In this paper, a sequential convolution LSTM network for gait recognition using multimodal wearable inertial sensors is proposed, which is called SConvLSTM. Based on a 1D-CNN and a bidirectional LSTM network, the method can automatically extract features from the raw acceleration and gyroscope signals without a manual feature design. The 1D-CNN is first used to extract the high-dimensional features of the inertial sensor signals; while retaining the time-series features of the data, the dimension of the features is expanded and the length of the feature vectors is compressed. Then, the bidirectional LSTM network is used to extract the time-series features of the data. The proposed method uses fixed-length data frames as the input and does not require gait cycle detection, which avoids the impact of cycle detection errors on the recognition accuracy. We performed experiments on three public benchmark datasets: UCI-HAR, HuGaDB, and WISDM. The results show that SConvLSTM performs better than most of the best-performing methods reported to date on the three datasets.
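The role of the 1D-CNN stage described above, compressing the sequence length while expanding the feature dimension before the BiLSTM, can be sketched numerically. The frame length, channel counts, kernel shape, and stride below are assumptions, not the paper's architecture.

```python
import numpy as np

def conv1d(x, kernels, stride=2):
    """Valid strided 1D convolution: x (T, C_in), kernels (K, C_in, C_out) -> (T', C_out)."""
    K, _, C_out = kernels.shape
    T_out = (len(x) - K) // stride + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        window = x[t * stride : t * stride + K]           # (K, C_in) slice of the frame
        out[t] = np.einsum("kc,kco->o", window, kernels)  # one output step per window
    return out

rng = np.random.default_rng(0)
frame = rng.normal(size=(128, 6))      # 128-sample frame x 6 IMU channels (3 acc + 3 gyro)
kernels = rng.normal(size=(5, 6, 32))  # kernel size 5, 32 output channels (assumed)
features = conv1d(frame, kernels)
print(features.shape)  # (62, 32): sequence compressed, feature dimension expanded
```

The shortened 62-step sequence is what makes the subsequent bidirectional LSTM tractable: it no longer has to model the full 128-sample raw signal.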

Journal ArticleDOI
29 Jan 2023-Sensors
TL;DR: In this paper, Li et al. proposed an improved forest fire detection method that classifies fires based on a new version of the Detectron2 platform (a ground-up rewrite of the Detectron library) using deep learning approaches.
Abstract: With an increase in both global warming and the human population, forest fires have become a major global concern. This can lead to climatic shifts and the greenhouse effect, among other adverse outcomes. Surprisingly, human activities have caused a disproportionate number of forest fires. Fast detection with high accuracy is the key to controlling this unexpected event. To address this, we proposed an improved forest fire detection method to classify fires based on a new version of the Detectron2 platform (a ground-up rewrite of the Detectron library) using deep learning approaches. Furthermore, a custom dataset was created and labeled for the training model, and it achieved higher precision than the other models. This robust result was achieved by improving the Detectron2 model in various experimental scenarios with a custom dataset and 5200 images. The proposed model can detect small fires over long distances during the day and night. The advantage of using the Detectron2 algorithm is its long-distance detection of the object of interest. The experimental results proved that the proposed forest fire detection method successfully detected fires with an improved precision of 99.3%.

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this article, the authors proposed text classification with the use of bidirectional encoder representations from transformers (BERT) for natural language processing, combined with other variants such as CNN, RNN, and BiLSTM.
Abstract: Sentiment analysis has been widely used in microblogging sites such as Twitter in recent decades, where millions of users express their opinions and thoughts because of its short and simple manner of expression. Several studies reveal that the state of sentiment cannot be reliably determined from the user context because of varying text lengths and ambiguous emotional information. Hence, this study proposes text classification using bidirectional encoder representations from transformers (BERT) for natural language processing, combined with other variants. The experimental findings demonstrate that the combinations of BERT with CNN, BERT with RNN, and BERT with BiLSTM perform well in terms of accuracy rate, precision rate, recall rate, and F1-score compared to Word2vec-based models and to BERT used without a variant.

Journal ArticleDOI
26 Jan 2023-Sensors
TL;DR: In this paper, a graph signal processing (GSP) method is proposed to overcome the lack of information in power system state estimation, where the graph smoothness property of the states (i.e., voltages) is validated through empirical and theoretical analysis.
Abstract: This paper considers the problem of estimating the states in an unobservable power system, where the number of measurements is not sufficiently large for conventional state estimation. Existing methods are either based on pseudo-data that is inaccurate or depend on a large amount of data that is unavailable in current systems. This study proposes novel graph signal processing (GSP) methods to overcome the lack of information. To this end, first, the graph smoothness property of the states (i.e., voltages) is validated through empirical and theoretical analysis. Then, the regularized GSP weighted least squares (GSP-WLS) state estimator is developed by utilizing the state smoothness. In addition, a sensor placement strategy that aims to optimize the estimation performance of the GSP-WLS estimator is proposed. Simulation results on the IEEE 118-bus system show that the GSP methods reduce the estimation error magnitude by up to two orders of magnitude compared to existing methods, using only 70 sampled buses, and yield an increase of up to 30% in the probability of bad data detection for the same probability of false alarms in unobservable systems. The results conclude that the proposed methods enable an accurate state estimation, even when the system is unobservable, and significantly reduce the required measurement sensors.
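A smoothness-regularized WLS estimator of the kind described above has the closed form x_hat = (H'WH + mu*L)^(-1) H'Wz, where L is the graph Laplacian. The following is a minimal sketch on a toy 4-bus path graph (not the IEEE 118-bus system); the regularization weight and measurement pattern are assumptions.

```python
import numpy as np

# Toy 4-bus path graph 1-2-3-4; its Laplacian L encodes smoothness of bus voltages.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(axis=1)) - A

x_true = np.array([1.00, 1.01, 1.02, 1.03])     # smooth true state (p.u. voltages)
H = np.array([[1.0, 0, 0, 0], [0, 0, 1.0, 0]])  # only buses 1 and 3 measured: unobservable
W = np.eye(2)                                   # measurement weight matrix
z = H @ x_true                                  # noiseless measurements, for clarity

mu = 0.1  # smoothness regularization weight (assumed)
x_hat = np.linalg.solve(H.T @ W @ H + mu * L, H.T @ W @ z)
print(np.round(x_hat, 3))  # all four bus voltages recovered close to x_true
```

Plain WLS would fail here because H'WH is singular with only two measured buses; the Laplacian term makes the system invertible by filling the unobserved buses with smooth interpolated values.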

Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this article, a novel fingerprinting-based indoor 2D positioning method, which utilizes the fusion of RSSI and magnetometer measurements, is proposed for mobile robots, which applies multilayer perceptron (MLP) feedforward neural networks to determine the 2D position, based on both the magnetometer data and the RSSI values measured between the mobile unit and anchor nodes.
Abstract: Received signal strength indicator (RSSI)-based fingerprinting is a widely used technique for indoor localization, but these methods suffer from high error rates due to various reflections, interferences, and noises. The use of disturbances in the magnetic field in indoor localization methods has gained increasing attention in recent years, since this technology provides stable measurements with low random fluctuations. In this paper, a novel fingerprinting-based indoor 2D positioning method, which utilizes the fusion of RSSI and magnetometer measurements, is proposed for mobile robots. The method applies multilayer perceptron (MLP) feedforward neural networks to determine the 2D position, based on both the magnetometer data and the RSSI values measured between the mobile unit and anchor nodes. The magnetic field strength is measured on the mobile node, and it provides information about the disturbance levels in the given position. The proposed method is validated using data collected in two realistic indoor scenarios with multiple static objects. The magnetic field measurements are examined in three different combinations, i.e., the measurements of the three sensor axes are tested together, the magnetic field magnitude is used alone, and the Z-axis-based measurements are used together with the magnitude in the X-Y plane. The obtained results show that significant improvement can be achieved by fusing the two data types in scenarios where the magnetic field has high variance; the improvement can be above 35% compared to results obtained by utilizing only RSSI or magnetic sensor data.
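The MLP fusion described above can be sketched as a concatenated feature vector fed through a small feedforward network. The anchor count, hidden-layer sizes, and random weights are illustrative assumptions; a real model would be trained on the collected fingerprint database.

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Minimal MLP forward pass: ReLU hidden layers, linear (x, y) position output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)  # ReLU activation
    return x @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)
rssi = rng.uniform(-90, -40, size=8)    # RSSI to 8 anchor nodes, in dBm (assumed)
mag = rng.uniform(20, 60, size=3)       # magnetometer X, Y, Z, in uT (assumed)
features = np.concatenate([rssi, mag])  # fused 11-dimensional input vector
sizes = [11, 32, 32, 2]                 # hidden sizes are assumptions
weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
position = mlp_forward(features, weights, biases)
print(position.shape)  # (2,): estimated 2D position
```

The three magnetometer combinations examined in the paper simply change which magnetic features enter the concatenation (all three axes, magnitude only, or Z-axis plus X-Y magnitude).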

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: A systematic literature review of smart wearable applications for cardiovascular disease detection and prediction is presented in this paper; the results show that although smart wearables are quite accurate in detecting, predicting, and even treating cardiovascular disease, further research is needed to improve their use.
Abstract: Background: The advancement of information and communication technologies and the growing power of artificial intelligence are successfully transforming a number of concepts that are important to our daily lives. Many sectors, including education, healthcare, industry, and others, are benefiting greatly from the use of such resources. The healthcare sector, for example, was an early adopter of smart wearables, which primarily serve as diagnostic tools. In this context, smart wearables have demonstrated their effectiveness in detecting and predicting cardiovascular diseases (CVDs), the leading cause of death worldwide. Objective: In this study, a systematic literature review of smart wearable applications for cardiovascular disease detection and prediction is presented. After conducting the required search, the documents that met the criteria were analyzed to extract key criteria such as the publication year, vital signs recorded, diseases studied, hardware used, smart models used, datasets used, and performance metrics. Methods: This study followed the PRISMA guidelines by searching IEEE, PubMed, and Scopus for publications published between 2010 and 2022. Once records were located, they were reviewed to determine which ones should be included in the analysis. Finally, the analysis was completed, and the relevant data were included in the review along with the relevant articles. Results: As a result of the comprehensive search procedures, 87 papers were deemed relevant for further review. In addition, the results are discussed to evaluate the development and use of smart wearable devices for cardiovascular disease management, and the results demonstrate the high efficiency of such wearable devices. Conclusions: The results clearly show that interest in this topic has increased. Although the results show that smart wearables are quite accurate in detecting, predicting, and even treating cardiovascular disease, further research is needed to improve their use.

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: Zhang et al. conducted a comparative study of the accuracy-complexity trade-off between CNNs and Transformers for action recognition in video clips and, based on the outcome of the performance analysis, discussed whether CNNs or Vision Transformers will win the race.
Abstract: Understanding actions in videos remains a significant challenge in computer vision and has been the subject of considerable research over the last decades. Convolutional neural networks (CNNs) are a significant component of this topic and played a crucial role in the rise of deep learning. Inspired by the human vision system, CNNs have been applied to visual data exploitation and have solved various challenges in computer vision tasks and video/image analysis, including action recognition (AR). More recently, following the transformer's success in natural language processing (NLP), transformer models began to set new trends in vision tasks, which has created a discussion around whether Vision Transformer models (ViT) will replace CNNs for action recognition in video clips. This paper examines this trending topic in detail: it studies CNNs and Transformers for action recognition separately, then compares them in terms of the accuracy-complexity trade-off. Finally, based on the outcome of the performance analysis, the question of whether CNNs or Vision Transformers will win the race is discussed.
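The accuracy-complexity trade-off discussed above is driven largely by how the two architectures scale: a convolution's cost grows linearly with spatial resolution, while self-attention grows quadratically with the number of patch tokens. The sketch below illustrates that scaling with schematic multiply-add counts; the layer sizes (56x56 feature maps, 64 channels, 768-dim tokens) are illustrative assumptions, not figures from the paper.

```python
# Schematic per-layer compute comparison: CNN vs. Vision Transformer.
# All figures are back-of-the-envelope estimates for illustration only.

def conv_flops(h, w, c_in, c_out, k=3):
    """Approximate multiply-adds of one k x k convolution layer
    over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def attention_flops(n_tokens, dim):
    """Approximate multiply-adds of one self-attention layer:
    Q.K^T and the attention-weighted sum of V each cost n^2 * d,
    hence the quadratic growth in the token count."""
    return 2 * n_tokens * n_tokens * dim

def vit_tokens(image_size, patch_size):
    """Number of patch tokens a ViT produces for a square image."""
    return (image_size // patch_size) ** 2
```

Doubling the frame resolution quadruples a convolution's cost but multiplies the attention cost by sixteen (4x the tokens, squared), which is why complexity, not just accuracy, decides the race on long video clips.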

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, a single-stage face detector based on RetinaNet was proposed to handle the problem of face detection in untamed situations and achieved state-of-the-art performance on the WIDER FACE and FDDB datasets.
Abstract: Most facial recognition and face analysis systems start with face detection. Early techniques, such as Haar cascades and histograms of oriented gradients, relied mainly on features manually engineered from particular images, and they cannot correctly handle images taken in unconstrained situations. However, the quick development of deep learning in computer vision has sped up the development of a number of deep-learning-based face detection frameworks, many of which have significantly improved accuracy in recent years. Detecting small, scaled, shifted, blurred, and partially occluded faces in uncontrolled conditions remains a problem that has been explored for many years but has not yet been entirely resolved. In this paper, we propose a RetinaNet baseline, a single-stage face detector, to handle this challenging face detection problem, and we made network improvements that boosted detection speed and accuracy. In our experiments, we used two popular datasets, WIDER FACE and FDDB. On the WIDER FACE benchmark, the proposed method achieves an AP of 41.0 at a speed of 11.8 FPS with a single-scale inference strategy and an AP of 44.2 with a multi-scale inference strategy, results that are competitive among one-stage detectors. The model was implemented and trained using the PyTorch framework and provided an accuracy of 95.6% on the successfully detected faces. The experimental results show that the proposed model delivers seamless detection and recognition results under the chosen performance evaluation metrics.
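Single-stage detectors such as the RetinaNet baseline above emit dense, overlapping box predictions that must be pruned before evaluation. The sketch below shows the standard greedy non-maximum suppression (NMS) step in plain Python; the box format (x1, y1, x2, y2) and the 0.5 IoU threshold are common conventions, not settings taken from the paper.

```python
# Greedy non-maximum suppression for axis-aligned boxes, for illustration.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it beyond
    the threshold, and repeat; returns the kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

In practice one would use the batched, GPU-accelerated NMS that detection libraries ship with; the logic is the same.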

Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this paper, a Deep Belief Network (DBN) was used to classify six tool conditions (one healthy and five faulty) through image-based vibration signals acquired in real time.
Abstract: The controlled interaction of work material and cutting tool is responsible for the precise outcome of machining activity. Any deviation in cutting parameters such as speed, feed, and depth of cut causes a disturbance to the machining. This leads to the deterioration of a cutting edge and unfinished work material. Recognition and description of tool failure are essential and must be addressed using intelligent techniques. Deep learning is an efficient method that assists in dealing with a large amount of dynamic data. The manufacturing industry generates momentous information every day and has enormous scope for data analysis. Most intelligent systems have been applied toward the prediction of tool conditions; however, they must be explored for descriptive analytics for on-board pattern recognition. In an attempt to recognize variations in the milling operation that lead to tool faults, the development of a Deep Belief Network (DBN) is presented. The network classifies a total of six tool conditions (one healthy and five faulty) through image-based vibration signals acquired in real time. The model was designed, trained, tested, and validated through datasets collected considering diverse input parameters.
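The "image-based vibration signals" above imply a preprocessing step that turns a raw 1-D vibration stream into a 2-D, image-like array the DBN can consume. A minimal sketch of one such step (overlapping-window framing) is shown below; the window length and hop size are illustrative assumptions, not the paper's settings.

```python
# Turn a 1-D vibration signal into a 2-D "image" of overlapping windows.

def frame_signal(signal, window=8, hop=4):
    """Slice a 1-D signal into overlapping windows of length `window`,
    advancing by `hop` samples, and stack them as rows of a 2-D array.
    Trailing samples that do not fill a full window are dropped."""
    rows = []
    for start in range(0, len(signal) - window + 1, hop):
        rows.append(signal[start:start + window])
    return rows
```

A spectrogram (FFT per window) is the usual refinement of this idea; framing alone already gives the network a fixed-size 2-D input per acquisition.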

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, the authors used machine learning models, such as the BERT neural language model and XGBoost, to extract updated information from the natural language documents largely available on the web, evaluating at the same time the level of the identified threats and vulnerabilities that can impact the healthcare system.
Abstract: Digitization in healthcare systems, with the wide adoption of Electronic Health Records, connected medical devices, software, and systems, has provided efficient healthcare service delivery and management. On the other hand, the use of these systems has significantly increased cyber threats in the healthcare sector. Vulnerabilities in existing and legacy systems are one of the key causes of these threats and related risks. Understanding and addressing the threats from connected medical devices and other parts of the ICT health infrastructure is of paramount importance for ensuring security within the overall healthcare ecosystem. Threat and vulnerability analysis provides an effective way to lower the impact of risks relating to existing vulnerabilities. However, this is a challenging task due to the massive amount of available data, which makes it difficult to identify potential patterns of security issues. This paper contributes towards an effective threat and vulnerability analysis by adopting machine learning models, such as the BERT neural language model and XGBoost, to extract updated information from the natural language documents largely available on the web, evaluating at the same time the level of the identified threats and vulnerabilities that can impact the healthcare system and providing the information required for the most appropriate management of the risk. Experiments were performed on cybersecurity news extracted from the Hacker News website and on Common Vulnerabilities and Exposures (CVE) vulnerability reports. The results demonstrate the effectiveness of the proposed approach, which provides a realistic manner of assessing threats and vulnerabilities from natural language texts, allowing it to be adopted in real-world healthcare ecosystems.
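A core step in the pipeline above is relating free-text security news to structured CVE descriptions. The paper does this with learned BERT embeddings and XGBoost; the sketch below substitutes a plain bag-of-words cosine similarity purely to make the matching step concrete, and the CVE entries are invented examples, not real records.

```python
# Toy text-to-CVE matching via bag-of-words cosine similarity.
# A stand-in for the learned-embedding matching described in the paper.
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(news, cve_entries):
    """Return the id of the CVE whose description is most similar
    to the news text."""
    v = bow(news)
    return max(cve_entries, key=lambda cid: cosine(v, bow(cve_entries[cid])))
```

Replacing `bow` with contextual embeddings and `max` with a trained ranker recovers the general shape of the learned approach.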

Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this article, the authors investigated early and full-blown PD patients based on the analysis of their voice characteristics with the aid of the most commonly employed machine learning (ML) techniques and found that feature-based ML and deep learning achieved comparable results in terms of classification, with KNN, SVM, and naïve Bayes classifiers performing similarly, with a slight edge for KNN.
Abstract: Parkinson’s Disease (PD) is one of the most common non-curable neurodegenerative diseases. Diagnosis is achieved clinically on the basis of different symptoms with considerable delays from the onset of neurodegenerative processes in the central nervous system. In this study, we investigated early and full-blown PD patients based on the analysis of their voice characteristics with the aid of the most commonly employed machine learning (ML) techniques. A custom dataset was made with hi-fi quality recordings of vocal tasks gathered from Italian healthy control subjects and PD patients, divided into early diagnosed, off-medication patients on the one hand, and mid-advanced patients treated with L-Dopa on the other. Following the current state-of-the-art, several ML pipelines were compared using different feature selection and classification algorithms, and deep learning was also explored with a custom CNN architecture. Results show how feature-based ML and deep learning achieve comparable results in terms of classification, with KNN, SVM and naïve Bayes classifiers performing similarly, with a slight edge for KNN. Much more evident is the predominance of CFS as the best feature selector. The selected features act as relevant vocal biomarkers capable of differentiating healthy subjects, early untreated PD patients and mid-advanced L-Dopa treated patients.
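KNN, the classifier reported above with a slight edge, is simple enough to sketch in full: each voice recording is reduced to a feature vector, and a query is labeled by majority vote among its nearest training examples. The two-feature vectors below are invented for illustration; they are not the paper's selected vocal biomarkers.

```python
# Minimal k-nearest-neighbours classifier over feature vectors.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k nearest neighbours of
    `query` by Euclidean distance."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

With CFS-selected features in place of the toy coordinates, this is essentially the feature-based pipeline the study compares against deep learning.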

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this article, a platform based on the IoT technology is developed, allowing the automation of charging and discharging cycles of each independent cell according to some parameters given by the user, and monitoring the real-time data of such battery cells.
Abstract: Since 1997, when the first hybrid vehicle was launched on the market, the number of NiMH batteries discarded due to obsolescence has not stopped increasing, at an even faster rate recently due to the progressive disappearance of combustion vehicles from the market. The battery technologies used are mostly NiMH for hybrid vehicles and Li-ion for pure electric vehicles, making recycling difficult due to the hazardous materials they contain. For this reason, and with the aim of extending the life of the batteries, even including a second life within electric vehicle applications, this paper describes and evaluates a low-cost system to characterize individual cells of commercial electric vehicle batteries, identifying abnormally performing cells that are out of use and thereby minimizing regeneration costs in a more sustainable manner. A platform based on the IoT technology is developed, allowing the automation of charging and discharging cycles of each independent cell according to parameters given by the user, and monitoring the real-time data of such battery cells. A case study based on a commercial Toyota Prius battery is also included in the paper. The results show the suitability of the proposed solution as an alternative way to characterize individual cells for subsequent electric vehicle applications, decreasing operating costs and providing an autonomous, flexible, and reliable system.
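A characterization cycle like the one automated above typically reduces each cell's discharge log to a delivered-capacity figure, and cells falling well below nominal are flagged as abnormal. The sketch below shows that post-processing via simple coulomb counting; the sample period, nominal capacity, and 70% threshold are illustrative assumptions, not the platform's actual parameters.

```python
# Coulomb counting over a discharge log, plus a weak-cell flagging rule.

def discharged_capacity_ah(current_samples_a, period_s):
    """Integrate discharge current (A) sampled every `period_s` seconds
    and return the extracted charge in ampere-hours."""
    coulombs = sum(current_samples_a) * period_s
    return coulombs / 3600.0

def flag_weak_cells(capacities_ah, nominal_ah, threshold=0.7):
    """Return indices of cells delivering less than
    threshold * nominal capacity (candidates for replacement)."""
    return [i for i, c in enumerate(capacities_ah)
            if c < threshold * nominal_ah]
```

Running this per cell after each automated cycle is enough to single out the few degraded cells that would otherwise condemn a whole battery pack.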

Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: A comprehensive review of the state of the art in the literature on the impact and implementation of the aforementioned technologies in AV architectures, along with the challenges faced by each of them, is provided in this paper.
Abstract: The wave of modernization around us has put the automotive industry on the brink of a paradigm shift. Leveraging the ever-evolving technologies, vehicles are steadily transitioning towards automated driving to constitute an integral part of the intelligent transportation system (ITS). The term autonomous vehicle (AV) has become ubiquitous in our lives, owing to the extensive research and development that frequently make headlines. Nonetheless, the flourishing of AVs hinges on many factors due to the extremely stringent demands for safety, security, and reliability. Cutting-edge technologies play critical roles in tackling complicated issues. Assimilating trailblazing technologies such as the Internet of Things (IoT), edge intelligence (EI), 5G, and Blockchain into the AV architecture will unlock the potential of an efficient and sustainable transportation system. This paper provides a comprehensive review of the state-of-the-art in the literature on the impact and implementation of the aforementioned technologies into AV architectures, along with the challenges faced by each of them. We also provide insights into the technological offshoots concerning their seamless integration to fulfill the requirements of AVs. Finally, the paper sheds light on future research directions and opportunities that will spur further developments. Exploring the integration of key enabling technologies in a single work will serve as a valuable reference for the community interested in the relevant issues surrounding AV research.

Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this article, the authors provide a comprehensive review of the applications of smart meters in the control and optimisation of power grids to support a smooth energy transition towards the renewable energy future.
Abstract: This paper provides a comprehensive review of the applications of smart meters in the control and optimisation of power grids to support a smooth energy transition towards the renewable energy future. Smart grids have become more complicated due to the presence of small-scale, low-inertia generators, which are mainly based on intermittent and variable renewable energy resources, and the introduction of electric vehicles (EVs). Optimal and reliable operation of this environment using conventional model-based approaches is very difficult. Advancements in measurement and communication technologies have brought the opportunity to collect temporal or real-time data from prosumers through Advanced Metering Infrastructure (AMI). Smart metering brings the potential of applying data-driven algorithms to different power system operation and planning services, such as infrastructure sizing and upgrades and generation forecasting. It can also be used for demand-side management, especially in the presence of new technologies such as EVs, 5G/6G networks, and cloud computing. These algorithms face privacy-preserving and cybersecurity challenges that need to be well addressed. This article surveys the state of the art of each of these topics, reviewing applications, challenges, and opportunities of using smart meters to address them. It also stipulates the challenges that smart grids present to smart meters and the benefits that smart meters can bring to smart grids. Furthermore, the paper concludes with some expected future directions and potential research questions for smart meters, smart grids, and their interplay.

Journal ArticleDOI
25 Feb 2023-Sensors
TL;DR: In this article, an exact Dubins multi-robot coverage path planning (EDM) algorithm based on mixed-integer linear programming (MILP) was proposed, which searches the entire solution space to obtain the shortest Dubins coverage path.
Abstract: Coverage path planning (CPP) for multiple Dubins robots has been extensively applied in aerial monitoring, marine exploration, and search and rescue. Existing multi-robot coverage path planning (MCPP) research uses exact or heuristic algorithms to address coverage applications. However, many exact algorithms provide precise area divisions rather than coverage paths, and heuristic methods face the challenge of balancing accuracy and complexity. This paper focuses on the Dubins MCPP problem in known environments. Firstly, we present an exact Dubins multi-robot coverage path planning (EDM) algorithm based on mixed-integer linear programming (MILP). The EDM algorithm searches the entire solution space to obtain the shortest Dubins coverage path. Secondly, a heuristic, approximate credit-based Dubins multi-robot coverage path planning (CDM) algorithm is presented, which utilizes a credit model to balance tasks among robots and a tree partition strategy to reduce complexity. Comparison experiments with other exact and approximate algorithms demonstrate that EDM provides the shortest coverage time in small scenes, while CDM produces a shorter coverage time and less computation time in large scenes. Feasibility experiments demonstrate the applicability of EDM and CDM to a high-fidelity fixed-wing unmanned aerial vehicle (UAV) model.
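To make the notion of a "coverage path" concrete, the sketch below generates the classic single-robot boustrophedon (lawnmower) sweep over a grid: every cell is visited exactly once by a serpentine path. This is only an illustration of the problem's output; the paper's EDM and CDM algorithms solve the far harder multi-robot variant with Dubins turning constraints via MILP and a credit model.

```python
# Boustrophedon (serpentine) coverage of a rows x cols grid.

def boustrophedon_path(rows, cols):
    """Return the list of (row, col) cells in a serpentine sweep order:
    left-to-right on even rows, right-to-left on odd rows, so consecutive
    cells are always grid-adjacent."""
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path
```

A Dubins robot cannot execute the sharp turns at row ends directly, which is precisely why the paper needs MILP to find the shortest feasible curves instead of this straight-line sweep.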

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this article, the authors proposed a new deep learning (DL) framework for the analysis of lung diseases, including COVID-19 and pneumonia, from chest CT scans and X-ray (CXR) images.
Abstract: This paper proposes a new deep learning (DL) framework for the analysis of lung diseases, including COVID-19 and pneumonia, from chest CT scans and X-ray (CXR) images. This framework is termed optimized DenseNet201 for lung diseases (LDDNet). The proposed LDDNet was developed by adding layers of 2D global average pooling, dense and dropout layers, and batch normalization to the base DenseNet201 model. The dense layers comprise 1024 ReLU-activated units and 256 units with sigmoid activation. The hyperparameters of the model, including the learning rate, batch size, epochs, and dropout rate, were tuned. Next, three datasets of lung diseases were formed from separate open-access sources. One was a CT scan dataset containing 1043 images. Two X-ray datasets comprised images of COVID-19-affected lungs, pneumonia-affected lungs, and healthy lungs: one imbalanced dataset with 5935 images and one balanced dataset with 5002 images. The performance of each model was analyzed using the Adam, Nadam, and SGD optimizers. The best results were obtained for both the CT scan and CXR datasets using the Nadam optimizer. For the CT scan images, LDDNet showed a COVID-19-positive classification accuracy of 99.36%, 100% precision, 98% recall, and a 99% F1 score. For the X-ray dataset of 5935 images, LDDNet provides 99.55% accuracy, 73% recall, 100% precision, and an 85% F1 score using the Nadam optimizer in detecting COVID-19-affected patients. For the balanced X-ray dataset, LDDNet provides a 97.07% classification accuracy. For a given set of parameters, the performance results of LDDNet are better than those of the existing ResNet152V2 and XceptionNet algorithms.
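The four figures quoted per dataset (accuracy, precision, recall, F1) all derive from the same binary confusion counts. The sketch below computes them; the counts in the usage test are invented round numbers for illustration, not the paper's actual confusion matrices.

```python
# Binary classification metrics from confusion counts
# (tp: true positives, fp: false positives, fn: false negatives,
#  tn: true negatives).

def metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, f1), guarding against
    division by zero when a class is never predicted or never present."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1
```

Note how 100% precision can coexist with 73% recall, as in the imbalanced X-ray result: the model never raises a false COVID-19 alarm but misses over a quarter of the true cases.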

Journal ArticleDOI
27 Jan 2023-Sensors
TL;DR: In this paper, the authors present an up-to-date and content-relevant analysis of the existing literature in this field, which includes a bibliometric performance analysis and a systematic literature review.
Abstract: Recently, there has been a growing interest in issues related to maintenance performance management, which is confirmed by a significant number of publications and reports devoted to these problems. However, theoretical and application studies indicate a lack of systematic literature reviews and surveys that focus, in a cross-sectional manner, on the evolution of Industry 4.0 technologies used in the maintenance area. Therefore, this paper reviews the existing literature to present an up-to-date and content-relevant analysis of this field. The proposed methodology includes a bibliometric performance analysis and a systematic literature review. First, a general bibliometric analysis was conducted based on the literature in the Scopus and Web of Science databases. Later, a systematic search was performed using the Primo multi-search tool, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The main inclusion criteria were the publication dates (studies published from 2012 to 2022), studies published in English, and studies found in the selected databases. In addition, the authors focused on research within the scope of the Maintenance 4.0 study. Therefore, papers within the following research fields were selected: (a) augmented reality, (b) virtual reality, (c) system architecture, (d) data-driven decision, (e) Operator 4.0, and (f) cybersecurity. This resulted in the selection of the 214 most relevant papers in the investigated area. Finally, the selected articles in this review were categorized into five groups: (1) Data-driven decision-making in Maintenance 4.0, (2) Operator 4.0, (3) Virtual and Augmented reality in maintenance, (4) Maintenance system architecture, and (5) Cybersecurity in maintenance. The obtained results led the authors to specify the main research problems and trends related to the analyzed area and to identify the main research gaps for future investigation from academic and engineering perspectives.

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, the authors presented the prototype of a Video Surveillance Unit (VSU) for detecting and signalling the presence of forest fires by exploiting two embedded Machine Learning (ML) algorithms running on a low-power device.
Abstract: Forest fires are the main cause of desertification, and they have a disastrous impact on agricultural and forest ecosystems. Modern fire detection and warning systems rely on several techniques: satellite monitoring, sensor networks, image processing, data fusion, etc. Recently, Artificial Intelligence (AI) algorithms have been applied to fire recognition systems, enhancing their efficiency and reliability. However, these devices usually need constant data transmission along with a proper amount of computing power, entailing high costs and energy consumption. This paper presents the prototype of a Video Surveillance Unit (VSU) for recognising and signalling the presence of forest fires by exploiting two embedded Machine Learning (ML) algorithms running on a low-power device. The ML models take audio samples and images as their respective inputs, allowing for timely fire detection. The main result is that while the performances of the two models are comparable when they work independently, their joint usage according to the proposed methodology provides higher accuracy, precision, recall, and F1 score (96.15%, 92.30%, 100.00%, and 96.00%, respectively). Finally, each event is remotely signalled by making use of the Long Range Wide Area Network (LoRaWAN) protocol to ensure that the personnel in charge are able to operate promptly.
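The gain from joint usage of the two models comes down to a decision-fusion step: the audio and image classifiers each emit a fire score, and an alarm is raised only on the fused result. The sketch below illustrates one simple fusion rule (score averaging against a threshold); the rule and the 0.5 threshold are assumptions for illustration, not the paper's exact methodology.

```python
# Decision fusion of two embedded fire-detection model outputs.

def fuse(audio_prob, image_prob, threshold=0.5):
    """Average the audio and image fire probabilities and compare
    against the alarm threshold; requiring support from both sensor
    modalities suppresses single-sensor false positives."""
    return (audio_prob + image_prob) / 2.0 >= threshold

def alarm(audio_prob, image_prob):
    """Map the fused decision to the signal sent over LoRaWAN."""
    return "FIRE" if fuse(audio_prob, image_prob) else "clear"
```

A confident detection on only one modality (e.g. a camera glare with silent audio) stays below the fused threshold, which is consistent with the precision improvement reported above.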