
Showing papers in "Journal of Sensors in 2022"


Journal ArticleDOI
TL;DR: A comprehensive review of various equal clustering, unequal clustering, and hybrid clustering approaches, together with their clustering attributes, is presented to mitigate hotspot issues in heterogeneous WSNs by using various parameters such as cluster head selection, number of clusters, zone formation, transmission, and routing parameters.
Abstract: Wireless Sensor Networks (WSNs) consist of a spatially distributed set of autonomous, connected sensor nodes. The deployed sensor nodes are extensively used for sensing and monitoring in environmental surveillance, military operations, transportation monitoring, and healthcare monitoring. The sensor nodes in these networks have limited resources in terms of battery, storage, and processing. Nodes deployed closer to the base station must forward both their own and neighboring nodes’ data towards the base station, which rapidly depletes their energy. This issue is called a hotspot in the network. Hotspot issues mainly appear in locations where the traffic load on the sensor nodes is high. Dynamic and unequal clustering techniques have been used to mitigate hotspot issues. However, despite some benefits, these solutions suffer from coverage overhead, network connectivity issues, unbalanced energy utilization among the sink nodes, and network stability issues. In this paper, a comprehensive review of various equal clustering, unequal clustering, and hybrid clustering approaches with their clustering attributes is presented to mitigate hotspot issues in heterogeneous WSNs by using various parameters such as cluster head selection, number of clusters, zone formation, transmission, and routing parameters. This review provides a detailed platform for new researchers to explore new and novel solutions to the hotspot issues in these networks.
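Equal clustering, one of the families this review surveys, is often illustrated with the classic LEACH protocol, in which each node self-elects as cluster head against a rotating probabilistic threshold. The sketch below is not from this paper; function names, the seed, and the cluster-head fraction are illustrative assumptions:

```python
import random

def leach_threshold(p, r):
    """Classic LEACH threshold T(n) for round r, with desired cluster-head
    fraction p. Nodes that recently served as cluster head sit out 1/p rounds."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, p=0.1, r=0, rng=None):
    """Each eligible node draws a uniform number and becomes a cluster head
    for this round if the draw falls below the LEACH threshold."""
    rng = rng or random.Random(42)
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

heads = elect_cluster_heads(list(range(100)), p=0.1, r=0)
```

Because the threshold rises in later rounds as fewer nodes remain eligible, the cluster-head role rotates and no single node's battery is drained first, which is exactly the hotspot symptom unequal and hybrid schemes then refine.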

49 citations


Journal ArticleDOI
TL;DR: This new factory model succinctly demonstrates the advancements in manufacturing introduced by these modern technologies, which qualifies this as a seminal industrial revolutionary event in human history.
Abstract: Every so often, a confluence of novel technologies emerges that radically transforms every aspect of the industry, the global economy, and finally, the way we live. These sharp leaps of human ingenuity are known as industrial revolutions, and we are currently in the midst of the fourth such revolution, coined Industry 4.0 by the World Economic Forum. Building on the Forum’s guideline set of technologies that encompass Industry 4.0, we present a full set of pillar technologies on which Industry 4.0 project portfolio management rests, as well as the foundation technologies that support these pillars. A complete model of an Industry 4.0 factory which relies on these pillar technologies is presented. The full set of pillars encompasses cyberphysical systems and Internet of Things (IoT), artificial intelligence (AI), machine learning (ML) and big data, robots and drones, cloud computing, 5G and 6G networks, 3D printing, virtual and augmented reality, and blockchain technology. These technologies are based on a set of foundation technologies which include advances in computing, nanotechnology, biotechnology, materials, energy, and finally cube satellites. We illustrate the confluence of all these technologies in a single model factory. This new factory model succinctly demonstrates the advancements in manufacturing introduced by these modern technologies, which qualifies this as a seminal industrial revolutionary event in human history.

37 citations


Journal ArticleDOI
TL;DR: In this article, a deep learning framework called Fire-Net was proposed to detect active forest fires and burning biomass from Landsat-8 images; it was also able to robustly detect small active fires.
Abstract: Forest conservation is crucial for the maintenance of a healthy and thriving ecosystem. The field of remote sensing (RS) has been integral to forest land observation, with the wide adoption of computer vision and sensor technologies. One critical area of interest is the detection of active forest fires. A forest fire, whether occurring naturally or induced by humans, can quickly sweep through vast amounts of land, leaving behind unfathomable damage and loss of lives. Automatic detection of active forest fires (and burning biomass) is hence an important area to pursue to avoid unwanted catastrophes. Early fire detection can also be useful for decision makers to plan mitigation strategies as well as extinguishing efforts. In this paper, we present a deep learning framework called Fire-Net that is trained on Landsat-8 imagery for the detection of active fires and burning biomass. Specifically, we fuse the optical (red, green, and blue) and thermal modalities from the images for a more effective representation. In addition, our network leverages residual convolution and separable convolution blocks, enabling deeper features to be extracted from coarse datasets. Experimental results show an overall accuracy of 97.35%, while the network is also able to robustly detect small active fires. The imagery for this study is taken from forest regions of Australia and North America, the Amazon rainforest, Central Africa, and Chernobyl (Ukraine), where forest fires are actively reported.

34 citations


Journal ArticleDOI
TL;DR: A new hybrid algorithm, scored regional congestion-aware and neighbors-on-path (ScRN), is introduced to choose a better output channel and thus improve NoC performance; the proposed solution was more successful in terms of delay, throughput, and energy consumption than other solutions.
Abstract: Networks on chip (NoCs) are an approach to implementing multiprocessor systems that handles the communication between processing cores, inspired by computer networks. Efficient nonstop routing is one of the most significant functions of a NoC. In fact, there are different routes from one node to another in these networks; therefore, there should be a function that helps build the best route to the destination. In the current study, a new hybrid algorithm, scored regional congestion-aware and neighbors-on-path (ScRN), is introduced to choose a better output channel and thus improve NoC performance. Using the ScRN algorithm, an analyzer first inspects the traffic packets, and the locality or nonlocality of the NoC traffic is determined based on the number of hops. Then, if the traffic is local, a scoring technique chooses the better output channel; if the traffic is nonlocal, the best output channel is chosen based on a particular parameter introduced here, as well as the system status, using the NoP or RCA selection functions. Finally, the proposed approach was assessed via the NIRGAM simulator in traffic scenarios with various selection functions. The simulation results showed that the solution was more successful in terms of delay, throughput, and energy consumption than other solutions: packet latency was reduced by 38%, and throughput increased by 20%. Considering these two parameters, energy consumption decreased by 10% on average.
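The abstract does not give ScRN's exact scoring rule, but a congestion-aware selection function of this kind can be sketched as a weighted score over candidate output channels. The weights and features below are hypothetical placeholders, not the paper's actual parameters:

```python
def score_channel(free_slots, path_hops, congestion,
                  w_free=0.5, w_hop=0.3, w_cong=0.2):
    """Toy scoring rule: more free buffer space raises the score;
    longer paths and higher regional congestion lower it."""
    return w_free * free_slots - w_hop * path_hops - w_cong * congestion

def select_output_channel(candidates):
    """candidates maps a channel name to (free_slots, hops, congestion).
    Returns the channel with the highest score."""
    return max(candidates, key=lambda c: score_channel(*candidates[c]))

channels = {
    "north": (4, 3, 0.8),  # free buffer slots, hops to destination, congestion
    "east":  (6, 2, 0.1),
}
best = select_output_channel(channels)  # "east": emptier buffers, shorter path
```

A real NoC router would compute these inputs from neighbor status signals each cycle; the point of the sketch is only how a scalar score lets one selection function arbitrate among output channels.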

22 citations


Journal ArticleDOI
TL;DR: The simulation results show that the proposed OEERP algorithm outperforms existing state-of-the-art algorithms in terms of accuracy, energy efficiency, and network lifetime extension.
Abstract: Limited battery power constrains the energy available to wireless sensor networks (WSNs), and network performance suffers significantly as a result. Therefore, this paper proposes an opportunistic energy-efficient routing protocol (OEERP) for reducing network energy consumption. It provides accurate target location detection, energy efficiency, and network lifespan extension. It schedules idle nodes into a sleep state, thereby optimising network energy consumption. Sleep duration is dynamically adjusted based on the network’s residual energy (RE) and flow rate (FR), which saves energy over a longer period. The sleeping nodes are triggered to wake up after a certain time interval. The simulation results show that the proposed OEERP algorithm outperforms existing state-of-the-art algorithms in terms of accuracy, energy efficiency, and network lifetime extension.
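The paper does not publish OEERP's exact sleep-scheduling formula; the sketch below only illustrates the general idea of adjusting a node's sleep interval from residual energy and flow rate. The function name and all constants are assumptions:

```python
def sleep_interval(residual_energy, flow_rate, base=1.0, max_interval=10.0):
    """Hypothetical rule: nodes with low residual energy or light traffic
    sleep longer; busy, well-charged nodes sleep less. Interval in seconds,
    residual_energy normalized to [0, 1], flow_rate in packets/s."""
    if flow_rate <= 0:  # idle node: sleep the maximum allowed time
        return max_interval
    interval = base * (1.0 - residual_energy) + base / flow_rate
    return min(max_interval, max(base, interval))

# A nearly depleted, lightly loaded node sleeps longer than a busy, full one.
lazy = sleep_interval(residual_energy=0.2, flow_rate=0.5)
busy = sleep_interval(residual_energy=0.9, flow_rate=5.0)
```

Clamping to a maximum interval matters in practice: a node that sleeps indefinitely never hears the wake-up trigger the abstract describes.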

17 citations


Journal ArticleDOI
TL;DR: The most relevant efforts to analyse how blockchain can improve the IoT are reviewed, and current research concerns and developments in the use of blockchain-related techniques and technologies for IoT security are examined in depth.
Abstract: The Internet of Things (IoT) refers to the interconnection of smart devices to collect data and make intelligent decisions. However, a lack of intrinsic security measures makes the next generation of the IoT more vulnerable to privacy and security threats. With its “security by design,” blockchain (BC) can help address major security requirements in the IoT. A blockchain is an ever-growing list of records that are linked and protected using cryptographic methods. It offers its users the flexibility to conduct transactions at lower cost and higher speed. Blockchain ledgers are also decentralized, with a copy of the ledger maintained at each node in the network. Blockchain’s security and adaptability make it much easier to build entire systems on it, with the added benefit of decentralization. BC capabilities like immutability, transparency, auditability, data encryption, and operational resilience can help solve most architectural shortcomings of the IoT. In the vision of the Internet of Things, traditional devices are becoming smarter and more autonomous. This vision is becoming reality as technology advances, but there are still challenges to be resolved, especially in security domains such as data trust. With the expected evolution of the IoT in the coming years, it is important to ensure that this great source of data remains trustworthy. This paper begins with an overview of blockchain and the IoT and explores the challenges of applying blockchain to the IoT. It then reviews the most relevant efforts to analyse how blockchain can improve the IoT, and examines in depth current research concerns and developments in the use of blockchain-related techniques and technologies in the context of IoT security. One of the best parts of working on or learning about blockchain and its applications is the curiosity about how it can impact the things we have grown accustomed to, making them more efficient and productive.

12 citations


Journal ArticleDOI
TL;DR: The surveyed results significantly enhance the interpretability of data-driven FDD methods for M&E services, potentially improve FDD accuracy, and promote the adoption of data-driven FDD approaches in real-world facility management practices.
Abstract: Data-driven fault detection and diagnosis (FDD) methods, i.e., the newer generation of artificial intelligence (AI) empowered classification methods arising from data science, big data, the Internet of Things (IoT), Industry 4.0, and so on, have become increasingly important for facility management in smart building design and smart city construction. While data-driven FDD methods nowadays outperform the majority of traditional FDD approaches, such as physics-based and mathematics-based models, in terms of both efficiency and accuracy, their interpretability has not grown significantly. Instead, according to the literature survey, the interpretability of data-driven FDD methods has become the main concern and creates barriers to adopting those methods in real-world industrial applications. In this study, we reviewed the existing data-driven FDD approaches for faults in building mechanical and electrical (M&E) services and discussed the interpretability of modern data-driven FDD methods. Two data-driven FDD strategies integrating expert reasoning about the faults were proposed. Lists of expert rules, maintainability knowledge, and international/local standards were compiled for various M&E services, including heating, ventilation and air-conditioning (HVAC), plumbing, fire safety, electrical, and elevator systems, based on surveys of 110 buildings in Singapore. The surveyed results significantly enhance the interpretability of data-driven FDD methods for M&E services, potentially improve FDD accuracy, and promote data-driven FDD approaches in real-world facility management practices.

12 citations


Journal ArticleDOI
TL;DR: This research proposes an improved way of detecting breast cancer using machine learning approaches and uses the Synthetic Minority Oversampling Technique (SMOTE) to deal with class imbalance and noise.
Abstract: Breast cancer (BC) is the most common and most rapidly spreading disease across the globe. This disease can be prevented if identified early, which ultimately reduces the death rate. Machine learning (ML) is the most frequently utilized technology in this research area, and cancer patients can benefit from early detection and diagnosis. Using machine learning approaches, this research proposes an improved way of detecting breast cancer. To deal with class imbalance and noise, the Synthetic Minority Oversampling Technique (SMOTE) has been used. There are two steps in the proposed work. In the first phase, SMOTE is utilized to reduce the influence of the data imbalance issue; subsequently, in the next phase, the data is classified using the naive Bayes classifier, decision tree classifier, random forest, and their ensembles. According to the experimental analysis, the XGBoost-random forest ensemble classifier outperforms the others, with 98.20% accuracy in the early detection of breast cancer.
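SMOTE itself is well documented: each synthetic sample is an interpolation between a minority-class point and one of its nearest minority-class neighbours. A minimal self-contained sketch of that idea (independent of the paper's pipeline, which used library classifiers; the function name, seed, and toy data are ours):

```python
import random

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: for each synthetic point, pick a random minority
    sample and one of its k nearest minority neighbours, then interpolate
    at a random position between them."""
    rng = rng or random.Random(0)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        i = rng.randrange(len(X_min))
        base = X_min[i]
        neighbours = sorted((p for j, p in enumerate(X_min) if j != i),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # 0..1: where on the segment the new point lands
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.2)]
new_points = smote_oversample(minority, n_new=4)
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside its original region rather than duplicating points verbatim, which is why SMOTE reduces overfitting relative to plain oversampling.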

12 citations


Journal ArticleDOI
TL;DR: Improved IWD (IIWD) is offered as an enhancement to the original IWD to replicate the influence of heterogeneity in the environment; it significantly improves the accuracy and effectiveness of the IWD method.
Abstract: This paper provides a novel implementation of the intelligent water drops (IWD) method for resolving data aggregation issues in heterogeneous wireless sensor networks (WSNs). For the case where an aggregating node is utilized to transmit data to the base station, the research attempts to show that the traffic conditions of a WSN may be modified appropriately by parameter tuning and algorithm modification. One application of IWD is to generate an optimal data aggregation tree in a WSN. IWD assumes that all nodes in the environment are identical, resulting in identical parameter updates for all nodes. In practical scenarios, however, diverse nodes with varying initial energy, communication range, and sensing range are deployed. In order to replicate the influence of heterogeneity in the environment, improved IWD (IIWD) is offered as an enhancement to the original IWD. The suggested enhancement is appropriate for scenarios in which the aggregation node is utilized to transmit data to the base station in heterogeneous configurations. A series of simulation results demonstrates that the proposed IIWD significantly improves on the accuracy and effectiveness of the original IWD method in terms of residual energy, dead nodes, payload, and network lifespan.
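The core IWD update rules are standard in the literature: a drop's velocity gain on an edge falls with the soil on that edge, and the drop erodes and carries soil in proportion to its travel time. A sketch of one move, with typical but assumed static parameters (the paper's heterogeneous per-node tuning is not reproduced here):

```python
def iwd_step(velocity, path_soil, hud, params=None):
    """One intelligent-water-drop move along an edge: the drop speeds up
    more on low-soil edges, erodes soil from the edge, and carries it."""
    p = params or dict(av=1.0, bv=0.01, cv=1.0,
                       as_=1.0, bs=0.01, cs=1.0, rho=0.9)
    # Velocity update: less soil on the edge means a bigger speed-up.
    velocity = velocity + p["av"] / (p["bv"] + p["cv"] * path_soil ** 2)
    # Travel time from the heuristic undesirability (HUD) of the edge.
    time = hud / max(velocity, 1e-9)
    # Soil eroded from the edge and added to the drop.
    delta_soil = p["as_"] / (p["bs"] + p["cs"] * time ** 2)
    path_soil = (1 - p["rho"]) * path_soil - p["rho"] * delta_soil
    return velocity, path_soil, delta_soil

v_high_soil, soil_after, carried = iwd_step(velocity=4.0, path_soil=10.0, hud=1.0)
v_low_soil, _, _ = iwd_step(velocity=4.0, path_soil=1.0, hud=1.0)
```

Low-soil edges therefore attract faster, more erosive drops, and the positive feedback is what lets repeated drops converge on a good aggregation tree.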

12 citations


Journal ArticleDOI
TL;DR: An AI-based intelligent feature learning mechanism named Probabilistic Super Learning- (PSL-) Random Hashing (RH) is proposed to improve the security of healthcare data stored in the IoT-cloud and to reduce the cost of IoT sensors.
Abstract: Providing security for healthcare data stored in an IoT-cloud environment is one of the most challenging and demanding tasks of recent days. The IoT-cloud framework is constructed from an enormous number of sensors that generate a massive amount of data, which makes it more susceptible to vulnerabilities and attacks that degrade the security level of the network through malicious activities. Hence, Artificial Intelligence (AI) technology is the most suitable option for healthcare applications, because it provides the best solutions for improving the security and reliability of data. Due to this fact, various AI-based security mechanisms have been implemented in conventional works for the IoT-cloud framework. However, they face significant problems of increased complexity in algorithm design, inefficient data handling, unsuitability for processing unstructured data, increased cost of IoT sensors, and high time consumption. Therefore, this paper proposes an AI-based intelligent feature learning mechanism named Probabilistic Super Learning- (PSL-) Random Hashing (RH) for improving the security of healthcare data stored in the IoT-cloud. This paper is also aimed at reducing the cost of IoT sensors by implementing the proposed learning model. Here, a training model is maintained for detecting attacks at the initial stage, where the properties of a reported attack are updated for learning the characteristics of attacks. In addition, a random key is generated based on the hash value of the data matrix, which is incorporated with the standard Elliptic Curve Cryptography (ECC) technique for data security. Then, the enhanced ECC-RH mechanism performs the data encryption and decryption processes with the generated random hash key. During performance evaluation, the results of both existing and proposed techniques are validated and compared using different performance indicators.
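The random-hashing idea, deriving a per-message key from the hash of the data matrix, can be sketched with the standard library alone. The ECC key exchange is omitted, and the XOR keystream below is only a toy stand-in for the symmetric step (a real deployment would use an authenticated cipher), not the paper's ECC-RH scheme:

```python
import hashlib

def derive_random_hash_key(data_matrix):
    """Derive a per-message key from the hash of the data matrix,
    as the RH step suggests (the ECC key-exchange step is omitted here)."""
    flat = ",".join(str(v) for row in data_matrix for v in row)
    return hashlib.sha256(flat.encode()).digest()

def xor_stream(data, key):
    """Toy keystream cipher: expand the key into a SHA-256 counter stream
    and XOR it with the data. XOR is its own inverse, so the same call
    both encrypts and decrypts."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

matrix = [[98, 72], [120, 36]]  # e.g. a small block of vital-sign readings
key = derive_random_hash_key(matrix)
cipher = xor_stream(b"patient record", key)
plain = xor_stream(cipher, key)
```

Binding the key to the data matrix means every message block gets a fresh key, which is the property the abstract's "random hash key" phrasing points at.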

12 citations


Journal ArticleDOI
TL;DR: An energy-efficient framework (the M-DSDV-RMCP routing protocol) for WBANs is proposed, based on the DSDV and RMCP routing protocols.
Abstract: This paper provides detailed information on the importance of link-aware, energy-efficient communication between sensor nodes working in the health care domain. Today, in the modern field of science and technology, wireless sensor networks play a vital role in real-life applications including medicine, health care, and disaster management. WBAN applications are nowadays successfully used in medical health care and are classified into two subtypes: wearable WBAN and implantable WBAN. For this reason, a patient’s health condition can be monitored anywhere and at any time. Researchers in the health care domain mostly use the latest communication standards, including 3G, WiMAX, Bluetooth, and ZigBee. In this paper, we propose an energy-efficient framework (the M-DSDV-RMCP routing protocol) for WBANs. The proposed framework combines the DSDV and RMCP routing protocols.

Journal ArticleDOI
TL;DR: The hardware design of a fall-acceleration sensing wearable for the elderly is described, with a focus on a novel algorithm for real-time filtering of the measurement data and on a strategy to confirm detected fall events based on changes in the person’s orientation.
Abstract: Due to the ever-growing population of elderly people, there is a dramatic increase in fall accidents. Currently, multiple ideas exist to prevent the elderly from falling, by means of technology or individualised fall prevention training programs. Most of them are costly, difficult to implement, or little used by the elderly, and they do not deliver the required results. Furthermore, the increasingly older population will also increase the workload of medical and nursing personnel. Therefore, we propose a novel fall detection and warning system for nursing homes, relying on Bluetooth Low Energy wireless communication. This paper describes the hardware design of a fall-acceleration sensing wearable for the elderly. Moreover, the paper also focuses on a novel algorithm for real-time filtering of the measurement data, as well as on a strategy to confirm detected fall events based on changes in the person’s orientation. In addition, we compare the performance of the algorithm to a machine learning approach using a convolutional neural network. Finally, the proposed filtering technique is validated via measurements and simulation. The results show that the proposed algorithm and the convolutional neural network both achieve excellent accuracy when validated on a common database.
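A threshold-based fall detector of the kind described, an acceleration spike confirmed by a change in the person's orientation, can be sketched as follows. The thresholds and the confirmation rule are illustrative assumptions, not the paper's tuned values:

```python
import math

def fall_detected(samples, impact_g=2.5, tilt_deg=60.0):
    """samples: list of (ax, ay, az) accelerometer readings in g.
    Flag a fall only when an acceleration-magnitude spike occurs AND the
    final orientation differs strongly from the initial one (the person
    ends up lying down rather than, say, sitting abruptly)."""
    def angle_between(v, ref):
        dot = sum(a * b for a, b in zip(v, ref))
        norm = (math.sqrt(sum(a * a for a in v)) *
                math.sqrt(sum(b * b for b in ref)))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    spike = any(math.sqrt(ax ** 2 + ay ** 2 + az ** 2) > impact_g
                for ax, ay, az in samples)
    tilted = angle_between(samples[-1], samples[0]) > tilt_deg
    return spike and tilted

upright = [(0.0, 0.0, 1.0)] * 5  # standing still: gravity along z
fall = [(0.0, 0.0, 1.0),         # upright
        (0.5, 0.2, 3.2),         # impact spike
        (1.0, 0.0, 0.1)]         # gravity now mostly along x: lying down
```

Requiring both conditions is what the paper's "confirmation strategy" achieves: an impact alone (dropping the wearable, sitting down hard) does not change the wearer's resting orientation, so it is not reported as a fall.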

Journal ArticleDOI
TL;DR: In this paper, the Grasshopper Optimization Algorithm (GOA) was used to train a multilayer perceptron artificial neural network (MLP-NN) and also to select optimal features in big data sonar.
Abstract: The complexity and high dimensions of big data sonar, as well as the unavoidable presence of unwanted signals such as noise, clutter, and reverberation in the environment of sonar propagation, have made the classification of big data sonar one of the most interesting and applicable topics for active researchers in this field. This paper proposes the use of the Grasshopper Optimization Algorithm (GOA) to train Multilayer Perceptron Artificial Neural Network (MLP-NN) and also to select optimal features in big data sonar (called GMLP-GOA). GMLP-GOA hybrid classifier first extracts the features of experimental sonar data using MFCC. Then, the most optimal features are selected using GOA. In the last step, MLP-NN trained with GOA is used to classify big data sonar. To evaluate the performance of GMLP-GOA, this classifier is compared with MLP-GOA, MLP-GWO, MLP-PSO, MLP-ACO, and MLP-GSA classifiers in terms of classification rate, convergence rate, local optimization avoidance power, and processing time. The results indicated that GMLP-GOA achieved a classification rate of 98.12% in a processing time of 3.14 s.

Journal ArticleDOI
TL;DR: This survey extensively reviews and creates a comprehensive taxonomy of various smart healthcare technologies, along with their security aspects and solutions, and proposes an AI-based architecture with a 6G network interface to secure the data exchange between patients and medical practitioners.
Abstract: There is a massive transformation in the traditional healthcare system from the specialist-centric approach to the patient-centric approach by adopting modern and intelligent healthcare solutions to build a smart healthcare system. It permits patients to directly share their medical data with the specialist for remote diagnosis without any human intervention. Furthermore, the remote monitoring of patients utilizing wearable sensors, Internet of Things (IoT) technologies, and artificial intelligence (AI) has made treatment readily accessible and affordable. However, the advancement also brings several security and privacy concerns that hamper the effective performance of the smart healthcare system. An attacker can exploit the IoT infrastructure, perform adversarial attacks on AI models, and launch resource starvation attacks in the smart healthcare system. To overcome the aforementioned issues, in this survey, we extensively reviewed and created a comprehensive taxonomy of various smart healthcare technologies such as wearable devices, digital healthcare, and body area networks (BANs), along with their security aspects and solutions for the smart healthcare system. Moreover, we propose an AI-based architecture with the 6G network interface to secure the data exchange between patients and medical practitioners. We have examined our proposed architecture with a case study based on the COVID-19 pandemic by adopting unmanned aerial vehicles (UAVs) for data exchange. The performance of the proposed architecture is evaluated using various machine learning (ML) classification algorithms such as random forest (RF), naive Bayes (NB), logistic regression (LR), linear discriminant analysis (LDA), and perceptron. The RF classification algorithm outperforms the conventional algorithms in terms of accuracy, i.e., 98%. Finally, we present open issues and research challenges associated with smart healthcare technologies.

Journal ArticleDOI
TL;DR: Intelligent learning methods and swarm-intelligence bionic optimization algorithms are introduced to address reliability issues in mobile wireless sensor networks, including fault prediction methods, topology reliability assessment in industrial environments, and the impact of mobile path optimization on data collection efficiency and network reliability.
Abstract: With the rapid development of the Internet in recent years, people are using the Internet more and more frequently. People publish and obtain information through various channels on the Internet, and online social networks have become one of the most important channels. The many nodes in social networks and the frequent interactions between them create great difficulties for privacy protection, and some existing studies also suffer from problems such as cumbersome computational steps and low efficiency. In this paper, we take the complex environment of social networks as the research background and focus on the key reliability issues of mobile wireless sensor networks, which suit large-scale, simple-information, delay-tolerant applications. By introducing intelligent learning methods and swarm-intelligence bionic optimization algorithms, we address reliability issues such as fault prediction methods and topology reliability assessment for mobile wireless sensor networks in industrial application environments; the impact of mobile path optimization on data collection efficiency and network reliability; reliable data transmission based on data fusion methods; and intelligent fault-tolerance strategies for multipath routing, to ensure that mobile wireless sensor networks operate energy-efficiently and reliably in complex industrial application environments.

Journal ArticleDOI
TL;DR: UAV-to-UAV communication is presented, as it underpins the SUAVs’ autonomous coordination ability; future research directions and open challenges, including autonomous SUAVs, are also discussed.
Abstract: Within the last decade, Swarm Unmanned Aerial Vehicles (SUAVs) have been booming and growing at a surprisingly rapid pace. From military combat, environmental surveillance, and air transport to the blossoming public entertainment sector, there is a wide range of UAV applications. In these use cases, the accurate location of the target of interest can be requested, which is very important for mission accomplishment. For GPS-equipped SUAVs, this is an easy task. However, the GPS signal can be obscured, affected by environmental conditions, or suppressed by jamming. Therefore, location information needs to be improved or assisted by other localization techniques, which constitutes the main scope of this article. Besides, with advancements in localization, guidance, and communication technologies, future SUAVs will operate autonomously by distributing tasks and coordinating the operation of many UAVs. Thus, UAV-to-UAV communication is presented, as it underpins the SUAVs’ autonomous coordination ability. In addition, future research directions and open challenges that need to be addressed, including autonomous SUAVs, are also discussed.

Journal ArticleDOI
TL;DR: In this article, the authors explored whether eye movement behavior can be used as an objective tool to detect visual fatigue and found significant differences before and after the visual fatigue task in both the survey and eye-tracker-derived features.
Abstract: The traditional ways to detect visual fatigue are to use a questionnaire or to measure the critical flicker fusion frequency, which drops with eye fatigue. The objective of this study was to explore whether eye movement behavior can be used as an objective tool to detect visual fatigue. Thirty-three participants were tested in this study. Their subjective visual fatigue survey, critical flicker fusion frequency, and one-minute eye-tracker gaze recordings were measured before and after a 20-minute visual fatigue task. There were significant differences before and after the visual fatigue task in the survey and in eye-tracker-derived features. In a multiple regression analysis with four eye-tracker features (total fixation duration within the inner circle, longest continuous duration of inner-circle viewing time, maximum saccade distance, and focus radius), the regression R-squared value was greater than 0.9 for all critical flicker fusion frequency data and whenever the subjective visual fatigue assessment was greater than 12 points. In conclusion, eye movement behavior can be used to detect visual fatigue even more sensitively than the traditional critical flicker fusion assessment. The eye tracker can also provide a good regression model to fit the traditional critical flicker fusion frequency measurement and the subjective visual fatigue survey.
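The regression step reported here is ordinary least squares over eye-tracker features. A single-predictor sketch with entirely hypothetical numbers (the study used four features and real measurements) shows how an R-squared value like the >0.9 quoted above is computed:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor.
    Returns (slope, intercept, R_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical data: fixation duration (s) vs. critical flicker fusion (Hz);
# fatigue should show as longer fixations and a lower fusion frequency.
fixation = [30.0, 34.0, 38.0, 42.0, 46.0]
cff = [41.0, 39.8, 38.9, 37.7, 36.8]
slope, intercept, r2 = fit_line(fixation, cff)
```

With four predictors the same residual-sum-of-squares logic applies, just with the coefficients solved jointly instead of one at a time.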

Journal ArticleDOI
TL;DR: A novel real-time wearable system for finger air-writing recognition in three-dimensional (3D) space is presented, based on the Arduino Nano 33 BLE Sense as an edge device, which can run TensorFlow Lite to perform recognition and classification on the device.
Abstract: Nowadays, wearable sensors play a vital role in the detection of human motions, innovating an alternate and intuitive form of human–computer interaction (HCI). In this study, we present a novel real-time wearable system for finger air-writing recognition in three-dimensional (3D) space based on the Arduino Nano 33 BLE Sense as an edge device, which can run TensorFlow Lite to perform recognition and classification on the device. This system gives users the freedom and flexibility to write characters (10 digits and 26 English lower-case letters) in free space by moving their fingers, and uses a deep learning algorithm to recognize 36 characters from the motion data captured by inertial measurement units (IMUs) and processed by a microcontroller, both of which are embedded in the Arduino Nano 33 BLE Sense. We prepared 63,000 air-writing stroke data samples from 35 subjects (18 males and 17 females) for convolutional neural network (CNN) training and achieved a high recognition accuracy of 97.95%.

Journal ArticleDOI
TL;DR: In this article, the authors propose a system aimed at predicting heart attacks by integrating computer vision and deep learning techniques on heart images collected from clinical labs, which are publicly available in the KAGGLE repository.
Abstract: Many people worldwide, irrespective of their age, suffer from massive cardiac arrest. To detect heart attacks early, many researchers have worked on clinical datasets collected from open-source repositories like PubMed and UCI. However, most of these datasets contain roughly 13 to 147 raw attributes in textual format and have been analyzed with traditional data mining approaches. Traditional machine learning approaches just analyze the data extracted from the images, but the extraction mechanism is inefficient and requires more resources. The authors of this research article propose a system aimed at predicting heart attacks by integrating computer vision and deep learning techniques on heart images collected from clinical labs, which are publicly available in the KAGGLE repository. The authors collected live images of the heart by scanning the images through IoT sensors. The primary focus is to enhance the quality and quantity of the heart images by passing them through the two components of a generative adversarial network (GAN); the GAN introduces noise in the images and tries to replicate real-time scenarios. Subsequently, the available and newly created images are segmented by applying a multilevel threshold operation to find the region of interest. This step helps the system predict an accurate attack rate by considering various factors. Earlier researchers obtained sound accuracy by generating similar heart images and finding the ROI parts of 2D echo images. The proposed methodology achieves an accuracy of 97.33% and a 90.97% true-positive rate. Computed tomography (CT) scan images were selected because grayscale images give more reliable information at a low computational cost.

Journal ArticleDOI
TL;DR: Wang et al. establish a multi-index comprehensive evaluation system and use the AHP-entropy weight method to measure the development level of China’s digital economy and, on this basis, analyze the development level, dynamic changes, and regional differences of China’s digital economy.
Abstract: At present, China’s economic development is in a critical period of transformation: it needs to shed its dependence on the real estate industry and low-end export processing and is in urgent need of new growth engines. The emergence of the digital economy has provided a boost to economic upgrading, but to give full play to its potential, we must understand it accurately and fully. The development of the digital economy has become a focus across all sectors. Because the digital economy is a multilevel and complex concept, this paper establishes a multi-index comprehensive evaluation system and uses the AHP-entropy weight method to measure the development level of China’s digital economy and, on this basis, analyzes its development level, dynamic changes, and regional differences. The results show that China’s digital economy is on the rise, driven mainly by the construction of digital infrastructure and the application of digital technology. By region, there is a large and continuously widening gap in digital economy development. The eastern region is in a leading position, but only in the development of the digital industry.
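The entropy-weight half of the AHP-entropy weight method has a standard computation, sketched below. The indicator matrix `X` (rows as provinces, columns as indicators) and the positive, larger-is-better assumption are illustrative choices; the AHP half, which blends in subjective weights, is omitted:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight step of the AHP-entropy weight method.
    Rows of X are samples (e.g. provinces), columns are indicators,
    assumed positive and larger-is-better (an illustrative choice)."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    logs = np.zeros_like(P)
    np.log(P, out=logs, where=P > 0)               # treat 0*log(0) as 0
    e = -(P * logs).sum(axis=0) / np.log(len(X))   # entropy per indicator
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()                             # normalized entropy weights
```

An indicator that is identical across all provinces carries maximum entropy and therefore receives (near-)zero weight, which is the intuition behind the method.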

Journal ArticleDOI
TL;DR: Zhang et al. propose a one-anchor-based (OAB) object detection algorithm built on the central-point-sampling idea of anchor-free detectors, which reduces the complexity of the anchor-based detector, improves inference speed, and reduces the hyperparameter settings of the traditional matching strategy, rendering the model more flexible.
Abstract: Remote sensing images are widely distributed, with small object sizes and complex backgrounds, resulting in low accuracy and slow speed of remote sensing image detection. Existing remote sensing object detection is generally based on detectors with anchors. With the proposal of the feature pyramid network (FPN) and focal loss, anchorless detectors have emerged; however, their accuracy is often low. First, this study analyzes the differences and characteristics of intersection-over-union (IoU) and shape matching based on anchors in mainstream algorithms and shows that in dense or complex scenes, some labels are not easily assigned to positive samples, which leads to detection failure. Subsequently, we propose a one-anchor-based (OAB) object detection algorithm built on the central-point-sampling idea of the anchor-free detector. Positive and negative samples are defined according to central point sampling and a distance constraint, and an anchor box is preset for each positive sample to accelerate its convergence. This reduces the complexity of the anchor-based detector, improves inference speed, and reduces the hyperparameter settings of the traditional matching strategy, rendering the model more flexible. Finally, to suppress background noise in remote sensing images, the vision transformer (ViT) is adopted to connect the neck and head, making it easier for the network to attend to key information so that it is not easily lost during training. Experiments on the challenging public DOTA dataset verify the effectiveness of the proposed algorithm. The experimental results show that, compared with the YOLOv5 baseline, the mAP of the optimized OAB-YOLOv5 method is improved by 2.79%, the number of parameters is reduced by 13.2%, and the inference time is reduced by 11%.
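The IoU and centre-sampling rules the study builds on can be made concrete with a short sketch. Both functions below are generic textbook versions, not the paper's OAB implementation, and the sampling radius is an assumed hyperparameter:

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def center_positive(points, gt, radius):
    """Mark grid points as positive samples if they fall within `radius`
    of the ground-truth box centre -- an illustrative version of the
    centre-point sampling rule, not the paper's code."""
    cx, cy = (gt[0] + gt[2]) / 2.0, (gt[1] + gt[3]) / 2.0
    d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
    return d <= radius
```

Under a centre-sampling rule like this, labels in dense scenes are assigned by distance to the object centre rather than by IoU alone, which is the failure mode the abstract points out.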

Journal ArticleDOI
TL;DR: The author proposes genetic-algorithm-based sensor fault diagnosis and fault-tolerant control for a quadrotor UAV, introduces common sensor failures, and improves the genetic algorithm underlying the GA-optimized BP neural network.
Abstract: The quadrotor drone is small in size and light in weight, its mechanical structure is simple, and its requirements for the working environment are low. The development of quadrotor UAV technology is also a current focus for technical personnel. The author proposes quadrotor UAV sensor fault diagnosis and fault-tolerant control based on a genetic algorithm, introduces common sensor failures, and improves the genetic algorithm underlying the GA-optimized BP neural network. This paper trains a classical BP algorithm, a classical GA-BP algorithm, and an improved GA-BP algorithm. Using a total of 150 sets of training data, with Levenberg-Marquardt (trainlm) as the training function and mean squared error (mse) as the performance function, under the same noise background the improved GA-BP algorithm has the highest detection rate, the classical GA-BP algorithm follows, and the classical BP algorithm is the worst. Therefore, using the improved GA-BP algorithm, various sensor errors can be detected quickly and accurately.
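The GA-BP idea of using a genetic algorithm to search network weights can be sketched with a minimal real-coded GA. The operators below (tournament selection, uniform crossover, Gaussian mutation, elitism) are a generic illustration; the paper's improved GA is not specified in the abstract, and the quadratic loss here stands in for the BP network's mse:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_minimize(loss, dim, pop=40, gens=60, mut=0.1):
    """Minimal real-coded GA of the kind used to seed BP weights:
    tournament selection, uniform crossover, Gaussian mutation, elitism.
    Illustrative only -- not the paper's improved GA-BP."""
    P = rng.normal(size=(pop, dim))
    for _ in range(gens):
        f = np.array([loss(x) for x in P])
        # binary tournament selection
        i, j = rng.integers(0, pop, (2, pop))
        parents = P[np.where(f[i] < f[j], i, j)]
        # uniform crossover between consecutive parents
        mask = rng.random((pop, dim)) < 0.5
        C = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation
        C += mut * rng.normal(size=C.shape)
        # elitism: carry the current best individual over unchanged
        C[0] = P[np.argmin(f)]
        P = C
    f = np.array([loss(x) for x in P])
    return P[np.argmin(f)]
```

In a real GA-BP pipeline, each chromosome would encode the flattened BP weights and the GA's best individual would initialize gradient training.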

Journal ArticleDOI
TL;DR: The DT-assisted simulation method of AV is applied in the car-following scenario, which effectively solves the challenges of car-following scenario simulation through virtual-real interaction.
Abstract: The automated system replaces the driver, enabling autonomous vehicles to improve safety and convenience, so the market for autonomous vehicles is huge. However, the real-world application of autonomous vehicles faces many challenges due to the immaturity of automated systems. As a consequence, simulation verification plays an irreplaceable role in the application of the autonomous vehicle (AV). Car-following is the most common driving scenario in mixed traffic flows, so it is essential to develop an appropriate and effective simulation method for AVs. Combining existing AV simulation methods with digital twin (DT) technology, this paper proposes a DT-assisted method for AV simulation in a car-following scenario. The method makes the physical vehicle interact with the DT vehicle, and the DT vehicle can dynamically regulate the physical entity through real-time simulation data; the simulation can be displayed in the DT scenario to ensure its security. Meanwhile, a DT-assisted simulation framework for AVs is proposed, including physical entity components, DT components, and data processing and evaluation components. Besides, a DT-assisted simulation platform is developed based on the Unity engine. Finally, the DT-assisted simulation of the AV in the car-following scenario is implemented in a field experiment. The experimental results show that the proposed method can effectively conduct AV simulation in car-following, and the average communication latency is 52.3 ms, smaller than the 66.6 ms update period (15 Hz) between the DT-assisted platform and the AV. The DT-assisted simulation method proposed in this paper, applied in the car-following scenario, effectively solves the challenges of car-following scenario simulation through virtual-real interaction.
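The abstract does not state which car-following law the DT vehicle simulates, so as an illustration, the widely used Intelligent Driver Model (IDM) shows the kind of dynamics a DT platform would update at 15 Hz; all parameter values are assumed defaults, not the paper's settings:

```python
import math

def idm_accel(v, dv, gap, v0=15.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration for a following vehicle.
    v: own speed, dv: approach rate to the leader, gap: bumper gap.
    A standard car-following law, shown only to illustrate the kind of
    state a DT platform would exchange; parameters are assumed."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a * b))  # desired gap
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

At each 66.6 ms tick, the twin would integrate this acceleration and push the updated state to the physical vehicle.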

Journal ArticleDOI
TL;DR: There is a significant enhancement in the potential energy of the AGW ranging from 2 to 22 days prior to different earthquakes, and the conditions of geomagnetic disturbances, typhoons, and thunderstorms are examined to eliminate the possible contamination.
Abstract: Atmospheric disturbances caused by seismic activity are a complex phenomenon. The Lithosphere–Atmosphere–Ionosphere Coupling (LAIC) mechanism provides a detailed framework for understanding these processes and studying the possible impacts of a forthcoming earthquake. The atmospheric gravity wave (AGW) is one of the most accurate parameters for explaining the LAIC process, where seismogenic disturbances can be explained in terms of atmospheric waves caused by temperature changes. The key goal of this work is to study the perturbation in the potential energy associated with stratospheric AGWs prior to several large earthquakes. We select seven large earthquakes with Richter-scale magnitudes greater than seven (M > 7.0) in Japan (Tohoku and Kumamoto), Mexico (Chiapas), Nepal, and the Indian Ocean region to study the intensification of AGWs using the atmospheric temperature profile recorded by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument. We observe a significant enhancement in the potential energy of the AGW from 2 to 22 days prior to the different earthquakes. We examine the conditions of geomagnetic disturbances, typhoons, and thunderstorms during our study period and eliminate possible contamination due to these events.
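The abstract does not spell out how the AGW potential energy is computed; in SABER-based studies it is commonly estimated from the temperature perturbation T′ about the background T̄ as

```latex
E_p = \frac{1}{2}\,\frac{g^2}{N^2}\,\overline{\left(\frac{T'}{\bar{T}}\right)^2},
\qquad
N^2 = \frac{g}{\bar{T}}\left(\frac{\partial \bar{T}}{\partial z} + \frac{g}{c_p}\right),
```

where N is the Brunt–Väisälä frequency, g the gravitational acceleration, and c_p the specific heat at constant pressure. This formulation is a standard one in the literature, given here as context rather than taken from the paper.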

Journal ArticleDOI
TL;DR: This paper presents a blockchain-based system, built on the Ethereum network, that manages and secures the patient’s data in a single record held by the patient and facilitates the secure transfer of patient medical records.
Abstract: An electronic health record (EHR) is a technology that allows you to keep track of your health information. It keeps computerized records for several healthcare organizations; records are exchanged via enterprise-wide data systems and other networking technologies and exchanges. Patients nowadays expect immediate access to their health information. However, with immediate access to data come worries about the privacy and security of patients’ medical records. A blockchain-based solution can assist in resolving this issue. Blockchain has the potential to beat the conventional centralized system, which suffers from a severe lack of accessibility. It is a decentralized technology that has recently been presented to provide a new viewpoint on data security and system efficiency. This paper presents a blockchain-based system that manages and secures the patient’s data in a single record held by the patient. The system was developed on the Ethereum network using Ganache, together with languages, tools, and techniques such as Solidity and web3.js. The approach suggested in this paper uses this platform to store patients’ data and execute functions in a decentralized system using blockchain smart contracts. Once the smart contract has been deployed, transactions are communicated through it, providing security and privacy features. Furthermore, the transaction’s desired alterations can be verified and transmitted to the entire distributed network. A cryptocurrency wallet (MetaMask) serves as the entry point through which records can be quickly accessed and secured by authorized parties; doctors and patients access the system through the wallet. Moreover, all doctor and patient data are secured and managed through this system.
This proposed system is aimed at the following: blockchain technology allows users to obtain the same data at the same time, increasing efficiency, building credibility, and reducing barriers. It enables the secure storage of data by setting specific access rights for users. Additionally, the proposed system facilitates the secure transfer of patient medical records. Finally, this paper describes a health-record system and a new protocol that are quick and secure to use. It allows greater openness and ownership of sensitive data to be recorded and secured and also advances the healthcare sector with blockchain.
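The tamper-evidence such a system inherits from blockchain reduces to each record carrying the hash of its predecessor. The Python sketch below illustrates only that idea; the actual system uses Ethereum smart contracts written in Solidity, and the record fields here are hypothetical:

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a tamper-evident record: each entry stores the hash of
    its predecessor. Illustrative only -- a stand-in for the hash
    linkage an Ethereum chain provides, not the paper's contract."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash and check the prev-links; any edit to an
    earlier record breaks verification."""
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "payload": entry["payload"]}
        h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != h:
            return False
        prev = entry["hash"]
    return True
```

Altering any stored payload invalidates every later link, which is the property that makes the patient's single record auditable.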

Journal ArticleDOI
TL;DR: An ML-based model, Smart Crop Selection (SCS), is proposed based on data on meteorological and soil factors, including nitrogen, phosphorus, potassium, CO2, pH, EC, temperature, soil humidity, and rainfall; existing IoT-based systems are not as efficient as the proposed model.
Abstract: Today, farmers are suffering from low crop yields. Though right crop selection, through soil analysis and consideration of meteorological factors, is the main key to maximizing crop yield, the lack of knowledge about soil fertility and crop selection is the main reason for low crop production. In the changed current climate, farmers with only primitive knowledge of conventional farming face challenges in making sagacious decisions on crop selection, and selecting the same crop in every seasonal cycle lowers soil fertility. This study is aimed at building an efficient and accurate system using IoT devices and machine learning (ML) algorithms that can correctly select a crop for maximal yield. Such a system is reliable compared with the old manual laboratory-testing systems, which carry the chance of human error. Correct selection of a crop is a predominant priority in the agricultural arena. As a contribution, we propose an ML-based model, Smart Crop Selection (SCS), based on data on meteorological and soil factors. These factors include nitrogen, phosphorus, potassium, CO2, pH, EC, temperature, soil humidity, and rainfall. Existing IoT-based systems are not as efficient as our proposed model because they consider only a limited set of these factors. In the proposed model, real-time sensory data is sent to the Firebase cloud for analysis, and the results are also visualized in an Android app. SCS ensembles the following five ML algorithms to increase performance and accuracy: decision tree, SVM, KNN, random forest, and Gaussian naive Bayes. For rainfall prediction, a dataset containing historical data of the last fifteen years was acquired from the Bahawalpur Agricultural Department. This dataset and an ML algorithm, multiple linear regression, are leveraged to predict future rainfall, information much desired for the health of any crop. The root mean square error of the rainfall prediction model is 0.3%, which is quite promising. The SCS model is trained to predict 11 crops, with an accuracy of 97% to 98%.
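The abstract does not state how the five base models are combined, so as one plausible reading of the ensemble step, a simple majority vote over their predicted crop labels is sketched below; the model count and crop labels are illustrative:

```python
import numpy as np

def majority_vote(predictions):
    """Combine crop labels predicted by base models (e.g. decision tree,
    SVM, KNN, random forest, naive Bayes) by simple majority vote.
    `predictions` has one row per model, one column per sample.
    One plausible reading of the SCS ensemble, since the abstract
    does not state the combination rule."""
    out = []
    for col in np.array(predictions).T:          # one column per sample
        labels, counts = np.unique(col, return_counts=True)
        out.append(labels[np.argmax(counts)])    # most-voted label wins
    return out
```

Ties fall to the lexicographically first label here; a production ensemble might instead weight models by validation accuracy.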

Journal ArticleDOI
TL;DR: Based on the feature that rainfall changes the sea surface texture, a wave texture difference method for rainfall detection is proposed in this paper, where the polar coordinates of the radar image are converted into Cartesian coordinates to detect rainfall.
Abstract: To suppress the influence of rainfall when extracting sea surface wind and wave parameters using X-band marine radar, and to control the quality of the collected radar images, it is necessary to detect whether a radar image is contaminated by rainfall. The detection accuracy of statistical-characteristics methods (e.g., the zero-pixel percentage method and the high-clutter direction method) is limited and their thresholds are difficult to determine, while for machine learning methods (e.g., the support vector machine-based method and the neural network algorithm) it is difficult to select training data of appropriate quality and quantity. Therefore, based on the feature that rainfall changes the sea surface texture, a wave texture difference method for rainfall detection is proposed in this paper. Because rainfall is spatially uneven, the polar coordinates of the radar image are converted into Cartesian coordinates before detection. To express the maximum wave difference more accurately, the calculation of the pixels in the radar texture difference map is redefined. Then, a consecutive-pixel method is used to detect rainfall in the calculated radar texture difference map, and this method adapts its detection to changes in the wind. Data collected from the shore of Haitan Island along the East China Sea are used to validate the effectiveness of the proposed method. Compared with the zero-pixel percentage method and the support vector machine-based method, the experimental results demonstrate that the proposed method has better rainfall detection performance. In addition, research on the applicability of the proposed method shows that the wave texture difference method can accomplish rainfall detection in most marine environments.
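The polar-to-Cartesian conversion step can be sketched with a nearest-neighbour resampler. Grid size, interpolation scheme, and the range/azimuth conventions below are our own assumptions, not the paper's implementation:

```python
import numpy as np

def polar_to_cartesian(img_polar, out_size=128):
    """Resample a radar image stored as (range bin, azimuth bin) onto a
    Cartesian grid centred on the antenna, by nearest-neighbour lookup.
    An assumed, simplified version of the coordinate conversion step."""
    n_r, n_a = img_polar.shape
    c = (out_size - 1) / 2.0                         # grid centre
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - c, ys - c
    r = np.hypot(dx, dy) * (n_r - 1) / c             # radius -> range bin
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)    # azimuth in [0, 2*pi)
    a = theta * n_a / (2 * np.pi)                    # azimuth -> azimuth bin
    out = np.zeros((out_size, out_size))
    valid = r <= n_r - 1                             # inside max range only
    out[valid] = img_polar[np.round(r[valid]).astype(int) % n_r,
                           np.round(a[valid]).astype(int) % n_a]
    return out
```

Pixels beyond the maximum range stay zero, so the Cartesian image is the familiar circular radar disc.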

Journal ArticleDOI
TL;DR: In this article, the humidity-sensing properties of CuO-based moisture-sensitive materials are comprehensively summarized, focusing on effective measures to improve them, including surface modification and nanocomposites.
Abstract: Novel humidity sensors based on semiconducting metal oxides with good humidity-sensing properties have attracted extensive attention owing to their high sensitivity at room temperature, high safety, low hysteresis, and long-term stability. As a typical p-type semiconducting metal oxide, CuO is considered a high-performance moisture-sensitive material; however, as production develops, complex working environments place higher requirements on its humidity-sensing performance, especially sensitivity and stability. In this regard, researchers worldwide are working to improve the moisture-sensing properties of sensing elements. In this review, the humidity-sensing properties of CuO-based moisture-sensitive materials are comprehensively summarized, focusing on effective measures to improve them, including surface modification and nanocomposites. Future research on semiconducting metal oxide humidity-sensitive materials is also prospected.

Journal ArticleDOI
TL;DR: In this article, a multidimensional analysis and application of English teaching quality based on an artificial intelligence model is proposed; the results show that average English scores improve greatly under the new classroom quality monitoring mode based on Internet of Things technology compared with the traditional teaching mode.
Abstract: In view of the lack of teaching resources in English teaching and the impossibility of sharing and applying them in real time, this paper proposes a multidimensional analysis and application of English teaching quality based on an artificial intelligence model. This paper analyzes the basic framework and application of Internet of Things technology and puts forward a corresponding hierarchical classification and teaching quality monitoring mode. Secondly, the Internet of Things monitoring framework for English teaching quality is presented, and the basic theory of the Internet of Things is analyzed. Finally, experiments show that the average English score improves greatly under the new classroom quality monitoring mode based on Internet of Things technology compared with the traditional teaching mode. The main dimensions of teaching quality, such as teachers’ quality, teaching attitude, teaching content, and teaching methods, are analyzed in depth, and the coefficient of most sample data is more than 0.7, indicating a good application effect. Whether running on the test set or the mixed test set, the accuracy of the new classroom quality monitoring model proposed in this paper is the highest among the three models: the correct rate reaches 99.71% on the test set and 98.01% on the mixed test set, and the correct rate can reach 98.67%, which shows the superiority of the new classroom quality monitoring model.

Journal ArticleDOI
TL;DR: A dynamic programming algorithm based on feature matching, which uses the consistency and accuracy of feature matching to measure the similarity of two frames and then uses dynamic programming to find the optimal matching distance between two gesture sequences.
Abstract: In this paper, we use machine learning algorithms to conduct in-depth research and analysis on the construction of human-computer interaction systems and propose a simple and effective method for extracting salient features based on contextual information. The method retains the dynamic and static information of gestures intact, which results in a richer and more robust feature representation. Secondly, this paper proposes a dynamic programming algorithm based on feature matching, which uses the consistency and accuracy of feature matching to measure the similarity of two frames and then uses dynamic programming to find the optimal matching distance between two gesture sequences. The algorithm ensures the continuity and accuracy of the gesture description and makes full use of the spatiotemporal location information of the features. The features and limitations of common motion-target detection methods in motion gesture detection and of common machine learning tracking methods in gesture tracking are first analyzed; then, the kernel correlation filter method is improved by designing a confidence model and introducing a scale filter; finally, comparison experiments on a self-built gesture dataset verify the effectiveness of the improved method. During training and validation of the model on the corpus, the complementary feature extraction methods are studied through ablation, and the results are compared with three baseline methods. GMMs, although widely used in classification tasks, are not suitable when users want to model temporal structure. By using a kernel function, the support vector machine can transform the original input set into a high-dimensional feature space.
After experiments, the speech emotion recognition method proposed in this paper outperforms the baseline methods, proving the effectiveness of complementary feature extraction and the superiority of the deep learning model. Speech is used as the input of the system, emotion recognition is performed on the input speech, and the recognized emotion is successfully applied to the human-computer dialogue system in combination with an online speech recognition method, which proves that speech emotion recognition applied to the human-computer dialogue system has application research value.
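The dynamic-programming matching of two gesture sequences is, in its generic form, dynamic time warping. The sketch below shows the classic recurrence; the per-frame cost function is a placeholder for the paper's feature-matching similarity:

```python
import numpy as np

def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-programming (DTW) alignment cost between two
    sequences: D[i, j] = cost(i, j) + min of the three predecessors.
    The per-frame cost `dist` is a placeholder -- the paper measures
    frame similarity via feature-matching consistency instead."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may stretch either sequence, a gesture performed at a different speed still aligns with low cost, which is exactly why the paper matches whole sequences rather than frame pairs.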