
Showing papers in "Scientific Programming in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors provided a prediction method for the early identification of COVID-19 patients' outcomes based on patients' characteristics monitored at home, while in quarantine, and the results showed that RF outperformed the other classifiers with an accuracy of 0.95 and area under curve (AUC) of 0.99.
Abstract: The novel coronavirus (COVID-19) outbreak produced devastating effects on the global economy and the health of entire communities. Although the COVID-19 survival rate is high, the number of severe cases that result in death is increasing daily. A timely prediction of at-risk COVID-19 patients, combined with precautionary measures, is expected to increase the survival rate of patients and reduce the fatality rate. This research provides a prediction method for the early identification of a COVID-19 patient's outcome based on characteristics monitored at home while in quarantine. The study was performed using 287 COVID-19 patient samples from the King Fahad University Hospital, Saudi Arabia. The data were analyzed using three classification algorithms, namely, logistic regression (LR), random forest (RF), and extreme gradient boosting (XGB). Initially, the data were preprocessed using several preprocessing techniques. Furthermore, 10-fold cross-validation was applied for data partitioning and SMOTE for alleviating the data imbalance. Experiments were performed using twenty clinical features identified as significant for distinguishing surviving from deceased COVID-19 patients. The results showed that RF outperformed the other classifiers with an accuracy of 0.95 and an area under the curve (AUC) of 0.99. The proposed model can effectively assist decision-making and health care professionals through the early identification of at-risk COVID-19 patients. [ABSTRACT FROM AUTHOR] Copyright of Scientific Programming is the property of Hindawi Limited, and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
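The SMOTE step mentioned above is a standard recipe for imbalanced clinical data: it synthesizes new minority-class samples by interpolating between an existing sample and one of its nearest neighbours. A minimal pure-Python sketch of that interpolation idea (the toy data and parameters below are illustrative, not from the paper):

```python
import random

def smote_oversample(minority, n_new, k=2, seed=42):
    """Generate synthetic minority samples by interpolating between a
    sample and one of its k nearest neighbours (the core SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Toy minority class of 2-D points; create 4 synthetic samples.
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3)]
new_points = smote_oversample(minority, n_new=4)
```

Each synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside the region the minority data already occupies.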

49 citations


Journal ArticleDOI
TL;DR: In this article, the authors implemented a Face Mask and Social Distancing Detection model as an embedded vision system and evaluated the system performance in terms of precision, recall, F1-score, support, sensitivity, specificity, and accuracy that demonstrate the practical applicability.
Abstract: Since the infectious coronavirus disease (COVID-19) was first reported in Wuhan, it has become a public health problem in China and around the world. This pandemic is having devastating effects on societies and economies. The increase in the number of COVID-19 tests gives more information about the epidemic spread, which may lead to the possibility of containing it to prevent further infections. However, wearing a face mask that prevents the transmission of droplets in the air, maintaining an appropriate physical distance between people, and reducing close contact can still be beneficial in combating this pandemic. Therefore, this research paper focuses on implementing a Face Mask and Social Distancing Detection model as an embedded vision system. Pretrained models such as MobileNet, the ResNet classifier, and VGG are used in our context. People violating social distancing or not wearing masks were detected. After implementing and deploying the models, the selected one achieved a confidence score of 100%. This paper also provides a comparative study of different face detection and face mask classification models. The system performance is evaluated in terms of precision, recall, F1-score, support, sensitivity, specificity, and accuracy, which demonstrate its practical applicability. The system performs with an F1-score of 99%, sensitivity of 99%, specificity of 99%, and an accuracy of 100%. Hence, this solution tracks people with or without masks in a real-time scenario and ensures social distancing by generating an alarm if there is a violation in the scene or in public places. It can be used with existing embedded camera infrastructure to enable these analytics, which can be applied to various verticals, such as office buildings or airport terminals/gates.
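The precision/recall/F1 figures quoted above all derive from the same confusion-matrix counts. A small sketch of how those metrics are computed (the counts below are hypothetical, not the paper's):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical mask-detector counts: 99 masked faces found correctly,
# 1 false alarm, 1 missed face.
p, r, f1 = precision_recall_f1(tp=99, fp=1, fn=1)
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.99 0.99 0.99
```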

46 citations


Journal ArticleDOI
TL;DR: In this article, a systematic review was conducted to review the studies aiming to discover and classify leukemia by using machine learning, and the average accuracy of the ML methods applied in PBS image analysis to detect leukemia was >97%, indicating that the use of ML could lead to extraordinary outcomes in leukemia detection from PBS images.
Abstract: Introduction. The early detection and diagnosis of leukemia, i.e., the precise differentiation of malignant leukocytes with minimum cost in the early stages of the disease, is a major problem in the domain of disease diagnosis. Despite the high prevalence of leukemia, there is a shortage of flow cytometry equipment, and the methods available at laboratory diagnostic centers are time-consuming. Motivated by the capabilities of machine learning (ML) in disease diagnosis, the present systematic review was conducted to review the studies aiming to detect and classify leukemia by using machine learning. Methods. A systematic search in four databases (PubMed, Scopus, Web of Science, and ScienceDirect) and Google Scholar was performed via a search strategy using machine learning (ML), leukemia, peripheral blood smear (PBS) image, detection, diagnosis, and classification as the keywords. Initially, 116 articles were retrieved. After applying the inclusion and exclusion criteria, 16 articles remained as the population of the study. Results. This review study presents a comprehensive and systematic view of the status of all published ML-based leukemia detection and classification models that process PBS images. The average accuracy of the ML methods applied in PBS image analysis to detect leukemia was >97%, indicating that the use of ML could lead to extraordinary outcomes in leukemia detection from PBS images. Among all ML techniques, deep learning (DL) achieved higher precision and sensitivity in detecting different cases of leukemia compared to earlier methods. ML has many applications in analyzing different types of leukemia images, but the use of ML algorithms to detect acute lymphoblastic leukemia (ALL) has attracted the greatest attention in the fields of hematology and artificial intelligence. Conclusion.
Using the ML method to process leukemia smear images can improve accuracy, reduce diagnosis time, and provide faster, cheaper, and safer diagnostic services. In addition to the current diagnostic methods, clinical and laboratory experts can also adopt ML methods in laboratory applications and tools.

42 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used a synthetic minority oversampling technique (SMOTE) to handle imbalanced data when predicting heart attack, and the results showed that a properly tuned SMOTE-based artificial neural network outperformed all other models and many existing systems.
Abstract: Cardiac disease treatments often involve the acquisition and analysis of vast quantities of digital cardiac data. These data can be utilized for various beneficial purposes. Their utilization becomes even more important when dealing with critical diseases like a heart attack, where a patient's life is often at stake. Machine learning and deep learning are two famous techniques that help make raw data useful. Some of the biggest problems that arise from the usage of the aforementioned techniques are massive resource utilization, extensive data preprocessing, the need for feature engineering, and ensuring reliability in classification results. The proposed research work presents a cost-effective solution to predict heart attack with high accuracy and reliability. It uses a UCI dataset to predict heart attack via various machine learning algorithms without the involvement of any feature engineering. Moreover, the given dataset has an unequal distribution of positive and negative classes, which can reduce performance. The proposed work uses a synthetic minority oversampling technique (SMOTE) to handle this imbalance. The proposed system eliminates the need for feature engineering in the classification of the given dataset, which leads to an efficient solution, as feature engineering often proves to be a costly process. The results show that, among all machine learning algorithms, a SMOTE-based artificial neural network, when tuned properly, outperformed all other models and many existing systems. The high reliability of the proposed system ensures that it can be used effectively in the prediction of heart attack.

34 citations


Journal ArticleDOI
TL;DR: In this paper, a complex chaos-based Pseudorandom Number Generator (PRNG) and Modified Advanced Encryption Standard (MAES) is proposed to encrypt medical images.
Abstract: Securing medical images is a great challenge to protect medical privacy. An image encryption model founded on a complex chaos-based Pseudorandom Number Generator (PRNG) and a Modified Advanced Encryption Standard (MAES) is put forward in this paper. Our work consists of the following three main points. First, we propose the use of a complex PRNG based on two different chaotic systems, the 2D Logistic map over a complex set and Henon's system, in the key generation procedure. Second, in the 128-bit MAES, the subbytes operation is performed using four different S-boxes for more complexity. Third, both the shift-rows and mix-columns transformations are eliminated and replaced with a random permutation method, which increases the complexity. More importantly, only four rounds of encryption are performed in a loop, which significantly reduces the execution time. The overall system is implemented on the Altera Cyclone III board, which is completed with an SD card interface for medical image storage and a VGA interface for image display. The HPS software runs on μClinux and is used to control the FPGA encryption-decryption algorithm and image transmission. Experimental findings prove that the proposed map has a sufficiently large keyspace and that the proposed image encryption algorithm augments the entropy of the ciphered image compared to the AES standard and reduces the complexity time by 97%. The power consumption of the system is 13687 mW and the throughput is 134 Gbit/s. The proposed technique is compared to recent image cryptosystems, including hardware performance and different security analysis properties, such as randomness, sensitivity, and correlation of the encrypted images, and the results prove that our cryptographic algorithm is faster, more efficient, and can resist various kinds of attacks.
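The entropy claim above refers to Shannon entropy of the cipher-image byte histogram: a well-encrypted 8-bit image approaches the maximum of 8 bits per byte. A small sketch of that standard measurement (not the paper's code):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; an ideal cipher image
    approaches the 8-bit maximum."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A perfectly uniform byte distribution reaches the 8-bit maximum,
# while a constant image has zero entropy.
uniform = bytes(range(256)) * 4
print(round(shannon_entropy(uniform), 4))  # → 8.0
print(shannon_entropy(b"\x00" * 100))      # → 0.0
```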

31 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the power train design in Formula student race vehicles used in the famed SAE India championship and present a detailed design with an approach of easing manufacturing and assembly along with full-scale prototype manufacturing.
Abstract: This article describes the power train design specifics of Formula Student race vehicles used in the famed SAE India championship. To facilitate physical validation of the power train design of a Formula Student race car, a 610 cc displacement motorcycle engine (KTM 390 model) was chosen, and a detailed design has been proposed with an approach that eases manufacturing and assembly, along with full-scale prototype manufacturing. Many procedures must be followed while selecting a power train, such as choosing engine displacement, fuel type, cooling type, and throttle actuation, and designing the gear system to obtain the needed power and torque under various loading situations. Keeping the rules in mind, an engine well suited to the race track was selected, along with a transmission that gives maximum performance. Based on the requirements, the power train was designed with all necessary considerations. Aside from torque and power, the air intake was designed with fuel efficiency in mind. Wireless sensors and cloud computing were used to monitor transmission characteristics such as transmission temperature and vibration. The current study describes the design of an air intake manifold with a maximum restrictor diameter of 20 mm.
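The gear-system sizing mentioned above comes down to multiplying engine torque through the gear and final-drive ratios, minus driveline losses. A sketch of that calculation; every number below (torque, ratios, efficiency, wheel radius) is an illustrative assumption, not a figure from the paper:

```python
def wheel_torque(engine_torque_nm, gear_ratio, final_drive, efficiency=0.9):
    """Torque delivered at the wheel through one gear and the final drive,
    with a lumped driveline efficiency factor."""
    return engine_torque_nm * gear_ratio * final_drive * efficiency

def tractive_force(engine_torque_nm, gear_ratio, final_drive,
                   wheel_radius_m, efficiency=0.9):
    """Tractive force at the contact patch: wheel torque / wheel radius."""
    t = wheel_torque(engine_torque_nm, gear_ratio, final_drive, efficiency)
    return t / wheel_radius_m

# Assumed example values: 35 Nm engine torque, 2.833:1 first gear,
# 3.0:1 final drive, 0.26 m wheel radius.
t_wheel = wheel_torque(35.0, 2.833, 3.0)
force = tractive_force(35.0, 2.833, 3.0, 0.26)
```

Running the same calculation for each gear ratio gives the torque curve the designers check against the loading situations named in the abstract.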

30 citations


Journal ArticleDOI
TL;DR: In this paper, a new model for optimizing stock forecasting is proposed, which incorporates a range of technical indicators, including investor sentiment indicators and financial data, and performs dimension reduction on the many influencing factors of the retrieved stock price using LASSO and PCA approaches.
Abstract: Stock market prediction has always been an important research topic in the financial field. In the past, investors used traditional analysis methods such as K-line diagrams to predict stock trends, but with the progress of science and technology and the development of the market economy, the price trend of a stock is disturbed by various factors. Traditional analysis methods are far from able to extract the important information hidden in stock price fluctuations, so prediction accuracy is greatly reduced. In this paper, we design a new model for optimizing stock forecasting. We incorporate a range of technical indicators, including investor sentiment indicators and financial data, and perform dimension reduction on the many influencing factors of the retrieved stock price using LASSO and PCA approaches. In addition, a comparison of the performance of LSTM and GRU for stock market forecasting under various parameters was performed. Our experiments show that (1) both LSTM and GRU models can predict stock prices efficiently, with neither clearly better than the other, and (2) for the two different dimension reduction methods, both neural models using LASSO show better prediction ability than the models using PCA.
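The reason LASSO works as a dimension-reduction step here is its soft-thresholding operator, which shrinks small regression coefficients exactly to zero, dropping uninformative indicators. A minimal sketch of that operator (the coefficient values are illustrative):

```python
def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty: shrinks z toward zero and
    sets it exactly to zero when |z| <= lam -- the mechanism by which
    LASSO discards weakly informative features."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Coefficients whose magnitude falls below the threshold are zeroed
# out, i.e., the corresponding indicator is removed from the model.
coeffs = [2.5, -0.3, 0.8, -1.7]
selected = [soft_threshold(c, 1.0) for c in coeffs]
print(selected)  # → [1.5, 0.0, 0.0, -0.7]
```

PCA, by contrast, mixes all inputs into new components, which is one plausible reason the abstract reports LASSO-selected inputs being easier for the LSTM/GRU models to exploit.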

27 citations


Journal ArticleDOI
TL;DR: In this article, an interactive online English teaching system based on the Internet of Things (IoT) technology is proposed, and the teaching quality based on this system is evaluated with the help of an algorithm known as grey relational analysis algorithm.
Abstract: Nowadays, due to the pandemic and other problems, the establishment of physical classes is a major difficulty for both students and teachers, due to which education systems all over the world have shifted from physical to online delivery. Advanced technologies such as the Internet of Things (IoT) are playing a significant part in various sectors of life, such as health, business, and education. In order to effectively improve the effect of online English teaching, this study designed an interactive online English teaching system based on IoT technology. This study proposes three topological structures for the establishment of the proposed IoT-based online English teaching system. Based on the analysis of these three topological structures, this study designs each submodule of the front end and back end of the system in the networked IoT environment to realize the daily operation and various functions of the system and to realize the interactive design of both the teacher and student sides. Based on this approach, an online English teaching system is designed, and the teaching quality based on this system is evaluated with the help of the grey relational analysis algorithm. The experimental results show that, after the application of this system, students had access to the teaching materials and content in a short period of time, and their English test scores improved and were significantly higher compared to the traditional teaching system. In addition, the internal consistency reliability of the proposed system is very high, which fully demonstrates its effectiveness.
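The grey relational analysis used for the teaching-quality evaluation scores each candidate series by how closely it tracks a reference series. A compact sketch of the standard calculation, assuming the series are already normalised to a common scale (the data values are illustrative):

```python
def grey_relational_grades(reference, series, rho=0.5):
    """Grey relational grade of each comparison series against the
    reference: per-point coefficients (d_min + rho*d_max)/(d + rho*d_max)
    averaged over the series. rho is the distinguishing coefficient."""
    deltas = [[abs(r - x) for r, x in zip(reference, s)] for s in series]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

ref = [0.9, 0.8, 1.0, 0.7]            # ideal teaching-quality profile
candidates = [[0.9, 0.8, 1.0, 0.7],   # identical to the reference
              [0.5, 0.4, 0.6, 0.3]]   # consistently worse
g = grey_relational_grades(ref, candidates)
```

A series identical to the reference scores a grade of 1.0; more distant series score lower, which gives the ranking used in the evaluation.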

21 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed an adaptive spiral flying sparrow search algorithm (ASFSSA), which reduces the probability of getting stuck in local optima, has stronger optimization ability than other algorithms, and also finds the shortest and most stable path in robot path planning.
Abstract: The sparrow search algorithm is a new type of swarm intelligence optimization algorithm with good performance, but it still has shortcomings such as a tendency to fall into local optima and large randomness. In order to solve these problems, this paper proposes an adaptive spiral flying sparrow search algorithm (ASFSSA), which reduces the probability of getting stuck in local optima, has stronger optimization ability than other algorithms, and also finds the shortest and most stable path in robot path planning. First, tent mapping based on random variables is used to initialize the population, which makes the individual position distribution more uniform, enlarges the workspace, and improves the diversity of the population. Then, in the discoverer stage, the adaptive weight strategy is integrated with the Levy flight mechanism, and the fused search method becomes extensive and flexible. Finally, in the follower stage, a variable spiral search strategy is used to make the search scope of the algorithm more detailed and increase the search accuracy. The effectiveness of the improved algorithm ASFSSA is verified on 18 standard test functions. At the same time, ASFSSA is applied to robot path planning. The feasibility and practicability of ASFSSA are verified by comparing the algorithms on the raster-map planning routes of two models.
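The tent-map initialization mentioned above replaces plain uniform sampling with a chaotic sequence that spreads individuals more evenly over the search space. A sketch of the idea using the classic tent map (the guard against the map collapsing to zero and all parameter values are illustrative choices, not the paper's exact variant):

```python
import random

def tent_map_population(pop_size, dim, lower, upper, seed=1):
    """Initialise a population using the classic tent chaotic map
    x' = 2x (x < 0.5), 2(1 - x) otherwise, rescaled to [lower, upper]."""
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        x = rng.random()  # random starting point for this individual's orbit
        individual = []
        for _ in range(dim):
            x = 2 * x if x < 0.5 else 2 * (1 - x)
            if x <= 1e-12:          # guard: restart a collapsed orbit
                x = rng.random()
            individual.append(lower + x * (upper - lower))
        population.append(individual)
    return population

pop = tent_map_population(pop_size=30, dim=2, lower=-5.0, upper=5.0)
```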

20 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an energy-efficient water management platform (EEWMP), an improved version of SWAMP, which uses field-deployed sensors, sinks, fusion centres, and open-source clouds.
Abstract: Precision agriculture is now essential in today's world, especially for countries with limited water resources, fertile land, and enormous populations. Smart irrigation systems can help countries efficiently utilize fresh water and use the excess water for barren lands. The smart water management platform (SWAMP) is an IoT-based smart irrigation project designed for efficient freshwater utilization in agriculture. The primary aim of SWAMP is to auto-manage water reserves, distribution, and consumption at various levels, avoid over-irrigation and under-irrigation problems, and manage time to maximize production. This research proposes an energy-efficient water management platform (EEWMP), an improved version of SWAMP. EEWMP is an IoT-based smart irrigation system that uses field-deployed sensors, sinks, fusion centres, and open-source clouds. Both models' performance is evaluated in terms of energy consumption, network stability period, packets sent to the destination, and packet delivery ratio. The experimental results show that EEWMP consumes 30% less energy and doubles the network stability period compared with SWAMP. EEWMP can be used in different irrigation models, such as drip irrigation, sprinkler irrigation, surface irrigation, and lateral move irrigation, with subtle alterations. Moreover, it can also be used in small farms of third-world countries with their existing communication infrastructures such as 2G or 3G.

19 citations


Journal ArticleDOI
TL;DR: In this article, Jamous et al. used particle swarm optimization to predict the closing price of the shares traded on the stock market, allowing for the largest profit with the minimum risk.
Abstract: Since the declaration of COVID-19 as a pandemic, the world stock markets have suffered huge losses, prompting investors to limit or avoid these losses. The stock market was one of the businesses affected the most. At the same time, artificial neural networks (ANNs) have already been used for the prediction of closing prices in stock markets. However, a standalone ANN has several limitations, resulting in lower accuracy of the prediction results. Such limitations are resolved using hybrid models. Therefore, a combination of artificial neural networks and particle swarm optimization for efficient stock market prediction was reported in the literature. This method predicted the closing prices of the shares traded on the stock market, allowing for the largest profit with the minimum risk. Nevertheless, the results were not that satisfactory. In order to achieve prediction with a high degree of accuracy in a short time, a new improved method called PSOCoG is proposed in this paper. To design the neural network to minimize processing time and search time and maximize the accuracy of prediction, it is necessary to identify hyperparameter values with precision. PSOCoG has been employed to select the best hyperparameters in order to construct the best neural network. The created network was able to predict the closing price with high accuracy, and the proposed model ANN-PSOCoG showed that it could predict closing price values with an infinitesimal error, outperforming existing models in terms of error ratio and processing time. Using the S&P 500 dataset, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 13%, SPSOCOG by approximately 17%, SPSO by approximately 20%, and ANN by approximately 25%. Using the DJIA dataset, ANN-PSOCoG outperformed ANN-SPSO in terms of prediction accuracy by approximately 18%, SPSOCOG by approximately 24%, SPSO by approximately 33%, and ANN by approximately 42%.
In addition, the proposed model was evaluated under the effect of COVID-19. The results proved the ability of the proposed model to predict the closing price with high accuracy, as the values of MAPE, MAE, and RE were very small for the S&P 500, GOLD, NASDAQ-100, and CANUSD datasets. © 2021 Razan Jamous et al.
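The PSO machinery underlying methods like PSOCoG is simple: each particle is pulled toward its own best position and the swarm's global best. A minimal generic PSO sketch minimizing a test function (this is the textbook algorithm, not the paper's PSOCoG variant, and all parameter values are illustrative):

```python
import random

def pso_minimise(f, dim, bounds, n_particles=20, iters=300, seed=3,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimiser: inertia w plus cognitive (c1)
    and social (c2) pulls toward personal and global bests."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin.
best, best_val = pso_minimise(lambda p: sum(x * x for x in p),
                              dim=2, bounds=(-5.0, 5.0))
```

In hyperparameter-selection settings such as the one described above, `f` would instead train a small network and return its validation error.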

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a new stock price prediction model named CNN-BiLSTM-ECA, which combines a Convolutional Neural Network (CNN), a Bidirectional Long Short-Term Memory (BiLSTM) network, and an Attention Mechanism (AM).
Abstract: Financial data, as a kind of multimedia data, contain rich information and have been widely used for data analysis tasks. However, how to predict the stock price is still a hot research problem for investors and researchers in the financial field. Forecasting stock prices is an extremely challenging task due to the high noise, nonlinearity, and volatility of stock price time series data. In order to provide better prediction results, a new stock price prediction model named CNN-BiLSTM-ECA is proposed, which combines a Convolutional Neural Network (CNN), a Bidirectional Long Short-Term Memory (BiLSTM) network, and an Attention Mechanism (AM). More specifically, CNN is utilized to extract the deep features of stock data to reduce the influence of high noise and nonlinearity. Then, the BiLSTM network is employed to predict the stock price based on the extracted deep features. Meanwhile, a novel Efficient Channel Attention (ECA) module is introduced into the network model to further improve the sensitivity of the network to important features and key information. Finally, extensive experiments are conducted on three stock datasets: the Shanghai Composite Index, China Unicom, and CSI 300. Compared with existing methods, the experimental results verify the effectiveness and feasibility of the proposed CNN-BiLSTM-ECA network model, which can provide an important reference for investors to make decisions.

Journal ArticleDOI
TL;DR: In this article, a framework where smart contracts are used for insurance contracts and stored on blockchain is presented, which can solve all the trust and security issues that rely on a standard insurance policy.
Abstract: Traditional insurance policy settlement is a manual process that is never hassle-free. There are many issues, such as hidden conditions from the insurer or fraudulent claims by the insured, making the settlement process rough. This process also consumes a significant amount of time, which makes it very inefficient. This whole scenario can be disrupted by the implementation of blockchain and smart contracts in insurance. Blockchain and smart contract technology can provide immutable data storage, security, transparency, and authenticity whenever a transaction process is triggered. With the implementation of blockchain, the whole insurance process, from authentication to claim settlement, can be done with more transparency and security. A blockchain is a virtual chain of data blocks built on decentralized technology. Any transaction or change to the blocks is validated by decentralized validator entities rather than a single person. A smart contract is a unique facility stored on the blockchain that gets executed when the predetermined conditions are met. This paper presents a framework where smart contracts are used for insurance contracts and stored on a blockchain. In the case of a claim, if all the predetermined conditions are met, the transaction happens; otherwise, it is discarded. The conditions are immutable, which means there is no scope for alteration from either side. This blockchain and smart contract-based framework is hosted on a private Ethereum network. The Solidity programming language is used to create the smart contracts. The framework uses the Proof of Authority (PoA) consensus algorithm to validate transactions. In the case of a faulty transaction request, the consensus algorithm acts accordingly and cancels the claim. With blockchain and smart contract implementation, this framework can solve the trust and security issues that afflict a standard insurance policy.
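The settlement rule described above (all predetermined conditions met → pay out; otherwise discard) can be modelled in a few lines. This Python sketch illustrates the logic only; the actual framework uses Solidity on Ethereum, and the class, field names, and conditions below are hypothetical:

```python
class InsuranceClaimContract:
    """Toy model of the claim-settlement rule: a claim pays out only
    when every predetermined condition holds; the conditions are frozen
    at construction time, mimicking on-chain immutability."""

    def __init__(self, insured, payout, conditions):
        self._insured = insured
        self._payout = payout
        self._conditions = tuple(conditions)  # frozen at "deployment"

    def submit_claim(self, claim_facts):
        """Settle the claim only if all conditions evaluate true."""
        if all(cond(claim_facts) for cond in self._conditions):
            return {"to": self._insured, "amount": self._payout,
                    "status": "settled"}
        return {"status": "discarded"}

contract = InsuranceClaimContract(
    insured="insured-party-id",  # placeholder identifier
    payout=1000,
    conditions=[
        lambda c: c["policy_active"],
        lambda c: c["damage_verified"],
        lambda c: not c["fraud_flagged"],
    ],
)
ok = contract.submit_claim({"policy_active": True, "damage_verified": True,
                            "fraud_flagged": False})
bad = contract.submit_claim({"policy_active": True, "damage_verified": False,
                             "fraud_flagged": False})
```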

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper designed and implemented the wage forecasting model in human resources that uses a gradient descent algorithm, its types, and backpropagation (BP) neural network to improve the accuracy of the forecasting model.
Abstract: The economic environment has changed dramatically around the world in recent years, generating favorable conditions for the growth of small- and medium-sized firms. The socioeconomic development and international integration of China are greatly influenced by the growth in both quality and quantity, the scale of operations, and the internal force of small- and medium-sized businesses. Moreover, in comparison with other developed countries around the world, Chinese small- and medium-sized enterprises continue to face many limitations in terms of size and contribution levels and have not yet fully realized their potential due to various difficulties; human resources in this field are still lacking. This study defines the current state of human resources in small and medium firms, the factors that impede development, and the steps that can be taken to overcome these obstacles in order to assist human resource development in this sector during the current period. This study uses machine learning (ML) techniques to manage and analyze human resource data in modern enterprises. The ML techniques realize the functions of the human resource system and reduce the workload in human resources in order to improve the efficiency and management of human resource work. In this paper, we designed and implemented a wage forecasting model for human resources that uses a gradient descent algorithm and its variants together with a backpropagation (BP) neural network to improve the accuracy of the forecasting model. We performed multiple experiments using various numbers of neurons in the hidden layers, different numbers of iterations, and several types of gradient descent algorithms. The BP neural network model performed well, attaining a training accuracy of 89.98% and a validation accuracy of 84.05%. The experimental results show the significance and importance of the proposed work.
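The gradient descent update at the heart of BP training can be shown on the simplest possible "wage model", a single linear neuron fitted by batch gradient descent on squared error. The toy wage data and hyperparameters below are illustrative, not the paper's:

```python
def gradient_descent_fit(xs, ys, lr=0.02, epochs=3000):
    """Fit y = w*x + b by batch gradient descent on mean squared error:
    the same update rule backpropagation applies layer by layer."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        err = [(w * x + b) - y for x, y in zip(xs, ys)]
        grad_w = 2 / n * sum(e * x for e, x in zip(err, xs))
        grad_b = 2 / n * sum(err)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: wage grows roughly 2 units per year of experience from a
# base of about 30.
years = [1, 2, 3, 4, 5, 6]
wages = [32.1, 33.9, 36.2, 38.0, 39.8, 42.1]
w, b = gradient_descent_fit(years, wages)
```

Variants such as stochastic or mini-batch gradient descent change only how much data feeds each update, which is the kind of comparison the abstract's experiments describe.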

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an integration system of ideological and political multimedia network teaching resources based on wireless network, where several Radio Access Technologies (RATs) are collectively controlled and identify the different activities relating to the CRRM issues.
Abstract: The main purpose of integrating network media into the middle school ideological and political course is to better realize the moral education function of middle school ideological and political development. The use of resource integration technology to manage distributed teaching resources is conducive to improving the level of information construction in colleges and universities. This paper puts forward an integration system for ideological and political multimedia network teaching resources based on a wireless network. Firstly, it presents an overview of the Radio Resource Management (RRM) problem and the strategies proposed within the wireless network framework, where several Radio Access Technologies (RATs) are collectively controlled, and identifies the different activities relating to Common RRM (CRRM) issues. Secondly, this paper provides a summary of the RRM problem and the solutions proposed within the wireless network. Thirdly, the theoretical elements of the design and the teaching resource management assessment process are clarified by using XML as the data exchange carrier to realize the automatic integration of teaching resources. Fourthly, we used the dSPACE platform, calling its API functions, which in turn call the subscribe function of the dSPACE framework and store the subscription information in the database. Whenever new teaching resources enter a subscribed field, the dSPACE framework automatically sends an e-mail to the subscribers. Finally, during the operation of the system, all functions operate normally and with strong security, so as to avoid illegal theft or leakage of data about the education system.
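Using XML as the data exchange carrier, as described above, means each teaching resource is serialized to a small XML record that the receiving side parses back into fields. A sketch with Python's standard library (the record schema and field names below are illustrative, not the paper's):

```python
import xml.etree.ElementTree as ET

# Build a minimal resource record on the sending side.
resource = ET.Element("resource")
ET.SubElement(resource, "title").text = "Ideological and Political Course, Unit 3"
ET.SubElement(resource, "format").text = "video"
ET.SubElement(resource, "subscriber").text = "teacher@example.edu"
xml_payload = ET.tostring(resource, encoding="unicode")

# The receiving side parses the carrier and recovers the fields,
# e.g. to decide which subscribers get a notification e-mail.
parsed = ET.fromstring(xml_payload)
print(parsed.find("title").text)
print(parsed.find("format").text)  # → video
```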

Journal ArticleDOI
TL;DR: In this paper, a peer-to-peer encrypted system was used in conjunction with a smart contract to improve data security in traditional supply chain management systems and reduce the involvement of third parties in the supply chain system and improving data security.
Abstract: The manufacture of raw materials to deliver the product to the consumer in a traditional supply chain system is a manual process with insufficient data and transaction security. It also takes a significant amount of time, making the entire procedure lengthy. Overall, the undivided process is ineffective and untrustworthy for consumers. If blockchain and smart contract technologies are integrated into traditional supply chain management systems, data security, authenticity, time management, and transaction processes will all be significantly improved. Blockchain is a revolutionary, decentralized technology that protects data from unauthorized access. The entire supply chain management (SCM) will be satisfied with the consumer once smart contracts are implemented. The plan becomes more trustworthy when the mediator is contracted, which is doable in these ways. The tags employed in the conventional SCM process are costly and have limited possibilities. As a result, it is difficult to maintain product secrecy and accountability in the SCM scheme. It is also a common target for wireless attacks (reply attacks, eavesdropping, etc.). In SCM, the phrase “product confidentiality” is very significant. It means that only those who have been validated have access to the information. This paper emphasizes reducing the involvement of third parties in the supply chain system and improving data security. Traditional supply chain management systems have a number of significant flaws. Lack of traceability, difficulty maintaining product safety and quality, failure to monitor and control inventory in warehouses and shops, rising supply chain expenses, and so on, are some of them. The focus of this paper is on minimizing third-party participation in the supply chain system and enhancing data security. This improves accessibility, efficiency, and timeliness throughout the whole process. The primary advantage is that individuals will feel safer throughout the payment process. 
However, in this study, a peer-to-peer encrypted system was utilized in conjunction with smart contracts, along with a few other features. Because the system makes use of an immutable ledger, a hacker cannot gain access to it; even with access to the system, they would be unable to modify any data. If the goods are defective, the transaction will be halted and the customer will be reimbursed, with the seller receiving the merchandise back. By using cryptographic methods, transaction security becomes a feasible way of resolving these issues. Finally, this paper demonstrates how to maintain the method with the maximum level of safety, transparency, and efficiency.
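The halt-and-refund flow described in this abstract can be sketched as a toy state machine. This is an illustrative sketch under stated assumptions, not the paper's implementation: a real deployment would be a smart contract (e.g., in Solidity) on a blockchain, and the state names and method signatures here are invented for demonstration.

```python
class EscrowContract:
    """Toy model of the buy/confirm/refund flow the abstract describes.
    Payment is locked in escrow until the buyer confirms the goods; a
    defect report halts the trade and refunds the buyer."""

    def __init__(self, price):
        self.price = price
        self.state = "AWAITING_PAYMENT"
        self.buyer_balance = 0
        self.seller_balance = 0

    def deposit(self, amount):
        # Buyer locks the exact price into escrow; funds are not yet released.
        assert self.state == "AWAITING_PAYMENT" and amount == self.price
        self.escrow = amount
        self.state = "AWAITING_DELIVERY"

    def confirm_delivery(self):
        # Buyer accepts the goods: escrowed funds are released to the seller.
        assert self.state == "AWAITING_DELIVERY"
        self.seller_balance += self.escrow
        self.state = "COMPLETE"

    def report_defect(self):
        # Defective goods: the transaction halts and the buyer is reimbursed.
        assert self.state == "AWAITING_DELIVERY"
        self.buyer_balance += self.escrow
        self.state = "REFUNDED"
```

Because every transition checks the current state, the mediator role the abstract wants to remove is replaced by code: neither party can release or reclaim funds out of order.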

Journal ArticleDOI
TL;DR: In this article, a blended machine learning ensemble model developed from logistic regression, support vector machine, linear discriminant analysis, stochastic gradient descent, and ridge regression was used to predict if a news report is true or not.
Abstract: The exponential growth in fake news and its inherent threat to democracy, public trust, and justice has escalated the necessity for fake news detection and mitigation. Detecting fake news is a complex challenge as it is intentionally written to mislead and hoodwink. Humans are not good at identifying fake news: human detection is reported to be at a rate of 54%, with an additional 4% reported in the literature as speculative. The significance of fighting fake news is exemplified by the present pandemic. Consequently, social networks are ramping up the usage of detection tools and educating the public in recognising fake news. In the literature, it was observed that several machine learning algorithms have been applied to the detection of fake news with limited and mixed success. However, several advanced machine learning models are not being applied, although recent studies demonstrate the efficacy of the ensemble machine learning approach; hence, the purpose of this study is to assist in the automated detection of fake news. An ensemble approach is adopted to help resolve the identified gap. This study proposes a blended machine learning ensemble model developed from logistic regression, support vector machine, linear discriminant analysis, stochastic gradient descent, and ridge regression, which is then used on a publicly available dataset to predict if a news report is true or not. The proposed model is appraised against popular classical machine learning models, with performance metrics such as AUC, ROC, recall, accuracy, precision, and F1-score used to measure its performance. The results presented show that the proposed model outperformed the other popular classical machine learning models.
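The blending idea behind such an ensemble can be sketched in a few lines. This is a hedged illustration, not the paper's pipeline: the per-model probabilities below are hypothetical placeholders, and the abstract does not specify whether soft voting, hard voting, or stacking is used, so a simple soft-voting average is shown.

```python
def blend_predict(prob_by_model, threshold=0.5):
    """Soft-voting blend: average each base model's probability that the
    article is fake, then threshold the average into a label."""
    avg = sum(prob_by_model.values()) / len(prob_by_model)
    return ("fake" if avg >= threshold else "real"), avg

# Hypothetical outputs of the five base classifiers named in the abstract.
probs = {
    "logistic_regression": 0.81,
    "svm": 0.74,
    "lda": 0.66,
    "sgd": 0.58,
    "ridge": 0.71,
}
label, score = blend_predict(probs)  # average is 0.70, so label is "fake"
```

Averaging probabilities rather than majority-voting hard labels lets a confident model outvote several marginal ones, which is one reason blended ensembles often beat their individual members.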

Journal ArticleDOI
TL;DR: A Chaotic Biogeography-Based Optimization approach using Information Entropy (CBO-IE) is implemented to perform clustering over healthcare IoT datasets, and the main objective of CBO-IE is to provide proficient and precise data point distribution in datasets by using information entropy concepts and to initialize the population by using chaos theory as discussed by the authors.
Abstract: Data mining is utilized for a huge variety of applications in several fields such as education, medicine, surveillance, and industry. Clustering is an important method of data mining, in which data elements are divided into groups (clusters) to provide better-quality data analysis. Biogeography-Based Optimization (BO) is a recent metaheuristic approach that has been applied to resolve several complex optimization problems. Here, a Chaotic Biogeography-Based Optimization approach using Information Entropy (CBO-IE) is implemented to perform clustering over healthcare IoT datasets. The main objective of CBO-IE is to provide proficient and precise data point distribution in datasets by using information entropy concepts and to initialize the population by using chaos theory. Both information entropy and chaos theory improve the convergence speed of BO in the global search space for selecting cluster heads and cluster members more accurately. CBO-IE is implemented in MATLAB 2021a on eight healthcare IoT datasets, and the results illustrate its superior performance in terms of F-measure, intracluster distance, running time complexity, purity index, statistical analysis, root mean square error, accuracy, and standard deviation as compared to previous clustering techniques such as K-Means, GA, PSO, ALO, and BO.
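The chaos-theory population initialization that CBO-IE uses can be sketched with a logistic map. This is an assumption-laden illustration (the paper is MATLAB-based and its map parameters are unspecified): the choice of the logistic map with r = 4.0 and seed x0 = 0.7, and the flat habitat encoding, are placeholders.

```python
def logistic_map_population(pop_size, dim, low, high, x0=0.7, r=4.0):
    """Seed each habitat (candidate solution) from the chaotic sequence
    x_{n+1} = r * x_n * (1 - x_n), scaled into [low, high]. For r = 4 the
    sequence stays in (0, 1) and spreads widely, giving better initial
    coverage of the search space than a poorly seeded uniform draw."""
    x = x0
    population = []
    for _ in range(pop_size):
        habitat = []
        for _ in range(dim):
            x = r * x * (1.0 - x)                 # chaotic update
            habitat.append(low + (high - low) * x)  # scale into bounds
        population.append(habitat)
    return population

pop = logistic_map_population(pop_size=5, dim=3, low=-1.0, high=1.0)
```

The resulting habitats would then be evolved by the usual BO migration and mutation operators.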

Journal ArticleDOI
TL;DR: Based on the big data of historical Tai Chi classrooms, this article constructs an interactive classroom system that can effectively improve the quality of Tai Chi ideological and political courses.
Abstract: Tai Chi martial arts education is one of the components of school education. Its educational value lies not only in requiring students to master basic Tai Chi movement skills and improve their physical fitness but also in advancing students’ ideological development and cultivating respect for teachers and lectures. It also fosters excellent moral qualities such as politeness, keeping promises, observing the rules, and acting bravely, as well as a spirit of unity and cooperation and strength of will. However, there is a lack of research on scientific Tai Chi ideological and political courses and the construction of Wude (martial virtue) education interactive classrooms. Therefore, this article builds a Tai Chi ideological and political interactive classroom system based on big data technology and graph neural networks. First, a spatio-temporal graph convolutional neural network is used to reason about relationships between Tai Chi action categories and to strengthen the low-dimensional features of semantic categories and their co-occurrence expressions, which are used to semantically enhance the current image features. In addition, to ensure the efficiency of the Tai Chi scene analysis network, an efficient dual feature extraction basic module is proposed to construct the backbone network, reducing the number of parameters of the entire network and its computational complexity. Experiments show that this method obtains comparable results while reducing floating-point operations by 42.5% and parameters by 50.2% compared with contemporaneous work, achieving a better balance of efficiency and performance. Second, based on big data from historical Tai Chi classrooms, this article constructs an interactive classroom system that can effectively improve the quality of Tai Chi ideological and political courses.

Journal ArticleDOI
Peng Wang1
TL;DR: In this paper, the recognition of sports training action based on deep learning algorithm has been studied and experimental work has been carried out in order to show the validity of the proposed research.
Abstract: With the rapid development of science and technology in today’s society, various industries are pursuing digitization and intelligence, and pattern recognition and computer vision are constantly undergoing technological innovation. Computer vision lets computers, cameras, and other machines receive information like human beings, analyze and process its semantic content, and devise coping strategies. As an important research direction in the field of computer vision, human motion recognition has found new solutions with the gradual rise of deep learning. Human motion recognition technology has high market value and broad application prospects in intelligent monitoring, motion analysis, human-computer interaction, and medical monitoring. This paper studies the recognition of sports training actions based on a deep learning algorithm, and experimental work has been carried out to show the validity of the proposed research.

Journal ArticleDOI
TL;DR: In this article, a music teaching evaluation model based on the weighted Naive Bayes algorithm is proposed, and a weighted Bayesian classification incremental learning approach is employed to improve the efficiency of the evaluation system.
Abstract: Evaluation of music teaching is a highly subjective task often depending upon experts to assess both the technical and artistic characteristics of performance from the audio signal. This article explores the task of building computational models for evaluating music teaching using machine learning algorithms. As one of the widely used methods to build classifiers, the Naive Bayes algorithm has become one of the most popular music teaching evaluation methods because of its strong prior knowledge, learning features, and high classification performance. In this article, we propose a music teaching evaluation model based on the weighted Naive Bayes algorithm. Moreover, a weighted Bayesian classification incremental learning approach is employed to improve the efficiency of the music teaching evaluation system. Experimental results show that the algorithm proposed in this paper is superior to other algorithms in the context of music teaching evaluation.
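The weighted Naive Bayes idea can be sketched in pure Python. This is a minimal illustration under assumptions: the abstract does not give the weighting scheme, so here each feature's Gaussian log-likelihood is simply scaled by a per-feature weight, and the two performance features and class parameters are hypothetical.

```python
import math

def gaussian_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian, used as the per-feature likelihood."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def weighted_nb_predict(x, class_params, priors, weights):
    """Pick the class maximizing log prior + sum of weighted feature
    log-likelihoods; class_params[c] is a list of (mean, var) per feature."""
    best_class, best_score = None, -math.inf
    for c, params in class_params.items():
        score = math.log(priors[c])
        for xi, (mean, var), w in zip(x, params, weights):
            score += w * gaussian_logpdf(xi, mean, var)  # weighted feature term
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Hypothetical two-class setup: features are (overall score, pitch accuracy).
class_params = {
    "pass":      [(60.0, 100.0), (0.4, 0.04)],
    "excellent": [(90.0, 25.0),  (0.8, 0.01)],
}
priors = {"pass": 0.6, "excellent": 0.4}
weights = [1.0, 2.0]  # pitch accuracy weighted up, as an informative feature
label = weighted_nb_predict([88.0, 0.78], class_params, priors, weights)
```

Setting all weights to 1.0 recovers plain Naive Bayes, which makes the weighted variant easy to compare against a baseline.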

Journal ArticleDOI
TL;DR: In this article, an image segmentation algorithm for concrete cracks based on a convolutional neural network is studied, and an end-to-end segmentation model based on ResNet101 is designed.
Abstract: Cracks are an early expression of concrete pavement disease, and their early discovery and treatment can play an important role in pavement maintenance. With ongoing advancements in computer hardware technology and the continual optimization of deep learning algorithms, automated crack detection based on deep learning is more exact and more robust than standard digital image processing, so the study of concrete pavement crack images has become popular. In view of the poor performance and weak generalization ability of traditional image processing techniques for segmenting concrete crack images, this paper studies an image segmentation algorithm for concrete cracks based on a convolutional neural network and designs an end-to-end segmentation model based on ResNet101. It integrates more low-level features, which make the crack segmentation results more refined and closer to practical application scenarios. Compared with other methods, the algorithm in this paper achieves higher detection accuracy and generalization ability.

Journal ArticleDOI
TL;DR: In this article, an IoT centric cyber-physical twin architecture has been proposed for 6G technology, which helps out in serving stronger communication and also contains several features that help out in assisting communication like maintaining a log record of network data and managing all digital assets like images, audio, video, and so forth.
Abstract: With the rapid growth of the Internet of Everything, there is a huge rise in portable Internet traffic, and its associated resources face exceptional obstacles, including reliability, security, expandability, and portability, that currently available network architectures are unable to deal with. In this paper, an IoT-centric cyber-physical twin architecture is proposed for 6G technology. The cyber-twin technology helps serve stronger communication and also contains several features that assist communication, such as maintaining a log record of network data and managing all digital assets, such as images, audio, video, and so forth. These features of the cyber-twin technology enable the proposed network to deal with those exceptional obstacles and make the system more reliable, safe, workable, and adaptable.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors combined 1D convolution, gating mechanism, residual connection, and attention mechanism and proposed a music feature extraction and classification model based on convolutional neural network, which can extract more relevant sound spectrum characteristics of the music category.
Abstract: With the rapid development of information technology and communication, digital music has grown and exploded. Music feature extraction and classification, which determine how quickly and accurately users can retrieve the music they want from a huge music repository, are an important part of music information retrieval and have become a research hotspot in recent years. Traditional music classification approaches use a large number of artificially designed acoustic features, whose design requires knowledge and in-depth understanding of the music domain; the features for different classification tasks are often neither universal nor comprehensive. Existing approaches have two shortcomings: manually extracted features cannot ensure validity and accuracy, and traditional machine learning classifiers neither perform well on multiclassification problems nor scale to training on large-scale data. Therefore, this paper converts the audio signal of music into a sound spectrum as a unified representation, avoiding the problem of manual feature selection. According to the characteristics of the sound spectrum, the research combines 1D convolution, a gating mechanism, residual connections, and an attention mechanism and proposes a music feature extraction and classification model based on a convolutional neural network, which can extract sound spectrum characteristics more relevant to the music category. Finally, this paper designs comparison and ablation experiments. The experimental results show that this approach outperforms traditional manual models and machine learning-based approaches.
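The gated 1D convolution with a residual connection that this model combines can be sketched in miniature. This is a hedged, pure-Python illustration: the kernel values are placeholders rather than trained weights, and the GLU-style gate (content path multiplied by a sigmoid gate path) is one common reading of "1D convolution plus gating mechanism", not necessarily the paper's exact block.

```python
import math

def conv1d(seq, kernel):
    """Valid-mode 1D convolution (cross-correlation) over a list of floats."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def gated_residual_block(seq, kernel_a, kernel_b):
    """out = trimmed_input + conv_a(seq) * sigmoid(conv_b(seq)):
    the gate decides how much of each convolved feature passes through,
    and the residual connection keeps the original signal reachable."""
    a = conv1d(seq, kernel_a)   # content path
    b = conv1d(seq, kernel_b)   # gate path
    gated = [ai * (1.0 / (1.0 + math.exp(-bi))) for ai, bi in zip(a, b)]
    offset = (len(seq) - len(gated)) // 2
    trimmed = seq[offset:offset + len(gated)]   # align lengths for residual
    return [t + g for t, g in zip(trimmed, gated)]

out = gated_residual_block([0.1, 0.5, 0.3, 0.9, 0.2],
                           kernel_a=[0.2, 0.5, 0.2],
                           kernel_b=[1.0, -1.0, 1.0])
```

In a real network these blocks would be stacked over the sound spectrum's frequency bins, with learned kernels and an attention layer on top.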

Journal ArticleDOI
TL;DR: In this paper, a WBIETS system is combined with the traditional English teaching system to expand the system function, analyzes the needs of the English network teaching system, and constructs the system functions modules and logical structure.
Abstract: The traditional English teaching system has certain problems in the acquisition of teaching resources and the innovation of teaching models. In order to improve the effect of subsequent English online teaching, this paper improves the machine learning algorithm to make it a core algorithm that can be used by artificial intelligence systems. Moreover, this paper combines the WBIETS system to expand the system function, analyzes the needs of the English network teaching system, and constructs the system function modules and logical structure. The data layer, logic layer, and presentation layer in the system constructed in this paper are independent of each other and can be effectively expanded when subsequent requirements change. In addition, this paper solves the problem of acquiring English teaching resources through the WBIETS system. To evaluate the performance of the English network teaching system, this paper performs comprehensive mathematical and experimental analysis. The experimental results show that the system constructed in this paper basically meets the actual teaching requirements.

Journal ArticleDOI
TL;DR: In this article, the authors diagnose changes caused by the influence of the COVID-19 pandemic on sustainable tourism by analysing the case of national parks in Poland and provide recommendations for national parks, which can be helpful in achieving sustainable tourism.
Abstract: Tourist attractiveness of many areas in Poland is based on exceptional natural values, especially those protected by national parks. Recreation opportunities offered by national parks proved to be important during the COVID-19 pandemic, when the conditions for tourism changed. Many tourists gave up previously planned trips abroad in favour of staying in Poland. This raises the question of whether tourists visiting national parks during the pandemic rested in compliance with the principles of sustainable tourism. The article is an attempt to diagnose changes caused by the influence of the COVID-19 pandemic on sustainable tourism by analysing the case of national parks in Poland. The article presents: 1) a diagnosis of changes triggered at different stages of restrictions introduced by the government to prevent the spread of the pandemic; 2) a forecast of how the pandemic may affect the development of tourism in terms of supply (tourist companies, hotels, catering, attractions) and demand (tourists). The summary provides recommendations for national parks, which can be helpful in achieving sustainable tourism.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors collected categorical ETM+ and OLI data from 2000, 2010, and 2019 on the mainland coastline and explored the characteristics and spatiotemporal differences across the past 19 years by using remote sensing and geographic information system (GIS) technologies.
Abstract: This study focuses on the coastal features, environments, and dynamics to accurately describe and regularly monitor the Qingdao shoreline in eastern China. It collects categorical ETM+ and OLI data from 2000, 2010, and 2019 on the mainland coastline and explores the characteristics and spatiotemporal differences across the past 19 years by using remote sensing and geographic information system (GIS) technologies. The results show that the length of the Qingdao coastline has increased continuously over the last two decades, for a total increase of 18.14 km. Both natural and artificial coastlines have undergone major changes: human-induced alteration of the coastline rose gradually and substantially from 53.63% in 2000 to 68.40% in 2019, while the length of the natural coastline decreased dramatically. Significantly changing coastlines are concentrated around Jiaozhou Bay, and major changes have occurred in the west and east of the Qingdao coast. The coastline has largely expanded seaward because of the combined impact of natural and anthropogenic factors. The leading factor in coastal evolution is coastal engineering construction; in addition, the top three other construction activities are the restoration of aquaculture ponds, salt fields, and harbor edifices. The driving forces that triggered the shift in the coastline show significant temporal heterogeneity.

Journal ArticleDOI
TL;DR: In this article, both the simple and parallel clustering techniques are implemented and analyzed to point out their best features to improve the academic performance of students in the field of education, and the proposed study is more useful for scientific research data sorting.
Abstract: Context. Educational Data Mining (EDM) is a new and emerging research area. Data mining techniques are used in the educational field to extract useful information on employee or student progress behaviors. The recent increase in the availability of learning data has given importance and momentum to educational data mining to better understand and optimize the learning process and the environments in which it takes place. Objective. Data are the most valuable commodity for any organization, and it is very difficult to extract useful information from such a large and massive collection of data. Data mining techniques are used to forecast and evaluate the academic performance of students based on their academic record and participation in forums. Although several studies have been carried out to evaluate the academic performance of students worldwide, there is a lack of appropriate studies assessing the factors that can boost academic performance. Methodology. The current study sought to weigh up the factors that contribute to improving student academic performance in Pakistan. In this paper, both simple and parallel clustering techniques are implemented and analyzed to point out their best features. The parallel K-Means algorithm overcomes the problems of the simple algorithm, and its outcomes are always the same, which improves cluster quality, the number of iterations, and elapsed time. Results. Both algorithms are tested and compared with each other on datasets of 10,000 and 5,000 integer data items. Each dataset is evaluated 10 times, varying the K value from 1 to 10 and recording the minimum elapsed time. The proposed study is particularly useful for sorting scientific research data, making the resulting statistics more accurate.
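The part of K-Means that parallelizes naturally is the assignment step, since each point's nearest centroid is independent of the others. The sketch below illustrates that idea under assumptions: the study's actual parallelization strategy is not described, and this version simply splits a 1-D dataset into chunks whose assignments are computed concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def nearest_centroid(point, centroids):
    """Index of the closest centroid to a 1-D point (squared distance)."""
    return min(range(len(centroids)),
               key=lambda c: (point - centroids[c]) ** 2)

def parallel_assign(points, centroids, workers=4):
    """Assignment step of K-Means with the dataset split into chunks
    that are labelled concurrently; results keep the input order."""
    chunk = max(1, len(points) // workers)
    chunks = [points[i:i + chunk] for i in range(0, len(points), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(
            lambda ps: [nearest_centroid(p, centroids) for p in ps], chunks)
    return [label for part in results for label in part]

labels = parallel_assign([1.0, 2.0, 9.0, 10.0, 1.5, 9.5],
                         centroids=[1.5, 9.5])
```

Because assignment is a pure function of the point and the fixed centroids, the parallel version is deterministic, matching the abstract's observation that the parallel algorithm's outcomes are always the same.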

Journal ArticleDOI
Desheng Liu1, Hang Zhen1, Dequan Kong1, Xiaowei Chen1, Lei Zhang1, Mingrun Yuan1, Hui Wang1 
TL;DR: Aiming at solving network delay caused by large chunks of data in industrial Internet of Things, a data compression algorithm based on edge computing is creatively put forward in this paper, which greatly reduces the amount of data transmission under the premise of ensuring the instantaneity and effectiveness of data.
Abstract: Aiming at solving network delay caused by large chunks of data in the industrial Internet of Things, a data compression algorithm based on edge computing is creatively put forward in this paper. The data collected by sensors are handled in advance and then processed with different single-packet quantities K and error thresholds e in multiple groups of comparative experiments, which greatly reduces the amount of data transmission under the premise of ensuring the instantaneity and effectiveness of the data. On the basis of compression processing, an outlier detection algorithm based on isolation forest is proposed, which can accurately identify anomalies caused by both gradual and sudden change and can control and adjust the action of equipment to meet control requirements. As shown by experimental simulation, the partition-based isolation forest algorithm outperforms the box-plot and distance-based K-means clustering algorithms in anomaly detection, which verifies its feasibility and advantages in data compression and detection accuracy.
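One simple edge-side compression rule built from the two parameters the abstract names, a packet quantity K and an error threshold e, can be sketched as a dead-band filter. This is an assumed reading for illustration, not the paper's exact algorithm: a sample is transmitted only if it deviates from the last transmitted value by more than e, or if K consecutive samples have gone unsent, which bounds both the reconstruction error and the reporting latency.

```python
def compress(samples, e=0.5, K=4):
    """Return the (index, value) pairs that would actually be transmitted.
    A sample is sent when it differs from the last sent value by more
    than e, or when K - 1 samples in a row have been skipped."""
    sent = [(0, samples[0])]   # always transmit the first sample
    skipped = 0
    for i, v in enumerate(samples[1:], start=1):
        if abs(v - sent[-1][1]) > e or skipped >= K - 1:
            sent.append((i, v))
            skipped = 0
        else:
            skipped += 1        # within the dead band: hold back
    return sent

# A slowly drifting temperature trace with two jumps: only 4 of 9
# samples need to be transmitted.
sent = compress([20.0, 20.1, 20.2, 23.0, 23.1, 23.2, 23.1, 23.0, 26.5])
```

Downstream, the isolation-forest detector would then run on the (much smaller) transmitted stream, with the jumps preserved because any change larger than e is always sent.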

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a 3D CU-Net to automatically identify COVID-19 infected areas from 3D chest CT images by extracting rich features and fusing multiscale global information.
Abstract: Coronavirus disease 2019 (COVID-19) has spread rapidly worldwide. The rapid and accurate automatic segmentation of COVID-19 infected areas using chest computed tomography (CT) scans is critical for assessing disease progression. However, infected areas have irregular sizes and shapes. Furthermore, there are large differences between image features. We propose a convolutional neural network, named 3D CU-Net, to automatically identify COVID-19 infected areas from 3D chest CT images by extracting rich features and fusing multiscale global information. 3D CU-Net is based on the architecture of 3D U-Net. We propose an attention mechanism for 3D CU-Net to achieve local cross-channel information interaction in an encoder to enhance different levels of the feature representation. At the end of the encoder, we design a pyramid fusion module with expanded convolutions to fuse multiscale context information from high-level features. The Tversky loss is used to resolve the problems of the irregular size and uneven distribution of lesions. Experimental results show that 3D CU-Net achieves excellent segmentation performance, with Dice similarity coefficients of 96.3% and 77.8% in the lung and COVID-19 infected areas, respectively. 3D CU-Net has high potential to be used for diagnosing COVID-19.
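The Tversky loss that 3D CU-Net uses for small, unevenly distributed lesions can be written out directly. This pure-Python sketch uses illustrative alpha and beta values, not the paper's settings: the loss generalizes the Dice loss by weighting false positives and false negatives differently, so beta > alpha penalizes missed lesion voxels more heavily.

```python
def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """pred: predicted foreground probabilities per voxel (flattened);
    target: matching 0/1 ground-truth labels. alpha weights false
    positives, beta weights false negatives; alpha = beta = 0.5
    recovers the Dice loss."""
    tp = sum(p * t for p, t in zip(pred, target))          # true positives
    fp = sum(p * (1 - t) for p, t in zip(pred, target))    # false positives
    fn = sum((1 - p) * t for p, t in zip(pred, target))    # false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

# Four voxels: two lesion voxels predicted well, two background voxels
# with small false-positive probability.
loss = tversky_loss([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
```

In a real training loop this would be computed on tensors over the whole 3D volume, but the term-by-term structure is the same.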