
Showing papers in "Complexity in 2021"


Journal ArticleDOI
TL;DR: In this paper, a decision-tree-based approach is proposed to enhance trust management by exploring the decision tree model in the area of IDS; such a model can be easily read and even resembles a human approach to decision-making by splitting a choice into many small subchoices.
Abstract: Despite the growing popularity of machine learning models in cyber-security applications (e.g., an intrusion detection system (IDS)), most of these models are perceived as a black box. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In an IDS, the critical role of trust management is to understand the impact of malicious data in order to detect any intrusion in the system. Previous studies focused more on the accuracy of various classification algorithms for trust in IDS; they do not often provide insights into the behavior and reasoning of the underlying algorithm. Therefore, in this paper, we address the XAI concept to enhance trust management by exploring the decision tree model in the area of IDS. We use simple decision tree algorithms that can be easily read and even resemble a human approach to decision-making by splitting the choice into many small subchoices for IDS. We experimented with this approach by extracting rules from the widely used KDD benchmark dataset. We also compared the accuracy of the decision tree approach with other state-of-the-art algorithms.
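The core idea, training a shallow and therefore human-readable tree and extracting its if/then rules, can be sketched as follows. This is a minimal illustration, not the authors' code: the CSV path, label column, and tree depth are assumptions.

# Minimal sketch: fit a shallow decision tree on intrusion records and print its rules.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("kdd_subset.csv")              # hypothetical preprocessed KDD slice
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)   # shallow => readable
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=list(X.columns)))      # the extracted if/then rules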

89 citations


Journal ArticleDOI
TL;DR: A deep CNN architecture has been proposed in this paper for the diagnosis of COVID-19 based on the chest X-ray image classification, which achieved an overall accuracy as high as 99.5%.
Abstract: Artificial intelligence (AI) techniques in general and convolutional neural networks (CNNs) in particular have attained successful results in medical image analysis and classification. A deep CNN architecture has been proposed in this paper for the diagnosis of COVID-19 based on chest X-ray image classification. Due to the nonavailability of a sufficient-size and good-quality chest X-ray image dataset, effective and accurate CNN classification was a challenge. To deal with these complexities, such as the availability of a very small and imbalanced dataset with image-quality issues, the dataset has been preprocessed in different phases using different techniques to achieve an effective training dataset for the proposed CNN model to attain its best performance. The preprocessing stages of the datasets performed in this study include dataset balancing, medical experts' image analysis, and data augmentation. The experimental results have shown an overall accuracy as high as 99.5%, which demonstrates the good capability of the proposed CNN model in the current application domain. The CNN model has been tested in two scenarios. In the first scenario, the model has been tested using 100 X-ray images of the original processed dataset, which achieved an accuracy of 100%. In the second scenario, the model has been tested using an independent dataset of COVID-19 X-ray images. The performance in this test scenario was as high as 99.5%. To further prove that the proposed model outperforms other models, a comparative analysis has been done with some of the machine learning algorithms. The proposed model has outperformed all the models generally and specifically when the model testing was done using an independent testing set. © 2021 Aijaz Ahmad Reshi et al.
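The pipeline of augmentation followed by a small CNN with dropout can be illustrated with the sketch below. It is only a hedged example of the general approach: the image size, directory layout, layer sizes, and hyperparameters are assumptions, not the paper's architecture.

# Illustrative sketch: binary chest X-ray classification with on-the-fly augmentation.
import tensorflow as tf

augment = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=10, zoom_range=0.1, horizontal_flip=True)
train = augment.flow_from_directory("xray/train", target_size=(224, 224),
                                    color_mode="grayscale", class_mode="binary")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # COVID-19 vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=10)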

71 citations


Journal ArticleDOI
TL;DR: The results provide an overview of AI used in the education domain, which helps to strengthen the theoretical foundation of AI in education and provides a promising channel for educators and AI engineers to carry out further collaborative research.
Abstract: This study provided a content analysis of studies aiming to disclose how artificial intelligence (AI) has been applied to the education sector and explore the potential research trends and challenges of AI in education. A total of 100 papers including 63 empirical papers (74 studies) and 37 analytic papers were selected from the education and educational research category of the Social Sciences Citation Index database from 2010 to 2020. The content analysis showed that the research questions could be classified into a development layer (classification, matching, recommendation, and deep learning), an application layer (feedback, reasoning, and adaptive learning), and an integration layer (affection computing, role-playing, immersive learning, and gamification). Moreover, four research trends, including Internet of Things, swarm intelligence, deep learning, and neuroscience, as well as an assessment of AI in education, were suggested for further investigation. However, we also discussed the challenges that AI may pose in education with regard to the inappropriate use of AI techniques, the changing roles of teachers and students, as well as social and ethical issues. The results provide an overview of AI used in the education domain, which helps to strengthen the theoretical foundation of AI in education and provides a promising channel for educators and AI engineers to carry out further collaborative research.

63 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented an experimental study of cryptographic algorithms that classifies encryption algorithms into symmetric and asymmetric types, and assessed the guessing attack in real-time deep learning complex IoT applications.
Abstract: As the world keeps advancing, the need for automated interconnected devices has started to gain significance; to cater to this, a new concept, the Internet of Things (IoT), has been introduced that revolves around the conception of smart devices. These smart devices using IoT can communicate with each other through a network to attain particular objectives, i.e., automation and intelligent decision making. IoT has enabled users to divide their household burden with machines, as these complex machines look after the environment variables and control their behavior accordingly. As evident, these machines use sensors to collect vital information, which is then analyzed at a computational node that smartly controls these devices' operational behaviors. Deep learning-based guessing attack protection algorithms have been enhancing IoT security; however, this remains a critical challenge for the IoT networks of complex industries. One of the crucial aspects of such systems is the need for significant training time to process a large dataset from the network's previous flow of data. Traditional machine learning approaches include decision trees, logistic regression, and support vector machines. However, it is essential to note that this convenience comes with a price that involves security vulnerabilities, as IoT networks are prone to interference by hackers who can access the sensor/communication data and later utilize it for malicious purposes. This paper presents an experimental study of cryptographic algorithms that classifies encryption algorithms into symmetric and asymmetric types. It presents a deep analysis of AES, DES, 3DES, RSA, and Blowfish based on timing complexity, size, and encryption and decryption performance, assessed in terms of the guessing attack in real-time deep learning complex IoT applications. The assessment has been done using a simulation approach in which the speed of encryption and decryption of the selected algorithms is tested. For each encryption and decryption, the tests executed the same encryption using the same plaintext five separate times, and the average time is compared. The key size used for each encryption algorithm is the maximum number of bytes the cipher can allow. For the comparison, the average time required to compute the algorithm on the three devices is used. For the experimental test, a set of plaintexts is used in the simulation—password-sized text and paragraph-sized text—which achieves fair results compared to the existing algorithms in real-time deep learning networks for IoT applications.
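The timing methodology (run the same encryption on the same plaintext five times and average) can be sketched as follows for AES using PyCryptodome; extending it to DES, 3DES, Blowfish, and RSA follows the same pattern. The key and plaintext sizes here are assumptions for illustration.

# Rough sketch of the five-run timing benchmark for one cipher (AES-256, CBC mode).
import time
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def time_cipher(plaintext: bytes, runs: int = 5) -> float:
    key = get_random_bytes(32)                    # 256-bit key, the maximum AES allows
    total = 0.0
    for _ in range(runs):
        cipher = AES.new(key, AES.MODE_CBC)       # fresh random IV each run
        start = time.perf_counter()
        cipher.encrypt(plaintext)
        total += time.perf_counter() - start
    return total / runs

password_sized = b"p" * 16                        # one AES block
paragraph_sized = b"q" * 4096                     # multi-block "paragraph"
print("avg encrypt time (16 B):  ", time_cipher(password_sized))
print("avg encrypt time (4 KiB): ", time_cipher(paragraph_sized))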

58 citations


Journal ArticleDOI
TL;DR: In this article, the bagging ensemble learning method with a decision tree achieved the best performance in predicting heart disease, which is the deadliest disease and one of the leading causes of death worldwide.
Abstract: Heart disease is the deadliest disease and one of the leading causes of death worldwide. Machine learning is playing an essential role in the medical field. In this paper, ensemble learning methods are used to enhance the performance of predicting heart disease. Two feature extraction methods, linear discriminant analysis (LDA) and principal component analysis (PCA), are used to select essential features from the dataset. A comparison between machine learning algorithms and ensemble learning methods is applied to the selected features. Different metrics are used to evaluate the models: accuracy, recall, precision, F-measure, and ROC. The results show that the bagging ensemble learning method with a decision tree has achieved the best performance.
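The feature-extraction-plus-bagging pipeline can be sketched as below; swapping PCA for LDA is a one-line change. The CSV path, the 'target' column name, and the number of components are assumptions, not the paper's configuration.

# Minimal sketch: PCA feature extraction followed by a bagged decision-tree ensemble.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("heart.csv")                     # hypothetical heart-disease dataset
X, y = df.drop(columns=["target"]), df["target"]

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),                          # extract essential features
    BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())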

52 citations


Journal ArticleDOI
TL;DR: Li et al. as discussed by the authors proposed an ensemble-based deep learning model to classify news as fake or real using the LIAR dataset, which achieved an accuracy of 0.898, recall of 0.916, precision of 0.913, and F-score of 0.914, respectively.
Abstract: The pervasive usage and development of social media networks have provided the platform for fake news to spread fast among people. Fake news often misleads people and creates wrong perceptions in society. The spread of low-quality news in social media has negatively affected individuals and society. In this study, we proposed an ensemble-based deep learning model to classify news as fake or real using the LIAR dataset. Due to the nature of the dataset attributes, two deep learning models were used. For the textual attribute "statement," a Bi-LSTM-GRU-dense deep learning model was used, while for the remaining attributes, a dense deep learning model was used. Experimental results showed that the proposed study achieved an accuracy of 0.898, recall of 0.916, precision of 0.913, and F-score of 0.914, respectively, using only the statement attribute. Moreover, the outcome of the proposed models is remarkable when compared with that of previous studies for fake news detection using the LIAR dataset.
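The two-branch idea (a Bi-LSTM + GRU branch for the tokenized "statement" text and a dense branch for the remaining LIAR metadata, merged before the final classifier) can be sketched as below. Layer sizes, vocabulary size, and sequence length are illustrative assumptions, not the authors' exact network.

# Sketch of a two-branch text + metadata fake-news classifier.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len, meta_dim = 20000, 60, 10     # illustrative dimensions

text_in = layers.Input(shape=(seq_len,), name="statement_tokens")
x = layers.Embedding(vocab_size, 128)(text_in)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.GRU(32)(x)                             # Bi-LSTM then GRU on the statement

meta_in = layers.Input(shape=(meta_dim,), name="metadata")
m = layers.Dense(32, activation="relu")(meta_in)  # dense branch for remaining attributes

merged = layers.concatenate([x, m])
out = layers.Dense(2, activation="softmax")(layers.Dense(64, activation="relu")(merged))

model = tf.keras.Model(inputs=[text_in, meta_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()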

52 citations


Journal ArticleDOI
TL;DR: In this paper, a modified whale optimization algorithm (MWOA) was proposed for parameter identification of solar cells and PV modules, in which both the mutation strategy based on Levy flight and a local search mechanism of pattern search are introduced.
Abstract: The whale optimization algorithm (WOA) is a powerful swarm intelligence method which has been widely used in various fields such as parameter identification of solar cells and PV modules. In order to better balance the exploration and exploitation of WOA, we propose a novel modified WOA (MWOA) in which both the mutation strategy based on Levy flight and a local search mechanism of pattern search are introduced. On the one hand, Levy flight can make the algorithm get rid of the local optimum and avoid stagnation; thus, it is able to prevent the algorithm from losing diversity and to increase the global search capability. On the other hand, pattern search, a direct search method, has not only high convergence rate but also good stability, which can boost the local optimization ability of the WOA. Therefore, the combination of these two mechanisms can greatly improve the capability of WOA to obtain the best solution. In addition, MWOA may be employed to estimate parameters in single diode model (SDM), double diode model (DDM), and PV modules and to identify unknown parameters of two different types of PV modules under diverse light irradiance and temperature conditions. The analytical results demonstrate the validity and the practicality of MWOA for estimating parameters of solar cells and PV modules.
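A compressed sketch of the modification described above is given below: a standard WOA position update augmented with a Levy-flight mutation step. The toy objective, all constants, and the omission of the pattern-search local phase are simplifications of my own, not the authors' implementation.

# Compressed sketch of WOA with a Levy-flight mutation step (pattern search omitted).
import math
import numpy as np

def levy(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (math.gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def mwoa(obj, dim=5, pop=30, iters=200, lb=-5.0, ub=5.0):
    X = np.random.uniform(lb, ub, (pop, dim))
    best = min(X, key=obj).copy()
    for t in range(iters):
        a = 2 - 2 * t / iters                            # linearly decreasing coefficient
        for i in range(pop):
            if np.random.rand() < 0.5:                   # encircling / shrinking mechanism
                A = 2 * a * np.random.rand(dim) - a
                C = 2 * np.random.rand(dim)
                X[i] = best - A * np.abs(C * best - X[i])
            else:                                        # spiral (bubble-net) update
                l = np.random.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] += 0.01 * levy(dim) * (X[i] - best)     # Levy-flight mutation step
            X[i] = np.clip(X[i], lb, ub)
            if obj(X[i]) < obj(best):
                best = X[i].copy()
    return best

sphere = lambda x: float(np.sum(x ** 2))                 # toy objective standing in for the PV fit
print("best solution found:", mwoa(sphere))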

48 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a robust framework system for detecting intrusions based on the Internet of Things (IoT) using sensor devices to collect data from a smart grid environment.
Abstract: Smart grids, advanced information technology, have become favored intrusion targets because the Internet of Things (IoT) uses sensor devices to collect data from a smart grid environment. These data are sent to the cloud, which is a huge network of super servers that provides different services to different smart infrastructures, such as smart homes and smart buildings. These can provide a large space for attackers to launch destructive cyberattacks. The novelty of this proposed research is the development of a robust framework system for detecting intrusions based on the IoT environment. The IoTID20 attack dataset was employed to develop the proposed system; it is a newly generated dataset from the IoT infrastructure. In this framework, three advanced deep learning algorithms were applied to classify the intrusions: a convolutional neural network (CNN), a long short-term memory (LSTM), and a hybrid convolutional neural network with long short-term memory (CNN-LSTM) model. To reduce the dimensionality of the network dataset and improve the proposed system, the particle swarm optimization (PSO) method was used to select relevant features from the network dataset. The obtained features were processed using deep learning algorithms. The experimental results showed that the proposed systems achieved accuracy as follows: CNN = 96.60%, LSTM = 99.82%, and CNN-LSTM = 98.80%. The proposed framework attained the desired performance on a new variable dataset, and the system will be implemented in our university IoT environment. The results of comparative predictions between the proposed framework and existing systems showed that the proposed system more efficiently and effectively enhanced the security of the IoT environment from attacks. The experimental results confirmed that the proposed framework based on deep learning algorithms for an intrusion detection system can effectively detect real-world attacks and is capable of enhancing the security of the IoT environment.
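The feature-selection step can be sketched as a simple binary PSO whose fitness is the cross-validated accuracy of a lightweight surrogate classifier on the selected columns; the CNN/LSTM detectors would then be trained on the chosen features. Swarm size, coefficients, the surrogate classifier, and the demo data are assumptions of this sketch, not the paper's settings.

# Sketch: binary PSO feature selection with a logistic-regression surrogate fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pso_feature_selection(X, y, particles=20, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pos = rng.random((particles, n))              # probability of keeping each feature
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.full(particles, -np.inf)
    gbest, gbest_fit = None, -np.inf

    def fitness(p):
        mask = p > 0.5
        if not mask.any():
            return -np.inf
        clf = LogisticRegression(max_iter=500)
        return cross_val_score(clf, X[:, mask], y, cv=3).mean()

    for _ in range(iters):
        for i in range(particles):
            fit = fitness(pos[i])
            if fit > pbest_fit[i]:
                pbest_fit[i], pbest[i] = fit, pos[i].copy()
            if fit > gbest_fit:
                gbest_fit, gbest = fit, pos[i].copy()
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
    return gbest > 0.5, gbest_fit

X_demo, y_demo = make_classification(n_samples=200, n_features=15, n_informative=5, random_state=0)
mask, fit = pso_feature_selection(X_demo, y_demo)
print("selected", int(mask.sum()), "of 15 features, surrogate CV accuracy", round(fit, 3))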

46 citations


Journal ArticleDOI
TL;DR: In this paper, the state of charge (SOC) estimation of supercapacitors and lithium batteries in the hybrid energy storage system of electric vehicles was studied. And the experimental results showed that the estimation results reached a high accuracy, and the variation range of estimation error was [−0.94%, 0.34%].
Abstract: This paper studies the state of charge (SOC) estimation of supercapacitors and lithium batteries in the hybrid energy storage system of electric vehicles. According to the energy storage principle of the electric vehicle composite energy storage system, the circuit models of supercapacitors and lithium batteries were established, respectively, and the model parameters were identified online using the recursive least squares (RLS) method and the Kalman filtering (KF) algorithm. Then, the online estimation of SOC was completed based on the Kalman filtering algorithm and the unscented Kalman filtering algorithm. Finally, an experimental platform for SOC estimation was built, and Matlab was used for calculation and analysis. The experimental results showed that the SOC estimation results reached a high accuracy, and the variation range of the estimation error was [−0.94%, 0.34%]. For lithium batteries, the recursive least squares method is combined with the 2RC model to obtain the optimal result, and the estimation error is within the range of [−1.16%, 0.85%] when comprehensively weighing accuracy and computational cost. Moreover, the system has excellent robustness and high reliability.
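The Kalman-filter update used for SOC tracking can be reduced to the bare-bones sketch below: the state is SOC alone, the prediction step is coulomb counting, and the measurement model is a crude linearized voltage relation. The linear voltage model, noise levels, and the synthetic discharge data are assumptions of this sketch; the paper uses full RC equivalent-circuit models whose parameters are identified online with RLS.

# Bare-bones 1-D Kalman filter for SOC with coulomb-counting prediction.
import numpy as np

def kf_soc(currents, voltages, dt, capacity_as, h_gain, v_offset, q=1e-7, r=1e-3):
    soc, p = 0.9, 1e-2                            # initial SOC guess and covariance
    estimates = []
    for i_k, v_k in zip(currents, voltages):
        soc -= i_k * dt / capacity_as             # predict: SOC_k = SOC_{k-1} - I*dt/Q
        p += q
        k_gain = p * h_gain / (h_gain * p * h_gain + r)   # update with v ~ h*SOC + offset
        soc += k_gain * (v_k - (h_gain * soc + v_offset))
        p *= (1 - k_gain * h_gain)
        estimates.append(soc)
    return np.array(estimates)

# toy usage: constant 1 A discharge on a 2600 mAh cell (9360 As) with noisy voltage readings
t = np.arange(0, 600, 1.0)
i_meas = np.ones_like(t)
v_meas = 0.7 * (0.9 - np.cumsum(i_meas) / 9360) + 3.3 + np.random.normal(0, 0.005, t.size)
print(kf_soc(i_meas, v_meas, dt=1.0, capacity_as=9360, h_gain=0.7, v_offset=3.3)[-5:])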

46 citations


Journal ArticleDOI
TL;DR: In this article, an enhanced success history adaptive DE with greedy mutation strategy (EBLSHADE) is employed to optimize the parameters of photovoltaic (PV) models, yielding a parameter optimization method.
Abstract: In the past few decades, many optimization methods have been applied to estimating the parameters of photovoltaic (PV) models and have obtained good results, but these methods still have some deficiencies, such as high time complexity and poor stability. To tackle these problems, an enhanced success history adaptive DE with greedy mutation strategy (EBLSHADE) is employed to optimize the parameters of PV models, and a parameter optimization method is proposed in this paper. In the EBLSHADE, the linear population size reduction strategy is used to gradually reduce the population in order to improve the search capabilities and balance the exploitation and exploration capabilities. The less-greedy and more-greedy mutation strategies are used to enhance the exploitation capability and the exploration capability. Finally, a parameter optimization method based on EBLSHADE is proposed to optimize the parameters of PV models. Different PV models are selected to prove the effectiveness of the proposed method. Comparison results demonstrate that EBLSHADE is an effective and efficient method and that the parameter optimization method is beneficial for designing, controlling, and optimizing PV systems.

42 citations


Journal ArticleDOI
TL;DR: In this article, an improved ANN model trained using an artificial backpropagation scaled conjugate gradient neural network (ABP-SCGNN) algorithm was proposed to predict diabetes effectively.
Abstract: Data analytics, machine intelligence, and other cognitive algorithms have been employed in predicting various types of diseases in health care. The revolution of artificial neural networks (ANNs) in the medical discipline emerged for data-driven applications, particularly in the healthcare domain. Their applications range from the diagnosis of various diseases and medical image processing to decision support systems (DSS) and disease prediction. The intention of conducting this research is to ascertain the impact of parameters on diabetes data to predict whether a particular patient has the disease or not. This paper develops an improved ANN model trained using an artificial backpropagation scaled conjugate gradient neural network (ABP-SCGNN) algorithm to predict diabetes effectively. For validating the performance of the proposed model, we conduct a large set of experiments on the Pima Indian Diabetes (PID) dataset using accuracy and mean squared error (MSE) as evaluation metrics. We use different numbers of neurons in the hidden layer, ranging from 5 to 50, to train the ANN models. The experimental results show that the ABP-SCGNN model, containing 20 neurons, attains 93% accuracy on the validation set, which is higher than that of the other ANN models. This result confirms the model’s effectiveness and efficiency in predicting diabetes disease from the required data attributes.
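The experimental setup (a single hidden layer of 20 neurons on the Pima Indians Diabetes data, evaluated with accuracy and MSE) can be sketched as below. Note that scikit-learn has no scaled conjugate gradient solver, so Adam is used here as a stand-in for the paper's ABP-SCGNN training algorithm; the CSV path and column names are also assumptions.

# Sketch of the 20-neuron MLP experiment on the PID dataset (solver differs from the paper).
import pandas as pd
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("pima_diabetes.csv")
X, y = df.drop(columns=["Outcome"]), df["Outcome"]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)
net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(scaler.transform(X_train), y_train)

pred = net.predict(scaler.transform(X_val))
print("accuracy:", accuracy_score(y_val, pred))
print("MSE:     ", mean_squared_error(y_val, pred))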

Journal ArticleDOI
TL;DR: In this paper, an edge server is introduced between the main IoT server and the GSM module to avoid overburdening the IoT server with data processing and to reduce the latency rate.
Abstract: Smart, parsimonious, and economical ways of irrigation have been developed to fulfill the freshwater requirements of the inhabitants of this world. In other words, water consumption should be frugal enough to conserve limited freshwater resources. A major portion of water is wasted due to inefficient ways of irrigation. We utilized a smart approach capable of using an ontology to make 50% of the decision, while the other 50% of the decision relies on the sensor data values. The decision from the ontology and the sensor values collectively become the source of the final decision, which is the result of a machine learning algorithm (KNN). Moreover, an edge server is introduced between the main IoT server and the GSM module. This method not only avoids overburdening the IoT server with data processing but also reduces the latency rate. This approach connects the Internet of Things with a network of sensors to resourcefully trace all the data, analyze the data at the edge server, transfer only some particular data to the main IoT server to predict the watering requirements for a field of crops, and display the result using an Android application at the edge.
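The decision-fusion step can be illustrated as below: the ontology's verdict is treated as one more input feature alongside the sensor readings, and a KNN model issues the final irrigation decision. The feature names and the tiny training table are invented for illustration and are not the authors' dataset.

# Sketch: KNN fusing sensor readings with the ontology's 0/1 verdict.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# columns: soil_moisture, temperature, humidity, ontology_decision (0 = no water, 1 = water)
X_train = np.array([
    [0.12, 34.0, 0.30, 1],
    [0.45, 22.0, 0.70, 0],
    [0.18, 31.0, 0.35, 1],
    [0.50, 20.0, 0.80, 0],
])
y_train = np.array([1, 0, 1, 0])               # final decision: irrigate or not

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
sample = np.array([[0.15, 33.0, 0.32, 1]])     # current sensor values + ontology verdict
print("irrigate" if knn.predict(sample)[0] == 1 else "do not irrigate")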

Journal ArticleDOI
TL;DR: Asafo-Adjei et al. as mentioned in this paper examined the degree of asymmetry and nonlinear directional causality between global equities and cryptocurrencies in the frequency domain, and established a significant directional, dynamical, and scale-dependent information flow.
Abstract: The world has witnessed the adverse impact of the COVID-19 pandemic. Accordingly, it is expected that information transmission between equities and digital assets has been altered due to the hostile impact of the pandemic outbreak on financial markets. As a result, the ensuing perverse risk among markets is presumed to rise during severe uncertainties occasioned by the COVID-19 pandemic. The impetus of this study is to examine the degree of asymmetry and nonlinear directional causality between global equities and cryptocurrencies in the frequency domain. Hence, we employ both the variational mode decomposition (VMD) and the Renyi effective transfer entropy techniques. Analyses of the study are presented for three sample periods; these are the full sample period, the pre-COVID-19 period, and the COVID-19 pandemic period. We gauge a mixture of asymmetric and nonlinear bidirectional and unidirectional causality between global equities and cryptocurrencies for the sample periods. However, the COVID-19 pandemic period appears to be driving the estimates for the full sample period, which indicates a negative flow. Thus, the direction and significance of the information flow between the markets for the full sample correspond to the one observed during the COVID-19 pandemic period. We, consequently, establish a significant directional, dynamical, and scale-dependent information flow between global equities and cryptocurrencies. Notwithstanding, throughout the study samples, we mainly find a negative significant information flow from global equities to cryptocurrencies. We detect that most cryptocurrencies exhibit similar behaviour of information flow to global equities for each of the sample periods. The outcome provides pertinent signals to investors with diverse investment horizons who would want to diversify, hedge, or employ cryptocurrencies as a safe haven for global equities during uncertainties, specifically the COVID-19 pandemic. © 2021 Emmanuel Asafo-Adjei et al.

Journal ArticleDOI
TL;DR: In this article, a comparative study of two deep learning methods to forecast the confirmed cases and death cases of COVID-19 was performed on time series data in three countries: Egypt, Saudi Arabia, and Kuwait, from 1/5/2020 to 6/12/2020.
Abstract: The novel coronavirus disease (COVID-19) is regarded as one of the most imminent disease outbreaks which threaten public health on various levels worldwide. Because of the unpredictable outbreak nature and the virus’s pandemic intensity, people are experiencing depression, anxiety, and other strain reactions. The response to prevent and control the new coronavirus pneumonia has reached a crucial point. Therefore, it is essential—for safety and prevention purposes—to promptly predict and forecast the virus outbreak in the course of this troublesome time to have control over its mortality. Recently, deep learning models are playing essential roles in handling time-series data in different applications. This paper presents a comparative study of two deep learning methods to forecast the confirmed cases and death cases of COVID-19. Long short-term memory (LSTM) and gated recurrent unit (GRU) have been applied on time-series data in three countries: Egypt, Saudi Arabia, and Kuwait, from 1/5/2020 to 6/12/2020. The results show that LSTM has achieved the best performance in confirmed cases in the three countries, and GRU has achieved the best performance in death cases in Egypt and Kuwait.

Journal ArticleDOI
TL;DR: In this article, an in-depth study and analysis of offloading strategies for lightweight user mobile edge computing tasks using a machine learning approach is presented. Two optimization algorithms are designed: for the relaxed optimization problem, an iterative optimization algorithm based on the Lagrange dual method; and, using it as the basic step within a branch-and-bound integer programming method, a global optimization algorithm for transmit power allocation, computational offloading strategy, dynamic adjustment of local computing power, and receiving-energy channel selection strategy.
Abstract: This paper presents an in-depth study and analysis of offloading strategies for lightweight user mobile edge computing tasks using a machine learning approach. Firstly, a scheme for multiuser frequency division multiplexing approach in mobile edge computing offloading is proposed, and a mixed-integer nonlinear optimization model for energy consumption minimization is developed. Then, based on the analysis of the concave-convex properties of this optimization model, this paper uses variable relaxation and nonconvex optimization theory to transform the problem into a convex optimization problem. Subsequently, two optimization algorithms are designed: for the relaxation optimization problem, an iterative optimization algorithm based on the Lagrange dual method is designed; based on the branch-and-bound integer programming method, the iterative optimization algorithm is used as the basic algorithm for each step of the operation, and a global optimization algorithm is designed for transmitting power allocation, computational offloading strategy, dynamic adjustment of local computing power, and receiving energy channel selection strategy. Finally, the simulation results verify that the scheduling strategy of the frequency division technique proposed in this paper has good energy consumption minimization performance in mobile edge computation offloading. Our model is highly efficient and has a high degree of accuracy. The anomaly detection method based on a decision tree combined with deep learning proposed in this paper, unlike traditional IoT attack detection methods, overcomes the drawbacks of rule-based security detection methods and enables them to adapt to both established and unknown hostile environments. Experimental results show that the attack detection system based on the model achieves good detection results in the detection of multiple attacks.

Journal ArticleDOI
TL;DR: In this paper, an extended compact genetic algorithm-based ontology entity matching technique (ECGA-OEM) is proposed, which uses both the compact encoding mechanism and linkage learning approach to match the ontologies efficiently.
Abstract: Data heterogeneity is the obstacle for the resource sharing on Semantic Web (SW), and ontology is regarded as a solution to this problem. However, since different ontologies are constructed and maintained independently, there also exists the heterogeneity problem between ontologies. Ontology matching is able to identify the semantic correspondences of entities in different ontologies, which is an effective method to address the ontology heterogeneity problem. Due to huge memory consumption and long runtime, the performance of the existing ontology matching techniques requires further improvement. In this work, an extended compact genetic algorithm-based ontology entity matching technique (ECGA-OEM) is proposed, which uses both the compact encoding mechanism and linkage learning approach to match the ontologies efficiently. Compact encoding mechanism does not need to store and maintain the whole population in the memory during the evolving process, and the utilization of linkage learning protects the chromosome’s building blocks, which is able to reduce the algorithm’s running time and ensure the alignment’s quality. In the experiment, ECGA-OEM is compared with the participants of ontology alignment evaluation initiative (OAEI) and the state-of-the-art ontology matching techniques, and the experimental results show that ECGA-OEM is both effective and efficient.

Journal ArticleDOI
TL;DR: A critical review of the related significant aspects is provided and an overview of existing applications of deep learning in computational visual perception is included, which shows that there is a significant improvement in the accuracy using dropout and data augmentation.
Abstract: Computational visual perception, also known as computer vision, is a field of artificial intelligence that enables computers to process digital images and videos in a similar way as biological vision does. It involves methods to be developed to replicate the capabilities of biological vision. The computer vision’s goal is to surpass the capabilities of biological vision in extracting useful information from visual data. The massive data generated today is one of the driving factors for the tremendous growth of computer vision. This survey incorporates an overview of existing applications of deep learning in computational visual perception. The survey explores various deep learning techniques adapted to solve computer vision problems using deep convolutional neural networks and deep generative adversarial networks. The pitfalls of deep learning and their solutions are briefly discussed. The solutions discussed were dropout and augmentation. The results show that there is a significant improvement in the accuracy using dropout and data augmentation. Deep convolutional neural networks’ applications, namely, image classification, localization and detection, document analysis, and speech recognition, are discussed in detail. In-depth analysis of deep generative adversarial network applications, namely, image-to-image translation, image denoising, face aging, and facial attribute editing, is done. The deep generative adversarial network is an unsupervised learning method, but adding a certain number of labels in practical applications can improve its generating ability. However, it is challenging to acquire many data labels, although a small number of data labels can be acquired. Therefore, combining semisupervised learning and generative adversarial networks is one of the future directions. This article surveys the recent developments in this direction and provides a critical review of the related significant aspects, investigates the current opportunities and future challenges in all the emerging domains, and discusses the current opportunities in many emerging fields such as handwriting recognition, semantic mapping, webcam-based eye trackers, lumen center detection, query-by-string word, intermittently closed and open lakes and lagoons, and landslides.

Journal ArticleDOI
TL;DR: The fault prediction and abductive fault diagnosis of three-phase induction motors are of great importance for improving their working safety, reliability, and economy; however, it is difficult to... as discussed by the authors.
Abstract: The fault prediction and abductive fault diagnosis of three-phase induction motors are of great importance for improving their working safety, reliability, and economy; however, it is difficult to ...

Journal ArticleDOI
TL;DR: In this paper, the authors introduced novel belong and nonbelong relations between a bipolar soft set and an ordinary point and derived the sufficient conditions of some equivalence of these relations.
Abstract: A bipolar soft set is formulated by two soft sets; one of them provides us the positive information and the other provides us the negative information. The philosophy of bipolarity is that human judgment is based on two sides, positive and negative, and we choose the one which is stronger. In this paper, we introduce novel belong and nonbelong relations between a bipolar soft set and an ordinary point. These relations are considered as one of the unique characteristics of bipolar soft sets, which are somewhat an expression of the degrees of membership and nonmembership of an element. We discuss essential properties and derive the sufficient conditions of some equivalences of these relations. We also define the concept of soft mappings between two classes of bipolar soft sets and study the behaviors of an ordinary point under these soft mappings with respect to all relations introduced herein. Then, we apply bipolar soft sets to build an optimal choice application. We give an algorithm of this application and show the method for implementing this algorithm by an illustrative example. In conclusion, it can be noted that the relations defined herein give another viewpoint to explore the concepts of bipolar soft topology, in particular, soft separation axioms and soft covers.

Journal ArticleDOI
TL;DR: It is demonstrated that transformer-based models outperform the neural network-based solutions, which led to an increase in the F1 score from 0.83 to 0.95, and it boosted the accuracy by 16% compared to the best in neural networks and transformers.
Abstract: Fake news detection (FND) involves predicting the likelihood that a particular news article (news report, editorial, expose, etc.) is intentionally deceptive. Arabic FND started to receive more attention in the last decade, and many detection approaches demonstrated some ability to detect fake news on multiple datasets. However, most existing approaches do not consider recent advances in natural language processing, i.e., the use of neural networks and transformers. This paper presents a comprehensive comparative study of neural network and transformer-based language models used for Arabic FND. We examine the use of neural networks and transformer-based language models for Arabic FND and show their performance compared to each other. We also conduct an extensive analysis of the possible reasons for the difference in performance results obtained by different approaches. The results demonstrate that transformer-based models outperform the neural network-based solutions, which led to an increase in the F1 score from 0.83 (best neural network-based model, GRU) to 0.95 (best transformer-based model, QARiB), and it boosted the accuracy by 16% compared to the best in neural network-based solutions. Finally, we highlight the main gaps in Arabic FND research and suggest future research directions.
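The transformer side of the comparison boils down to fine-tuning a pretrained checkpoint for binary sequence classification. The sketch below uses the multilingual BERT baseline mentioned in the abstract through the Hugging Face Trainer API; the two-example toy dataset and the hyperparameters are assumptions only, and AraBERT/QARiB checkpoints would slot into the same skeleton.

# Illustrative fine-tuning skeleton for a transformer-based Arabic fake-news classifier.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "bert-base-multilingual-cased"      # baseline; Arabic checkpoints slot in here
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# toy stand-in for a labeled Arabic news dataset (0 = real, 1 = fake)
train_ds = Dataset.from_dict({"text": ["نص خبر حقيقي", "نص خبر مزيف"], "label": [0, 1]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="arabic-fnd", num_train_epochs=3,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=train_ds).train()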

Journal ArticleDOI
TL;DR: In this article, an English teaching ability evaluation algorithm based on big data fuzzy K-means clustering and information fusion is proposed to improve the accuracy of teacher evaluation and the efficiency of teaching resources allocation.
Abstract: Aiming at the problem of inaccurate classification of big data information in traditional English teaching ability evaluation algorithms, an English teaching ability evaluation algorithm based on big data fuzzy K-means clustering and information fusion is proposed. Firstly, the author uses the idea of K-means clustering to analyze the collected original error data, such as teacher level, teaching facility investment, and policy relevance level, removes the data that the algorithm considers unreliable, uses the remaining valid data to calculate the weighting factor of the modified fuzzy logic algorithm, evaluates the weighted average with the node measurement data, and gets the final fusion value. Secondly, the author integrates big data information fusion and the K-means clustering algorithm, realizes the clustering and integration of the index parameters of English teaching ability, compiles the corresponding English teaching resource allocation plan, and realizes the evaluation of English teaching ability. Finally, the results show that using this method to evaluate English teaching ability provides better information fusion analysis ability, which improves the accuracy of teaching ability evaluation and the efficiency of teaching resource allocation.
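The clustering-plus-fusion idea can be sketched as below: K-means groups the collected indicator records, records far from their cluster centre are treated as unreliable and dropped, and the remainder is fused by a simple average. The synthetic data, the 90th-percentile threshold, and the unweighted fusion are assumptions of this sketch, not the paper's fuzzy weighting scheme.

# Minimal sketch: K-means outlier screening followed by fusion of the remaining records.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: teacher level, facility investment, policy-relevance level (normalized)
records = rng.normal(loc=[0.6, 0.5, 0.7], scale=0.05, size=(100, 3))
records[::17] += 0.8                              # a few unreliable outliers

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(records)
dist = np.linalg.norm(records - km.cluster_centers_[km.labels_], axis=1)
reliable = records[dist < np.percentile(dist, 90)]   # discard the farthest 10%

fused_score = reliable.mean(axis=0)               # simple (unweighted) fusion
print("fused indicator vector:", fused_score)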

Journal ArticleDOI
TL;DR: In this article, the authors developed the notion and features of the correlation coefficient and the weighted correlation coefficient for PFHSS and introduced the aggregation operators such as the Pythagorean fuzzy hypersoft weighted average (PFHSWA) and Pythagorean fuzzy hypersoft weighted geometric (PFHSWG) operators under the PFHSS scenario; a prioritization technique for order preference by similarity to the ideal solution (TOPSIS) under PFHSS based on correlation coefficients and weighted correlation coefficients is also presented.
Abstract: The correlation coefficient between two variables plays an important role in statistics. Also, the accuracy of relevance assessment depends on information from a set of discourses. The data collected from numerous statistical studies are full of exceptions. The Pythagorean fuzzy hypersoft set (PFHSS) is a parameterized family that deals with the subattributes of the parameters and an appropriate extension of the Pythagorean fuzzy soft set. It is also the generalization of the intuitionistic fuzzy hypersoft set (IFHSS), which is used to accurately assess insufficiency, anxiety, and uncertainties in decision-making. The PFHSS can accommodate more uncertainties compared to the IFHSS, and it is the most substantial methodology to describe fuzzy information in the decision-making process. The core objective of this study is to develop the notion and features of the correlation coefficient and the weighted correlation coefficient for PFHSS and to introduce aggregation operators such as the Pythagorean fuzzy hypersoft weighted average (PFHSWA) and Pythagorean fuzzy hypersoft weighted geometric (PFHSWG) operators under the PFHSS scenario. A prioritization technique for order preference by similarity to the ideal solution (TOPSIS) under PFHSS based on correlation coefficients and weighted correlation coefficients is presented. Through the developed methodology, a technique for solving multiattribute group decision-making (MAGDM) problems is planned. Also, the importance of the developed methodology and its application in indicating a multipurpose antivirus mask throughout the COVID-19 pandemic period is presented. A brief comparative analysis is described with the advantages, effectiveness, and flexibility of numerous existing studies that demonstrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a financial risk early warning model based on the K-means clustering algorithm, which can effectively avoid the subjective negative impact caused by artificial division thresholds, continuously optimize the prediction process of financial risk and redistribute target dataset to each cluster center for obtaining optimized solution.
Abstract: The early warning of financial risk is to identify and analyze existing financial risk factors, determine the possibility and severity of occurring risks, and provide a scientific basis for risk prevention and management. The fragility of the financial system and the destructiveness of financial crises make it extremely important to build a good financial risk early-warning mechanism. The main idea of the K-means clustering algorithm is to gradually optimize clustering results and constantly redistribute the target dataset to each clustering center to obtain the optimal solution; its biggest advantage lies in its simplicity, speed, and objectivity, and it is widely used in many research fields such as data processing, image recognition, market analysis, and risk evaluation. On the basis of summarizing and analyzing previous research works, this paper expounded the current research status and significance of financial risk early-warning, elaborated the development background, current status, and future challenges of the K-means clustering algorithm, introduced the related works of similarity measure and item clustering, proposed a financial risk indicator system based on the K-means clustering algorithm, performed indicator selection and data processing, constructed a financial risk early-warning model based on the K-means clustering algorithm, conducted the classification of financial risk types and the optimization of financial risk control, and finally carried out empirical experiments and analyzed their results. The study results show that the K-means clustering method can effectively avoid the subjective negative impact caused by artificial division thresholds, continuously optimize the prediction process of financial risk, and redistribute the target dataset to each cluster center to obtain an optimized solution, so the algorithm can more accurately and objectively distinguish the state interval of different financial risks, determine risk occurrence possibility and its severity, and provide a scientific basis for risk prevention and management. The study results of this paper provide a reference for further research on financial risk early-warning based on the K-means clustering algorithm.

Journal ArticleDOI
TL;DR: The hybrid models increased the accuracy for sentiment analysis compared with single models on all types of datasets, especially the combination of deep learning models with SVM, and the reliability of the latter was significantly higher.
Abstract: Sentiment analysis on public opinion expressed in social networks, such as Twitter or Facebook, has been developed into a wide range of applications, but there are still many challenges to be addressed. Hybrid techniques have shown to be potential models for reducing sentiment errors on increasingly complex training data. This paper aims to test the reliability of several hybrid techniques on various datasets of different domains. Our research questions are aimed at determining whether it is possible to produce hybrid models that outperform single models with different domains and types of datasets. Hybrid deep sentiment analysis learning models that combine long short-term memory (LSTM) networks, convolutional neural networks (CNN), and support vector machines (SVM) are built and tested on eight textual tweets and review datasets of different domains. The hybrid models are compared against three single models, SVM, LSTM, and CNN. Both reliability and computation time were considered in the evaluation of each technique. The hybrid models increased the accuracy for sentiment analysis compared with single models on all types of datasets, especially the combination of deep learning models with SVM. The reliability of the latter was significantly higher.
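The deep-model-plus-SVM combination can be sketched as below: a small CNN + LSTM network is trained end-to-end on tokenized tweets, its penultimate feature layer is then frozen, and an SVM is trained on those deep features. All sizes and the random toy data are assumptions made only to keep the sketch runnable, not the paper's configuration.

# Sketch: train a CNN+LSTM encoder, then fit an SVM on its learned features.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

vocab, seq_len, n = 5000, 40, 400
X_tokens = np.random.randint(1, vocab, size=(n, seq_len))   # toy token ids
y = np.random.randint(0, 2, size=n)                         # toy sentiment labels

inputs = tf.keras.Input(shape=(seq_len,))
x = tf.keras.layers.Embedding(vocab, 64)(inputs)
x = tf.keras.layers.Conv1D(64, 5, activation="relu")(x)
x = tf.keras.layers.MaxPooling1D(2)(x)
features = tf.keras.layers.LSTM(32)(x)                      # CNN + LSTM feature vector
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(features)

net = tf.keras.Model(inputs, outputs)
net.compile(optimizer="adam", loss="binary_crossentropy")
X_tr, X_te, y_tr, y_te = train_test_split(X_tokens, y, test_size=0.25, random_state=0)
net.fit(X_tr, y_tr, epochs=2, verbose=0)                    # train the deep encoder

extractor = tf.keras.Model(inputs, features)                # drop the sigmoid head
svm = SVC(kernel="rbf").fit(extractor.predict(X_tr), y_tr)  # SVM on deep features
print("SVM accuracy on deep features:", svm.score(extractor.predict(X_te), y_te))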

Journal ArticleDOI
TL;DR: This paper presented a BERT-based model to identify biomedical named entities in the Arabic text data (specifically disease and treatment named entities) that investigates the effectiveness of pretraining a monolingual BERT model with a small-scale biomedical dataset on enhancing the model understanding of Arabic biomedical text.
Abstract: The web is being loaded daily with a huge volume of data, mainly unstructured textual data, which increases the need for information extraction and NLP systems significantly. Named-entity recognition task is a key step towards efficiently understanding text data and saving time and effort. Being a widely used language globally, English is taking over most of the research conducted in this field, especially in the biomedical domain. Unlike other languages, Arabic suffers from lack of resources. This work presents a BERT-based model to identify biomedical named entities in the Arabic text data (specifically disease and treatment named entities) that investigates the effectiveness of pretraining a monolingual BERT model with a small-scale biomedical dataset on enhancing the model understanding of Arabic biomedical text. The model performance was compared with two state-of-the-art models (namely, AraBERT and multilingual BERT cased), and it outperformed both models with 85% F1-score.

Journal ArticleDOI
TL;DR: In this article, the authors propose an improved Markov chain hybrid teaching quality evaluation model, design comparative experiments, apply it to the hybrid teaching quality evaluation system of universities, and finally verify its effectiveness through experiments.
Abstract: The Markov chain model teaching evaluation method is a quantitative analysis method based on probability theory and stochastic process theory, which establishes a stochastic mathematical model to analyse the quantitative relationships in the change and development process of real activities. Applying it to achieve a more comprehensive, reasonable, and effective evaluation of the classroom teaching quality of college teachers is of positive significance for promoting the continuous improvement of the teaching level of teachers and the teaching quality of schools. Therefore, after an in-depth study of Markov chain algorithm theory, this research proposes an improved Markov chain hybrid teaching quality evaluation model, designs comparative experiments, applies it to the hybrid teaching quality evaluation system of universities, and finally verifies its effectiveness through experiments. The mathematical model of mixed classroom teaching quality evaluation given in this research focuses on the development and change of the teaching process. For the teaching process that is closely related to the causality of teaching quality, the model established in this paper is more objective and reasonable for evaluating the quality of teaching.
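A toy illustration of the Markov-chain machinery behind such an evaluation model is given below: student results are grouped into states, a transition matrix is estimated from two consecutive assessments, and the stationary distribution summarizes the long-run quality the teaching process tends toward. The grade states and the count matrix are invented for illustration and are not the paper's data or its improved model.

# Toy Markov-chain evaluation: estimate a transition matrix and its stationary distribution.
import numpy as np

# rows = state at assessment t, columns = state at assessment t+1
# states: excellent, good, pass, fail (counts are invented)
counts = np.array([[30,  8,  2, 0],
                   [10, 35, 10, 5],
                   [ 3, 12, 20, 5],
                   [ 1,  4,  8, 7]], dtype=float)
P = counts / counts.sum(axis=1, keepdims=True)    # row-normalized transition matrix

# stationary distribution: left eigenvector of P associated with eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()

print("transition matrix:\n", np.round(P, 2))
print("stationary distribution (excellent, good, pass, fail):", np.round(pi, 3))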

Journal ArticleDOI
TL;DR: In this article, a mixture of the Laplace transformation and homotopy perturbation technique is used to solve fractional-order Whitham-Broer-Kaup equations.
Abstract: This paper aims to implement an analytical method, known as the Laplace homotopy perturbation transform technique, for the result of fractional-order Whitham–Broer–Kaup equations. The technique is a mixture of the Laplace transformation and homotopy perturbation technique. Fractional derivatives with Mittag-Leffler and exponential laws in sense of Caputo are considered. Moreover, this paper aims to show the Whitham–Broer–Kaup equations with both derivatives to see their difference in a real-world problem. The efficiency of both operators is confirmed by the outcome of the actual results of the Whitham–Broer–Kaup equations. Some problems have been presented to compare the solutions achieved with both fractional-order derivatives.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a hybrid model that combines the long short-term memory (LSTM) and autoencoder models for forecasting the foreign exchange volatility, which can be used to hedge or invest efficiently and make policy decisions based on volatility forecasting.
Abstract: Since the breakdown of the Bretton Woods system in the early 1970s, the foreign exchange (FX) market has become an important focus of both academic and practical research. There are many reasons why FX is important, but one of most important aspects is the determination of foreign investment values. Therefore, FX serves as the backbone of international investments and global trading. Additionally, because fluctuations in FX affect the value of imported and exported goods and services, such fluctuations have an important impact on the economic competitiveness of multinational corporations and countries. Therefore, the volatility of FX rates is a major concern for scholars and practitioners. Forecasting FX volatility is a crucial financial problem that is attracting significant attention based on its diverse implications. Recently, various deep learning models based on artificial neural networks (ANNs) have been widely employed in finance and economics, particularly for forecasting volatility. The main goal of this study was to predict FX volatility effectively using ANN models. To this end, we propose a hybrid model that combines the long short-term memory (LSTM) and autoencoder models. These deep learning models are known to perform well in time-series prediction for forecasting FX volatility. Therefore, we expect that our approach will be suitable for FX volatility prediction because it combines the merits of these two models. Methodologically, we employ the Foreign Exchange Volatility Index (FXVIX) as a measure of FX volatility. In particular, the three major FXVIX indices (EUVIX, BPVIX, and JYVIX) from 2010 to 2019 are considered, and we predict future prices using the proposed hybrid model. Our hybrid model utilizes an LSTM model as an encoder and decoder inside an autoencoder network. Additionally, we investigate FXVIX indices through subperiod analysis to examine how the proposed model’s forecasting performance is influenced by data distributions and outliers. Based on the empirical results, we can conclude that the proposed hybrid method, which we call the autoencoder-LSTM model, outperforms the traditional LSTM method. Additionally, the ability to learn the magnitude of data spread and singularities determines the accuracy of predictions made using deep learning models. In summary, this study established that FX volatility can be accurately predicted using a combination of deep learning models. Our findings have important implications for practitioners. Because forecasting volatility is an essential task for financial decision-making, this study will enable traders and policymakers to hedge or invest efficiently and make policy decisions based on volatility forecasting.
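The hybrid idea (an LSTM encoder compressing a window of volatility values, an LSTM decoder stage, and a one-step-ahead forecast head) can be sketched as below. The window length, layer sizes, and the synthetic stand-in series are assumptions of this sketch rather than the paper's configuration, which uses the FXVIX indices.

# Sketch of an autoencoder-style LSTM encoder/decoder used for one-step volatility forecasting.
import numpy as np
import tensorflow as tf

window = 20
series = np.cumsum(np.random.randn(1000)) * 0.1 + 10      # stand-in for an FXVIX series
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                                          # shape: (samples, window, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),    # encoder: window -> latent vector
    tf.keras.layers.RepeatVector(window),                 # hand the latent code to the decoder
    tf.keras.layers.LSTM(32),                             # decoder LSTM
    tf.keras.layers.Dense(1),                             # one-step-ahead volatility forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-step forecast:", float(model.predict(X[-1:])[0, 0]))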

Journal ArticleDOI
TL;DR: In this paper, the authors introduced a new four-parameter Weibull distribution, named the Marshall–Olkin alpha power Weibull (MOAPW) distribution, and showed that it can achieve better fits than other competitive distributions.
Abstract: This paper introduces a novel four-parameter Weibull distribution, named the Marshall–Olkin alpha power Weibull (MOAPW) distribution. Some statistical properties of the distribution are examined. Based on Type-I censored and Type-II censored samples, maximum likelihood estimation (MLE), maximum product spacing (MPS), and Bayesian estimation for the MOAPW distribution parameters are discussed. Numerical analysis using real data sets and Monte Carlo simulation is accomplished to compare various estimation methods. This novel model’s superiority over some well-known distributions is explained using two real data sets, and it is shown that the MOAPW model can achieve better fits than other competitive distributions.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a combined framework of stacked autoencoder and radial basis function (RBF) neural network to predict traffic flow, which can effectively capture the temporal correlation and periodicity of traffic flow data and disturbance of weather factors.
Abstract: Short-term traffic flow prediction is an effective means for intelligent transportation systems (ITS) to mitigate traffic congestion. However, traffic flow data with temporal features and periodic characteristics are vulnerable to weather effects, making short-term traffic flow prediction a challenging issue. Moreover, the existing models do not consider the influence of weather changes on traffic flow, leading to poor performance under some extreme conditions. In view of the rich features of traffic data and their vulnerability to external weather conditions, a prediction model based on traffic data alone has certain limitations, so it is necessary to conduct research on traffic flow prediction driven by both traffic data and weather data. This paper proposes a combined framework of a stacked autoencoder (SAE) and a radial basis function (RBF) neural network to predict traffic flow, which can effectively capture the temporal correlation and periodicity of traffic flow data and the disturbance of weather factors. Firstly, the SAE is used to process the traffic flow data in multiple time slices to acquire a preliminary prediction. Then, an RBF network is used to capture the relation between weather disturbance and the periodicity of traffic flow so as to gain another prediction. Finally, another RBF network is used to fuse the above two predictions at the decision level, obtaining a reconstructed prediction with higher accuracy. The effectiveness and robustness of the proposed model are verified by experiments.