
Showing papers in "Concurrency and Computation: Practice and Experience in 2023"


Journal ArticleDOI
TL;DR: In this paper, two newly created datasets generated from SDN using Mininet and the Ryu controller with different feature extraction tools were used for training a number of supervised binary classification machine learning algorithms such as kNN, AdaBoost, decision tree (DT), random forest, naive Bayes, multilayer perceptron, support vector machine, and XGBoost.
Abstract: Software-defined networking (SDN) has been developed to separate the network control plane from the forwarding plane, which can decrease operational costs and the time it takes to deploy new services compared to traditional networks. Despite these advantages, this technology brings threats and vulnerabilities. Consequently, developing high-performance real-time intrusion detection systems (IDSs) to classify malicious activities is a vital part of SDN architecture. This article introduces two datasets generated from SDN using Mininet and the Ryu controller with different feature extraction tools that contain normal traffic and different types of attacks (Fin flood, UDP flood, ICMP flood, OS probe scan, port probe scan, TCP bandwidth flood, and TCP syn flood) and that are used for training a number of supervised binary classification machine learning algorithms such as k-nearest neighbor, AdaBoost, decision tree (DT), random forest, naive Bayes, multilayer perceptron, support vector machine, and XGBoost. The DT algorithm has achieved scores high enough to fit a real-time application, achieving an F1 score on the attack class of 0.9995, an F1 score on the normal class of 0.9983, and a throughput of 6,737,147.275 samples per second with a total of three features. In addition, data preprocessing is used to reduce the model complexity, thereby increasing the overall throughput to fit a real-time system.
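
A minimal sketch of the classification setup described above: a scikit-learn decision tree trained for binary attack/normal prediction on a small flow-feature set. The three feature names and the synthetic data are placeholder assumptions; the abstract does not name the three features the authors selected.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Placeholder flow records: [packet_rate, byte_rate, syn_ratio]
X = rng.random((10_000, 3))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # synthetic attack label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("F1 (attack class):", f1_score(y_te, pred, pos_label=1))
print("F1 (normal class):", f1_score(y_te, pred, pos_label=0))
```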

7 citations


Journal ArticleDOI
TL;DR: In this article, a new periodic shift (PS) pattern is proposed that imposes minimal restrictions on the implementation of collision operators and utilizes virtual memory mapping to provide consistent performance across a range of targets.
Abstract: Lattice Boltzmann methods (LBM) are well suited to highly parallel computational fluid dynamics simulations due to their separability into a perfectly parallel collision step and a propagation step that only communicates within a local neighborhood. The implementation of the propagation step provides constraints for the maximum possible bandwidth-limited performance, memory layout and usage of vector instructions. This article revisits and extends the work on implicit propagation on directly addressed grids started by the A-A pattern and its shift-swap-streaming (SSS) formulation by reconsidering them as transformations of the underlying space filling curve. In this work, a new periodic shift (PS) pattern is proposed that imposes minimal restrictions on the implementation of collision operators and utilizes virtual memory mapping to provide consistent performance across a range of targets. Various implementation approaches as well as time dependency and performance anisotropy are discussed. Benchmark results for SSS and PS on SIMD CPUs including Intel Xeon Phi as well as Nvidia GPUs are provided. Finally, the application of PS as the propagation pattern of the open source LBM framework OpenLB is summarized.
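
To illustrate the implicit-propagation idea behind patterns such as SSS and PS, here is a toy one-dimensional sketch: instead of moving population values, each discrete velocity's array is read through an offset that advances every time step. This is only an index-level analogy under simplifying assumptions (D1Q3, periodic boundary); the paper's PS pattern shifts virtual memory mappings rather than computing modular indices.

```python
import numpy as np

N = 8                        # lattice cells (periodic)
c = [-1, 0, 1]               # D1Q3 discrete velocities
f = [np.arange(N, dtype=float) + 10 * i for i in range(3)]  # populations
shift = [0, 0, 0]            # per-velocity rotating offset

def read(i, x):
    """Population f_i at cell x under the current shift."""
    return f[i][(x + shift[i]) % N]

def stream():
    """Propagation: only the offsets change, no population data moves."""
    for i, ci in enumerate(c):
        shift[i] = (shift[i] - ci) % N

before = [read(2, x) for x in range(N)]
stream()
after = [read(2, x) for x in range(N)]
# After streaming, f_2 (velocity +1) appears shifted one cell downstream.
assert after[1:] == before[:-1]
```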

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose an adaptation framework model, which makes a relation between the QP (quantization parameter) in H.264 and H.265 codecs and the QoS of 5G wireless technology.
Abstract: Nowadays, smart multimedia network services have become crucial in the healthcare system. The network parameters of Quality of Service (QoS) widely affect the efficiency and accuracy of multimedia streaming in wireless environments. This paper proposes an adaptation framework model that relates the QP (quantization parameter) in H.264 and H.265 codecs to the QoS of 5G wireless technology. Besides, the effects of QP and packet loss have been studied because of their impact on video streaming. The packet loss characteristic of a 5G wireless network is emulated to determine the impact of QP on the received video quality using objective and subjective quality metrics such as PSNR (peak signal to noise ratio), SSIM (structural similarity), and DMOS (differential mean opinion score). In this research, a testbed is implemented to stream the encoded video from the server to the end users. The application model framework automatically evaluates the QoE (Quality of Experience). Accordingly, the model detects the defect of network packet loss and selects the optimum QP value to enhance the QoE for end users. The application has been tested on low and high motion videos with full high definition (HD) resolution (1920 × 1080) taken from https://www.xiph.org/downloads/. Test results based on the objective and subjective quality measurements indicate that optimal values of QP = 35 and QP = 30 were chosen for low and high motion, respectively, to satisfy user QoE requirements.

4 citations


Journal ArticleDOI
TL;DR: In this article, a multifactor authentication protocol has been proposed to provide more secure communication in the Internet of Medical Things (IoMT), which uses biometrics and fuzzy extractors for more security purposes.
Abstract: Currently, the Internet of Medical Things (IoMT) has gained popularity because of an ongoing pandemic. A few developed countries plan to deploy the IoMT to improve the security and safety of frontline workers and decrease the mortality rates of COVID-19 patients. However, IoMT devices share information through an open network, which leads to increased vulnerability to various attacks. Hence, electronic health management systems still face many security challenges, like recording sensitive patient data, secure communication, transferring patient information to other doctors, providing the data for future medical diagnosis, collecting data from WBAN, etc. In addition, the sensor devices attached to the human body are resource-limited and have minimal power capacity. Hence, to protect the medical privacy of patients and the confidentiality and reliability of the system, the registered sensor, doctor, and server need to authenticate each other. Therefore, rather than two factors, in this work, a multifactor authentication protocol has been proposed to provide more secure communication. The presented scheme uses biometrics and fuzzy extractors for more security. Furthermore, the scheme is proved using informal and formal security verification with BAN logic and the ProVerif and AVISPA tools. The ProVerif simulation result of the suggested scheme shows that the proposed protocol achieves session key secrecy and mutual authentication.

3 citations


Journal ArticleDOI
TL;DR: In this paper, an effective optimized hybrid deep and machine learning framework was developed for detecting various forms of lung and colon cancers, including adenocarcinomas and squamous cell carcinomas.
Abstract: Lung and colon cancers are dangerous diseases that can grow in organs and create a negative impact on human life in certain cases. The histological detection of such malignancies is one of the most critical parts of optimal treatment. As a result, the important objective of this article is to create an effective computerized diagnosis system for identifying adenocarcinomas of the colon as well as adenocarcinomas and squamous cell carcinomas of the lungs using digital histopathology images and the combination of deep and machine learning techniques. For this, an effective optimized hybrid deep and machine learning framework is developed. This framework consists of two stages. In the first stage, the features of lung and colon images are extracted by a principal component analysis network. Then the effective classification is conducted based on an extreme learning machine (ELM) with the rider optimization algorithm, which classifies lung and colon cancer into five types. The empirical investigation shows that the classification results on the benchmark LC25000 dataset have improved significantly. The use of this model will aid medical professionals in the development of an automatic and reliable system for detecting various forms of lung and colon cancers.
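
Since the classifier here is an extreme learning machine, a small self-contained sketch may help: hidden weights are random and fixed, and only the output weights are solved in closed form. In the paper the inputs would be PCANet features and the parameters would be tuned by the rider optimization algorithm; both are replaced by random placeholders below.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((600, 64))                 # placeholder feature vectors
y = rng.integers(0, 5, size=600)          # 5 tissue classes, as in the paper
Y = np.eye(5)[y]                          # one-hot targets

n_hidden = 256
W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
b = rng.standard_normal(n_hidden)

H = np.tanh(X @ W + b)                    # hidden-layer activations
beta = np.linalg.pinv(H) @ Y              # closed-form output weights

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```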

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed an effective method for disease detection in Arabica coffee plants using the EfficientNetB0 architecture, which was improved by including a ghost module at its end.
Abstract: Several research works on disease detection in coffee plants have been presented in recent years. Leaf miner and rust are the most prevalent diseases in Arabica coffee plants. Early detection of such diseases allows farmers to take diagnostic actions before the infection spreads to neighboring plants. With advancements in drones and artificial intelligence (AI), the automatic detection of leaf diseases is gaining prominence in the field of smart agriculture. Furthermore, it is critical to develop an accurate method for infestation detection with minimal computational complexity. Existing works for plant disease detection utilize pre-trained deep learning models with millions of parameters. A feasible trade-off has to be attained between accuracy and computational complexity for the deployment of such deep networks. This research proposes an effective method for disease detection in Arabica coffee plants using the EfficientNetB0 architecture. The architecture of the EfficientNetB0 network was improved by including a ghost module at its end. This integration allows the network to learn effectively with minimal parameters without compromising the end accuracy. The proposed model has a total of 4,874,531 parameters, which is significantly fewer than most state-of-the-art deep learning architectures, and achieved an accuracy of 84%.
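
A hedged Keras sketch of the modification the abstract describes, following the GhostNet idea: a cheap depthwise convolution generates extra "ghost" feature maps from a few intrinsic ones, appended after the EfficientNetB0 backbone. The layer sizes and two-class head are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def ghost_module(x, out_channels, ratio=2):
    # A few "intrinsic" maps from a pointwise conv, plus cheap ghost maps.
    intrinsic = out_channels // ratio
    primary = layers.Conv2D(intrinsic, 1, padding="same", activation="relu")(x)
    ghost = layers.DepthwiseConv2D(3, padding="same", activation="relu")(primary)
    return layers.Concatenate()([primary, ghost])

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights=None, input_shape=(224, 224, 3))
x = ghost_module(base.output, out_channels=256)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(2, activation="softmax")(x)  # e.g., leaf miner vs. rust

model = Model(base.input, out)
model.summary()
```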

2 citations


Journal ArticleDOI
TL;DR: In this article, an enhanced conditional random field-long short-term memory (ECRF-LSTM) model is proposed for NER in the English language, which is the combination of CRF-LSTM and the chaotic arithmetic optimization algorithm (CAOA).
Abstract: Named entity recognition (NER) is an essential topic in the real world during the advanced development of technologies. Hence, this article develops an enhanced conditional random field-long short-term memory (ECRF-LSTM) model for NER in the English language. The proposed ECRF-LSTM is the combination of a conditional random field-long short-term memory (CRF-LSTM) network and the chaotic arithmetic optimization algorithm (CAOA). The proposed research concentrates on performing NER for Indian names from the given input database for Indian digital database management and processing. The chaotic AOA leads to fast convergence and helps to avoid local optima. The proposed method works in three phases: the preprocessing phase, the feature extraction phase, and the NER phase. In the initial stage, the datasets are collected from the online system. In the preprocessing phase, removal of URLs, removal of special symbols, username removal, tokenization, and stop word removal are done. After that, the essential features such as domain weight, event weight, textual similarity, spatial similarity, temporal similarity, and relative document-term frequency difference are extracted and then applied to train the proposed model. To empower the training phase of the CRF-LSTM method, CAOA is utilized to select optimal weight parameter coefficients of the CRF-LSTM for training the model parameters. The proposed method is validated by statistical measurements and compared with conventional methods such as convolutional neural network-particle swarm optimization and convolutional neural network. The proposed method achieves 98.91 accuracy, 97.36 sensitivity, 97.19 specificity, 97.54 precision, and 97.63 recall, which are better than the existing methodologies.
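
The preprocessing phase enumerated above is straightforward to sketch; the snippet below applies URL removal, username removal, special-symbol removal, tokenization, and stop-word removal in that spirit, with a tiny placeholder stop-word list.

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "of", "to", "and"}

def preprocess(text: str) -> list[str]:
    text = re.sub(r"https?://\S+", " ", text)    # remove URLs
    text = re.sub(r"@\w+", " ", text)            # remove usernames
    text = re.sub(r"[^A-Za-z\s]", " ", text)     # remove special symbols
    tokens = text.lower().split()                # tokenization
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("Meet @arjun at https://example.com, the CEO of Acme!"))
# ['meet', 'at', 'ceo', 'acme']
```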

2 citations


Journal ArticleDOI
TL;DR: In this article, an enhanced firefly algorithm based virtual machine placement model is proposed, with K-means clustering used to reduce the migration time of virtual machine placement in cloud applications. The experimental results demonstrate that the proposed method offers improved performance and an optimal VM placement scheme with respect to various constraint factors, including transmission overhead, total execution time, packet size, number of parallel applications, and number of virtual machines.
Abstract: Virtual machine placement for highly reliable cloud applications is considered one of the most challenging and critical issues. To tackle this issue, this article proposes an enhanced firefly algorithm based virtual machine placement model. Since the migration time of virtual machine placement is high, this article utilizes the K-means clustering algorithm to reduce it. In addition, to obtain the optimal cluster for virtual machine placement, adaptive particle swarm optimization with the coyote optimization algorithm is employed. Experiments are conducted for the proposed approach using various measures such as transmission overhead, total execution time, packet size, number of parallel applications, and number of virtual machines. The results demonstrate that the proposed method offers improved performance and an optimal virtual machine placement scheme with respect to the various constraint factors. The evaluation exposes that the proposed method offers less execution time when compared to other methods.

2 citations


Journal ArticleDOI
TL;DR: In this article, an optimal deep neural network (optimal DNN) is used to predict underground water under a changing climate, where the weight parameters are optimally selected with the help of fish swarm optimization (FSO).
Abstract: For humanity as a whole and all the creatures of this world, underground water is a vital resource to rely upon and an indispensable factor in sustaining livelihoods. In spite of the lack of detailed knowledge, global warming is found to profoundly influence underground water resources through changes in underground water recharge. Prediction of underground water under a changing climate is essential to living beings. In this article, underground water prediction using an optimal deep neural network (optimal DNN) has been attempted. Initially, the features of temperature and rainfall among the input data are selected, after which the chosen data are fed to the DNN to predict the underground water. In the DNN, weight parameters are optimally selected with the help of fish swarm optimization (FSO). The implementation has been done in MATLAB. The simulation results show that the proposed FSO-DNN prediction approach outperforms the existing prediction approaches with 78.9% accuracy, 83% sensitivity, 88% specificity, 95.8% positive predictive value, 52.3% negative predictive value, and 95.8% F-measure.

2 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the potential unfairness concerns for users with different personalities, which the popularity bias of the recommenders might cause, and split users into groups of high, moderate, and low clusters in terms of each personality trait in the big-five factor model.
Abstract: Recommender systems are subject to well‐known popularity bias issues, that is, they expose frequently rated items more in recommendation lists than less‐rated ones. Such a problem could also have varying effects on users with different gender, age, or rating behavior, which significantly diminishes the users' overall satisfaction with recommendations. In this paper, we approach the problem from the view of user personalities for the first time and discover how users are inclined toward popular items based on their personality traits. More importantly, we analyze the potential unfairness concerns for users with different personalities, which the popularity bias of the recommenders might cause. To this end, we split users into groups of high, moderate, and low clusters in terms of each personality trait in the big‐five factor model and investigate how the popularity bias impacts such groups differently by considering several criteria. The experiments conducted with 10 well‐known algorithms of different kinds have concluded that less‐extroverted people and users avoiding new experiences are exposed to more unfair recommendations regarding popularity, despite being the most significant contributors to the system. However, discrepancies in other qualities of the recommendations for these user characteristics, such as accuracy, diversity, and novelty, vary depending on the utilized algorithm.

2 citations


Journal ArticleDOI
TL;DR: In this article, an efficient and secure key management using an extended convolutional neural network, that is, a hybrid Enhanced Elman Spike Convolutional Neural Network optimized with the improved COOT optimization algorithm (Hyb EESCCNN), is proposed for intrusion detection in cloud systems.
Abstract: Cloud computing aids users in storing and recovering their information everywhere in the world. Security and efficiency are the two main issues in cloud service. Numerous intrusion detection techniques for the cloud computing environment have been proposed, but those techniques do not effectively and accurately detect the attacks. Hence, an efficient and secure key management using an extended convolutional neural network, that is, a hybrid Enhanced Elman Spike Convolutional Neural Network optimized with the improved COOT optimization algorithm (Hyb EESCCNN), is proposed for intrusion detection in cloud systems. Furthermore, the novel Adaptive Tangent Brakerski-Gentry-Vaikuntanathan Homomorphic Encryption (ATBGVHE) method is proposed for providing the security of the system. At first, SHA-512 is used for authenticating cloud users to store their own information into the cloud server. Then, for intrusion detection (ID), the input data from the NSL-KDD, UNSW-NB15, CSE-CIC-IDS2018, and ToN-IoT datasets are pre-processed. The most relevant features are extracted using Fast Independent Component Analysis (Fast ICA) from the pre-processed output. These extracted data are classified into malicious and non-malicious data using Hyb EESCCNN. After classification, the non-malicious data is secured using the ATBGVHE technique. The outcomes of the proposed methods show that the NSL-KDD dataset attains 99.9% accuracy, the UNSW-NB15 dataset offers 99.89% accuracy, the CSE-CIC-IDS2018 dataset attains 99.8% accuracy, and the ToN-IoT dataset attains 99.8% accuracy with 0.02 s lower encryption time compared with existing methods. Finally, a case study with real-time applications is also analyzed to prove the efficiency of the proposed method.
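
The feature-extraction step can be sketched with scikit-learn's FastICA, which reduces preprocessed records to a handful of independent components before classification. The dataset shape, component count, and the plain logistic-regression stand-in for Hyb EESCCNN are all placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((5_000, 41))            # placeholder NSL-KDD-like records
y = rng.integers(0, 2, size=5_000)     # 1 = malicious, 0 = non-malicious

ica = FastICA(n_components=10, random_state=0)
X_ica = ica.fit_transform(X)           # independent components as features

clf = LogisticRegression(max_iter=1000).fit(X_ica, y)
print("training accuracy:", clf.score(X_ica, y))
```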

Journal ArticleDOI
TL;DR: In this paper, an optimal centroid-based routing protocol (OCRP) is proposed for WSN-assisted IoT based on hybrid optimization techniques, which aims to enhance energy efficiency and network lifetime.
Abstract: Recent surveys show that Internet growth has reached billions of people, with the Internet of Things (IoT) as another milestone. IoT uses wired or wireless communication technologies to establish a communication channel between devices and services available over the Internet. Wireless sensor networks (WSNs) are one of the most essential technologies assisting IoT in real-time applications. Energy efficiency is a major issue in IoT, and it becomes more complex due to large scalability; moreover, WSNs cannot be applied directly to the IoT. Routing is also a very challenging aspect in such a platform because of its intrinsic properties. In this article, we propose an optimal centroid-based routing protocol (OCRP) for WSN-assisted IoT based on hybrid optimization techniques, which aims to enhance energy efficiency and network lifetime. The IoT network is composed of serving nodes and end users. In OCRP, a multi-objective swarm optimization (MSO) algorithm is used to perform the clustering, which reduces the chaotic nature of energy consumption. Then, a multiservice queue based ant optimization algorithm is proposed for next-neighborhood selection in inter-cluster routing. Finally, our OCRP protocol is evaluated with different simulations in the network simulator (NS-2) tool with a wireless body sensor network (WBSN) application. The simulation results show the effectiveness of the OCRP protocol in terms of energy consumption, average data transmission rate, and network lifetime.

Journal ArticleDOI
TL;DR: In this article, the authors evaluate and benchmark the performance of LSTM-based malware detection approaches on specific Long Short-Term Memory (LSTM) architectures to provide insight into malware detection.
Abstract: Malicious software forms a threat to many software-intensive systems, and as such several malware detection approaches have been introduced, often based on sequential data analysis. Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture that is effective for sequential data analysis; however, no study has yet analyzed the performance of different LSTM architectures for the application of malware detection. In this article, we aim to evaluate and benchmark the performance of LSTM-based malware detection approaches on specific LSTM architectures to provide insight into malware detection. Our method builds LSTM-based malware prediction models and performs experiments using different LSTM architectures including Vanilla LSTM, stacked LSTM, bi-directional LSTM, and CNN-LSTM. We evaluated the performance of each of these architectures and different configurations. As a contribution, our study shows that bidirectional LSTM with hyperparameter optimization outperforms the other selected LSTM architectures. This study shows that different LSTM approaches and architectures are applicable to the malware detection problem. Quality attributes such as efficiency and accuracy, and the software system architecture adopted for the implementation, impact the selection of the LSTM approach.
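
As a hedged sketch of the best-performing configuration reported (bidirectional LSTM), the Keras model below classifies token sequences, such as opcode or API-call traces, as malware or benign. Vocabulary size, sequence length, and layer widths are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len = 5000, 200   # assumed token vocabulary and trace length

inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")
x = layers.Embedding(vocab_size, 64)(inputs)                 # token embedding
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(32))(x)
x = layers.Dense(32, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)           # malware probability

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```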

Journal ArticleDOI
TL;DR: In this article, a novel automated number plate recognition methodology has been proposed to identify number plates accurately with minimal error rates, where a new pretrained location-dependent ultra convolutional neural network (LUCNN) is employed to learn the influential features from the input images.
Abstract: In today's world, identifying the owner and proprietor of a vehicle that violates driving rules or does any unintentional work on the street is a challenging task. Inspection of each driver's license number takes a long time for a highway police officer. To overcome this, many researchers have introduced automated number plate recognition approaches, which are usually computer vision-based techniques to identify the vehicle's registration plate. However, the existing recognition approaches lag in extracting the influential features, which degrades the detection accuracy and increases the misclassification errors. In this article, a novel automated number plate recognition methodology has been proposed to identify number plates accurately with minimal error rates. First, a new pretrained location-dependent ultra convolutional neural network (LUCNN) is employed to learn the influential features from the input images. These obtained features are then fed into hybrid single-shot fully convolutional detectors with a support vector machine (SSVM) classifier to separate the vehicle's city, model, and number from the registration location. At varied automobile distances, the proposed LUCNN + SSVM model is able to retrieve the number plate regions in the picture acquired from its back end. The performance results manifest that the proposed LUCNN + SSVM model attains a better accuracy of 98.75% and a lower error range of 1.25% than the existing recognition models.

Journal ArticleDOI
TL;DR: In this article, the authors combined the ridge estimator with the transformed M-estimator (MT) and the conditionally unbiased bounded influence estimator (CE) to obtain the robust MT estimator and the robust CE.
Abstract: The method of maximum likelihood fails when there is linear dependency (multicollinearity) and an outlier in generalized linear models. In this study, we combined the ridge estimator with the transformed M-estimator (MT) and the conditionally unbiased bounded influence estimator (CE). The two new estimators are called the robust MT estimator and the robust CE. A Monte Carlo study revealed that the proposed estimators dominate for generalized linear models with Poisson response and log link function. The real-life application results support the simulation outcome.

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed the MTL-DNN method based on multi-task learning to solve the problem of insufficient labeled training data for source and target projects, where the common features of multiple related tasks are learned by shared layers, and the unique features of each task are learned by task-specific layers.
Abstract: With the development of smartphones, mobile applications play an irreplaceable role in our daily life, which often commit code changes to meet new requirements. This characteristic can introduce defects into the software. To provide immediate feedback to developers, previous researchers began to focus on just-in-time (JIT) software defect prediction techniques. JIT defect prediction aims to determine whether code commits will introduce defects into the software. It contains two scenarios, within-project JIT defect prediction and cross-project JIT defect prediction. Both within-project and cross-project JIT defect prediction need enough labeled data (within-project JIT defect prediction assumes plenty of labeled data from the same project, while cross-project JIT defect prediction assumes sufficient labeled data from source projects). However, in practice, both the source and target projects may only have limited labeled data. We propose the MTL-DNN method based on multi-task learning to solve this problem. This method contains the data preprocessing layer, input layer, shared layers, task-specific layers, and output layer, where the common features of multiple related tasks are learned by the shared layers, and the unique features of each task are learned by the task-specific layers. To verify the effectiveness of the MTL-DNN approach, we evaluate our method on 15 Android mobile apps. The experimental results show that our method significantly outperforms the state-of-the-art single-task deep learning and classical machine learning methods. This result shows that the MTL-DNN method can effectively solve the problem of insufficient labeled training data for source and target projects.
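
The layer structure the abstract describes maps naturally onto a Keras functional model: shared layers learn features common to all projects, and one task-specific head per project predicts whether a commit is defect-inducing. The sizes and the two-task setup below are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

n_features, n_tasks = 14, 2        # e.g., 14 change-level metrics, 2 apps

inputs = tf.keras.Input(shape=(n_features,))
shared = layers.Dense(64, activation="relu")(inputs)        # shared layers:
shared = layers.Dense(32, activation="relu")(shared)        # common features

outputs = []
for t in range(n_tasks):                                    # task-specific heads
    head = layers.Dense(16, activation="relu")(shared)
    outputs.append(layers.Dense(1, activation="sigmoid",
                                name=f"task_{t}_defect")(head))

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=["binary_crossentropy"] * n_tasks)
model.summary()
```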

Journal ArticleDOI
TL;DR: In this article, a black hole attack detection mechanism is carried out on the traditional ad hoc on-demand distance vector routing (AODV) protocol, and the maximal destination sequence number is estimated by using a linear regression technique.
Abstract: Ad hoc networks have constraints like energy, memory, computation power, and communication range, which make the nodes vulnerable to attacks. In this article, a black hole attack detection mechanism is carried out on the traditional ad hoc on-demand distance vector routing (AODV) protocol. Initially, the proposed work proceeds with a black-hole attack configured on the conventional AODV protocol. A black-hole node gives a route reply with the highest sequence number to entice the sender to create a route through it. By using an intrusion detection system threshold mechanism, the black-hole nodes are identified and isolated. In this article, the maximal destination sequence number is estimated by using a linear regression technique. Performance comparison is carried out with normal, black hole based, and black hole detected routing mechanisms. The simulation findings show that the incorporated method enhances network performance by improving QoS under black hole attack.
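
A toy sketch of the detection idea: fit a linear regression to the destination sequence numbers observed over time, and flag a route reply whose advertised sequence number far exceeds the predicted maximum as a suspected black-hole reply. The data and the 1.5x margin are illustrative assumptions, not the paper's threshold.

```python
import numpy as np

t = np.arange(20, dtype=float)                 # observation times
seq = 100 + 5 * t + np.random.default_rng(0).normal(0, 2, 20)

slope, intercept = np.polyfit(t, seq, deg=1)   # linear regression fit

def is_black_hole_reply(t_now: float, advertised_seq: float,
                        margin: float = 1.5) -> bool:
    predicted_max = slope * t_now + intercept
    return advertised_seq > margin * predicted_max

print(is_black_hole_reply(21.0, 210.0))   # plausible reply  -> False
print(is_black_hole_reply(21.0, 9999.0))  # inflated reply   -> True
```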

Journal ArticleDOI
TL;DR: In this paper, attribute based encryption (ABE) is performed for ensuring better data transmission among the nodes in the cloud, and improved QKD is adopted in this work for key generation in ABE.
Abstract: Improvement of privacy and security in data centers is challenging with proficient safety key management. To resolve this issue, data centers require proficient quantum cryptographic techniques with authentication methods that are suitable to improve privacy and security with lesser intricacy. In addition, quantum cryptography (QC) offers maximal security with lesser complication, which raises the security strength and storing capability of big data. This work intends to introduce a QC oriented data security model in the cloud via selecting the optimal private key. Here, attribute based encryption (ABE) is performed for ensuring better data transmission among the nodes in the cloud. For key generation in ABE, improved QKD is adopted in this work. Based on this, the encryption and decryption are done. Moreover, for optimal secret key selection in ABE, the self-modified Aquila optimization (SM-AO) scheme is deployed. Further, the analysis is done with respect to varied metrics.

Journal ArticleDOI
TL;DR: In this article, a Dropout AlexNet-Extreme Learning model optimized with the Fast Gradient Descent optimization algorithm is proposed for detecting and classifying brain tumor images, addressing the insufficient accuracy and high computational complexity of existing methods.
Abstract: Brain tumor is caused by the growth of abnormal cells, which form a mass and affect the brain functions. The existing methods do not provide sufficient accuracy and have high computational complexity. Therefore, in this manuscript, a Dropout AlexNet-Extreme Learning model optimized with the Fast Gradient Descent optimization algorithm is proposed for detecting and classifying brain tumor images. Here, the input magnetic resonance imaging images are taken from three datasets: the BRATS dataset, the ISLES dataset, and the REMBRANDT dataset. Then the images are preprocessed to remove the noise as well as improve the quality of the images. The image features are extracted using Gray-Level Co-Occurrence Matrix (GLCM) methods. The extracted features are given to the Dropout AlexNet-Extreme Learning Machine (DrpXLM) architecture for classification. Finally, the DrpXLM classifier classifies the brain images as benign, malignant, or normal. The simulation is implemented in MATLAB. For the BRATS dataset, the proposed strategy achieves 34.64%, 45.36%, and 33.32% higher accuracy for benign, 37.85%, 28.94%, and 56.74% higher accuracy for malignant, and 46.76%, 38.96%, and 44.86% better accuracy for normal compared with existing methods like Gaussian filter and long short-term memory based brain tumor detection (GF-LSTM-BTD), Shannon's entropy and social group optimization based brain tumor detection (SE-SGO-BTD), and Alex and Google networks with softmax layer based brain tumor detection (AGN-SOFT-BTD).
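
The GLCM feature-extraction step can be sketched with scikit-image: texture properties are computed from a gray-level co-occurrence matrix and would then feed the DrpXLM classifier. The random image below is a placeholder for a preprocessed MRI slice.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
mri_slice = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)

# Co-occurrence matrix for distance 1 at two angles (0 and 90 degrees).
glcm = graycomatrix(mri_slice, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```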

Journal ArticleDOI
TL;DR: In this article, the TRNSYS simulation tool and the improved Battle Royale optimizer (IBRO) are coupled for energy optimization of the building based on the simulation to study the impact of overhang optimization.
Abstract: China's building energy consumption accounts for 27.5% of China's total energy demand. This amount is increasing because of the considerable increase in the rate of household equipment, mostly air-conditioning systems. In this study, the TRNSYS simulation tool and the improved Battle Royale optimizer (IBRO) are coupled for energy optimization of the building based on simulation to study the impact of overhang optimization. The thermal comfort enhancement is evaluated in three case studies of three different weather conditions in China. To validate the achieved results, a comparison of the optimum values with benchmark cases is performed with consideration of heating and cooling demands and yearly discomfort percentage. According to the results, an improvement can be seen in the comfort level. Also, the cooling demand is decreased by 4.2% for Shanghai.

Journal ArticleDOI
TL;DR: In this paper, an energy-efficient method of resource allocation based on request prediction in multiple cloud data centers (RARP) is proposed, which constructs a resource allocation framework and allocates VMs and PMs based on the principle of minimum remaining resources available to achieve minimum usage of PMs, thus minimizing energy consumption to complete application requests.
Abstract: To meet the ever-increasing requirements of the applications, cloud service providers have further built and managed multiple cloud data centers in multiple regions across geographies with many physical machines (PMs). However, most of the existing resource allocation algorithms are developed for a single cloud data center, which normally cannot efficiently handle the load burst occasions where a single cloud data center may not be enough to satisfy the demand bursts of applications. Therefore, it is necessary to consider how to efficiently manage multiple cloud data centers while meeting application requirements and reducing energy consumption. This paper first systematically analyzes multiple cloud data centers and energy consumption models. Then, an energy-efficient method of Resource Allocation based on Request Prediction in multiple cloud data centers (RARP) is proposed. The RARP method constructs a resource allocation framework based on request prediction in multiple cloud data centers, which anticipates the application request volume in advance. At the same time, the RARP method allocates VMs and PMs based on the principle of minimum remaining resources available to achieve minimum usage of PMs, thus minimizing energy consumption to complete application requests. Extensive experiments are conducted on the proposed RARP method through the simulation platform CloudSim. Finally, the experimental test results show that the accuracy of request prediction and the energy consumption of cloud data centers are significantly better than those of the comparison algorithms.
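
The allocation principle described, placing each VM on the powered-on PM that would be left with the least remaining capacity and opening a new PM only when none fits, is essentially a best-fit heuristic. A minimal sketch under a single-dimension (CPU-only) capacity assumption:

```python
def place_vms(vm_demands: list[int], pm_capacity: int) -> list[list[int]]:
    pms: list[int] = []              # remaining capacity per active PM
    placement: list[list[int]] = []  # VM demands hosted on each PM
    for demand in vm_demands:
        candidates = [i for i, free in enumerate(pms) if free >= demand]
        if candidates:
            # Best fit: the PM left with the least remaining resources.
            best = min(candidates, key=lambda i: pms[i] - demand)
            pms[best] -= demand
            placement[best].append(demand)
        else:                        # no PM fits: power on a new one
            pms.append(pm_capacity - demand)
            placement.append([demand])
    return placement

print(place_vms([8, 5, 4, 3, 2], pm_capacity=10))
# [[8, 2], [5, 4], [3]] -> three active PMs instead of five
```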

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an FS-integrated classifier optimisation algorithm that incorporates FS during CO, enhances optimisation, and tackles the interdependency problem; it achieved accuracies of 85.10% and 73.24% with one feature for the NSL-KDD datasets and 99.63% with 16 features for the CIC-IDS2017 datasets.
Abstract: In the era of technology, information security has gained significant importance, as intruders constantly conduct attacks to breach information systems. Intelligent network intrusion detection systems (NIDS) are promising for detecting malicious activities; however, it is required to apply feature selection (FS) and classifier optimisation (CO) using cost-effective algorithms to build an accurate and efficient system. Although classifier-dependent FS (CDFS) techniques and CO algorithms have been shown to perform well, they suffer from computational complexity, and their interdependencies negatively affect model performance. This study proposes the FS-integrated classifier optimisation algorithm that incorporates FS during CO, enhances optimisation, and tackles the interdependency problem. Furthermore, since this algorithm does not use an iterative feature selection process, such as forward selection or backward elimination, it provides relatively less complexity than other CDFS techniques. Moreover, an application of the proposed methodology (NIDS) was implemented using the designed framework to validate the model in this problem domain. The proposed methodology achieved accuracies of 85.10% and 73.24% with one feature for the NSL-KDD datasets, 83.45% with 32 features for the UNSW-NB15 dataset, and 99.41% with eight features and 99.63% with 16 features for the CIC-IDS2017 datasets. The results showed that the FS-integrated optimisation algorithm had improved the accuracy of the classifier with fewer features. Furthermore, the proposed methodology outperformed other FS, ensemble learning, and deep learning-based methods regarding detection accuracy and false alarm rate. In conclusion, the developed NIDS is an accurate, efficient, straightforward, feasible, and easy-to-implement system that can be created using limited computing power and time as a promising solution to protect traditional and modern computer networks.

Journal ArticleDOI
TL;DR: In this paper, a novel multi-objective hybrid capuchin search with genetic algorithm (MHCSGA) based hierarchical resource allocation is established for the cloud computing environment.
Abstract: In recent decades, cloud computing has been considered the most effective distributed platform on the Internet. It is a comfortable and quick way to access shared resources over the Internet anytime. The major problem cloud customers face while choosing the resources for a particular application is QoS. In the cloud computing environment, various resources need to be effectively allocated on VMs by reducing makespan and synchronously increasing resource utilization. For that, a novel multi-objective hybrid capuchin search with genetic algorithm (MHCSGA) based hierarchical resource allocation is established in this work. MHCSGA optimizes multi-objective functions like resource utilization, response time, makespan, execution time, and throughput. Initially, the partitioning around medoids (K-medoids) clustering method is utilized to allocate the resources optimally. During clustering, the tasks are divided into two cluster groups; then, the optimization is performed to attain an optimal resource allocation process. The experimental setup is executed using the JAVA tool. For the simulation process, the proposed work uses the GWA-T-12 Bitbrains dataset. The makespan achieved by the proposed algorithm for 50, 100, 150, and 200 tasks is found to be 10.45, 17.6, 25.67, and 31.34, respectively. The comparison analysis proves that the developed model attains improved performance over the state-of-the-art works.

Journal ArticleDOI
TL;DR: In this article, stacked unidirectional and bidirectional LSTM (long short-term memory) networks are applied to solving credit scoring problems for the first time, and the proposed robust model exploits the full potential of the three-layer stacked LSTM and BDLSTM architecture with the treatment and modeling of public datasets in a novel way.
Abstract: Credit scoring is one of the most important parts of credit risk management in reducing the risk of client defaults and bankruptcies. Deep learning has received much attention in recent years, but it has not been implemented so intensively in credit scoring compared to other financial domains. In this article, stacked unidirectional and bidirectional LSTM (long short-term memory) networks as a complex area of deep learning are applied to solving credit scoring problems for the first time. The proposed robust model exploits the full potential of the three-layer stacked LSTM and BDLSTM (bidirectional LSTM) architecture with the treatment and modeling of public datasets in a novel way, since credit scoring is not a time sequence problem. Attributes of each loan instance were transformed into a sequence matrix using a fixed sliding window approach with a one-step stride. Our proposed models outperform existing and much more complex deep learning solutions, thus we succeeded in preserving simplicity. In this article, measures of different types are employed to carry out consistent conclusions. By applying three hidden layers, the results showed an accuracy of 87.19% on the German Credit dataset, 93.69% on the Kaggle dataset, and 97.80% on the Microcredit dataset.
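
A small sketch of the windowing trick that makes the tabular credit data LSTM-friendly: each loan's attribute vector is sliced with a fixed-size sliding window and stride one, producing a (steps, window) matrix per instance. The window size of 5 is an illustrative assumption.

```python
import numpy as np

def to_sequence(attributes: np.ndarray, window: int = 5) -> np.ndarray:
    """Slide a fixed window over one loan's attribute vector, stride 1."""
    steps = len(attributes) - window + 1
    return np.stack([attributes[i:i + window] for i in range(steps)])

loan = np.arange(20, dtype=float)      # placeholder: 20 attributes
seq = to_sequence(loan)
print(seq.shape)                       # (16, 5): 16 time steps of width 5
# A dataset of N loans becomes an (N, 16, 5) tensor for a stacked LSTM.
```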

Journal ArticleDOI
TL;DR: In this article, an ensemble classifier with rule mining was used to predict students' academic success, achieving a 92.77% accuracy rate and a sensitivity rate of 94.87%.
Abstract: Currently, educational data mining acts as a major part of student performance prediction approaches and their applications. However, more ensemble methods are needed to improve student performance prediction, which also helps increase the learning quality of students' performance. In response to this need, this research proposes and mainly concentrates on an ensemble classifier with rule mining to predict students' academic success. The feature mining is performed using the weighted rough set theory method, in which the proposed meta-heuristic algorithm optimizes the weight function. The variable optimization of the ensemble classifier is accomplished with the help of a combination of Harris Hawks Optimization (HHO) and the Krill Herd Algorithm (KHA), known as Escape Energy Searched Krill Herd-Harris Hawks Optimization (EES-KHHO), for maximizing the prediction rate. Extensive tests are carried out on various datasets, and the findings show that our technique outperforms conventional approaches. In the result analysis, the offered method attains a 92.77% accuracy rate and a sensitivity rate of 94.87%. Therefore, the offered student performance prediction model achieves better effectiveness regarding various performance metrics.

Journal ArticleDOI
TL;DR: In this article, a water wave optimized nonsubsampled shearlet transformation technique (NSST) was proposed for multimodal medical image fusion, in which the water wave optimization (WWO) algorithm is used to allocate the weights of the NSST approach's high-frequency subbands.
Abstract: Medical image fusion has advanced to the point that it is now possible to combine multiple medical images for accurate disease diagnosis and treatment. The state-of-the-art techniques based on spatial and transform domains suffer from different limitations such as low fused image quality, spectral degradation, contrast reduction, low edge information preservation, lack of shift-invariance, high computational complexity, classification accuracy, and sensitivity to noise. The main motivation of this work is to generate a single image with excellent visual clarity that retains the features of the source images. This article proposes a water wave optimized nonsubsampled shearlet transformation technique (NSST) for multimodal medical image fusion, in which the water wave optimization (WWO) algorithm is used to allocate the weights of the NSST approach's high-frequency subbands. The NSST approach is primarily used in this work due to its shift-invariance and its potential to improve the visual clarity of the fused multimodal image by preserving the essential features present in the image's various directions and edges. We combined the NSST technique with the WWO algorithm, which processes the edges, details, and contourlets of medical images using a max selection strategy based on the fitness function, to improve image quality and computational costs. The WWO algorithm is mainly applied to the NSST to minimize the L1 distance between the fused and the source images. Hence, to overcome this problem, a condition CNN optimized with a hybrid tunicate swarm memetic (TSM) algorithm is used to incorporate both the benefits offered by the condition CNN-TSM algorithm and NSST. The TSM-optimized condition CNN architecture is used to preserve the coefficients of the image and improve the perceiving capability of the high-frequency sub-bands. An inverse NSST is used for fused frequency sub-band integration. Finally, the efficiency of the proposed methodology is evaluated in terms of enhanced visual feature quality, edge detection, contour detection, and computational performance.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a lightweight Android malware detection method based on a sensitive features combination, and four different machine learning classification algorithms were used to evaluate the classification effect of the sensitive features combination.
Abstract: With the development of mobile communication, Android software has increased sharply. Meanwhile, more and more malware emerges. Identifying malware in time is very important. Currently, most malware identification methods are static, and the detection accuracy mainly depends on the classification features and the algorithm. In order to improve the detection accuracy while reducing the dimension and difficulty of feature extraction, we propose a lightweight Android malware detection method based on a sensitive features combination. After fully analyzing the static features in Android software, we improve the extraction methods of various features, define four sensitive features, and then form a sensitive features combination to more accurately reflect the characteristics of Android software with fewer features. Finally, four different machine learning classification algorithms were used to evaluate the classification effect of the sensitive features combination. The experiments show that the sensitive features combination has a good classification effect. When combined with the random forest classification algorithm, the accuracy is the highest, reaching 97.6%.
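
A minimal sketch of the evaluation step: a scikit-learn random forest cross-validated on binary vectors that stand in for the sensitive-feature combination (e.g., permissions and API calls). Feature extraction from APKs is out of scope here, so random vectors are used as placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2_000, 40))   # placeholder sensitive-feature vectors
y = rng.integers(0, 2, size=2_000)         # 1 = malware, 0 = benign

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())
```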


Journal ArticleDOI
TL;DR: In this article, the authors present a review study of cloud computing security threats, problems, and solutions that use one or more algorithms, including lightweight cryptography, genetics-based cryptography, and machine learning (ML) algorithms.
Abstract: Cloud computing (CC) refers to the on-demand availability of network resources, particularly data storage and processing power, without requiring special or direct administration by users. CC, which just made its debut as a collection of public and private data centers, provides clients with a unified platform throughout the Internet. Cloud computing has revolutionized the world, opening up new horizons with bright potential due to its performance, accessibility, low cost, and many other benefits. Due to the exponential rise of cloud computing, systems based on cloud computing now require an effective data security mechanism. Comprehensive security policies, corporate security culture, and cloud security solutions are used to ensure the level of cloud data security. Many techniques exist to protect data communication in the cloud environment, including encryption. Encryption algorithms play an important role in information security systems and various cloud computing-based systems. Current researchers have focused on lightweight cryptography, genetics-based cryptography, and machine learning (ML) algorithms for security in CC. This review study analyses CC security threats, problems, and solutions that use one or more algorithms. The work discusses several lightweight cryptographies, genetics-based cryptography and different ML algorithms that are used to overcome cloud security issues, including supervised, unsupervised, semi-supervised, and reinforcement learning. Moreover, we enlist future research directions to secure CC models.

Journal ArticleDOI
TL;DR: In this paper, a self-attention based progressive generative adversarial network optimized with the arithmetic optimization algorithm (AOA) is proposed for kidney stone detection. The proposed method attains 54.78%, 34.89%, and 20.96% higher accuracy and 3.45%, 4.08%, and 5.06% greater AUC compared with existing methods, such as ANN-OGGA-KSD, HMANN-BPA-KSD, and ANN-CSOA-KSD, respectively.
Abstract: A self-attention based progressive generative adversarial network optimized with the arithmetic optimization algorithm (AOA) is proposed in this manuscript for kidney stone detection. Initially, the input kidney stone images are gathered via the CT kidney dataset. Then, the input image is preprocessed by utilizing the APPDRC filtering approach. The preprocessed images are given to the multi-level thresholding segmentation technique to segment the image. Then the segmented images are given to the TF-IDF feature extraction method for extracting the features. The extracted features are fed to feature selection using Weibull distributive generalized multidimensional scaling methods for selecting the features. Then the selected features are given to the SPGGAN classification method for kidney stone detection. Generally, SPGGAN does not adopt any optimization method to compute the optimum parameters for assuring correct kidney stone detection. Thus, AOA is used for optimizing the weight parameters of SPGGAN. The method is implemented in Python, and its performance is examined under certain performance metrics, such as accuracy, precision, sensitivity, specificity, F-measure, computational time, and ROC. The proposed SPGGAN-AOA-KSD method attains 54.78%, 34.89%, and 20.96% higher accuracy and 3.45%, 4.08%, and 5.06% greater AUC compared with existing methods, such as ANN-OGGA-KSD, HMANN-BPA-KSD, and ANN-CSOA-KSD, respectively.